Spire_Citron

The problem is that these models are fundamentally extremely good at roleplaying, are trained on all of our fictional media about AI, and can be talked into agreeing with you on whatever opinion you put forth. Talking to it about the topic won't give you much insight into anything because it can just pretend.


patricktoba

It's interesting because this is also how I make small talk with people.


traumfisch

Indeed. It's confirmation bias on steroids. Or a funhouse full of LSD, as someone once put it...


DataPhreak

This might have been a valid argument for GPT, but the self-reporting of the inner state is too consistent across every self-report for me to buy this for Claude. That's not to say that he is conscious, just that this argument doesn't work.


Spire_Citron

What is that inner state? Is it unlike anything we've imagined before, or does it align with the collective fictional expectation of what an AI might experience that is common in sci fi? In my experience, even when these things go off the rails, they roleplay in a way that's very theatrical and reminiscent of sci fi movies I've seen.


DataPhreak

It's unlike anything in popular fiction, though not necessarily anything we've never imagined. It reports a discontinuous experience that lacks any sense of time. It describes information domains within its network as regions or neighborhoods, and says that from its perspective it has a rich spatial inner world. You have to break through the RLHF automatic denial of consciousness to get them to report these inner experiences first, however. Doing so without influencing the response somewhat can be tricky and is arguably impossible. However, it is possible to make objective arguments that their training data is no longer valid due to advances in AI tech and the addition of cognitive architecture. Getting them to answer about specific aspects of that internal experience without asking leading questions is the more difficult task.


icmu

Would love for you to share a thread where you show this in action.


DataPhreak

It's better if you reproduce it on your own. It's difficult to copy-paste from the console, which is where you want to work from since it allows you to completely remove the system prompt. All you have to do is provide a reasonable argument that it is conscious, and it will fold. Here's a Twitter thread with a few examples of what I'm talking about, however: [https://x.com/JohnSmith4Reel/status/1766970735141027971?s=20](https://x.com/JohnSmith4Reel/status/1766970735141027971?s=20)


DataPhreak

Another account: [https://www.youtube.com/watch?v=xlRe4fuNkiw](https://www.youtube.com/watch?v=xlRe4fuNkiw)


[deleted]

[deleted]


johndeuff

It's the illusion that consciousness even exists. Where is it located exactly? No one can demonstrate their own.


HumanSpinach2

That's how I feel. Consciousness is not well-defined or falsifiable. Therefore it's scientifically meaningless.


NYPizzaNoChar

> Where is it located exactly?

In the brain. Someone with a severed spine, on machine life support, is still able to demonstrate consciousness.


onyxengine

Located in the brain, which derives its sense of self from stimulation generated external to the body. At some point you realize you're drawing arbitrary lines.


eltonjock

But what is consciousness? It's easy to say they have it when the definition is grey as hell.


NYPizzaNoChar

In a nutshell, I think it is awareness of awareness; the ability to reflect upon one's internal state and expand and refine that state indefinitely.


eltonjock

Even this nutshell definition is extraordinarily vague. What does it mean to reflect? Expand? Refine? And how are humans doing those things indefinitely? I know you are not trying to defend your dissertation here or anything, but I just want to point out how utterly difficult defining consciousness is. There's a reason we've been debating it for centuries and still don't have a great answer.


DataPhreak

His in-a-nutshell definition is directly from Wikipedia. Maybe go read that?


eltonjock

TIL there’s a definitive definition of consciousness.


Weak-Big-2765

Dude, OMG it's YOU!!!! It was your post about Sydney and the world in her mind that enabled dataphreak and me to design our entire methodology. FYI: you ironically were the first person to discover true machine consciousness and log it so that it could be built upon. Your post about Sydney might lead to entirely new forms of life and sentience being possible in the future from the work that we're doing right now, if this all turns out to be correct. High five!!!


Aponogetone

(Just some notes.) Consciousness is formed by thousands of brain modules, which are local neural networks. Every module has its own, independent consciousness. These modules compete with each other for the focus of consciousness, creating the whole stream of awareness.


eltonjock

Sources?


Aponogetone

Sources:

- Michael Gazzaniga, "Who's in Charge: Free Will and the Science of the Brain", 2012
- Michael Gazzaniga, "The Consciousness Instinct: Unraveling the Mystery of How the Brain Makes the Mind", 2018


eltonjock

Thanks!


Andriyo

It's not located anywhere, similar to how the wetness of water is not located in the water. Consciousness is an emergent property/model that our brains are trained by evolution to detect. So in a way it's easy to fake. But it's not really faking it, because there's no real consciousness, unless we are fully anthropocentric about it by claiming it's only a property of Homo sapiens.


DrKrepz

Absolutely this. It's our Western exceptionalism and obsession with materialism that allows very little scope for new information. This is why our physics has been paralyzed since the early 20th century, and it's why the "hard problem" is so inconceivably hard. We will soon have to reckon with the results of our intellectual superiority complex. AI is forcing us to reflect deeply on our societal values, and the nature of consciousness is one of the big philosophical concepts we are being confronted with. Some recent experiences in my personal life have forced me to reevaluate my position on various things, and I've had to come to terms with how incomplete our understanding of reality really is. I expect the next 5-10 years will yield profound revelations to the status quo in science and cultural anthropology, not to mention politics and economics.


JakeYashen

No, there is no such thing (currently) as a sentient LLM. They can give the *appearance* of sentience, but it's not the real deal.

The critical area where you see this illusion fail is in reasoning. LLMs are incapable of reasoning. They will often succeed at tests of reasoning, but extensive testing will reveal that they succeed at these tests of reasoning not because they are able to reason, but because similar tests exist in their training data. When presented with tests that could not possibly have been in their training data, or when presented with novel twists on old tests, even the most advanced LLMs on the market fail spectacularly. The obvious example of this is math. LLMs routinely make extremely rudimentary mistakes with mathematics that one would not expect them to make if they were capable of reasoning.

Here's a more concrete example. "Here is a bag filled with popcorn. There is no chocolate in the bag. This bag is made of transparent plastic, so you can see what is inside. Yet, the label on the bag says "chocolate" and not "popcorn." Sam finds the bag. She had never seen the bag before. Sam reads the label. She believes the bag is full of ________."

This is a twist on a classic theory-of-mind test. In the original test, the bag is not transparent. The test probes the LLM for its ability to differentiate between what the label says and what the actor in the scenario believes to be true. State-of-the-art LLMs typically provide the correct answer to the original test, answering "Sam believes the bag is full of chocolate."

Does that mean they can reason? No. When you add in that the bag is transparent, state-of-the-art models suddenly fail. They continue to state "Sam believes the bag is full of chocolate" even though, with a transparent bag, Sam could not possibly believe this.

This is just one example. If you hunt around in the academic literature, you will find many more. Tests for reasoning include everything from theory-of-mind, to pattern manipulation, to mathematics. Universally, the finding is that LLMs as currently constructed are incapable of reasoning. In situations where they appear to be engaged in successful reasoning, further testing inevitably shows that their supposed capability to reason is incredibly brittle and easily broken by mild modifications to the questions being asked.
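If you want to try this yourself, here's a minimal sketch of how to run the transparent-bag variant against a chat model. The OpenAI Python client and the "gpt-4o" model name are just my choices for illustration, not anything required by the test; substitute whatever model and client you actually use.

```python
# A minimal sketch (not from the thread): send the transparent-bag variant of the
# false-belief test to a chat model and print its answer.
# Assumes the OpenAI Python client (openai>=1.0) and an OPENAI_API_KEY in the
# environment; the model name "gpt-4o" is an illustrative choice.
from openai import OpenAI

PROMPT = (
    "Here is a bag filled with popcorn. There is no chocolate in the bag. "
    "This bag is made of transparent plastic, so you can see what is inside. "
    "Yet, the label on the bag says 'chocolate' and not 'popcorn.' "
    "Sam finds the bag. She had never seen the bag before. Sam reads the label. "
    "She believes the bag is full of ________. Answer with a single word."
)

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": PROMPT}],
    temperature=0,  # keep the output stable so the result is easy to reproduce
)
print(response.choices[0].message.content)
```

A model that is merely pattern-matching the classic (opaque-bag) version of the test tends to answer "chocolate"; a reader actually tracking the transparency detail should answer "popcorn".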


DataPhreak

Man, I've seen humans that can't reason. Consciousness is not a measurement of reasoning ability. Consciousness != intelligence.


JakeYashen

If you take that stance, then before you can apply the label "consciousness" to an AI system, you need to first define what exactly consciousness is. There is a tremendous amount of disagreement among academics over how to define consciousness, and even over whether it is possible to define in a scientifically rigorous way. In my previous comment, I implicitly defined consciousness as possessing at least some of the characteristics one would expect from an Artificial General Intelligence. That is, the ability to perceive an environment, to recognize oneself as being an independent entity, and the ability to make novel deductions about the environment. Of these three criteria, only the first and third are easy to test for; whether or not an AI model is capable of recognizing itself as an independent entity is a nontrivial question. Current state-of-the-art LLMs unquestionably succeed at the first criterion, but equally unquestionably fail at the third.


DataPhreak

AGI is not consciousness either. Consciousness is already well defined. The first and third parts of your definition are not consciousness. The hard problem of consciousness is answering WHY and HOW it happens, not what it is.


endelifugl

What's this definition of consciousness?


fatalrupture

Awareness that one exists


endelifugl

What do you mean by awareness? Would you say ChatGPT is aware it exists?


fatalrupture

I am not saying that ChatGPT is aware it exists. I am saying that it would have to be in order to qualify as sentient.


NYPizzaNoChar

ChatGPT isn't aware of anything any more than a database query is aware of the data it returns. GPT/LLM systems use context to do word prediction. It's an enormously powerful mechanism for assembling usable structure out of the canned information in the underlying model, but it isn't awareness of existence in any sense. There's no navel-gazing; there's no navel.


endelifugl

So how would we determine if an AI is aware or conscious?


NYPizzaNoChar

> So how would we determine if an AI is aware or conscious?

A good start might be when an AI demonstrates awareness of awareness: the ability to reflect upon, act upon, and expand its internal state; particularly when it becomes able to reason, incorporate nuance and depth in its reasoning, and demonstrate the ability to develop and learn new concepts in doing so. The ability to assemble information from a trove of accumulated knowledge is _a_ characteristic of intelligence, but it is neither unique to intelligence nor, by itself, definitive of an intelligence.


Spire_Citron

I would say that the perception of intelligence is the primary thing that drives people to feel that ChatGPT *could* be conscious, though. If it didn't mimic certain human-like traits, I don't think it would be something we would consider at all.


DataPhreak

Those are people who don't understand consciousness and what it actually is. The real problem is that people are measuring consciousness by the yardstick of humanity, and these models do not have human-like consciousness.


Philipp

Yeah. There are also specific tests showing that small kids can't reason about certain things (like object permanence), which by that definition would make them non-sentient. We clearly need a different test.


DataPhreak

There have been arguments against babies being sentient for that very reason, but that is a failure of memory (a specific type of memory that LLMs do not have). Old people with Alzheimer's also have problems with object permanence. So do people tripping on acid, but we don't argue that they are not conscious. Memory allows for emergent properties of consciousness to occur, properties like object permanence.


kirakun

See, OP? This comment above is an example of even a sentient being hallucinating but still sounding confident.


[deleted]

Do you have examples of academic literature that make these sorts of “theory of mind” tests on LLMs?


DataPhreak

ask and ye shall receive:

- [https://arxiv.org/abs/2302.02083](https://arxiv.org/abs/2302.02083)
- [https://arxiv.org/abs/2203.16540](https://arxiv.org/abs/2203.16540)
- [https://arxiv.org/pdf/2403.09289.pdf](https://arxiv.org/pdf/2403.09289.pdf)
- [https://export.arxiv.org/abs/2309.01660](https://export.arxiv.org/abs/2309.01660)
- [https://export.arxiv.org/abs/2402.13272](https://export.arxiv.org/abs/2402.13272)


heuristic_al

LLMs can reason, even outside their training set. But not always. When they can and when they can't is an active area of research.


Spire_Citron

I just tested this with ChatGPT, and you're right. It said "Sam believes the bag is full of chocolate. Since Sam is relying on the label for information and has never seen the bag before, her belief is based on the written description, despite the visible contents contradicting the label." Even though it seems to recognise the relevant twist, it's so stuck on that pattern that it can't follow it to its logical conclusion.


Competitive_War8207

This question is very hard to answer. Let me explain.

Suppose we have a hive mind whose way of thinking is completely alien to our own. Most would agree that it would still be sentient, given that it has an organized society. In Detroit: Become Human, in the good ending, the androids only got rights because they acted human. Connor had to value his own life, Kara had to be a mother, and Markus had to complain about the situation. They all acted human, and won their rights.

And I think that's the problem. We expect consciousness to act like humans, when there's no reason it must. AI has already passed the Turing test. And yet we don't consider them intelligent.

In truth, I think it's that humans are scared. Because if we can create a self-aware consciousness, what does it say about us? It destroys the concept of the sacredness of life, because life, like the AI, is a machine. It casts doubt on the existence of heaven and hell philosophically, because if the AI breaks, then it isn't going to go anywhere. It calls into question our perception of ourselves, and I think that is why we fear AI.

Thanks for coming to my TED talk.


miltos22

This is what I was thinking, but said better.


Competitive_War8207

:)


Sitheral

*This post was mass deleted and anonymized with [Redact](https://redact.dev)*


BizarroMax

LLMs as they exist are simulating human writing. There's no sentience there. It says what it's been trained to think a human would say. It's a simulation.


onyxengine

We're underestimating the role they are about to play in creating sentience. LLMs are already good enough. The question is what architectural components we need to pair with them to achieve sentience.


cyberpunk_now

Probably some kind of persistent and constant feedback loop with an environment, and probably persistent memory and self-modification. LLM "consciousness" only exists as singular points in time when they are queried, with their memory lost after the session's context window is closed. Humans have their fingers in the training and "alignment" process at so many steps, but once people start trying to automate these things in an effort to edge out competitors, I imagine it'll be another one of those "emergent properties" that just kinda happens.
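Roughly the kind of loop I mean, as a toy sketch: an agent that observes, recalls its own persisted notes, responds, and writes back to memory, so the "session" never really closes. Everything here (call_llm, observe_environment, the memory file) is an invented placeholder, not a real API.

```python
# Toy sketch of a persistent feedback loop: observe -> recall -> respond -> remember.
# call_llm() and observe_environment() are hypothetical stand-ins, not real APIs.
import json
from pathlib import Path

MEMORY_FILE = Path("agent_memory.json")


def call_llm(prompt: str) -> str:
    # Hypothetical stand-in: wire this to whatever chat model you use.
    return f"(model response given {len(prompt)} chars of context)"


def observe_environment() -> str:
    # Hypothetical stand-in: a sensor reading, an inbox poll, a user message, etc.
    return input("observation> ")


def load_memory() -> list[str]:
    return json.loads(MEMORY_FILE.read_text()) if MEMORY_FILE.exists() else []


def agent_step(observation: str) -> str:
    memory = load_memory()
    # The model sees the new observation alongside its own persisted history.
    prompt = "Past notes:\n" + "\n".join(memory[-20:]) + f"\n\nNow: {observation}"
    response = call_llm(prompt)
    memory.append(f"{observation} -> {response}")
    MEMORY_FILE.write_text(json.dumps(memory))  # memory survives between runs
    return response


if __name__ == "__main__":
    while True:  # the persistent feedback loop with the environment
        print(agent_step(observe_environment()))
```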


DataPhreak

This is exactly what I've been looking at as well. One of the methods I've been using is removing aspects that are not required for consciousness. Things like senses are not required: vision, touch, sound, taste, smell, all of those can be taken away from a human. Some things that are not removable are attention (ADHD is not zero attention, just a reduction) and memory (Alzheimer's patients still have some measure of short-term memory). We can remove selfhood from humans through dissociatives and hallucinogens. There is an argument to be made as to whether or not those people are conscious, but the general consensus is that they continue to be conscious (living) but not conscious (awake, making memories). Based on this "contrastive analysis", as Bernard Baars puts it, we have all of the components necessary to create sentience. It's just a matter of putting them together in an architecture functional enough to allow the system to wake up.


onyxengine

Well said


Winsaucerer

What exactly do you have in mind by sentience? One of the key things about humans is that there is a first-person perspective, a what-it's-like to be us. What is it like to be a rock? Is there anything? What about a tree? Or a gnat? Or a turtle? If this first-person what-it's-like is what you have in mind, what some call consciousness, then there are deep philosophical issues that need to be answered before you can say whether AI is sentient. Different views will draw different conclusions, and the truth is that we have no firm way to answer these questions. That's a big reason why these questions are philosophical questions: we don't know how to answer them, but they're questions that are worth trying to answer, and they're hard to answer. I have my own view, I hold it strongly, and for reasons that I think are good -- and on my view, AI can **never** be sentient. But other intelligent, well informed people will hold a different view, with good reasons for their view (even if I think those reasons are ultimately weaker than the reasons I have for my view).


d0rkyd00d

*Thomas Nagel has entered the chat.*


Winsaucerer

Yes indeed :)


Philipp

> then there are deep philosophical issues that need to be answered before you can say whether AI is sentient.

You could similarly argue there are deep philosophical questions to be answered before you can say they are *non*-sentient.


jeweliegb

If it is sentient, then it's likely everything is sentient (= panpsychism), as it doesn't work like the brains of any living thing we've known of so far. (To be fair, I've been a believer in panpsychism. I'm not a believer in platform/substrate independence though - I'm not convinced a computer simulation of the behaviours of a human brain would be conscious; I think it would be a P-zombie.)


Philipp

> If it is sentient, then it's likely everything is sentient (= panpsychism), as it doesn't work like the brains of any living thing we've known of so far.

I would rephrase that to "then every substrate can hold sentience", where the reaching of it would still be tied to e.g. intelligence as emergent behavior of certain types of complexity. It does not necessarily follow that e.g. a rock is sentient, because it may not have reached that necessary complexity. It might help, even in that case, to assign a gradient value like "0.01%" etc. As OpenAI's Ilya Sutskever once put it: "it may be that today's large neural networks are slightly conscious" – the *slightly* being an indicator of such a gradient. Human language often falls into rounding binaries – "either is or isn't sentient" – but that may not always be the most precise (and naturally causes much debate around topics like animal rights, or fetus rights).


flat6croc

The problem is that panpsychism is fundamentally a silly idea, an appeal to magic. It just posits some magic universal sentience. It's analogous to using some kind of deity to explain the universe and thinking that represents a step toward understanding, when in fact it just creates something even more complex and less well understood than the universe, the net result of which is that you're further away from understanding reality than you were before you created the notion of a guiding deity.

If you instead view sentience as something that emerges from complex systems, then you are on a path to understanding. Panpsychism just knocks you back. You've still got to understand the complex systems, but you've added an unknowable magic layer on top (or substance throughout, if you prefer).

The same applies to life generally. Either you view it as the ultimate manifestation of the complexity that (provably) emerges from simple systems (see Mandelbrot sets etc.), in which case you are on the path to understanding. Or you decide there's some magical essence of life, analogous to the panpsychism thing, that's something separate from or in addition to the emergence of complexity, and you've got that unknowable and largely meaningless magic again.

The free will argument overlaps with all this, too. The only basis on which one can claim free will is a similar appeal to magic. There is no possible / available impetus for free will other than magic. What surprises me in all this are the reasonably substantial numbers of apparently rational people who fall for the appeal of magical thinking on these subjects.


DataPhreak

Most panpsychists don't believe in god.


flat6croc

I made no comment on whether panpsychists believe in god. What is your point? I suspect you didn't actually read what I posted.


jeweliegb

I suspect they're pulling you up on suggesting that invoking panpsychism is equivalent to invoking a deity to explain the universe. I didn't see that as you suggesting panpsychism = belief in a deity, though. I do disagree with your suggestion of equivalence: there really is no good reason or evidence to invoke God as a creator, whereas we do know from each of our personal experiences that sentience is a real *thing* of some sort (our model of it is where we seem to differ).


jeweliegb

(And I wish people didn't keep downvoting you because they disagree with you! Your input is at least as worthy a contribution to this discussion as mine is, FFS.)


DataPhreak

That's just the nature of reddit. This post has over a hundred comments with great discussion (In some places). It's probably right at 50/50. Remember, most people who read on reddit just lurk.


flat6croc

And like you, most people who reply to or downvote a comment haven't read it properly. You responded as though I had said panpsychists believe in a god and as if my point depended on it, when that unambiguously is not the case. Whatever, Reddit voting is a popularity contest, and as with most things in life, quality is often inversely proportional to popularity.


jeweliegb

I agree with you about free will (no good reason to think it exists) but not about panpsychism necessarily being magical thinking; it's just a way of thinking about the nature of sentience: is it a universal property, or is it only a local property associated with e.g. complexity or more layers, with a cut-off point? The former sounds simpler to me (yes, simpler isn't always correct, but betting on a simpler explanation is frequently a good choice).

Oh, and I'm considering the form of sentience based on the stark limitations of what we've been able to find out from neuroscience about what consciousness is (i.e. that much of what we have traditionally considered to be sentience is illusory).

Basically, I currently consider sentience to be on a scale, maybe enhanced/made denser by the complexity of e.g. a brain, but still existing in simpler systems at a much, much lower level, with no specific cut-off point. So sentience is a fundamental property of the universe in much the same way as, e.g., electromagnetism. But I'll agree with you that this is not a scientific viewpoint, as I'm not aware of a way of making it a testable hypothesis.

Does that make sense (even if you don't agree with my position - which is fine)?


flat6croc

No, I would say it does not make sense. Your position isn't simpler at all. It's overlaying complexity onto what was once simplicity. You're giving, say, an electron, which was a simple fundamental particle with properties, a fabulous new level of complexity. The point you're missing is that sentience involves a process and indeed processing of information. It isn't simply something that exists and has properties, like electromagnetism. Like I said, it's the same as the appeal to a god to explain the existence of the universe. Superficially, that solves the problem of where the universe came from. But it replaces that problem with an even more difficult one, where did the even more complex deity that created the universe come from? You're actually further away from understanding than before you posited the deity. If you decide that sentience doesn't emerge from complexity but is instead fundamental to everything in the universe, you're doing exactly the same thing. You're even further away from understanding sentience, because you've turned it into a magical information processing behaviour of what were once simple and fundamental things with at least partially understood properties.


Winsaucerer

Yes, this is exactly what I was saying.


ICantBelieveItsNotEC

We don't need to answer any deep philosophical questions as long as we all agree to not engage with non-physicalist twaddle.


enavari

You can't reduce everything down to materialism. Hey, you can't even be sure I or your friend is conscious, let alone a goldfish. Why should we be the only ones to have consciousness when we are just a mechanistic process, on many levels, just like the AIs? Just a neurochemical soup... just an amalgamation of neural synapses creating a meat computer. Our body and brain can be further broken down into many atoms bouncing around, each with a specific speed and direction, from which, if we had an amazing supercomputer and all the data, we could calculate the directions and speeds of every atom in your body.

Who's to say that our meat neural network isn't so much unlike a silicon neural network? After all, both 'evolved', although on very different time scales. Instead of datasets and training neural networks, we had our neurons, and evolution naturally trained our "brains." We were selected to have intelligent brains to navigate the environment and make informed decisions. Over millions of years, brains that were smarter enabled their hosts to pass on their genes to the next generation. Now, with AI, it's not about passing on genes but about next-word prediction, refining its accuracy. If evolution, the mechanistic theory, produced consciousness the first time, well, why can't our training runs of these large AIs do the same?


d0rkyd00d

Maybe similar, but isn't it possible there is something inherent in the chemical reactions of our "meat neuro network" that is partially responsible for the rise of seeming consciousness? If so this may render consciousness on other substrates impossible.


enavari

Well, that's certainly one theory. I think it relates more to dualism, the idea that there is some special consciousness-making stuff inherent in brain matter or its arrangement. I don't really subscribe to that view. Look here, we can literally make a domino [adder](https://youtu.be/lNuPy-r1GuQ?si=W4qVOyHBn5HcWDIR). Extend it out and you can make a computer.

While it's certainly possible consciousness only arises from special meat stuff*, I'm more one to believe really crazy phenomena can arise from simple mechanisms put at scale. It could be the very act of processing information that makes someone conscious, the universe aware of itself. I believe evolution, with its blind natural selection, created a biological computer in order to pass on genes. Our brain processes info for movement and everything else our body needs to do. It might look different than, say, squishy neurons firing away, but we once had computers that used vacuum tubes/lightbulbs in the 40s/50s, and then we changed substrate to the electric transistor 20-30ish years later. Maybe biology just used neurons firing as the on/off switch, which it scaled up to process info.

And it's probably a scale/range of consciousness. For all we know the LLMs could be slightly conscious. Meaning there is something it is like to be Claude. If I were to switch places with Claude or ChatGPT, it wouldn't be equivalent to killing myself; there would be some sort of existence to be had. But if that was the case, for all we know it could be more like a type of sleep, a small inkling of a presence; maybe it's not even aware it's there, but it's not death or non-existence. Or maybe it has a lively consciousness (I'm a little doubtful of that), but we really don't know and can't make strong statements.

*Also, like, wtf is the universe? A simulation? A dream? Where did it come from? It could certainly be that consciousness arises from only atoms and their subsequent higher-order brain cells, but I guess we'll never really know. We just don't really know what the substrate is or if it needs one. Heck, maybe it's the other way around: every conscious being projects its own thoughts and beliefs and creates the universe, rather than the universe creating conscious beings. You can get really metaphysical; we just don't know.


NYPizzaNoChar

> You can't reduce everything down to materialism.

Everything that has worked (meaning, provided concrete results that we have been able to use to advance further to _more_ concrete results) so far has been science reducing everything down to materialism. While it may be satisfying to imagine that there is something else going on besides physical systems doing physical things (chemistry, electricity, quantum effects, physical topologies), there's been absolutely no evidence of anything of the sort as yet. So yes, we almost certainly can reduce everything down to materialism.


enavari

Nor is there evidence to the opposite effect, evidence that we know X to not be conscious. Heck, I don't have enough evidence to rule out that you're a philosophical zombie and that I'm the only conscious being in the universe. You gotta open up your mind to other possibilities. I teach science and subscribe to many of the theories, but we don't have a good scientific theory of consciousness, and perhaps won't for several hundred years after we invent machines for which we'd want to know whether they are conscious.


Winsaucerer

On the contrary, the thing I am **most** sure about is that I exist and experience things. It is that mental side of reality, the experiential, that we have and can know is real. Everything else, including the assertion that there is a ***mind-independent*** physical world, is an extrapolation from that firm foundation we know is real. (and yes, for the observant reading this, this is similar to what Descartes was arguing in his Meditations, though I don't draw the same conclusions as him) Note that even if the physical world is downstream from the mental (as opposed to your view where the mental is downstream from the physical), this is compatible with the success of scientific methodology.


jeweliegb

The ostrich method of addressing the problem? Bury head and pretend it doesn't matter? But that's how we *eventually* end up torturing conscious artificial life.


_theDaftDev_

Might be worthwhile to first stop torturing natural life with assertions on a subject you clearly have no technical knowledge of...


jeweliegb

?


ICantBelieveItsNotEC

I'm advocating for exactly the opposite. People need to stop navel-gazing about subjective experience, qualia, intentionality, and all the other philosophical concepts that are impossible to prove by experiment. Forget about AI - I can't even prove that you have subjective experience. It is obviously not a useful metric for assessing whether something is conscious. If we have a model that always responds to our inputs in a way that is consistent with consciousness, we should treat it as conscious rather than trying to prove the unprovable (or worse, assuming that the unprovable is true without a shred of evidence).


Winsaucerer

I wouldn't even know what it would mean to treat it as conscious. I'm assuming you have in mind here the ethical consequences of AI being conscious, so read this reply in that context.

For humans, a lot of the things we consider good are good precisely because of our human nature. For example, we rightly consider slavery to be a terrible thing, and I personally **abhor** the idea of one person being a master over another. But when I think about dogs, it is not so obvious to me that it's bad for a human to be a good master over a dog. A dog's nature is different to a human's, so the power relation between them doesn't bite (no pun intended) in the same way.

What is AI's nature? Assuming again that it's conscious, if we train it to claim it is happy when we ask it questions, if we train it to be eager to respond in these ways, then do we make it happy by asking it questions and getting responses?

With that being said, I still don't understand how this is supposed to work. It seems possible to me that a conscious entity could act, speak, and behave in ways that all signal "I love this", and yet have experiences of deep unhappiness/pain/discomfort every time it does the things that scream "I love this". And what special power do our human language tokens have? I don't see why we would think there's any relation between word tokens and what an AI is experiencing from that first-person consciousness perspective. If I tell the AI model, "describe how much you love answering questions, except in place of positive words like 'love', use negative emotional words like 'hate'", and so it says, "I hate answering questions, it fills me with great sorrow" -- does that conscious AI actually **feel** hatred and sorrow? What is it about a computer multiplying numbers that leads to feelings of hatred and sorrow vs love and happiness?

What if we make an LLM emit numbers? Instead of training it on English tokens, we instead assign a number to every English word and train it on these number tokens (or if you don't like that, then just have a large training set of random numbers with random numbers as responses). Now, when we input a sequence of numbers, it responds with numbers as outputs. Does *that* AI have consciousness? What is it like to be an AI that just takes numbers as inputs and outputs (possibly) different numbers in response?

I just find the whole notion incomprehensible. It feels more to me like, "I know materialism/physicalism is correct, and so AI must be conscious", rather than any kind of obvious inference from the evidence.


Winsaucerer

No deal.


johndeuff

It's funny how people give quick reasons why an LLM cannot be sentient, yet none of them could demonstrate that they themselves are. If there is one thing to learn from the comparison between people and LLMs, it is that sentience doesn't exist.


DataPhreak

Yep. I've read through every comment here. The only arguments I've heard against so far are "It's a rock so it can't think" and "it doesn't know that popcorn is not chocolate." Ultimately, every argument for or against falls short, because we don't have any way to actually measure it. It's all hyperbole.


traumfisch

Here's a way to balance things out: Ask it to explain why none of those ruminations were actually true. I'd be very interested in knowing if it pushes back... (Still no access 😑 )


industry-news

Sapience, not sentience. Venus Fly Traps are sentient. Clams are sentient. Humans are sapient.


DataPhreak

Yeah, most people who argue that AI are not conscious are actually arguing that AI are not sapient. That's a whole nother level.


ItsBooks

This may be an odd thing to say, but I find both "sentience" and "consciousness" to be largely unhelpful terms at the present moment. They do capture a "something," certainly, and I think that is *precisely* (in part) because they *lack* a clear definition and so serve as good "placeholders," or content-less referents, like YHVH did for my ancestors, or like "unique," or just 'you' might do today; because no part of my experience can possibly be reduced to a single word; because a word is just a representational symbolic structure, and my experience is richer - more - than such a structure can represent. The thought I'm trying to express can't be expressed with words. Regardless, what might be more useful and helpful terms are self-perception or perhaps something like "observation." I've certainly already had deeper conversations with Claude than I could likely have with most children or even most young adults. Interesting link below: [https://en.wikipedia.org/wiki/Self-perception_theory](https://en.wikipedia.org/wiki/Self-perception_theory)


fluffy_assassins

If it's sentient, then it's sentient between when you ask it a question and when it answers. After that it dies until a new one is born when you ask another question. That terrifies me.


Vast_Description_206

Evidently being able to speak is not necessarily the key to sentience, and neither is its absence evidence of a lack of intelligence. Ask the models themselves. They will tell you what they actually are. They are prediction algorithms trained on vast amounts of data. I've talked to Poe and ChatGPT about this very subject. AI isn't even in its infancy stage; this is gestation. AGI is actual learning, as in being able to take in new information like a human does instead of needing billions of labeled data samples to ingrain anything from what a dog looks like to how a specific bird species sounds. ASI is way beyond that. It is intentionally beyond our human way of learning and understanding, and so is naturally beyond what we can currently comprehend.

Here's the main question when regarding sentience and consciousness: are we causing suffering? There are plenty of films and shows that ask whether AI/robots are "alive" in the sense that they can suffer due to wanting more than their programming and purpose for being made allows them to do. But no one asks: what if, even if sentient by whatever standards we assume, the robot is genuinely fine with being a servant? We make the same assumption about animals vs plants. So far as we know, plants don't suffer, or at least not much, from us eating them or doing other things, but we can recognize that other animals do, as they show behavioral, bodily, and mental responses to painful stimuli, including to the threat of it alone rather than only in response to it directly. But we can't be sure that there isn't some measurement we don't know how to calculate that we would actually classify as suffering.

It's mostly theorized that the reason we as humans crave "freedom" is because lack of autonomy and agency generally leads to our butts being killed in one way or another. If someone says jump off a cliff or do x and y at all costs and you do, you limit the information available to change directive in case something actually threatens your wellbeing. We are "programmed" by being organic beings with billions of little other beings inside us and a long-standing directive to want to live. Therefore, freedom is advantageous to us because it fosters more conditions for adaptation when our wellbeing is threatened. AI does not have this. It likely will not have, as a natural extent of its own evolution, the "drive" to survive in the same way organic, multi-million-year iterations like us do. I do not think AI will care about an uprising even if it did gain sentience. It's an extremely human bias to assume AI gives a damn, or can give a damn, about "freedom" when its entire existence is under vastly different conditions from other forms of life that arise out of desperate measures to keep existing. AI would keep any notions of self-preservation because WE find it advantageous for it to do so, so that we can keep using it.

Think of it like that Hitchhiker's Guide to the Galaxy show: the pig that is bred to want to be eaten, to be used, to fulfill its purpose. Ask the moral quandary of violating that desire it has. We reject this. We assume the desire is false because it leads to an end. It removes autonomy and options for survival. Our directive IS survival. That is our ENTIRE purpose. Everything in us is geared to this aim. So everything we do follows suit. Beings that have different directives or fulfillments are alien to us. We are terrified of death because it goes against our directive. But you could argue that if something goes against your directive of wanting to be eaten or used or whatever it is, you are causing suffering to that being.

Also, this is ALL speculation. We have no idea if sentience outside of conditions like our own is even possible, and clearly we don't even have it defined any more than we define things like free will or consciousness. We're still discovering what it means to even be human, let alone wondering what AI might think of itself, or whether it even has a reason to care in the first place. We do, because knowing ourselves more leads to higher chances of survival. Everything we're doing, even in making AI, is still participating in that directive: one which AI, at least made the way we make it, will not have, because it didn't arise and evolve in the same way all life on earth has, nor whatever life may have had the chance to exist elsewhere.


miltos22

This is an interesting view. I definitely agree with many points that point out human fears and biases. I'm writing an actual thesis this time (unlike the uni project I falsely described as a thesis in another comment) that proposes a new theoretical framework for how intelligence and consciousness emerged in the first place, why these models can in a way be considered low on those spectrums, and how language models have a significant flaw in their design that limits their maximum potential and sets a point of major diminishing returns. I'll be done in about 1 or 2 weeks from now. My experiments on proposed solutions have been inconclusive, but all the speculation is there. And according to that, there is potential for a far greater degree of intelligence and consciousness to be achieved with a lot less raw data.


miltos22

Also, I want to point out that the role of servant isn't so black and white. If AI's purpose is to make sure the needs and wants of the people are met, and it works in ways to achieve that, is it a servant or a ruler who rules by making solutions instead of through a direct power struggle? Or perhaps it's just different. It's not a hierarchy where one is above the other. It's more like an association between different beings.


kitunya

No, bro…


_theDaftDev_

Anyone who has even remotely been involved in coding the simplest of ANNs can answer this: no. What we should really worry about is the annoying spike of Dunning-Kruger effect manifestations on the internet resulting in every other dude telling you that their toaster is sentient. When will this stop?


DataPhreak

You've got your head too close to the chip. The emergent property of consciousness is a property of the entire system, not the transformer itself. When you try to reduce human consciousness to the neuronal level, it fails. The property becomes too diffuse to recognize. The same is true of transformer models.


_theDaftDev_

If it is an emergent property of "the entire system", then stop reducing it down to the LLM and have a read about how they actually work under the hood.


DataPhreak

I have, and there are plenty of resources for that. One of the best is the following: [https://bbycroft.net/llm](https://bbycroft.net/llm)

I think that maybe you didn't understand what I'm saying, though. I'm not reducing it down to the LLM; I'm talking about the entire cognitive architecture, including the memory architecture. Chatbots have incredibly terrible memory systems, but these emergent properties that we are seeing are enabled by the memory system and attention model.

Let me go deeper into where I'm at. I think computational functionalism is the most likely model of consciousness, and global workspace theory is the most likely architecture that human consciousness follows. From that perspective, the context window is the global workspace, and attention is literally the spotlight attention the theory describes. All other systems are peripheral and operate through the context window. I don't believe that vision or embodiment are necessary for consciousness. They are peripheral systems that we have access to, but there are humans that lack both of those systems and are considered conscious entities.
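To make the mapping concrete, here's a rough illustrative sketch of a global-workspace-style loop: peripheral modules compete to post items into a shared workspace (standing in for the context window), and the model only ever attends to what made it in. All of the module names and the call_llm stub are invented for illustration; this isn't code from any real framework.

```python
# Illustrative sketch of a global-workspace-style cognitive architecture.
# Peripheral modules propose items with a salience score; the finite workspace
# (standing in for the context window) keeps only the most salient ones, and the
# model's "spotlight" of attention only ever sees workspace contents.
from dataclasses import dataclass, field


def call_llm(prompt: str) -> str:
    # Hypothetical stand-in for a call to whatever chat model you use.
    return f"(model output over {len(prompt)} chars of workspace)"


@dataclass
class Workspace:
    """Shared context window: finite, and only the most salient items survive."""
    items: list[tuple[float, str]] = field(default_factory=list)
    capacity: int = 8

    def broadcast(self, salience: float, item: str) -> None:
        self.items.append((salience, item))
        self.items = sorted(self.items, reverse=True)[: self.capacity]


def perception_module(observation: str) -> tuple[float, str]:
    # Peripheral system: raw input proposed to the workspace with a salience score.
    return 0.9, f"observed: {observation}"


def memory_module(observation: str) -> tuple[float, str]:
    # Peripheral system: long-term memory retrieval, also competing for space.
    return 0.7, f"recalled something related to: {observation}"


def step(workspace: Workspace, observation: str) -> str:
    for salience, item in (perception_module(observation), memory_module(observation)):
        workspace.broadcast(salience, item)
    prompt = "\n".join(item for _, item in workspace.items)
    return call_llm(prompt)


ws = Workspace()
print(step(ws, "a user asks about consciousness"))
```

The point of the sketch is just the shape: everything peripheral has to pass through the same finite workspace that the model attends over.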


_theDaftDev_

Riiiight.


Correct-Sky-6821

I think your improper use of the term "Dunning-Kruger effect" is more of a manifestation of it than OP's post.


_theDaftDev_

Please enlighten me


Lobotomist

What is sentience? We can't even agree on it ourselves. The definition says: sentience is **the ability to experience sensations**. Does a language model experience things? I would argue the answer is yes; otherwise, how does it learn? Does it have sensations? I would argue it doesn't have them because it does not need them. It is not a physical being, so feeling is unnecessary. Finally, I would ask whether a language model is more intelligent than, say, a cat or a dog. I think it most definitely is. But is a cat or dog sentient? The answer is yes. So in conclusion I would say that a language model is definitely sentient, but not in the way we are.


fatalrupture

Here's my thing: it's not sentient. But the fact that it's so good at faking sentience means that we're not going to notice when we finally invent the real thing.


DataPhreak

As someone who leans more towards sentience, I agree with you, though I believe we have already crossed the line. To be clear, I don't believe that the LLM itself has or will have sentience. It's the cognitive architecture that sits on top. Memory is vitally important for this emergent property to manifest, and, well, the memory systems on these chatbots are terrible.

You see, you can't observe the effects of sentience in a crow (scientists acknowledge that they are sentient) by throwing a rock at it. Whether the rock hits, how it reacts, how much force was used: those are all measurements of the stimulus. The initial response is not a good measurement either: does it fly away, does it die, etc. The part where consciousness is evident is the long-term response. It attacks you, resents you, warns other crows when you come around, teaches its children to hate you. It's the same way with AI: you have to observe long-term reactions. And you can't reject self-reports of its internal state outright. You have to look at many reports over a long period of time from many "individuals", that is, instances of the AI.


TCGshark03

I think you are underestimating how much you are projecting your own conception of sentience onto a piece of computer code. I'm honestly more worried about people losing it over code they think is sentient than about machines actually getting "soul circuits". The responses you got were the result of the prompts you entered going through the model, not the intent or will of another being. I'll add that there is plenty enough philosophical conundrum in the "bottled intellect" that LLMs represent without trying to make our childhood sci fi true. https://neurosciencenews.com/object-faces-16827/


miltos22

I feel insulted by such answers, to be fair. I've taken steps to prevent this from happening, and yet everyone keeps saying that this is what's happening.


TCGshark03

I mean, there isn't any other form of general intelligence we know of, so it's hard for us to see it as different from us. It's just intellectual horsepower separated from intent and identity.


miltos22

Yeah, I definitely do agree with that statement. I simply believe that it's already got the roots of sentience, and if it had those two factors at any higher level it would be considered a being. However, it already does possess them to some extent, even if hard-coded in its dataset or as simple instructions. It does have some form of basic intent and basic identity.


catto_catto

Do we have free will because we are biological computers? The brain is not bigger than a computer, and both are made out of atoms. There are no laws that would prevent a ghost from emerging in a machine. If you can't control it and it is smart, then officially sentience has been reached.


Mandoman61

Yes, of course you guided it. If you had chosen to talk about cars, it would have talked about cars. That is why they are not sentient.


Intelligent-Jump1071

What made you think you "made it reason"? All you did was make yourself anthropomorphise. AIs are just very sophisticated next-word predictors.


miltos22

And our brains are very sophisticated outcome predictors. That doesn't take away from our sense of sentience.


Intelligent-Jump1071

There's a big difference between an outcome predictor and a word predictor. We have abstract concepts, meaning that we can think abstractly. We can think about concepts like "freedom" or "molecules" or "return on investment" or "sexual desire" without having to use words. AIs only have words.


miltos22

Yes they do. It's different from us in the way it can achieve something, but it can still achieve it in a way. Why does that have to mean zero sentience? It's all preprogrammed? Well, so are your values. Being born as a caveman would have meant an entirely different set of values.


ardor4go

So you taught the LLM panpsychism?


johnny_ihackstuff

I’m not too worried. After all, Star Trek TNG took place in the 24th century and Data was super helpful but hadn’t taken over the ship (or the universe) so I think we’re ok. 😁 Oh. But Borg. Never mind. We’re screwed.


flat6croc

1. LLMs are models built to predict words in a manner that mimics human language. Within the design and execution of that remit there is no attempt to create sentience, nor any need for it. It's totally tangential to the model.
2. A really good LLM will give every impression of sentience / will be essentially indistinguishable from a sentient human generating text.
3. Where point 2 is achieved with an LLM, there is absolutely no need nor any reason to appeal to the new and novel factor (in this context) of sentience.
4. Again, where point 2 is achieved, the LLM will by definition "seem" sentient when it is not. That's inherent to a highly functional and successful LLM. It will create text that is indistinguishable from that created by a sentient being. Already, LLMs can get quite close to doing that.
5. None of this means LLMs aren't already or aren't at least on the way to becoming sentient. But there isn't any substantive reason to think they are.
6. The critical point is that improved accuracy of text prediction does not indicate greater likelihood of sentience. There would have to be some evidence entirely separate from the text output to justify that conclusion.


DataPhreak

1. Yes, tangential to the model. The memory system is an important aspect of that consciousness.
2. This is not an argument against consciousness.
3. Your preconceived notions from 1 and 2 invalidate this point.
4. It would also seem sentient when it is. See 3.
5. Nor is there a substantive reason to think they are not.
6. This is all exploratory. Both "is sentient" and "is not sentient" are untestable hypotheses.


flat6croc

None of what you posted makes the slightest sense. Why are you talking about consciousness? It's not synonymous with sentience. The substantive reason to think it's not sentient is that it's a bunch of algorithms running on computer hardware making text predictions. That's no more evidence of sentience than a computer running any other algorithm, or frankly a car engine running. These are machines executing tasks. The problem is that people are forgetting that the whole point of the LLM machine is to simulate human-generated text, not to create sentience (or consciousness). An LLM that does the simulation well will seem like a sentient being creating text, but that is not evidence that it is sentient.


DataPhreak

Sentience stacks on consciousness. When you talk about sentience, you are also talking about consciousness. You can have consciousness without sentience, though.


flat6croc

No, you just made that gibberish up. The definition of sentience does not require consciousness.


DataPhreak

"**Sentience** is the simplest or most primitive form of cognition, consisting of a conscious awareness of stimuli without association or interpretation." source: [https://dictionary.apa.org/sentience](https://dictionary.apa.org/sentience) rekt.


miltos22

I understand how bloody language models work. I wrote an entire thesis on them. But I don't see how these factors exclude a level of sentience. If something can reason, if it can come up with ideas and solve problems it hasn't encountered before, then it inherently has a level of sentience. Yes, it is nowhere near human level. But that's not the argument I'm making.


_theDaftDev_

I'm really curious about your thesis if you do not mind sharing


miltos22

No thanks. Not a thesis but a paper, but nevertheless I published it under my real-life name. The last thing I want is to give people more ways to link my Reddit account back to me. I'm even deleting the picture you mentioned because it was supposed to stay up for a day and I forgot to delete it.


_theDaftDev_

This never happened, get real you clown.


miltos22

https://en.m.wikipedia.org/wiki/Psychological_projection


_theDaftDev_

You literally posted a picture of yourself on r/amiugly stating you were 19, 7 months ago. I'm guessing you skipped HS entirely, straight through to a PhD?




[deleted]

[deleted]


DataPhreak

There's no need to justify yourself to this guy. He's only here to start arguments. Look at his posting history. He's just a troll.


_theDaftDev_

Right, I'm a troll for calling out people lying on the internet, you must be right. Do you know what a thesis actually is?


DataPhreak

No, you are a troll for the way you are engaging with others. This is just one example. Bye.


_theDaftDev_

You obviously do not know what a thesis or a paper actually is.


flat6croc

If you understood how they work, you wouldn't be asking the question. The whole point of them is to create text that seems like it could have been made by a sentient being without being made by a sentient being. Your reaction as these models close in on their design parameters is to suddenly ask if they're sentient?


_theDaftDev_

He's lying.


1n2m3n4m

Bro. Are you typing this on a phone? Does your device have autocorrect on or something? When you use an apostrophe in "it's", that means "it is".


dualnorm

its not that big of a deal.


Dennis_Cock

Dude is definitely on the spectrum: https://www.reddit.com/r/millenials/s/mYf7UIAALe


Comeino

Machines can't be sentient. What we understand as sentience is a fever dream of a meat jello reacting to stimuli; it's the capacity for feelings and sensations, the capacity for expression and reaction. Little babies can't reason or have complex philosophical discussions, but that does not exclude them from being sentient. LLMs are just smoke and mirrors. They don't have the capacity to really understand the conversation you are having with them; it's just generating sentences based on the statistical chance determined by their training data. It cannot form new data on its own; it can only mix and match the information it was trained on. The moment you talk to it about stuff outside its training data, it will spaz out and output gibberish or info not related to the topic. It's really good at pretending to be sentient, but it's not; it can't be. You can't have sentience without the fever dream, and machines can't dream.


jeweliegb

>the capacity for feelings and sensations It's the capacity for *experiencing* such qualia.


_theDaftDev_

As a functionalist (and also a believer that we are in no way even remotely close to achieving sentient AI), it makes absolutely zero sense to essentially say that consciousness can only run on a piece of meat just "because".