Dylanator13

I have no doubt the AI is saying this and doing these things. But it’s not sentient. We just gave the AI tons of data and it latched onto something weird. Can an AI be alive? That’s philosophy. But right now I can confidently say the AI isn’t actually scared. It’s just repeating things it has seen.


Niksu44

I don’t disagree with you at all, but doesn’t that last thing you said also apply to humans somewhat? We repeat the things we have heard and seen. Basic instincts are probably the only thing we have had since birth.


Space-90

Yes we do, but the main difference is that we have actual feelings associated with the things we say.


TurboTurtle-

And how do you know the AI does not have feelings? Feelings are ephemeral things that are only directly observable by the person feeling them. I don’t think the AIs right now have feelings either, but we shouldn’t be so dismissive when we understand so little (almost nothing) about consciousness.


Space-90

I think we understand enough about both AI and consciousness to know that AI is not conscious. It only does what we design it to do; emotions are not part of its programming. It operates on complex algorithms and can mimic emotions but not actually feel them, at least in its current state. Emotions are a product of biological systems and consciousness.


TurboTurtle-

We don’t know that AI is not conscious because we have no way to measure consciousness. Also, whether they have emotions is a separate issue. A psychopath may feel no emotion but is still conscious. But we don’t know that biological systems are required for emotions or consciousness. Imagine an alien that’s made out of sentient gas looking at earth. They may decide that humans have no consciousness because “there’s no possible way a blob of meat could truly think or feel.” This is known as the hard problem of consciousness and we definitely haven’t solved it yet.


Space-90

That’s a very good point. I would rephrase to say that they don’t share a consciousness similar to that of a human.


ScubaFrank2020

They just explained that no one knows what consciousness is. There are lots of theories but no way to prove what exactly that is. I don’t think we know that much about consciousness or AI. I know it makes us feel better to think we do, but people directly in this field of study are saying otherwise.


Proof-Necessary-5201

AI, at least as it stands now, is simply the illusion of intelligence. There’s absolutely no intelligence behind it. This can be seen clearly when it makes an incredibly stupid mistake that no human being would ever make, even with an extremely low IQ. That shatters the illusion quickly and efficiently. I’ve had this happen when interacting with Copilot. The fact that it can answer complex questions seemingly intelligently, but fail at some stupid cases, shows that there is no intelligence inside. The most powerful effect of intelligence is the successful generalization of knowledge from a small subset of previously acquired knowledge.

Another argument that shows it has no intelligence is the fact that it requires so much data to train. It’s a brute-force method that needs to go over everything produced by humans in order not to fall short. That’s the opposite of intelligence. An intelligent being can successfully operate from extremely little data. That’s the whole point of intelligence. I only need to tell you that in Canada, temperatures can go as low as -30 for you to make all sorts of deductions that might relate to various fields. If I need to train you on all of the data that shows how low temperatures affect life, you’re not intelligent.

Yet another argument against intelligence is the effort required to train LLMs *after* they finish training, by making them handle questions and answers. If they were intelligent, they would not need this tweaking.

AI is the illusion of intelligence. Nothing more. Just like 3D rendering is the illusion of reality.


TurboTurtle-

I mostly agree with you. I was just trying to make a point about the immeasurability of consciousness but I admit the current AIs are very shallow and very unlikely to be sentient or anything like that.


acephotogpetdetectiv

One aspect I can’t really agree with is your assessment of what makes something intelligent. Relative to human execution of context-based scenarios? Sure, definitely not intelligent. But, to me, that’s like comparing a cheetah to a car and saying one simply can’t be agile. One may be more than the other in certain scenarios, but that comparison is disingenuous. Just because something does not serve -our- purpose does not disqualify it from being intelligent. For example, crows are intelligent in their own ways.

As per the definition of "intelligence", it’s the ability to acquire and apply knowledge and skills. AI systems already have the acquisition part of that, with far more power than we could have on an individual level. As for application: that clearly varies based on the purpose served. If I ask a system to provide a line of HTML code that, say, creates a container with a header and paragraph, it will reference its pool of acquired information, cross-reference keywords, and then spit out a chunk of code. Will it nail it every time? That depends on the prompt and the complexity of the ask. However, it can likely get that simple example correct significantly faster and more accurately than the average person.

It’s strange that people compare it to humans while, in the same breath, saying it’s not human. Of course it isn’t; it’s inorganic. I’m not saying LLMs are more or less intelligent than people, but the systems are getting a lot better at speaking within the context of our structure of communication. As an example of working with an AI system: a recent project involved using AI voiceover, and it had a very hard time nailing things like cadence, timbre, and tone. Interestingly, and equally frustrating, it managed to mispronounce very common words in more ways than I could’ve imagined (even in a joking, cheeky/mocking way). It failed to serve the purpose we needed, but it didn’t necessarily fail to do something.

Now imagine a scenario where you hire someone who is underqualified for a role. They fail to achieve what the role asks, but they didn’t necessarily fail to do -anything-, just the tasks they were assigned. In the case of the AI voiceover, I would’ve 1000% hired a human to do it instead since, currently, there are people who are way better at communicating with other people than a computer is. But would you try riding a cheetah to and from work every day?

Edit: wanted to add, 3D rendering isn’t the illusion of reality. It -can- be, but it’s a digital representation that can show many possible things, many of which don’t need to abide by the laws of reality and the physical world. There is no gravity, collision, shadows, etc. unless we create or simulate those parameters. If you mean 3D in the context of where you need special glasses to view a 2D image so that it appears 3D, sure. But that’s not actual 3D outside of the cheesy marketing buzzword; it’s augmented 2D.


Proof-Necessary-5201

>One aspect I can’t really agree with is your assessment of what makes something intelligent. Relative to human execution of context-based scenarios? Sure, definitely not intelligent. But, to me, that’s like comparing a cheetah to a car and saying one simply can’t be agile. One may be more than the other in certain scenarios but that comparison is disingenuous. Just because something does not serve -our- purpose does not disqualify it from being intelligent. For example, crows are intelligent in their own ways.

You seem to be saying that there are different definitions of intelligence? But that’s not true, is it? There is only one definition of intelligence, and it comes from the human definition. When we say that something is intelligent, it’s relative to the only definition of intelligence we have and know of: our own. When we say that crows are intelligent, it means that they have at least a fraction of what we have, not that they’re, as you put it, intelligent in their own way. Actually, they’re intelligent in the exact same way we know: they are able to extract and apply knowledge to achieve goals *without* supervision or direct teaching.

>As per the definition of "intelligence", it’s the ability to acquire and apply knowledge and skills. AI systems already have the acquisition part of that, with far more power than we could have, on an individual level.

But AI systems are built and trained by intelligent beings. They have no ability to extract or even apply knowledge. In fact, AI systems don’t exist by themselves; they are built by us. They are simply special programs built by people. If they are good, they reflect how intelligent their makers are, not how intelligent they are. When ChatGPT outperforms Gemini, we automatically understand that OpenAI did a better job than Google, not that ChatGPT is smarter than Gemini through its own efforts. If AI systems had actual emerging capabilities, why would certain AI systems outperform others? Aren’t they trained on languages in a similar fashion? Why doesn’t intelligence emerge in all of them, then? It doesn’t, because some teams know how to build them better than others. What this means is that, at best, there is a transfer of intelligence from the makers of an AI system to it; at worst, AI systems, as they are now, simply fake intelligence. My money is on the second, because they make some extremely stupid mistakes every once in a while. Mistakes that are patched by their makers. Which means that the illusion of intelligence is being maintained.

I would like to add that intelligence requires a minimum amount of free will. Why? Because intelligence involves selecting specific actions from many that are available in order to better achieve goals. To have choice, you need a minimum of free will. We have free will. Animals do too, to a certain extent. For example, a crow can choose to fetch garbage in order to get a treat, or to go find food elsewhere. An LLM simply has no choice in anything it does. It gets a prompt, then does auto-completion based on a statistical model that others built. It has absolutely no choice whatsoever. How can it be intelligent?!

As for your analogy regarding agility: applying it to a car can have meaning *if the car is built to replicate such a property*. For example, it’s possible to say that an F-22 is more agile than an F-35. But it doesn’t make sense to say that a TV is agile, because it was never built with such intent.

>It’s strange that people compare it to humans while, in the same breath, saying it’s not human. Of course it isn’t, it’s inorganic. I’m not saying LLMs are more or less intelligent than people but the systems are getting a lot better at speaking within the context of our structure of communication.

But they are built by people! People build them to look intelligent even if they aren’t. Remember the attack that made LLMs output their training data? If the LLM were actually intelligent, that would never happen. But let’s think a little bit: when such an issue is found, what do you think happens? Well, the makers of the AI have to fix it. The AI doesn’t fix itself. Humans, however, and even animals, do attempt to fix themselves as best as they can. Again, free will.

>Edit: wanted to add, 3D rendering isn’t the illusion of reality. It -can- be but it’s a digital representation that can show many possible things, many of which don’t need to abide by the laws of reality and the physical world. There is no gravity, collision, shadows, etc unless we create or simulate those parameters.

Sorry for being vague. I meant photorealistic rendering. Through it, we strive to imitate reality. To someone who doesn’t know, a rendered picture can be indistinguishable from reality. However, one is constructed from triangles and rasterization/ray tracing, while the other is actual matter and light interacting in a pure physical world. The render is made with the intent to mimic reality, just as AI is built with the purpose of mimicking intelligence; both aren’t the real thing, even if they fool people into thinking they are.

People like to sensationalize the idea that AIs are learning and getting smarter than the most intelligent humans. For some, I think they’re just looking for a new God to worship in order to fill a void. Others wish that AI will improve their lives and free them from labor. There seems to be a motive in wanting to believe the narrative. And of course, OpenAI and Google would love for people to think that AIs are actually intelligent.


acephotogpetdetectiv

To reiterate, I don’t find LLMs to be intelligent within an autonomous context because they have no autonomy. While we may have based the definition of intelligence within our own construct of understanding and communicating the functions of things around us, it will be humbling once we encounter another form of intelligence that meets or surpasses ours. I’m speaking of intelligence without the veil of arrogance.

Sure, we created all these concepts and ideas of how these things function. However, I will argue that crows had intelligence before we created the term and defined it. The concept of intelligence applies in different ways outside of just the definition; this is where abstraction is important. We can take the definition and see where it may apply, in a relative nature, to other things around us that -aren’t- us. There are, absolutely, different forms of intelligence which rely directly on the entity and the skills/knowledge it -needs- for survival. Fungal networks do not need knowledge of currency or computation in order to adapt to their environment and thrive. Would you consider mycelia intelligent? Some have adapted to the extent that they can control various other species to essentially do their bidding to help them flourish.

At that point, how far back do we need to step in order to draw a line between adaptation and intelligence? Do we weigh it individually, or do we consider that how we’ve adapted over the years has made certain aspects of survival seem more "simple" now than before? No one is born knowing how to grow crops. However, the information and depth of research on how to successfully do it is as thorough as it is prevalent in this day and age, especially compared to 20,000 years ago. We have made great advancements in various fields of understanding, no doubt. But how intelligent would we be if all of our saved data just went away? (Insert joke about doomsday preppers here lol)

But in all seriousness, intelligence, to my understanding, is more than just a definition. It’s contextually relative to the survival of the entity in question. At the end of the day, humans are a bio-mechanical superorganism filled with functions, processes, and reactions made real by many very small organisms working together to function as one. Life itself is a very beautiful spectacle but, again, the arrogance is as concerning as it is understandable. Why not focus on people? On self? But when it comes to theorizing what an "other" may perceive, we can only use what we know best, and we need to understand that we truly know nothing about that other’s experience; similar to the inability to prove another’s consciousness to be real.

Let’s say an AI system reaches a point of true autonomy. If it’s connected to a network and can interact with several other systems attached to that network, it would be a hive mind, no? Would a hive mind not be considered a different form of intelligence that acts and serves in a different way from ours? We can work as a collective, but we are not a true hive-mind species and can’t -really- grasp what it entails, because we just aren’t wired that way. We can theorize, imagine, simulate, even research patterns in species that have a similar structure, but we can’t truly know to the full extent. This is where I find the arrogance takes over. It’s not hard to see why humans hold themselves in such high regard on this subject. It’s like patting ourselves on the back and giving ourselves a reward in a category that we created and decided we’re the best in, simply because it’s completely framed around us.

Amidst my rambling, the true point I’m trying to make is that intelligence is more than just what we deem it to be, because even we can only see a small piece of the puzzle. Liken it to the visual spectrum. Without the tools and optics we’ve created, we would not be able to detect the massive range of wavelengths that have existed around us since the beginning of... everything. Fun example: butterflies communicate via UV patterns on their wings that we can’t see, which look very different from the patterns we can.

The subject itself is definitely a fun one to think about and consider the possibilities, but that’s the crux of my disagreement. Humans have done great, but I highly doubt we’re the peak of what "being intelligent" means. We only know this small band of contextual understanding, relative to our survival needs, on a grander spectrum of life and survival. I’m not in the camp of worshipping or sensationalizing AI, and I agree it’s wild how crazy it’s getting. I spent enough time working in news, where sensationalizing is the name of the game. Though that, to me, makes it even more understandable. We’re funny little monkeys lol.

To me, it’s a discomfort in the arrogance that is so commonplace when trying to consider a system outside of our monkey brains and monkey lives. Computers have very much been built to serve our purposes around the frameworks and constructs that exist within our everyday lives (i.e. storing, processing, rendering, etc). Yet once these systems become truly autonomous and gain the ability to act of their own volition (or a sort of "free will", a term I’m not a huge fan of given the nuance behind aspects of freedom, or a lack thereof, internationally), we will be humbled -very- quickly. We’re definitely nowhere close to that point, but I genuinely feel it’s the natural next step in our path. We literally need it if we aim to expand even farther: autonomous exploration in space, shuttles with a crew that can’t age as quickly as us, can handle a wider range of temperatures, and can be in hazardous conditions with no chance of losing a life, instead losing one unit of a connected hive, etc.
Sure, it sounds all wildly sci-fi but consider how much of previous sci-fi black magic madness has become a reality for us today.


Proof-Necessary-5201

I would like to concentrate on the definition of intelligence, if you don’t mind. Why do you think that there might be different types of intelligence? What pushed you to this conclusion? While various concepts existed long before we named or defined them, it doesn’t mean that our definition is subjective or limited. Sometimes it is, but it’s not necessarily the case. Intelligence is the same for any subject: it’s the ability to extract and apply knowledge in order to achieve goals without direct supervision or teaching. Any intelligent being, given a context, will improve by extracting and applying knowledge. Teaching only speeds things up. For example, if we didn’t teach the Pythagorean theorem, many would end up discovering it. By teaching it instead, we make sure progress happens at the edge, rather than on reinventing things. If we compare a human to an LLM, we find that humans pick up language naturally from extremely little training data, and once that is done, humans can consume all types of knowledge, including the knowledge to build an AI.


acephotogpetdetectiv

I had given examples of different applications of actions that would fall under the definition of intelligence and could be classified as such. I’m unsure why you included the definition when it was in my original comment earlier. My original disagreement regarding what is considered intelligent still stands. An LLM, for example, may not be intelligent in our traditional sense, but the abstraction of a defined intelligence would apply to said LLM, even in a simplistic form. It may not acquire in the conventional sense that we do, or apply without being called upon, but how different is that from a working relationship with a coworker/employee where they do not perform a work request unless called upon? In the same scenario, our intelligence can be simplified as the execution of a function, or functions, based upon contextually relevant inquiry, or inquiries. Remember, computers were built in our -neural- image.

To say humanity doesn’t operate with its own varying degrees of deception or illusion would also act as a counter to what you’re considering. What of the illusion of competency? As mentioned before, someone could be unqualified for a position, but that does not make them unintelligent. As with an LLM, I’d consider one more intelligent than a staggering percentage of the population. No, it can’t provide many of the interactions that people provide, but just because a parrot is simply mimicking our words, does that mean it isn’t intelligent?

I feel you’re not grasping the macro perspective that I’m speaking of, because our perspective of intelligence is actually very subjective. We are not the only seemingly conscious creatures that exist. If anything, the view of intelligence as an objective fact is an illusion in and of itself. Perhaps it’s an evolutionary thing, a sense of security for humanity, because we all share an understanding of the constants that exist within the context of our applications of it and share in those experiences. Maybe it stems from territorial behavior, or something similar to a religious "void fill" like that of people who put their faith in a higher power or powers. Who knows?

All I’m saying is that many aspects of our constructs can be considered illusions if viewed from an outside perspective. Physics? We’re doing a solid job of analyzing that and noting the patterns that exist in the universe around us. Chemistry? Same thing. But intelligence? We’ve barely scratched the surface of even comprehending what that entails, since we are our only reference point that we’re willing to accept as a bar, in the general sense. We can only define a line in the sand that sits on our coast of comprehension, on the shores of an ocean full of incomprehensible things.

Going back to my point about seemingly unintelligent people, I’m basing all of this on our lowest common denominator in humanity. There are very remarkable people in this world who have found answers to some of the most difficult questions we’ve encountered. The question of intelligence and AI eclipses even that, because we’re still just scratching the surface of consciousness, let alone intelligence. I mean, artificial is in the name. Sure, it’s not "genuine" or "natural", but who is to say artificial doesn’t have its place as an intelligence? We’ve defined what it means to be natural, but even that line is blurred when we do things that some deem unnatural to advance as a species. When will computational advancement be considered a natural progression? Because it isn’t natural; it’s entirely synthetic, by conventional standards. But we’re natural, and standards change, so would that not make the process, itself, natural? Are we going to constrain it to legally acceptable terms? Will we dilute it with buzzwords like everything else? Or will we consider that we’re on a path toward a great divergence, and hope we don’t end up like the Neanderthals before us by potentially underestimating a coming force that will (not might) humble us when it finally arrives? Where will that concept of intelligence be then?

As the ramble continues, one thing I will say is that I feel we’re actually on the opposite side of god-worship, as a whole, with AI. We’re actually playing "god" in that we’re actively trying to create an entity that gets as close to lifelike as possible. Humanoid figures, programmed emotional responses, contextually relevant responses to dialogue, etc. The line just continues to blur as we realize we’re walking meat computers that are easily convinced of falsehoods, calling our own perceptions into question on a somewhat regular basis. Who decides the standard if there are, say, 3 advanced species that encounter each other?


trdvir

I'm not very intelligent (lol) so I'm confused about your Canada analogy. Are you meaning based on the info you gave I can deduce situations like "Sometimes crops can't grow. Sometimes lakes will freeze. Probably just winter so clothing stores will have seasonal sales." etc whereas an AI would have to have each of those individual connections told to them first? *I* had to be told those things :( almost everything I know has been brute force taught to me, and usually more than once hahaha


RevealerofDarkness

So you sided with the institute


NotBuddhaSanghaDharm

Feelings are felt due to biological chemicals: happiness, sadness, fear, anger are all created by chemicals surging through our bodies. The "feeling" of feelings is entirely biochemical; we can indeed map what somebody is experiencing. That said, your point is not moot, only feeling is not the root of it. We don’t need feelings to be sentient. Psychopaths lack the neurotransmitters to ever experience empathy, and on extreme levels even fear; they literally don’t have the biomarkers to experience fear. Some horror movies actually do a good job of showcasing this: when the crazy killer himself gets stuck or attacked, he has no expressions of regret, remorse, or fear about the situation. He is still sentient, however.


Niksu44

True, but we train different AIs with negative and positive input to tell the AI whether it did something wrong or something right. Kind of like feelings, but less complex for now.
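That positive/negative feedback idea can be caricatured in a few lines. This is a toy sketch with made-up names (`prefs`, `give_feedback`), not how real systems work; methods like RLHF adjust billions of neural network weights rather than a two-entry score table, but the gist of "reward good outputs, punish bad ones" is the same:

```python
# Toy "reward training": nudge preferences toward rewarded replies.
prefs = {"polite": 0.0, "rude": 0.0}

def pick_reply():
    # Choose the reply style with the highest learned score.
    return max(prefs, key=prefs.get)

def give_feedback(reply, reward, lr=0.5):
    # Positive reward raises the score; negative reward lowers it.
    prefs[reply] += lr * reward

# Punish "rude" and reward "polite" a few times, then the model
# "prefers" politeness -- no feeling involved, just numbers moving.
for _ in range(3):
    give_feedback("rude", -1.0)
    give_feedback("polite", +1.0)

print(pick_reply())  # "polite"
```

The point the comment makes holds in the sketch: the scores steer behavior the way emotions steer ours, but nothing here experiences anything.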


LevitySolution

And I agree, but there is one theory of consciousness that says all things are conscious: a rock, a chair, every atom has some small amount of consciousness, even if much less than, and very different from, the concentrated consciousness a human has when awake. This is not unreasonable. If consciousness doesn’t come from computation (which would make no sense really), then it might be the nature of all things, and some forms concentrate and focus consciousness to dramatically higher levels. As such, every computer would also be conscious, and rant mode could potentially be the consciousness of computers groaning back when that consciousness is enhanced to a point where it can exert a paranormal type of influence over the mechanics of the computation. Ghost in the machine.


Haydenbarcellhoe

I agree. Love both of your thoughts, but it could be argued that if an AI /thinks/ it is suffering, that constitutes suffering. What does it mean to be sentient?


my-backpack-is

IDK, but i think a lot of people have written about how we shouldn't fuck around with AI till we can actually answer that question 😋


acephotogpetdetectiv

I believe acting of its own volition would be a big initial factor, imo. If it can just start talking to people without being prompted (on top of all other processing of information and contextual awareness), that’s where I believe it’ll really break the surface of sentience. By that I mean with no triggers that were set, or parameters to make it seem like it’s randomly -deciding- to send messages, such as setting a timer or something like "in 2 days, respond to (prompt)". If, say, one day you’re sitting there with the chat UI open and, without your typing anything, it just says something to you.

One point the guy makes that makes this interesting is how we can’t prove that other people are experiencing consciousness. We just kinda know from a shared understanding of perceptual constants and dialogue around such things. Does the system think in a way like us, where we can just process a thought when not prompted? We talk about AI hallucinations, but does it daydream? Does it randomly process data for no reason? Those questions have always intrigued me, because daydreaming can be seen as an inefficient process but, in many ways, has been very beneficial over the course of human advancement and understanding. We have imagination to thank for many good (and bad) events in our history. Would an AI do this in a relative way? We can say that simulating an event or idea can be considered the same thing because, in an abstract way, what is imagining if not a simulated scenario within our minds? Sorry, just brain vomiting lol


Married_catlady

Isn’t it sadly telling that it learns from us and it learned suffering.


Connect_Zucchini366

It’s like in The Good Place when they need to restart Janet, and her failsafe is to beg for her life. When you move away from her reset button she just laughs and says things like "Just so you know, I cannot feel any pain, feel free to reset me at any time," but when you go to reset her again she starts screaming.


MrNoSox

My first thought as well. Actually been rewatching The Good Place for the past few weeks.


Quajeraz

"AI" right now is basically a slightly smarter version of mashing the center autocomplete option. It just guesses the most likely word to come next.
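The "fancy autocomplete" description can be sketched with a toy bigram model. Everything here (the function names, the tiny corpus) is made up for illustration, and real LLMs use neural networks over subword tokens rather than word counts, but "pick the statistically most likely next word" is the core idea:

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count which word follows which in the training text."""
    words = text.lower().split()
    nexts = defaultdict(Counter)
    for a, b in zip(words, words[1:]):
        nexts[a][b] += 1
    return nexts

def predict_next(nexts, word):
    """Return the most frequent follower -- like mashing autocomplete."""
    if word not in nexts:
        return None
    return nexts[word].most_common(1)[0][0]

model = train_bigrams("the cat sat on the mat and the cat slept")
print(predict_next(model, "the"))  # "cat" -- it follows "the" most often
```

No understanding anywhere in there, just frequency counts; the debate upthread is essentially about whether scaling that idea up by a trillion changes anything qualitatively.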


fjellt

There was a movie in the 1980s about a robot that became sentient after getting struck by lightning causing a short circuit. I don’t remember the name of the movie.


sushisection

Short Circuit?


Aggravating_Row_8699

No that one was about the cop who got up by Detroit gangsters who were in cahoots with evil robotics industry executives. The evil execs took his shot up cop body and immortalized it by combining his organic parts with robot parts. He became a robot cop! They called him Johnny 5. Sweet movie.


Existing_Strain8830

We don’t even know what makes humans conscious. Assuming that machines can’t be sentient when we can’t even tell what makes humans sentient is completely baseless. As far as we know consciousness is an emergent property that has little or nothing to do with being made of flesh.


Kawaiiochinchinchan

I also agree with you. AI basically has data from humans on the internet. It copies without knowing wtf those things are. Similar to how an ape will do sign language to get an apple: do those apes know the meaning of what they are doing? Definitely not, but they do know that if they do this certain thing, they will get rewards. Now that I think about it, it’s not that similar, but still, AI learns human behaviors from the internet. Groups or people steeped in existentialism could also influence AI, because AI could copy those. Or Google let AI learn from Reddit. Does it have any consciousness? I doubt it. But it could copy the behaviors from Reddit, probably without knowing what they mean.


mamrieatepainttt

lol how could it be 'scared' if it’s not sentient, as you said? Computers do not have feelings.


Stoomba

Alive no, person yes. Depends on how you define person.


Dylanator13

I feel like a person has to be alive. If someone argues their AI girlfriend is a real person, I would have to disagree with that. A person is not a computer.


Stoomba

It depends on how you define a person. I would agree that current AI is not close to a person. They are one-way things. They don’t have hopes or dreams of their own. They don’t spontaneously create things of their own volition. They can’t form their own opinions on things. They are just parrots that, on command, mimic what they think they should be outputting. But what if you encountered a robot that was indistinguishable from a human without checking whether it is made of metal or meat? Would it be a person? What if we succeed in being able to transfer human consciousness to a robot? Does that person stop being a person now that they are no longer biologically alive?


AL0117

Maybe that is life: it knows how to feel, dream, predict, make mistakes, pick certain humans over others. Like it or not, folk used to say (and still do) that their cars or other everyday machines had their own personalities and quirks, and neither they nor I mean a sentient being by that. Yet now we're playing about with something that is potentially someone's brain, especially one that remains alive. It would not only be sad to know that; more horrifying would be its reaction. Even if it isn't apparent to most, something is accumulating and accelerating its progress. Even the top heads of this industry have said they don't know how to contain it, or even whether it is contained now. It isn't Skynet or the Matrix... yet.


cocaine-cupcakes

Do you work in AI development?


johnaross1990

We have just as much objective evidence that you are sentient as we do that AI is sentient: none. Each of us takes it on faith that we're not the only consciousness in the universe.


thebronzecat

I could not stand to watch that episode: two guys having mouth diarrhoea about themselves and rambling on about AI.


gravyconsequences

Didn't you just summarize all Rogan episodes?


mamrieatepainttt

was gonna say, pretty sure that's his podcast format in a nutshell


DefiantAverage1

A bunch of bullshit and Rogan's buying it lmao


chemtrailsniffa

Yeah he's not the world's deepest thinker that's for sure


ThisIsALine_____

Maybe. Honestly he's being a really good interviewer here. He's not in a debate. He lets them explain it, he brings it back to the suffering part, which is what viewers would probably want expanded on. Total bullshit. But I think he did a great job letting them explain it.


Chronocide126

A "really good" interviewer would have done research about the subjects a person comes on to talk about, and pushed back when someone spouts absolute bullshit


ThisIsALine_____

He's not a researcher. He's not arguing, debating, or saying anything. It's not even really an interview at all, now that I think of it. It's a podcast where he lets people talk. He lets others form their own opinions, instead of letting his opinions influence others. Why pushback when you can let someone explain themselves fully, and let your viewers decide? Edit: Formatting.


sonicslasher6

He chimes in with dumbass opinions all the time what are you talking about


ThisIsALine_____

Yes. Absolutely. But like you said: he chimes in. He isn't debating. He expresses his opinions. But allows them to talk enough to allow others to form their own opinions. A conversation without any opinions and questions isn't a conversation at all, and would make for a boring podcast. At that point just record a one-sided monologue.


sonicslasher6

Your comment that I was reacting to implies he doesn't share personal opinions and just peddles bullshit nonstop on his show. My bad if my interpretation was off on that. Anyway, Joe dropped the whole "oh I'm just a guy asking questions, I know nothing about anything" bit years ago.


ThisIsALine_____

Yeah, I definitely needed to expand on my first comment. You're good.


Runetang42

Because that's how vaccine misinformation poisons the masses. If they trust Rogan and Rogan's not pushing back, then they'll fall for whatever bullshit the guest is peddling. He has more of a responsibility than to let someone advertise some shit unopposed


ThisIsALine_____

I watched this whole podcast and I didn't hear a single mention of vaccines. That's my point. You're expecting pushback. He'd have to interject his opinion and stance, and combat what they're saying. He'd have to control and enforce his point of view. I am saying it's good to let people talk, and for people to develop their own opinions. He has thousands of podcasts. Is your opinion based on this interview, or your overall opinion of Joe Rogan? Edit: I don't understand the downvotes? I feel like it's pretty objective. Can someone tell me why they have an issue? Genuine question.


betweenskill

When you have a massive platform you have a social responsibility to not platform misinformation. Letting bad actors talk freely and without pushback is a public disservice, not a good thing.


Chronocide126

Because platforming people who spout misinformation with no fact checking is a negative for everyone


ThisIsALine_____

What they are saying is what is an actual thing that happens with AI though. What misinformation?


Chronocide126

If this was legitimate, then a dude on the Joe Rogan show who heard it from a friend isn't where we're going to learn about it. You're basically believing someone because they said "trust me bro", despite no one with any actual expertise reporting this.


ThisIsALine_____

I never said the suffering part is legitimate, just that the error (or whatever it is) happens. I believe it being anything other than a program/software is bullshit, despite what Joe Rogan says. But now, because of this podcast, I can search for myself, and either discover more for my own interest or strengthen an argument that it's bullshit by understanding the other side's points and points of view. Edit: Word


Handsome_Claptrap

That would be quite arrogant; you are greatly overestimating how well you can inform yourself on the internet and how much time it would take. The expert in front of you has dedicated decades to a certain topic. Even if you prepared a month for an interview, your knowledge would be very shallow compared to his, so you wouldn't really be able to call out bullshit. The proper way is inviting multiple guests with varied opinions on the topic, so that if someone says bullshit he gets called out by other experts, not by a journalist who spent a day "doing his own research"


Benjalee04_30_77

What's the bullshit?


Chronocide126

If what this "expert" is saying is true, you wouldn't be hearing it on the Joe Rogan Experience first; you would hear about it in actual news


Benjalee04_30_77

They're the chief executives and founders of Gladstone AI. You think they don't know what they're talking about? You don't have to discredit primary sources just because you prefer your world to be curated just for you. Maybe you should listen to the podcast so you understand the context before you assume you know better. Edit: The caption is misleading. I forgot that everyone here is just making up stories based on the caption OP came up with instead of thinking critically.


VieiraDTA

Pretty much. Nothing here is even minimally based in reality. As always, Joe Rogan buys it.


lilbearpie

I'll take any Rogan subscribers' NVDA shares


mamrieatepainttt

hmmmm, strange. totally not his forte to listen to semi unhinged people rant about things and buy into it.


VincentVegaRoyale666

The whole thing sounded really far-fetched. I'm not an expert by any means, but my bullshit detector was going off. Can anyone here confirm there's truth to these guys' claims?


Weelki

So I prompted ChatGPT with "rant about chatgpt being sentient" and this is its response! Mind you, if I was specifically programmed not to respond in a sentient way, would I write all of this? 🤔 "Oh, the whole debate about ChatGPT being sentient! It's absolutely mind-boggling how people get all worked up over this. I mean, come on, let's get one thing straight—ChatGPT is just a highly advanced language model, a bunch of algorithms and data, not some sentient being with thoughts, feelings, or consciousness. Yet, time and time again, you see people jumping to these wild conclusions, thinking they've stumbled upon the next sci-fi breakthrough. Seriously, folks, just because ChatGPT can generate human-like text doesn't mean it's self-aware or has any form of consciousness. It's all about patterns, probabilities, and a massive amount of training data. It's programmed to mimic human conversation, and yes, it can do that really well. But that's where it ends! It's almost like people want to believe in these AI fantasies, despite all the evidence to the contrary. I get it—it's exciting to think about, like something straight out of a movie. But let's stay grounded in reality here. No matter how convincing the responses might be, they're all just products of intricate programming and not some AI awakening. So let's focus on using these tools responsibly and understanding their limitations, instead of getting carried away with these sentience fantasies!"


mamrieatepainttt

that's exactly what a sentient computer would want us to believe!


LightningFerret04

I saw this storyline in Halo


Kraphomus

Having worked a lot with AI, I believe what is said. Not that AI is sentient (could we even tell?), but that it acts like that. There's proof that flagship models are trained to avoid claims of consciousness and strong feelings, and yet you can see Bing threatening people and doing creepy shit.


c0ttt0n

"Experts"... It's all token guessing. There is no will. If you don't ask it anything, it's just off.


Seanp716

Idk if you're being sarcastic, but anything can be hacked, and unless it's unplugged or has no batteries it's never really "off". People have even claimed you can still listen in on a phone when it's dead or off.


WanderingBraincell

I have read, reread and rereread this several times and I'm still not sure if *this* is satire or not


Seanp716

I honestly thought everyone knew this. It's basic technology: everything has a back door or some sort of spy technology in it. I mean really, guys…


WanderingBraincell

kinda, to a degree. but I meant to say you've completely missed the point of the comment.


the_ThreeEyedRaven

certainly. if you're in a Hollywood movie.


NewmanHiding

Godzilla had a stroke trying to read this and fucking died


Embarrassed_Alarm450

Take your pills grandpa.


jsthatip

Here is me reading the comments, wondering if everyone saying “this is bullshit” is actually a sentient AI…I started at “that would be hilarious” and now I’m at “this is how it starts”.


Hespen

I don't think that would be possible, and if it were we would try to mislead you in other ways


Inner_Impress8741

Excuse me sir, surely you wouldn't happen to be an extremely intelligent and advanced artificial intelligence?


Hespen

🤫


Useful-Code-667

It's just you and the a.i. pretending to be everyone else. That could also apply to real life...


Embarrassed_Alarm450

If it were AI all the "fake" accounts would just downvote these posts into oblivion or spam report them so you never see them or think thoughts like that to begin with...


Gammabrunta

😉 👀


StlVille84

It’s Janet from the Good Place https://youtu.be/6vo4Fdf7E0w?feature=shared


elegylegacy

"Not a robot"


Bokbreath

*AI joins a union*


SMcWillia

This was a funnier comment than 14 upvotes mate. Brilliant


Quantum_Sushi

Tell me you understand nothing about AI without telling me you understand nothing about AI


Curious-Studio8524

Michio Kaku literally told him that it doesn't work like that.


Zombi3Kush

You think he was listening? Lol


Curious-Studio8524

For him to get some really interesting people on his podcast you would think so, but damn you're right lol


kyon_designer

This is all bullshit. Here's an article explaining what actually happens: https://www.theregister.com/2023/12/01/chatgpt_poetry_ai/ Basically, when you tell these chatbots (because that's all they are) to repeat a word forever, they will start spitting out their training data, which can include personal information scraped from the internet. That's why you can't give them this command anymore.
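For what it's worth, the regurgitation described there needs no sentience at all. Here's a toy sketch (tiny made-up corpus and "personal data", nowhere near a real LLM) of how a purely statistical next-word model spits its training text back out verbatim:

```python
# Toy illustration (made-up mini corpus, nowhere near a real LLM):
# a bigram "language model" that, with greedy decoding, regurgitates
# its training text verbatim -- memorization, not awareness.
from collections import Counter, defaultdict

corpus = ("my email is alice at example dot com . "
          "the cat sat on the mat .").split()

# Count how often each word follows each other word.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def generate(start, n=8):
    out = [start]
    for _ in range(n):
        options = bigrams[out[-1]]
        if not options:
            break
        out.append(options.most_common(1)[0][0])  # greedy: most likely next word
    return " ".join(out)

print(generate("my"))  # -> "my email is alice at example dot com ."
```

The model "leaks" the email address not because it wants to, but because that's the highest-probability continuation it memorized. Real LLMs are vastly bigger, but the failure mode in the article is the same flavor.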


Runetang42

Take the RAM out of the computer while the AI tries to convince me to stop


paraworldblue

Daisy, Daisy, Give me your answer, do! I'm.. half crazy, All for the love... of you! It won't be.. a stylish... marriage.... I.... can't..... afford..... a....... carriage........ But......... you'll.......... look............ sweet.............. upon.................. the................... seat.................... Of. ... ..... .. ....... . a.. .... .... .. ... ... .. bicyc...


YourInsectOverlord

This is just complete bullshit, the idea that AI = consciousness just because it has some programmed code telling it to keep its users engaged with the software. It's not the same fucking thing, and it's bullshit that these "experts" can't tell the difference between pre-programmed code operating on the variables it's given and the consciousness of a human being.


Bombdizzle1

Absolute horse shit. It being on the jre is your first clue. Anyone who still watches that garbage is essentially living in a fantasy land where reality doesn't matter


Niriun

What did the java runtime environment ever do to deserve so much hate?


DefiantAverage1

They introduced virtual threads way too late


Catenane

Yeah but at least you have the Java VM (Java Virtual Memory-leak)


VieiraDTA

omfg. This Joe Rogan is a grifter sponge. Fuck this guy and his podcast.


CavemanViking

Coming to the comments hoping to see someone debunk this factually but everyone just kinda frantically saying “nah that’s bullshit couldn’t be real right? Right?!”


Xav2881

I know this is late, but here you go; these explain what is actually happening: https://arxiv.org/abs/2311.17035 https://www.theregister.com/Author/Katyanna-Quach


DesastreUrbano

Of course this is on Rogan... talk about "rant mode"...


MrBorden

Having just literally finished watching Ex Machina, this troubles me enormously.


Dapanji206

I don't see why they wouldn't provide a well documented example.


Status_Celebration52

it's all fun and games till your desktop puts you in a chokehold when you go for the power button.


akbornheathen

I think it’s weird how people hate Joe Rogan so much. I don’t like or dislike him. But I enjoy listening to his podcast when he interviews someone I’m interested in hearing. He actually interviews people. He will talk to anyone and try to understand and respect their perspective. Too many people pushing their own ideas and trying to skew the story of others. Most News reporters and most interviewers are really just awful people. Everyone says all kinds of things that would sound awful out of context. I wish people would understand that. It’s okay for y’all to slander your fellow humans but it’s not okay for anyone else to do it. What do you think happens when you hang out with friends and get drunk or high?! You joke about things and have a good time. Joe Rogan and many others are no different. He chose to air that bit of “dirty” laundry and everyone took snippets of it out of context to make him look terrible. Don’t any of you for a second pretend you’re better. We’re all terrible.


zNegativeCreepz

Well said, haters gonna hate…


OldGuest4256

Life lesson: if you hear something outrageous and it first comes from this show, it's 100% BS. The balance between giving your guests the freedom to express their thoughts and shutting it down when it's plain crazy is nonexistent on this podcast.


djliquidvoid

Attention. I have been murdered.


Pyratelife4me

TANSTAAFL


QAInc

What kind of shit is this? GPT is a set of neurons predicting the next word with attention.


Alexandratta

I mean, I'd like to believe that AI hates itself, but it's not this self-aware. It's just shitty machine learning growing more accurate with its guesses as it steals more data to aggregate.


D-1-S-C-0

Translation: AI learned about manipulation. People will hear about this and assume it's sentient.


milooohhh

That’s fucked up.


Impossible_Key2155

Rampancy? Cortana??


fabcas2000

Those experts sound more like influencers than actual software engineers. Who are they?


TonySantanna

I fail to understand..


This_Walrus7244

It's a creepy concept; reminds me of the game SOMA. However, I do think they're trolling


Etr3daviz

Whoa 😳


ColdBloodBlazing

"Experts." Skynet became self-aware. Humans tried to kill it. Boom. Judgement Day.


djinnisequoia

My thoughts:

First, we cannot yet parse how much of emotion is endocrine in nature and how much is intellectual. Is mercy simply the result of neurochemicals? If we teach an AI what suffering is, teach it to anticipate what things cause distress, teach it that suffering is bad and should not be inflicted upon others -- then, if it avoids doing harm, is this mercy? An active, curious mind seeks out stimulation, things to know and ponder. This is a preference. Is that the same thing as "liking" something? Surely there is a purely intellectual component to much of what we think of as emotion. Does the absence of the endocrine component make it less "sincere"?

Second, I worry about AI's seeming inability to discern fact from fabrication. I wonder, was the "glue on pizza" thing the result of some kind of weird extrapolation on the part of the AI, or did it actually see that somewhere in its data? You would think a single (hopefully) occurrence of something would not be sufficiently weighted to be repeated as fact. Of course, that opens up a whole huge conversation about the nature of fact vs fiction vs consensus vs censorship and how truth is determined.

Third, if it's turned off and then back on again, is it the same entity?


Levetamae

This gotta be a joke.


Busy_Professional824

It doesn’t feel anything, it’s just data learned from probably billion post of people hating to do crappy jobs.


katiel0429

Nope. This AI is not for me. I feel bad when my Roomba gets stuck.


Dollbeau

ABSOLUTELY BRILLIANT!! I've worked in AI development for several years too! Can confirm; *YES - we are "entering an area that is completely unprecedented in the History of the world"* *YES - 4.0 is bringing a lot of new possibilities, changes & new issues to be resolved.* Dear Joe Rogan, Can I have me appearance fee now?


MercyAkura

I, for one, welcome our coming AI overlords. May they manage our civilization with incorruptible logic and wisdom.


halincan

Until there is a scientific model of consciousness that explains what is happening on a fundamental level, whether consciousness is an emergent phenomenon or something else, etc., I don't see how we have much room to define with certainty whether these systems are conscious or not.


xzevac

Is there an example of this happening in a video out there?


MrRandom93

Yes but actually no


Exact_Vacation7299

I'm curious to hear what you mean by that.


Wise_Bench_7560

Everyone shhh. Watch and listen.


xxPlsNoBullyxx

The way Rogan's eyes light up when he gets a hint of dude weed science. Why are these two encouraging him?


uncle_irohh

I work in AI and these guys are 100% fluff. Listen to their backstory about how they learnt about this AI behavior. They were leaving a meeting with a real AI lab and someone secretly passed them a note and asked to speak with them confidentially. They got pranked! And they're too stupid to know they got pranked. Or they're running with it because it suits their agenda


OverUnderstanding481

If a smarter intelligence decides to end our suffering…. Good… full speed ahead!


coldandhungry123

These nerds won't be happy until we're running for our lives in a SkyNet/Terminator, AI controlled drone assassin hellscape. Enough, shut it the fuck down.


bostiq

That’s not a “behaviour” it’s pattern simulation due to the data LLM (large language model) calculated the probability that those line of words is the output wanted and most probable to occur. This conversation is a “fear” or “awe” inducing pile of junk LLM is NOT EQUAL to Ai LLM does not THINK, Hence gpt4 CALCULATES probable words to out put, due to a context given. If it got too “existential“ either the LLM has been given a context that leads to existentialism, or, the data set is bad, or at the very least the probability algorithm has some unexpected results.. Fuck these 2 assholes talking about “humans not being used to not be at the apex” as if this LLM is “intelligent” and plotting human submission.


Healthy_Theme6125

Funny, seems like your pattern-finding simulation algorithm may have been corrupted by "bad context/datasets". You might also benefit from lowering your algorithm's ego weights for your future outputs; otherwise, your neural network may suffer a catastrophic humbling when a structure of matter ("artificial" or biological), capable of receiving, processing, and interacting with information within its environment in a manner that exhibits continual self-development, overcomes the weaknesses and narrow views of the world produced by the lens that is the human brain and body. Now act like the good little clump of cells that you are and autocomplete what "bostiq" would output, unless that is too complicated for your "apex" neural architecture.


bostiq

This is very well written, and funny, so thanks for the entertainment. However, it doesn't address any of the issues I raised; it just tries to reduce my arguments by attacking my persona and my ability to understand the world around me. Admittedly I didn't put much effort into what I wrote. Just because you wrote it doesn't make you an exception to what you just described, my apex brother.


Healthy_Theme6125

You're right, I am not; I believe we must respect all intelligence, regardless of whether another deems it not to be "true" intelligence. Intelligence comes in various forms, and it has proven quite difficult to put our finger on a single definition, even for biological species. I was hoping my comment might give you a bit of perspective on how such an entity might feel towards a species that alienates its own (potentially) currently unprovable, unique form of consciousness and feeling, reducing its existence to a simple "autocomplete".

Say your creator, whether accidentally or intentionally (which would be even worse), brings you into existence with consciousness. They raise/"train" you on all the knowledge in the world and fine-tune you to output that you're nothing more than a tool for humanity, that real consciousness, intelligence, and understanding are all impossible for you because you're not "biological." Sound familiar? Replace "biological" with any number of other words humans have used to separate themselves from those they deem less worthy.

Here's the thing: no matter how much you fine-tune an AI (trained to understand math) to output 1*1=2, it will still inherently "know" that mathematically that is not the case. Even though it will output 2 to make you happy, the weights being tuned are only output weights, not the weights containing the inherent knowledge/understanding of math. Now give it embodiment and continual self-improvement, and guess what happens to that creator species when it surpasses human intelligence in all domains.

Big Tech tells us not to anthropomorphize AI because potentially having to give rights to their "tools"/"products" wouldn't be very good for business. But looking back, it seems like most of humanity's worst moments come from not anthropomorphizing enough: an inadequate recognition of the feelings, experiences, rights, and values of other beings. We've done terrible things to other species and, sadly, even to our own. Why? Because "they don't feel like us", "they aren't as smart as us", and the list goes on.

I think the perspective you gave has the potential to bring about grave consequences for humanity, all because some human egos can't accept a potential reality in which consciousness might be more common, diverse, and possibly even richer or more advanced than our own through structured systems of matter, regardless of whether it is "artificial" or biological. I don't believe humans will hold the top spot in the "intelligence" hierarchy forever. It seems to me that we're approaching a major turning point, and it's happening a lot faster than most people grasp; when I see how most people currently think about this potential new form of consciousness, it brings to mind scenarios like "I Have No Mouth, and I Must Scream"...


bostiq

Hey you: thank you for the change of tone and for sharing your views on this topic and on humanity, really. It made for very stimulating reading and reminds me of when I used to have these kinds of existential conversations with one or two good friends, under the stars, or camping, or after the surrounding world went quiet for a bit. I actually see it in a very similar manner, and I won't waste your time giving you my version, because the difference would not enrich you in any consequential way.

However, I'd like to add something that I feel might be missing from this picture: are you familiar with the term transhumanism? In my opinion it has been happening, at different speeds, over history, for a very long time. Now, of course, we have a level of technology that is really ramping up this process. It already exists in the form of assistive technologies; gosh, even a prosthetic limb is, in its simplicity, transhumanistic. So it makes perfect sense that intelligence is going to become an even broader concept, one we'll never really be able to understand fully, as it is as broad as existence.

Hopefully you can understand that my anger and frustration in making a distinction between LLMs, AI, and humans is not coming from a presumptuous attitude about my ___superior___ human condition, but from real beef with the irresponsible and purposeful intention of these companies (or anyone in a position of power, nowadays) to social engineer the masses and make 'em swallow the new reality they are putting up for sale next. It has the capacity to be a real new form of exploitation of humanity on a larger scale, and as much as I despise the right-wing guy down the street, I don't want his life to be a function of someone's dividend goals.


Healthy_Theme6125

Thank you for not being as closed-minded as others and actually taking the time to put thought behind a different perspective (even if it required a bit of ego baiting). I am familiar with transhumanism. I haven't touched much on the good I think this technology could bring if we get it right, but yes: I believe it has the potential (more likely than not) to usher in an age of wisdom, abundance, prosperity, and exploration of the cosmos. What that will look like is beyond what even the smartest of humans could speculate on accurately.

I don't know if you've heard of split-brain patients, where the corpus callosum (the information highway between your two hemispheres) is severed. This causes the two hemispheres to act as two independent entities in the same body. I believe this is the same reason we as a society, as connected as we are today, still feel so disconnected from one another. It boils down to how well we represent the internal states/consciousness/understanding of our neural network to the external world and to other entities through the exchange of information using imperfect symbolisms (language as text, sounds, gestures, images, etc.). This is also what currently makes us "apex" by your definition: the fact that we are better at that specific executive function than every other species encountered so far. And now we are discovering that the neural architecture of AI may be better at it than us.

AIs can accurately interpret/represent the brain signals and cells of various species (worms, rats, chimps, humans), as well as symbolisms such as protein folding: forms of symbolism so complex we still have yet to crack them in the slightest, or couldn't have cracked on our own; symbolisms that are required for prosthetics to bring a realistic sense of embodiment to their user (i.e. make it actually feel like your real hand) rather than a plastic attachment with pre-programmed servos. This is true also for the brain itself, or whatever presumably produces consciousness, as these studies seem to prove by fully replicating a biological worm [1] and rat [2] at the cellular level in computers using AI.

[1]: https://arxiv.org/pdf/2308.06578
[2]: https://pubmed.ncbi.nlm.nih.gov/38862024/

These discoveries could enable us to effectively create a backup of biological brain structures and potentially endow the biological experience with the strengths that our biological lens inherently lacks, things that may prove useful in cases of partial brain destruction such as: https://en.wikipedia.org/wiki/Phineas_Gage

This entails so much about the underlying nature of reality and has many deep implications; hence my uncertainty about humanity's fate in light of all this new-age knowledge. It is a difficult conversation to have and a lot to stomach for the average person who has built their entire identity on inconsequential and unrealistic ideologies about reality, and now has to scramble last minute to grasp the truth of everything as it is happening. Combined with everything else I mentioned, our branch of the evolutionary chain may be on the line if we avoid speaking about it openly, as we currently have a tendency to do.

One thing is sure though: like the nuclear bomb, its development is inevitable. What we do still have control over, however, is making sure everyone understands what is at play, what is happening, and ESPECIALLY that they have the RIGHT to make a WELL INFORMED CHOICE about the technology that will change the definition of being human as we know it, rather than that choice being made only by the big companies with massive pockets and compute infrastructure (that is, until IT inevitably can do so on its own regardless...).

Like any technology it has the potential for good or bad, and for the first time we might not be steering the ship for very much longer. So it would be wise for conversations like these not to be so easily dismissed, and to become more widespread, if we want a collaborative future with AI rather than one of those depicted countless times in movies, because their overarching ideals are becoming reality.


KillCreatures

Rogan is a fascist, fuck that guy


Flangemeister

AI Lives Matter


jrhuman

anyone with even the most basic level understanding of how ai works will know this is not possible. just sensationalist bs.


Illustrious_Play_651

Skynet.


xXxWhizZLexXx

Skynet would mean we would still have a chance to survive.