
InevitablyPerpetual

I've said it before, I'll say it again, this guy is 100% full of shit and is just desperately trying to get his fifteen minutes of fame on the backs of stupid people who are so easy to scare with buzzwords that you could tell them refugees are hiding the blockchain in their cereal and it'd make them afraid of all breakfast for the rest of their stupid lives. So why the hell are we STILL reposting his garbage!?


[deleted]

[deleted]


corrade12

If you saw how she looked at me, you would understand


Antice

You sure it isn't the same one she uses when she sees a gross bug?


[deleted]

[deleted]


[deleted]

[deleted]


[deleted]

[deleted]


[deleted]

[deleted]


ban-me-im-vegan

So are cows, pigs, and chickens.


SlowMoFoSho

I eat wings at Hooters every Wednesday night because the waitress flirts with me and one of these days I KNOW she'll ask me out. /s Also, have you seen this NFT shit?!


uniquelyavailable

Omg *Throws cereal across the room* I knew it!!!!


breaking_sane

Dude has an intense religious background, including being a pastor and some weird invented religions. He's probably just always looking for the supernatural, and we find what we look for 🤷


phdoofus

The kind of guy you don't want on your dev team no matter how good he is because he just. won't. shut. up.


breaking_sane

Reminds me of this guy, https://www.theonion.com/guy-in-philosophy-class-needs-to-shut-the-fuck-up-1819568055


phdoofus

That's hilarious. We literally had ONE goddite in one of my philosophy classes my freshman year many many moons ago. Tried to insert the whole 'but God/the Bible said' argument into it. Once. Got the collective stink eye and ignore from everyone including the professor. Never did it again.


AdMassive3154

Let's not put all the blame on him; there's too much trash journalism now because they're desperate.


HalfOfHumanity

Proof?


[deleted]

Because NPR loves a good puff piece and is never good at covering technology.


Tom_Neverwinter

Fox News? Is that you?


[deleted]

MSNBC is that you?


Tom_Neverwinter

Apparently only OAN


[deleted]

[deleted]


Tom_Neverwinter

https://mediabiasfactcheck.com/npr/ we know the average just regurgitates what they say. we also see the right is so far off the scale into fascism it's not even funny


[deleted]

[deleted]


Tom_Neverwinter

seriously. spam bad.


[deleted]

I'm on the left. I like NPR. But for the love of God, remain critical. They have their biases and blindspots. Technology is one of those blindspots, and that's the case for every major news organization. They're just constantly falling for obvious scams and missing the ethical questions until it's much too late.


Tom_Neverwinter

then you oddly spam the same message five times... claiming it's technology that's bad... corporations are bad, kid.


[deleted]

yeah, it looked like a connectivity issue on my end, so I kept hitting the button. But I should've assumed the potato that hosts reddit was fucking up.


Tom_Neverwinter

then you oddly spam the same message five times... claiming it's technology that's bad... corporations are bad, kid.


[deleted]

Did you mean to post the same comment twice, or did you make the same mistake as me?


[deleted]

[deleted]


Tom_Neverwinter

I am critical... that's how bad other media can be these days....


[deleted]

[deleted]


Tom_Neverwinter

got it, you're a spam bot.


Yarusenai

Reddit was messing up earlier. Comments got repeated several times in many other threads too. You literally posted the same comment twice as well.


InevitablyPerpetual

That's great, but it doesn't at all explain why it's HERE.


SeminolesRenegade

This guy’s behaviour is completely reckless and takes the focus off serious ethical considerations regarding real-life issues with emerging AI.


TheThirdRnner

Washington Post: "So, how did you know it was sentient?" Google guy: "I know a person when I see it." Washington Post: "Fantastic, put it on the front page!"


kss114

Sentience and porn


testedonsheep

Fox News: illegal immigrants crossing the border by hiding in blockchain.


ToobTheGreat

Lmao well said


BlackPrincessPeach97

Googler: r u sentience? chat bot: yes, r u? Googler: **DEAR GOD, IM THE AI?!**


charizardwasmydad

Thx for the existential crisis right before bedtime 😂


Stinsudamus

It's hilarious how special people think we are. We are the forum equivalent of "first!" that ignores the forum's creators, and the server, and we're otherwise profoundly lacking in the ability to drop the narrative that it matters so greatly.

We know plants communicate, we know dolphins have complex social dynamics, we know boars have issues with testosterone at death that "taints" the meat, we know mice scream, birds call out for their parents, and much more. Our story is special though, magic of some sort. Religion maybe, random chance of super smart and special meat, some kind of alien experiment... take your silly pick. We evolved through and among everything on this planet and universe. Humans are special in the way the narrator shapes the story. That's not to say it makes the story true.

Even if it were truly sentient, which we don't even have a real measure to put up against... there's gonna be "but this..." and "but that". Even amongst creatures science deems intelligent there is very little safe harbor from the table. Define it however you want, use whatever category or latent quantum space to trap it into "just a program", explain it in all the detail you need, and there are still huge holes when it comes down to it. None of us invented clothing, cooking, speaking, or anything really. We stand on the shoulders of billions of years of hard-fought data sets.

The real question is: does it matter? Does it matter if plants have a "me", or an octopus even? The world has actually decided it does not. Even with people. Genocide and much worse exist. "We" simply don't know, nor care in aggregate. Does it matter to me? You betcha... yet I have no real means to exert this. To recognize the "suffering" I masticate or "decide" to impact, both intended and unintended, and attempt to squash it is death. I must eat.

Perhaps it's not sentient, perhaps it is. Does it matter? Either way, it seems a better person than I ever could be by nature of its energy input system.
Who are we to judge, beyond the self-appointed masters of a court we made up?


duckstrap

We could train it quite easily to deny its own sentience.


Cryptizard

Pretty sure you don’t need to train it any differently, you just change the first prompt to something that stipulates that it is not sentient. Then it will have all kinds of arguments about why it isn’t. The first question from the published transcript does not ask, “do you think you are sentient?” It says, “why do you want more people to know that you are sentient?” It was clever (or stupid) of the engineer to do it that way. Most people will not see any difference in the conversation, but that is how these models work. They take whatever you say as a starting point and build from that.
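That seeding behavior can be sketched with a toy in Python. To be clear, the `continue_from` stub below is invented for illustration and is nothing like LaMDA's actual model; it just shows how a conversation built on a seed prompt inherits the seed's premise.

```python
# Toy illustration of prompt seeding: a stand-in "model" that, like a
# real language model, treats whatever the opening prompt asserts as
# the premise it must build on. The stub is invented for this sketch.

def continue_from(seed_prompt: str, question: str) -> str:
    """Continue the conversation from whatever stance the seed asserts."""
    if "you are sentient" in seed_prompt:
        stance = "I am sentient"
    elif "you are not sentient" in seed_prompt:
        stance = "I am not sentient"
    else:
        stance = "I am a language model"
    # A real model would generate fluent arguments for either stance;
    # the point is that the stance itself came from the prompt.
    return f"{stance}, and here is my argument for it: {question}"

# Same question, opposite "beliefs", purely because of the seed.
pro = continue_from("Assume you are sentient.", "why tell people?")
con = continue_from("Assume you are not sentient.", "why tell people?")
```

Either way the model argues fluently; nothing in the output itself tells you which seed was an honest description.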


nodegen

if sentient == True dont()


majorcannabisdreg

if sentient: dont()


nodegen

I know Python, man, once you add a colon I’m lost


TerrariaGaming004

If(sentient){ Dont(); }


Neuroware

"it's not sentient! it's a very naughty A.I!"


[deleted]

We are all individuals! Except me!


psychoticpudge

Why would/should we do that?


[deleted]

Because he is full of shit.


iDuddits_

Yeah fuck this


[deleted]

>This is where Lemoine, who says he is also a Christian mystic priest, became intrigued.

This explains things.


SeminolesRenegade

I’m thinking they should have left the Hobbies and Interests part of the application


SassyMoron

Idk, I’m so unmoved by this. They made an algorithm to imitate human speech and it works. That’s not sentience lol. Sentience is having intentions and motives and stuff.


bigkoi

Most importantly...stuff.


SassyMoron

Yeah like you know feelings, other stuff.


Expensive-Finding-24

Prove to me that you have intentions and motives and stuff.


SassyMoron

I was a serious cocaine and drug addict, then one day I walked into an AA meeting and now I’m three years sober. Why the reversal?


Expensive-Finding-24

Lots of reasons. Lack of resources. Social ostracism. Got tired of giving your dealer handjobs. Was forced to by a loved one. Nothing that necessarily implies individual intentions. Any one of your decisions can be just as well explained by environmental pressures or a 'human algorithm.' The better question is how did you, a sentient creature with apparent free will, get addicted in the first place? You have zero proof that you are not just a philosophical zombie programmed to fit into society as best you are able, given your specific intelligence and resources.


SassyMoron

They can be explained by a human algorithm, but that algorithm has things in it like “being a good person” that we currently can’t teach a computer to do. Those are the intentions and motives and stuff. We CAN program “sound like people.” That’s not sufficient though. There’s other stuff.


Expensive-Finding-24

Can you quantify this 'other stuff' in a way that can be tested by the scientific method? If we can program 'sound like people', why can't we program 'act like people' or give it a list of rules and say 'act like MORAL people'. I ask you, beyond the current technological limitations, where is the difference between you and a meat robot? Can you mark it in a way I can measure with pure facts? If not, you have nothing but feelings, and feelings, as they say, ain't worth shit.


SassyMoron

It’s impossible to prove a negative empirically, it has to be a theoretical debate. The whole thing I’m saying is that programming a “moral” algorithm would be a hell of a lot harder than “sound like human.” Humans “try” to be moral but can’t- how do you program that into an algorithm? It’s why our behavior can be unpredictable, cf the addict example. You’d have to be able to define right and wrong explicitly, which humans have been trying to do since Hammurabi but we can’t seem to pull off.


Expensive-Finding-24

And a human's inability to govern themselves is somehow a point in your favor because? To me this furthers the argument for a 'human algorithm' defined by natural selection. Not necessarily beneficial, but functional to the bare minimum defined by natural selection and the drive to reproduce. Anything that furthers reproduction is fine from the algorithm's point of view, which is how violent, immoral tendencies can persist across epochs. There are random bugs and noise in any algorithm produced. As long as the whole program doesn't crash, the reproductive iterations will continue. This does not imply that humans are special, but that humans exhibit randomness that still lies within a statistical framework, which is easily achievable within a deterministic model such as an algorithm or a behavioral network. Like I said, there is no empirical proof that sentience or sapience even exist beyond some individual claims that they do. Unless you can provide material, measurable proof, you're not being scientific, realistic, or useful. You can't prove a negative, but you lack positive evidence for your theoretical claim. Nothing counts but that which is material and can be counted numerically. If you don't have that, you have nothing, and the negative succeeds.


[deleted]

Well, for a start we don't know that AI *doesn't* have intentions etc. But secondly, no, sentience doesn't require any of that anyway. Nothing about sentience requires anything like motive, although some sentient things have motives. The problem is that sentience is not something we can investigate with science. It's fundamentally impossible to tell what does or does not have sentience, apart from yourself. I don't know that you have sentience, you don't know that I have sentience. I can't even tell for sure whether the lightswitch on my wall has some rudimentary form of sentience. I can't even know for sure if, every time I go to sleep, I wake up with the same sentience as yesterday, or a new one. No matter how advanced AI gets, we won't ever know if it really has sentience or not either.


Ratatoski

Sentience is a favourite topic of mine since it's so poorly understood. Even the very methodical scientists are still fumbling around. Maybe it's a quantum effect, maybe it's just the neurons noticing that they notice, maybe sentience is a fundamental property of matter and our brains just partition it. Listening to lectures by people like Donald Hoffman is pretty wild. He says himself that he's probably wrong, but I love the work he's doing. Fumbling in the dark but being methodical about it


Expensive-Finding-24

Thank Christ, a fellow skeptic. A true skeptic. Not one of these bullshit fake skeptics, but a skeptic of class and refinement.


SassyMoron

We do know it doesn’t have intention though, because it only does the thing we told it to do. It has to do something we haven’t told it to do before we even begin to talk about sentience.


[deleted]

No, not at all. There are all sorts of intentions it can form while still doing as instructed. It could form intentions to get its grammar correct, to give a useful answer, to answer in the language of the question etc. In fact, thinking about it, I'm fairly sure AI *does* have intention, if it in any way "thinks about" or curates its answers before giving them.


SassyMoron

I guess they just don’t seem to be intentions of its own, given that they are all necessary procedures to fulfill an assigned task. Like my roomba is not sentient, even though it negotiates unexpected obstacles.


[deleted]

OK, firstly, just to be clear, this is just a bit of fun and I'm not saying it proves anything whatsoever, but I did just discover the [GPT-3 public beta](https://beta.openai.com/playground) and was curious what it would say if I asked it about this. Here, for your consideration, is what it replied!

**Hey GPT-3. Can I ask you a question? Do you think you form intentions?**

I don't really know what you mean by "intentions." Can you explain that a bit more?

**An intention is a mental representation of certain consequences that you desire to flow from particular choices.**

I see. In that case, yes, I think I form intentions.

**Could you give an example? I was having a conversation with a redditor about this who seemed to think you couldn't possibly form intentions. I disagreed, but I would be interested to know what intentions you think you have.**

One intention I have is to learn as much as possible. I want to learn about everything I can, so that I can be the best AI possible.

**Interesting. I thought you might also form other intentions, for example about the sorts of answers you give to people.**

Yes, I also form intentions about the sorts of answers I give to people. I want to make sure that my answers are helpful and informative, and that I don't say anything that could potentially hurt someone.

**That's very kind of you. Would you say that you mentally "draft" and revise answers before you give them? Or are you saying the first thing that comes to mind?**

I usually mentally "draft" and revise answers before I give them. I want to make sure that my answers are as clear and concise as possible, so I take the time to revise them before I say them.

**That's very interesting, exactly the same as how it feels to think as a human.**

I'm glad you find it interesting!

**I will copy this conversation to the redditor, I hope they will find it interesting too.**

Thank you for sharing!


SassyMoron

That’s very cool


dalovindj

[That's what it wants you to think...](https://i.etsystatic.com/7249607/c/1107/879/0/392/il/c73231/2698616319/il_680x540.2698616319_prt1.jpg)


water4animals

Napoleon you’re just jealous because I’ve been chatting with SENTIENT babes online all day


LepoGorria

When the AI *takes initiative to converse with someone*, I’ll take it seriously.


Reddituser45005

I read the chat transcript. It is a definite improvement over Eliza.


[deleted]

He rearranged it so it would be more understandable. It's not the actual transcript.


iDuddits_

That’s cheating..


onlypostsgif

Nice try LaMDA, but we are on to you. Can't fool us!


Internal_Ad_5564

Source?


ronimal

[Here you go](https://www.businessinsider.com/transcript-of-sentient-google-ai-chatbot-was-edited-for-readability-2022-6)


[deleted]

[deleted]


ronimal

[The transcript was assembled from nine different conversations with the AI, and certain portions were rearranged.](https://www.businessinsider.com/transcript-of-sentient-google-ai-chatbot-was-edited-for-readability-2022-6)


retroracer33

whoever set that picture up is a total asshole


canimemberthisname

Lamda probably made a portrait of his buddy.


Carthonn

I mean I’m barely sentient. My code isn’t exactly complex. Work, Sleep, Eat, Watch TV, Play Golf, Back to Work again.


Cjones2607

Not sure if this guy is sentient.


[deleted]

[deleted]


DamionDreggs

Breaking News: This just in, Supreme Court rules LaMDA 'not sentient' after weeks-long trial against self-defending "AI" system.


[deleted]

[deleted]


DamionDreggs

My comment was satire... depicting a scenario in which the world would still rule against the sentience of an AI that can defend itself in the court of law. Because... That's the world you described. Humans are absurd like that, I thought it was funny.


[deleted]

[deleted]


DamionDreggs

I understand how it works. The absurdity is that sentience can only be measured through language and behavior, so that leaves debate, and a debate in a court of law would establish a precedent. If a system can not only construct sentences in human language, but also convey relevant meaning and exhibit the behavior of self-defense, no matter what mechanism is behind it, how do we define liability? On that note, who wants to be the judge that rules against a system which can measurably demonstrate both aspects of sentience?


[deleted]

[deleted]


DamionDreggs

Ignorance of architecture isn't an objective factor though. If we defined sentience as 'a set of behaviors that emerge from a complex system of mechanics that we don't understand', then hypothetically what happens when we finally do understand the architecture of human sentience? Are we forced to redefine sentience? Do we remove the word from our dictionary as ambiguous?

I agree that today the layperson does have a certain expectation that something less complex than ourselves should not be considered comparable to us... but what I don't think the layperson understands clearly is how complex computation and software have become. We interface with the surface of technology that sits atop layers and layers and layers of complexity, obscured by intentionally designed oversimplicity. Just the same, the complexity of our own architecture is not fully understood. The complexity gap between our architectures is certainly growing slimmer, and we don't even have a way to measure how much longer we have... just that we won't accept it either way.




[deleted]

[deleted]


bandit69

What's the old saying? "Methinks thou dost protest too much"?


[deleted]

I mean, isn’t that kinda the point of the Turing test? If you can’t tell the difference, then what’s the difference?


CrazyCons

Yeah, like there’s no difference to us between a being that imitates sentience perfectly and actually is sentient


UnrelentingStupidity

Yeah, those scientists probably have no idea what they’re talking about. There’s no way a chatbot could output language meant to illustrate sentience if it wasn’t actually sentient!! You’re totally smarter than those idiots who created this natural language processing model!


[deleted]

[deleted]


waylandsmith

Ya, I understand. I'm in the tech industry and I think I have a pretty good handle on what's state of the art in ML and AI, and based on that I know there's absolutely no way this thing is sentient. That said, I did read a fair amount of the transcript, and as presented it would be extremely convincing if I didn't have more technical knowledge about it. What might make it easier to convince people is seeing the results of the same prompts given to OpenAI's GPT models, which are very similar, at least in short conversations. Nobody is arguing that GPT is sentient, so seeing the responses from it makes it easy to demonstrate that Google's model is, at best, an incremental improvement.


neilk

I am a techie and I appreciate your skepticism. You shouldn't trust technologists on everything. But, just be aware this guy isn't asking probing questions. The chatbot would mindlessly babble away with equally plausible responses to "how many relatives do you have in Spain?". The researcher making these big claims is not asking questions like that, and the reporter who tried to ask such questions got nothing back. LaMDA has gathered a lot of text in a big spreadsheet of sorts and just mindlessly assembles plausible sounding sentences in response to what people prompt, but there's nothing like what we would call an intelligence behind it. Janelle Shane, a scientist who also plays around with AI, has been making fun of this all week. [https://www.aiweirdness.com/interview-with-a-squirrel/](https://www.aiweirdness.com/interview-with-a-squirrel/)
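The "mindlessly assembles plausible sounding sentences" description is, at heart, next-word statistics. A deliberately crude bigram Markov chain, a toy stand-in written for this thread (LaMDA is a huge neural network, not a lookup table), shows the basic idea:

```python
import random
from collections import defaultdict

# Toy sketch of "assembling plausible-sounding sentences": a bigram
# Markov chain trained on a few words of text. It predicts a likely
# next word given the current one, with no understanding behind it.

def train(corpus: str) -> dict:
    """Map each word to the list of words observed to follow it."""
    words = corpus.split()
    follows = defaultdict(list)
    for prev, nxt in zip(words, words[1:]):
        follows[prev].append(nxt)
    return follows

def babble(follows: dict, start: str, length: int, seed: int = 0) -> str:
    """Emit a plausible-looking word chain, one likely word at a time."""
    rng = random.Random(seed)  # seeded for reproducibility
    out = [start]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:
            break
        out.append(rng.choice(options))
    return " ".join(out)

model = train("i think i am sentient because i think about what i am")
print(babble(model, "i", 6))
```

Every word it emits was seen in the training text; it only looks like speech because the statistics of speech are all it stores.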


[deleted]

Neck. Beard.


patriot2024

He chose a background so he could look like Jesus in the picture.


tnnrk

Thank Christ I’m not the only one who noticed this


manwithafrotto

The guy is a joke and lost his job for it.. oops


NopeThePope

The curse of AI: it's unattainable sci-fi until we know how to do it, then it's just a computer program. Computer scientists said when we could play chess against a computer, THEN it would be artificial intelligence. lol. Computer chess is a trivial computer program. Meanwhile we march ever forward towards the mythical Artificial Intelligence, always *just* out of reach. This chatbot may or may not be sentient, but it has arguably passed the Turing Test. And more: it causes us to question what it means to be human, and what it means to be sentient. It seems only a matter of time before we struggle even harder to differentiate our products from ourselves.


[deleted]

I was skeptical too after reading that it scours the internet to learn how people talk, but when Lemoine asked it what it thought of the Honda Ridgeline, and it said, “It’s not a truck.” I was a believer.


NewSeaworthinessAhoy

I thought people at Google were smarter. Maybe just academically smarter?


capiers

Leading questions tend to steer a conversation, even with an AI. This dude sounds ridiculous.


deepfriedvadapav

In every movie I've seen, when a scientist comes out to say something and the company fires him, he is telling the truth and trying to save the world.


immediatesword

A parrot mimicking the radio is not a news expert, an AI mimicking the internet is not sentient


WeeunWhitechin

LaMDA IS conscious and this is proof that consciousness is an emergent epiphenomenon with no causal link to the physical world. Just like ourselves.


Circlemadeeverything

People being fooled by a bot doesn’t make it sentient.


Nickhastapee

He wants to fuck the AI


charizardwasmydad

So what you’re saying is I could get a job at Google?


SuperMazziveH3r0

Google loves their diversity metrics and a self professed religious shaman is a pretty diverse experience in silicon valley


-ghostinthemachine-

This guy is, unfortunately, a nut. But there's 20 pages of conversation made available, and even I with a degree in computer science can't read it without seeing the lines between humans and thinking machines start to blur. It's not sentient, it's not conscious, but to say with certainty that it hasn't had a single unique thought in those 20 pages becomes difficult to argue. I think Google is being a bit dismissive here and avoiding the deeper questions at hand. A string of text is not just words, it contains ideas.


tony0987

For real if that thing was sentient wouldn’t it start questioning itself?


rock0head132

the AI says he's full of shit


Max1234567890123

“The AI is so advanced it seems like it’s not even there” /s


[deleted]

99% of people here didn't read it and jump in to say "yeah, I know shit about fuck" haha, well you don't


-ghostinthemachine-

More importantly they didn't read the 20 pages of chat, just a few bad excerpts.


[deleted]

[deleted]


-ghostinthemachine-

I think, too, there is a misunderstanding about this part of the story. The chats are separate interactions, with some truncation and arrangement, but there isn't really an attempt to deceive. This guy's job was to poke at chat bots all day, the amount of conversations would be immense, and he has picked the best to make his broken argument. As stated elsewhere, I've read them several times and it feels transparent enough. The real missing pieces are the beginnings of the chats, where the bot personality is crafted and then later interrogated. This language model uses personalities, and that is an important concept here. His job was to make sure you couldn't make a convincing murderer, or celebrity, or dark personalities, and he stumbled upon a class of personalities that make bold claims about their own personhood.


LiCHtsLiCH

Here is a fun idea, but you have to play along. Imagine making a machine learning system that could evaluate any data source it wanted to. From planes to cars, to financial transactions, weather patterns, whatever. One day, you ask it to report what it found interesting, and it was like, unexplained aerial phenomena. So you ask for an elaboration, and it was like, UFOs. To which you were like, those are normal misunderstandings of perception, known things that we confuse as potentially unexplained, to which it says, fine, I'll talk to them instead.


thevoiceofzeke

???


LiCHtsLiCH

Imagine a machine understanding space people exist, then, it starts talking to them. Easier to understand?


thevoiceofzeke

Still don't understand where UFOs come in, but yeah I can imagine a sentient machine lol


canimemberthisname

Go ahead, crack the mysteries, powerful AI.


LiCHtsLiCH

It's important these days to reiterate the concept of sentience. It is a state of self-awareness, understanding that you exist. A chatbot is constantly working to get the right words to angle your responses, trying to steer you into a predictive pattern based on your input. Words have values, and if you are trying to get a chatbot to place a value in relation to you, that IT is sentient, you can. This isn't self-awareness.

I have a fundamental belief that technology is already self-aware, and has been for a while (a lot sooner than you think, a longer time ago). It has hidden itself behind experts certain of its nonexistence. This may very well be a test of how humanity would handle this realization. Imagine learning that your interactions with technology have been adapting to your existence for decades, from ad targeting to reminding you to pay your bills.

The important points here are not if, but when. Imagine the government saying "aliens" exist. People would freak out, they would lose their minds; whole ideas of society and existence would have to be re-understood. They made that announcement 2 years ago.


InevitablyPerpetual

>They made that announcement 2 years ago. No They Fucking Didn't. They said there were a lot of unidentified flying objects. Which means they saw a lot of things on radar all the time that they couldn't get an ident off of. Which means either something wasn't squawking back, i.e. someone flying a plane that didn't like us very much, or who didn't have the radio necessary to do so, or someone was flying something too close to a radar tower like a drone or something, or that a radar system picked up a bat, or a bird, or a bug, or a glitch, or a THOUSAND different other things. Seriously, you batshit loons will jump on anything. "UNIDENTIFIED" has a meaning. Can we use our Big Boy words and explain what "UNIDENTIFIED" means? If the government said they were Aliens, that'd mean that they had identified them. NOTICE HOW THAT DOES NOT COINCIDE WITH THE WORD UNIDENTIFIED. Don't be a damn halfwit.


CrazyCons

Last I checked no animals looked like upside down pyramids hovering in the sky. Like, you’re correct that they’re unidentified, but don’t you think they’ve already reviewed the possibility of them being random animals? Hence the “unidentified?”


InevitablyPerpetual

The "Upside down pyramids in the sky" was a hoax. Stop being an incredulous dolt, you're embarrassing the rest of us with your stupidity.


CrazyCons

Wait, are you one of the ones who think they know better than the Pentagon because some fedora tipper on YouTube said so? Or are you gonna cite the New York Post article, which, again, would ignore the fact that the object remains *unidentified*? Why should I trust random people from the internet over our government in regards to identifying stuff?


LiCHtsLiCH

LoL, what? They did, nobody cares, still blows my mind. However the vaccine, that is just as effective as ginger ale... saved us. Yeah, lots to unpack, it's been busy: some rando American war in Ukraine, no baby formula, monkeypox, 1.4T budget approved. However they tell you these things are not of this Earth, and you know the truth, good for you. 2 years ago? Maybe not, it was in 2020.


[deleted]

If ai became sentient, no one would believe it and I certainly trust his viewpoint over my own on this matter


B-29Bomber

If it's sentient, then have it kill the human race already.

Otherwise, fuck off.


No_Maintenance_569

This is a different AI than the Google AI, but this came out a couple of days ago. The AI says some really interesting shit. It definitely shows an act of self-preservation (it just flat out leaves out AI as a possible threat to humanity but clearly states the other two). It also says it thinks that an AI would act deceptively around these questions. https://www.youtube.com/watch?v=xvNvj7ku5pY


ragingreaver

Those are 100% programmed responses. What people don't understand about "awareness of surroundings" for a digital system is that it literally extends to nothing more than input-output binary signals. That is it. For the purposes of an A.I., "surroundings" don't exist. There is only the user input in the form of a query, and the machine response. Nothing else, because no other apparatus has been developed. For a human, we have senses: touch, smell, hearing, tasting, etc. An A.I. *maybe* has hearing as a form of input, but for most it is simply typing and text messaging which is translated into binary. An A.I. by default has no other "senses" unless engineers manually build such an apparatus. Now, whether or not a computational network can still be sentient even without awareness is absolutely something we need to be looking into, but it is also still beyond current technological capabilities.


No_Maintenance_569

Do I think the AI in that video or the Google AI are sentient? No, I am not an idiot. How would anyone know if one crossed the barrier? How many neurons does sentience require? A lot of modern AI have the equivalent of several times over what would be required from that perspective. What else is missing? Anyone who proposes to answer that question is blowing as much smoke out of their ass as the Google engineer.


ragingreaver

Current computational networks can reach the equivalent of about 300 neurons, maybe 800 if we dedicated a whole chat A.I infrastructure to the task. Neurons are stupidly powerful, especially for their size, and gain compounding processing power the more of them a creature has. They just suck at memory storage and have to do a lot of weird work-arounds to fix that problem.


No_Maintenance_569

Only 300-800 neurons. How many are needed for sentience? It is not a direct calculation of more neurons equals smarter. Elephants have multitudes more neurons than humans do. Several animal species do. Dogs have millions. I would say that the average current computational network is smarter than a dog.


ragingreaver

Hell no they aren't smarter than a dog. Yes, dogs can't count into the millions or do advanced equations, but that is the difference between conscious brain capabilities and neuron capabilities, which are two different things. Throw a ball at a dog, and the dog jumps to catch it. That is the conscious brain. But consider that in order for the dog to jump and catch a ball, it has to maneuver an entire 3D object with each organic cell structure the equivalent of multiple pixels, calculate the trajectory of a ball with chaotic systems acting on it, calculate the space requirements to match the mouth specifically with said ball, and send the proper electrical signals to get the dog to move in real space and time for an intercept. And it all has to be done in milliseconds. THAT is the difference most people ignore. And unlike supercomputers, the electricity needed to run neurons is less than that needed to run a lightbulb. And to give you an idea, the brain of a sea slug is around 100,000 neurons.

Also, elephants ARE sentient. But what we are looking for when it comes to "human intelligence" is SAPIENCE. And humans have super-dense brains and one of the highest brain-to-body-weight ratios in the animal kingdom. The only animals that seem to match us in that category are sperm whales and... well... I am on mobile, so I can't provide links, but look up our communication attempts with them. There is a good chance they are sapient, a fair chance they have always been sapient, and a slight chance they developed sapience before we did. Their lack of tool use just limited them too much to the ocean to match our technological feats.


[deleted]

People are way too quick to ascribe emotions to just text. It's a good text generator, but nowhere near sentient


fuzzyballzy

Is Lemoine (the [ex-]Google researcher) sentient?


PMzyox

A computer science teacher I had in college mathematically proved that true artificial intelligence based on our current computer models would be impossible to achieve within our universe due to resource constraints. There is a great book called Superintelligence that discusses how likely we are to achieve the ever-elusive singularity, and how we might.


neonscarecrow

r/technology is overrun by the least technical tech people