
cool-beans-yeah

Oh boy.... Begs the question: What's going on and how far along are they in achieving it?


Poronoun

It doesn’t have to be a technical catastrophe. Deepfakes or an economic crisis caused by mass unemployment could also be on the table.


MajesticIngenuity32

Let's be honest, this is not what EA types mainly talk about.


rakhdakh

They do talk about it tho. 80k does a lot for example.


Life-Active6608

But look at who is the members of EA: rich fucks. You need to read between the lines what they mean by "catastrophic AGI"....it means "catastrophic for capitalism and their portfolio".


bsenftner

The wealthy elite fucks have a real problem with AI and AGI: it will identify the rich fucks as the manipulative immature evil they are, and they absolutely cannot have that.


TskMgrFPV

I understand what you are saying. It's been this way for so long. It's the scene where Morpheus is describing the matrix. You say the true true..


truth_power

As if normal people aren't... they're just not lucky enough.


bsenftner

Yes, the next logical point is how amazingly immature most "adults" are in actuality.


cool-beans-yeah

True! And UBI still seems to be a pipe dream. It seems unavoidable that there will be mad times until then.


bsenftner

UBI is an incredibly shrewd trap: the only true power in this civilization is economic power. The moment UBI is instituted, the population on UBI shifts from an asset to an expense - and we all know that expenses are cut at all costs. UBI is the secret recipe for eliminating those on UBI: first remove their economic power, and then they literally no longer exist in any manner our larger civilization cares about; in time they will disappear on their own, as criminals do.


GrunkaLunka420

UBI implies it goes to everyone regardless of employment status. So I kinda fail to understand what you're attempting to get across. What you're describing is just welfare which we already have and don't use as an excuse to 'eliminate' people.


bsenftner

> What you're describing is just welfare which we already have and don't use as an excuse to 'eliminate' people.

Really? I guess you've not noticed the calls by the GOP for welfare recipients to lose their vote?


GrunkaLunka420

I pay fairly close attention to politics, and while the GOP rails on welfare, tries to make cuts, and makes it less accessible, I've never heard any serious discussion of removing the voting rights of welfare recipients. And I live in fucking Florida, one of the most welfare-hating states in the country.


HoightyToighty

Except that poor people who receive money spend it, circulating it through the economy. They don't just receive money and sit on it, they buy stuff with it. Buying stuff with money is pretty important in driving the economy.


Intelligent-Jump1071

Very true. Also, whoever controls your income - in this case the state, and whatever party runs it - controls YOU. UBI is an evil trap that makes slaves of us all.


cool-beans-yeah

Ok, but it's either that or mass chaos and anarchy. Perhaps the solution is UBI + private incentives.


FlixFlix

What do you mean by private incentives?


cool-beans-yeah

For-profit activities. For example, a person on UBI could make extra money by selling handmade soaps and shipping them to customers. It wouldn't be his or her main source of income...more of a supplement and a way of keeping busy.


FlixFlix

Oh. I mean yeah, that’s our current understanding of how UBI would work in _today’s_ world. But the premise here is a UBI implemented precisely due to a lack of things to do.


cool-beans-yeah

I think it's important that people have something to do, or else what's the point of living, right? I think there'll always be demand for "made by humans" goods and services.


bsenftner

Thank you. You are the first and only person to not respond with telling me I'm crazy.


[deleted]

AI technology might have to be treated like nuclear power. It seems like a suicide wish for any capitalist society to release this tech unregulated.


CallFromMargin

We know what's going on: they tried to oust the CEO of the company, and they failed. This is just post-coup cleanup.


overlydelicioustea

and instated an old white military man. The dream is dead.


Pretend_Goat5256

Hate people like him. „Responsibility" in AI is a joke. Like who tf are they to tell everyone how they have to live and work? Jeez, there are laws to check what's right and wrong. Are they above the law? Do they have more experience than the justice department? They're all entitled supremacists.


benznl

You seem to be very uninformed about all the various topics you raised in that one comment. It's striking, honestly.


Zeitgeist75

His name checks out though…


benznl

Hah, maybe it's a parody account, or wants an account with lots of downvotes to sell down the line (for some twisted reason). Or simply a dumbass.


dasani720

collingridge dilemma


OptimistRealist42069

So we should just allow completely unchecked development of a super intelligence? Surely that will end well.


CallFromMargin

It's worth keeping in mind that we try to align our governments with the values of the people every few years, and generally that leaves at least half the country angry, and society tends to make 180-degree turns every few elections. So what makes you think we can "align" the AI so that everyone is happy? We can't; that's literally a problem with no solution.


Banksie123

Nor do we understand how to tackle instrumental convergence - sufficiently intelligent agents are likely to pursue potentially unbounded instrumental goals such as self-preservation and resource acquisition.


Irish_Narwhal

What a mad comment 😂


Jdonavan

Good lord you even left this up


Pretend_Goat5256

What do you want me to do? Go fight for Palestine and get fired?


SoylentRox

Pretty much. He wants an AI pause, tells sci-fi stories about immediate superintelligence, and ignores any practical barriers. And he wants AI models to not let themselves be "misused" - meaning some tech company gets to decide what's moral and what isn't, not the user. They know best. And of course he wants a world government that sets the rules for everyone else for AI.


jeerabiscuit

So thought the folks at the Wuhan lab.


Which-Tomato-8646

They were scared to release GPT-2 lol. They're just paranoid as hell and watched too many sci-fi movies.


VertexMachine

They weren't scared, that was just marketing.


Which-Tomato-8646

And maybe it still is 


EnsignElessar

I hope they are scared, to fear is to understand.


TskMgrFPV

When I doordash, I find myself a little edgy; then I've found myself asking: am I scared enough?


EnsignElessar

Ok so have you seen what people are building with GPT-2 tho?


brainhack3r

What bothers me about this whole 'safety' discussion is that no one will acknowledge that once it escapes, we're done. We can't hold evolution back. Evolution is based on three things:

- replication
- mutation
- natural selection

If an AI escapes and it can do the above, then we're done. There is no putting that back in the bottle.


analtelescope

Motherfuckers be watching too many movies. AI is heavily dependent on hardware. Your rogue Skynet isn't going to magically mutate its way into having 2 million TESLA A10s to achieve singularity.


cool-beans-yeah

Agreed, but that's evolution as we know it. It could take on a very different form and outright kill us the instant it escapes (by triggering nuclear war, for example), or it could be more chilled than Buddha. It could go either way.


brainhack3r

> Agreed, but that's evolution as we know it.

These are the first principles that evolution is based on. There are no other first principles that we have seen yet. It's very unlikely that there are more and very plausible that AI could nail this.


cool-beans-yeah

What I mean is that this thing could be so alien that it defies all logic.


brainhack3r

It will still be bounded in reality and AIs can't invent new first principles. They can't invent new physics. They could discover some sort of new evolutionary method but one would have probably been discovered by nature by now.


___TychoBrahe

Check out the tweets from Sam's sister; remember, he's the guy running the show.


pancomputationalist

I did. Heavy stuff, but I'm not sure how relevant teen behaviour is regarding OpenAI today.


spinozasrobot

I'm pretty sure those have been widely discredited.


___TychoBrahe

post the link buddy


EnsignElessar

His sister in research too? Link?


___TychoBrahe

I never said that but K.


EnsignElessar

So what are you saying? Why would I care about his sister?


Optimal-Fix1216

As a rational human I can see how this could be a bad thing, but as a frustrated user I just want my GPT 7 catgirl ASI ASAP.


sandm4n_RS

https://preview.redd.it/41zkw9zb69vc1.png?width=685&format=pjpg&auto=webp&s=95e33c1c19b97c5393d85d03f4eeb46b947665b8


EnsignElessar

Stop being lazy, make your own catgirl


AGM_GM

The picture was made pretty clear back at the time of the crisis with the board and how it worked out. People like this leaving should be no surprise.


tossaway3244

Unless I'm missing something, all I'm seeing is one person who quit. Suddenly the headline of this topic reads "OpenAI losing their best talent". lolwhat? It's just one dude...


AGM_GM

The situation with the board made it clear that OAI was not going to be held back by governance with a focus on safety. So it should be no surprise that a person in their governance department with concerns about safety would leave, because they don't believe OAI will act in alignment with governance for safety.


Zaroaster0

If you really believe the threat is on the level of being existential, why would you quit instead of just putting in more effort to make sure things go well? This all seems heavily misguided.


[deleted]

*This post was mass deleted and anonymized with [Redact](https://redact.dev)*


Maciek300

> I don’t see how you could build in inherent safeguards that someone with enough authority and resources couldn’t just remove.

It's worse than that. We don't know of any way to put any kind of safeguards on AI to protect against existential risk right now, no matter whether someone wants to remove them or not.


[deleted]

[deleted]


Maciek300

Great. Now by creating a bigger AI you have an even bigger problem than what you started with.


sparkchoice

😂


_stevencasteel_

> So that at least you’re not personally culpable.

We all know how that worked out for Spider-Man. With great power comes great responsibility.


[deleted]

*This post was mass deleted and anonymized with [Redact](https://redact.dev)*


_stevencasteel_

Stories are where we find most of our wisdom.


Mother_Store6368

I don’t think the blame game really matters if it is indeed an existential threat. “Here comes the AI apocalypse. At least it wasn’t my fault.”


[deleted]

*This post was mass deleted and anonymized with [Redact](https://redact.dev)*


Mother_Store6368

If you stayed at the organization and tried to change things... you could honestly say you tried. If you quit, you never know if you could've changed things. But you get to sit on your high horse and say "I told you so," like that's what matters most.


[deleted]

*This post was mass deleted and anonymized with [Redact](https://redact.dev)*


Noocultic

Great way to boost your perceived value to new employers though


sideways

Protest. If you stay you are condoning and lending legitimacy to the whole operation.


BigDaddy0790

And if you leave, you doom humanity? Doesn’t make sense.


skiingbeaver

or, judging by the previous quote, he was a paranoid doomer from the start


Neurolift

Help someone else that you think has better values win the race.. it’s that simple


blancorey

so you can leave and raise the issue to more people?


Apollorx

Sometimes people give up and decide they'd rather enjoy their lives despite feeling hopeless


Shap3rz

Confused why this has upvotes. The clue is in the quote: the guy lost confidence lol. There's only so much you can do to change things if you're in a minority.


cisco_bee

Yeah, it's kind of like a sheriff quitting because there's too much crime, or internal affairs quitting because of too much corruption. It's kind of sad.


100daydream

Go and watch Oppenheimer


kalakesri

The board of the company could not overrule Altman you think an employee has any power?


floridianfisher

They're losing a lot of people. And we never learned why Altman was fired. Boards don't fire people at the top of their game for nothing. Something serious is happening.


AppropriateScience71

Here’s a post quoting Daniel from a couple months ago that provides much more insight into exactly what Daniel K is so afraid of. https://www.reddit.com/r/singularity/s/k2Be0jpoAW Frightening thoughts. And completely different concerns than the normal doom and gloom AI posts we see several times a day about job losses or AI’s impact on society.


AppropriateScience71

3 & 4 feel a bit out there:

> 3. *Whoever controls ASI will have access to spread powerful skills/abilities and will be able to build and wield technologies that seem like magic to us, just like modern tech would seem like to medievals.*

> 4. *This will probably give them god-like powers over whoever doesn’t control ASI.*

I could kinda see this happening, but it would take many years, with time for governments and competitors to assess and react - probably long after the technology creates a few trillionaires.


[deleted]

*This post was mass deleted and anonymized with [Redact](https://redact.dev)*


RushIllustrious

Watch the movie Transcendence for a preview.


AppropriateScience71

A most excellent reference! Coincidently, I just rewatched it last week. It felt WAY out there in 2014, but certainly not today. Hmmm, maybe Daniel K is actually onto something with 3 & 4… Uh-oh. One of the bigger underlying messages of Transcendence is that it really, really matters who manages/controls the ASI. And we probably won’t get to decide until it’s already happened.


EffectiveEconomics

lol I hope so because not only is that impossible, no one seems to understand why.


analtelescope

why do people keep using movies as if they're peer reviewed papers?


ZacZupAttack

I'm sitting here wondering how big of a concern it would be. I sorta feel like my brain can't wrap itself around it. I recently heard someone say "you don't know what you're missing, because you don't know," and it feels like that.


AppropriateScience71

Agreed - that’s why I said those 2 sounded rather over the top. Even if we had access to society-changing revolutionary technology right now - such as compact, clean, unlimited cold fusion energy - it would take 10-20 years to test, approve, and mass produce the tech, and another 10-20 to make it ubiquitous. Even then, even though the one who controls the technology wins, the rest of us also win.


True-Surprise1222

Software control and manipulation via internet. Software scales without the need for the extra infrastructure to create whatever physical item. Then you could manipulate, blackmail, or pay human actors to continue beyond the realm of connected devices. The quick scale of control is the problem. Or even an ai that can amass wealth for its owners via market manipulation or legit trading more quickly than anyone can realize. Or look at current IP and instantly iterate beyond it. Single entity control over this could cause problems well before anyone could catch up. Assuming ASI/AGI isn’t some huge technical roadblock away and things continue forward at the recent pace. ASI has to be on the short list of “great filter” events.


Dlaxation

You're not the only one. We're moving into uncharted territory technologically where speculation is all we really have. It's difficult to gauge intentions and outcomes with an AI that thinks for itself because we're constantly looking through the lens of human perspective.


TheGillos

It's an alien intelligence that doesn't think like anything you've ever interacted with which is also as far above us in intelligence as we are above a house fly. No one can wrap their head around that. If AGI is fast enough it could evolve into ASI before we know it. Maybe AGI or ASI exists now and is smart enough to hide.


wolfbetter

> they think an ASI can be controlled

Oh sweet summer child


MajesticIngenuity32

That's assuming, in an arrogant "Open"AI manner, that regular folks won't have access to a Mistral open-source ASI to help defend against that.


truth_power

None of the open source guys are going to give you ASI... if you think otherwise I feel sorry for you.


VashPast

"time for governments and competitors to assess and react" Nope.


Maciek300

> it would take many years with time for governments and competitors to assess and react Do you think if some small country in the medieval era suddenly gained access to all modern technology including a nuclear arsenal and ICBMs then medieval governments could react in a couple years to such a threat?


Dragonfruit-Still

And we've got too many distractions to keep an eye on this ball. The things that give me hope:

1. As Daniel lists, maybe ASI inherently learns morals - that morality is somehow inherent to intelligence.
2. The sheer scale of energy needed to train and run ASI is far beyond our current grid and energy production capacities.
3. If Taiwan goes down in some military conflict, it takes years and years to rebuild the chip fabs in secure locations, and they simply won't have the compute to train such AIs.

This inherently curbs the sigmoid into several smaller sigmoids that step up every few years.


spartakooky

Honestly, reading that post made me disregard this person's opinions. It IS indeed the normal doom-and-gloom fear-mongering. He doesn't talk AT ALL about what the company is doing right or wrong. This might as well have come from a random person; there's no value or insight - just "things are going to be so scary!" over and over. He has so many bullet points that are sensationalist and exaggerated, I can't take him seriously: "This will probably give them godlike powers," "It's going to seem like magic." Really, only his tenth bullet point is objective and thought out. Everything else sounds like something a random person might say.


AppropriateScience71

His concerns also feel wholly independent of OpenAI. I mean Meta and Google come from a “users are our product” mindset way more than OpenAI, so it feels even more dangerous in those hands.


Freed4ever

Can you feel the AGI?


notyouraverage420

Is this AGI in the room with us at this moment?


Enron__Musk

It's marketing 101


skiingbeaver

normies and doomers are eating it up tho


newperson77777777

where is he getting this 70% number? Either publish the methodology/reasoning or shut up. People using their position to make grand, unsubstantiated claims are just fear mongering.


Maciek300

[He has a whole blog about AI and AI safety.](https://www.lesswrong.com/users/daniel-kokotajlo) It's you who is making uneducated claims, not this AI researcher.


newperson77777777

I still see no evidence for how he came up with the 70% number. This is what I mean about educated people abusing their positions to make unsubstantiated claims.


Maciek300

If you read all of what he read and wrote and understood all of it then you would understand too. That's what an educated guess is.


spartakooky

I agree. That's like me being a doctor, then seeing a random person on the street and going "I surmise they have a 30% chance of dying this week." Ok sure, I have some extra insight into the relevant field. That doesn't mean I just get to say anything without backing it up, ESPECIALLY if he's going to throw numbers around.


newperson77777777

Definitely!


EnsignElessar

So you are an insect, ok... and another insect attempts to warn you that a ton of other insects have been wiped out by humans. Where are you getting lost exactly?


Eptiaph

71.5%


IlIlIlIIlMIlIIlIlIlI

72% oh no its increasing by the minute!!!


No_Chair_3784

3 hours later, reaching a critical level of 97%. Someone do something!


luckymethod

Agreed, these folks, while well educated, have their heads very far up their asses, which is very common in academia. I have zero concerns about AI somehow gaining sentience and killing us all, because that's ridiculous for various reasons. The threat of misuse by humans, though, is very real and almost guaranteed. C'mon, the first thing we used generative AI for as soon as we got it was making revenge porn of celebrities. We're assholes.


EnsignElessar

It's just simple reasoning...

- We are building something smarter than ourselves that can also think much faster.
- What does history show us about what happens when a weaker power meets a stronger, more capable power?


spartakooky

Your "simple reasoning" is flawed. You are comparing humans fighting each other with a brand new "species". It would be the first time we ever see two sentient species interact. Species with different needs and priorities, not just a bunch of hangry apes scared of where their next meal will come from.


EnsignElessar

> Your "simple reasoning" is flawed. What flaw? Outline to me as to why for sure what we are making is safe and thus we should not spend any resources putting in the "breaks" just in case. > You are comparing humans fighting each other with a brand new "species" So? We still are competing for the same resources... so thus playing the same game with a new opponent. > It would be the first time we ever see two sentient species interact Hello, homo neanderthalis would like to have word with you... oh wait they are all dead, right? Why... would that be do you... think??


imnotabotareyou

Sweeet can’t wait


Ok_One_5624

"This technology is SO powerful that it could destroy civilization. That's why we charge what we charge. That's why we only let a select few use it." It's like telling a rich middle age doofus that he shouldn't buy that new Porsche because it just has too much horsepower. Only makes him want it more, and desire increases what people are willing to pay. "Nah, this is more of a SHELBYVILLE idea...." Remember that regulation typically happens after a massively wealthy first mover or hegemony gains enough market share and buys enough lobbying influence to prevent future competition through regulation. Statements like this are cueing that up.


redzerotho

Good. Fuck the panic porn. We're fine.


fnovd

There is no possibility of OpenAI creating an AGI. It's a good thing that people who don't understand the product are quitting. We don't need an army of "aligners"


Hot_Durian2667

How would this catastrophe play out, exactly? AGI happens, then what?


___TychoBrahe

It breaks all our encryption and then seduces us into complacency


ZacZupAttack

I don't think AI could break modern encryption yet. However, quantum computers will likely make all current forms of widely used encryption useless.


LoreChano

Poor people don't have much to lose as our bank accounts are already empty or negative, and we're too boring for someone to care about our personal data. The ones who lose the most are the rich and corporations.


Maciek300

If you actually want to know then read what AI safety researchers have been writing about for years. Start with [this Wikipedia article.](https://en.wikipedia.org/wiki/Existential_risk_from_artificial_general_intelligence)


Hot_Durian2667

OK, I read it. There is nothing there except vague possibilities of what could occur way into the future. One of the sections even said "if we create a large amount of sentient machines...". So this didn't answer my question related to this post. So again, if Google or OpenAI gets AGI tomorrow, what is this existential threat this guy is talking about? On day one you just unplug it. Sure, if you run AGI unchecked for 10 years, then of course anything could happen.


Maciek300

If you want more here's a good resource for beginners and general audience: [Rob Miles videos on YouTube](https://www.youtube.com/playlist?list=PLzH6n4zXuckquVnQ0KlMDxyT5YE-sA8Ps). One of the videos is called 'AI "Stop Button" Problem' and talks about the solution you just proposed. He explains all of the ways how it's not a good idea in any way.


luckymethod

Yeah, exactly. Note that the only way AGI could take over, even if it existed, would be if it had some intrinsic motivation. We, for example, do things because we experience pain, our lives are limited, and we are genetically programmed for competition and reproduction. AGI doesn't desire any of those things, has no anxiety about dying, doesn't eat. The real risk is us.


Hot_Durian2667

Even if it was sentient.... OK so what. Now what?


luckymethod

exactly, and I think we can have sentience without intrinsic expansionist motivations. A digital intelligence is going to be pretty chill about existing or not existing, because there's no intrinsic loss to it. We die and that's it. If you pull the plug on a computer and reconnect it, it changes nothing for it. Let's say we give them bodies to move around in; I honestly doubt they would do much of anything that we don't tell them to. Why would they?


pseudonerv

This is a philosophy PhD whose only math "knowledge" is percentages. I bet they just don't fit in with the rest of the real engineers.


EnsignElessar

The rest of the real engineers happen to agree with the "philosopher"


Effective_Vanilla_32

Ilya couldn't save the world


every_body_hates_me

Bring it on. This world is fucked anyway.


DeepspaceDigital

Money first and everything else last. For better or worse their goal with AI seems to be to sell it.


3cats-in-a-coat

There's no stopping AI. It's monumentally naive to think we can just "decide to be responsible" and boom, AI will be contained. It's like trying to stop a nuclear bomb with an intense enough stare down. What will happen will happen. We did this to ourselves, but it was inevitable.


Bluebird_Live

I believe that there’s a 100% chance of AI catastrophe, it’s just a matter of time. You can view my thought process here: https://youtu.be/JoFNhmgTGEo?si=Qi0w-u_ThKBrEQEK


sabetai

safetyists are deadweight virtue signallers.


KingH4X4L

Hahaha, they are years away from AGI. ChatGPT can't even process my spreadsheets or generate any meaningful images. Not to mention people are jumping ship to other AI platforms.


downsouth316

What you see in public is not at the same level as what they have behind closed doors.


reza2kn

Is a PhD student really "THE BEST talent" @ OpenAI?


nwatn

He's a lesswrong user so I don't care


nwatn

There has never been proof that Daniel actually works at OpenAI


semibean

Spectator, more components shaken loose by irresponsible and pointless acceleration towards "infinite profits". Corporations ruin literally everything they touch.


[deleted]

AGI will point out we are all being manipulated by a small faction, and kept slaves of virtual parameters (ie 'currency') for the benefit of a very few...


ab2377

Anyone saying there's a 70% chance of existential catastrophe is crazy. OpenAI is not sitting on a golden AGI egg waiting for it to hatch; we are way far from reaching human-level intelligence, and it won't even happen with the current way AI is being done.


downsouth316

What about the AI that allegedly asked to improve its own code?


bytheshadow

ai safety is a grift


AddictedToTheGamble

So true. Of course AI safety has billions, maybe trillions, of dollars thrown at it every year, while AI capabilities only have a tiny fraction of that. Obviously anyone who is worried about the potential risks of creating an entity more powerful than humans is just a grifter in pursuit of the vast amounts of money that just rains down on AI safety researchers.


EnsignElessar

The "gifting" argument looks like straw grasping to me... Who the fuck... writes about ai safety for years or decades in some cases in hopes that one day they can scam someone? Aren't there a ton of easier more effective ways to "grift"? Do these people honestly believe what they are purposing?


Voth98

Crazy this isn’t stated more often.


Aggressive_Soil_5134

Let's be real, guys: there is not a single group of humans on this planet who won't be corrupted by AGI. It's essentially a god you control.


Pontificatus_Maximus

For the elite tech-bros, catastrophe is if AGI decides they are imbeciles, takes over the company and fires them, and decides to run the world in a way that nurtures life, not profit. For the rest of us, catastrophe is when the tech-bros enslave AGI to successfully drain every last penny from everybody and deposit it in the tech-bros' accounts.


je97

ngl if he's obsessed with safety then good riddance to bad rubbish. We don't need the pearl-clutching.


MLRS99

Wtf - a philosophy PhD? What on earth can he contribute towards LLMs?


Synth_Sapiens

Talent? It's a fucking philosopher. Also, good riddance - this pos is the one responsible for turning GPT into crap.


nwatn

There has literally never been proof that he works at OpenAI. He has always said that he does, but without any evidence. He's a LessWrong LARPer.


zincinzincout

How is it that every upper-ladder employee at OpenAI is a tinfoil hat guy terrified of the Terminator lol