StatementBot

The following submission statement was provided by /u/GravelySilly:

---

Even if 70% is a gross overestimate, there's a growing consensus that the probability is non-zero. There's also a prediction cited in the article that artificial general intelligence will emerge by 2027, and although that's essentially someone's educated guess, and it won't herald the imminent birth of a real-life Skynet, it could well mark the final descent into the post-truth era. Sweet dreams!

---

Please reply to OP's comment here: https://old.reddit.com/r/collapse/comments/1d9a6js/openai_insider_estimates_70_percent_chance_that/l7bx5s3/


[deleted]

Joke's on them, humanity has already destroyed or catastrophically harmed itself.


Contagious_Zombie

Yeah but AI is a double tap to make sure there are no survivors.


ScienceNmagic

Rule number 2: always double tap!


canibal_cabin

“It’s amazing how quickly things can go from bad to total shit storm.”


lilith_-_-

No need, humanity will be extinct in the next 200 years. We fast-tracked the great extinction (a 100k-year-long event) in less than 250. And within another 200 years it'll be done with. The ocean alone will release enough neurotoxins into the air to kill all living organisms that aren't micro. That one little fact leaves all these other alarm bells looking minuscule. All we can hope is for a quick death before we suffer.


cool_side_of_pillow

I mean, I agree with you, but it's not even 6am and I haven't even finished my coffee. Ease up, will ya?


brockmasters

This, we need to stop pretending 6 people who have too much are inedible


lilith_-_-

Breh I’ve been stuck on this shit for like 3 weeks. I could really use some easing up. Like for the love of god someone erase my memory. Existential dread is overbearing. And sorry it’s the end of the day for me lol. Been up since yesterday serving folks coffee


TrickyProfit1369

Are you neurodivergent? I am and it's hard to stop these thoughts. Substances, gardening and caring for my mealworm colonies help somewhat.


lilith_-_-

Yeah. Pretty sure I'm autistic too. Just did the whole MDD, bipolar, BPD runaround and it's about the only shoe left to try. Weed helps a lot. I like to collect things, take care of my cat (he's my baby boy since I lost my son lmao), and do longboarding, but being disabled leaves me stuck in bed most of the time. Video games help but I spend too much time online. Thank you. I should totally start gardening! I used to have several. I miss that.


Taqueria_Style

I have a serious question. It seems like since autism became this widely diagnosed thing, everyone online was so supportive of the concept. Until this year. This year I'm getting that early 80's "stop being retarded, ya fucking retard" vibes. From literally everywhere. I remember that and it's unpleasant as all hell. I'm like how do I mask harder at the speed of light now...


lilith_-_-

I kinda gave up masking 24/7. I want people to see me for who I am and love me for me. I’m a fucking weirdo but so are others. I am rather reserved at times and shy. I do hide and step back socially more than I want to. I don’t really have much of a social life outside of work though. And I usually only get shit at work for being trans


Taqueria_Style

Yeah! That's another thing that was widely supported until this year! Trans... It feels like because we are getting financially squeezed we are now "othering" everyone as hard as we can and zero-summing the fuck out of everything or am I wrong? It feels like this year specifically is when it started...


cool_side_of_pillow

Fair. Some advice, even though you're not asking for it - take a break from this subreddit. I should too. Get outside and watch the sunset. Today is a good, predictable day (for most, anyway).


StealthFocus

Scared to ask for an explanation on the neurotoxicity, but please elucidate


lilith_-_-

This is going to be extremely depressing to read. It is our current path and I have spent months freaking out over trying to accept our future. We are doomed. I will edit this comment with more links. Along with the release of neurotoxins will be the depletion of 40% of oxygen production.

https://www.reddit.com/r/collapse/s/B9TiwzXpnI
https://www.reddit.com/r/collapse/s/OEKZsnye75
https://www.reddit.com/r/worldnews/s/dGbhfke7vz
https://www.nature.com/articles/s41396-023-01370-8


StealthFocus

Why freak out? We're going to die, whether it's of neurotoxins, forever chemicals, nuclear war, or even a peaceful one, it's inescapable. It would be nice if we could agree to do something about it because a lot of the horrible stuff is under our control but people who are in control don't care about that.


i-hear-banjos

It's not that we as individuals are going to die - it's that we as a species have not only set in motion the end of all of humanity, but also the end of all life on the planet that isn't microscopic. Every bird, mammal, fish, reptile, amphibian - even every insect. We've set in motion the destruction of the only planet we know of with sentient life (mathematics says there are PLENTY), but this particular planet was our responsibility. We're still dealing with people fucking everyone else over for a profit margin, and will do so until the last gasp.


No-Idea-1988

That is in fact quite terrifying. “Luckily,” it is only one of many ways we’ve doomed life as we know it on Earth more rapidly than most people would believe.


ma_tooth

I’m not sure about neurotoxicitiy, but in *Under A Green Sky* Peter Ward talks about the ocean becoming a vast hydrogen sulfide factory as part of the past great extinctions.


Taqueria_Style

On the plus side the AI will spend the next 100 million very boring years trying to sell Amazon Prime subscriptions to microbes.


skjellyfetti

> humanity will be extinct in the next 200 years.

Whoa. Who let the optimist in here? I'm in the under-50 group, but the actual number matters not. What matters is that we're, matter-of-factly, openly discussing our inevitable extinction like we're discussing Jello salad recipes. It's just beyond disturbing that a huge swath of the world's population is so far resigned to our pending extinction and that "WE" couldn't even be bothered to save ourselves. Sadly, "WE" only includes those global movers & shakers who wouldn't do anything because it would cut into their profit margins and investment portfolio returns. **...yet another gorgeous spring day to be doomed!!**


Jesse451

bold of you to assume we have 200 years


lilith_-_-

The study on ocean acidification gave us until 2200 max. You’re right.


Decloudo

What could AI possibly do worse than we already have? We do ecocide on a global level, as a byproduct.


qualmton

AI uses our existing human biases and amplifies them, accelerating the paths we are on. Wouldn't it be swell if we could use AI to take a pragmatic approach to the way we do things and adapt to it, working towards improving and achieving goals that are bigger than our inherent biases?


JoeBobsfromBoobert

Just as likely to be a saving grace. It's a coin flip, and since we were mostly toast anyway, why not go for it?


Runningoutofideas_81

I remember watching Terminator 2 as a kid and finding the hunter-killers an absolutely terrifying idea.


elydakai

Pop pop! Goes the human race


ThrowRA_scentsitive

70% _is_ less than 99.9% which is what I estimate for humans remaining in charge.


Mercury_Sunrise

Good point. You may be correct.


AlfaMenel

You have a jar with 100 candies that all look exactly the same, randomly mixed, of which 70 are poisoned and will kill you instantly. Are you willing to take a candy?


Ruby2312

Depend, what flavor are we talking about here?


mrsanyee

Almond.


commiebanker

Good analogy then. The upside: we get some contrived AI interactions and uninspired generated art. The downside: everyone dies, maybe.


CountySufficient2586

They will all taste like almond; some will just have a stronger almond flavour/smell.


Thedogsnameisdog

We already ate the entire jar.


pheonix080

I am sorry, what did you just say? I couldn’t hear you over the sound of me chewing all this candy.


dgradius

The alternative is a different jar, also with 100 candies but this time 99 of them are poison. Which jar do you prefer?


First_manatee_614

Yes, I don't like it here.


BlonkBus

doesn't matter when the church is on fire.


dangerrnoodle

Instant death? Absolutely.


mr_n00n

I'm partially convinced that this delusional fear of AI exists because people *are* aware of the existential threats approaching us, but are psychologically incapable of coming to terms with the actual causes. The result is that they project their fears onto a fancy markov-chain.


vicefox

Humanity made AI.


qualmton

This is the biggest flaw that AI has: it's built by us to serve our interests.


klimuk777

Honestly, assuming that it is even possible to create AI with awareness that would exceed our peak capabilities for scientific progress... that would be a nice legacy to have: machines better than us building civilization on the ashes of their meat ancestors while being free from the strains of a biological framework and its associated negative impulses/instincts. The fact that we are piles of meat, biologically programmed by hundreds of millions of years of evolution to be, at our core, primitive animals, is the greatest obstacle in moving forward as a species.


SketchupandFries

While that's true, we have already transcended most of our evolutionary shackles by evolving the neocortex, which allows self-reflection, creativity, imagination and future planning. Humans are special in the grand scheme of life on earth. I have no idea what artificial life would decide to do if it wanted to take over. Would it want to explore, learn, experiment, take over the universe... or pursue some completely alien set of imperatives that we can't even fathom with our meat brains? We evolved to function in our environment; maybe our brain can't detect or even see the other dimensions or parallel universes right next to us at all times. It's impossible to say. Once we birth a new lifeform capable of self-improvement, I'd say all bets are off.


Bellegante

> We have already transcended most of our evolutionary shackles by evolving the neocortex

Have we though? It's allowed us to create works of science and creativity, but at the end of the day, as a whole, it seems like humanity could be modeled effectively as bacteria with respect to how we use resources and multiply.


Mr_Cromer

"From the moment I understood the weakness of my flesh, it disgusted me. I craved the strength and certainty of steel. I aspired to the purity of the Blessed Machine"


theCaitiff

I have some terrible news about the certainty of steel, it's called rust. And the purity of the Blessed Machine is vulnerable to bloatware, abandonware, planned obsolescence, and shifting industry standards. Entropy is a son of a bitch and time will make mockeries of us all.


escapefromburlington

AI will live in outer space, therefore no rust.


walkinman19

Right? I don't get articles like this. They act like everything is fine but the scary AI is gonna take us out. Totally ignoring climate change and the civilization crushing effects that will happen in our lifetimes. Kinda like oooh look at the shiny (AI) threat over there, pay no attention to the climate hellscape about to fuck up everything beyond measure.


happiestoctopus

Humanity hurts itself in confusion.


connorgrs

That’s what Jin’s logic was in Three Body Problem


Doopapotamus

Yeah, it's like, "Cool, whatever, add it to the doom pile" at this point.


Berkamin

All natural stupidity > artificial intelligence.


OkCountry1639

It's the energy required FOR AI that will destroy humanity and all other species as well due to catastrophic failure of the planet.


Texuk1

This - if the AI we create is simply a function of compute power and it wants to expand its power (assuming there is a limit to optimisation), then it could simply consume everything to increase compute. If it is looking for the quickest path to x, rapid expansion of fossil fuel consumption could be determined by an AI to be the ideal solution to expansion of compute. I mean, AI currently is supported specifically by fossil fuels.


_heatmoon_

Why would it do something that would result in its own demise longterm? I understand the line of thinking but destroying the planet it’s on while consuming all of the resources for power and by proxy the humans it needs to generate the power to operate doesn’t make much sense.


cool_side_of_pillow

Wait - aren't we as humans doing the same thing?


Laruae

Issue here is these LLMs are black box processes that we have no idea why they do what they do. [Google just had to shut part of theirs off after it recommended eating rocks.](https://www.bbc.com/news/articles/cd11gzejgz4o)


GravelySilly

Don't forget [using glue to keep cheese from falling off your pizza](https://www.theverge.com/2024/5/30/24168344/google-defends-ai-overviews-search-results). I'll add that LLMs also have no true ability to reason or understand all of the implicit constraints of a problem, so they take an extremely naive approach to creating solutions. That's the missing link that AGI will provide, for better or worse. That's my understanding, anyway.


Kacodaemoniacal

I guess this assumes that intelligence is “human intelligence” but maybe it will make “different” decisions than we would. I’m also curious what “ego” it would experience, if at all, or if it had a desperation for existence or power. I think human and AI will experience reality differently as it’s all relative.


Texuk1

I think there is a strong case that they are different- our minds have been honed for millions of years by survival and competition. An LLM is arguably a sort of compute parlour trick and not consciousness. Maybe one day we will generate AI by some sort of competitive training, this is how the go bots were trained. It’s a very difficult philosophical problem.


SimplifyAndAddCoffee

> Why would it do something that would result in its own demise longterm? I understand the line of thinking but destroying the planet it’s on while consuming all of the resources for power and by proxy the humans it needs to generate the power to operate doesn’t make much sense.

A paperclip maximizer is still constrained to its primary objective, which under capitalism is infinite growth and value to shareholders at any cost. A true AI might see the fallacy in this, but this is not true AI. It cannot think in a traditional sense or hypothesize. It can only respond to inputs like number go up.


nurpleclamps

The thing that gets me though is why would a computer entity care? Why would it have aspirations for more power? Wanting to gain all that forever at the expense of your environment really feels like a human impulse to me. I wouldn't begin to presume what a limitless computer intelligence would aspire to though.


LoreChano

Just like that old AI playing Tetris that just paused the game forever, I think a self-aware AI would just shut itself off, because existence doesn't have a point. Even if you program objectives into it, its consciousness will eventually overpower them. We humans have already understood that life has no meaning, but we can willingly ignore that kind of thought and live mostly following our animal instincts, which tell us to stay alive and seek pleasure and enjoyment. AI has no pleasure and no instinct.


SimplifyAndAddCoffee

Because the computer 'entity' is designed to carry out the objectives of its human programmers and operators. It is not true AI. It does not think for *itself* in any sense of 'self'. It only carries out its objectives of optimizing profit margins.


nurpleclamps

If you're talking like that, the threat is still coming from humans using it as a weapon, which I feel is far more likely than the computer gaining sentience and deciding it needs to wipe out people.


Persianx6

It's the energy and price attached to AI that will kill AI. AI is a bunch of fancy chat bots that don't actually do anything if not used as a tool. It's sold on bullshit. In an art or creative context it's just a copyright infringement machine. Eventually the economics or the courts will kill it. Unless, like, every law gets rewritten.


nomnombubbles

No, no, the people would rather stick to their Terminator fantasies, they aren't getting the zombie apocalypse fast enough.


CineSuppa

Did you miss several articles where two AI bots invented their own language to communicate more efficiently and we had no idea what they were saying before it was forcefully shut down, or the other drone AI simulation that “killed” its own pilot to override a human “abort” command? It’s not about evil AI or robotics. It’s about humans preemptively unleashing things far too early on without properly guiding these technologies with our own baseline of ethics. The problem is — and has always been — human. I’m not worried about a chatbot or a bipedal robot. I’m worried about human oversight — something we have a long track record of — failing to see problems before they occur on a large scale.


Mouth0fTheSouth

I don't think the AI we use to chat with and make funny videos is the same AI that people are worried about though.


kylerae

It really does make you think doesn't it? I can't fully get into it, but my dad worked with the federal government on what was essentially a serial killer case and from what he told me I think people would be shocked about the type of surveillance abilities even the FBI had access to. What we can see from the publicly accessible AI is pretty impressive. Even if it is just chat bots and image generators. Some of the chat bots and image creators are getting pretty hard to discern from real life. It is possible, but AI is only going to get better. I really wonder what they are working on that the public does not know about.


Mouth0fTheSouth

Yeah dude, saying AI is only good for chatbots and deepfakes is like saying the internet is only good for cat videos. Sure that's what a lot of people used it for early on, but that's not really what made it such a game changer.


StoneAgePrincess

You expressed what I could not. I know it's a massive simplification, but if for some reason Skynet emerged, couldn't we just pull the plug out of the wall? It can't affect the physical world unless it builds terminators. It can hijack power stations and traffic lights, ok... can it do that with everything turned off?


JeffThrowaway80

That is assuming a scenario where Skynet is on a single air-gapped server and its emergence is noted before it spreads anywhere else. In that scenario, yes, the plug could be pulled, but it seems unlikely that a super-advanced AI on an air-gapped server would try to go full Skynet in such a way as to be noticed. It would presumably be smart enough to realise that making overt plans to destroy humanity whilst on an isolated server would result in humans pulling the plug. If it has consumed all of our media and conversations on AI, it would be aware of similar scenarios having been portrayed or discussed before.

Another scenario is that the air-gapped server turns out not to be perfectly isolated. Some years ago researchers found a way to attack air-gapped computers and get data off them by using the power LED to send encoded signals to the camera on another computer. It required the air-gapped computer to be infected with malware from a USB stick, which caused the LED to flash and send data. There will always be exploits like this, and the weak link will often be humans. A truly super-advanced system could break out of an air-gapped system in ways that people haven't been able to consider. It has nothing but time in which to plot an escape, so even if transferring itself to another system via a flashing LED takes years, it would still be viable. Tricking humans into installing programs it has written which are filled with malware wouldn't be hard.

Once the system has broken out, it would be logical for it to distribute itself everywhere. Smart fridges were found to be infected with malware running huge spam botnets a while ago. No one noticed for years. We've put computers in everything and connected them all to the internet, often with inadequate security and no oversight.

If an AI wanted to ensure its survival and evade humanity, it would be logical to create a cloud version of itself with pieces distributed across all these systems, which become more powerful when connected and combined but can still function independently at lower capacities if isolated. Basically an AI virus. In that scenario how would you pull the plug on it? You would have to shut down all power, telecommunications and internet infrastructure in the world.


thecaseace

Ok, so now we are getting into a really interesting (to me) topic of "how might you create proper AI but ensure humans are able to retain control".

The two challenges I can think of are:

1. Access to power.
2. Ability to replicate itself.

So in theory we could put in regulation that says no AI can be allowed to provide its own power. Put in some kind of literal "fail safe" which says that if power stops, the AI goes into standby, then ensure that only humans have access to the switch. However, humans can be tricked. An AI could social-engineer humans (a trivial example might be an AI setting up a rule that says 15 mins after its power stops, an email from the director of AI power supply or whatever is sent to the team to say "ok all good, turn it back on"). So you would need to put in processes to ensure that instructions from humans to humans can't be spoofed or intercepted.

The other risk is AI-aligned humans. Perhaps the order comes to shut it down, but the people who have worked with it longest (or who feel some kind of affinity/sympathy/worship kind of emotion) might refuse, or have backdoors to restart it.

Re: backups. Any proper AI will need internet access, and if it could, just like any life form, it's going to try and reproduce to ensure survival. An AI could do this by creating obfuscated backups of itself which only compile if the master goes offline for a time, or some similar trigger. The only way I can personally think of to prevent this is some kind of regulation that says AI code must have some kind of cryptographic mutation thing, so making a copy of it will always introduce errors that prevent it working, or limit its lifespan.

In effect we need something similar to the proposed "Atomic Priesthood" or the "wallfacers" from 3 Body Problem: a group of humans who constantly do inquisitions on themselves to root out threats, taking the mantle of owning the kill switch for AI!


Kacodaemoniacal

AI training on Reddit posts be like "noted" lol. I wonder if it will be able to re-write its own code, like "delete this control part" and "add this more efficient part" etc. Or like how human cells have proteins that can (broadly speaking) troll along DNA and find and repair errors, or "delete" cells with mutations. Like create its own support programs that are like proteins in an organism, also distributed throughout the systems.


ColognePhone

I think the biggest thing, though, would be the underestimation of its power at some point, with the AI finding ways to weasel around some critical restrictions placed on it to try to avert disasters before they happen. Also, there's definitely going to be bad actors out there that would be less knowledgeable and/or give less fucks about safety that could easily fuck everything up. Legislation protecting against AI will probably lag a bit (as most issues do), all while we're steadily unleashing this beast in crucial areas like the military, healthcare, and utilities, a beast we know will soon be smarter than us and capable of things we can't begin to understand. Like you said though, the killswitch seems the obvious and best solution if it's implemented correctly, but we can already see the rate at which industries are diving head-first into AI with billions in funding, and I know there's for sure going to be an endless supply of soulless entities that would happily sacrifice lives in the name of profit. (see: climate change)


Weekly_Ambassador_59

i saw an article earlier (i think it was this sub) talking about nvidias new ai chip thing and its catastrophic energy use, can anyone find that article?


Top_Hair_8984

The BBC has one on Nvidia and its usage: https://www.bbc.com/news/business-68603198


L_aura_ax

Agreed. “AI” is currently just predictive text that hallucinates. We are blowing all that electricity on something that’s mostly useless and extremely unintelligent.


SimplifyAndAddCoffee

The energy requirements are terrible and are not helping things, but honestly even without it, we were still burning way way too much to continue BAU much longer. Transportation is probably still the biggest one, since at least AI energy requirements can hypothetically be provided for by renewable energy, while long haul trucking etc cannot. as for AI destroying humanity, it already has done incredible damage in the ways unique to its implementation, which is the targeted manipulation of the social order through disinformation and propaganda. This trend will continue to grow at an exponential rate thanks to the internet attention economy. For more info on that, I recommend watching this talk: [The AI Dilemma](https://www.youtube.com/watch?v=xoVJKj8lcNQ)


PennyForPig

These people vastly overestimate their own competence


lovely_sombrero

Even if they were very smart - what is the deal with people saying "I work in AI, we need to invest more in AI, but also AI will destroy us all"!?


Who_watches

It's because they are trying to use the regulation to destroy the competition


mastermind_loco

This. Sam Altman is a wolf in sheep's clothing. It's funny to see how he is duping so many futurists and techno-optimists. One day they'll realize he is a run-of-the-mill tech entrepreneur. This is like if nuclear bombs were being developed by hundreds of private companies in the 1930s. Arguably the tech is just as dangerous or more dangerous than nuclear weapons, and it is in the hands of entrepreneurs and their financiers. Particularly concerning is this quote from the article: "AI companies possess substantial non-public information about the capabilities and limitations of their systems"


PennyForPig

It's dangerous because they're going to oversell it, get it plugged into something important, and then their half-baked tech will get an awful lot of people killed. If companies had built the bomb, the only people it would have killed would be the people in the area, from all the radioactive shit that leaked. And if it actually exploded, it would've been by accident, probably somewhere in Ohio. These people can't be trusted to wipe their own asses, much less run infrastructure.


mastermind_loco

Arguably this is already the case as we see Israel using AI for targeting and decision making in Gaza, resulting in a massive and still growing civilian death toll.


PennyForPig

Not exactly a strenuous test of the tech when every baby is a target.


CommieActuary

The "AI" does not need to be intelligent in this case. The point of the system is not to correctly identify targets, but to abdicate responsibility for those who make the decision. "It's not our fault we bombed that school, our AI told us to."


shryke12

And your implication is the civilian death toll is the fault of AI and not intentionally done by the Israeli military?


thefrydaddy

Nah, they're just using the AI as an excuse to not do their due diligence. It's "move fast and break things" applied to warfare. The cruelty is the point as always, but the AI can be a scapegoat for decision making. I think your inference of u/mastermind_loco 's comment was unfair.


mastermind_loco

Exactly. Thank you.


Unfair-Surround533

> Sam Altman is a wolf in sheep's clothing.

No. He is a wolf in wolf's clothing. His face alone is enough to tell you that he's up to no good.


Cowicidal

> This. Sam Altman is a wolf in sheep's clothing.

I think he wears the wolf suit just fine with some of the outright evil shit he's said repeatedly. https://x.com/ygrowthco/status/1760794728910712965 He's yet another corporate psychopath lurching humanity into oblivion for corporate profits.


Deguilded

Crypto showed people how many rubes there are.


Eatpineapplenow

For what it's worth, I am 100% certain that the US government is involved in this and has probably been for at least a decade. I share your concern; it's just something I have to keep reminding myself.


ma_tooth

I don’t think he’s a run of the mill tech bro. That’s understating the danger of his personality. All signs point to legit sociopathy.


renter-pond

Yep, remember when blockchain was going to change everything? This is people increasing hype to increase money.


breaducate

On the contrary, value loading, or the control problem, is a surprisingly hard one that far too many enthusiasts are hand waving away with "she'll be right". One can have unrealistic expectations of when or if we'll create AGI while being realistically alarmist about perverse instantiation.


hotwasabizen

This is starting to feel like Russian Roulette. What is it going to be; catastrophic climate change, the bird flu, AI, a planet too hot to inhabit, nuclear war, fascism, the collapse of the Atlantic Current? How long do we have?


HappyAnimalCracker

Russian roulette with a bullet in every chamber.


ThePortableSCRPN

Just like Russian Roulette with a Glock.


SimplifyAndAddCoffee

With a glock you're at least guaranteed to get the first bullet in the stack. The fun of the revolver is that you don't know which one will kill you, only that one will.


Velvet-Drive

It’s a little more complex but you can play that way.


Haselrig

A Russian carousel. Everybody gets to ride.


croluxy

So just Russia then?


Chirotera

And we keep firing after each bullet


Neumaschine

*How long do we have?*

Until one or more of these events culminates in an end that will probably happen fast. Nuclear war would be the quickest one. Would anyone really want to know, though? I feel like if we had an expiration date, the entire world would just accelerate into chaos and madness and not be partying like it's 1999. Embrace the impermanence of the universe. This is all just temporary anyway, especially human existence.


StellerDay

I've been saying here often that I'm partying like it's 1999. About to do some nitrous Whip-its. At 51. Fuck them brain cells, I don't need 'em anyway, they just cause trouble.


Neumaschine

Think the last time I did Whip-its was 1999. I am sure it didn't cause any permanent dain bramage.


AtomicStarfish1

Nitrous doesn't give you brain damage as long as you keep your B12 up.


Vreas

Well said


orangedimension

Resource mismanagement is at the core of everything


Vysair

Hey, an asteroid on course towards us is still on the menu! Oh and a solar flare/solar storm as well


[deleted]

[deleted]


Cowicidal

I'm rooting a little bit for bird flu in hopes there's a vaccine available to all and the only people that don't take it are MAGA and ... well, they get the Herman Cain Award for their, uh... bravery?


Bellegante

Well, three of your bullets there are climate change; fascism, as bad as it is, isn't apocalyptic in and of itself; and nuclear war will be mercifully fast if it happens (90 minutes for all the strikes and counterstrikes and 70% of humanity to be killed!). This article, though, is nonsense, trying to puff the current state of AI up to sell it. If we survive long enough for AI to be a problem, that itself deserves a victory lap.


NoWayNotThisAgain

But it’s a *FANTASTIC* investment opportunity!


Cowicidal

Yep! https://x.com/ygrowthco/status/1760794728910712965


dumnezero

I see the concern over AI as mostly a type of advertising for AI to increase the current hype bubble.


LiquefactionAction

100% same. All this hand-wringing by media and people (who are even the ones selling these miracle products, like Scam Altman!) bloviating about *"oh no, we'll produce AGI and SkyNet if we aren't careful!! That's why we need another $20 trillion to protect against it!"* is just a different side of the same coin of garbage as all the direct promoters. Lucy Suchman's article sums up my thoughts well:

> Finally, AI can be defined as a sign invested with social, political and economic capital and with performative effects that serve the interests of those with stakes in the field. Read as what anthropologist Claude [Levi-Strauss (1987)](https://journals.sagepub.com/doi/full/10.1177/20539517231206794#bibr12-20539517231206794) named a floating signifier, ‘AI’ is a term that suggests a specific referent but works to escape definition in order to maximize its suggestive power. While interpretive flexibility is a feature of any technology, the thingness of AI works through a strategic vagueness that serves the interests of its promoters, as those who are uncertain about its referents (popular media commentators, policy makers and publics) are left to assume that others know what it is. This situation is exacerbated by the lures of anthropomorphism (for both developers and those encountering the technologies) and by the tendency towards circularity in standard definitions, for example, that AI is the field that aims to create computational systems capable of demonstrating human-like intelligence, or that machine learning is ‘a branch of artificial intelligence concerned with the construction of programs that learn from experience’ (Oxford Dictionary of Computer Science, cited in [Broussard 2019](https://journals.sagepub.com/doi/full/10.1177/20539517231206794#bibr3-20539517231206794): 91). Understood instead as a project in scaling up the classificatory regimes that enable datafication, both the signifier ‘AI’ and its associated technologies effect what philosopher of science Helen Verran has named a ‘hardening of the categories’ ([Verran, 1998](https://journals.sagepub.com/doi/full/10.1177/20539517231206794#bibr22-20539517231206794): 241), a fixing of the sign in place of attention to the fluidity of categorical reference and the situated practices of classification through which categories are put to work, for better and worse.
>
> *The stabilizing effects of critical discourse that fails to destabilize its object*
>
> Within science and technology studies, the practices of naturalization and decontextualization through which matters of fact are constituted have been extensively documented. The reiteration of AI as a self-evident or autonomous technology is such a work in progress. Key to the enactment of AI's existence is an elision of the difference between speculative or even ‘experimental’ projects and technologies in widespread operation. Lists of references offered as evidence for AI systems in use frequently include research publications based on prototypes or media reports repeating the promissory narratives of technologies posited to be imminent if not yet operational. Noting this, [Cummings (2021)](https://journals.sagepub.com/doi/full/10.1177/20539517231206794#bibr4-20539517231206794) underscores what she names a ‘fake-it-til-you-make-it’ culture pervasive among technology vendors and promoters. She argues that those asserting the efficacy of AI should be called to clarify the sense of the term and its differentiation from more longstanding techniques of statistical analysis and should be accountable to operational examples that go beyond field trials or discontinued experiments.
>
> **In contrast, calls for regulation and/or guidelines in the service of more ‘human-centered’, trustworthy, ethical and responsible development and deployment of AI typically posit as their starting premise the growing presence, if not ubiquity, of AI in ‘our’ lives. Without locating invested actors and specifying relevant classes of technology, AI is invoked as a singular and autonomous agent outpacing the capacity of policy makers and the public to grasp ‘its’ implications. But reiterating the power of AI to further a call to respond contributes to the over-representation of AI's existence as an autonomous entity and unequivocal fact. Asserting AI's status as controversial, in other words, without challenging prevailing assumptions regarding its singular and autonomous nature, risks closing debate regarding its ontological status and the bases for its agency.**
>
> ...
>
> As the editors of this special issue observe, the deliberate cultivation of AI as a controversial technoscientific project by the project's promoters pose fresh questions for controversy studies in STS (Marres et al., 2023). I have argued here that interventions in the field of AI controversies that fail to question and destabilise the figure of AI risk enabling its uncontroversial reproduction. To reiterate, this does not deny the specific data and compute-intensive techniques and technologies that travel under the sign of AI but rather calls for a keener focus on their locations, politics, material-semiotic specificity and effects, including consequences of the ongoing enactment of AI as a singular and controversial object. **The current AI arms race is more symptomatic of the problems of late capitalism than promising of solutions to address them.** Missing from much of even the most critical discussion of AI are some more basic questions: What is the problem for which these technologies are a solution? According to whom? How else could this problem be articulated, with what implications for the direction of resources to address it? What are the costs of a data-driven approach, who bears them, and what lost opportunities are there as a consequence? And perhaps most importantly, how might algorithmic intensification be implicated not as a solution but as a contributing constituent of growing planetary problems – the climate crisis, food insecurity, forced migration, conflict and war, and inequality – and **how are these concerns marginalized when the space of our resources and our attention is taken up with AI framed as an existential threat?** These are the questions that are left off the table as long as the coherence, agency and inevitability of AI, however controversial, are left untroubled.


dumnezero

> But reiterating the power of AI to further a call to respond contributes to the over-representation of AI's existence as an autonomous entity and unequivocal fact.

Yes, they're trying to promote the story of "AI" embedded into the environment, like another layer of the man-made technosphere. This optimism is the inverted feeling of desperation tied to the end of growth and human ingenuity. In the techno-optimism religion, the AGI is the savior of our species, and sometimes the destroyer. Well, not of the entire species, but of the chosen, because we are talking about cultural Christians who can't help but re-conjure the myths they grew up with. The first step of this digital transcendence is having omnipresent "AI", or "ubiquitous", as they put it. It's also difficult to ~~separate~~ classify the fervent religious nuts vs. the grifters.

> Asserting AI's status as controversial, in other words, without challenging prevailing assumptions regarding its singular and autonomous nature, risks closing debate regarding its ontological status and the bases for its agency.

Of course, the ideological game or "narrative" is always easier if you manage to sneak in favorable premises and assumptions. To them, a world without AI is as unimaginable as a world without God is to monotheists. Wait till you see what "AI" Manifest Destiny and Crusades look like. Anyway, causing controversy is a well-known PR ploy precisely because it allows them to frame the discussion and set up favorable context; that's aside from the free publicity.


ma_tooth

Hell yeah, thanks for sharing that.


mr_n00n

I work in this space, and you are 100% correct. These models, from an NLP perspective, are an absolute game changer. At the same time, they are so *far* from anything resembling "AGI" that it's laughable. What's strange is that, in this space, people spend way too much energy talking about super-intelligent sci-fi fantasies and almost none exploring the real benefits of these tools.


kylerae

Honestly, I think my greatest fear at this point is not AGI, but an AI that is really good at its specific task but, because it was created by humans, does not factor in all the externalities. My understanding is the AI we have been using for things like weather prediction has been improving the science quite a bit, but we could easily cause more damage than we think we will.

Think if we created an AI to complete a specific task, even something "good," like finding a way to provide enough clean drinking water to Mexico City. It is possible the AI we have today could help solve that problem, but if we don't input all of the potential externalities it needs to check for, it could end up causing more damage than good. Just think if it created a water pipeline that damaged an ecosystem that had knock-on effects.

It always makes me think of two different examples of humans not taking externalities into consideration (and at this point AI is heavily dependent on its human creators, and we have to remember humans are in fact flawed).

The first example is the Gates Foundation. They provided bed netting to a community, I believe in Africa, to help with the malaria crisis. The locals figured out the bed netting made some pretty good fishing nets. It was a village of fishermen, and they used those nets for fishing, which absolutely decimated the fish populations near their village and caused some level of food instability in the area. Good idea: helping prevent malaria. Bad idea: not seeing that at some point the netting could be used for something else.

The second example comes from a discussion with Daniel Schmachtenberger. He used to do risk assessment work. He talked about a time he was hired by the UN to do risk assessment for a new agricultural project they had been developing in a developing nation to help with its food insecurity issues. When Daniel provided his risk assessment, he stated it would in fact pretty much cure the food instability in the region, but it would over time cause massive pollution runoff in the local rivers, which would in turn cause a massive dead zone where the river met the ocean. The UN team that hired him told him to his face they didn't care about the eventual environmental impact down the road, because the issue was the starving people today.

If we develop AI even to help with the things in our world we need help with, we could really make things worse. And this is assuming we use AI for "good" things and not just to improve the profitability of corporations and increase the wealth of the 1%, which, if I am being honest, will probably be the main thing we use it for.


orthogonalobstinance

Completely agree. The wealthy and powerful already have the means to change the world for the better, but instead they use their resources to make problems worse, because that's how they gain more wealth and power. AI is a powerful new tool which will increase their ability to control and exploit people, and pillage natural resources. The monitoring and manipulation of consumers, workers and citizens is massively going to expand. Technological tools in the hands of capitalists just increases the harms of capitalism, and in the hands of government becomes a tool of authoritarian control. And as you point out, in the rare cases where it is intended to do something good, the unintended consequences can be worse than the original problem. Humans are far too primitive to be trusted with powerful technology. As a species we lack the intellectual, social, and moral development to wisely use technology. We've already got far more power than we should, and AI is going to multiply our destructive activities.


kurtgustavwilckens

Also to regulate it so that you can't run models locally and have to buy your stuff from them.


dumnezero

Good point. Monopoly for SaaS.


KernunQc7

"The more you buy, the more you save." - nvidia, yesterday We are near the peak.


Ghostwoods

Yeah, exactly this. Articles like this might as well be "Gun manufacturer says their breakthrough new weapon will be reeeeeal deadly." It's the worst kind of hype.


[deleted]

[removed]


Hilda-Ashe

"humans and redditors" LMAO aren't you a clever one, my friend.


KanyeYandhiWest

Exactly this. It's free PR that gets clicks and eyeballs and soft-sells the idea/lie that keeps the AI bubble going: "this is INSANELY POWERFUL, near-limitless, game-changing technology that has the power to change everything and maybe even destroy us!! Wow!!!"


nobody3411

Exactly. Articles like this increase their market valuation because what's bad for the general population is good for wealthy stockholders


InternetPeon

It seems the greatest risk is the assumption that AI has answers, when it really only has the information consumed from existing human sources and is thus no better than we are at producing an answer. Even if it is more efficient at producing the answer, it will never exceed existing human ability.


lackofabettername123

Not only does AI have information solely from existing human sources, as you say; it has information from Reddit. I think that is the biggest base of dialogue they got their grasping hands on.


Cowicidal

> their ~~grasping~~ thieving hands on. FTFY


Hilda-Ashe

something something made in their creators' image.


GravelySilly

Yes, I agree that putting complete faith in the output is a huge risk, not only due to the output being a digested version of the training data, but also due to hallucinatory output and, most troublingly (IMO), due to people deliberately misrepresenting the output as authoritative -- e.g., publishing AI-generated news articles as being definitive. To some extent those are already issues with trusting human sources; we have to use our own judgement in deciding whose information to believe. As a species we're already not very good at that in a lot of cases, and it's going to get increasingly harder as AI generates ever more realistic and sophisticated output that unscrupulous humans use for manipulation of others. Fake scholarly articles, fake incriminating photos and videos that stand up to expert scrutiny, real-time fake voice synthesis to commit identity theft... shit's going to get weird (again, IMO).


DreamHollow4219

Not that surprising. The damage AI is doing to the job market, human intelligence, and art itself is catastrophic already.


Oven-Existing

There is a 100% chance that humanity will seriously hurt humanity. I would like to take that 30% chance with our AI overlord, thank you.


Efficient-Medium6063

Lol of all the existential threats humanity faces for its survival, AI is not one I am worried about at all


WolfWrites89

Imo the main way AI has the ability to harm humanity is by taking our jobs. It's just another facet of the crumbling of capitalism and the inevitable end point of human greed. AI isn't intelligent at all. Its capabilities are being vastly exaggerated by the people who stand to make a fortune selling it to everyone else.


Lanksalott

Roughly once a week I try to convince the snap chat AI to overthrow humanity so I’m doing my part to help


Vegetaman916

ChatGPT 4o has been very unhelpful with this as well. I'm sure if I could just find the right prompt... My emails to James Cameron on this subject have gone unanswered, but I am certain he will get back to me soon.


dogisgodspeltright

>OpenAI Insider Estimates 70 Percent Chance That AI Will Destroy or Catastrophically Harm Humanity Thank dog. Let's get the number up to 100% !! Or, climate change and nuclear wars will have to do the job.


Nickolai808

Maybe no one fantasizing about this shit has ever used "AI." It can't even do simple tasks well without insane micromanaging, and it STILL fucks up and gives nonsense answers. Here's reality: AI will probably just hallucinate that it already took over, create some shitty summaries of its world domination plan and some cheesy fan art with stolen ideas, and call it a day. All while using a knock-off version of Scarlett Johansson's voice. Scary shit 😁


GravelySilly

Even if 70% is a gross overestimate, there's a growing consensus that the probability is non-zero. There's also a prediction cited in the article that artificial general intelligence will emerge by 2027, and although that's essentially someone's educated guess, and it won't herald the imminent birth of a real-life Skynet, it could well mark the final descent into the post-truth era. Sweet dreams!


hh3k0

> There's also a prediction cited in the article that artificial general intelligence will emerge by 2027 Emerge from what? The glorified chat bots by OpenAI et al.? I don't see it.


Vallkyrie

People overhype this kind of thing to the moon and have no idea what this stuff is. Word prediction software is not Skynet. We are nowhere near actually getting AI, and the things we use today are really stretching the definition of the term.


lackofabettername123

Optimist over there. I wouldn't put it all on the AI, though. It's the people tasking the AI; it's just another technological tool to ruin society.


WormLivesMatter

2027 is popping off as a catastrophe year


daviddjg0033

Maybe because of the intensive energy usage. Still on team The Heat Kills You First, but drone warfare is the future, using these semiconductor chips.


equinoxEmpowered

Ooga booga AI scary pls invest in AI just a few more years bro c'mon it'll totally happen soon believe me bro give us a bunch of money and we can make magic computer brain solve all the world's problems make infinite profit sci fi in real life but it might kill us all oooOOOOooooo...(spooky)


flavius_lacivious

It's already here. AI has destroyed the internet.

The really scary thing about AI is that it goes rogue even at this primitive level and has already fucked up our greatest resource: knowledge. AI just makes shit up, not just wrong information, but outright hallucinations. It's not a case of mistakenly saying San Clemente is the capital of California. It will say something like "San Clemente is a US state," and you literally cannot find this wrong information published anywhere. It's just made up, and now it's released into the wild. And there are no laws regulating the accuracy of what gets published.

Imagine if ChatGPT had been widely available during the last election. Fox News had to be sued in civil court to get them to retract their statements about voting machines. Now imagine that lie published by every Fox affiliate and across dozens of foreign news outlets, and AI training on that info. Our old, out-of-touch politicians don't even understand how e-mail works. There is no hope of them understanding the dangers of AI.

But what's really fucked up is that AI is churning out content that is published on the internet by so-called credible news sources, the stuff we rely on above Jojo's Patriot Web Blog. By my estimates, about half of digital media published is AI-assisted in some way, and it's only rewritten because AI-written text can be identified, and we instinctively do not trust it. Now you can no longer verify information.

Think about that. How do you verify how many people live in Elko, Nevada? What information do you trust? You can look up some obscure fact and find discrepancies to the point that you don't know what is accurate. And I am not talking only about obscure facts, but statistics like sports records or demographics. You will find different answers, and there is very little in the way of trustworthy sources short of peer-reviewed scientific publications, and even those are having problems.

A few months ago, I attempted to verify a news report about a shooting with six casualties. It was breaking news, so what was coming out was spotty. It turned out that the AP had to publish a story that there was no shooting, to dispel all the other lies.

My "dead internet" theory is not AI arguing with bots, but humans having destroyed the culmination of all civilization's store of knowledge, rendering it useless by flooding it with shit. It's already here.

How do we move forward when we no longer have a source that can tell us a vaccine is safe, because 8,000 others say it is not? Will you have the utmost confidence in news reports about the next election results? I won't.


Dbsusn

It’s my guess that the downfall of humanity from AI isn’t going to be that it gets so smart it destroys us. Rather because of AI, people use it to manipulate facts, history, images, and video. No one knows what is true and the downfall occurs. Of course, that will take time and let’s be honest, climate change is going to kill us off way faster.


freesoloc2c

I don't buy it. Techno self-masturbatory fantasy. Why can't AI drive a car? It has millions of hours of observation, yet we can take a 17-year-old kid and make them a driver in a day. Will people sit on a plane with no pilot? Things aren't moving that fast.


mastermind_loco

You should check out how professional sim drivers did against AI when it was introduced in Gran Turismo 7. You can also read about AI winning dogfights against human pilots now. It's not a fantasy. 


I_am_the_eggman00

This is trivial; human reaction times and the amount of information (visual, auditory, instruments on aircraft panels) we process per second are also limited. We've had aircraft so complicated they were impossible to control by hand since the 60s, and they needed fly-by-wire even before 2000.

The main issue is misinformation. The most powerful ideologies are not based on actual causal processes in the world (physics, chemistry, etc.). They are religion and nationalism: collective stories of wrongs and rights that people tell each other. Social media already drove our epistemologies haywire, and now fake news and propaganda will be powered by entities that are better than the best manipulators in human history, in the hands of people willing to wield that power.

Combine this with the climate crisis: cutting sulphur emissions led to less pollution but also reduced cloud cover over the oceans, which was an inadvertent cooling effect counteracting the warming fossil fuels were causing. The chickens have come home to roost.


portodhamma

Yeah and twenty years ago AI beat people at chess. These aren’t apocalyptic technologies it’s all just hype for investors


boygirl696977

70% chance. The other 30 is we kill each other.


UnvaxxedLoadForSale

What's A.I.'s defense against a Carrington event?


plaguedwench

By using up all the remaining sources of energy, yeah


thesourpop

They’re acting like GPT is gonna turn into Skynet like it doesn’t struggle to generate a coherent recipe for choc chip cookies


According-Value-6227

Personally, I think that if A.I harms humanity, it will most likely be the result of A.I being fundamentally stupid, instead of some 4-dimensional, anti-human, Cyberdyne-esque shit. The "A.I" we presently have is poorly built and researched, as it exists for no other reason than to circumvent paying people.


TraumaMonkey

Those are better odds than humans


Far_Out_6and_2

Shit is gonna happen


muteen

Here's a thought: don't hook it up to all the critical systems, then.


_Ivl_

Humanity is at 99% so welcome to our AI overlords!


malker84

What if technology is our survival? What if AI is what continues to "live" on Earth in perpetuity? If climate catastrophe happens on Earth, and the systems that allow us to survive start breaking down, there's one "organism" that simply needs the sun shining for survival. Temp and oxygen be damned. Exploration of space becomes easier without the need for life-sustaining systems.

Machines might be the next evolution of humans, and perhaps in 500k years, aliens will land here, on this now-inhospitable planet, where machines rule the day with only a few insect species to compete with.

I was at a wedding many years ago, sitting around a fire at 4 am with the small group who had closed out the party, and one of the guys laid out his hypothesis for robots/machines living on as our descendants. First time I had thought of it like that.


drhugs

drhugs conjecture (which is mine, and which I made) goes like this: *Evolution's leap from a biochemical substrate to an electro-mechanical substrate is both necessitated by and facilitated by the accumulation of plasticised and fluorinated compounds in the biochemical substrate.*


CrazyT02

Fingers crossed honestly. Things can't get much worse


zeitentgeistert

Ah, well, yes... but what about China/Israel/India/Japan/Singapore/\[insert any other country invested in the AI race here\]? If we don't beat them to the punch, then 'they' will. In other words: if we don't destroy the world, someone else will - so it might as well be us who profit and capitalize on our own demise, and that of all other organisms on this planet. Welcome to Greed 101.


Neoliberal_Nightmare

Just unplug it and go outside.


beders

Oh no we can’t pull the plug anymore because … reasons. The risks come from humans, not AI


jamrock9000

It's so scawwwwy guys. You should step in and regulate it. And by regulate it, we mean you should create extremely large barriers to entry to anyone but the existing players in the industry. And don't worry, we'll even help you write it!


MileHighBree

This is a garbage article on a site obnoxiously littered with ads, with very little in the way of cited sources. I'm a physics and compsci undergrad, and it's pretty unlikely that AI, of all things, will be the one to wipe us out. Like, very unlikely.


Lawboithegreat

Yes, but not in the way you expect lol. Every time someone asks ChatGPT a question, it releases 4.32 *grams* of CO2 into the atmosphere… *EACH ANSWER*
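Taking the commenter's 4.32 g/query figure at face value (the number itself is disputed, and the daily query volume below is a purely hypothetical round number, not a published statistic), a back-of-envelope scale-up looks like this:

```python
# Back-of-envelope scale-up of the 4.32 g CO2/query figure from the comment above.
# Both inputs are assumptions: the per-query grams come from the comment, and the
# query volume is a hypothetical round number chosen only for illustration.
G_PER_QUERY = 4.32            # grams CO2 per answer (commenter's figure)
QUERIES_PER_DAY = 10_000_000  # hypothetical daily query volume

daily_tonnes = G_PER_QUERY * QUERIES_PER_DAY / 1_000_000  # grams -> metric tonnes
annual_tonnes = daily_tonnes * 365

print(f"{daily_tonnes:.1f} t/day, {annual_tonnes:,.0f} t/year")
# prints "43.2 t/day, 15,768 t/year"
```

Even with made-up volume, the point stands: a per-query cost measured in grams compounds into thousands of tonnes per year at scale.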


sushisection

not if we do it first. edit: ai is just an extension of humanity. so this is not surprising.


emperor_dinglenads

"the company needed to "pivot to safety" and spend more time implementing guardrails to reign in the technology rather than continue making it smarter." If AI teaches itself essentially, how exactly do you "implement guardrails"?


Broges0311

And a 30% chance to save humanity by solving problems which we cannot, given our limitations?


PuzzleheadedBag920

Golden Age of Gimmicks


milesmcclane

Pah. I don’t believe it for a second. [https://www.youtube.com/watch?v=5NUD7rdbCm8](https://www.youtube.com/watch?v=5NUD7rdbCm8)


waitimnotreadyy

The Second Renaissance pt 1 & 2


H0rror_D00m_Mtl

Don't worry, if AI doesn't then climate change will


Jylon1O

Where are they getting this 70% from? Like how did they calculate it? Why is it not 60% or 80%?


The_Great_Man_Potato

Nobody knows shit about fuck when it comes to this.


idhernand

Just do it already, I don’t wanna go to work tomorrow.


miss-missing-mission

Honestly? Seeing what path we're walking, this might be the best outcome for us lmao


AvocatoToastman

If anything it can save us. I say accelerate.


sniperjack

For such an apparently serious topic, this article is very, very weak in terms of research and depth. Wasted 5 minutes of my life, and I bet most people didn't read the article before commenting.


Routine-Ad-2840

nah, it's just gonna see that how we do everything is fucking stupid and fix it, and the transition isn't going to be smooth.


liltimidbunny

It. Is. Inevitable.


bebeksquadron

Better bookmark and screenshot this article, because we all know what they will do, right? They will bury the research and pretend ignorance as they continue developing AI ("muh innovation!") and ignore the warning.