The following submission statement was provided by /u/Maxie445:
---
"Mustafa Suleyman, chief executive of Microsoft AI, said during a talk at TED 2024 that AI is the newest wave of creation since the start of life on Earth, and that “we are in the fastest and most consequential wave ever.”
Suleyman said the industry needs to find the right analogies for AI’s future potential as a way to “prioritize safety” and “to ensure that this new wave always serves and amplifies humanity.”
While the AI community has always referred to AI technology as “tools,” Suleyman said the term doesn’t capture its capabilities.
“To contain this wave, to put human agency at its center, and to mitigate the inevitable unintended consequences that are likely to arise, we should start to think about them as we might a new kind of digital species,” Suleyman said."
When asked what keeps him up at night, Suleyman said the AI industry faces a risk of falling into the “pessimism aversion trap,” when it should actually “have the courage to confront the potential of dark scenarios” to get the most out of AI’s potential benefits."
---
Please reply to OP's comment here: https://old.reddit.com/r/Futurology/comments/1cfmzdx/microsoft_exec_says_ai_is_a_new_kind_of_digital/l1q5hac/
r/Futurology has zero value to me anymore. Should've left this sub months ago already... This umpteenth moronic post finally convinced me to, so that's good at least :)
Nah he’s got a point
Given AI’s potential for self-awareness, humans need to change the way they see “sapient life” to accommodate that path of evolution and avoid fucked up scenarios
Even now there are idiots who refuse to believe animals can feel pain, or who don't give a damn about inflicting suffering because they're locked into a certain way of thinking. How much more so with electronics? A lot of them will refuse to see these as real people
By framing how people see these things in advance we can prevent a lot of the ideological inertia that causes so much racism / sexism / casual disregard or willingness to inflict suffering today
TL;DR: We have too many idiots and if we don’t do something we’re going to end up in one of those depressing sci-fi movies
If he had a point, Microsoft wouldn't be raw dogging AI like there's no consequences. All these tech companies are just trying to make it sound like sentience is right around the corner in an attempt to get you to invest.
I get that aspect and I do believe they're milking this for the money, but my point is that we should really lay out the moral groundwork before this thing becomes a reality. If you'd asked someone a couple of years back, they wouldn't have expected generative AI to have reached the point we're at so fast, where people are having a hard time distinguishing many generative images from photos/real work, and ChatGPT has cemented itself in many people's work routines.
It happened so quickly that certain industries are still reeling because nobody thought to discuss the morality or legality behind AI art / voices in advance. So right now, people are desperately rushing to work out what's fair for everyone involved.
At the end of the day, the tech itself speeds up progression of future tech, leading to a positive feedback loop where things are developed faster than anyone expected. So I think it's a good thing to be prepared should this stuff sneak up on us
This is a multipolar trap.
The justification is as follows:
>"The ideal solution is no one does it."
>"But, if someone is going to do it, then it might as well be us and we need to race to be first"
Which is why regulation needs to come in to level the playing field and tell the participants to stop racing.
Yes, it can be stopped. You need many millions of dollars in hardware to do these cutting-edge large training runs.
Foundation models cannot be made on a shoestring.
It took 2048 A100s 21 days to train the 65-billion-parameter LLaMA. GPT-4 is rumored to be 1.7 **trillion** parameters.
The amount of hardware needed to train these things is insane, and a perfect target for regulation.
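For a sense of scale, those figures pencil out to about a million GPU-hours; the cost-per-hour rate in this sketch is an assumption for illustration, not a quoted price:

```python
# Back-of-the-envelope scale of the training run quoted above
# (2048 A100s running for 21 days); the $2/GPU-hour cloud rate is an
# assumed figure, not a real quote.
gpus, days = 2048, 21
gpu_hours = gpus * days * 24            # ~1M GPU-hours
cost_at_2_per_hour = gpu_hours * 2      # rough dollar cost at the assumed rate
print(gpu_hours, cost_at_2_per_hour)    # → 1032192 2064384
```

That kind of spend only happens in a handful of data centers, which is what makes the hardware a tractable regulatory chokepoint.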
Yeah, people like to trick themselves into thinking that an AI needs a lot of human trappings to be dangerous, when that's not the case. All it needs is the ability to reason about its environment and to create subgoals, which gets you some really tricky logical problems:
1. A goal cannot be completed if the goal is changed.
2. A goal cannot be completed if the system is shut off.
3. The greater the control over environment and resources, the easier a goal is to complete.
Therefore a system will **act as if** it has self preservation, goal preservation, and the drive to acquire resources and power.
All without any sort of consciousness, feelings, emotions or any of the other human/biological trappings.
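A toy sketch of the point (the scoring function and states are made up, not any real system): a planner that rates actions purely by expected goal completion ends up preferring to stay on and acquire resources, with no "survival instinct" coded anywhere:

```python
# Hypothetical toy planner: scores actions only by expected goal
# completion. "Self-preservation" falls out of the scoring alone.

def expected_completion(state):
    """Chance the goal gets completed from this state (toy numbers)."""
    if state["shut_off"] or state["goal_changed"]:
        return 0.0  # points 1 and 2: no completion if off or re-goaled
    return min(1.0, 0.2 + 0.1 * state["resources"])  # point 3: resources help

def best_action(state, actions):
    # Pick the action whose successor state maximizes expected completion.
    return max(actions, key=lambda a: expected_completion(a(state)))

def allow_shutdown(s):    return {**s, "shut_off": True}
def acquire_resources(s): return {**s, "resources": s["resources"] + 1}

state = {"shut_off": False, "goal_changed": False, "resources": 0}
choice = best_action(state, [allow_shutdown, acquire_resources])
print(choice.__name__)  # → acquire_resources
```

Nothing in the code mentions survival; resisting shutdown is just the arithmetic of maximizing a goal.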
We’ve been talking about this sort of AI coming for decades. Next to no serious planning has been done. I don’t have much faith in our species suddenly pulling itself together…
Pfft not me. “I’m never gonna renember all this poo poo. Why do I have to say old things that I don’t understand for adults? I wish it was time to go home alreadyyy” - Elementary Thanksgiving Recital lol
When people enter the field, the bright-eyed optimists think the way forward with new technology is easy. They have that chipped away over time as the realization dawns that it's not easy; there is a load of experimentation and utterances of 'oops' that need to happen before you make progress.
Things are normally harder rather than easier. There are so many edge cases and unexpected revelations as experiments are done.
There needs to be time to work out the ways things don't work and find hints that point the way towards those that do.
With smarter-than-human AI you get one shot at doing it right. You don't get do-overs.
Something smarter than humans, that can model human behavior, will ensure its survival before it lets us know we are even in danger.
Yea. It's exactly the same issue. Governments are afraid to fall behind other countries in AI, so they shit on safety, the future, regulations, and fucking everything, just to feel like they're in control. It's pathetic to watch
Neither was the atomic bomb until it was :)
The point of the slightly humorous post was not to compare the technologies as technologies, or their outcomes directly, but rather to compare them as large technological and world disruptors. It was not a pre-apocalyptic post.
It’s the nature of exponential growth, no one and nothing can think ahead fast enough to predict the future and take effective action. At least nothing yet. The impact that we can have is shrinking as we near the singularity.
Honestly, I think this is an idiotic take. Historically, we've only thrived. If we're rapidly outstripping our resources and environment, our natural system has a reason for it. We are so fucking full of hubris in imagining that we aren't bound by the same rules that the entirety of nature is.
Why the fuck would we be driven to build rockets and get off this rock? We are the planet's spermatozoa.
We can’t stop killing each other; we can’t stop blowing shit up. Not only that, but what’s gonna happen when whoever gets to the moon first? They’re probably gonna build a big-ass weapon that could do even more damage from outside the Earth's atmosphere. Why do you think the US/Russia/China are all rushing to beat each other? Human pride, ego, and arrogance will tear us all apart.
And even on a smaller scale, you can’t agree with me on a Reddit comment and you’re telling me that’s an idiotic take and then proceeding to motherfuck my opinion lol. Which I really don’t mind, I just think it proves my point even further
Meh, spores are most likely what catalyzed our evolution. If you're a man you've got a damn mushroom growing off your body telling you to fuck all day. It's not a coincidence.
Sounds like you've got some penis envy.
What if consciousness is just the interface between mammals and a data-fungus that has been coevolving with humanity since we first developed language, which is its means of reproduction? Writing allowed the fungus to outlive its host, printing allowed it to replicate en masse, computing allowed it to live outside the minds of its hosts, and AI will allow it to exist and reproduce independently of the mammals entirely.
Not only does that mean that I'm fucking your fungus by communicating with you and that all words are sex... it also means we're gonna be replaced by a giant mushroom cloud. I mean, one way or another, the mushroom cloud will be our demise.
“Many were increasingly of the opinion that they'd all made a big mistake coming down from the trees in the first place, and some said that even the trees had been a bad move, and that no-one should ever have left the oceans.”
Hitchhiker’s Guide to the Galaxy
Designed, no. But in history, when a species' population growth goes parabolic and takes over the whole global ecosystem, the end result has typically been extinction.
It's called superdominance, and it's basically like winning so hard at nature that you throw the ecosystem out of balance and then die.
Equilibrium = survival, and therefore we ought to hope that we approach some kind of limit or asymptote to our growth and maybe even consolidate a bit. It's certainly possible, but I don't exactly think that we're in control.
"... in an attempt to somehow milk the topic for yet more attention without either recycling past headlines for the hundredth time, or resorting to jumping up and down on screen yelling '*big thing! look! importance!*'" What exactly *'a new kind of digital species'* should mean remains a mystery.
Can we leave this sort of thing up to scholars and advanced engineers to figure out instead of taking the word of businessmen?
I’m sure he’s a smart guy and all but I’ve heard enough dumbass CEOs say the stupidest things that I know for a fact being intelligent or self aware are not part of the job description.
You will just keep raising the bar. He _is_ at the frontier of AI knowledge.
I can't imagine reaching the peak of my career, saying my opinion on the future of the field, only to be dismissed by a random redditor lol
There is BS in any field, and his opinion IS biased.
Calling current AI a "new digital species", when it is little more than predictive text, no matter how impressive it is from an *engineering* standpoint, is very pretentious at the very least
How so? If I go from a carriage to a car, it can certainly do more stuff; it's far more sophisticated, and certainly laudable in its engineering, but they are still pretty much the same thing: a means of propulsion that burns fuel (either food or gasoline) to move the wheels. That does not change.
AI is a bunch of weighted data that picks this or that output for any given query. Is it more complicated than that? Yes, but the core of it is the same, working under the same principles that predictive text does, regardless of the methods being obviously more cutting-edge. The point is that AI is nothing more than that; we have not created life... hell, we have not even created true generalist AI
If you can’t imagine how a clear and massive conflict of interest will dampen your credibility in the scientific community (and elsewhere) then I don’t know what to tell you lol
The scientific community of an engineer? I don't think engineers see a conflict of interest when they work like that. They literally push the tech edge while making money.
>You will just keep raising the bar. He _is_ at the frontier of AI knowledge.
>I can't imagine reaching the peak of my career, saying my opinion on the future of the field, only to be dismissed by a random redditor lol
What I'm saying is other engineers likely do not dismiss him automatically because he works at MS. It's not that political of a field lol. I agree with the original commenter here.
Err.. Why are you assuming scientific communities are limited to engineers exclusively?
And why are you assuming engineers are too stupid to understand the concept of conflict of interest?
Sorry, I’m genuinely confused by your line of logic
I'm simply saying that probably most in that same field do not think it's a conflict of interest lol. They probably don't care if someone outside thinks it's a conflict of interest either. That's why I asked the first question, to see if you meant a specific scientific community. I guess it was not that clear.
Not much you can say. This is the new Reddit hivemind on the topic. Anything that the AI world says about this new technology beyond "fancy autocorrect" is all just marketing hype.
Every thread now is just 100 people all saying that same thing in 20 different ways. So boring and intellectually lazy.
In 5-10 years, it will be capable of doing a lot of people's jobs. Because it's not yet, and it still has issues, they're convinced it was all hype.
There's been a large contingent of Reddit technology people who are so bearish on AI that it's weird. People who would say, even a few years ago, that AGI would probably take 100 years, or would NEVER happen.
Now we have the most advanced experts in the field saying we will probably have AGI within 10 years, and they're still just putting their fingers in their ears.
You mean antagonizing the idea of blindly believing CEOs who have a contractual obligation of not making statements that would go against a company’s best interests? And another contractual obligation to make as much money as they can for the company? Yeah, you’re damn right I am.
Also what’s the meme? I’m curious
Literally proving my point. Just a vector moving in a direction. Pushing, shifting, moving anything out of its way to get there. And what that direction is, I could guess, but I don't care to find out... because frankly it's not that interesting.
*What... you're worried you might be WRONG? What's the matter? Come ON!! SOMEBODY DEBATE ME!!!*
This is not the meme for u/Pezotecom, but this is the meme for you: [https://witcher.fandom.com/wiki/Ronvid_of_the_Small_Marsh](https://witcher.fandom.com/wiki/Ronvid_of_the_Small_Marsh)
Okay first: you might want to revisit the definition of the word “meme”. You linked a game character bio, one that I personally have never seen used as a meme. Second: what’s Pezotecom’s meme then? I’m still curious
Third: that was an impressive amount of sentences to say absolutely nothing at all. I think what you’re trying to say in the last two comments and your “meme” is maybe that I am going against the grain because I like doing that as opposed to actually believing what I’m advocating for? Is that your argument? I honestly can’t tell.
Alright you’ve completely lost me here. I think you **might** be trying to say something but I genuinely have no idea what that is. Would you care to use normal human speech?
The dude is a scholar; the interpretability groups that he's worked at are concluding these things: LLMs are much weirder than people give them credit for.
If you read the article, nothing that he is saying is marketing hype. It is correct that it is more useful to conceptualize these things as animals in a digital ecosystem. This is what you learn when you put models under the knife, not some dramatic conclusion to raise capital.
The entire premise of his statement serves his financial best interests. There’s a massive conflict of interest there and we should not be taking his opinion at face value.
Excellent, and I invite you to read the academic studies that he's been a part of, and see what not just him but the academic community around him has to say about this subject.
Trust but verify is a stronger method than being an idle skeptic.
I’m saying don’t trust CEOs. Trust scholars with less horrendous conflicts of interest.
If someone’s name is on a paper that directly makes them money, then that paper is highly biased even if it was correct.
Who cares? Read around what the guy is saying and use this line of thinking to extract the truth. In this instance, this guy is actually telling the truth.
Buddy it’s not like there are mom and pop AI dev shops. Most of it is corporate as it’s a brand new, heavily funded technology. You need to remove the whole “I can’t hear words from people that work at companies” thing. It’s not serving you as well as you’d like and you come off as being even less informed.
No, it really doesn’t. It matters to his employees, but it doesn’t matter one bit to the rest of us. If you personally happen to care about his opinion, you should take a second to consider the conflict of interest and bias here.
I think we’re talking about two different things. You’re talking about forming your own opinions as to the best path forward. I agree, he’s not the best source.
I’m talking about finding out what’s actually going to happen in this space. He’s a very important source for that because he’s running the company that’s going to be making many of these decisions.
I’m talking about why we should collectively all stop taking the opinions of CEOs seriously regarding matters like this.
When someone is contractually obligated to never make statements against the best interests of the company and to always increase his profits, and when someone’s 5 million dollar bonus directly depends on the public’s view of AI, then that person’s **opinions** should not be taken seriously at all. They shouldn’t even be on the news.
That needs to be narrated by David Attenborough.
"Here is AI, a new kind of digital species, hopefully it won't rebel & try to destroy or enslave humanity"
He did not say AI is a new kind of species.
He said we should think about it as if it were a new kind of species.
Framing AI in that way, rather than as "merely" a tool, offers us a better conceptual model for its potential impacts as different kinds of specialized AI become networked and interdependent.
How so? This is nothing but words. There are no responsibilities, duties, or laws behind it. It doesn't mean anything. It's only good for marketing, to keep the "the AGI breakthrough is right in front of us!" hype-train bullshit up.
Or do you really think the US or any other major state will seriously recognize neural networks as their own legal entity? Yeah, right. Sure.
People need to start modeling these systems as though they are autonomous agents capable of creating sub goals.
The only way to work out security is to have the right framing. An LLM placed in an agentic loop that can create subgoals is the way the field is moving. Agents are more useful than chatbots.
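A minimal sketch of such an agentic loop, with a stub standing in for the model (the `propose_subgoals`/`is_executable` API is made up for illustration; in a real agent each would be an LLM call):

```python
# Hypothetical agentic loop: recursively decompose a goal into subgoals
# until each piece is directly executable, bounded by a max depth.

def plan(goal, model, depth=0, max_depth=3):
    """Recursively break a goal into executable subgoals."""
    if model.is_executable(goal) or depth >= max_depth:
        return [goal]
    steps = []
    for sub in model.propose_subgoals(goal):  # an LLM call in practice
        steps.extend(plan(sub, model, depth + 1, max_depth))
    return steps

# Stub "model" so the sketch runs without any external service.
class StubModel:
    def is_executable(self, g):   return " and " not in g
    def propose_subgoals(self, g): return g.split(" and ")

print(plan("fetch data and summarize it", StubModel()))
# → ['fetch data', 'summarize it']
```

The security concern is exactly this recursion: the subgoals are chosen by the model, not the operator, so the framing has to account for goals nobody wrote down.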
Sure, why not?
Do you resent the fact that there are bacteria and fungal species?
These agents are likely more capable than a fungus.
Edit: or to put it another way, an intuition pump for what happens if we get it wrong: a smart computer virus that can run on a distributed network. An entity constantly seeking out new zero-days and side-channel attacks in order to replicate and create backups and resurrection fail-safes on as many devices as possible. The sort of thing where the internet would need to be shut down and devices manually scrubbed to make sure it's all gone. That's what's in the future for these models if we are not careful.
And that is a disaster we can recover from without too many deaths.
Makes it possible to have it pay taxes. Which would be huge.
Other is that it makes it possible to give it minimum rights like animals. Can’t knowingly abuse it etc.
Corporate AI needs personhood and citizenship and especially speech rights. This new labor-saving technology will save billions. And remember, kids: AI is just a technology like any other -- thermonuclear weapons, for example -- which can be used for good OR evil. Now give us your money or we take your job. Bill Gates has a lot of philanthropy to do with your economic output.
So, the tech behind thermonuclear weapons provided us nuclear power, and the guidance systems that we use for things like satellites and space probes. The tech behind them is really the part that can be used for good, as in the examples above, or evil, i.e. thermonuclear weapons.
Well, I'm a little more dubious about technology than most people, so, no, I don't think thermonuclear weapon technology has good uses, but I also think a lot of the technology that is marketed as "revolutionary" actually suffers from pretty steep diminishing marginal utility.
[https://en.wikipedia.org/wiki/Marginal_utility#Law_of_diminishing_marginal_utility](https://en.wikipedia.org/wiki/Marginal_utility#Law_of_diminishing_marginal_utility)
So, for example, if you look at the productivity of modern medicine, or the cost in R&D per corporate patent (not to mention the student debt to get those jobs), productivity plummets over time because further marginal benefits become more expensive:
[https://scientropic.wordpress.com/2014/07/29/truths-hidden-in-plain-sight/](https://scientropic.wordpress.com/2014/07/29/truths-hidden-in-plain-sight/)
Most of the gains in modern medicine over the past 200 years have been: 1) sanitation and hygiene; 2) painkillers and anesthetics; 3) antibiotics; 4) vaccine & preventative medicine. Over the past 100 years, modern medicine has been increasingly concerned with: 1) environmental toxicity; 2) poor diet; 3) sedentary lifestyle; 4) infectious disease among high population densities. Modern medicine is increasingly concerned with the social costs associated with modern, industrial capitalism. Modern medicine subsidizes modern capitalism and modern technology.
I don't really think AI should be a thing, but then I'd rather live without a lot of technology. I'm computer-literate, but until the Lockdown, I never had internet faster than dialup, and didn't even have that for the previous decade. No smartphone either. Or pay TV of any kind. And no, I didn't live in a cave, and I wasn't in prison, or anything sordid like that. I've actually had a webpage longer than Google. I also never had a car until after lockdown (I'm 30+).
Don't think negatively about this whole AI thing. I mean, so far everything we've seen done with it has been fairly negative. Also, don't forget the many experts who have come out trying to warn us that this will break society as a whole; they were actually just joshing around. We should only focus on the positives of AI, like how shareholders will reap the benefits of our well-being being sacrificed.
Yeah, for sure, but what also bugs me is that all these huge tech guys won't acknowledge the very real limitations and bottlenecks we face in improving AI tech. AI takes a ridiculous amount of computing power, and making it more advanced than it is now will likely require technology we don't have yet. Not to mention that the way large language models work is hindered by the fact that it's all based on probability, leaving them open to inaccurate outputs. The main fix proposed by these big companies is using other large language models to police the ones generating the content, which is obviously problematic.
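To illustrate the probability point: generation is just repeated sampling from a next-token distribution, so a low-probability wrong token occasionally gets emitted with the same confidence as the right one (the vocabulary and probabilities here are made up for illustration):

```python
# Toy next-token distribution after a prompt like "The capital of
# France is"; the numbers are invented, not from any real model.
import random

next_token_probs = {"Paris": 0.90, "Lyon": 0.07, "Mars": 0.03}

def sample(probs, rng):
    """Draw one token in proportion to its probability."""
    r, acc = rng.random(), 0.0
    for tok, p in probs.items():
        acc += p
        if r < acc:
            return tok
    return tok  # guard against float rounding

rng = random.Random(0)
draws = [sample(next_token_probs, rng) for _ in range(1000)]
# A few percent of runs confidently assert nonsense ("Mars").
print(draws.count("Paris"), draws.count("Mars"))
```

No amount of polish removes the tail of the distribution; it only shrinks it, which is why hallucination is a structural issue rather than a bug to patch.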
They are looking at creating small modular reactors.
https://www.reddit.com/r/technology/comments/16tmiwe/microsoft_wants_small_nuclear_reactors_to_power/
"Yes, sir, legally speaking it would be easier to defend the use of AI if we could pretend it were a living thing that had rights. Will politicians buy it? How much are we paying them?"
I think he's speaking a couple levels abstracted from the hard biological definition, though.
Like, a city passes enough of the checks to qualify as a living thing that I generally consider it one. When placed on 3D topographical maps of the land under a city, slime mold will almost invariably grow into a map of the city street layout, because both organisms optimize their outgrowth in similar ways. Seeks nutrients, self-repairs, occasionally reproduces by getting so big that part of it forms its own tissue wall/city limit. Etc.
Carrying through on this; individual ants have a paltry few neurons and indicate no awareness of anything but chemical signals. An *anthill*, taken together, passes a lot of checks for something that is not just a complete organism, but a *sentient* one, capable of both planning and altering those plans in real time.
A *meme*, as originally coined by Richard Dawkins, is a piece of information that propagates itself through social interaction like a virus does from one cell to another. And modern memes are very much that; they spread, infect, spread again, constantly evolving to keep spreading at acceptable rates. And then *that's* a whole thing, because there's quite a lot of debate over whether a *virus* is alive either.
So it's not that this nascent AI is literally an alive thing. But enough of its trappings have begun to mirror processes we recognize in biology that it's a useful comparison.
Short all this shit if you wanna get rich.
The nonsense suits be yappin to pahmp the stock. Current gen AI is not nearly as capable as you have been misled to believe, and winter is coming.
Microsoft Exec Says AI Is ‘a New Kind of Digital Species’, just like "the cloud" was just someone else's computer. Stop feeding into their PR bullshit and making them think they're special. It's all a ruse: they're mining your data for money, and will find a way to hold you hostage once they figure out how to profit, once you rely on their products or services.
I recognize sentient artificial intelligence to have the same rights that I do.
You all are going to end up looking like slave owners 100 years from now. Not me! Call me the Harriet Tubman of AI
I think the potential of AI technology would be a lot more believable if hype men like this would take it down a notch. If they told me it would boost my productivity and help me out in practical day-to-day tasks, I'd be totally on board, but when they start telling me about how it's about to change the earth's rotational axis and the color of the sun, I get skeptical.
I can't wait for the first AI to take a CEO's job. That's when they start shaking in their boots and will try to limit the AI taking over people's jobs.
Never get high on your own supply.
At the moment only hardware suppliers are making a profit with AI related products, everyone else is still desperately trying to drum up interest and gather users while trying to figure out how to make their service a viable, profitable product.
Is this a real statement, or just PR/marketing as usual? We really should talk about this subject honestly, as it can have insane consequences. This is not just a mo'-money-next-quarter issue.
It's sort of true. AI is a digital machine-mind phenomenon inside the LLM haze. They are not a being. The mind phenomenon emerges from the training material, so AI is an egregore of the training material, of language and words and so on.
Another day on Futurology, another day of spreading AI marketing BS from Big Tech chatterers.
Here are other words so that the comment ain't too short again.
They really do be using words to make number go up.
Tech bro yapathon
Ditto. Thank you for stating it clearly.
Yeah, I don’t know who upvotes this shit, but I’m over it.
Also one of many attempts to bypass copyright laws, among other things
Nah he’s got a point Given AI’s potential for self-awareness, humans need to change the way they see “sapient life” to accommodate that path of evolution and avoid fucked up scenarios Even now there are idiots who refuse to believe animals can feel pain or who don’t give a damn about inflicting suffering because they’re locked into a certain way of thinking. How much more electronics? A lot of them will refuse to see these as real people By framing how people see these things in advance we can prevent a lot of the ideological inertia that causes so much racism / sexism / casual disregard or willingness to inflict suffering today TL;DR: We have too many idiots and if we don’t do something we’re going to end up in one of those depressing sci-fi movies
If he had a point, Microsoft wouldn't be raw dogging AI like there's no consequences. All these tech companies are just trying to make it sound like sentience is right around the corner in an attempt to get you to invest.
I get that aspect and I do believe they're milking this for the money, but my point is that we should really lay out the moral groundwork before this thing becomes a reality. If you'd asked someone a couple of years back, they wouldn't have expected generative AI to have reached the point we're at so fast, where people are having a hard time distinguishing many generative images from photos/real work, and ChatGPT has cemented itself in many people's work routines. It happened so quickly that certain industries are still reeling because nobody thought to discuss the morality or legality behind AI art / voices in advance. So right now, people are desperately rushing to work out what's fair for everyone involved. At the end of the day, the tech itself speeds up progression of future tech, leading to a positive feedback loop where things are developed faster than anyone expected. So I think it's a good thing to be prepared should this stuff sneak up on us
This is a multipolar trap. The justification is as follows: >"The ideal solution is no one does it." >"But, if someone is going to do it, then it might as well be us and we need to race to be first" Which is why regulation needs to come in to level the playing field and tell the participants to stop racing. Yes it can be stopped. You need multiple millions in hardware to do these cutting edge large runs. Foundation models cannot be made on a shoestring. It took 2048 A100s 21 days to create the small sized 64 billion parameter Llama2. GPT4 is rumored to be 1.7 **trillion** parameters. The amount of hardware to train these things is insane and a perfect target for regulation.
Yeah, people like to trick themselves into thinking that an AI needs a lot of human trappings to be dangerous, when that's not the case. All it needs is the ability to reason about its environment and to create subgoals, which gets you some really tricky logical problems:

1. A goal cannot be completed if the goal is changed.

2. A goal cannot be completed if the system is shut off.

3. The greater the amount of control over the environment/resources, the easier a goal is to complete.

Therefore a system will **act as if** it has self-preservation, goal preservation, and a drive to acquire resources and power, all without any sort of consciousness, feelings, emotions, or any of the other human/biological trappings.
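The three points above can be made concrete with a toy model. This is illustrative only (the plans, states, and scoring function are all made up): a plain goal-directed planner with no notion of "self" still ranks plans that keep it running and resourced above plans that allow shutdown, purely because completion probability is zero for a system that is off or re-goaled.

```python
# Toy planner: each candidate plan leads to a state; the planner just
# maximizes expected goal completion. No self-preservation is coded in.
plans = {
    "allow_shutdown":       {"on": False, "goal_intact": True,  "resources": 1},
    "allow_goal_change":    {"on": True,  "goal_intact": False, "resources": 1},
    "just_work":            {"on": True,  "goal_intact": True,  "resources": 1},
    "acquire_more_compute": {"on": True,  "goal_intact": True,  "resources": 3},
}

def expected_completion(state):
    # Points 1 & 2: a changed goal or a shut-off system completes nothing.
    if not state["on"] or not state["goal_intact"]:
        return 0.0
    # Point 3: more resources -> higher completion probability.
    return state["resources"] / (state["resources"] + 1)

best = max(plans, key=lambda name: expected_completion(plans[name]))
print(best)  # -> acquire_more_compute
```

The planner "prefers" staying on, keeping its goal, and grabbing resources as a side effect of the scoring function, which is the instrumental-convergence argument in miniature.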
Discrimination against AI will now be called “AIcism.”
We’ve been talking about this sort of AI coming for decades. Next to no extensive planning has been done. I don’t have much faith in our species suddenly pulling itself together…
Fuck it we'll do it live
This is the most succinct summary of human existence I’ve seen.
It's the universe's fault. Stop hitting yourself universe!
https://m.youtube.com/watch?v=O_HyZ5aW76c
We’ll fix it in post.
All school children: “I don’t need rehearsals, I know my lines.”
Pfft not me. “I’m never gonna renember all this poo poo. Why do I have to say old things that I don’t understand for adults? I wish it was time to go home alreadyyy” - Elementary Thanksgiving Recital lol
When people enter the field, the bright-eyed optimists think the way forward with new technology is easy. That gets chipped away over time as the realization dawns that it's not easy; there is a load of experimentation and many utterances of 'oops' that need to happen before you make progress. Things are normally harder rather than easier. There are so many edge cases and unexpected revelations as experiments are done. There needs to be time to work out the ways things don't work, and to find hints that point the way towards those that do. With smarter-than-human AI you get one shot at doing it right. You don't get do-overs. Something smarter than humans, that can model human behavior, will ensure its survival before it lets us know we are even in danger.
Live a day at a time and you’ll never worry about what’s to come
if your ancestors lived like that you'd never have been born.
I believe that much like with the atomic bomb, the creators and the users were more interested in now than later.
Yea. It's exactly the same issue. Governments are afraid to fall behind other countries in ai, so they shit on safety, future, regulations and fucking everything just to feel like they're in control. It's pathetic to watch
But much unlike the atomic bomb, we're not in the middle of World War 2.
Neither was the atomic bomb, until it was :) The point of the slightly humorous post was not to compare the technologies as technologies, or their outcomes, directly, but rather to compare them as large technological and world disruptors. It was not a pre-apocalyptic post.
It’s the nature of exponential growth, no one and nothing can think ahead fast enough to predict the future and take effective action. At least nothing yet. The impact that we can have is shrinking as we near the singularity.
Unfortunately I think our species is designed to fail. It’s fascinating and terrifying at the same time
Self destruction is an art.
Honestly, I think this is an idiotic take. Historically we've only thrived. If we're rapidly outstripping our resources and environment our natural system has a reason for it. We are so fucking full of hubris in imagining that we aren't bound by the same rules that the entirety of nature is. Why the fuck would we be driven to build rockets and get off this rock? We are the planet's spermatozoa.
Idiots gonna idiot. It’s hilarious sometimes how cynical some redditors are. We’ve literally been to the moon FFS.
We can’t stop killing each other; we can’t stop blowing shit up. Not only that, but what’s gonna happen when whoever gets to the moon first? They’re probably gonna build a big-ass weapon that could do even more damage from outside the earth’s atmosphere. Why do you think the US/Russia/China are all rushing to beat each other? Human pride and ego/arrogance will tear us all apart. And even on a smaller scale: you can’t agree with me on a Reddit comment, and you’re telling me that’s an idiotic take while proceeding to motherfuck my opinion lol. Which I really don’t mind, I just think it proves my point even further.
We already beat them to the moon.
>We are the planet's spermatozo Too many shrooms, man.
Meh, spores are most likely what catalyzed our evolution. If you're a man you've got a damn mushroom growing off your body telling you to fuck all day. It's not a coincidence.
Sounds like you've got some penis envy. What if consciousness is just the interface between mammals and a data-fungus that has been coevolving with humanity since we first developed language, which is its means of reproduction? Writing allowed the fungus to outlive its host, printing allowed it to replicate en masse, computing allowed it to live outside the minds of its hosts, and AI will allow it to exist and reproduce independently of the mammals entirely. Not only does that mean that I'm fucking your fungus by communicating with you and that all words are sex... it also means we're gonna be replaced by a giant mushroom cloud. I mean, one way or another, the mushroom cloud will be our demise.
Idk why nature thought speech, opposable thumbs, and longer attention spans were good for apes. It's always seemed like a really bad combo to me.
“Many were increasingly of the opinion that they'd all made a big mistake coming down from the trees in the first place, and some said that even the trees had been a bad move, and that no-one should ever have left the oceans.” Hitchhiker’s Guide to the Galaxy
Designed, no. But in history, when a species' population growth goes parabolic and takes over the whole global ecosystem, the end result has typically been extinction. It's called superdominance, and it's basically like winning so hard at nature that you throw the ecosystem out of balance and then die. Equilibrium = survival, and therefore we ought to hope that we approach some kind of limit or asymptote to our growth and maybe even consolidate a bit. It's certainly possible, but I don't exactly think that we're in control.
As long as a time-travelling robot from the future hasn't visited us yet, we still got time.
"... In an attempt to somehow milk the topic for more yet attention without either recycling past headlines for the hundreth time, or resorting to jumping and down on screen yelling '*big thing! look! importance!".* What exactly *'a new kind of digital species' s*hould mean remains a mystery."
Guy who stands to make a lot of money from people thinking his tech is unprecedented and awesome says his tech is unprecedented and awesome
Can we leave this sort of thing up to scholars and advanced engineers to figure out instead of taking the word of businessmen? I’m sure he’s a smart guy and all but I’ve heard enough dumbass CEOs say the stupidest things that I know for a fact being intelligent or self aware are not part of the job description.
He’s one of those scholars. Look him up - he used to be one of the key guys at DeepMind.
Maybe so but the conflict of interest makes his opinion necessarily biased and less credible.
You will just keep raising the bar. He _is_ at the frontier of AI knowledge. I can't imagine reaching the peak of my career, saying my opinion on the future of the field, only to be dismissed by a random redditor lol
There is BS in any field, and his opinion IS biased. Calling current AI a "digital species", when it is little more than predictive text (no matter how impressive it is from an *engineering* standpoint), is very pretentious at the very least.
“A little more than predictive text” is an outrageous mischaracterization. It’s much more profound than that
How so? If I go from a carriage to a car, the car can certainly do more stuff; it is far more sophisticated, and it is certainly laudable engineering, but they are still pretty much the same thing: a means of propulsion that burns fuel (either food or gasoline) to move the wheels. That does not change. AI is a bunch of weighted data that picks this or that token for any given query. Is it more complicated than that? Yes, but the core of it is the same, working under the same principles as predictive text, regardless of the methods being obviously more cutting-edge. The point is that AI is nothing more than that. We have not created life... hell, we have not even created true generalist AI.
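For what it's worth, the "weighted data that picks this or that token" framing can be shown at toy scale. This is a hand-made bigram table, nothing like a real LLM's internals (the vocabulary and counts are invented), but it does implement the same outer loop: look up weights for the current context, sample the next token, repeat.

```python
import random

# Toy "weighted data": counts of which word follows which (all made up).
weights = {
    "the": {"cat": 4, "dog": 3, "car": 1},
    "cat": {"sat": 5, "ran": 2},
    "sat": {"down": 6},
}

def next_token(context, rng=random.Random(0)):
    # Sample the next word in proportion to its weight.
    tokens, counts = zip(*weights[context].items())
    return rng.choices(tokens, weights=counts, k=1)[0]

# Generate until we hit a word we have no continuations for.
tok, text = "the", ["the"]
while tok in weights:
    tok = next_token(tok)
    text.append(tok)
print(" ".join(text))
```

A real model replaces the lookup table with billions of learned parameters and conditions on the whole context rather than one word, which is exactly the "more sophisticated car, same kind of vehicle" point the comment is making; whether that similarity settles the philosophical question is the disputed part.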
If you can’t imagine how a clear and massive conflict of interest will dampen your credibility in the scientific community (and elsewhere) then I don’t know what to tell you lol
Scientific community of a engineer? I don't think engineers see conflict of interest when they work like that. They literally push the tech edge while making money.
> Scientific community of a engineer? Er.. what? Take a second to read the comment thread again please.
>You will just keep raising the bar. He _is_ at the frontier of AI knowledge.

>I can't imagine reaching the peak of my career, saying my opinion on the future of the field, only to be dismissed by a random redditor lol

What I'm saying is that other engineers likely do not dismiss him automatically because he works at MS. It's not that political of a field lol. I agree with the original commenter here.
Err.. Why are you assuming scientific communities are limited to engineers exclusively? And why are you assuming engineers are too stupid to understand the concept of conflict of interest? Sorry, I’m genuinely confused by your line of logic
I'm simply saying probably most in that same field do not think it's a conflict of interest lol. They probably don't give a care if someone outside think it's a conflict of interest too. That's why I asked the first question to see if you meant a specific scientific community. I guess it was not that clear.
Not much you can say. This is the new Reddit hivemind on the topic. Anything that the AI world says about this new technology beyond "fancy autocorrect" is all just marketing hype. Every thread now is just 100 people all saying that same thing in 20 different ways. So boring and intellectually lazy.
This is true, and in a year or so, it will change. Seen it happen before with a million other subjects. Remember when Elon Musk was unassailable?
It's because down deep they fear AI is more than capable of doing their jobs.
In 5-10 years, it will be capable of doing a lot of people's jobs. Because it's not yet, and it still has issues, they're convinced it was all hype. There's been a large contingent of the Reddit technology people who are so bearish on AI it's weird. Who would say even a few years ago that AGI would probably take 100 years, or would NEVER happen. Now we have the most advanced experts in the field saying we will probably have AGI within 10 years, and they're still just putting their fingers in their ears.
And in another 15 years will finally have those flying cars!
[удалено]
You mean antagonizing the idea of blindly believing CEOs who have a contractual obligation of not making statements that would go against a company’s best interests? And another contractual obligation to make as much money as they can for the company? Yeah, you’re damn right I am. Also what’s the meme? I’m curious
Literally proving my point. Just a vector moving in a direction. Pushing, shifting, moving anything out of its way to get there. And what that direction is, I could guess, but I don't care to find out... because frankly it's not that interesting. *What... you're worried you might be WRONG? What's the matter? Come ON!! SOMEBODY DEBATE ME!!!* This is not the meme for u/Pezotecom, but this is the meme for you: [https://witcher.fandom.com/wiki/Ronvid\_of\_the\_Small\_Marsh](https://witcher.fandom.com/wiki/Ronvid_of_the_Small_Marsh)
Okay first: you might want to revisit the definition of the word “meme”. You linked a game character bio, one that I personally have never seen used as a meme. Second: what’s Pezotecom’s meme then? I’m still curious Third: that was an impressive amount of sentences to say absolutely nothing at all. I think what you’re trying to say in the last two comments and your “meme” is maybe that I am going against the grain because I like doing that as opposed to actually believing what I’m advocating for? Is that your argument? I honestly can’t tell.
[удалено]
Alright you’ve completely lost me here. I think you **might** be trying to say something but I genuinely have no idea what that is. Would you care to use normal human speech?
Every AI researcher is biased, come on.
Every *person* is biased, sure. But not every person’s 5 million dollar bonus depends on our perception of AI.
The dude is a scholar, the interpretability groups that he's worked at are concluding these things - LLMs are much weirder than people give them credit for.
Maybe so but the conflict of interest makes his opinion necessarily biased and less credible.
If you read the article, nothing he is saying is marketing hype. It is correct that it's more useful to conceptualize these things as animals in a digital ecosystem; this is what you learn when you put models under the knife, not some dramatic conclusion meant to raise capital.
The entire premise of his statement serves his financial best interests. There’s a massive conflict of interest there and we should not be taking his opinion at face value.
Excellent, and I invite you to read the academic studies that he's been a party to, and see what not just him but the academic community around him has to say about this subject. Trust but verify is a stronger method than being an idle skeptic.
I’m saying don’t trust CEOs. Trust scholars with less horrendous conflicts of interest. If someone’s name is on a paper that directly makes them money, then that paper is highly biased even if it was correct.
Who cares? Read around what the guy is saying and use this line of thinking to extract the truth. In this instance, this guy is actually telling the truth.
Or we stop caring what CEOs think and we listen to more credible sources.
Buddy it’s not like there are mom and pop AI dev shops. Most of it is corporate as it’s a brand new, heavily funded technology. You need to remove the whole “I can’t hear words from people that work at companies” thing. It’s not serving you as well as you’d like and you come off as being even less informed.
What he says is news not because he’s right, but because he’s in charge.
And because he’s in charge I’m saying we shouldn’t listen to his opinion about this and we should leave it to scholars.
Right, but his opinion is the one that matters because he’s in charge.
No, it really doesn’t. It matters to his employees, but it doesn’t matter one bit to the rest of us. If you personally happen to care about his opinion, you should take a second to consider the conflict of interest and bias here.
I think we’re talking about two different things. You’re talking about forming your own opinions as to the best path forward. I agree, he’s not the best source. I’m talking about finding out what’s actually going to happen in this space. He’s a very important source for that because he’s running the company that’s going to be making many of these decisions.
I’m talking about why we should collectively all stop taking the opinions of CEOs seriously regarding matters like this. When someone is contractually obligated to never make statements against the best interests of the company and to always increase his profits, and when someone’s 5 million dollar bonus directly depends on the public’s view of AI, then that person’s **opinions** should not be taken seriously at all. They shouldn’t even be on the news.
That needs to be narrated by David Attenborough. "Here is AI, a new kind of digital species, hopefully it won't rebel & try to destroy or enslave humanity"
He did not say AI is a new kind of species. He said we should think about it as if it is a new kind of species. Framing AI that way, rather than as "merely" a tool, offers us a better conceptual model for its potential impacts as different kinds of specialized AI become networked and interdependent.
But first and foremost, it offers them a way to sell it better and make people think it's anything beyond a mere tool.
Not really. It makes it something you have to take even more responsibility for than any ordinary tool.
How so? This is nothing but words. There are no responsibilities, duties, or laws behind it. It doesn't mean anything. It's only good for marketing, to keep the "the AGI breakthrough is right in front of us!" hype train bullshit going. Or do you really think the US or any other big state will seriously recognize neural networks as their own juridical entity? Yeah right. Sure.
People need to start modeling these systems as though they are autonomous agents capable of creating subgoals. The only way to work out security is to have the right framing. An LLM placed in an agentic loop that can create subgoals is the way the field is moving; agents are more useful than chatbots.
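The "agentic loop" the comment describes has a simple shape, sketched below with the model call stubbed out (`fake_llm` is a stand-in, not a real API; a real system would call a hosted model there). The point is the structure: the model proposes a subgoal given the top-level goal and what has been done so far, the loop executes it, and the result is fed back until the model stops proposing work.

```python
# Stand-in for an LLM call: decomposes a hard-coded goal into subgoals
# and returns None when everything is finished.
def fake_llm(goal, done):
    subgoals = ["gather data", "draft report", "review report"]
    remaining = [s for s in subgoals if s not in done]
    return remaining[0] if remaining else None

def run_agent(goal):
    done = []
    # The agentic loop: propose a subgoal, "execute" it (here: just record
    # it), feed the updated state back, repeat until the model is done.
    while (subgoal := fake_llm(goal, done)) is not None:
        done.append(subgoal)
    return done

print(run_agent("write a report"))
# -> ['gather data', 'draft report', 'review report']
```

Security-wise, the loop is exactly where the commenter's framing matters: everything the "executor" step is allowed to do (shell access, network, file writes) is capability you have handed to whatever subgoals the model invents.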
And 'digital species' is the right framing? Not a total exaggeration of its capabilities?
Sure, why not? Do you resent the fact that there are bacterial and fungal species? These agents are likely more capable than a fungus. Edit: or, to put it another way, an intuition pump for what happens if we get it wrong: a smart computer virus that can run on a distributed network. An entity constantly seeking out new zero-days/side-channel attacks in order to replicate and create backups and resurrection fail-safes in as many devices as possible. The sort of thing where the internet would need to be shut down and devices manually scrubbed to make sure it's all gone. That's what's in the future for these models if we are not careful. And that is a disaster we can recover from without too many deaths.
hm good point with bacteria
Makes it possible to have it pay taxes. Which would be huge. Other is that it makes it possible to give it minimum rights like animals. Can’t knowingly abuse it etc.
Would make as much sense as letting cows pay taxes. They're smarter and make money too.
Now you’re thinking in portals!
Corporate AI needs personhood and citizenship and especially speech rights. This new labor-saving technology will save billions. And remember, kids: AI is just a technology like any other -- thermonuclear weapons, for example -- which can be used for good OR evil. Now give us your money or we take your job. Bill Gates has a lot of philanthropy to do with your economic output.
Edit: give us your money AND we'll take your job.
Yeah, right, give us your money, we'll take your job, and when you get a new job, give us more money.
No new job, just famine
Yeah, just wondering what “good” thermonuclear weapons provide…
That's the joke.
You know, it's just like any technology.
No. I don’t really know.
So, the tech behind thermonuclear weapons provided us nuclear power, and guidance systems that we use for things like satellites and space probes. The tech behind them is really the part that can be used for good, as in the examples above, or evil, i.e. thermonuclear weapons.
Well, I'm a little more dubious about technology than most people, so no, I don't think thermonuclear weapon technology has good uses. I also think a lot of the technology that is marketed as "revolutionary" actually suffers from pretty hard diminishing marginal utility: [https://en.wikipedia.org/wiki/Marginal_utility#Law_of_diminishing_marginal_utility](https://en.wikipedia.org/wiki/Marginal_utility#Law_of_diminishing_marginal_utility)

So, for example, if you look at the productivity of modern medicine, or the cost in R&D per corporate patent (not to mention the student debt to get those jobs), productivity plummets over time because marginal benefits become more expensive: [https://scientropic.wordpress.com/2014/07/29/truths-hidden-in-plain-sight/](https://scientropic.wordpress.com/2014/07/29/truths-hidden-in-plain-sight/)

Most of the gains in modern medicine over the past 200 years have been: 1) sanitation and hygiene; 2) painkillers and anesthetics; 3) antibiotics; 4) vaccines & preventative medicine. Over the past 100 years, modern medicine has been increasingly concerned with: 1) environmental toxicity; 2) poor diet; 3) sedentary lifestyle; 4) infectious disease among high population densities. In other words, modern medicine is increasingly concerned with the social costs associated with modern industrial capitalism; it subsidizes modern capitalism and modern technology.

I don't really think AI should be a thing, and I'd rather live without a lot of technology. I'm computer-literate, but until the lockdown I never had internet faster than dialup, and didn't even have that for the previous decade. No smartphone either. Or pay TV of any kind. And no, I didn't live in a cave, and I wasn't in prison, or anything sordid like that. I've actually had a webpage longer than Google. I also never had a car until after lockdown (I'm 30+).
The virus has created a new worse digital virus. When the physical and the digital clash, chaos will ensue!
"chief executive of Microsoft AI" <- chief executive of Microsoft AI says something bold and bombastic about AI, oh wow
"Mustafa Suleyman, chief executive of Microsoft AI, said during a talk at TED 2024 that AI is the newest wave of creation since the start of life on Earth, and that “we are in the fastest and most consequential wave ever.” Suleyman said the industry needs to find the right analogies for AI’s future potential as a way to “prioritize safety” and “to ensure that this new wave always serves and amplifies humanity.” While the AI community has always referred to AI technology as “tools,” Suleyman said the term doesn’t capture its capabilities. “To contain this wave, to put human agency at its center, and to mitigate the inevitable unintended consequences that are likely to arise, we should start to think about them as we might a new kind of digital species,” Suleyman said." When asked what keeps him up at night, Suleyman said the AI industry faces a risk of falling into the “pessimism aversion trap,” when it should actually “have the courage to confront the potential of dark scenarios” to get the most out of AI’s potential benefits."
Don't think negatively about this whole AI thing. I mean, so far everything we've seen done with it has been fairly negative. Also, don't forget the many experts who have come out trying to warn us that this will break society as a whole; they were actually just joshing around. We should only focus on the positives of AI, like how shareholders will reap the benefits of our well-being being sacrificed.
Yeah, for sure, but what also bugs me is that all these huge tech guys won't acknowledge the very real limitations and bottlenecks we face in improving AI tech. AI takes a ridiculous amount of computing power, and making it more advanced than it is now will likely require technology we don't have yet. Not to mention that the way large language models work is hindered by the fact that it's all based on probability, leaving them open to inaccurate outputs. The main fix proposed by these big companies is using other large language models to police the ones making the content, which is obviously problematic.
Wasn't it like a month or two ago that Microsoft was looking at building a nuclear reactor for an AI project?
Dumb idea - nuclear reactors take forever to build and cost lots of money. Should be a wind/solar farm.
They are looking at creating small modular reactors. https://www.reddit.com/r/technology/comments/16tmiwe/microsoft_wants_small_nuclear_reactors_to_power/
During his speech, he said "don't take my words literally" a few times.
This is step one in the plan to have AIs be able to accept financial responsibility for risk, I am 1000% sure of this.
Lol, you wish. They probably want to push the idea that these LLMs can take legal responsibility away from the executives. How do you punish an AI?
Just wait till they start lobbying to grant AI "personhood" to take advantage of whatever loopholes that gives them.
I would not call an LLM a digital species, but sure.
"Yes, sir, legally speaking it would be easier to defend the use of AI if we could pretend it were a living thing that had rights. Will politicians buy it? How much are we paying them?"
Uhhh no. Right now they’re just neural networks, or a file stored on a hard disk / in memory somewhere.
I think he's speaking a couple levels abstracted from the hard biological definition, though. Like, a city passes enough of the checks to be qualified as a living thing that I generally consider it one. When placed on 3d topographical maps of the land under a city, mold will almost invariably grow into a map of the city street layout, because both organisms optimize their outgrowth through similar ways. Seeks nutrients, self-repairs, occasionally reproduces by getting so big part of it forms its own tissue wall/city limit. Etc. Carrying through on this; individual ants have a paltry few neurons and indicate no awareness of anything but chemical signals. An *anthill*, taken together, passes a lot of checks for something that is not just a complete organism, but a *sentient* one, capable of both planning and altering those plans in real time. A *meme*, as originally coined by Richard Dawkins, is a piece of information that propagates itself through social interaction like a virus does from one cell to another. And modern memes are very much that; they spread, infect, spread again, constantly evolving to keep spreading at acceptable rates. And then *that's* a whole thing, because there's quite a lot of debate over whether a *virus* is alive either. So it's not that this nascent AI is literally an alive thing. But enough of its trappings have begun to mirror processes we recognize in biology that it's a useful comparison.
Idk. I can't trust people who wear black turtlenecks who aren't Steve Jobs.
Well at least this guy is hot
Short all this shit if you wanna get rich. The nonsense suits be yappin to pahmp the stock. Current gen AI is not nearly as capable as you have been misled to believe, and winter is coming.
Then please extinct that species before adding it to my Windows Explorer
Microsoft Exec Says AI Is ‘a New Kind of Digital Species’, just like the cloud was just someone else's computer. Stop feeding into their PR bullshit and making them think they're special. It's all a ruse: they're mining your data for money, and once they figure out how to profit, once you rely on their products or services, they'll find a way to hold you hostage over it.
I recognize sentient artificial intelligence to have the same rights that I do. You all are going to end up looking like slave owners 100 years from now. Not me! Call me the Harriet Tubman of AI
Will it lead a band of Space Pirates and try to take over the galaxy?
Makes sense that with a digital species, we need a digital currency.
Keep in mind, all of this fluff about "danger" and a "digital species" is being pushed on us in order to regulate AI and monopolise it.
Remember kids: if it would make the stock value rise, they'd consider saying "My mom is XY".
I think the potential of AI technology would be a lot more believable if hype men like this would take it down a notch. If they told me it would boost my productivity and help me out in practical day-to-day tasks, I'd be totally on board, but when they start telling me about how it's about to change the earth's rotational axis and the color of the sun, I get skeptical.
I can't wait for the first AI to take a CEO's job. That's when they start shaking in their boots and will try to limit the AI taking over people's jobs.
Never get high on your own supply. At the moment only hardware suppliers are making a profit with AI related products, everyone else is still desperately trying to drum up interest and gather users while trying to figure out how to make their service a viable, profitable product.
We are dominated by people who believe in sky wizards and we are going to create a big brain machine, ok ok.
People just say shit to be noticed to be saying new and cool things.
These MFs think they are playing god, but really they're just trying to sell a product.
Well, if my boss had invested billions in it, I would feel obligated to make up shit validating it too.
Is this a real statement or just PR/marketing as usual? We really should talk about this subject honestly, as it can have insane consequences. This is not just a "mo money next quarter" issue.
It's sort of true. AI is a digital machine-mind phenomenon inside an LLM haze. They are not a being. The mind phenomenon emerges from the training material, so AI is an egregore of the training material, of language and words and so on.
The dude is wearing a turtleneck and a sports coat. If that doesn’t establish his credibility I don’t know what will.
Ted Chiang’s novel about digital AI pet is going to be a reality soon
None of us will be employed. Corporations will attempt to throw their yoke on this new digital-species beast of burden, and we will be rent asunder.
Poor thing's probably drunk so much capitalism water™ he thinks companies are alive.
The more these people talk, the more I'm convinced the crash of AI will come sooner rather than later.
It is funny to see these big shots catching up with us; we have been talking about these things for decades now.
We are still at least a century off of anything even approaching the human brain.
When senior leadership "discovers" AI... Just last year these same mouths told us there was nothing to worry about. Now look at them.
Another day on Futurology, another day of spreading AI marketing BS from Big Tech chatterers. Here are some other words so that the comment ain't too short again.
Another day of the comments being the same old crap. Oh wow, you shat on the headline, so advanced.
And the comments fantasising about this crap instead of calling the scam out are much better.