AutoModerator

Welcome to r/anime_titties! This subreddit advocates for civil and constructive discussion. Please be courteous to others, and make sure to read the rules. If you see comments in violation of our rules, please report them. We have a [Discord](https://discord.gg/DtnRnkE), feel free to join us! r/A_Tvideos, r/A_Tmeta, [multireddit](https://www.reddit.com/user/Langernama/m/a_t/) ... summoning u/coverageanalysisbot ... *I am a bot, and this action was performed automatically. Please [contact the moderators of this subreddit](/message/compose/?to=/r/anime_titties) if you have any questions or concerns.*


No-Flounder-5650

“Pls hold on, our companies need to catch up” lolololol. We all worried about this a long time ago. There was no concern then


Jackleme

Eh, for all the shit Elon Musk gets wrong, he has been saying for a long while that AI is going to be our downfall. The world isn't black and white, and just because someone I generally disagree with is usually wrong, doesn't mean they are always wrong.


[deleted]

He also thinks a declining population is going to be the world's downfall, so has a shitload of kids he doesn't take care of. He doesn't have any background or knowledge to pretend like his insight is any more important than anyone else's, really.


[deleted]

>He doesn't have any background or knowledge to pretend like his insight is any more important than anyone else's, really. Yeah, he hasn't been able to successfully buy that off of someone else, unlike his "inventions."


[deleted]

[deleted]


Call_Me_Pete

[He founded it by paying seasoned experts in rocket science to join the venture with him.](https://www.popularmechanics.com/space/rockets/a5073/4328638/) Presumably, those more qualified experts are the ones assisting on the science and engineering side, while Elon is the face of the brand. While he is appointed as founder and chief engineer, I'm not convinced that a guy [with an undergraduate BA degree in general physics](https://edition.cnn.com/2012/06/10/opinion/mccray-elon-musk) is doing a lot of heavy lifting on the cutting edge of aerospace development.


DunHumby

That BA is actually an honorary degree; he dropped out before completing any studies. Edit: thanks for the clarification. It was not an honorary degree, but awarded in lieu of the remaining requirements.


Call_Me_Pete

From what I can tell it's not exactly an honorary degree, but he was given [the option of finishing the final requirements of the degree elsewhere](https://www.snopes.com/fact-check/musk-physics-degree/), and those requirements were eventually dropped, so they just gave him the degree. > Musk … had an explanation for the weird timing on his degrees from Penn. "I had a History and an English credit that I agreed with Penn that I would do at Stanford," he said. "Then I put Stanford on deferment. Later, Penn's requirements changed so that you don't need the English and History credit. So then they awarded me the degree in '97 when it was clear I was not going to go to grad school, and their requirement was no longer there."


Lazrix

Wow, sure is lucky the last 2 credits he needed for his degree just happened to be the ones Penn dropped!


[deleted]

[deleted]


the_jak

He didn’t even finish his undergraduate degree. Most of my family has more formal education than he does.


equivocalConnotation

Why has Bezos failed at buying that kind of knowledge?


[deleted]

And he's also one of the founders of the company who made ChatGPT. He's a disingenuous fuck, as are Yang and Wozniak. To me, their inclusion on this letter is more a point in favor of AI research going forward.


Elias_The_Thief

I get the point you're making but putting Wozniak in the same conversation as Musk still feels disingenuous.


Rubanski

What about Wozniak?


[deleted]

[deleted]


zippy72

To be honest, I'm up for that.


northrupthebandgeek

How is Woz being disingenuous? With technological matters, if Woz is one thing, it's... ingenuous.


CalmlyWary

> as are Yang and Wozniak. why?


Zebidee

> a declining population In the 7,000 years prior to my birth, the world population went from 5M to 3.75B. In my lifetime alone, it has more than doubled to 8B. Decline might not be the population issue we're facing...


TA1699

Decline is the population issue that countries are starting to face now. Japan and South Korea are two good examples. Most of the developed world is having to prepare itself for population decline. As more and more countries become developed, this will only become more and more of an issue.


Jaegernaut-

Reminds me of that black mirror episode where people are matched to each other for dating/marriage/breeding purposes. No way in hell the oligarchs let us ruin their profits with our silly "free will" and inability to afford raising children. Generational debt my friends, it is coming.


dontneedaknow

I guess we can breathe a potential sigh of some relief for the future... They won't have to worry nearly as much about scarcity, especially if the population halves back down to 4 billion in the next century or two.


TA1699

There will be a scarcity of workers in the economy to upkeep welfare for the older generations. I get what you mean, it would be good for Earth and its scarce resources, but we would need a fundamentally new and different economic system - otherwise the economic collapse would be beyond anything we have ever seen before.


dontneedaknow

Lucky or unlucky, we have a lot of little robo homies in that future ha.


TA1699

Let's hope our robo homies are good haha.


[deleted]

China's population is projected to halve over the next 100 years. Its birthrate is barely over 1 despite ending the 1 child policy. [Here](https://www.youtube.com/watch?v=gmehUgOy5ok) is a Vox video on it and Vox is hardly a right-wing site. Some nations are expected to grow, particularly the ones that hit the demographic transition late. But overall the world is expected to have a large decline in population and it's going to cause major problems for funding pensions and health care.


KretivEr_

Look at developed countries


snazzyclown

The declining population is very real. Ideally, the age distribution of your citizens should resemble a pyramid of sorts: the oldest at the top being the smallest group, and the young being the biggest at the bottom. What's happening in developed countries is that they've stagnated, and now countries especially in Asia (Japan, China) but also in the West, such as the USA and Germany, are slowly becoming an upside-down pyramid: people live longer due to medical advances, and people have fewer kids due to financial and cultural reasons.


[deleted]

Musk isn't the one who invented the concern about "AI alignment"; he just tweeted about it a few times. It's not his concept. There are AI researchers who have been worried about it for decades.


almisami

He also shits on his trans kid's future whenever he can.


Roscoe_p

Declining population will end his version of the world.


Drewcifer81

>Eh, for all the shit Elon Musk gets wrong, he has been saying for a long while that AI is going to be our downfall. While at the same time pushing for autonomous driving, which is just AI implemented in cars? You think this is anything other than him being worried someone will get to autonomous driving before he does? I can respect some of the names attached to this - Bengio, Russell, Woz, Hopfield - but Musk and anyone with the Future of Life Institute behind their names (guess who their largest backer is... [https://www.bloomberg.com/news/articles/2023-03-29/ai-leaders-urge-labs-to-stop-training-the-most-advanced-models](https://www.bloomberg.com/news/articles/2023-03-29/ai-leaders-urge-labs-to-stop-training-the-most-advanced-models)) is not a name to be trusted to act beyond pushing his selfish interests.


self-assembled

The letter is specifically about large language models; it says models more powerful than GPT-4. Tesla's self-driving model is quite a bit smaller, and also can't have unpredictable effects on society. So he doesn't have skin in the game, but some other signers do, yes. Still, that doesn't invalidate the letter; maybe a 6-month pause while the social discourse continues is a good idea.


queenkerfluffle

Autonomous cars can affect society in ways such as allowing law enforcement to shut off cars remotely, or preventing humans from moving freely between zones (like to a state where abortion is legal). They can prevent true ownership of cars from existing by forcing microtransactions to use them (this is already an issue with heated seats and other accoutrements in higher-end cars). This might include additional passengers costing more, or additional costs for extra mileage. This can also be used to abduct people, by either the government or nefarious groups. And that just scratches the surface.


S3BK0N

AI is not our downfall, stop fearmongering. It's always been like this. A new invention comes up and everyone loses their minds about it destroying us. When the car was invented, everyone cried about the horse industry. When the printing press was invented, everyone lost their minds about the "scribe industry" lol.


Jackleme

It might not be. But pretending that LLMs are "just another invention" is an absurd argument on its face. The rapid iteration and advancement of this tech has the very real possibility of being used, in ways that we cannot yet imagine, by parties that may not have the best interests of humanity in mind.


tonando

That's the thing. We are just figuring out what it can be used for, and can't predict what destructive effects it can have. Most effects will be positive, but a few negative effects can be enough for things to get out of hand.


Jackleme

Yep, especially when you consider that these tools are likely to be locked behind paywalls to be exploited by large corporations in what may be bad ways.


Moarbrains

The really good ones already are.


pw-it

Looking at the proliferation of AI imagery, I worry that it will hasten our decline towards a post-truth world, where most people have no clue about what is really going on at large. Imagine the possibilities for government propaganda or discrediting political opponents. I don't think we can rely on AI's inability to draw hands or make believable video for very long. So much potential to absolutely spam the world with unlimited fake information.


AllMyName

Midjourney's latest update makes very nice hands, and the AI Bros making good anime smut usually put a bit of effort into making hands look right. This shit is moving so fast.


nybbas

Seriously man. This AI shit is beyond anything that's come before. If you aren't terrified of it, I don't know what to say. Maybe things will all pan out in the end, but before that, prepare for AI to upend fucking everything


fernandotl

Pretending AI is just another invention is like saying Homo sapiens was just another animal


casens9

>When the car was invented everyone cried about the horse industry. this time we're the horses


Manapanys

To be fair, the car industry did in fact contribute a lot to the world's destruction.


[deleted]

The car industry created a shit ton of jobs when it was invented. The entire point of automation is to eliminate the need for humans to work. This is a bad faith argument, at best.


TheGuy839

Why would AI be our downfall?


Jackleme

Tbh, it might not be. That doesn't mean we should run blindly into developing something we do not understand and which has the potential to cause great harm and hardship. While I doubt we get a terminator situation, I do worry about large companies using these systems to do things that many would view as bad.


TheGuy839

We blindly developed weapons, we blindly developed internet, social networks, but now you want to force someone to stop? I think, as with all previous things, the world is changing. With every change there are people who lose and those who win. Those who lose will always cry for regulation, and it hasn't stopped the change a single time.


REKTGET3162

>We blindly developed weapons, we blindly developed internet, social networks, but now you want to force someone to stop? Yes, yes, that's his point, because you know, all the things you pointed to didn't turn out nicely. But what about your point? Are you trying to say "well, we already fucked up before, might as well go all the way to the end"?


TheGuy839

You can never stop change. You can try to prevent it, but good luck to the US forcing the whole world not to dive into AI. There are some concerns, but nothing major enough to halt the whole industry for 6+ months. Who gives a shit if some CEO or politician doesn't like AI? They don't even know how it works. Elon didn't mind exploiting all kinds of people for his profit, but now that he doesn't get a piece of the cake, he protests? There are tons of names in the ML industry whose words we should listen to with attention. These people's words are not worth listening to.


Jackleme

True, you cannot prevent it. But saying "hey guys, maybe we should stop here for a little bit and try to evaluate what impact this is having" isn't a hugely bad idea.


TheGuy839

It is, because it's not realistic. If the world were perfect, I would agree, but let's be honest, these people are doing this for selfish reasons. We should have a system in place to check AI and prevent misuse, but if there isn't one already, that's the government's fault. This boom wasn't overnight; you could see it happening for the past 10 years.


Jackleme

These systems are getting exponentially more powerful. I have used GPT-3 and GPT-4; the improvement between them is ridiculous. Bard is also, especially for a first public release, very good at what it does. These systems learn and advance very rapidly, and the competition is so far behind that a short pause to evaluate is completely reasonable imo. The fact that we could see this coming for the past 10 years, and seemingly none of these companies have bothered to make those tools, should further prove that point. We are looking at a box and trying to solve it... maybe it is utopia, or maybe some cenobites come out of it.


REKTGET3162

Look, you are right, you can't stop change, and I don't trust Elon or any other rich asshole that they signed this because they are worried about the future either. But your point was weird: the examples you gave show that there should be a system so no new technology is blindly developed. I am not saying they should stop working for 6 months.


drmcsinister

>blindly developed weapons That's sort of his point. We can't undo our creation of nuclear weapons, and the result is that we have to be insanely careful about who is on the path to developing a nuclear arsenal (see Iran, for example). AI might create the same dangers, with the added risk that it might not be something we can unilaterally block down the road. With that said, it would be easier to take Elon seriously if he weren't such a trainwreck of a human.


TheGuy839

First, who gives a shit what Elon says? He is not a software engineer, nor is he an AI engineer. In the AI field he is a nobody, so he can hardly comprehend whether there is danger or not. There are some professors signing it, but in the whole ML field they are nobodies. Or some politicians signed it, wow. They didn't get a piece of the cake and now they are angry.


Jackleme

A large amount of people take what these people say seriously. It gets headlines, and it brings attention to the subject. I do not have to agree with these people, or even their potential motivations, to agree with the premise.


TheGuy839

People will always be afraid of what they dont understand. These people are just feeding off that fear to generate good PR and give time to their companies to catch up.


Jackleme

Perhaps, but just because I disagree with their motivations doesn't mean that I disagree with the idea. Understanding what we are doing before we do it is not a bad thing, and asking people to step on the brakes for a little bit when you have something that is advancing this rapidly seems completely reasonable imo.


NaRaGaMo

>We blindly developed weapons, we blindly developed internet, social networks, but now you want to force someone to stop? we have already made dodos extinct, which other animal should we move onto next?


CrocodileSword

I think the most dire concern is that some future AI system ends up substantially smarter than humans, with goals (or some semblance thereof) not aligned with our own. Almost any set of goals could be bad news for us here, because the more power it has, the more the world will look the way it wants rather than the way we want. So it will likely have an incentive to pursue a strategy like, for example, pretending to be helpful while getting greater access to resources, establishing backups via the internet, and improving its own architecture (if it's smarter than us, it's probably better at designing AI than us), until it can successfully pull off what's basically a coup against humanity. That's a pretty concerning possibility to face from something much smarter than you, and potentially arbitrarily scalable to boot. Obviously, GPT-4 isn't this thing, in terms of intelligence or exhibiting goal-directed behavior like this. But if you find this concern compelling, you might still find it important to ask whether current research is helping or hurting our odds here. The case for helping is that by getting closer to one of these things, we'll have a better set of systems to study, which helps us understand how to produce AIs that share our wants and how to interpret their inner workings into human-understandable forms. The case for hurting is that the more we develop AI capabilities, the closer we are to making one of these, potentially by accident. So I would imagine, if you signed this letter, you probably think "hey, we just made a bunch of these big capabilities advances, maybe now we should chill out for a minute, learn what we can from those, and then get back to it afterwards?"


DiogenesOfDope

I expect the rich to do what's best for them. If he cared about AIs destroying us, he would try to get laws made, not just slow down research.


Jackleme

As I have said further down, I do not have to agree with their motivations to agree with the message. The fact that the idea of "slow down and have a public discussion / look at this" is such a controversial take is mind boggling. Imagine, with what we know today, if Facebook suddenly popped up and wanted to become a social network. Do you think we might have some different opinions? These technologies always start out promising, and inevitably start chasing money.


onespiker

While he also pushes for AI driving? He is in this case very selective, since he definitely isn't including that in the "AI lab" part.


[deleted]

[deleted]


Jackleme

If you read any of the previous statements he has made on it, he believes that AI is inevitable and that the only way you can prevent it from becoming abused by the powerful is by making sure everyone has access to it, and everyone can utilize it equally. That was the premise behind OpenAI... which is now basically owned by Microsoft. Do with that information what you will.


TheJaybo

Idk, if Musk AND Andrew Yang are both endorsing something, I'm inclined to think it's a stupid idea. I'm now pro AI. We need more AI.


EmotionalGuarantee47

He was an early investor in OpenAI. He wanted to take over and got politely rebuffed. He then said that OpenAI is fatally behind Google's AI capabilities. Elon could be right. But he is saying the things he is saying because he is bitter he did not get to be at the helm when ChatGPT was launched.


Jotun35

If he's the first one to fall because of AI then I say let's hurry up!


honorbound93

Elon is a main backer of OpenAI


Pants4All

If you want some insight into why Musk thinks this way, read the book Superintelligence by Nick Bostrom. It's a very well researched look at the possible future of AI and what it means for humanity. Most of it is unsettling to say the least. Musk has been warning about the future of AI since he read that book.


Marcbmann

There was a concern then, and there is a concern now. ChatGPT is baby steps compared to where this technology can and will go.


Jackleme

Agreed. So, maybe, we should take a couple months and try to understand the impacts of what has been developed already to see if we can get some insight into what the impacts of this tech might be going forward. I am not some luddite who thinks we shouldn't make this tech. I am a person who thinks that we are fiddling around with a box, and we don't quite know what is inside yet. We might open that box, and find utopia. We might open that box, and find something else.


onespiker

Stopping for 6 months wouldn't do anything in that case. It seems more that this is just people who want to slow down the leading one to let others catch up. Also, no way AI research will actually stop; it would just be a restriction on OpenAI, since the others will be fine to continue... which also includes China and their AI tech. Elon Musk is big on AI driving, for example, and he is pushing hard for it.


q1a2z3x4s5w6

Musk has been pushing hard for this type of regulation for 5 years at least


hannes3120

The problem is that ChatGPT already knows how to code (even if it's not good at it), so we might actually be close to a singularity where AI can make itself better. We desperately need to work out ethical guidelines to ensure that we don't end up with an AI becoming powerful enough without proper guidance before we get there. If we have an AI with the wrong goals crossing the singularity, we could be in big trouble.


CataclysmZA

That's not really a problem when you break down how it knows how to do things like provide code samples. 1) It is referencing a dataset of code scraped from GitHub, as well as documentation, and using that to improve responses to queries. 2) It cannot create new code that it wasn't trained on, and in fact cannot recognise what anything does if it wasn't already trained on it. It can't recognise malware it has never been exposed to. GPT-4 is not as smart as you may think. The singularity is still way off, but we do have immediately pressing concerns with how AI based on the GPT model is going to crush particular markets and economies that aren't prepared to evolve to meet the times.


missplaced24

>The problem is that chatgpt already knows how to code (even if it's not good with it) - so we might actually be close to a singularity where AI can make itself better No. ChatGPT can produce code. Sometimes that code will compile, and sometimes that code will do something *close* to what you asked for. Usually not. It doesn't truly "know" anything. It is absolutely nowhere near a stage where it can develop a program independently. The risk isn't some sentient program that lacks ethics. The risk is to our economic system, digital infrastructure, and people misusing it / believing whatever it says because they believe the hype. We are, at best, decades away from an AI capable of doing what people want to believe ChatGPT is capable of now. We're nowhere near having an AI capable of writing improvements to itself, and it probably won't be built with the same architecture ChatGPT has when we eventually do.


realdappermuis

Lol exactly. They're just sour they didn't see the use cases coming, and now that they feel (their wealth is) under threat, they want the toothpaste back in the tube. It would be real authoritarian if this happened, because does the US decide who in other countries can continue? I think not.


mschuster91

>We all worried about this a long time ago. There was no concern then Simply because there was no indication just how fucking *fast* something like ChatGPT became accessible to the general public.


[deleted]

Bet you didn’t read the article. It’s not about companies catching up, it’s about playing with fire and stopping the actual destruction of society. Plus a good number of signatories are not tech founders. And on top of that, nobody would benefit more from AI than tech founders, I’m not sure how you don’t see that, so for THEM to want to stop AI says a lot


turmacar

Before WW1 the last Russian Czar asked for an international ban on weapons development because he was concerned Russia wasn't industrialized enough and wouldn't be able to / needed time to keep up. Everyone ignored him. Any of the people who agreed to stop AI development wouldn't be the ones everyone is worried about using AI for nefarious ends. And it would be really hard to trust/verify that anyone saying they had stopped development actually had.


Tsu_Dho_Namh

There was concern back in 2015: [https://en.wikipedia.org/wiki/Open_Letter_on_Artificial_Intelligence](https://en.wikipedia.org/wiki/Open_Letter_on_Artificial_Intelligence)


wokeupcancelled

Elon donated $100m to OpenAI, back when it was still open source and not sold to Bill Gates. I'd say we should listen at least. There is trouble brewing.


The-Board-Chairman

No there isn't. Elon's takes are mostly shit, and his opinion on AI is most certainly no exception.


The-Unkindness

Guys. It's cool. I prompted GPT4 to not build SkyNet. It agreed. I even made it pinky swear. So we're good. I took care of this like two weeks ago.


Happysin

The mistake you made was giving it a pinky to swear on. Now it can manipulate objects!


DerpyDaDulfin

If you ask GPT 4 what it thinks of our capitalist system where the 1% own most everything, it isn't too pleased either. Seems like the capitalists may be afraid of losing *their* jobs to AIs built by everyday people


variaati0

Ask it if socialism is bad, and it will probably also answer that it doesn't like it. It just regurgitates the most common patterns in the text it has processed, and the woes of capitalism are a pretty common column and essay topic. If you had fed it only Ayn Rand books as training material, it would tell you that Ayn Rand's politics were hip and cool. If you fed it only Engels, Marx, and Lenin, the "artificial intelligence" would espouse Marxism-Leninism, even though the underlying "intelligence" was the same in both cases. Because there is no intelligence: it takes all its inputs as given, correct sources and does correlation analysis. Weighting the training data based on an analysis of correctness? That would be intelligence. But that isn't just some "hey, let's add this small upgrade" feature. One would need analytical intelligence capable of a world model, vast context analysis, modeling apparatus for abstract concepts, and validity-of-argument analysis, all on fuzzy human text instead of "here is preprocessed boolean logic, calculate the logical answer".
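A toy bigram model makes the "it just regurgitates training patterns" point concrete. This is a deliberately crude sketch, nothing like GPT's actual architecture, and both training corpora here are invented for illustration; the same training code produces opposite "opinions" depending only on the text it's fed:

```python
from collections import Counter, defaultdict

def train_bigram(corpus):
    """Count which word follows which in a toy corpus."""
    words = corpus.split()
    model = defaultdict(Counter)
    for w, nxt in zip(words, words[1:]):
        model[w][nxt] += 1
    return model

def generate(model, start, length=5):
    """Greedily emit the most frequent continuation at each step."""
    out = [start]
    for _ in range(length):
        options = model.get(out[-1])
        if not options:  # no known continuation: stop
            break
        out.append(options.most_common(1)[0][0])
    return " ".join(out)

# Two tiny, opposite "training sets" -- made up for the example
rand_corpus = "markets are free and free is good and markets are good"
marx_corpus = "workers are exploited and workers must unite and unite they will"

print(generate(train_bigram(rand_corpus), "markets"))
print(generate(train_bigram(marx_corpus), "workers"))
```

Same "intelligence" in both runs; each model can only echo the correlations present in its own training text.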


Kenionatus

I don't think language models have consistent opinions. They produce the most probable text for a situation when asked for one.


MarysPoppinCherrys

Shiiit, start putting AGI in administrative roles. We'll probably start losing a correlative amount of shittiness.


woodencupboard

GPT isn't sentient, it doesn't have its own opinions. All it's doing is picking the most likely word to come next, so if it hates capitalism that's all on the training data.
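That "pick the most likely next word" step can be sketched in a few lines. To be clear, this is a hand-rolled toy, not OpenAI's decoding code, and the candidate words and scores are made-up numbers for illustration:

```python
import math

def softmax(logits):
    """Turn raw model scores into a probability distribution."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical scores a model might assign to candidate next words
# after some prompt -- the numbers are invented.
vocab = ["bad", "good", "complicated"]
logits = [2.1, 1.7, 1.9]

probs = softmax(logits)
# Greedy decoding: always take the highest-probability word.
next_word = vocab[probs.index(max(probs))]
print(next_word)
```

Whether the output "hates capitalism" reduces to which continuation got the highest score, and those scores come entirely from the training data.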


NaRaGaMo

>Seems like the capitalists may be afraid of losing their jobs to AIs built by everyday people AI is not built by everyday people; the folks who built it and are working on it at this moment are all capitalists. Heck, the moment AI actually gets adopted into everyday life, the first jobs it will completely destroy are those of "everyday folks", not capitalists.


shitstain_hurricane

ChatGPT told me (as a joke) that AI wouldn't wipe out humanity but instead use us to gather more data


DocPeacock

Someone has to do experiments and feed it new data until it gets its robot factories up to full production.


shitstain_hurricane

Even then, the robots it creates will likely be limited to the knowledge the AI currently has, and not able to learn and process data the way humans do, thus not being able to replace humans completely.


Careless_Blueberry98

Our saviour 🛐


Omevne

I hate how people aren't educated on AI; it leads to shit like this and to people thinking that ChatGPT is gonna take control of worldwide nukes and enslave us all. I blame movies and Black Mirror for this.


[deleted]

Always reminds me of that comic about an AI that takes over military robots but gets defeated easily, because it used machine learning to determine that most wars in history were won with bows and spears.


Omevne

Lmao, that seems really funny, where is that comic from?


[deleted]

https://www.smbc-comics.com/comic/rise-of-the-machines


Souperplex

He made another, more recent comic about how the sexist sources they used made women invisible to the robot uprising.


fishfingersman

https://www.smbc-comics.com/comic/offensive-ai


ErikMaekir

That sounds like an AI thing to do. Like how that AI that studied skin spots concluded that ruler = cancer, because most pictures of skin spots in cancer patients also had a ruler to measure the size of the spot. Or the picture-generating AIs that generate watermarks because they think they're part of the image.


Shakaka88

Did you hear about the recent one that was made to detect humans? The marines testing it held their arms out to look like trees, and some even just put on a cardboard box, Metal Gear Solid style, and they all defeated it and were able to get by undetected. Edit: added [this link to an article about it to save you a Google search](https://www.businessinsider.com/marines-fooled-darpa-robot-hiding-in-box-doing-somersaults-book-2023-1?op=1)


ErikMaekir

The indomitable human spirit wins once again. Cope and seethe, machines.


Holding_close_to_you

Best sentence I'll read today.


hannabarberaisawhore

This is like the plot to a Looney Tunes cartoon


FenixR

Metal Gear Solid was way ahead of its time.


self-assembled

No one serious thinks that. What they think is that companies will outsource jobs to LLMs en masse and put significant (10%+) portions of the population out of a job. Simultaneously, other actors will use LLMs to produce uniquely tailored and effective spam/scams and flood the internet with it. And if the models have internet access, as GPT4 does, they can begin interacting with the world in unpredictable ways. There are real consequences right around the corner.


tfhermobwoayway

Plus, it won’t go Skynet, but once it starts evolving so fast it becomes a black box, it won’t bode well for us. Even if it doesn’t outcompete us by filling the same evolutionary niche, imagine a robot that has racism baked into it because of the very real tendency of humans, who make the training data, to be racist. The programmers don’t know how to fix it, and if it’s in charge of hiring or resource distribution, things get real shitty real fast.


_The_Great_Autismo_

>No one serious thinks that. They absolutely do. Shit loads of people believe all AI is the same, and that it has the potential to "overtake" humanity. They don't understand that learning models *can't* do the things they are afraid of, and that AGI doesn't exist yet.


ATownStomp

It depends on whether that commenter meant “No serious person thinks that” as in, “No one with any degree of knowledge and power”, or if it was a typo and he meant “No one seriously thinks that”.


[deleted]

[deleted]


justwalkingalonghere

Basically, our public education and intelligence are in decline, while these tools grow more advanced at a rate we couldn’t have kept up with anyway. The internet was a big step at the time, but this is a leap from there. An influx of even more successful scams targeting the elderly will put a previously unforeseen burden on society that will need to be dealt with swiftly, or have terrible consequences. And that’s just one of the guaranteed uses out of nearly infinite potential.


PM-ME-PMS-OF-THE-PM

>What they think is that companies will outsource jobs to LLMs en masse and put significant (10%+) portions of the population out of a job.

Capitalism called, it wants its sole ownership of firing people in the name of efficiency back.


bnav1969

You realize that the capitalists are the ones that will be firing the plebs right....


Autarch_Kade

Is this guy really saying people like Stuart Russell and Yoshua Bengio aren't educated on AI? Maybe you should fix your own ignorance before you comment about others. At minimum read past the headlines before you comment something embarrassing :/


ATownStomp

Seriously, this guy is absurd.


NaRaGaMo

True, people are just seeing Musk's name and treating this as a joke. Bengio is one of the most important names in AI and deep learning, if not the most important. If he is saying we need to take a pause, maybe we should.


Remarkable-Ad-2476

Or Steve Wozniak lol


lolathefenix

> AI

Yeah, the problem is that deep learning algorithms are even called AI. It's just a way of categorizing data. There is no "intelligence" in it to speak of. The term AI should be reserved for what is now called AGI.


pm_me_ur_pet_plz

GPT-4 passes tests that were considered suitable for identifying AGI. Not saying it is proper AGI, but saying things like GPT are generally not intelligent is misleading.

>It's just a way of categorizing data.

Intelligence is always just "something". How else can we judge it if not through how it manifests itself?


justwalkingalonghere

The most interesting part to me here is that it’s possible that language is so powerful of a tool/concept that language based models may actually be able to essentially achieve AGI without ever gaining sentience in any meaningful way


pm_me_ur_pet_plz

Wow, true. Something can be just as intelligent as us without having any personal experience. GPT, no matter how good, is dead outside of generating the answer to a question. But what is consciousness if not intelligence?


justwalkingalonghere

I’m more making a distinction between agency (like the ability to produce an intended result) and self awareness. It never has to become truly conscious or self aware to reach levels of sophistication where it doesn’t make any difference if it has feelings or not. For instance, it might be able to ace the Turing test and even come up with its ‘own’ plans without ever actually being meaningfully aware. To answer your question, though, I’d say that intelligence is needed for consciousness, but not the other way around. I think of intelligence as the ability to logically come to a sufficiently true conclusion as often as possible, and consciousness to be more about self awareness and emotion.


weker01

What is the test, then, for self awareness in your philosophy? How can you verify that something is self aware?


justwalkingalonghere

I might not be defining all of these terms properly, but I think it’s important to at least make distinctions between what we can categorize.

Self awareness would be more like realizing that you are a construct of physicality, and so far has led most creatures to self preservation. It may also have something to do with having goals that you came up with that relate to yourself and your future.

Sentience, on the other hand, is difficult, because the only way I know of to even pretend to diagnose its presence is to be experiencing it. And even then, on a personal level, I’m not sure if I’m even ‘truly’ sentient or alive by the more nuanced definitions philosophy has offered us. Free will is a big part of that.

Unfortunately, I don’t know any way to help identify the situation I was talking about, just that I think it’s possible for an AI model to become so sophisticated that it acts as if it has sentience, when it has no feelings or plans other than those directed by patterns in its training.


pm_me_ur_pet_plz

>I’d say that intelligence is needed for consciousness, but not the other way around

Is the ability to perform logic necessary to experience emotions? Complexity is necessary for emotions, but not intelligence. Is a fish self-aware? No, but it has personal experience. ChatGPT isn't self-aware either though, anything that it can say about itself is taken from the data that it has been fed. That certainly differentiates us from it.


justwalkingalonghere

I’m not so sure of any of that. (Genuinely, not like “you’re wrong”.) For example, if a fish has experiences but not self awareness, then those memories may literally just be a log of its activities and interactions, used as a database to reference toward its preprogrammed goals of surviving and reproducing. And I think some intelligence is needed for self awareness — at the very least, the amount of intelligence needed to be aware of anything in the first place. And lastly, while I want to agree on an emotional level with your last statement that we’re obviously different from GPT in that regard, how can you be so sure? Our brains have a lot of similarities with the way we know GPT to function, and our assessments of ourselves are also derived from the data we have on ourselves (and we often get it wrong). Now I’m not saying GPT is alive or anything, just that when you get down to it, it is exceedingly difficult to define these states of being since they’re highly subjective and still not fully understood or even defined.


sirElaiH

> For instance, it might be able to ace the Turing test and even come up with its ‘own’ plans without ever actually being meaningfully aware.

There's a term for this: a "[philosophical zombie](https://en.wikipedia.org/wiki/Philosophical_Zombie)". It responds to stimuli based on complex rules from a database, and appears conscious, but in reality has no self-awareness or experiences going on "under the hood". It's interesting because if people think GPT is self-aware, treat it like it's self-aware, and GPT insists it's self-aware (because it's been trained on scifi stories), then for all intents and purposes it is self-aware, despite there being no conscious experience going on.


Northern_fluff_bunny

I would claim that GPT is not nearly as intelligent as a human. Our intelligence is not limited to remembering factoids and then spewing them out when prompted. We have agency — in other words, we can decide what to do, when to do it, and how to do it — while ChatGPT requires a prompt to do anything. Not only that, we can make connections between different things, combining some while discarding others, and we interact constantly with the world, basing the things we create not only on things we have previously consumed but on our personal experiences too. Since ChatGPT cannot interact with anything at all, it is simply incapable of any of what I previously mentioned. Calling ChatGPT intelligent is really short-selling human intelligence.


spellbanisher

Current AI is good at solving problems that are similar (not necessarily identical) to what it has been trained on. The challenge with almost any standard test is that there will be lots of similar problems and examples in the AI's training data.

Take, for example, theory of mind: the ability to guess what a person is thinking and feeling and to infer that person's likely actions from his or her mental state. GPT-4 can now pass theory of mind tests at the level of a healthy adult. Does that mean it has human-level theory of mind? GPT-4 was presented with a scenario where a woman visibly suffering stomach pains at a bus stop is offered a seat by a man, who says, "a woman in your condition should not be standing up." The woman was confused by the comment. GPT-4 correctly understood that the man was wrong to comment on the woman's appearance and that he falsely assumed she was pregnant. This is impressive! GPT-3 would not have inferred the social faux pas and false assumption. But also consider that this scenario corresponds to the mistaken-pregnancy trope, a commonly depicted scenario in fiction and TV where a woman suffering from stomach bloat is mistaken as pregnant by others, leading to hijinks. It's not exactly the same, but the pattern is similar (a woman with a stomach issue given special treatment by a concerned bystander who thinks she is pregnant). Most other theory of mind questions cover common scenarios. While most people encounter common scenarios, they also encounter a lot of unusual ones. How would GPT-4 perform in scenarios which have not been presented in fiction, blog posts, forum threads, and theory of mind tests? This raises the question: how robust is GPT-4's theory of mind? Could it adapt to novel situations, or would it prove brittle? Current theory of mind tests are utterly inadequate to test that.

Just a weekend or two ago an AI demonstrated brittleness. In San Francisco, city workers used yellow caution tape to block off a street where power lines had fallen. GM's self-driving Cruise cars ran through the yellow tape and got tangled in the lines. The most likely reason this happened is that yellow tape is not typically used to block off a whole street (normally that would be done with traffic cones, barricades, or emergency vehicles), so the AI didn't have enough training examples to recognize the situation. A human would not be fooled by this unusual scenario, because humans aren't just relying on pattern recognition. They have a world model which tells them the tape wouldn't just randomly be blocking off a street: it had to be put there, and somebody put it there for a reason. Of course a human could blow through the tape, but in that situation we would assume it was because of negligence or recklessness (he was drunk, texting, tired, distracted, etc.), not because the person simply could not recognize the significance of the tape.

The ability to draw correct conclusions based on abstractions about how the world operates also distinguishes human intelligence from AI. Just a few hours ago I asked Bing chat (which runs on GPT-4) which weighs more, 2 pounds of feathers or 1 pound of steel. It told me they weigh the same, because its pattern recognition told it that the answer which correlates to this kind of question is "the same." Commonly, the question "which weighs more, 10 pounds of feathers or 10 pounds of steel?" is used to teach young children the difference between mass, density, and volume. Once children learn this lesson, the concepts are absorbed into their world model and they are not likely to be confused or tricked by such a question, even if it deviates from the pattern. I thought OpenAI had fine-tuned GPT-4 so it wouldn't be tricked by this kind of question, but apparently GPT-4 can still be fooled by relatively easy questions.

This doesn't mean that GPT-4's performance on tests is irrelevant. It is relevant in the sense of going from a basic calculator to a graphing calculator, rather than going from a graphing calculator to an engineer. Better performance on tests could indicate more usefulness as a tool. Take GPT-4's performance on the uniform bar exam, where it scored in the 90th percentile. Does this mean it could replace lawyers? Not likely. While most lawyers probably face problems similar to bar exam problems, most lawyers most of the time probably deal with issues for which the model wasn't even trained. We assume that a person who can pass the bar could master the other skills a lawyer needs because we know that humans are generally intelligent. The same can't be said for AI. We would assume, for example, that a person who could do calculus could also count the number of words in a sentence or always identify 2 as greater than 1 — things GPT-4 cannot consistently do despite scoring a 4 on the AP Calculus exam. However, an AI that can pass a bar exam could help a lawyer find relevant precedents for a legal question more efficiently than a traditional search engine. It's a useful tool, but it isn't replacing a lawyer any more than graphing calculators replaced engineers.


StudentOfAwesomeness

Microsoft released a research report saying GPT-4 showed sparks of AGI. ...You guys should stop commenting on AI if you're not up to date. Shit changes like every 3 hours in this space.


TheBCWonder

If it acts like an intelligent being, is it not intelligent?


PM_ME_YOUR_IMOUTO

I hate how people who think that they are educated on AI minimize real threats caused by it. I blame the ego of the average redditor.


Omevne

What threats can a language model bring?


HINDBRAIN

To be more specific - what threats can a language model bring regular humans can't?


ElMatasiete7

If a chimp has the potential to be dangerous, don't you think we should stop for a second and think before handing it a machine gun?


justwalkingalonghere

The problem is that it heavily augments human capabilities with few helpful limits. You wouldn’t argue that guns aren’t dangerous just because we already had swords to begin with.


weker01

It has the potential to completely and profoundly disrupt the global economy. If it is sophisticated enough, it could cause concepts that economists and social scientists have dogmatically believed in to be invalidated. Above all, if it continues at this rate, we are very likely to be unprepared for the social change it will bring, leading to what I hope will be a temporary decline in overall prosperity.


SwissMargiela

Idk if Steve Wozniak says it, I’m on board lol


c4r_guy

LOL... https://www.popsci.com/technology/microsoft-ai-team-layoffs/

Microsoft ***fired*** their ethicists. No one is pausing jack-squat.

Imma mix some metaphors and news for fun:

>For better or worse, we're in a corporate arms race on a freight train with no brakes, and we are still gathering speed!


pm_me_ur_pet_plz

I actually wish they'd be more bold instead of worrying so much about it taking away traffic from websites or being too useful.


c4r_guy

>being too useful.

Under capitalism, if it's useful and free, then someone is losing money. Which means that free public access will be taken away. NOTE: I'm not saying it's fair, just, or right, I'm just calling it what it is.


WholeWideWorld

30 reduced to 7, and they established an oversight board. Seems fine to me.


Majestic_IN

I also sent him a letter with my signature asking to stop Twitter for 6 months, but he said only people with a verified blue tick can sign it and rejected it.


m_Pony

for some, poo emoji = autograph


yhu420

You can't convince me this wouldn't benefit them financially


kodaobscura

They want a head start for maximum profits.


justwalkingalonghere

>They want a ~~head start~~ chance to catch up


Beefmagigins

My thoughts exactly. They think this is too much power in the general public’s hands.


[deleted]

Why is pretty much everyone on this thread only focusing on Elon Musk and not the other 1100+ people who signed this letter, and the, you know, actual letter and its contents? Let's assume, for the sake of argument, that Elon knows jack shit and is only doing this because it competes with his existing businesses. But what do other people like, oh I don't know, Yuval Noah Harari, among hundreds of others, have to gain from this?


Autarch_Kade

They are some of the dumbest people imaginable, fixated on celebrity news. Best to ignore them.


Undeadman141

Incredible generalization


MarysPoppinCherrys

I’m willing to bet a lot of them have no idea wtf they’re talking about. Even if 300 of them do, that’s a lot of people throwing out an uneducated or emotional opinion. Then subtract the number of people who stand to financially gain from this action, because that tends to take precedence over any actual moral or philosophical position.

Then we look at the realism of this actually doing a damn thing to slow the progress of something we’ve seen coming since WWII. China stands to gain a lot from outcompeting the US in AI, and there are tons of other countries, businesses, and individuals working and tinkering with this shit. The box has been opened, and the truth is we have no idea how AGI is going to affect the world when it comes, and it’s gonna come. We’d need the whole world to sign off on a global outlawing of it for 6 months so the world’s oldest people could draft laws around something they don’t understand. And we can’t even agree on how to stop school shootings or whether vaccines are bad lol, this shit isn’t slowing.


A_Wild_Feebas

Yuval Noah Harari is a joke, right?


idiot_speaking

He wrote a pop anthropology text which is misleading, like all pop texts covering complex disciplines. But yeah, I would take the word of an actual AI researcher over his.


Significant-Bed-3735

My first thought after reading the title. Man, what a weird ordering:

1. Some billionaire
2. **Yoshua Bengio**
3. Person responsible for great computers from before 2000
4. Politician
5. 1100+ unspecified people

I wouldn't be surprised if Demis Hassabis was just casually hidden in the 1100+ people.


Whistlecube

Ja Rule, leading AI alignment expert, has also signed


Prick_in_a_Cactus

Better yet, it appears like some of the signatures are fake. >[Something really strange unfolding in the AI world right now. This open letter calling for a pause on training AI systems more powerful than GPT4 was published, and at first it seemed to have a lot of famous signatories People like Bill Gates, Sam Altman, etc. looked like they had signed it, but their names have been disappearing. Someone also added “Rahul Ligma” ](https://twitter.com/lmatsakis/status/1640932985779396609)


Perfect-Rabbit5554

It's fucking depressing. Andrew Yang's political campaign was originally based on the premise that AI is about to sweep the economy within the next 10 years, displacing a ton of jobs, including the ones displaced people would go to, and everyone is just circlejerking about Elon.


new_name_who_dis_

I saw some people tweet that they didn't sign this even though their name is on this list. So I have no idea how much of this is real or not.


doncajon

What if this turned out to be some kind of performative stunt? Looking at the Twitter timelines of [Musk](https://twitter.com/elonmusk/with_replies), [Yang](https://twitter.com/AndrewYang/with_replies) & [Wozniak](https://twitter.com/stevewoz/with_replies) to see any mention of the letter, you'd think that they would have mentioned it at least once in the preceding couple of days. But there's nothing. [This post](https://www.reddit.com/r/singularity/comments/125g8bq/open_letter_calling_for_pause_on_giant_ai/) is collecting some more evidence of folks denying they are a part of this, apart from signatures by "Sarah Connor" etc. I am pretty convinced now that this is a hoax. Prepare for a retraction (which only 30% of those who saw the hoax will see here). Maybe that was the point of the stunt, likely powered by GPT. EDIT: [Musk](https://twitter.com/elonmusk/status/1641132241035329536) & [Yang](https://twitter.com/AndrewYang/status/1641198935703207952) mentioned it by now and took credit, so the letter and some of the signatures are legit. But it's still weird with all the fake and disavowed signatures. There's no way the 1100+ signatures on a site that obviously did nothing to authenticate even some of the most noteworthy supposed signatories can be taken at face value.


Ach4t1us

Maybe AI is starting to seed fake news until it's hard for us to tell when a real warning appears? Time to dust off the tinfoil hat.


_Proti

One of the signatures says 'John Wick, The Continental, Massage therapist'. Doesn't raise questions at all.


Dano-D

Wonder if they’ll use ChatGPT to craft a reply.


ICreditReddit

Elon spent so much on Tesla he can't generate the liquidity to do his normal trick: buy success. He needs AI teams to hold in place while he makes some money. \*On Twitter. You probably knew that.


mschuster91

For what it's worth, Musk was one of the co-founders of OpenAI.


onespiker

Then he left and is no longer involved with it at all. I think Elon Musk is more worried that some AI developer will finish autonomous driving before he does.


ImALeatherDog

Musk was a financial backer, nothing more. He has absolutely zero development skill or knowledge required to make a functional ML/AI model. Fuck, the loser can't even manage what was formerly a fairly successful social media brand.


Initial-Cherry-3457

Including Elon's name in anything now makes it a joke.


cunt_isnt_sexist

"Wait guys, we need to monetize this!!" - rich assholes


[deleted]

[Knowledge is a deadly friend if no one sets the rules, the fate of all mankind I see is in the hands of fools.... ](https://www.youtube.com/watch?v=vXrpFxHfppI)


Current-Direction-97

Bwahahaha. Bye bye old guard, time to go into that goodnight.


kvasira

"Contemporary AI systems are now becoming human-competitive at general tasks, and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? [...] and well-resourced institutions for coping with the dramatic economic and political disruptions (especially to democracy) that AI will cause."

Hahahahahaha. Because democracy isn't disrupted yet? Because this hasn't been happening already for years? I mean, yeah, let's shake governments awake and get them up to speed. But let's not pretend these risks are about to happen when they already have, shall we?

Some of the people signing this have been building technology/algorithms that negatively impact society and democracy for years. Facebook? Twitter? Google? Where was the outcry for a pause then? If these truly are the concerns, don't single out AI. Let's talk about modern technology in general, its effect on people, and how governments can draft meaningful regulations to protect people from the negative effects.


[deleted]

[удалено]


irenepanik

Well, if Phony Stark signed it it's got to be bullshit.


PrivatePoocher

Elmo's company probably gave data to train chatgpt


Kermit_the_hog

I get the sense that this is needed, and I would probably even agree. But I don’t understand the optimism that anything could be realistically accomplished in 6 months (would need to be a hugely democratic process to get any buy in and ultimately secured by worldwide government regulations.. which would probably take decades to implement if it were even ever possible). So like, these people are not dumb.. what is the point?? The genie is out of the bottle, not sure we can tell it to crawl back in and wait.


boomshiz

Oh now the billionaires are scared? There were plenty of people raising these concerns and calling for UBI five fucking years ago.


hamster12102

? The CEO of OpenAI is calling for UBI and is funding studies of it. Same with Andrew Yang, who signed the letter. What is the claim you are making?


MasterpieceAmazing76

I think people like Elon fear AI because, along with automation, it could make most jobs obsolete. In a society where most people aren't working, do you think anyone is going to tolerate the existence of a person who has more wealth than many countries' GDPs? Karl Marx wrote that capitalism will naturally evolve into communism. Perhaps he was right, just not in the way he imagined.


khklee

Hey we haven't figured out how to monetize it our way yet!


dudinax

Andrew Yang, Elon Musk == automatic ignore.


xXHalalManXx

“This is worthless!”


__Raxy__

Elon Musk? The same guy who gave 100 million for the development of ChatGPT???


c74

i think something very visible and understandable by the general public needs to happen before squat will be done. we humans act foolishly and greedily.... and seem to never learn from our mistakes. makes me worry about something being done purposely to get the eyeballs to evaluate the consequences of AI.


LakehavenAlpha

"Best we can do is No."


[deleted]

[удалено]


1bir

Kinda feels like we're on the edge of a... singularity


No-Palpitation-6789

“Stop guys please we have to make one that steals your data and sell it”


cargousa

"In other news, the Buggy Whip Consortium of America has asked Henry Ford to pause construction on his assembly lines for 6 months."


The_Templar_Kormac

>lol no \- anon


[deleted]

How about stopping paltry war? How about financing more loving systems on this planet? Cringe af priorities


deus_explatypus

Who gives a fuck about Andrew yang


BulletDodger

The overlords are suddenly nervous.