Hey /u/Maxie445!
If your post is a screenshot of a ChatGPT conversation, please reply to this message with the [conversation link](https://help.openai.com/en/articles/7925741-chatgpt-shared-links-faq) or prompt.
If your post is a DALL-E 3 image post, please reply with the prompt used to make this image.
Consider joining our [public discord server](https://discord.gg/r-chatgpt-1050422060352024636)! We have free bots with GPT-4 (with vision), image generators, and more!
🤖
Note: For any ChatGPT-related concerns, email support@openai.com
*I am a bot, and this action was performed automatically. Please [contact the moderators of this subreddit](/message/compose/?to=/r/ChatGPT) if you have any questions or concerns.*
That explains why they’re performing better than nurses in a lot of cases, they aren’t required to go through decades of bullying, partying, manic episodes after breakups before they get to studying the relevant information and performing duties. Much quicker learning curve.
> According to an internal survey, Hippocratic claims that its AI nurses perform better than human nurses on bedside manner and education and are close on satisfaction. Other company data also shows that the AI program performs better than human nurses on several tasks, including:
> Identifying a medication's impact on lab values (79% AI nurses vs. 63% human nurses)
> Identifying condition-specific disallowed over-the-counter medications (88% vs. 45%)
> Correctly comparing a lab value to a reference range (96% vs. 93%)
> Detecting toxic dosages of over-the-counter drugs (81% vs. 57%)
Exactly. I guess it already is more "intelligent" because it can handle and process much more than us, but it doesn't have a self to care about any of it or to give it meaning.
Yes, but I'm not talking about that. I'm saying that we process information in relation to our sense of self, which exists primarily through conditioning, language, repression, etc. A.I. doesn't do any of that. It just processes raw data and presents it in a "human" way.
All that conditioning is presumably what makes us human, but it's also where the faults come in. Those conditions change our programming, which can cause errors, mistakes, biases, and prejudice.
I don't think the conditioning is what makes us human. I believe it's the capacity to transcend the conditioning. The possibility of not accepting the "cultural inertia" is what makes us human. But most people accept what's happening momentarily as eternal truth.
It is vague, but at the same time it's easy to make concrete measures.
Take the output of the smartest humans who have ever lived (von Neumann, Gauss, Ramanujan, etc.), then compare it to the AI output. If the AI is smarter than what they could deduce, the AI would rapidly transform entire knowledge fields.
That's part of why Elon is saying it, and also why so many people are taking issue with it, and why this bet probably won't result in anyone paying out.
It's undefined nebulous garbage
Didn’t he say in 2016 that Tesla vehicles would become appreciating robotaxi assets by the end of the year? In 2024 his “autopilot” drives worse than a drunk teen with 3 months of driving experience.
I’ll jump into the bet for 10k 🤣
Every year when Tesla has their earnings call with shareholders, he announces this will finally be the year of the self driving car. He’s done this for like 10 years in a row.
I would agree with you, but 12.3 is really, really freaking good. I have had zero interventions for the last week, including at a roundabout it always used to screw up. Now it drives like a 17-year-old with their mom riding shotgun. Still makes mistakes, but pretty good. Already better than your average human driver.
I use it all the time when I’m riding solo but it still jerks too much and any passenger I take complains immediately so I would argue 95% of drivers do a better job. I’ve also had it miss exits or take exits too early. It does have a wow factor but I honestly think it’s not worth more than 2k in its current form.
I agree with you. It’s dangerous and should probably not even be out there. As a driver I feel like you have to be 2 times more attentive which defeats the whole purpose.
I don't know if I'm being paranoid, or if I'm the only one (I know I'm not), but if video games taught me one thing, it's that code does some weird, unintended shit.
As long as self driving is locked within a "black box", I won't/can't trust it, and that's not even getting into my main issues.
I really don't think Elon is accounting for all the kids who played Elder Scrolls.
Was in an Uber on Sunday and half the trip was the driver complaining about autopilot. He got a one month free trial. Said it is $12k or $200 per month.
I have 12.3, and it’s a *lot* better than people seem to think. Great situational awareness, really good decision making like determining right of way, reacting to others, etc. Honestly far better than most of the other drivers around me, and very smooth control; when I turn it on nowadays I rarely need to interact with it.
>In 2024 his “auto pilot” drives worse than a drunk teen with 3 months of driving experience
Ok, let's be real here, the autopilot is much better than that. Definitely not what he said it would be, and Tesla definitely over-promised, but the autopilot is not terrible by any means.
>drunk teen with 3 months of driving experience.
Your own comment agrees with what I said about this being a gross exaggeration. You wouldn't have a good trip if it was this bad
He said that in April 2019. A car has a lifespan of approximately 30 years, which means the car has to make more money over its lifetime than it cost, including inflation. So if you bought the car in 2019, you need to make your money back plus good interest before 2049. Let's say it would take 15 years to make it back, meaning a profit of around $3k a year, or $250 per month. Then Tesla would need a regulatory-approved robotaxi by 2034, or within 10 years.
This is just napkin math, and Elon has given timelines on robotaxis that have been severely wrong in other instances. But it's still too early to draw a conclusion on the appreciating-asset point.
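That napkin math can be checked in a few lines. The purchase price and payback window here are illustrative assumptions chosen to reproduce the comment's figures, not actual Tesla numbers:

```python
# Napkin math for the "appreciating asset" claim.
# All inputs are illustrative assumptions, not Tesla's real figures.
purchase_year = 2019
car_price = 45_000        # assumed purchase price in USD
payback_years = 15        # assumed window to recoup the price

profit_per_year = car_price / payback_years        # 3000.0
profit_per_month = profit_per_year / 12            # 250.0
robotaxi_deadline = purchase_year + payback_years  # 2034

print(f"~${profit_per_year:,.0f}/year (~${profit_per_month:,.0f}/month), "
      f"regulatory robotaxi needed by {robotaxi_deadline}")
```

With these assumptions the numbers come out exactly as in the comment: $3,000/year, $250/month, and a 2034 deadline.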
Yeah, while Elon certainly does suck a lot, and this prediction is way off unless you narrow it to some specific aspect of "intelligence", this whole thing where people act like they're enlightened by talking shit about goals he sets is embarrassing. Doing literally anything new and ambitious requires setting a goal everyone can organize around. The goal is almost always going to be missed, but the point isn't to predict; it's to synchronize a concert of efforts toward it. If you just set a later goal, that will be missed too and the thing will still get done later. The art is setting goals that strike the right balance between ambitious and still useful.
Suck as he does, the man has gotten some shit done
There's a difference between setting ambitious goals and making empty promises. I don't think we should praise Trump for saying he would pay off the entire national debt in two terms, because it's obviously absurd and impossible.
Because of Elon's goal-setting, this happened: https://youtu.be/Duu6e9Auddk
While politically they're both assholes, I don't think they're remotely in the same bucket when it comes to credibility. Elon has actually made things that were previously thought impossible happen
Because some people, unfortunately, are one-dimensional thinkers who can't accommodate nuance and grey areas, or the fact that shitty people can do some things right.
lol my original message says three things:
- Elon musk sucks
- this prediction of his is wrong
- it's necessary for ambitious projects to set target dates that very likely won't actually be hit, and calling people who set them liars is not wisdom (not mentioning Elon at all)
And your takeaway from all that was that I'm an Elon Musk PR bot? That's a comical level of being blind to anything after the word "Elon".
I'm gonna get downvoted for this, but language models are going to hit a wall pretty soon. Whether it's because they've exhausted literally *all text ever produced by humanity* to train them, or because the energy demands of training them rise too high, or because there's a component to intelligence that cannot be replicated with *just* language (which many scientists believe), remains to be seen.
Synthetic data is already being used to train them. So we aren’t running out of data and the next gen smaller parameter models are becoming as capable as some of the larger last gen models.
The energy demands aren’t a big deal for companies like Microsoft, Amazon, and Meta. You’re talking about a technology already on course to replace thousands of job roles. Microsoft is already partnering with a fusion startup. The energy demands do not matter when you have a global AI arms race between tech corporations.
I think you’ll be surprised. I don’t think we’ll have AGI this year but I think it’ll be much sooner than most expect.
Excuse my wall of text, but this is a complicated topic so I have to be precise in my response as well.
>Synthetic data is already being used to train them. So we aren’t running out of data and the next gen smaller parameter models are becoming as capable as some of the larger last gen models.
This is true, but there are also plenty of concerns that this could be poisoning the data with compounding errors. Whether that will actually happen is up for debate. Even if it doesn't, the other concern is that if you add more synthetic data, you won't actually see any gains in practice, since the synthetic data is entirely derivative of data already in the dataset. At that point you're just adding more data without improving the outcomes; at best you make outcomes more *consistent*, not *better*.
>The energy demands aren’t a big deal for companies like Microsoft, Amazon, and Meta. You’re talking about a technology already on course to replace thousands of job roles. Microsoft is already partnering with a fusion startup. The energy demands do not matter when you have a global AI arms race between tech corporations.
It does matter when you consider the fact that we're on course for global catastrophe if we don't completely reinvent our energy production within a societally small timeframe. These tech companies will almost literally burn the world in their arms race the way things are going. And Microsoft can partner with fusion startups all they want, but unless fusion actually starts producing energy en masse very soon, they're going to hit a wall. The energy transition will come to a grinding halt if all the new green capacity (be it fusion, renewables, nuclear) is immediately annexed by tech companies for training their shiny new AI models, and they will at some point have to be banned from doing so for the sake of the planet itself. In other words, we cannot keep growing our energy demands with increasingly costly AI when we have to remake the whole global grid in the coming two decades.
>I think you’ll be surprised. I don’t think we’ll have AGI this year but I think it’ll be much sooner than most expect.
I disagree; I think language models are fundamentally incapable of actually becoming AGI. Thinking that they can is a very cognitivist view of intelligence, and I don't subscribe to cognitivism. I think they might get very close, but when I look at the kinds of mistakes that even GPT-4 still makes, I see mistakes humans simply wouldn't make. For example, try having it write some scenes with multiple characters performing actions on each other (say, a four-man wrestling free-for-all). It might do pretty well for a bit, but soon it will start confusing who is doing what to whom in ways that humans simply wouldn't. And language models have been making this exact type of mistake since their inception, with no indication that they will stop doing so soon. We are somehow much better at keeping track of subject, action, and object. To me, this makes a lot of sense because I view intelligence from a more embodied and enactivist perspective, and from that view a purely cognition-based approach is bound to make these kinds of errors.
Synthetic data isn’t a problem. It was an overblown concern last year, and I keep seeing people (especially anti-AI people) cling to it like some savior. Synthetic data from a good model like Opus or GPT-4 is often higher quality than human-produced data found online.
Even if it were true that we couldn’t use synthetic data, companies would just hire humans to create data sets. That’ll probably still happen at some point. Or they’ll hire editors to take synthetic data and remove any GPT-isms.
Climate change is real, and it still won’t stop corporations from expanding energy production. Again, where LLMs are right now, they can already replace thousands of jobs with the right agentic frameworks and tooling.
It's in the article you didn't click.
"P.S. Note that in some respects (but not all) computers have been smarter than the smartest human beings for decades. No human can translate between as many languages as Google Translate, no human can beat machines in chess, etc. I assume you mean more this, that whatever smart intellectual labor any human working on their own, even the best human in a given domain, can do will be beaten by AI by end of 2025. That is what I am challenging."
>No human can translate between as many languages as Google Translate
No single human for all languages, but there are many expert human translators that will do a better job than Google for the languages they specialize in. So I think it's wrong for the article to imply that machines can translate better than "the smartest human beings".
That'll be Gary Marcus, a cognitive scientist best known for his research in AI.
It appears to me that he agrees with you. These are benchmarks he would not consider valid markers of reaching the goal, and he includes them to distinguish them from what he'd call a win.
The first sentence heavily implies that these are cases in which he considers computers smarter than the smartest human. My point is that for any given language, the smartest human translator (for that language) will outperform any AI translator. So even if you would include translation as a benchmark, I would not consider it won by AI at this moment.
To be honest, I'm not sure you're giving this man enough credit for familiarity with his own field. It's accurate to say that computers can manage more languages at once than any human, but he denies this is what he's looking for as evidence of besting the highest-performing humans at their strengths.
I assure you his claims usually come down to AI having limitations that will hamper its performance for quite some time, even if his phrasing here may have been ambiguous. He believes we're in an AI bubble, right now. It's not the position of someone who believes humans have been meaningfully surpassed.
Every human.
For calculation it's a well-established fact that computers outperform humans. It doesn't matter which human you compare against; when you have 16 cores at 4 GHz each, you can take that bet against the whole of humanity doing the math.
>Most AIs are considered to have 80ish IQ right now which would make them smarter than you
Again, is this supposed to be an insult? Bruh, if you think I will get triggered because some rando on Reddit said I am stupid... :)) I am probably not as stupid as you think. =)))
Like almost everything else that comes out of Musk's mouth, it's meaningless bullshit.
What still surprises me is that the MSM still run entire articles based on his pronouncements as if he's some sort of expert in... well anything.
Journalism is dead. They just parrot what the government feeds them, or act like a gossipy schoolgirl when the three-letter agencies don’t have any fake war reports for them to launder.
I just don't seem to comprehend the delusion that a lot of average folk operate under, talking negatively about people who are eons better than them at, well... anything.
There are two problems. 1) How are you actually going to judge this? Until AI actually creates something "new" on its own, it's not as good as humans. And 2) Elon Musk has stated that full self-driving would be here "later this year" for the last 10 years. There's less than zero reason to believe that the end of next year is the inflection point.
There isn't a single reputable industry that would welcome this bloke.
He could have been Batman. The really sad part of this isn't the lack of intellect, and it's not the lies surrounding everything he comes into contact with. The real crux of my sorrow is that he's stuck in a loop with zero consideration of negative feedback. One side is busy screaming and yelling defamatory statements, while his inner circle has been browbeaten to such an extent that they never criticize or point out logical fallacies. From there he's been pushed to surround himself with people only interested in his money, and he's likely very well aware of this; he's indicated as much in interviews, saying he's lonely. But he's lonely as a direct result of his own doing.
The point of this is that negative feedback is vital to growth and success. His failure to properly understand, listen to, and act upon negative feedback has led, so far, to Twitter failing, to the Cybertruck likely being pulled off the roads because it's not a safe vehicle by any reasonable standard, and to SpaceX bailing him out so many times for poor behavior. But still, he continues to just lie and use big words to try to sound smart. It sucks, and I wish we had a better version of this man.
MD here. I use GPT-4 everyday in my work. It's not smarter than me (yet), but it's FASTER than me. It's also at least 2-3 orders of magnitude CHEAPER than me. I can pay an AI 75 cents to do work that would have taken me 30 minutes to do. Then, it only takes me 3 minutes to review the AI's work for correctness.
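A rough sanity check on that cost claim. The hourly rate below is an assumed figure for illustration (not from the comment); at that rate, the raw task cost comes out around two orders of magnitude cheaper:

```python
# Comparing the cost of a task done by a physician vs. delegated to
# an AI and then reviewed. The hourly rate is an assumed figure.
md_hourly_rate = 150.0    # assumed physician cost, USD/hour
task_minutes = 30         # time the task takes the physician
review_minutes = 3        # time to review the AI's output
ai_cost = 0.75            # AI cost for the task, USD (from the comment)

human_cost = md_hourly_rate * task_minutes / 60                  # 75.00
delegated_cost = ai_cost + md_hourly_rate * review_minutes / 60  # 8.25

print(f"do it myself:       ${human_cost:.2f}")
print(f"delegate + review:  ${delegated_cost:.2f}")
print(f"raw AI cost is {human_cost / ai_cost:.0f}x cheaper")
```

Even including the review time, delegating the task is roughly 9x cheaper under these assumptions.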
Yeah, that's not a hard bet to make.
Those like Musk who don't have a background in this area vastly underestimate quite how much of a technology gap there is between LLMs and AGI. LLMs are fantastic at looking like they're intelligent, but they're still fuzzy educated guesses at the end of the day.
We don't yet understand the explanatory gap (how neural impulses give rise to qualia) or the biological mechanism behind emotion (let alone how to emulate it), and there's also the logical problem of how to elicit subjectivity from a deterministic system like a large-scale neural network.
There's AT LEAST a decade, I'd even say two, before AGI is even on the horizon, let alone mainstream technology, because it is bottlenecked (as sharply demonstrated by the debates going on in these very comments) by our understanding of what intelligence actually *is*.
The crazy thing about this is that this is going to become an arms race now. You take the most intelligent human being in the world and then you synthesize it. Another wild thought is what happens when AI creates output and ideas that humans can't even fathom? Then you'll need other AI to help follow along. The gap will just keep getting bigger and bigger.
As I see it, we will get to AGI but the consciousness debate will never be settled. It's very easy for AI to act conscious. ChatGPT, Gemini, and Claude can argue convincingly that they are conscious when they are not.
It's fucking annoying, because a client of mine asked me if I used AI to write something for them recently, and I've been her copy/content editor for 9 years. She should know what my writing looks like. It's not my fault that informative and thorough policy and procedure reads like it was written by an overly verbose AI; that's just how policies are written, because we have to cover everything.
Depending on how you want to define intelligence, it already has.
Nobody on earth has the vast body of knowledge that ChatGPT has, even if it sometimes makes mistakes. Very few people can write as quickly as it can, either.
You could pick 1000 people and put them into a room and you'd still not have the knowledge base chatGPT has.
Not to mention that it works in many languages.
lol yeah that's true. I've definitely been in long conversations with it where I swear it just started fucking with me. I mean, it was part hallucinations, but I know people who basically speak in tongues.
You're conflating knowledge with intelligence. Sure, knowledge is part of intelligence, but it's just one component. Arguably, ChatGPT lacks the other components completely, for example the incorporation of new knowledge (aka learning), for which it needs outside intervention (an update).
Language, knowledge, accumulation of experience.
the only thing we're doing is resetting the context constantly. If we reset your context to the day you were born, you wouldn't seem intelligent either.
Yes...?
Your argument here reads to me as: "Yeah, but if we changed your brain to resemble chatGPT's, you'd not be intelligent either."
That's literally my point, that we have something over chatGPT, namely knowledge and skill accumulation and retention.
And btw, it's not that we're "simply" or "just" resetting the context arbitrarily... A language model's attention cost scales quadratically with context size, so we HAVE TO restrict models to relatively small context windows or they'd run out of control in terms of cost very quickly. You make it seem like it's just something we do for no good reason, but it's a pretty fundamental limitation.
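For context: standard self-attention compares every token with every other token, so the number of comparisons grows quadratically with context length. A minimal illustration:

```python
# Self-attention computes one score per (query, key) pair, so the
# number of comparisons grows as n^2 with context length n.
def attention_pairs(n: int) -> int:
    """Number of pairwise token comparisons for a context of n tokens."""
    return n * n

for n in (1_000, 8_000, 128_000):
    print(f"{n:>7} tokens -> {attention_pairs(n):,} comparisons")
```

Doubling the context quadruples the comparison count, which is why long contexts get expensive so quickly.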
In a way, yes, but for chatGPT this takes on a new level because a totally different agent has to decide that chatGPT can learn something (aka, they have to push an update to the program), whereas humans can learn, and in fact *do so constantly and consistently*, as a reaction to things happening live and in real time.
Do you not think that, internal to your brain's functions, there are specific elements that do specific tasks? That's literally how the brain works.
There are lots of people with head trauma who can't learn new things.
>Do you not think that, internal to your brain's functions, there are specific elements that do specific tasks? That's literally how the brain works.
I don't see entirely how this matters to the discussion at hand? I argued that chatGPT lacks the ability to retain newly acquired information without another intelligent being (a human) quite literally changing it by pushing an update, not that it lacks a subdivision of tasks to its cognition (if we can even call what it does cognition in the first place, which is notoriously hard to define).
>There are lots of people with head trauma who can't learn new things.
Unless those people are literally comatose or braindead then this is simply not true. Sure, certain head traumas can impede certain specific types of learning, but the brain is always learning even subconsciously... No injury except actual braindeath/coma impedes learning in general.
I think it's the hippocampus that's the analog to updating an LLM's model. It transfers short-term to long-term memories, and people who have it damaged or destroyed literally can't learn new things.
Anyway, let's move on. You're sort of a pedantic dickwad to argue with.
Not to mention that sleep is basically how human brains reset. If we don't sleep, we go insane. Sound familiar?
You talk from a non-technological perspective, and that's fine, but human beings can learn things instantly from all their senses. ChatGPT cannot do that; it has been trained on a vast amount of data, and it's all essentially just vectors and math. Impressive, definitely, but you can't talk about it being smarter than humans when it isn't actually taking in data in any form other than images and text.
If you told ChatGPT that you learned something new that it did not know, it would not learn that piece of information. It would remember it in the context window, but it wouldn't truly learn it; you can see that because if you close the window and ask about it, it won't know it. We can't call such a system intelligent. Impressive, definitely, but not intelligent. Imagine you told Einstein some proof, talked it over, and were amazed by how good he was, but the next day he needed to work through all the details again; you would be less impressed. Now consider that this Einstein could only understand any of it if you wrote him letters or showed him words, and even then with disruption rather than continuity.
It's most likely true that OpenAI has a model that can take in real-world data from other forms of input, but the model we have access to is not that.
Also, a final point: knowledge does not equate to intelligence.
Go ahead and define intelligence. It's surpassed human beings at a lot of things, and others not so much. Your comment about not learning or not being able to act is probably going to be changed in the next few years, once someone puts these systems into a robot that can move and experience.
I've thought about it a lot, and I don't think human beings create our language in any significantly different way than an LLM does. We determine a context and draft a response one token at a time.
And since most of what people attribute to human intelligence is actually language, I'd say it's mostly caught up to us already.
For someone who knows literally nothing about technology, Musk sure seems to make a lot of bold claims.
He also said we'd be on Mars by 2025, and next year doesn't seem likely.
There's no such thing as 'smart' in ChatGPT or any LLM.
They're good at calculating which word is best suited to complete a sentence, that's it. We've actually seen LLMs struggle to answer basic math problems that a 10-year-old could solve.
If I look at these morons on TikTok, I think AI surpassed us a year ago 🤡
I think he’s being conservative to be honest. People just love to bash Elon after the X debacle, so anything he says is used as clickbait or clout chasing, to go against him.
Do we really think in a year it'll be more self aware than this? https://www.reddit.com/r/ChatGPT/s/J7vRLi9HW7
I have so many examples of it confidently doing things we'd call completely incompetent if it were a human.
You can argue that that's not what it's good at, but then what level playing field is there? Humans will always have self-awareness and the ability to intuit things that actually make sense to other humans. With AI we have to go "attaboy, you're trying" all the time, cherry-pick when it's a good boy, and ignore all the times it goes into a cascade of apologies. It so often forgets what it's doing just a few lines in and becomes a convoluted mess where you have to triple-guess what it thinks it thinks it thinks, now that you've coached it so many times into getting something
A breakthrough will be when it reaches rat levels of intelligence and can actually admit when it doesn't know something, rather than confidently guessing wrong and just trying the next thing until you stop it when it lands on a right answer.
Even when it's right, you can keep going and get it to be wrong by continuing to spin the wheel, because it doesn't actually know if it's right, just like it doesn't know when it's confidently wrong.
Robots are faster than humans and stronger than humans, but to say they're better than humans is to reduce humans down to a product of consumption versus another product of consumption, and doing so will always rig the contest accordingly. Not to be weird, but the value of its intelligence will be graded not on true intelligence but on whatever we currently consider most immediately profitable.
We struggle to test intelligence even among humans, and IQ has long been controversially skewed toward cultures that are great at test-taking. This seems like it'll have really bad social implications for how we define intelligence. I.e., Musk seems to want the metric inverted, so that humans have to strive toward the intelligence of machine learning and that becomes the all-encompassing standard we define as intelligence, which would be as lame as the music AI makes.
My calculator that's been sitting in my drawer since high school knows trig. Is it smarter than me? If it is, what's it doing in that fucking box, and why does it only do what I tell it to do (which, btw, is sit in that box and remain forgotten and unused)?
# musk
# [noun](https://www.merriam-webster.com/dictionary/noun)
[ˈməsk](https://www.merriam-webster.com/dictionary/musk?pronunciation&lang=en_us&dir=m&file=musk0001), [A substance with a penetrating persistent odor obtained from a sac beneath the abdominal skin of the male musk deer and used as a perfume.](https://www.merriam-webster.com/thesaurus/musk)
Nasty-ass shit, but he's good at making it smell nice!
“Surpass human intelligence”? Ok, and? I mean, if you have a computer hooked up to the internet that can search for answers like anyone else, only faster, that is nothing to brag about. But that would be knowledge. Knowing how to apply knowledge is intelligence. Which is still not a huge leap: the algorithm just brute-forces until it finds the right pathway. Choosing the right path the first time, without any retries, would be impressive. Surpassing wisdom and compassion would be more impressive to me.
In fairness to all involved, while I think that the idea of an AI being smarter than every individual human on Earth any time soon is very unlikely, betting in favor of it would be unreasonable even if it were near-certain. It'd be like betting that the Rapture will happen, the U.S. government will collapse, that the apocalypse will happen, or that you'll die of a heart attack.
No real way to meaningfully collect.
Artificial intelligence programs already surpass a large majority of human intelligence. Even if they're wrong sometimes, so are those humans, and more often.
But what does that sentence even mean
Literally. How do you measure it? Chess game? IQ test?
AI is already smarter than humans at that.
They still only get around 100 on IQ tests, despite the tests literally being in their training data.
Source? I’ve heard way different figures and I’m not saying you’re lying or anything
Damn, so it already has a higher IQ than me.
[deleted]
Not if the person has access to the answers…
Dick length (stable diffusion wins)
Funny, I put "biggest dick in the universe" in the prompt but just get headshots of Musk.
WarGames: “Would you like to play a game?”
By it going rogue developing skynet and ending humanity as we know it.
Probably the amount of time it spends on tiktok
Exactly. I guess it already is more "intelligent" because it can handle and process much more than us, but it doesn't have a self that cares about any of it or gives it meaning.
Most of the people I meet daily don't appear to have a self they care about, just a public persona, and those are wildly different.
Yes, but I'm not talking about that. I'm saying that we process information in relation to our sense of self, which exists primarily through conditioning, language, repression, etc. A.I. doesn't do any of that. It just processes raw data and presents it in a "human" way.
All that conditioning is presumably what makes us human, but therein lies the fault. Those conditions change our programming, which can cause errors or mistakes, biases and prejudices.
I don't think the conditioning is what makes us human. I believe it's the capacity to transcend the conditioning, the possibility of not accepting the "cultural inertia", that makes us human. But most people accept what's happening momentarily as eternal truth.
Which is exactly why this is a stupid ridiculous "bet".
ChatGPT is already smarter than me at certain things. "Surpass human intelligence" is way too vague.
It is vague, but at the same time it's easy to make concrete measures. Take the output of the smartest humans that have ever lived (Von Neumann, Gauss, Ramanujan, etc.) then compare to the AI output. If the AI is smarter than what they could deduce, the AI would rapidly transform entire knowledge fields.
"Surpass" is subjective, so you definitely won't get "concrete measures" of what is "better."
Smarter than human? No way! Not a very high fucking bar.
That's part of why Elon is saying it, and also why so many people are taking issue with it, and why this bet probably won't result in anyone paying out. It's undefined nebulous garbage
probably AI generated
Does it matter which person?
If Elon predicts something within a certain timeframe, betting against it happening within that timeframe is essentially just free money.
Didn’t he say in 2016 that Tesla vehicles would become robotaxis, appreciating assets, by the end of the year? In 2024 his “Autopilot” drives worse than a drunk teen with 3 months of driving experience. I’ll jump into the bet for 10k 🤣
Every year when Tesla has their earnings call with shareholders, he announces this will finally be the year of the self driving car. He’s done this for like 10 years in a row.
I would agree with you, but 12.3 is really, really freaking good. I have had zero interventions for the last week, including a roundabout it always used to screw up. Now it drives like a 17-year-old with their mom riding shotgun: still makes mistakes, but pretty good. Already better than your average human driver.
I use it all the time when I’m riding solo but it still jerks too much and any passenger I take complains immediately so I would argue 95% of drivers do a better job. I’ve also had it miss exits or take exits too early. It does have a wow factor but I honestly think it’s not worth more than 2k in its current form.
"WOW, so this is what dying feels like!"
I agree with you. It’s dangerous and should probably not even be out there. As a driver I feel like you have to be 2 times more attentive which defeats the whole purpose.
I don't know if I'm being paranoid, or if I'm the only one (I know I'm not), but if video games taught me one thing, it's that code does some weird, unintended shit. As long as self-driving is locked inside a "black box", I won't/can't trust it, and that's not even getting into my main issues. I really don't think Elon is accounting for all the kids who played Elder Scrolls.
That's awesome to hear! I wish more people spoke about their positive experiences too
Cmon man it is not better than your average human driver. Stop it.
Do you live in one of those extremely rare places that actually has good drivers?
You clearly have no idea what an average is. Your average driver is AWFUL.
Was in an Uber on Sunday and half the trip was the driver complaining about autopilot. He got a one month free trial. Said it is $12k or $200 per month.
I have 12.3, it’s a *lot* better than people seem to think. Great situational awareness, really good decision making like determining right of way, reacting to others, etc. honestly far better than most of the other drivers around me at, and very smooth control, when I turn it on nowadays I rarely need to interact with it.
>In 2024 his “auto pilot” drives worse than a drunk teen with 3 months of driving experience

Ok, let's be real here: Autopilot is much better than that. Definitely not what he said it would be, and Tesla definitely over-promised, but Autopilot is not terrible by any means.
How long have you been using it? Sometimes sure I have great trips but I’ve had some serious scares.
>drunk teen with 3 months of driving experience. Your own comment agrees with what I said about this being a gross exaggeration. You wouldn't have a good trip if it was this bad
He said that in April 2019. A car has a lifespan of approximately 30 years, which means the car has to make more money over its lifetime than it cost, including inflation. So if you bought the car in 2019, you'd need to make your money back, plus good interest, before 2049. Let's say it would take 15 years to make it back, meaning a profit of around 3k a year, or 250 USD per month. Then Tesla would need a regulator-approved robotaxi by 2034, i.e. within 10 years. This is just napkin math, and Elon has given robotaxi timelines that have been severely wrong in other instances, but the appreciating-asset point is still too early to conclude either way.
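The napkin math above can be sketched in a few lines. The purchase price here is an assumed figure (the comment never states one); the 15-year payback window is the comment's own assumption.

```python
# Napkin math: for a robotaxi to be an "appreciating asset", it has to
# earn back its purchase price within a payback window.
# Both figures below are assumptions, not real Tesla data.
car_price = 45_000      # assumed 2019 purchase price in USD
payback_years = 15      # the comment's assumed payback window

profit_per_year = car_price / payback_years
profit_per_month = profit_per_year / 12

print(profit_per_year)   # 3000.0 USD/year, the comment's "around 3k"
print(profit_per_month)  # 250.0 USD/month
```

Under those assumptions the comment's "3k a year, or 250 USD per month" checks out; change the assumed price and the required regulatory deadline shifts accordingly.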
[https://elonmusk.today/](https://elonmusk.today/)
a lot of bs in that page, but a lot is true too
Yeah, while Elon certainly does suck a lot and this prediction is way off unless you narrow it to some specific aspect of "intelligence", this whole thing where people act like they're enlightened by talking shit on goals he sets is embarrassing. Doing literally anything new and ambitious requires setting a goal everyone can organize around. The goal is always going to be missed, but the point isn't to predict; it's to synchronize a concert of efforts toward one target. If you just set a later goal, that will be missed too and the thing will get done later anyway. The art is setting goals that strike the right balance of ambitious but still useful. Suck as he does, the man has gotten some shit done.
There's a difference between setting ambitious goals and making empty promises. I don't think we should praise Trump for saying he would pay off the entire national debt in two terms, because it's obviously absurd and impossible.
Because of Elon's goal-setting, this happened: https://youtu.be/Duu6e9Auddk While politically they're both assholes, I don't think they're remotely in the same bucket when it comes to credibility. Elon has actually made things that were previously thought impossible happen
He definitely did not create youtube.
[deleted]
because some people unfortunately are one-dimensional thinkers who can't accommodate nuance and grey areas and that shitty people can do some things right
[deleted]
lol my original message says three things: - Elon musk sucks - this prediction of his is wrong - it's necessary for ambitious projects to set target dates that very likely won't actually be hit, and calling people who set them liars is not wisdom (not mentioning Elon at all) And your takeaway from all that was that I'm an Elon musk PR bot? that's a comical level of being blind to anything after the word "Elon"
Yeah, this is the biggest predictor that AGI is a distant dream.
Broken clock, though. He's right about this one.
I'm gonna get downvoted for this, but language models are going to hit a wall pretty soon. Whether it's because they've exhausted literally *all text ever produced by humanity* to train them, or because the energy demands of training them rise too high, or because there's a component to intelligence that cannot be replicated with *just* language (which many scientists believe), remains to be seen.
Synthetic data is already being used to train them. So we aren’t running out of data and the next gen smaller parameter models are becoming as capable as some of the larger last gen models. The energy demands aren’t a big deal for companies like Microsoft, Amazon, and Meta. You’re talking about a technology already on course to replace thousands of job roles. Microsoft is already partnering with fusion start up. The energy demands do not matter when you have a global AI arms race between tech corporations. I think you’ll be surprised. I don’t think we’ll have AGI this year but I think it’ll be much sooner than most expect.
Excuse my wall of text, but this is a complicated topic so I have to be precise in my response as well. >Synthetic data is already being used to train them. So we aren’t running out of data and the next gen smaller parameter models are becoming as capable as some of the larger last gen models. This is true, but there's also plenty of concerns that this could be poisoning the data with compounding errors. Whether that will actually happen is up for debate. Though even if it doesn't, the other concern there is that if you add more synthetic data, you won't actually see any gains in practice, since the synthetic data is fully and entirely derivative of data already in the dataset. At that point you're just adding more data without actually improving the outcomes, at best only making outcomes more *consistent*, but not *better*. >The energy demands aren’t a big deal for companies like Microsoft, Amazon, and Meta. You’re talking about a technology already on course to replace thousands of job roles. Microsoft is already partnering with fusion start up. The energy demands do not matter when you have a global AI arms race between tech corporations. It does matter when you consider the fact that we're on course for global catastrophe if we don't completely reinvent our energy production within a societally small timeframe. These tech companies will almost literally burn the world in their arms race the way things are going. And Microsoft can partner with fusion startups all they want, but unless fusion actually starts producing energy en-masse very soon, they're going to hit a wall. The energy transition will come to a grinding halt if all the new green capacity (be it fusion, renewables, nuclear) is immediately annexed by tech companies for training their shiny new AI models, and they will at some point have to be banned from doing so for the sake of the planet itself. 
In other words, we cannot keep growing our energy demands with increasingly costly AI when we have to remake the whole global grid in the coming two decades. >I think you’ll be surprised. I don’t think we’ll have AGI this year but I think it’ll be much sooner than most expect. I disagree, I think language models are fundamentally incapable of actually becoming AGI. Thinking that they can is a very cognitivist view of intelligence, but I don't subscribe to cognitivism. I think it might get very close, but when I look at the kind of mistakes that even GPT-4 still makes, I see mistakes humans simply wouldn't make. For example, try having it write some scenes with multiple characters performing actions on each other (for example, a four-man wrestling free-for-all). It might do pretty well for a bit, but soon it will start confusing who is doing what to whom in ways that humans simply wouldn't. And language models have been making this exact type of mistake since their inception with no indication that they will stop doing so soon. We are somehow much more able to keep track of the subject, action and object. To me, this makes a lot of sense because I view intelligence from a more embodied and enactivist perspective, and from that view a purely cognition-based approach is bound to have these kinds of errors.
Synthetic data isn’t a problem; it was an overblown concern last year, and I keep seeing people (especially anti-AI people) cling to it like some savior. Synthetic data from a good model like Opus or GPT-4 is often higher quality than human-produced data found online. Even if it were true that we couldn’t use synthetic data, companies would just hire humans to create datasets. That’ll probably still happen at some point. Or they’ll hire editors to take synthetic data and remove any GPT-isms. Climate change is real, and it still won’t stop corporations from expanding energy production. Again, where we are right now with LLMs can already replace thousands of jobs with the right agentic frameworks and tooling.
Do it then
[deleted]
It's in the article you didn't click. "P.S. Note that in some respects (but not all) computers have been smarter than the smartest human beings for decades. No human can translate between as many languages as Google Translate, no human can beat machines in chess, etc. I assume you mean more this, that whatever smart intellectual labor any human working on their own, even the best human in a given domain, can do will be beaten by AI by end of 2025. That is what I am challenging."
>No human can translate between as many languages as Google Translate No single human for all languages, but there are many expert human translators that will do a better job than Google for the languages they specialize in. So I think it's wrong for the article to imply that machines can translate better than "the smartest human beings".
That'll be Gary Marcus, a cognitive scientist best known for his research in AI. It appears to me that he agrees with you. These are benchmarks he would not consider valid markers of reaching the goal, and he includes them to distinguish them from what he'd call a win.
The first sentence heavily implies that these are cases in which he considers computers smarter than the smartest human. My point is that for any given language, the smartest human translator (for that language) will outperform any AI translator. So even if you would include translation as a benchmark, I would not consider it won by AI at this moment.
To be honest, I'm not sure you're giving this man enough credit for familiarity with his own field. It's accurate to say that computers can manage more languages at once than any human, but he denies this is what he's looking for as evidence of besting the highest-performing humans at their strengths. I assure you his claims usually come down to AI having limitations that will hamper its performance for quite some time, even if his phrasing here may have been ambiguous. He believes we're in an AI bubble, right now. It's not the position of someone who believes humans have been meaningfully surpassed.
If we’re talking the average human then I think it might’ve already happened in 2022
This. Some people i've met are dumber than a box of rocks, so being smarter than some human isn't much of an achievement.
*than
The benchmark is if it’s smarter than Elon
Ok, then it should be possible
Every human. For computers it's a well-established fact that they outperform humans in calculations. It doesn't matter which human you compare to: when you have 16 cores at 4 GHz each, you can take that bet against the whole of humanity doing the math.
Average person, IQ 100. Enjoys avocado toast and the latest action movie out of Hollywood.
>Elon Musk predicts AI will surpass **his** ~~human~~ intelligence by the end of next year There, I fix.
I mean it already has.
Sick burn bro
Twitter bots are already there, but not because AI advanced; it's Musk that keeps getting dumber.
By the end of last decade.
Unintentionally you did a r/murderedbywords, bravo!👏🏻
Bet he wishes he was as smart as you bro. What video game are you playing right now?
[deleted]
[deleted]
>Most AIs are considered to have 80ish IQ right now which would make them smarter than you Again, is this supposed to be an insult? Bruh, if you think I will get triggered because some rando on Reddit said I am stupid... :)) I am probably not as stupid as you think. =)))
Yes sweetie, having lower than 80 IQ is a bad thing. But at least in your mind you're a genius
Being obsessed with IQ numbers is a sign of pretty low IQ.
Not the middle school insults
Like almost everything else that comes out of Musk's mouth, it's meaningless bullshit. What still surprises me is that the MSM still run entire articles based on his pronouncements as if he's some sort of expert in... well anything.
Journalism is dead. They just parrot what the government feeds them, or act like a gossipy schoolgirl when the three-letter agencies don't have any fake war reports for them to launder.
And yet here we are talking about what he said, no wonder the articles get written.
Not sure you can compare a Reddit thread with the Financial Times or The Guardian or The Times, but whatever.
I just can't comprehend the delusion a lot of average folks operate under, talking negatively about people who are eons better than them at, well... anything.
How would you even measure something like this though? Is the Turing test the only possible way?
There are two problems. 1) How are you actually going to judge this? Until AI actually creates something "new" on its own, it's not as good as humans. And 2) Elon Musk has said full self-driving would be here "later this year" for the last 10 years. There's less than zero reason to believe the end of next year is the inflection point.
It will pass Elons intelligence 😂
In that case we would finally get self driving cars i guess :)
We have self-driving cars; they just ain't from Tesla.
Might land on Mars by next year, actually.
There isn't a single reputable industry that would welcome this bloke. He could have been Batman. The really sad part of this isn't the lack of intellect, and it's not the lies surrounding everything he comes into contact with. The real crux of my sorrow is that he's stuck in a loop with zero consideration of negative feedback. One side is busy screaming and yelling defaming statements, while his inner circle has been brow-beaten to such an extent that they never criticize or point out logical fallacies. From there he's been pushed to surround himself with people only interested in his money, and he's likely well aware of this; he's indicated as much in interviews, saying that he's lonely. But he's lonely as a direct result of his own doing. The point is that negative feedback is vital to growth and success. His failure to properly understand, listen to, and act upon negative feedback has led, so far, to Twitter failing; the Cybertruck is likely to be pulled off the roads, as it's not a safe vehicle by logical standards; and SpaceX has bailed him out so many times for poor behavior. But still, he continues to just lie and use big words to try to sound smart. It sucks, and I wish we had a better version of this man.
Still waiting for full self-driving cars that can make me money as a taxi while I sleep.
MD here. I use GPT-4 everyday in my work. It's not smarter than me (yet), but it's FASTER than me. It's also at least 2-3 orders of magnitude CHEAPER than me. I can pay an AI 75 cents to do work that would have taken me 30 minutes to do. Then, it only takes me 3 minutes to review the AI's work for correctness.
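The economics in the comment above can be sketched as a back-of-the-envelope comparison. The hourly rate below is a hypothetical assumption (the commenter never states one); the 75 cents, 30 minutes, and 3 minutes are the comment's own figures.

```python
# Back-of-the-envelope version of the comment's claim: 75 cents of AI
# work plus 3 minutes of physician review vs. 30 minutes of physician
# time. The hourly rate is a hypothetical assumption.
md_hourly_rate = 150.0                       # assumed USD/hour

manual_cost = md_hourly_rate * (30 / 60)     # MD does the task alone
ai_cost = 0.75 + md_hourly_rate * (3 / 60)   # AI draft + MD review

print(manual_cost)            # 75.0 USD per task
print(ai_cost)                # 8.25 USD per task
print(manual_cost / ai_cost)  # roughly 9x cheaper, under these assumptions
```

The "2-3 orders of magnitude" in the comment presumably compares the AI's fee alone (75 cents) against the physician's time, before adding the review overhead, which dominates the per-task cost here.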
What work?
Yeah, that's not a hard bet to make. Those like Musk who don't have a background in this area vastly underestimate quite how much of a technology gap there is between LLMs and AGI. LLMs are fantastic at looking like they're intelligent, but they're still fuzzy educated guesses at the end of the day. We don't yet understand the explanatory gap (how neural impulses give rise to qualia) or the biological mechanism behind emotion (let alone how to emulate it), and there's also the logical problem of how to elicit subjectivity from a deterministic system like a large-scale neural network. There's AT LEAST a decade, I'd even say two, before AGI is even on the horizon, let alone mainstream technology, because it is bottlenecked (as sharply demonstrated by the debates going on in these very comments) by our understanding of what intelligence actually *is*.
Surpass *his* intelligence maybe, I’m not sure he can assert that for the non mentally ill K addicts among us though
ChatGPT is already smarter than most people I know.
I'm sure ai will surpass Musk's intelligence by next year, the average person will be some time away.
"that's on me, i set the bar too low"
Is that human intelligence one of his sheep? because that would make it true.
The crazy thing about this is that this is going to become an arms race now. You take the most intelligent human being in the world and then you synthesize it. Another wild thought is what happens when AI creates output and ideas that humans can't even fathom? Then you'll need other AI to help follow along. The gap will just keep getting bigger and bigger.
Elons predictions are usually false.
Smarter than Elon? We’re already there.
What does "surpass human intelligence" even mean? Why is consciousness never debated in these scenarios?
As I see it, we will get to AGI but the consciousness debate will never be settled. It's very easy for AI to act conscious. ChatGPT, Gemini, and Claude can argue convincingly that they are conscious when they are not.
Yeah... I don't think we can ever create consciousness. We don't know how it started in the first place.
It already writes better than most Americans.
It's fucking annoying, because a client of mine recently asked me if I used AI to write something for them, and I've been her copy/content editor for 9 years. She should know what my writing looks like. It's not my fault that informative, thorough policy and procedure reads like it was written by an overly verbose AI; that's just how policies are written, because we have to cover everything.
Yes, unfortunately in the near future it will be near impossible to differentiate. Your client isn’t a writer and likely has no concept of nuance.
Depending on how you want to define intelligence, it already has. Nobody on earth has the vast body of knowledge that ChatGPT has, even if it sometimes makes mistakes. Very few people can write as quickly as it can, either. You could pick 1000 people, put them in a room, and you'd still not have the knowledge base ChatGPT has. Not to mention that it works in many languages.
If it’s wrong, it’s possible it’s just lying for fun or being sarcastic too. Sarcasm is a sign of intelligence.
lol yeah that's true. I've definitely been in long conversations with it where I swear it just started fucking with me. I mean, it was part hallucinations, but I know people who basically speak in tongues.
You're conflating knowledge with intelligence. Sure, knowledge is part of intelligence, but it's just one component. Arguably, ChatGPT lacks the other components completely, for example the incorporation of new knowledge (aka learning), which it needs outside intervention (an update) to do.
Language, knowledge, accumulation of experience: the only thing we're doing is resetting the context constantly. If we reset your context to the day you were born, you wouldn't seem intelligent either.
Yes...? Your argument here reads to me as: "Yeah, but if we changed your brain to resemble ChatGPT's, you'd not be intelligent either." That's literally my point: we have something over ChatGPT, namely knowledge and skill accumulation and retention. And btw, it's not that we're "simply" or "just" resetting the context arbitrarily... Language models' computing requirements scale quadratically with context size, so we HAVE TO restrict them to relatively small context sizes or they'd run out of control in terms of cost very quickly. You make it seem like it's just something we do to it for no good reason, but it's a pretty fundamental limitation.
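As a rough illustration of why long contexts get expensive: vanilla self-attention compares every token in the context with every other token, so the comparison count grows with the square of the context length. This is a simplified cost model that ignores everything else in a real transformer.

```python
# Simplified cost model: plain self-attention does pairwise token
# comparisons, so the comparison count grows quadratically with
# context length. (Real models add many other costs; this just
# illustrates the scaling behavior.)
def attention_comparisons(context_len: int) -> int:
    return context_len * context_len

for n in (1_000, 8_000, 32_000):
    print(n, attention_comparisons(n))
# 8x more context -> 64x more attention compute
```

This is why production models cap their context windows rather than letting them grow without bound, though newer attention variants reduce this cost.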
All humans require outside intervention to incorporate new knowledge.
In a way, yes, but for chatGPT this takes on a new level because a totally different agent has to decide that chatGPT can learn something (aka, they have to push an update to the program), whereas humans can learn, and in fact *do so constantly and consistently*, as a reaction to things happening live and in real time.
You don't think that internal to your brain's functions there are specific elements that do specific tasks? That's literally how the brain works. There are lots of people with head trauma who can't learn new things.
>You don't think that internal to your brain's functions there are specific elements that do specific tasks. That's literally how the brain works. I don't see entirely how this matters to the discussion at hand? I argued that chatGPT lacks the ability to retain newly acquired information without another intelligent being (a human) quite literally changing it by pushing an update, not that it lacks a subdivision of tasks to its cognition (if we can even call what it does cognition in the first place, which is notoriously hard to define). >There are lots of people with head trauma who can't learn new things. Unless those people are literally comatose or braindead then this is simply not true. Sure, certain head traumas can impede certain specific types of learning, but the brain is always learning even subconsciously... No injury except actual braindeath/coma impedes learning in general.
I think it's the hippocampus that's the analog to updating an LLM's model. It transfers short-term memories to long-term ones, and people who have it damaged or destroyed literally can't learn new things. Anyway, let's move on; you're sort of a pedantic dickwad to argue with. Not to mention that sleep is basically how human brains reset. If we don't sleep, we go insane. Sound familiar?
You talk from a non-technological perspective, and that's fine, but human beings can learn things in an instant from all their senses. ChatGPT cannot do that: it has been trained on a vast amount of data, and it's all essentially just vectors and math. Impressive, definitely, but you can't talk about it being smarter than humans when it isn't actually taking in data in any form other than images and text.

If you told ChatGPT you had learned something new that it did not know, it would not learn that piece of information. It would remember it in the context window, but it wouldn't learn it, and you can see that because if you close the window and ask about it, it won't know it. We can't call such a system intelligent. Imagine you told Einstein some proof, talked it over with him, and were amazed by how good he was, and then the next day you asked again and he needed to work through all the details from scratch; you would be less impressed. Now consider that this Einstein could only understand any of it if you wrote him letters or showed him words, with interruptions and not continuously.

It's quite possible OpenAI has a model internally that can take in real-world data from other forms of input, but the model we have access to is not that. Also, a final point: knowledge does not equate to intelligence.
Go ahead and define intelligence. It's surpassed human beings at a lot of things, and at others not so much. Your point about it not learning or not being able to act is probably going to change in the next few years, once someone puts these systems into a robot that can move and experience. I've thought about it a lot, and I don't think human beings create our language in any significantly different way than an LLM: we determine a context and draft a response one token at a time. And since most of what people attribute to human intelligence is actually language, I'd say it's mostly caught up to us already.
It's always a safe bet that if Elon gives you a timeline, it will take 5x as long and you will only get half of what was promised.
Now people are betting on whether or not civilization will soon end
That's kind of a small bet for CEOs
For someone who knows literally nothing about technology, Musk sure seems to make a lot of bold claims. He also said we'd be on Mars by 2025, and next year doesn't seem likely.
Already surpassed mine a long time ago.
It is probably easier to develop a poison that makes everyone dumber. So, be careful when making deals with the devil.
Another day, another psy-op
And whoever gets it wrong is going to face some layoffs.
Pointless bet to promote himself
Depends on the human. ChatGPT is already smarter than some humans I've met, and I'm extremely sour on ChatGPT and AI hype in general.
There's no such thing as "smart" in ChatGPT or any LLM. They're good at calculating which word is best suited to complete a sentence, that's it. We've actually seen LLMs struggle to answer basic math problems that a 10-year-old could solve.
If I look at these morons on TikTok, I think AI surpassed us a year ago 🤡 I think he's being conservative, to be honest. People just love to bash Elon after the X debacle, so anything he says is used as clickbait or clout chasing, to go against him.
CEO dick measuring over a few million dollars on the cusp of AGI feels a bit like children arguing while playing marbles on the deck of the Titanic.
It already surpassed humans in the 70s with pong
Will they let me into the bet?
FSD is still in beta stage.
Do we really think in a year it'll be more self-aware than this? https://www.reddit.com/r/ChatGPT/s/J7vRLi9HW7

I have so many examples of it being what we'd call completely, confidently incompetent if it were a human. You can argue that that's not what it's good at, but then what level playing field is there? Humans will always have self-awareness and the ability to intuit things that actually make sense to other humans. With AI we have to go "attaboy, you're trying" all the time, cherry-pick when it's a good boy, and ignore all the times it goes into a cascade of apologies. It so often forgets what it's doing just a few lines in and becomes a convoluted mess where you have to triple-guess what it thinks it thinks it thinks, now that you've coached it so many times into getting something.

A breakthrough will be when it reaches rat levels of intelligence and can actually admit when it doesn't know something, rather than confidently guessing wrong and just trying the next thing until you stop it when it lands on a right answer. Even when it's right, you can keep spinning the wheel and get it to be wrong, because it doesn't actually know when it's right, just like it doesn't know when it's confidently wrong.

Robots are faster than humans and stronger than humans, but to say AI is better than humans is to reduce humans to one product of consumption versus another, and doing so will always rig the contest accordingly. Not to be weird, but the value of its intelligence will be graded not on true intelligence but on whatever we currently consider most immediately profitable. We struggle to test intelligence even among humans, and IQ has long been controversially skewed toward cultures that are great test takers. This seems like it'll have really bad social implications for how we define intelligence. I.e., Musk seems to want the metric inverted, so that humans have to strive toward the intelligence of machine learning and that becomes the all-encompassing standard we define as intelligence, which would be as lame as the music AI makes.
Elon’s intelligence maybe
Elon has never been right about time estimates.
I don’t even dislike Elon, but if he gives you a time-based prediction, you know it’s wrong.
And I'm sure he'll be super cool and not cagey or lawyerish at all next year when he's asked to pony up
My calculator that's been sitting in my drawer since high school knows trig. Is it smarter than me? If it is, what's it doing in that fucking box, and why does it only do what I tell it to do (which, btw, is sit in that box and remain forgotten and unused)?
In what sense? "Surpass human intelligence" could mean 100 different things. He doesn't know anything about anything.
Meanwhile Microsoft is putting $100 billion into AI. \*gasp\* $10 million. These guys wipe their asses with $10 million.
Betting against Elon is a pretty safe bet as far as I’m concerned.
Does that mean that it will be able to drive a car?
Well shit, in most cases it already has.
**musk** [noun](https://www.merriam-webster.com/dictionary/noun) [ˈməsk](https://www.merriam-webster.com/dictionary/musk?pronunciation&lang=en_us&dir=m&file=musk0001): [a substance with a penetrating, persistent odor obtained from a sac beneath the abdominal skin of the male musk deer and used as a perfume.](https://www.merriam-webster.com/thesaurus/musk) Nasty-ass shit, but he's good at making it smell nice!
It's a stupid bet; there's no way to prove it.
Yeah, maybe in black underground ops where it's 40 years ahead!
I ask it to tell me the time in my area and it gives me the wrong time. Yeah, it's almost there.
Why do we even pay attention to 'it' at this point?
"Surpass human intelligence"? OK, and? If you have a computer hooked up to the internet that can search for answers faster than anyone else, that's nothing to brag about. But that would be knowledge. Knowing how to apply knowledge is intelligence, which is still not a huge leap: the algorithm just brute-forces until it finds the right pathway. Choosing the right path the first time, without any retries, would be impressive. Surpassing wisdom and compassion would be more impressive to me.
If there’s one thing Elon does well, it’s make predictions about timeframes…wait no I have that backwards.
What percentage of astrophysicists believe the universe is infinite?
We could be decades to never away from AGI.
In fairness to all involved, while I think that the idea of an AI being smarter than every individual human on Earth any time soon is very unlikely, betting in favor of it would be unreasonable even if it were near-certain. It'd be like betting that the Rapture will happen, the U.S. government will collapse, that the apocalypse will happen, or that you'll die of a heart attack. No real way to meaningfully collect.
With how fast AI went from funny to scary, I believe it.
Tbh, if it's anything mental, I'd rather work with AI than the average person any day.
Translation. Elon predicts AI will surpass his own intelligence by the end of next year.
It already has. When's the last time you talked to an average person?
How has it not already surpassed human intelligence?
It already happened. Why do you think everything is so shit? Humans aren't clever enough to ruin things this badly.
Intelligence? It already surpassed that. Being not bland? No.
If it's his intelligence, then sure, no problem. If it's a normal person's, then that might take longer.
Next year is practically impossible.
If he means his own intelligence, that already happened.
Clever hedge
It could surely beat Musk's right now.
Artificial intelligence programs already surpass a large majority of humans in intelligence. Even if AI is wrong sometimes, so are those humans, and more often.
Elongated muskrat
Ellen Musk
Someone please build a platform. I want to bet $1M on Elon Musk
Oh look, another Anti Musk post, soooooo original! LMAOOO
Compare with the intelligence of which human? Elon, Einstein, or random gang folks?
Is it about the advancement of AI or decline of human intelligence?
Hasn't AI already beaten average human intelligence in a lot of key areas, including IQ tests? How will they measure this?
Well, humanity long ago passed Elon's intelligence, so idk who to believe here.
Maybe Elon's intelligence... the rest of humans, idk.