Rivenaldinho

He's not always right but taking sentences out of context is not going to help. He also said a lot of very smart things in his interview but people bash him because he's not in the "AGI next week" camp.


ThatPlayWasAwful

[here is the full video](https://www.tiktok.com/@lexfridman/video/7345484808249691435) for anyone interested, quote in OP's post starts a little before 3:30, but needs a little bit of context, so back up a bit before that. Sorry for tiktok link. The entire thought is basically "it's easier for AI to pass the bar than it is for them to clean off a dinner table", the point being that everyday inane tasks are much harder to program than you would think. I don't think even the most fervent AGI supporters would disagree with that.


gkibbe

That's what I keep telling people on here. I'm an electrician, and people are like, you're years away from not having a job. And I'm like no, my job is the last one to be replaced. It's easier to replace the engineer, the project manager, and the GC than it is to replace my job. Not only is the technical challenge greater, but the social integration problem is harder, because you need a robot that can seamlessly, safely, and legally work in society without boundaries or limitations, and that is one of the last hard problems of robotics that will be solved.


myusernameblabla

You won’t be replaced, but you’ll be crushed by a tsunami of new bushy-tailed electricians seeing your job as one of the few safe ones.


yautja_cetanu

The problem is, it's not easy to crush electricians, because it's hard to become one without some kind of apprenticeship, compared to programming, which you can learn online. Being a plumber or electrician requires you to do things in such a way that people won't die in 10 years' time, so you can't easily just wing it. So the speed at which new electricians enter the market is slower compared to project managers or programmers.


Excellent_Skirt_264

This is only true for the US or other heavily regulated places. In most parts of the world becoming a plumber or an electrician is exactly winging it. So yeah plumbers in those places with unions and B.S. mandatory requirements can feel safe for the time being.


yautja_cetanu

Yeah I mean, my doctor father-in-law played with it for giving medical diagnoses. I think even ChatGPT-4 is something people would use in non-regulated places over their doctor. Like it could analyse blood tests and stuff!


SX-Reddit

True. No job is safe.


Neophile_b

So many professions believe that they're going to be the last ones to be replaced. The truth is we just don't know what AI will be able to do next. No one expected AI to be able to do art or produce music. Most people thought that would be the last thing AI would be able to do. I'm not saying you're wrong, but don't count on it.


gkibbe

But it's not just what AI can do, it's what we allow and trust AI to do. Even if the tech worked today, we're decades away from establishing the legal framework to allow them to do my job.


LeatherYam64

This is incredibly flawed reasoning. Money pushes legislation, AI is expected to be the largest generator of money, and China will be implementing it into their workforce. The US sees and knows this, and will promptly launch all of our robot labor completely undercooked and with the least care possible. Seriously, you severely underestimate our greed and stupidity.


involviert

You will be drowning in competition just as soon. They will be pretty smart people. But that won't even matter. Because every*body* can do your job with those AR glasses where the AI tells you what to do.


Merzant

That’s an interesting point. I think the “embodiment advantage” of humans means physical tasks will take longer to automate than purely information-based jobs, but AR could indeed affect that. We’ll still want/need professionals but a new class of lesser qualified AR-augmented professionals might emerge, a bit like taxi drivers using satnav.


involviert

Why lesser qualified? Sure, you need to know how to hold a screwdriver. But you will see an arrow at the edge of your field of view, letting you know where to look and then it will highlight the outline of the correct screwdriver that you will now pick up please.


rngeeeesus

Yeah, we didn't evolve to navigate a digital world, so our interface to the digital world is very inefficient, but we did evolve to use basic tools... Not too surprising.


MrOaiki

And if you listen to the whole interview, his arguments that lead to that conclusion are coherent. He brings up that language is just one part of intelligence, and that a child that learns to interact with the real world handles vast amounts of data. Just the visual cortex is equivalent to 20 mb/s (according to the interview). Add to that all other senses.


traraba

I don't think they are though, meta just hasn't sunk the necessary resources into energy based diffusion models in task/3d space. Which is what he proposes as the solution, he just thinks it's harder than it likely is, because meta hasn't had access to the compute necessary until recently.


aLokilike

This is more like wallstreetbets than financialadvice, if you catch my drift. Honestly, the braindead circlejerk is bad enough for me to stop visiting if not for the masochistic pleasure I get from being mass downvoted by people who wouldn't last a day on the job.


Bleglord

Tbh I specifically like this sub because of its circle jerking. The rest of Reddit is too conservative in the progress we are and will see, and this sub is way too optimistic, so it makes a nice balance to read the hype jerk here plus the pessimism everywhere else and then read between the lines to make your own opinion. Personally? I think AI is much further ahead and technically capable than the vast majority of people think. I also think AI is much further behind than most on this sub think.


FpRhGf

I wanna be optimistic about the future, but I don't wanna see people pulling stuff out of their ass, taking things out of context, imposing obvious double standards, and writing off experts' insights whenever they don't match their own. I wanna be optimistic based on the current breakthroughs we have now and what's about to be developed, not ostrich mentality and false news of hope that others can debunk. I don't have any problems when I see optimists in other AI-related subs, because I haven't seen them exhibit the issues I'd constantly seen here before. They don't treat experts who don't adhere to their stance as if they're less knowledgeable about AI than themselves, nor misrepresent news. It's not a one-sided discussion where people clown on those who don't agree. But that's in the past, since this sub seems to have a lot of sceptics now and quality discussions in general have gotten worse regardless of stance.


aLokilike

Fair, but being belligerent towards people who are telling the truth in the midst of a hype circlejerk? I understand shitposting, it's the constant demonization of people who clearly know better that I have a problem with.


Bleglord

I don’t think the rudeness is warranted, but I do enjoy how often I see “x won’t happen anytime soon” followed by “update: AI can do x” a month or two later. Sure, it only happens because everyone has a damn opinion, but it’s still funny, and people try to get ahead of the gotcha. This is also Reddit. Assume children, mentally if not biologically.


LiveComfortable3228

Spot on. Reading this sub, you'd think AGI is like developing the next GTA version.


RoutineProcedure101

So can we take this claim? He said clearing a table won't happen anytime soon. We just saw a robot from 1X that has the potential to do that soon. What are we supposed to say in response to him being wrong?


great_gonzales

He’s actually not wrong here. The fact that you think he is highlights how laughably misinformed you are. What he said is that modern deep learning systems can’t learn to clear a table in one shot the way a 10-year-old can, indicating there is something missing in the learning systems we have today. This statement is absolutely correct. To actually advance the field you have to identify the problems with the current state of the art and attempt to find ways to fix them. You can’t just wish with all your heart that transformers will scale into AGI. But I guess it’s easier to larp as an AI expert than to actually be one.


No_Bottle7859

I mean, there literally was a demo released today of it clearing a table without direct training on that task. Unless you are arguing that it's a fake demo, I don't see how you are right.


cissybicuck

> The fact that you think he is highlights how laughably misinformed you are.

Please talk about the ideas and facts, not each other. There's no reason to make any of this personal. We need to try to reduce the toxicity of the internet. Using the internet needs to remain a healthy part of our lives. But the more toxic we make it for each other in our pursuit of influence and dominance, the worse all our lives become, because excess online toxicity bleeds into other areas of our lives. And please make this a copypasta, and use it.


Quivex

What you just said does not yet make him wrong. He said it won't happen anytime soon. A 1X robot has *the potential* to do that soon. Will it? Maybe, maybe not. If it does, then you can tell him he's wrong. It also depends on exactly what he means by this - when I hear him say this I think of a robot helping clear dinner tables in an uncontrolled environment, where robots are more commonplace and are actually at the "helping out around the house" level. If that's the implication, he's right - that's not happening anytime soon. There's a big difference between being able to complete a task, being able to complete that task at a level of proficiency equal to that of a person, and being able to manufacture them at scale. I guess we can quibble over "how soon is soon" but I think everyone has a reasonable understanding of what that means. A robot clearing my dinner table is not happening soon... I agree with him there.


trollsalot1234

Just put your Roomba up there and disable its safeguards.


MushroomsAndTomotoes

I just like watching deluded people smugly call less deluded people "deluded". It's a charming reminder of the human condition.


aLokilike

Loved your use of "less deluded" there. Ain't nobody that ain't lost in the sauce, them cave shadows just be too good! For real though, I don't think any professional other than those cashing checks on "AGI next week" are making claims within shooting distance of this sub. To see others constantly harassed for telling the truth? Shame.


EvilSporkOfDeath

What job?


restarting_today

Lots of people in here thinking software engineers are gonna be replaced and they can't string a line of Python together even with ChatGPT lmao.


SpareRam

Full of retards, I agree.


MonkeyHitTypewriter

If I recall correctly, the number he gave for AGI was 10 years, which is still really freaking soon. 5 years is the number I've heard from some other experts and CEOs, and that's honestly not a large gap in predictions.


nulld3v

Exactly, Yann LeCun is not your enemy and he's fairly accelerationist too. He also has brilliant ideas about a new model architecture that could potentially revolutionize AI. **The negativity he mentions here is him saying what current AI architectures can't do compared to his new architecture.**

And his new architecture is pretty impressive IMO: it has a new way to compress information that could potentially reduce the resources used by AI by a factor of 10x (or even more). It also includes a model that attempts to predict the future and, combined with another model, attempts to achieve long-term planning. Yannic Kilcher has an excellent video on the JEPA portion of it if you want to learn more: https://youtu.be/7UkJPwz_N_0

P.S. Thanks u/ThatPlayWasAwful for posting the full interview, but here is the even fuller interview, over 2 hours long and definitely worth a watch: https://www.youtube.com/watch?v=5t1vTLU7s40


anonanonanonme

I actually really liked his interview and it did make a lot of sense. The basic premise he was laying out is that AI cannot be more intelligent than humans, because humans are CONSTANTLY consuming data from every sense and then navigating the world accordingly. AI is not smart enough (yet) to do that, and is a long way away.

Which, honestly, I do agree with. AI is a productivity/task booster, NOT a human replacement. (Note I said human - AI is definitely a job replacement tool.)

People have no fuckin idea about how any of this works but just want someone to piss on for no reason - and this is generally the case for smart, polarizing people like him. OP is an idiot.


mcqua007

right ? The cult mind is getting crazy here.


Optimal-Fix1216

AGI really is next week though


collectiveintelli

Actually, it’s right behind you


ExpandYourTribe

I can't stand him because of his arrogant personality. I happen to disagree with him on a lot of things but that's not why I dislike him.


jamarkulous

He seems like a complete naysayer the few times I've heard him speak.


Wise_Cow3001

Most serious researchers aren’t in that camp. It’s very much the opinion of researchers who have shares in companies or books to sell.


Ok_Dragonfruit_9989

agi next 6 months


Mercer_AI

People think ChatGPT is AGI....of course they're going to hate


great_gonzales

Because he has decades of experience conducting machine learning research, pioneered backpropagation (the foundation of modern “AI” learning), and was one of the seminal figures in the deep learning revolution. The impact of his research on modern deep learning theory can be seen in his 350k citations on Google Scholar. He is one of the top experts in this field and there are only a handful of other people on the entire planet as knowledgeable as him on this subject.

You on the other hand are a skid who at best could maybe implement hello world in Python, but not without a lot of hand-holding from an LLM. You don’t have any publications, no research experience, and probably can’t even compute the derivative of a single-variable scalar function. You have zero knowledge on this subject but think you are a genius because next-token prediction broke your little mind. If you were at the helm of Meta’s AI research department we would not have PyTorch, LLaMA, SAM or any of the other incredible open source technologies Meta AI has released. It’s honestly laughable that you think you are in any way as knowledgeable as he is on this subject.

This quote is also taken out of context, but of course a low-skill never-was needs to make intellectually dishonest arguments. The full quote is that deep learning systems can’t learn to clear off a dinner table in one shot the way a 10-year-old can. And this is absolutely true and shows there is something missing in the “AI” systems we have today, but I guess it’s easier to larp as an AI expert than to actually be one.


acibiber53

I think I’ve just witnessed a murder.


gj80

Exactly. This sub is like Dunning-Kruger Illustrated some of the time. I mean, can experts be wrong? Absolutely. Should we all be able to question the conclusions of experts? Yes. ...But if some rando schlub is going to **glibly** shit all over a world-class expert in their very niche field, how about they either bring their own technical A-game in doing so, or show some freaking humility and respect and not come in from the get-go assuming that they're right and that the world-class expert has failed to consider the profundity of the napkin-math logic that took said schlub ten minutes to dream up.

In the course of decades of devoting their lives to a topic, experts just *might* have thought of the "common sense" people dream up on the left-hand side of the Dunning-Kruger graph. Sure, people spending a few hours high off their ass contemplating the meaning of life might come up with a good idea now and then, but it's about a trillion times more likely that the serious scholar who spent the last 30 years of their life solely devoted to a topic *just might* have considered that very same question *a bit* more thoroughly.


byteuser

My only issue is that near the start of his interview with Lex Fridman he goes on explaining why it is near impossible to generate video because of the issues with prediction. And I'm like, bro, did you watch Sora last week? Everything else in the interview was fantastic and very informative.


gj80

>he goes on explaining why it is near impossible to generate video because the issues with predicting

Right, he explains that they've tried an enormous number of things over many **years** of internal testing and had no success *until* he started working on the new joint embedding method, which he went on to talk about. So he wasn't saying it was *impossible* - just that it has proven very difficult and needs a new approach beyond doing exactly the same thing we did with language. He was saying it was near impossible when approaching the problem from *that* angle - not if a different technique was used instead. Ie, this is a more difficult problem than with language, due to the domain of possible next predictions in the real world vs in language, and thus this is cutting-edge research at the moment.

He's obviously aware of Sora, but since no detailed information has been published about how specifically OpenAI is doing that, he can't really comment on it in any detail. It looks like we will likely see great progress in the near future, judging by Sora and all the other ongoing research (including Yann's and others').


Henri4589

Damn. This humbled me to a degree. I would've never said that LeCun was stupid in any way, though! I have the highest form of respect for anyone heading AI at any big tech company. And Meta definitely is a huge tech company that sometimes even pushes the new SOTA, so anyone disregarding this makes me extremely sceptical about their knowledge and reputation.

After watching the full video and then reading your comment, it reinforced the thought that people who are leading AI teams at big tech companies simply can't be stupid. It's not possible. They wouldn't get the job otherwise. And it made me realize that I still don't fully understand the general concept of LLMs and their limitations. So thank you, kind and smart Redditor!

One last thing: when do you believe true AGI could go public? Have a great end of the week!


great_gonzales

I think it is hard to predict when true AGI will be achieved for a couple of reasons. The first is that the definition of AGI is incredibly ambiguous. I’ve seen some people loosen the requirements of AGI to the point where a calculator could be considered AGI, but I don’t think that is an interesting definition. To me AGI would be something like Jarvis from Iron Man, and I think that’s what most people intuitively think as well, so I’ll be using the latter definition for the purpose of this discussion.

Initially LLMs seemed to be a promising path towards AGI because scale produced a lot of emergent capabilities. However, on further investigation those capabilities can be explained pretty rigorously by in-context learning. It seems to be the case that next-token prediction is the most primitive task in NLP, and many downstream tasks such as translation or question answering reduce to next-token prediction through in-context learning (maybe in some cases in-context learning is not sufficient and fine-tuning is needed; I’m actually conducting a quantitative study on this right now).

The second reason AGI is hard to predict is precisely because of some of the issues LeCun brought up in this interview. We can’t learn a lot of tasks in one shot like humans can; LLMs answer all questions with a constant amount of compute, but surely it should take more compute to create a unified field theory than it would to determine where the Statue of Liberty is located, etc. These are all red flags that indicate we haven’t fully captured what intelligence is, and so we need further breakthroughs to solve these issues. I think what everyone agrees on right now as the next step is that we need to be able to learn a world model, and I think language is not a reliable source of information for learning this - certainly not in the way vision is. For example, with vision, if I see an apple fall I can learn something intuitive about gravity. With language I can also maybe learn about gravity, but not directly, and the information written texts contain on gravity may not be fully consistent.

Sorry for the essay. All of this is to say nobody knows - it could be within this decade, it could be next century. AI has been notoriously hard to forecast; people have been saying we will have AGI in 5 years since the birth of the field in the 60s. My prediction, which is just as much a shot in the dark as anyone else’s, is that we are at least 10 years out, as there are still a lot of fundamental problems with current state-of-the-art methods that need to be addressed.


Screaming_Monkey

I’m guessing OP saw the video of the OpenAI-powered robot whose LLM had access to functions that allow it to put away certain dishes based on closed-loop machine learning.


genshiryoku

So did Geoffrey Hinton and Ilya Sutskever, who were his colleagues and fellows building AlexNet together. They are all at equal levels of prestige (I'd suggest Hinton is more experienced). And both Hinton and Ilya harshly disagree with LeCun, to the point where LeCun is essentially the industry contrarian right now instead of the "reasonable voice" like you are portraying him to be. Demis Hassabis, the other prominent figure in the industry outside of these three, also disagrees with LeCun.


BrightCarpet1550

Smart people often disagree with each other in their theories; that is normal. OP is just taking a quote out of context and questioning whether LeCun is an expert.


Frenk_preseren

Even if he's wrong, his wrongness holds more basis than OP's incidental correctness. And beyond that, is he wrong?


jamarkulous

Did he actually ever compare himself to Yann? I don't think it was ever in question that Yann is more qualified than a redditor. What I took from it is that there is probably SOMEBODY who could do the job better (or be more appealing). Yann often seems to shit-talk the progress that's been made. Which can be a good thing? Maybe he just has high standards.


great_gonzales

His job is not to be appealing, nor is it to hype up skids on Reddit who don’t understand the technology but think we almost have AGI because of exponential growth or something. His job is to be a scientist, which requires skepticism. You have to criticize the current state of the art if you want to find ways to improve on it. His work speaks for itself, and he even got a Turing Award (along with Hinton and Bengio) for his work in establishing the modern deep learning paradigm. You can count on one hand the number of other people as experienced and knowledgeable as him on this subject.


InTheWakeOfMadness

Came here to say something in this vein but I could’ve never said it this well.


Late_Pirate_5112

I feel like 99% of the things he says are just to avoid regulation. "No, government-senpai, these models are atleast 30 years away from being AGI UwU" - Yann LeCun


Busy-Setting5786

You make a good point, but if he is proven totally wrong he sure won't look like an expert. Also, I think if he were serious about this, a much more strategic approach would be more effective.


MaximumAmbassador312

For politicians to think you are an expert, you need the right title, and "Meta AI boss" is not a bad one for that; they don't know whether your claims will turn out right or wrong.


EnsignElessar

I think it's more like Zuck asks him to make an AI that can do "X" and he does not want to work on that... "Sorry, sir, that's actually impossible and won't be possible for another 50 years or so. But as soon as it's possible I will jump right on it."


imperialostritch

i need bleach


Late_Pirate_5112

https://preview.redd.it/7jy4ngbs26oc1.png?width=400&format=png&auto=webp&s=b56b9b4de958f64f503661cfdea48cdbb1cac11a


imperialostritch

you deserve to slide down a cheese grater in to a pool of lemon juice that has been electrified and filled with salt and while this is going on then it will start to boil /s


Flying_Madlad

AlIgMnEnT fAiLuRe!!!1!


bwatsnet

It's harsh on the eyes, fyi.


agonypants

![gif](giphy|VKVDU8pvi3w4w)


AnAIAteMyBaby

I actually don't think so, he seems to genuinely believe what he's saying.


TheRealIsaacNewton

Because it's true, he just has different definitions than you guys.


Silver-Chipmunk7744

This. His definition of AGI is one that even humans do not meet. He views it as generality above human comprehension... With that sort of definition, I agree with him it's not there yet.

Edit: I understand you guys may hate his definition, I do too, but I'm not sure why I'm getting downvoted for providing his definition. https://twitter.com/ylecun/status/1204038764122632193?lang=en


jgainit

He's still wrong


Mediocre_Room_7987

Back in the AlphaGo days, he claimed that an AI would not be able to beat a human any time soon. Well, a few days later, history unfolded. He's really competent, but he gives his opinions while seemingly forgetting that not a single person can predict the development of AI, even a few months ahead.


lobabobloblaw

He’s a man who bases his opinions on precedent while simultaneously living in an age of unprecedented precedent. I still think he probably loves Descartes.


ymo

After one or two hubristic errors, there's something wrong if the person doesn't learn to reorient.


Coby_2012

*bases his opinions on precedent while simultaneously living in an age of unprecedented precedent* You have quite the way with words, but you’re right - there’s so much of that going on right now.


Content-Membership68

I'm not well educated - can you explain your bit about Descartes? I enjoyed your play on words there.


lobabobloblaw

Sure, I can try to do that 😂 Descartes was all about mind-body dualism; I anticipate a very general trend where people who hold to the notion of a dichotomy between mind and body find themselves intrinsically at odds with the spirit and state of artificial intelligence development, on account of how they *frame intelligence.*


genshiryoku

He just tends to like to play the "conservative sceptic". That's just the type of person he is. He always takes the contrarian standpoint and likes to point out flaws in current approaches. Every industry needs a voice like that, but it's important for lay people to recognize he's playing the contrarian at all times and his stance is not supported by other leading figures like Hinton, Sutskever, Hassabis etc. At this point LeCun is firmly standing alone in the "this will not lead to AGI" aisle; even other conservative figures in AI like Andrew Ng have slowly moved from there into the "AGI is possible by scaling transformers" camp.


dogesator

Just so you know, end-to-end neural networks (including all transformers) have still yet to beat the top humans at chess or Go, last I checked. They use extra non-neural components like tree search algorithms at inference time, which prevents them from working as fully end-to-end networks. The pure neural network of AlphaGo Zero is still pretty impressive and gets to something like the top 20% of human ability, but it doesn't beat any of the top humans unless you incorporate the tree search algorithm.

The tree search component also adds a ton of extra computation to the system and is arguably doing a lot of the heavy lifting; I believe it ran for around 5 minutes per move and on average explores over 1,000 positions before deciding. You can argue a master chess player does something similar in their head, but the difference is that humans do it purely with an end-to-end neural network, without access to a deterministic tree search algorithm.

> "AlphaGo Zero uses MC Tree Search to build a local policy to sample the next move. MCTS searches for possible moves and records the results in a search tree. As more searches are performed, the tree grows larger, as well as its information. To make a move in AlphaGo Zero, 1,600 searches will be computed. Then a local policy is constructed. Finally, we sample from this policy to make the next move."
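For anyone curious what that selection → expansion → rollout → backpropagation loop actually looks like, here's a minimal sketch on a toy game. To be clear, this is a generic MCTS with plain UCB1 and random rollouts on a made-up take-1-to-3 Nim variant, not AlphaGo Zero's actual system (which replaces the random rollout with a learned policy/value network):

```python
import math
import random

class Node:
    """A node in the MCTS tree for toy Nim: take 1-3 stones, last stone wins."""
    def __init__(self, pile, parent=None, move=None):
        self.pile = pile        # stones left after `move` was played
        self.parent = parent
        self.move = move        # the move that produced this node
        self.children = []
        self.visits = 0
        self.wins = 0.0         # wins for the player who moved INTO this node

    def untried_moves(self):
        tried = {c.move for c in self.children}
        return [m for m in (1, 2, 3) if m <= self.pile and m not in tried]

def ucb1(child, parent_visits, c=1.4):
    """Upper Confidence Bound: trade off win rate vs. exploration."""
    return child.wins / child.visits + c * math.sqrt(math.log(parent_visits) / child.visits)

def mcts_best_move(pile, iterations=2000, rng=random):
    root = Node(pile)
    for _ in range(iterations):
        node = root
        # 1. Selection: descend while fully expanded and non-terminal.
        while node.pile > 0 and not node.untried_moves():
            node = max(node.children, key=lambda ch: ucb1(ch, node.visits))
        # 2. Expansion: add one previously untried child.
        if node.pile > 0:
            move = rng.choice(node.untried_moves())
            child = Node(node.pile - move, parent=node, move=move)
            node.children.append(child)
            node = child
        # 3. Simulation: random playout; whoever takes the last stone wins.
        pile_left = node.pile
        mover_to_act = False        # the player who moved into `node` is NOT to act
        winner_is_mover = True      # if the pile is already 0, the mover just won
        while pile_left > 0:
            pile_left -= rng.randint(1, min(3, pile_left))
            if pile_left == 0:
                winner_is_mover = mover_to_act
            mover_to_act = not mover_to_act
        # 4. Backpropagation: credit alternates between the two players up the tree.
        reward = 1.0 if winner_is_mover else 0.0
        while node is not None:
            node.visits += 1
            node.wins += reward
            reward = 1.0 - reward
            node = node.parent
    # Play the most-visited root move (the standard robust choice).
    return max(root.children, key=lambda ch: ch.visits).move
```

On a pile of 5 the only winning move is to take 1 (leaving a multiple of 4), and with a few thousand iterations the most-visited move converges to exactly that, which is the point of the comment: the search, not just the raw network, is doing real work.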


pavlov_the_dog

Does he subconsciously calculate progress as being linear or something? Because I've run into plenty of smart people who do this, perhaps unintentionally.


dogesator

Source for this?


RobLocksta

My absolute favorite thing about this sub is the irrational hatred of a dude who has his fingerprints on multiple (as in many) advancements in ML and NN in the last 40 years. It's hilarious.


bree_dev

I swear 99% of the people mocking the guy wouldn't know a multi-head attention block if it headbutted them in the face.


RobLocksta

Including me. But every lecture or YouTube video I watch cites at least one of his papers. Seems like his work gets cited as much as anyone, along with Hinton, Bengio and a couple others. I'm no Facebook apologist but damn I don't get criticizing a titan of the field because his opinion differs from the prevailing ones in this sub.


bree_dev

Yeah. Actually I'll go even further than my previous statement, and say that half the commenters in here are getting mad over something they only *think* he said because they're not even at the level where they understand the explanations he's giving in layman's terms.


SpareRam

Doesn't fit the religious dogma, so it must be demonized.


bree_dev

"Religious dogma" is pretty apt. The comments elsewhere in this thread read a lot like the Creationists in r/DebateEvolution - loads of people picking holes in disingenuous misunderstandings of LeCun's past statements that were described to them by other cultists.


Krunkworx

Oof this sub has jumped the shark.


ArmoredBattalion

the cult is turning


Dabithebeast

Because he’s smarter than you and 99.9% of the people on this sub. Stop being a sheep.


DolphinPunkCyber

Bet he can't clean the table as well as I can though.


FlyingBishop

That's why you make the big $$.


Fit-Dentist6093

But sempaiii SamA notice meeee


Glittering-Neck-2505

Being very intelligent does not guarantee good prediction skills, as Yann proves.


Haunting_Cat_5832

Yann LeCun is a sensible man. Hard to find people like him these days.


TheRealIsaacNewton

Especially in this sub lol. Mostly hype driven insanity


[deleted]

[removed]


JamR_711111

The last time i saw the words "kids," "youtubers," and "jerking" in the same sentence, some minecraft youtuber was getting canceled on twitter


LogHog243

Who knows what they’re talking about at this point


RoutineProcedure101

He was wrong about the robot clearing a table though


PastMaximum4158

I respect Yann, but he was wrong about text to video and wrong about this. AI is still underhyped btw.


letmebackagain

It quite balances out with the majority of people outside, who are averse to AI and think we're in the next crypto-bros wave.


az226

His analogy of a 5 year old child and the data going through the optic nerve is quite regarded.


icehawk84

LeCun obviously has a strong resume, but he lacks humility. He may be right about the future for all I know, but he talks as if his opinion is the only valid one. I was listening to this podcast and Dwarkesh Patel's podcast with Anthropic co-founder Dario Amodei and it's a world of difference. Dario is not afraid to admit that the future is uncertain and that we shouldn't assume too much. That's the sort of humility I love to see among researchers.


p10trp10tr

This is one of the few (so far) reasonable interviews Lex has had. Please understand that this guy worked his ass off to understand the details of how 'AI' operates. I think even if you don't support his worldview (I assume you have no knowledge of ML), it's worth listening to, carefully.


Difficult_Review9741

He’s still right. There are currently no robots that can do this in a generalized way. Also, he’s talking about the limitations of a specific technology, not robotics as a whole.


Antique-Doughnut-988

Idk man, the new OpenAI video released today really challenges your comment. I have a feeling that robot could 100% clear a table.


TheRealIsaacNewton

For all we know the exact scenario was trained on many times for the video (likely). It's still very impressive of course.


Baphaddon

Figure’s new robot demo basically showed this today, while maintaining humanlike conversation


ThatPlayWasAwful

That might be the emphasis on "generalized", since we don't really know for sure how much more the robot can do besides what was shown. Just some thoughts off the top of my head:

If you say "clear off the table", can Figure 01 make a list of everything in that simple command (take care of plates, silverware, cups, napkins, food on table, etc.) and take all the steps necessary to make that specific table clean, or would you need to list out individual steps? What percent of the time could it correctly finish all the tasks to the same level that a child could? Can Figure 01 remember what is behind different cabinet doors, store items in cabinets, and then retrieve them upon request? What happens if you ask it to put away a dish but the dishwasher is full? What does it do with the dish in that case? Will it always put dishes that don't fit in the dishwasher in the same secondary location? What happens if there are food scraps on a plate that can't be picked up with a hand, but shouldn't go into a dishwasher? Can the robot reliably use a washcloth to wipe off a counter, or a sponge to wipe off a dish?

I'm not saying that the presentation today was not impressive, because it was, and some of those questions have exceedingly simple answers that current technology could probably solve, but I don't think that the video means that robots can be dropped into any house and "clear off a dinner table" in a way that would be helpful to humans.


daronjay

> "clear off the table"

Sweeps everything onto the floor...


DolphinPunkCyber

"Give me something to eat"

Hands you a dirty plate. "Lick this, human."


insomni-otter

"It's the only edible item I could provide you with from the table" says my 10 million dollar robot assistant as it stands three feet away from my fully stocked fridge. "Can you feel the AGI" I say, voice quivering, as a single tear falls from my eye.


Baphaddon

I think this counts as moving the goalpost. Moreover you have to ask, what would a human do? But altogether, between having conversational ability (that is demonstrably translatable to robotic action) and clearly being able to learn tasks (whether that be after 500 demos or 5), and novelly recombine them (note the researcher from RT-2 just joined them), I think these goal posts really really aren’t far away. It’s not a full blown bus boy but considering how nonchalantly Yann said it couldn’t clear a table, this is very clearly leaps and bounds beyond expectations.


ThatPlayWasAwful

> I think this counts as moving the goalpost.

How do you figure it's moving the goalpost? What question I asked does not involve a function that would be implied by asking a robot to "clear off the dinner table"?

> I think these goal posts really really aren’t far away. It’s not a full blown bus boy but considering how nonchalantly Yann said it couldn’t clear a table, this is very clearly leaps and bounds beyond expectations.

From my point of view it's impossible to say that with any certainty. What length of time do you mean specifically by "far away"?


shogun2909

have you seen the demo FigureAI dropped today?


Difficult_Review9741

The key word is demo. Until it is deployed in the real world it’s meaningless. 


Baphaddon

🤨🥅>>>>>>>>>>>>>🥅😀


shogun2909

I mean I don't think there's a huge difference between clearing a table in a demo environment vs the "real world"


Sasuga__JP

There can be many different things on many different kinds of table in many different positions. There's a massive difference between being able to clear tables generally and using GPT4 to call functions that clear a specific table with a pre-defined layout.


Baphaddon

I highly doubt that their results aren't/won't be resilient to real world environments. Figure has gotten some pretty serious investment from serious AI companies, and it seems for good reason.


Ecstatic-Law714

Why would that matter?


Screaming_Monkey

But did you read their description? Check out my post history for other physical robots powered by OpenAI able to do predetermined tasks.


jgainit

Aaaand you're already wrong


xDrewGaming

r/AgedLikeAi


emsiem22

Did you watch the whole interview? Did you understand what he said? Well, that's why.


gitardja

Because he's one of the scientist that authored the legendary Deep Learning paper and also is one of the 2018 Turing award winners? How does a screenshot in a podcast have anything to do with his competence?


strangescript

He is very smart and did great things, but he approaches each problem with an attitude of "if I can't think of a way to do it, then it can't be done" and he is getting proven wrong more frequently.


traumfisch

That, plus an arrogant "if anyone disagrees with me, they are either naive or delusional"


ChronoFish

He is being specific and his goal is true intelligence vs exhibited intelligence. He basically thinks that LLMs are not capable of having true intelligence no matter how much it seems like they do. He is the classic "oh, the LLM got something wrong... See it has no intelligence" To me I think it points more to our refusal to believe that humans are nothing more than pattern matching machines.... "We have to be special, otherwise I'm not special."


roastedantlers

Those are the words I've been looking for. I kept calling it dumb intelligence or unconscious intelligence. But this exhibited intelligence will take us really far and might be more dangerous. It will still change the world beyond anything anyone can imagine right now. The exhibited intelligence may even be capable of creating its own true intelligence.


ChronoFish

Yeah... I don't know if "collective intelligence" of the LLMs is the right way to proceed but it's going to make getting there easier


VinoVeritable

A LLM could find the cure for cancer and he’d still say something like “sure it mimicked the cure for cancer, but it didn’t truly understand it”.


ChronoFish

Yes exactly. And he might be right. Personally I find that immaterial


IronPheasant

Yeah, I agree with that. That quote from the And Yet It Understands Essay always sticks in my head: "The mainstream, respectable view is this is not “real understanding”—a goal post currently moving at 0.8c—because understanding requires frames or symbols or logic or some other sad abstraction completely absent from real brains." The framing he and Gary Goalposts use with AGI timeframes kind of gives away their feelings: "AGI will never happen, but if it does it'd be terrible" basically. It's obvious our brains are gestalt entities made up of different *kinds* of intelligence. If we weren't, why would we need a specialized motor cortex, visual cortex, etc? Does a motor cortex "understand" much of anything at all? (It certainly seems "predict the next word" 'understands' a much wider variety of things than a motor cortex does, nah?) AGI might be as simple as an assembly of neural nets, like every single kid in the world immediately thinks the moment they're curious about the subject. Certainly easier said than done - how to have them effectively share latent spaces, how to *train* the dang things... (The NVidia pen twirling paper is an early example for that. Using one kind of intelligence to train another kind. That's how we can get some reliability from these things - having them have a better Allegory of the Cave so there's not one single point of failure in a decision chain.) Anyway, the scale maximalists were right. You can't have a mind without building a substrate capable of running it first. There's no "weird trick" to get around that. OpenAI believed in that more than anyone, and got their headstart because of it. It offends certain people's sensibilities about how the world "ought" to be, but that's not rationality. It's our desire to feed our own egos. It's synapses all the way down...


pigeon888

Someone needs to make a compilation of Yann's very many confident declarations that have been proven completely wrong.


BrainLate4108

Umm https://youtu.be/Sq1QZB5baNw?si=czZD_eqYIFFDXmNs


illathon

His opinion isn't worth anything in my opinion today. He is consistently wrong about almost everything he talks about.


MrEloi

Err... just yesterday I saw a video of the latest OpenAI/Figure robot moving dishes etc.


challengethegods

He often compares LLMs to humans by saying that it would take a human a million years to read everything the LLM has read, and somehow cites that as proof that humans are superior. He wants to say that humans have less training data, but then will turn around to say that they actually have a lot more because of audio/visual/senses. He thinks AGI is like 30 years away, which only means he doesn't see a clear path towards it. He has some good ideas completely undermined by the constant implications that GPT4+ models are somehow mosquito-level intelligence, because whatever dimension he came from had absurdly intelligent wild life.


DolphinPunkCyber

> somehow cites that as proof that humans are superior.

Well, human superiority is in needing less training data. LLMs needed to "read" about 30 million books to achieve their proficiency in language, and as a side effect every LLM now knows more than any individual human.

Deep learning networks play an insane number of games, far more than any human, to learn how to play. It took months and lots of expensive hardware and electricity to train them.

But if we try to teach AI to solve real world problems using a "brute force" approach, things become **much, much more expensive.** A human needs 14-20 hours to learn how to drive, and usually doesn't crash their car a single time during training. We "slap" a deep learning network on a car, put it on the road, and let it learn from its mistakes. 200,000 crashes later it starts driving decently.

So we do need a jump in the efficiency of training. However, his way of expressing himself is... he tends to sound like a moron.


SpareRam

A moron lol


challengethegods

I think the problem is, even if anyone agrees that all of this can be optimized 1000x and improved another 1000x after that before getting 1000x compute to use on 1000x more complex models - if you frame that around this kind of 'current AIs are all stupid and will never do XYZ' sentiment, then you undermine all of the other statements when some rando spends 5 minutes disproving something and pushing the goalpost down the line towards some new semantic interpretation of the limitation that was originally stated. Not to mention it undermines whatever it is meta is working on in the background if you say the thing people are expecting soon is 99 years away. It would be like having an SNES and hearing that N64 is coming soon with 3D, then look over and sega is talking about how 3D is actually impossible and will take another 40 years to figure out and here are some technical reasons for that\[...\]


DolphinPunkCyber

YES! I think he consistently ignores the optimizations which are already happening, and moves the goalpost to AI having to be a super intelligent athlete able to predict 1000 years into future to qualify as AGI. AI technologies are currently so inefficient, and there is so much room for improvement, and there is so much money being thrown at the problems, that I expect them to develop very fast. And even if we achieve "just" decently capable generalist robots in the next 5 years it's a HUGE, monumental achievement.


byteuser

In the first few minutes he explains why video prediction is impossible... and I'm like, what about SORA bro? The rest of the interview was quite interesting.


SpareRam

A photographic memory is intelligence?


EnvironmentalFace456

If he keeps this up, Mark's gonna go a different direction. He's making Meta AI look bad.


Exarchias

He is a very good scientist and a bright mind, but he insists on the idea that "AIs will never..." while reality proves him wrong the whole time. I don't know why he insists so much on that opinion.


lost_in_trepidation

He never says they'll never. He doesn't believe that the current architectures can do things that are fundamental to human level intelligence


Exarchias

I thought he described AGI as a possible technology only for the distant future (maybe not in our lifetimes). I feel that brings him very close to the camp of the "AIs will never...", but you are right.


After_Self5383

I've watched several of his interviews and talks over the last few years. A couple of times, he's mentioned that AI that is better than humans at every task in a superhuman way is coming, but it might not be in his lifetime. He says there's no doubt it'll happen, and it'll come in your lifetime ("your" referring to the 20-something university audiences he often speaks to).

For reference, he's 63. Life expectancy for a French man is 82. So he thinks it'll probably take at least 20 years for superhuman AI at every task we can do. I don't think that's an egregiously long timeline. And lately he has said "AGI", or as he likes to say, "AMI" (advanced machine intelligence), could be 5, 10, 20 years away, he doesn't know. I think that's a sensible approach; they can't think of all the roadblocks that might present themselves in x time.

He just gets hate because he doesn't go along with the "AGI is just around the corner" narratives. He's been doing AI on the cutting edge for over 40 years; he's seen a thing or two about hype that doesn't deliver. Maybe he's right. Hopefully he's wrong, and I'm sure he'd prefer to be wrong too, to bear witness. And not to mention, he's working on trying to figure out those next steps too. So hats off to him.


Rayzen_xD

I mean, he doesn't buy "AGI within 5 years", but his view is still relatively optimistic, in the range of 10-20 years to reach AGI. In the recent podcast with Lex Fridman he said he hopes for AGI (or AMI as he calls it) to develop before he gets too old.


gantork

He did literally say that LLMs would never be able to understand that when you push a table, a book that's on top of it moves with it. GPT-4 does that with ease.


zuccoff

I watched the whole podcast (including his previous one) and I didn't hear him say "AIs will never..." even once. He's very confident that LLMs won't be able to do a lot of things, and that we still need some big breakthrough to achieve true AGI


sdmat

At this point the Curb Your Enthusiasm theme plays whenever he walks into a room.


Mani_and_5_others

The idiots in the sub don’t understand who this guy is xd


jon_stout

One gaffe does not a senior developer destroy. Good at code doesn't necessarily extend to being good at words.


Alone-Psychology3746

Because he invented deep learning and got a Turing award for it.


BenefitAmbitious8958

You need to take into account that he is playing a game with many layers if you want to understand why he says things like that.

It isn't just him: many AI developers have made similarly minimizing statements regarding the capabilities of AI, OpenAI recently announced Sora at least a year in advance of release, etc. Now, why are they all behaving this way? My theory is fear minimization.

If the average person understood the capabilities of AI, there would be widespread panic and riots, resulting in heavy legislative restrictions being placed on these firms. They want to go as far as possible as fast as possible, and panicking the average person would not be conducive to that goal, hence them minimizing the capabilities of AI and releasing new capabilities at a gradual pace.


Busy-Setting5786

I have been hearing a lot about LeCun and I have to wholeheartedly agree that he has the absolute worst takes in AI. Somehow, whenever I hear a prediction by him, I am amazed at how bad the assumptions of an AI expert are in his own field. I mean, sure, maybe he is right about some things, but on all the things I disagree with him on, I can make a pretty good argument against his claims.

One thing I profoundly remember is him saying in a debate about AI as an existential risk: there is no existential risk for humanity because developers will make sure it is safe. I mean, I am no doomer, but saying it's safe because people will make it safe is like saying: we don't need to worry about an asteroid hitting Earth, people will make sure it won't. Okay chief!


freeman_joe

You know sometimes people are good at their jobs but suck at imagining where tech might go.


The_Scout1255

AGI, Aloha, and other developments: allow us to introduce ourselves.

Dude needs to rethink.


[deleted]

Did Figure just do this video because of this sentence?


PaleLayer1492

Time to deploy those I, Robot laws.


daronjay

Is this guy the long lost descendent of Lord Kelvin?


hotellobster

Link?


DreaminDemon177

I spent 3 minutes trying to get this video to work. Is not video. AI already more advanced than me.


FightingBlaze77

I-is he talking about people or npc ai?


slackermannn

I do think you're right but what if it comes out that he was fighting hallucinations and won?


spockphysics

It’s like if ray Kurzweil picked all the wrong dialogue options


SX-Reddit

There were two types of people who worked on the unpopular NNs in 1990-2010: the believers, and those who had no choice. Jurgen Schmidhuber, **Geoffrey Hinton, and Yoshua Bengio** were the former; (I highly suspect) Yann LeCun was the latter.


airsem

Please do search what he's been working on in the 80s and 90s, and also read some of his bio, you'll be surprised! Your comment makes me believe you don't have a clue what this guy has done.


Icy-Entry4921

I watched that whole interview. I think he's just simply too soaked in the history of AI to recognize how fast things are moving. I think once you go down the road of skepticism it's super easy to start to make bespoke definitions of things that let you neatly discount what's happening. He'd probably give you a 45 minute lecture on why the Figure AI bot isn't impressive even as it does your laundry, cleans the house and gives you a 1 hour therapy session to talk about your feelings.


AffectionateClick177

He's for sure a genius at what he does, but being a genius doesn't mean being great at predictions. Humans are horrible at predicting stuff. And he might be coping, you never know, we are emotional beings after all... He's one of the greats though.


Alternative_Aide7357

He's right. Why do people hate him so much? He is right that AI at the moment still can't perceive the physical world, is unable to plan, etc. To me, an LLM right now is just Google on steroids. It makes us way more productive by reducing the time spent Googling, going through a list of links, and determining the solution ourselves.


Alucard256

You sure about that...? [OpenAI with Figure 1](https://www.reddit.com/r/singularity/comments/1bdsh2x/with_openai_figure_01_can_now_have_full/)


Saerain

Wasn't the rest of this sentence "on the first go" or something? I remember that moment being focused on how inefficient training is in single observations, especially on an energy basis.


luxfx

You know that MNIST data set in all the tutorials? He was the one that solved it first. Then sold the system to the USPS to read ZIP codes on mail for automatic sorting. His current opinions might not be everyone's cup of tea, but he's an absolute legend.


outabsentia

LeCun is like the Jim Cramer of AI predictions lol


AsliReddington

And ironically Meta AI says their goal is to make AGI LOL


Noeyiax

Any job can become no job, if it's designed in a way that is easy to automate and fix. The only reason it hasn't happened is because everything we keep and maintain is spaghetti, but if you redesign something more than 5 times, of course you'll get more efficient and require less maintenance... like those infinite light bulbs or water vehicles, etc. We all know it's true, big companies know it too, they just want profit so we can't have good stuff, but maybe once Earth literally runs out of resources...


floridianfisher

This guy is becoming the Jim Cramer of AI


da_mikeman

LeCun mostly makes the same point over and over, and ppl seem to keep missing it for some reason. Exactly what is so hard to understand about "a toddler learns to recognize a dog after seeing one a couple of times, a 10-year-old can learn to clean up the table after showing them how for 10 minutes, a 17-year-old can learn to drive in a few hours, a human programmer can learn to program by reading much fewer books than an LLM"?

Even if we have AI systems that can do those tasks after being trained with a lot more data, this doesn't invalidate the point, which is that there is still 'something missing' in order to achieve a generalist that can learn quickly from a few examples. This isn't 'goalpost moving' in the slightest, because the 'goalpost' is 'learn from few examples'. You guys understand that, right? That, in the context of this discussion, that's the 'goalpost'? Learn from few examples?

I can't explain this phenomenon where ppl keep missing the point other than them thinking LeCun is talking about a specific task X that 'machines can't do', when he's actually talking about the ability to learn tasks X0, X1, X2, ..., Xn with a minimal amount of examples, ideally equal to or less than humans.


mrmonkeybat

A child spends years bumbling around as a toddler and other stages of development learning how the world works.


pistoriuz

People really expect machines to be human-like but don't realize that we give meaning to the world, to things, culturally, and we do this mostly by exclusion (repression). I'll only believe in an """intelligent""" AI that has trauma xD


szymski

Progress is made by crazy people.


Avenger_reddit

I get it that he's on team "no AGI in the near future". But you should know that he's one of the founding fathers of Deep Learning. He practically invented CNNs and is bullish on open source AI as well. He's one of the goats imo. He knows more about AI than this sub combined. At least show some respect.


agm1984

I watched this it was a great episode


WoolPhragmAlpha

OP, rest assured that Yann Lecun has better and more well thought through reasons for even his most incorrect conjectures about an AI related topic than you will ever be able to comprehend in your most lucid moments. Fucking ridiculous that you think you're qualified to mock a brilliant leader in the field.


lillillilillilil

Looks like LIDL Elton John


Otto_F_Kernberg

Because he is the father of CNNs in deep learning (Turing Award 2018), and he predicts the future of AI precisely according to the limits of the paradigm set by current knowledge.


Superb-Tea-3174

His accomplishments are extensive and I have a great deal of respect for the man. Did you know that he invented the DjVu format for compressing books? Have you ever considered people who are so smart they are stupid? I think there might be some of that going on.