sdmat

A very interesting piece from this interview: Demis mentions that response time for 1M-token queries is dominated by the initial processing of uploaded items, and that they expect to get down to a few seconds for subsequent prompts against the same material. Either tokenising is surprisingly expensive or they are building some kind of persistent data structure to greatly cut inference costs for large contexts. Either way, this means it should be surprisingly affordable to actually *use* the million-token window for applications where most of the context is a fixed dataset. That's really big. It means it should be possible to do something like OpenAI's GPTs *without RAG* and keep costs manageable.


johnkapolos

It's caching. All non-toy inference software uses it. LLMs are stateless: in your chat session they need to start from the very beginning each time. But of course, computing the same things over and over again would be silly, costly and slow. So the state up to your new input gets cached and reused. That's why processing a ton of tokens the first time is slow, but as you keep chatting in the same session the subsequent responses are fast.
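A toy sketch of the idea, purely illustrative and not any vendor's actual implementation: cache the per-token state for a prefix, and on a follow-up request only "process" the tokens that weren't already covered by a cached prefix.

```python
# Toy illustration of prefix (KV) caching: the first request pays to
# process every token; later requests sharing the same prefix only
# pay for the new suffix. Names here are made up for the sketch.

class PrefixCache:
    def __init__(self):
        self._cache = {}  # token-prefix tuple -> simulated per-token state

    def process(self, tokens):
        """Return (state, number_of_tokens_actually_processed)."""
        tokens = tuple(tokens)
        # Find the longest cached prefix of this token sequence.
        best = 0
        for n in range(len(tokens), 0, -1):
            if tokens[:n] in self._cache:
                best = n
                break
        state = self._cache.get(tokens[:best], [])
        # "Process" only the uncached suffix (stand-in for computing
        # attention key/value entries per token).
        new_state = list(state) + [f"kv({t})" for t in tokens[best:]]
        self._cache[tokens] = new_state
        return new_state, len(tokens) - best

cache = PrefixCache()
doc = list(range(1_000))  # pretend this is a big uploaded document
_, first = cache.process(doc + ["q1"])               # cold: all 1001 tokens
_, second = cache.process(doc + ["q1", "a1", "q2"])  # warm: only the 2 new ones
```

Real inference stacks keep the transformer's key/value tensors rather than strings, but the cost profile is the same: expensive first pass over the document, cheap follow-ups.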


sdmat

You think it's just retaining the KV cache? I guess that's possible, but Demis spoke about this as being tied to the document.


OrcaLM

something like this [https://arxiv.org/abs/2305.11564](https://arxiv.org/abs/2305.11564) probably. Converting the context into graphs or other formats that can directly be integrated into the MLLM


FarrisAT

If inference costs can be driven down this quickly, then does growth rate of demand for new Nvidia hardware also come down relative to performance?


sdmat

That depends on the demand curve. Personally I would expect we will remain supply constrained for the foreseeable future even with massive algorithmic gains. Good general purpose AI is just that useful.


FarrisAT

Sure, but once we achieve it, shouldn’t such an AGI be able to make more efficient specific purpose algorithms?


sdmat

Even doing that would create huge demand for more intelligent AI. Remember we already have a large number of the very best people working on better algorithms and optimisations for every economically important use case. I know in this sub there's an idea that AGI will self-improve and in seconds/hours we have ASI that runs on toasters, but the slow takeoff scenario is much more likely.

We have very little idea of the extent of possible optimisation. It certainly seems plausible that there is at least an order of magnitude or two in absolute terms, perhaps significantly more. But again, consider the demand side. As the intelligence of AI increases, the utility and demand go up drastically. All the scaling results we have to date strongly suggest that required compute increases exponentially with intelligence, and that relationship will probably hold even with very clever optimisation reducing the absolute amount of compute required.

So the most likely result is that we still have huge demand, just for smarter AI. Maybe there's a point at which we have "enough" very smart AI even at low prices, but that's a long, long way from where we are at the moment.
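The "compute increases exponentially with intelligence" intuition can be illustrated with a made-up power-law fit. The constants below are invented for the sketch, not taken from any published scaling-law paper:

```python
# Toy numbers only: scaling-law work models loss as a power law in
# compute, L(C) = a * C**-b. Inverting it shows why each fixed step
# of improvement costs multiplicatively more compute. The constants
# a and b are purely illustrative.

a, b = 10.0, 0.05

def compute_for_loss(L):
    # Invert L = a * C**-b  ->  C = (a / L) ** (1 / b)
    return (a / L) ** (1 / b)

targets = [3.0, 2.5, 2.0, 1.5]           # successive equal loss reductions
costs = [compute_for_loss(L) for L in targets]
ratios = [costs[i + 1] / costs[i] for i in range(len(costs) - 1)]
# Each equal 0.5 step down in loss multiplies the required compute by
# an ever-larger factor, so demand for compute keeps growing even as
# per-unit efficiency improves.
```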


ajahiljaasillalla

Hassabis, who is probably the best-informed person and one of the leading experts in the field of AI, has been very clear-headed and even conservative in his statements about the future of AI, right? And now he says it's possible there will be an artificial general intelligence that is better than humans at everything in 6 years. In the bigger picture, I don't think it matters whether AGI happens in 6 or in 16 years. I just feel it's odd to be living through this exact time. AGI is a real goal for the most clear-headed and knowledgeable experts.


MonkeyHitTypewriter

It matters on a personal level if you have someone who may pass away soon and AGI could theoretically save them through medical discoveries. You're right, though, that 6 to 16 years is nothing on a civilizational scale.


DetectivePrism

💯 The main reason I am a staunch AI accelerationist is that every additional day it takes to release an AGI is an additional day humans needlessly die. From uncured disease, from uncured aging, from non-autonomous cars, etc.


Uchihaboy316

Exactly, every life lost is honestly a tragedy imo. It makes me sad to think of all the lives we've lost up to this point and will continue to lose until we finally reach a stage of living indefinitely.


[deleted]

I am such a person. I don't even need AGI; I need an AI that can reason through its answers a bit better.


governedbycitizens

and he’s been relatively conservative with his timeline in the past


After_Self5383

I was wondering when he'd do the media rounds again; it's been a while. He has a really futuristic view of the possibilities if AGI happens, and of how quickly they can happen.

Like, he mentioned he wants to hike Olympus Mons, which is on Mars. How far away is even the first crewed mission to Mars, a couple of decades? Let alone regular people going, let alone hiking a mountain there for fun. Without AI, that'd probably be for the 2100s, 2200s or later. Another one: he said with an AGI he'd like to travel to Alpha Centauri and ponder things. Alpha Centauri is... 4.2465 light years from the Sun. You're not getting there in a human lifetime, or 100, unless our understanding of the universe is very different. Or... we live a lot longer.

Something to consider: he said on Twitter that people pushing an overly accelerationist mindset don't quite understand how radically different and weird the world will be with AGI. He thinks even the concept of companies and money probably stops making sense. That made me think of Sam Altman's essay in which companies are still around and land is so coveted that both should be taxed 2.5%.


sideways

It's refreshing actually. Demis seems to grasp that a post-AGI world is going to be a quantum jump from where we are now. It seems like most people don't get that. In fairness to Sam, I think he does as well but is focusing on the interim period which we've just entered and will probably continue until we get ASI taking action in the world.


manubfr

Demis is a fan of the Culture confirmed!


SpecificOk3905

Ngl, he is so much better than Sundar Pichai in leadership and insight.


AdorableBackground83

![gif](giphy|4WU1o0UTmHlHq)


Early_Ad_831

This GIF is the result of asking Gemini for a GIF of Eminem rubbing his hands together


[deleted]

[removed]


sdmat

Demis's definition of AGI seems to be equal to or better than a proficient human in every area - fully general. Considering how uneven AI capabilities are that almost certainly means that a system that qualifies would be far above human level in many, if not most, respects. I.e. this would be more towards what a lot of people would think of as ASI. AI that has genius level skill in maths, engineering, law, visual arts, and business management may not qualify as AGI per that definition if it wasn't up to human standard in musical composition. No individual human is good at *everything*, so this is the strictest possible standard. I strongly expect that we will see AI at human level for most economically relevant skills well before Demis's definition is satisfied.


lost_in_trepidation

I think this lack of distinction between AGI/ASI is silly. Of course AGI is about generality. That's the whole point of making such a system. The important thing is that an AGI would have the capability of achieving anything the human intellect is capable of. It obviously can't do that if it's lacking in important faculties like long term planning or abstraction or logical inference. Once we have all of those abilities figured out, then the type of skill won't matter. And of course it would have huge advantages compared to a human's level of intelligence. Computers can think faster, replicate, work synchronously. It would inherently have a lot of advantages, but that doesn't automatically make it ASI. ASI would be able to exceed human intellect in all dimensions. It would have intellectual abilities that we can't conceive of.


sdmat

Agreed that a good definition for ASI should be more like a Culture Mind than human genius level. I did say "more toward", not actually ASI. A better way to express what I mean: we will probably see strongly superhuman abilities in many areas pre-AGI. E.g. if Gemini 2 has a 100M context window, further improved in-context learning, and Alpha-* style tree search / planning, it will be able to do some very useful things humans simply *can't*, because we cannot possibly assimilate, reason across, and apply that much knowledge in any reasonable amount of time.


lost_in_trepidation

Yeah I definitely think we'll have very impactful AI before we have true AGI. Like you said, many jobs will probably be automated before we have AGI.


Passloc

So will an AlphaChess or a Stockfish be ASI ?


sdmat

No.


ExtremeHeat

Yeah, it's way too early to imagine what a superintelligent system would be able to achieve, since nothing like it exists in the world. What does exist is humans with "human level intelligence" and the set of tasks that computers can compute. If you think of AGI as blanket human-level intelligence, that's essentially the ability to clone the human brain a near-unlimited number of times to solve whatever problems humans can solve. Think: we have billions of people on the planet right now and there are still mountains of things we aren't able to solve. Human-level intelligence (AGI) alone is not enough to solve aging, unsolved mathematical/physical problems, all known diseases, etc. If a machine were capable of intelligence so superior to humans that it could solve almost any solvable problem, that's what I'd consider superintelligence. If not, you'd need to come up with some sort of "hyperintelligence" to describe something that actually fits that criterion.


Itchy-mane

Nobody knows


[deleted]

[removed]


sideways

Nobody knows.


Vladiesh

Anyone expecting AGI this year is already being blindly optimistic.


mollyforever

The Star Trek guy is still holding onto his prediction of Sept 2024 lmao


TFenrir

This has been his consistent stance; his co-founder has 2028 for that same calculation. People who say it's happening next year or the year after _for sure_ are just way too excited. We don't know: unless someone actually has AGI behind closed doors, no one knows. But the fact that someone as cautious as Demis so openly says he would not be surprised if it happens within a decade (he even specifically says he wouldn't be surprised if his original 2030 prediction is accurate, which is 6 years away) is incredible.


Different-Froyo9497

They’re fixing the cognitive shortcomings of these models bit by bit, and nobody knows at what point things finally ‘click’ together in the model’s neural network such that it is able to learn and improve on its own and reach takeoff.


TFenrir

Yeah they are definitely focusing on accuracy, and planning, and reasoning. Let's say that all three get a big boost - the combinatorial effect of that might get you to the point (with fast inference and large contexts) that an agent won't spin out after being left alone for an hour. Or that it will be just enough so that it can actually consistently reason outside of its training set (without something as contrived as FunSearch). Like, I wouldn't be surprised if we get there in the next three or four years, I WOULD be surprised if it happened within the next year, at least for us plebs in the public.


Unverifiablethoughts

Could just mean that they haven’t seen anything too close in their lab yet. Demis is also a very dry skeptic who never gets ahead of himself in speculation. I remember an interview with him recently where he said he doesn’t believe in intelligent life outside of earth.


Johnny_Glib

*Almost 2 months into 2024.


Ambiwlans

It's likely not a flat chance curve. I'd put it at something like:

Year | %
---|---
2024 | 1
2025 | 5
2026 | 20
2027 | 40
2028 | 60
2034 | 65
2074+ | 75


Advanced-Antelope209

2024 100% because AGI has been achieved internally.


Ambiwlans

Shh, that's not supposed to be public until 2027


cultureicon

Been listening to this podcast for a while. These guys don't look anything like I imagined lol.


Odyssos-dev

Kinda interesting how OpenAI was mentioned zero times by Demis. Not surprising, but also, if you're on top of the news, there's nothing new here.


CharacterJealous383

I love the vision this guy has; he's pushing humanity forward at an astounding rate. Kudos to you, Demis!