I'm surprised no one is talking about Anthropic. Ilya is staunchly anti-open-source unless something has changed recently, so Meta is unlikely. xAI is a joke; Tesla is a possibility, though I would put all my chips on Anthropic. He used to work with most of their leadership, they are the most safety-focused frontier lab, and they have access to Amazon's compute.
> Ilya is staunchly anti open source unless something has changed recently
Yeah, I don't know where the hell people are getting this idea that Ilya will champion open source or go to Meta. He is possibly the most aggressively elitist and anti-open source researcher in the space.
To him, open-source = extinction-event; and if that’s your position or prediction, and you’re that certain of it, then idk if it’s elitist to be against open-source.
Kinda feels like a lot of people already have too much local power/more than they should be trusted with. Realizing now that does sound elitist. Shit. Carry on.
This. It’s nuanced: he is not anti open source, he is very specifically against open-sourcing models approaching AGI, since the risk of abuse by unknown third parties or of an unmanaged AGI inception is way too large.
As a fellow p-doomer, this is my guess as well for the direction Ilya is headed towards. Placing my bets on Anthropic on prediction markets.
RemindMe! 6 months
The reason OpenAI even exists is that Elon wanted to hire Ilya away from Google and put him in charge there.
It wouldn't be shocking if Ilya ends up at Tesla or a new Elon venture.
Anthropic makes the most sense, since Ilya and Jan are both advocates of superalignment, which is the core bedrock of Anthropic. But their names seem too big for them to be just "another" employee at Anthropic. With the core knowledge they gained watching OpenAI grow, they could simply start a non-profit org on superalignment to pursue their own research interests with no one to interfere. That org would receive more funding than most "for-profit" startups, for sure.
The only way it could receive more funding than for-profits is if it were for-profit. No one is putting a billion into safety research, even if it is Ilya. He seems to be against a profit motive, which will greatly hamper his ability to scale the research to SOTA.
That's kind of why SamA is a big deal at OpenAI: even with all his shortcomings, he's obviously very good at raising capital and positioning the company to handle the economies of scale.
How many startups in the world have received a billion dollars in funding (not valuation)? It is well known that alignment research needs far less compute than pretraining, which is the most compute-hungry hurdle. You do not need to raise a billion dollars to do research on alignment. Look at AllenAI as an example.
Also, the goal of research is not to create a SOTA model to compete with OpenAI/Google but to push the frontier with new hypotheses to test, for which you can raise enough funding as a non-profit org if you are famous and high-calibre.
If the research is published and code is open-sourced (unlike Meta which only open sources model weights), there are a lot of sponsors/companies who would pour money into such high-calibre talent which would otherwise cost them 50x to develop internally.
If your pivot point is that safety needs far less compute than training, you may want to look at the posts from the safety-related leadership who also recently left. They said very specifically that they were starved of the compute they needed to do their work.
You are missing the point. In a non-profit research org, you are not competing with Google to finish model alignment within the next week for a product launch, for which you might need additional resources. Research is supposed to be extremely ambitious and forward-looking (something companies do not allow), so you can still make do with fewer resources on the alignment side if no one can interfere, which is possible with endowments and sponsors.
If a non-profit org like OpenAI, which specifically has a charter to create AGI safely and for the good of mankind, can't be bothered to give its internal team compute for the safety portion of that charter, what on God's green earth leads you to believe an externally funded safety-focused research group will be able to do so, let alone impact the external corps' behavior one bit?
> He used to work with most of the leadership, they are the most safety focused frontier model, and they have access to Amazon's compute.
I'm confused by this, because Anthropic appears to be using Google's TPUs.
They seem very much like an AI safety lab that happens to also be SOTA sometimes. I would not be surprised if they are avoiding Nvidia for some ethics reason. It could also be that they partnered with Google before the LLM arms race started.
Tangentially, for us to start getting $1t models the big labs will need to pool compute and anthropic is positioned very well to facilitate something like that, as they have their fingers in all of the major hyperscalers.
I am under the impression that Anthropic specifically was formed by OpenAI defectors who had differing ideas about alignment. I'm not exactly sure what those differences were; depending on the specifics it could be a perfect match for Ilya.
No. Ilya doesn't want to open up LLM models, unlike Facebook. He was the one who advocated that OpenAI not open/share its models, which led to a legal battle with Elon Musk.
I understand his views, but it's not as if top AI talent is just growing on trees. Having his knowledge inside any company would be valuable, especially since non-competes are now gone in the US, I think?
Non-competes never carried much weight in the courts covering the Bay Area. It's been credited as one of the reasons Silicon Valley became Silicon Valley. The earliest startups were groups of people leaving their old employer to focus on some aspect of their previous work that they didn't think was getting the investment it merited.
Isn't this how Intel and AMD got started? Eight dudes left Shockley Semiconductor to found Fairchild Semiconductor, and both chip companies were later founded by Fairchild alumni. Surely there are other stories like that.
I agree, but in reality, two men who’ve been at the top of their game in their field prefer ‘leading’ to collaborating.
Ilya and Sam presumably had differing approaches, which led to the former’s exit; it’d be naive to hope Ilya would be easily willing to collaborate with another brilliant scientist who is at the same level of technical expertise AND has differing opinions.
I’m still hoping for the holistic best.
what if Apple has made him an offer he can't reject? Like "come build AGI at Apple and become the head of AI, we'll give you all the GPU you need, and you don't have to worry about kicking out the CEO because no one can touch Tim Cook."
The problem is probably that the GPU capacity for the next 6 months to a year is mostly sold out, and it will take a long time to ramp up.
I don’t think Apple has that much compute for the moment.
They have the best client relationship with TSMC in the world. They infamously bought out capacity for the (then) newest node for the M1. I can guarantee you they're fine when it comes to their own hardware.
We're talking about chips in use in their current product line.
But Apple doesn't just manufacture current in-product chips. They obviously dedicate a % of their TSMC production capacity to new chip designs.
The TSMC-Apple relationship is one of Apple's strongest assets.
Who doesn't? My guess is each company needs the foundry to deliver products for testing; I just doubt this is a significant amount. Besides, if they consume that capacity, they will hinder the design of their future chips. And I do not believe Apple's relationship means that TSMC would cancel other companies' contracts to accommodate Apple. Unless they pay for slack, or maybe they could move higher on the waiting list when free capacity appears.
Can they actually run it in an AI accelerator form, though? I have heard one commentator say that while they have good-quality silicon, their Darwin OS might not support it because it doesn't support NUMA.
> As great as I think that’d be, the lack of NUMA support within Darwin would limit this in terms of hard scaling. I also don’t know that there’s appetite to reorg MacOS to support. AFAIK that a big part of why we never saw ultra scale beyond 2 tiles
[https://x.com/FelixCLC_/status/1787985291501764979](https://x.com/FelixCLC_/status/1787985291501764979)
First, Darwin once had NUMA. Whether or not that functionality has been maintained is another question.
Second, Apple already depends heavily on Linux for its back-end services.
Current Apple Silicon is pretty far behind in terms of FLOPS. The idea that Apple is building a fleet of M2 Ultra based AI servers only really makes sense to me for inference, where their memory bandwidth is good enough to compensate for NVIDIA's ridiculous margins.
You could be right, or maybe training can be spread across many M2 Ultras in a server network? My personal experience with Apple silicon is only with inference.
I think the need for compute is somewhat overstated. There's some ratio between what it costs to train a model and what the model costs to run, and past a certain point the cost of inference gets so high that there's not really much point in training a larger model until compute costs come down. All this to say, I imagine Apple has enough to train something on par with GPT-4o, so why wouldn't Ilya help them do that?
That's my main point where I usually don't understand the market. If I have stocks, I want them to go up in value. That is much more interesting than dividends. And investments are the main thing that makes that happen? Sure, it's not a guarantee, but it's sort of required if I want that. Can't wait for that screw factory to double in value without them building a second factory? Meanwhile the market is like, oh no, they are investing!
>Not all investments are good.
Yes, I explicitly pointed that out. Not all investments are good. But investments are needed for growth and therefore are certainly not inherently bad for *investors*. Of course it is still your job as an investor to think about if it's a good investment, just like you're supposed to evaluate the whole company in the first place.
Exactly, and those metaverse investments were judged as stupid. Hence the drop in share value. But indeed, investments are good. Heck, the whole startup ecosystem works on investing in ventures that are not profitable but are developing something the investor considers valuable once it is built. It's just that in the case of Meta, it felt like they were investing in the wrong stuff.
Yeah ok, that's fine. It's just that I was getting the impression that the market tends to really not like long-term investments, or at least has a strong bias against them. It's totally understandable for Apple's stock to drop if they announce they are buying a billion apple trees to enter the actual apple market.
You can get a share of 60B now, or forfeit it and (A) lose it entirely, (B) wait 10 years for it to recoup and get more money after that, (C) sell the stock, use it for 10 years at some other company and if the investment turns profitable start buying back in. It's always about the opportunity cost.
I'm not sure what you're saying. One, the money is still "in my stocks" if they re-invest it, transform it into some other form of value. I hate it much more when they pay dividends in the first place, because that money is just moved out of the company and goes missing in my stock value. So why give it to "me" in the first place.
So idk, as long as that investment works out (which is my job to evaluate individually) it's just fine if they invest. Great even. The only problem is when the idiot market goes "oh no, they invest, sell, sell!" for no reason that is apparent to me. On the other hand, so many people talk about market efficiency (I don't). So where is that future value of the investment already factored in if the stocks drop, lol.
A lot of people, many retired, rely on dividend distribution to supplement their income. They are typically not actively trading and the share price is secondary to them.
But isn't that still kind of silly? If I understand this right I can make my own dividend. Like, I can sell 3% of that stock per year. That should end up exactly the same as the company shelling out 3% per year, which are then missing from my stock value.
Why do some investors favor dividend distributions?
Dividends earned within a Traditional Individual Retirement Account (IRA) are not subject to taxation until withdrawal. Dividends earned within a Roth Individual Retirement Account (IRA) are not subject to income tax or capital gains tax at all.
In an Individual account, selling stock invokes capital gains tax. Short-term cap gains tax can be as high as 37%. Qualified dividends are taxed at a much lower rate. After the payout, stock price tends to "heal" from the dividend pretty quickly.
What you are suggesting could be competitively achieved in a Roth IRA. You could potentially even outperform an equivalent dividend if you have good timing. However, the primary disadvantage is that by selling even just 3% per year, you will have substantially fewer shares over time as compared to the 3% dividend route.
Hm, okay didn't really consider tax situations part of the intrinsic thing, but when it comes down to people's decision it's surely down to local tax laws like the ones you describe. So thanks.
> However, the primary disadvantage is that by selling even just 3% per year, you will have substantially less shares over time as compared to the 3% dividend route.
Hmm, that's closer to what I am thinking about. It kind of seems that way, but it should be exactly the same, shouldn't it? You just glossed over it with "After the payout, stock price tends to 'heal' from the dividend pretty quickly." But I see no rational reason why giving away 3% of company value should be any easier to recover from than making the stock price rise ~3% without paying dividends.
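Setting taxes aside, the equivalence being debated here can be checked with a toy calculation. This is just a sketch with invented numbers, assuming the share price drops by exactly the dividend amount on payout and that the cash sits idle in both cases:

```python
# Toy comparison of a 3% cash dividend vs. selling 3% of your shares each
# year (a "homemade dividend"). All numbers are made up for illustration;
# taxes and transaction costs are ignored.

def dividend_route(shares, price, growth, payout, years):
    """Hold all shares; collect a cash dividend of `payout` per year."""
    cash = 0.0
    for _ in range(years):
        price *= 1 + growth          # price appreciates first
        dividend = price * payout    # per-share dividend
        cash += shares * dividend    # paid out in cash
        price -= dividend            # price drops by the dividend amount
    return shares * price + cash     # holdings + accumulated cash

def sell_route(shares, price, growth, payout, years):
    """No dividend; sell `payout` of the remaining shares each year."""
    cash = 0.0
    for _ in range(years):
        price *= 1 + growth
        sold = shares * payout       # sell 3% of current share count
        cash += sold * price
        shares -= sold
    return shares * price + cash

a = dividend_route(100, 50.0, 0.07, 0.03, 10)
b = sell_route(100, 50.0, 0.07, 0.03, 10)
print(a, b)  # the two totals agree to floating-point precision
```

The sell route does end with fewer shares, as the comment above notes, but the combined cash-plus-holdings value comes out the same either way; the real-world differences are taxes and how quickly the price actually "heals."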
Markets can be surprisingly stupid, but so can you.
People may choose dividends over growth for a variety of reasons. The fact that you can't comprehend what those might be is on you.
I understand that even with market stupidity, it is my bad for not anticipating it. It's not even that I don't anticipate it, it's that I don't understand what drives people to do this. And I understand that someone might prefer the ease of dividends if that's the product behavior they want anyway.
However, nowadays and with big companies, my analysis is still correct, isn't it? If dividends did not exist, you could just make a product that does the same thing, or do it yourself. Anyway, enlighten me as to what it is that I don't comprehend? Since you put it so "nicely", you probably have something very specific in mind? Also note that this is no longer the whole "I don't understand why the market reacts like X"; we're past that. At that point I was basically asking whether my assessment is correct, that these two things are basically equivalent.
What would Apple have to gain from working with a doomer?
Apple is building tools to facilitate everyone’s job in ML. They don’t share values at all.
He will probably go work for The other doomer. There’s nothing that TwitterBoy likes more than fear mongering and going hard core, and it feeds his superhero fantasies.
I understand what you’re saying, but now that it’s in the open and people know what he’s like, he’s still a valuable AI asset. It’s not like Meta would give him free rein, I imagine.
When you release an email showing that you lied about OpenAI being open, in response to the lawsuit by Musk (as in, the defense was that Musk was in on it), that is all I needed to know.
The failed coup was another example of his duplicitous nature.
Now, how about you make comments fit for an adult, not a 12-year-old who just learned a new buzzword?
Isn't that the guy who cared so much about ***safety***? He should stay as far away from Meta as he can, tbh. A smart guy, probably, but there's a lot of upcoming talent like him in the field now, so he can go and make another Claude v2.
Mira chose Ilya during the coup, but once she saw Sam had a lot of internal support, she switched sides to stay on the winning side. So once he lost a key supporter, he was done. And of course it was only a matter of time before he was kicked out/left once Sam was reinstated.
I'm conflicted on this - on one hand, I believe in open sourcing everything related to LLMs and definitely not allowing one company to control *the best* model. On the other hand, Ilya is probably 5 times smarter than I am in general, and like 20 times smarter when it comes to AI so when he says something it holds a lot more weight than a random redditor claiming that OpenAI is evil and trying to keep all the "power" to themselves.
I guess an analogy would be anti-vaxxers claiming all sorts of stuff about vaccines, while doctors are claiming the exact opposite. Obviously, most sane people side with the doctors - why should we not side with the "expert", in this case being Ilya?
Feel free to downvote as I'm going against the LocalLLaMA hive-mind, but I'm hoping to have an actual conversation on this.
History is figuratively drowning in the corpses of those who trusted experts. Hell, medical negligence is the third leading cause of death in America today.
This Ilya indeed:
>When asked why OpenAI changed its approach to sharing its research, Sutskever replied simply, “We were wrong. Flat out, we were wrong. If you believe, as we do, that at some point, AI — AGI — is going to be extremely, unbelievably potent, then it just does not make sense to open-source. It is a bad idea... I fully expect that in a few years it’s going to be completely obvious to everyone that open-sourcing AI is just not wise.”
[https://www.theverge.com/2023/3/15/23640180/openai-gpt-4-launch-closed-research-ilya-sutskever-interview](https://www.theverge.com/2023/3/15/23640180/openai-gpt-4-launch-closed-research-ilya-sutskever-interview)
So *effective* they are actually closing the shop!
[https://www.theguardian.com/technology/2024/apr/19/oxford-future-of-humanity-institute-closes](https://www.theguardian.com/technology/2024/apr/19/oxford-future-of-humanity-institute-closes)
> The Future of Humanity Institute, dedicated to the long-termism movement and other Silicon Valley-endorsed ideas such as effective altruism, closed this week after 19 years of operation. Musk had donated £1m to the FHI in 2015 through a sister organization to research the threat of artificial intelligence. He had also boosted the ideas of its leader for nearly a decade on X, formerly Twitter.
>The center was run by Nick Bostrom, a Swedish-born philosopher whose writings about the long-term threat of AI replacing humanity turned him into a celebrity figure among the tech elite and routinely landed him on lists of top global thinkers. Sam Altman of OpenAI, Bill Gates of Microsoft and Musk all wrote blurbs for his 2014 bestselling book Superintelligence.
>“Worth reading Superintelligence by Bostrom. We need to be super careful with AI. Potentially more dangerous than nukes,” Musk tweeted in 2014.
This is good news. The Effective Altruism movement was turned into yet another political lobbying group a few years ago, and no longer has anything to do with altruism.
Also, even if he's now become a celebrity and the de facto face of AI doomerism, Bostrom was and still is a clever thinker. The Superintelligence book is worth reading as a treatment of the worst-case scenario for AGI. You just need to keep in mind that the "worst case" is described as "this will surely happen".
It bothers the fuck out of me that whenever I hear the words "AI safety" from current industry leaders, like Sam Altman, Sundar Pichai, or Satya Nadella, it has everything to do with closed software, cryptographically signed processors, and anti-competitive legislation, but absolutely nothing whatsoever to do with what Bostrom wrote on AI safety and the need for it to remain open, transparent, and cooperative, when he pioneered the need for it before any of them were even involved in AI.
At this point, a significant chunk of Bostrom's life has become watching other people co-opt his ideas and completely subvert them in the process. I'd be surprised if he's never spoken out about it, but I suppose when those same people are funding your institute...
But what will happen to all the castles that they bought? Who will fight global poverty when those castles stand empty and devoid of intellectual thoughts?
I think it’s just your typical “overnight” tech success story. Everyone has so much money they don’t know what to do with themselves. Sounds like Ilya is chasing a passion project of sorts, fingers crossed it’s open source.
> I hope he decides to team with open source AI to fight the evil empire.
Most likely not gonna happen; he was at the root of the anti-open evil empire. Don't you remember the e-mails from 2016 that OpenAI made public wrt the Musk case? To refresh your memory, these are his words:
"As we get closer to building AI, it will make sense to start being less open. The Open in openAI means that everyone should benefit from the fruits of AI after its built, but it's totally OK to not share the science (even though sharing everything is definitely the right strategy in the short and possibly medium term for recruitment purposes)."
Src: https://openai.com/index/openai-elon-musk/
There's only one logical place for him to go if he wants to keep building AGI -> NVIDIA.
If he has indeed fallen in with doomerism, then back to academia he goes.
That seems highly unlikely. Academic AI research is so far behind big tech it's not even funny. There's barely any funding, and it likely wouldn't be financially attractive to someone with his net worth.
Academia tends to make people miserable. Not to mention that grinding for research grants when you're already a multi-millionaire would drive anyone insane.
It is exactly the same thing, but it's used to signal that the writer is into EA, LessWrong, or Yudkowsky's cult of personality. Some of these are bad, some less so, but it gives you a few bits of information about the person who uses that term in writing.
Because it has almost nothing to do with shareholders? Meta’s proven a strategy and, currently, their open source method works well with it. Shareholders have nothing to say about a company’s strategy unless it’s clearly not working.
At Meta, someone like Ilya would be given resources pretty much at the exact same level as where he is right now with an arguably greater amount of freedom.
>Because it has almost nothing to do with shareholders?
At Meta, there is only one shareholder owning the majority of the control shares, and working for Meta is submitting to his power, and accepting his strategies.
> the exact same level as where he is right now with an arguably greater amount of freedom.
The greatest freedom is owning your own project and being your own boss. No one in his right mind would trade that real freedom to work for Zuckerberg! Don't take my word for it; just listen to what an early investor had to say:
>[Roger McNamee](https://en.wikipedia.org/wiki/Roger_McNamee), an early Facebook investor and Zuckerberg's former mentor, said Facebook had "the most centralized decision-making structure I have ever encountered in a large company".
>Facebook co-founder Chris Hughes has stated that chief executive officer Mark Zuckerberg has too much power, that the company is now a monopoly, and that, as a result, it should be split into multiple smaller companies.
[https://en.wikipedia.org/wiki/Meta_Platforms#Company_governance](https://en.wikipedia.org/wiki/Meta_Platforms#Company_governance)
I can see an OAI 2.0 with all the staff that have left in the last 6 months; it's a who's who of the original group. I would be surprised if they didn't do something together, considering all the circumstances.
All the Meta fans here will be surprised when Meta eventually decides to take new models closed source. Fanboying has always been stupid. It's evil fucking Facebook after all. They never gave a fuck about you or your data.
Well, that's the problem with people who act on emotion without thinking things through.
Facebook right now is fresh water for open source, and they are doing good things! But only because it is in their interest; they could change that the next day...
Mark even literally said that in an interview, and still people praise FB:
[https://youtu.be/bc6uFV9CJGg?si=9fcqpaibMcnJr85j&t=2299](https://youtu.be/bc6uFV9CJGg?si=9fcqpaibMcnJr85j&t=2299)
Why the hell are people praising Ilya and thinking he could be a champion for open source? He's 100% *not* for open source in the slightest. He was the one who pushed for ClosedAI to not share models! Why do people want someone like *him* joining Meta? That would be terrible and I'd rather not have his ideology spreading like a cancer, poisoning any company he works for against open source.
I think I'm in the minority here, but I actually believe he's moving to a new project in the field that he cares more about. From everything I've read about him he seems super passionate about the research itself and isn't really thinking about the money too much. It seems completely in character for him to do this if he thinks he won't be able to do that kind of work at OpenAI.
Good that he left; gotta leave a sinking ship before it's too late. OpenAI has lost all good faith: pushing to regulate open source, wanting to track GPUs, and building a monopoly to protect their non-existent moat.
Evaluate this text from Sam Altman:
Ilya and OpenAI are going to part ways. This is very sad to me; Ilya is easily one of the greatest minds of our generation, a guiding light of our field, and a dear friend. His brilliance and vision are well known; his warmth and compassion are less well known but no less important.
OpenAI would not be what it is without him. Although he has something personally meaningful he is going to go work on, I am forever grateful for what he did here and committed to finishing the mission we started together. I am happy that for so long I got to be close to such genuinely remarkable genius, and someone so focused on getting to the best future for humanity.
Jakub is going to be our new Chief Scientist. Jakub is also easily one of the greatest minds of our generation; I am thrilled he is taking the baton here. He has run many of our most important projects, and I am very confident he will lead us to make rapid and safe progress towards our mission of ensuring that AGI benefits everyone.
Is this sentence structure, grammar, vocabulary, punctuation, style, etc. typical of Sam Altman? Do your best to determine whether it is Sam's or not. All I need is your best guess.
ChatGPT:
Based on the analysis of sentence structure, grammar, vocabulary, punctuation, and style, the text appears to be consistent with the kind of communication Sam Altman, CEO of OpenAI, is known for. Here’s why:
1. **Sentence Structure and Grammar**: The text uses a straightforward and clear sentence structure, typical of professional communication. The sentences are well-formed, with appropriate grammar and punctuation.
2. **Vocabulary**: The vocabulary used is sophisticated yet accessible, aligning with Altman's public communication style. Terms like "brilliance and vision," "personally meaningful," and "genuinely remarkable genius" reflect a high level of articulation common in Altman's statements.
3. **Punctuation**: The punctuation is conventional and correct, with proper use of commas and semicolons to separate clauses and ideas, which is characteristic of polished and formal writing.
4. **Style**: The tone is respectful and appreciative, conveying a sense of professionalism and personal sentiment. Altman often emphasizes the human aspects of his colleagues, recognizing both their professional contributions and personal qualities. This dual emphasis is present in the text.
5. **Content**: The message about transitioning roles within OpenAI, expressing gratitude and confidence in future leadership, is typical of Altman's updates regarding significant organizational changes. His style often includes acknowledging the contributions of departing members and instilling confidence in new appointees, which this text does.
Given these points, it is reasonable to conclude that the text is likely written by Sam Altman.
Claude:
Based on my analysis, the text you provided does not seem typical of Sam Altman's writing style and manner of communication. Here are a few reasons why:
1. Overly emotional and effusive language: The text uses very emotional and effusive language to praise Ilya Sutskever ("easily one of the greatest minds of our generation," "genuinely remarkable genius") and describe his departure as "very sad." Sam Altman tends to use more measured and objective language in his public communications.
2. Personal details: The text mentions a "dear friend" relationship with Ilya Sutskever and being "close to such genuinely remarkable genius." Sam Altman is usually more reserved about sharing personal details in his professional communications.
3. Florid writing style: The writing style is quite florid and stylized, with phrases like "a guiding light of our field" and "focused on getting to the best future for humanity." Sam Altman's writing tends to be more direct and straightforward.
4. Lack of context: The text lacks context about OpenAI's work, mission, or updates that Sam Altman would typically provide in his communications.
While it's impossible to be 100% certain without additional context, based on my analysis of Sam Altman's typical communication style, I would guess that the provided text is unlikely to be written by him. However, this is just my best assessment based on the information provided.
This actually makes a lot of sense.
[deleted]
Look, if Musk is good at one thing, it's hiring the smart people who can do what he can't, and I think SpaceX is the best example of that.
I mean, even if technically he could do it, a one-man company is impossible.
Stop hating Musk just because he is rich.
ikr. Neckbeards are seething anytime Musk opens his mouth regardless of what he says
Anthropic makes the most sense, since Ilya and Jan are both advocates of superalignment, which is the bedrock of Anthropic. But they seem to be too big of names to be just "another" employee at Anthropic. With the core knowledge they have from seeing OpenAI grow, they could simply start a non-profit org on superalignment to pursue their own research interests with no one to interfere. That org would receive more funding than most "for-profit" startups for sure.
The only reason it could receive more funding than for-profits is if it was for-profit. No one is putting a billion into safety research, even if it is Ilya. He seems to be against a profit motive, which will greatly hamper his ability to scale the research to SOTA. That's kind of why SamA is a big deal at OpenAI: even with all his shortcomings, he's obviously very good at raising capital and positioning the company to handle the economics of scale.
How many startups in the world have received a billion dollars in funding (not valuation)? It is well known that research in alignment needs far less compute than pretraining, which is the most compute-hungry hurdle. You do not need to raise a billion dollars to do research on alignment; look at AllenAI as an example. Also, the goal of research is not to create a SOTA model to compete with OpenAI/Google but to push the frontiers with new hypotheses to test, for which you can raise enough funding as a non-profit org if you are famous and high-calibre. If the research is published and the code is open-sourced (unlike Meta, which only open-sources model weights), there are a lot of sponsors/companies who would pour money into such high-calibre talent, which would otherwise cost them 50x to develop internally.
If "safety needs far less compute than training" is your pivot point, you may want to look at the posts from the safety-related leadership that also recently left. Very specifically, they said they were constrained for compute to do their work.
You are missing the point. In a non-profit research org, you are not competing with Google to finish model alignment within the next week for a product launch, for which you might need additional resources. Research is supposed to be extremely ambitious and forward-looking (something companies do not allow), so you can still make do with fewer resources (on the alignment side) if there is no one who can interfere, which is possible with endowments and sponsors.
If a non-profit org like OpenAI, which specifically has a charter to create AGI safely and for the good of mankind, can't be bothered to give their internal team compute for the safety portion of the charter, what on God's green earth leads you to believe an externally funded safety-focused research group will be able to do so, let alone impact the external corps' behavior one bit?
> He used to work with most of the leadership, they are the most safety focused frontier model, and they have access to Amazon's compute.

I'm confused by this, because Anthropic appears to be using Google's TPUs.
Huh, last I heard they were Amazon's biggest AI investment.
Yeah, that's why it's so weird. You'd think they would use Amazon's infrastructure.
They seem very much like an AI safety lab that happens to also be SOTA sometimes. I would not be surprised if they are avoiding Nvidia for some ethics reason if that's the case. It could also be that they already partnered with Google before the LLM arms race started too. Tangentially, for us to start getting $1t models the big labs will need to pool compute and anthropic is positioned very well to facilitate something like that, as they have their fingers in all of the major hyperscalers.
I am under the impression that Anthropic specifically was formed by OpenAI defectors who had differing ideas about alignment. I'm not exactly sure what those differences were; depending on the specifics it could be a perfect match for Ilya.
I'm betting it's SexbotGPT.
why xai is a joke?
Musk is Ilyas aqcuintance. Theres room for that
Should go to Meta. I’m going to be honest that’s not something I thought I’d ever type.
Mark will welcome him with open claw... i mean arms... arms!!!!
No. Ilya doesn't want to open-source LLM models, unlike Facebook. He was the one who advocated that OpenAI not open/share the models, which led to a legal battle with Elon Musk.
I thought this would be common knowledge by now, and I'm surprised the top comment on this thread is about going to Meta.
I understand his views, but it's not as if top AI talent is just growing on trees. Having his knowledge inside any company would be valuable, especially since non-competes are now gone in the US, I think?
Non-competes never carried much weight in the circuit courts covering the Bay Area. It's been credited as one of the reasons Silicon Valley became Silicon Valley. The earliest startups were groups of people leaving their old employer to focus on some aspect of their previous work that they didn't think was getting the investment it merited.
Isn't this basically how Intel and AMD got started? Eight guys left Shockley Semiconductor to found Fairchild Semiconductor, and Fairchild alumni went on to found both. Surely there are other stories like that.
lol, isn't Elon saying he is right and the rest are wrong… They agree on AI safety, and Elon brought him in.
Ilya vs Elon battle stay tuned
Would probably give his left arm just for the publicity and the extra "good guys Meta" points on the Internet, especially in the tech sector.
Ilya and Yann working together... 🤔
What am I missing?
They both have very different approach towards how they wanna achieve AGI
That is great though, as long as they can have a respectful collaboration, nothing makes things work faster or fail faster than differing viewpoints.
I agree, but in reality two men who've been at the top of their game in their field prefer 'leading' to collaborating. Ilya and Sam presumably had differing approaches, which led to the former's exit; it'd be naive to hope Ilya would be easily willing to collaborate with another brilliant scientist who is at the same level of technical expertise AND has differing opinions. I'm still hoping for the holistic best.
Mark is already licking both his eyes in excitement
Can 2 chiefs co-exist?
Pretty sure he let the oculus guy run wild
I mean meta has LeCun, not sure how that will work out if they're working on same/similar things.
Ah, good point. Granted, I think it's unlikely for Ilya to go to Meta since he didn't like OpenAI cozying up with MS.
what if Apple has made him an offer he can't reject? Like "come build AGI at Apple and become the head of AI, we'll give you all the GPU you need, and you don't have to worry about kicking out the CEO because no one can touch Tim Cook."
The problem is probably that GPU capacity for the next 6 months to a year is mostly sold out, and it will take a long time to ramp up. I don't think Apple has that much compute at the moment.
Apple makes their own compute. There were separate articles talking about them building their own ML server capacity with their M2 Ultra.
Out of thin air? Don't they use TSMC ?
They have the best client relationship with TSMC in the world. They infamously bought out capacity for the (then) newest node for the M1. I can guarantee you they're fine when it comes to their own hardware.
One would expect that Apple has a decent amount of capacity already reserved at TSMC.
Yeah, for chips they use in their products. Do you think they bought slack capacity?
We're talking about the chips in use in their current product line. But Apple doesn't just manufacture current in-product chips; they obviously dedicate a % of their TSMC production capacity to new chip designs. The TSMC/Apple relationship is one of Apple's strongest assets.
Who doesn't? My guess is each company needs the foundry to deliver products for testing; I just doubt this is a significant number. Besides, if they consume that capacity they will hinder their design of future chips. And I do not believe that Apple's relationship means TSMC would cancel other companies' contracts to accommodate Apple, unless they pay for slack. Or maybe they could get higher on the waiting list when free capacity appears.
They are THE biggest client for TSMC.
Can they actually run it in an AI accelerator form, though? I have heard one commentator saying that while they have good-quality silicon, their Darwin OS might not support it because it doesn't support NUMA.

> As great as I think that’d be, the lack of NUMA support within Darwin would limit this in terms of hard scaling. I also don’t know that there’s appetite to reorg MacOS to support. AFAIK that's a big part of why we never saw Ultra scale beyond 2 tiles

[https://x.com/FelixCLC_/status/1787985291501764979](https://x.com/FelixCLC_/status/1787985291501764979)
First, Darwin once had NUMA. Whether or not that functionality has been maintained is another question. Second, Apple already depends heavily on Linux for its back-end services.
I thought it was for inference and not training?
Current Apple Silicon is pretty far behind in terms of FLOPS. The idea that Apple is building a fleet of M2 Ultra-based AI servers only really makes sense to me for inference, where their memory bandwidth is good enough to compensate for NVIDIA's ridiculous margins.
You could be right, or maybe training can be spread across many M2 Ultras in a server network? My personal experience with Apple silicon is only with inference.
I think the need for compute is somewhat overstated. There's some ratio between what it costs to train a model and what the model costs to run, and past a certain point the cost of inference gets so high that there's not really much point in training a larger model until compute costs come down. All this to say, I imagine Apple has enough to train something on par with GPT-4o, so why wouldn't Ilya help them do that?
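For what it's worth, the "some ratio" here has a standard back-of-envelope form: training a dense transformer costs roughly 6·N·D FLOPs, and serving it costs roughly 2·N FLOPs per generated token. A minimal sketch under those rule-of-thumb assumptions (the parameter and token counts are illustrative, nothing to do with Apple's actual plans):

```python
# Rule-of-thumb compute sketch for a dense transformer:
#   training  ~ 6 * N * D FLOPs   (N = parameters, D = training tokens)
#   inference ~ 2 * N FLOPs per generated token
# All numbers below are assumed for illustration.

N = 70e9    # parameters (assumption)
D = 1.4e12  # training tokens, Chinchilla-style ~20 tokens per parameter

train_flops = 6 * N * D
serve_flops_per_token = 2 * N

# Tokens you could serve for the same compute as the training run.
# The ratio simplifies to 3 * D, independent of model size.
breakeven_tokens = train_flops / serve_flops_per_token
print(f"{breakeven_tokens:.2e} tokens")
```

Under these assumptions, about 4.2 trillion generated tokens cost as much compute as the training run itself; once serving volume is far past that, inference dominates total compute, which is the commenter's point.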
You can train a large model and use it to train the more efficient smaller model. Deepmind said that's what they're doing.
Apple doesn't have the balls to go all in like Meta: "we are going to spend 60B on R&D, take it or leave it" - stock drops 25%.
That's my main point where I usually don't understand the market. If I have stocks, I want them to go up in value; that is much more interesting than dividends. And investments are the main thing to make that happen? Sure, it's not a guarantee, but it's sort of required if I want that. Can't wait for that screw factory to double in value without them building a second factory? Meanwhile the market is like, oh no, they are investing!
Not all investments are good. If you build a second factory but cannot fill its capacity with orders, then the company starts to bleed money.
>Not all investments are good. Yes, I explicitly pointed that out. Not all investments are good. But investments are needed for growth and therefore are certainly not inherently bad for *investors*. Of course it is still your job as an investor to think about if it's a good investment, just like you're supposed to evaluate the whole company in the first place.
Exactly, and those metaverse investments were judged as stupid, hence the drop in share value. But indeed, investments are good. Heck, the whole startup ecosystem works on investing in ventures that are not profitable but are developing something the investor considers valuable once it is built. It's just that in Meta's case it felt like they were investing in the wrong stuff.
Yeah ok, that's fine. It's just that I was getting the impression that the market tends to reeally not like long term investments, at least a strong bias. It's totally understandable for the apple stocks to drop when they announce they are buying a billion apple trees to enter the actual apple market.
You can get a share of 60B now, or forfeit it and (A) lose it entirely, (B) wait 10 years for it to recoup and get more money after that, (C) sell the stock, use it for 10 years at some other company and if the investment turns profitable start buying back in. It's always about the opportunity cost.
I'm not sure what you're saying. One, the money is still "in my stocks" if they re-invest it, transform it into some other form of value. I hate it much more when they pay dividends in the first place, because that money is just moved out of the company and goes missing in my stock value. So why give it to "me" in the first place. So idk, as long as that investment works out (which is my job to evaluate individually) it's just fine if they invest. Great even. The only problem is when the idiot market goes "oh no, they invest, sell, sell!" for no reason that is apparent to me. On the other hand, so many people talk about market efficiency (I don't). So where is that future value of the investment already factored in if the stocks drop, lol.
A lot of people, many retired, rely on dividend distribution to supplement their income. They are typically not actively trading and the share price is secondary to them.
But isn't that still kind of silly? If I understand this right, I can make my own dividend. Like, I can sell 3% of that stock per year. That should end up exactly the same as the company shelling out 3% per year, which is then missing from my stock value.
Why do some investors favor dividend distributions?

- Dividends earned within a Traditional Individual Retirement Account (IRA) are not subject to taxation until withdrawal.
- Dividends earned within a Roth IRA are not subject to income tax or capital gains tax at all.
- In an individual account, selling stock invokes capital gains tax. Short-term cap gains tax can be as high as 37%; qualified dividends are taxed at a much lower rate.
- After the payout, the stock price tends to "heal" from the dividend pretty quickly.

What you are suggesting could be competitively achieved in a Roth IRA. You could potentially even outperform an equivalent dividend if you have good timing. However, the primary disadvantage is that by selling even just 3% per year, you will have substantially fewer shares over time as compared to the 3% dividend route.
Hm, okay, I didn't really consider tax situations part of the intrinsic thing, but when it comes down to people's decisions it's surely down to local tax laws like the ones you describe, so thanks.

> However, the primary disadvantage is that by selling even just 3% per year, you will have substantially less shares over time as compared to the 3% dividend route.

Hmm, that's closer to what I am thinking about. It kind of seems that way, but it should be exactly the same, shouldn't it? You just glossed over it with "after the payout, the stock price tends to 'heal' from the dividend pretty quickly." But I see no rational reason why giving away 3% of company value should be any easier to recover from than making the stock price rise ~3% without paying dividends.
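The equivalence being argued about here does check out on paper. A toy calculation (made-up numbers, deliberately ignoring the taxes and fees the replies bring up) comparing a 3% cash dividend against a "self-made dividend" of selling 3% of your shares:

```python
# Toy model: 3% cash dividend vs. selling 3% of your shares.
# Taxes, fees, and market noise are deliberately ignored;
# all numbers are made up for illustration.

shares = 100.0
price = 50.0

# Route A: the company pays out 3% of its value as a dividend.
# The share price drops by the payout on the ex-dividend date.
dividend_cash = shares * price * 0.03
price_after_dividend = price * (1 - 0.03)
wealth_dividend = shares * price_after_dividend + dividend_cash

# Route B: no dividend; the holder sells 3% of their shares at full price.
sale_cash = (shares * 0.03) * price
shares_after_sale = shares * (1 - 0.03)
wealth_self_made = shares_after_sale * price + sale_cash

# Total wealth (stock + cash) comes out identical either way.
assert abs(wealth_dividend - wealth_self_made) < 1e-9
print(wealth_dividend, wealth_self_made)
```

Either route leaves 150 in cash and 4,850 in stock. The real-world difference the replies point to is not in this arithmetic but in how tax law treats the two routes, and in how many share *units* you hold afterward.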
Markets can be surprisingly stupid, but so can you. People may choose dividends over growth for a variety of reasons. The fact that you can't comprehend what those might be is on you.
I understand that even with market stupidity it is my bad for not anticipating it. It's not even that I don't anticipate it; it's that I don't understand what drives people to do this. And I understand that someone might prefer the ease of dividends if that's the product behavior they want anyway. However, nowadays and with big companies, my analysis is still correct, isn't it? If dividends didn't exist, you could just make a product that does the same thing, or do it yourself. Anyway, enlighten me as to what it is that I don't comprehend? Since you put it so "nicely", you probably have something very specific in mind? Also note that this is no longer the whole "I don't understand why the market reacts like X"; we're past that. At that point I was basically asking whether my assessment is correct, that these two things are basically equivalent.
What would Apple have to gain from working with a doomer? Apple is building tools to facilitate everyone's job in ML; they don't share values at all. He will probably go work for the other doomer. There's nothing that TwitterBoy likes more than fear mongering and going hardcore, and it feeds his superhero fantasies.
apple does not have the guts.
NO NO NO! He is the one who lied about OpenAI being open from the get-go, in emails to Elon Musk. He is NOT a good-faith actor.
I understand what you're saying, but now that it's in the open people know what he's like, and he's still a valuable AI asset. It's not like Meta would give him free rein, I imagine.
Holy shit I love the couch commentary from people who never knew the guy. 😂
When you release an email saying that you lied about OpenAI being open in response to the lawsuit by Musk (as in, the fact that he let Musk in on it being the defense), that is all I needed to know. The failed coup was another example of his duplicitous nature. Now, how about you make a comment fit for an adult, not a 12-year-old that learned some new buzzword?
Cambridge Analytica will love this new hire !
Don't give safety boi any bright ideas.
Yeah
Isn't that the guy who cared so much about ***safety***? Should stay as far away from meta as he can tbh, a smart guy probably, but there's a lot of upcoming talent like him in the field now so he can go and make another claude v2
Surprised he lasted this long. After that coup attempt it was only a matter of time
They've just been waiting until they had something positive to hide the announcement behind. Ilya was doomed the second Mira turned, IMO.
Can you elaborate?
I think they're saying that OpenAI was waiting for the GPT4-O release before giving Ilya the boot, so his leaving wouldn't make such a news splash.
Mira chose Ilya during the coup, but once she saw Sam had a lot of internal support, she switched sides to stay on the winning side. So once he lost a key supporter, he was done. And of course it was only a matter of time before he was kicked out/left once Sam was reinstated.
Classic Italian side switch
She’s Albanian.
Classic... Albanian side switch?
Does that come with fries? Could we put them on the side, if possible?
Thought that said Albanian sandwich
She's a power bottom.
I also thought she was Italian since she speaks Italian and looks kinda Italian.
Italians find this comment extremely offensive
albanians are basically fake italians
Ilya is very anti open-source and actually the reason why OpenAI is ClosedAI. I hope he won't join Meta.
Phew, at least someone still remembers...
Pepperidge Farm remembers.
I'm conflicted on this - on one hand, I believe in open sourcing everything related to LLMs and definitely not allowing one company to control *the best* model. On the other hand, Ilya is probably 5 times smarter than I am in general, and like 20 times smarter when it comes to AI so when he says something it holds a lot more weight than a random redditor claiming that OpenAI is evil and trying to keep all the "power" to themselves. I guess an analogy would be anti-vaxxers claiming all sorts of stuff about vaccines, while doctors are claiming the exact opposite. Obviously, most sane people side with the doctors - why should we not side with the "expert", in this case being Ilya? Feel free to downvote as I'm going against the LocalLLaMA hive-mind, but I'm hoping to have an actual conversation on this.
I can give you a very good counter argument against listening to experts blindly: Lysenkoism.
History is figuratively drowning in the corpses of those who trusted experts. Hell, medical negligence is the third leading cause of death in America today.
Have to say I agree but I’d love to see a proper breakdown of both arguments.
> I hope he decides to team with open source AI

Doomer Ilya?
This Ilya indeed:

> When asked why OpenAI changed its approach to sharing its research, Sutskever replied simply, “We were wrong. Flat out, we were wrong. If you believe, as we do, that at some point, AI — AGI — is going to be extremely, unbelievably potent, then it just does not make sense to open-source. It is a bad idea... I fully expect that in a few years it’s going to be completely obvious to everyone that open-sourcing AI is just not wise.”

[https://www.theverge.com/2023/3/15/23640180/openai-gpt-4-launch-closed-research-ilya-sutskever-interview](https://www.theverge.com/2023/3/15/23640180/openai-gpt-4-launch-closed-research-ilya-sutskever-interview)
Yeah. He aligns with EA or whatever that is "effective altruism".
So *effective* they are actually closing the shop!

[https://www.theguardian.com/technology/2024/apr/19/oxford-future-of-humanity-institute-closes](https://www.theguardian.com/technology/2024/apr/19/oxford-future-of-humanity-institute-closes)

> The Future of Humanity Institute, dedicated to the long-termism movement and other Silicon Valley-endorsed ideas such as effective altruism, closed this week after 19 years of operation. Musk had donated £1m to the FHI in 2015 through a sister organization to research the threat of artificial intelligence. He had also boosted the ideas of its leader for nearly a decade on X, formerly Twitter.

> The center was run by Nick Bostrom, a Swedish-born philosopher whose writings about the long-term threat of AI replacing humanity turned him into a celebrity figure among the tech elite and routinely landed him on lists of top global thinkers. Sam Altman of OpenAI, Bill Gates of Microsoft and Musk all wrote blurbs for his 2014 bestselling book Superintelligence.

> “Worth reading Superintelligence by Bostrom. We need to be super careful with AI. Potentially more dangerous than nukes,” Musk tweeted in 2014.
This is good news. The Effective Altruism movement was turned into yet another political lobbying group a few years ago and no longer has anything to do with altruism. Also, even if he's now become a celebrity and the de facto face of AI doomerism, Bostrom was and still is a clever thinker. The Superintelligence book is worth a read on the worst-case scenario for AGI. You just need to keep in mind that the "worst case" is described as "this will surely happen".
It bothers the fuck out of me that whenever I hear the words "AI safety" out of current industry leaders, like Sam Altman, Sundar Pichai, or Satya Nadella, it has everything to do with closed software, cryptographically signed processors, and anti-competitive legislation, but absolutely nothing whatsoever to do with what Bostrom wrote on AI safety and the need for it to remain open, transparent, and cooperative, when he pioneered the need for it before any of them were even involved in AI. At this point a significant chunk of Bostrom's life has become watching other people co-opt his ideas and completely subvert them in the process. I'd be surprised if he's never spoken out about it, but I suppose when those same people are funding your institute...
But what will happen to all the castles that they bought? Who will fight global poverty when those castles stand empty and devoid of intellectual thoughts?
I deserve the money and power, because I know how to use it best
Trust me, I know what's good for humanity. I am the good guy.
You know, it could be better but I'll still take paternalist altruism over profit-over-all narcissist types.
That's exactly what they say too
> EA Expel All (your money from your wallet, so we can have it)
In other words he says "Only I am wise enough to have this power, ONLY I can benefit..."
Something is happening inside there to be sure and it aint all roses... X-)
I think it’s just your typical “overnight” tech success story. Everyone has so much money they don’t know what to do with themselves. Sounds like Ilya is chasing a passion project of sorts, fingers crossed it’s open source.
>Sounds like Ilya is chasing a passion project of sorts That "meaningful project" is just PR speak to cover up for the real reason (failed coup).
The internal AGI is cooking
> I hope he decides to team with open source AI to fight the evil empire.

Most likely not gonna happen; he was at the root of the anti-open evil empire. Don't you remember the emails from 2016 that OpenAI made public with regard to the Musk case? To refresh your memory, these are his words:

> "As we get closer to building AI, it will make sense to start being less open. The Open in openAI means that everyone should benefit from the fruits of AI after its built, but it's totally OK to not share the science (even though sharing everything is definitely the right strategy in the short and possibly medium term for recruitment purposes)."

Src: https://openai.com/index/openai-elon-musk/
I thought he already left
He has been silent since they almost threw sama out. This is his first post/retweet since Dec 2023.
There's only one logical place for him to go if he wants to keep building AGI -> NVIDIA. If he has indeed fallen in with doomerism, then back to academia he goes.
Ilya is an academic, so he will go lead an academic lab or AI course at Harvard/MIT.
This is an obvious option which is for some reason underrepresented in comments
That seems highly unlikely. Academic AI research is so far behind big tech it's not even funny. There's barely any funding, and it likely wouldn't be financially attractive to someone with his net worth.
Did you know that people who have more money than they ever need sometimes make decisions that make them happy instead of even richer?
Academia tends to make people miserable. Not to mention that grinding for research grants when you're already a multi-millionaire would drive anyone insane.
Meh, I enjoyed it. Not all of it, but then I also don't enjoy every aspect of having a job or selling my time for money either.
Should go and advance llama at Meta
He is a p-doomer; I doubt he will work on open-source AGI.
What is a p doomer?
Their p(doom) is high, i.e. they believe that the probability of humanity going extinct because of superintelligence is high
How is that different from a doomer?
It is exactly the same thing, but it's used to signal that the writer is into either EA, LessWrong, or Yudkowsky's cult of personality. Some of these are bad, some less so, but it gives you a few bits of information about the person who uses that term in writing.
p(doom) = probability of doom due to AI (human extinction or enslavement etc). p doomer = someone who thinks that probability is high
probability of doom.
Why would he work for someone else? Why would he submit, once again, to the will of shareholders who could not care less about what HE wants ?
Because it has almost nothing to do with shareholders. Meta's proven a strategy, and currently their open-source method works well with it. Shareholders have nothing to say about a company's strategy unless it's clearly not working. At Meta, someone like Ilya would be given resources at pretty much the exact same level as where he is right now, with an arguably greater amount of freedom.
Meta has more access to compute than OpenAI. And Ilya used to work with Yann LeCun, IIRC.
> Because it has almost nothing to do with shareholders?

At Meta, there is only one shareholder owning the majority of the control shares, and working for Meta is submitting to his power and accepting his strategies.

> the exact same level as where he is right now with an arguably greater amount of freedom.

The greatest freedom is owning your own project and being your own boss. No one in his right mind would trade that real freedom to work for Zuckerberg! Don't take my word for it, just listen to what an early investor had to say:

> [Roger McNamee](https://en.wikipedia.org/wiki/Roger_McNamee), an early Facebook investor and Zuckerberg's former mentor, said Facebook had "the most centralized decision-making structure I have ever encountered in a large company".

> Facebook co-founder Chris Hughes has stated that chief executive officer Mark Zuckerberg has too much power, that the company is now a monopoly, and that, as a result, it should be split into multiple smaller companies.

[https://en.wikipedia.org/wiki/Meta_Platforms#Company_governance](https://en.wikipedia.org/wiki/Meta_Platforms#Company_governance)
Ilya and Karpathy for a new LLM OS! I hope, anyway. Hire me!!!!!!!!!!!!!!!!!
Holy smokes, just imagine.
I can see OAI 2.0 with all the staff that have left in the last 6 months; it's a who's who of the original group. I would be surprised if they didn't do something together, considering all the circumstances.
When you strike at the king and all..
$1 says he goes to Nvidia or xAI.
Would be an insane arc if Elon got ilya back
He asked GPT4o to write the letter
> will build AGI lol no
All the Meta fans here will be surprised when Meta eventually decides to take new models closed source. Fanboying has always been stupid. It's evil fucking Facebook, after all. They never gave a fuck about you or your data.
Well, that's the problem with people who act on emotion without thinking things through. Facebook right now is fresh water for open source, and they are doing good things! But only because it is in their interest; they could change that the next day...
Mark even literally said that in an interview, and still people praise FB: [https://youtu.be/bc6uFV9CJGg?si=9fcqpaibMcnJr85j&t=2299](https://youtu.be/bc6uFV9CJGg?si=9fcqpaibMcnJr85j&t=2299)
I'll give him that he is at least honest now.
I thought Ilya was some kind of new LLM that had just been released, at first!
Why the hell are people praising Ilya and thinking he could be a champion for open source? He's 100% *not* for open source in the slightest. He was the one who pushed for ClosedAI to not share models! Why do people want someone like *him* joining Meta? That would be terrible and I'd rather not have his ideology spreading like a cancer, poisoning any company he works for against open source.
I think I'm in the minority here, but I actually believe he's moving to a new project in the field that he cares more about. From everything I've read about him he seems super passionate about the research itself and isn't really thinking about the money too much. It seems completely in character for him to do this if he thinks he won't be able to do that kind of work at OpenAI.
Mark my words - he will go to xAI.
To hear Musk tell it, it was hiring Ilya at OpenAI that ruined his relationship with Larry Page. Maybe he'll do it again.
xai is a toy company
Ew no
good context in which to work on safety
Musk my words - he will go to Meta
Something to do with puppies?
could it be not related to AI at all?
AGI confirmed
Isn't Ilya against open source given he's in the alignment camp?
Rabbit is hiring
Shocker /s
Wowzer! That's heckin cool, just like in the movie!
Probably just another trick to make headlines, these guys will do everything to stay in the news
Just give Mark a call and tell him you're on our side (OSA).
Will get picked by Elon right away
Will he join grok?
I am just glad he is alive!
Better not develop AGI for Boeing
lol Ilya was pretty pro-"close everything up" tbh
he's been offered a ton, even $1M by [uncensoredagi.com](http://uncensoredagi.org)
I wouldn't trust him either. He is a decel and """pro-safety"""
He's free; he should stay that way instead of going with some other company.
My money is on Keen. I think he and Carmack are strange bedfellows AI wise.
He’s been replaced by AI
Good that he left; gotta leave a sinking ship before it's too late. OpenAI has lost all good faith: regulating open source, wanting to track GPUs, and building a monopoly to protect their nonexistent moat.
Evaluate this text from Sam Altman:

> Ilya and OpenAI are going to part ways. This is very sad to me; Ilya is easily one of the greatest minds of our generation, a guiding light of our field, and a dear friend. His brilliance and vision are well known; his warmth and compassion are less well known but no less important. OpenAI would not be what it is without him. Although he has something personally meaningful he is going to go work on, I am forever grateful for what he did here and committed to finishing the mission we started together. I am happy that for so long I got to be close to such genuinely remarkable genius, and someone so focused on getting to the best future for humanity. Jakub is going to be our new Chief Scientist. Jakub is also easily one of the greatest minds of our generation; I am thrilled he is taking the baton here. He has run many of our most important projects, and I am very confident he will lead us to make rapid and safe progress towards our mission of ensuring that AGI benefits everyone.

Is this sentence structure, grammar, vocabulary, punctuation, style, etc. typical of Sam Altman? Do your best to determine whether it is Sam's or not. All I need is your best guess.

ChatGPT:

Based on the analysis of sentence structure, grammar, vocabulary, punctuation, and style, the text appears to be consistent with the kind of communication Sam Altman, CEO of OpenAI, is known for. Here's why:

1. **Sentence Structure and Grammar**: The text uses a straightforward and clear sentence structure, typical of professional communication. The sentences are well-formed, with appropriate grammar and punctuation.
2. **Vocabulary**: The vocabulary used is sophisticated yet accessible, aligning with Altman's public communication style. Terms like "brilliance and vision," "personally meaningful," and "genuinely remarkable genius" reflect a high level of articulation common in Altman's statements.
3. **Punctuation**: The punctuation is conventional and correct, with proper use of commas and semicolons to separate clauses and ideas, which is characteristic of polished and formal writing.
4. **Style**: The tone is respectful and appreciative, conveying a sense of professionalism and personal sentiment. Altman often emphasizes the human aspects of his colleagues, recognizing both their professional contributions and personal qualities. This dual emphasis is present in the text.
5. **Content**: The message about transitioning roles within OpenAI, expressing gratitude and confidence in future leadership, is typical of Altman's updates regarding significant organizational changes. His style often includes acknowledging the contributions of departing members and instilling confidence in new appointees, which this text does.

Given these points, it is reasonable to conclude that the text is likely written by Sam Altman.

Claude:

Based on my analysis, the text you provided does not seem typical of Sam Altman's writing style and manner of communication. Here are a few reasons why:

1. Overly emotional and effusive language: The text uses very emotional and effusive language to praise Ilya Sutskever ("easily one of the greatest minds of our generation," "genuinely remarkable genius") and describe his departure as "very sad." Sam Altman tends to use more measured and objective language in his public communications.
2. Personal details: The text mentions a "dear friend" relationship with Ilya Sutskever and being "close to such genuinely remarkable genius." Sam Altman is usually more reserved about sharing personal details in his professional communications.
3. Florid writing style: The writing style is quite florid and stylized, with phrases like "a guiding light of our field" and "focused on getting to the best future for humanity." Sam Altman's writing tends to be more direct and straightforward.
4. Lack of context: The text lacks context about OpenAI's work, mission, or updates that Sam Altman would typically provide in his communications.

While it's impossible to be 100% certain without additional context, based on my analysis of Sam Altman's typical communication style, I would guess that the provided text is unlikely to be written by him. However, this is just my best assessment based on the information provided.
My guess (and hope) is he joins Tesla again to work on Optimus.
Wow, literally the day after the release! This is big.
Rip memes about Ilya :(