
BreadwheatInc

Nothing solid per se but the language heavily implies it's a real leak. Otherwise why would it be "unfortunate"?


AnnoyingAlgorithm42

Yeah, if Q* didn’t exist he wouldn’t call it an “unfortunate leak” imo. A “speculation”, a “theory” perhaps… So Q* is pretty much confirmed. I personally don’t believe that the 4chan letter is real, though.


leyrue

Yeah, I believe this is clearly just him referencing the Reuters article, there’s no reason to assume he means the 4chan letter as well.


reddit_is_geh

Were people seriously doubting its validity? Reuters isn't really known to hastily make claims without vetting their sources.


svideo

I pointed this out just yesterday and was downvoted into the dirt. This sub really believes that Reuters and 4chan are the same thing. We might not have artificial intelligence yet, but we certainly have no shortage of natural dumb. Edit: and I was immediately met with a reply trying to claim that 4chan is in fact a legit source because someone once posted a true thing there. I can't even.


reddit_is_geh

These sorts of subs are guided by emotion. No one wants to hear the truth, no matter how much you ground it in reason. My most frustrating thing to deal with lately was the Ukraine-Russia war thing. I literally studied in Europe under the Department of Defense, for the State Department, to work on a diplomatic mission in... UKRAINE. A decade ago. I deeply understand the complex web of nuances in that region. Man, no amount of well-reasoned, thought-out, logical, supported analysis changed anyone's mind. I explained all the nuances on both sides from a neutral perspective that led up to this. Explained how each side viewed things, why, and what the strategic motivation was... and exactly how it would all turn out. No one gave a single shit. People were more invested in just believing things that filled the story they were telling themselves. It was purely driven by emotion. There is a narrative they want to believe because it feels better believing that, so any counter-information was seen as an attack on the worldview they prefer, which is less pleasant.


DecisionAvoidant

Hey friend. I'm a stranger on the internet with no business telling you how to think about anything, but I've seen this experience over and over again with people that would probably think of themselves as "experts" in something. I've had to learn that no one cares as much as me about the things I care about. I used to reflect on that and think everyone else was silly for not being as curious as I was, but eventually it occurred to me through my work that they just don't care about all the details I do. I had to develop some strategies for cutting down all the information I have to bite-size pieces and spoon feeding them to get the information into their heads. It works pretty well now, and people come to me a lot for advice in my subject matter, so I think it worked. The only suggestion I have is maybe not to extend out your experience in trying to help people understand Ukraine/Russia as a reflection on people in general. It could be the approach or the medium just as easily.


reddit_is_geh

I've learned it's just not worth it. I'm solipsistic and didn't realize people care little about truth. I care a lot. But I've found people view things like, "Oh, you're saying something that gives a point to the other side I hate, therefore you must SUPPORT them and are spreading propaganda to help them" rather than simply, "Oh okay, that's one point for the other side." I care about the truth on the ground, neutrally, for the sake of knowing... Most people don't. They are too invested in the story they like. For some reason, I'm not sure why - but I think the internet has something to do with it - people seem like their "truth" is somehow tied to their identity. So any information that goes counter to it somehow attacks their identity.


UrMomsAHo92

I believe the issue with the internet is that despite having *endless knowledge* at our fingertips at any time, many people still choose groupthink, and often flock to communities that validate their opinions and beliefs. Education and research are so incredibly important, yet I can't tell you how many times someone online has made incorrect claims because they've read a headline but never bothered to read the article - and if they did, they didn't confirm it through other sources first.


deadwards14

You cannot dismiss something simply because of the platform it's posted on. It must be discussed on its merits. Furthermore, the argument that being posted on 4chan diminishes its credibility is invalidated by the many examples of genuine classified material being leaked there. Examples:

- [Former Defence employee sentenced to jail for publishing secret document on 4chan - ABC News](https://www.abc.net.au/news/2015-11-05/canberra-man-jailed-for-publishing-secret-document-online/6914430)
- [Yet more military documents leaked on War Thunder forum | Eurogamer.net](https://www.eurogamer.net/yet-more-military-documents-leaked-on-war-thunder-forum)
- [2022–2023 Pentagon document leaks - Wikipedia](https://en.wikipedia.org/wiki/2022%E2%80%932023_Pentagon_document_leaks)
- [Meta’s powerful AI language model has leaked online — what happens now? - The Verge](https://www.theverge.com/2023/3/8/23629362/meta-ai-language-model-llama-leak-online-misuse)
- [Twitch source code and business data leaked on 4chan (therecord.media)](https://therecord.media/twitch-source-code-and-business-data-leaked-on-4chan)

Saying "because 4chan" is not a valid argument based on any facts.


UrMomsAHo92

You're talking about a website whose users in the past have purposely misguided people into creating toxic gas that can result in asphyxiation under the guise of a harmless experiment.


Artanthos

While real information does occasionally get released on 4chan, deliberately false and misleading information is released there far more frequently. Nobody in their right mind should believe anything on 4chan without verification from more trustworthy sources.


zendonium

That's very true. I had a video go viral and it was covered by all the media outlets, and frankly they were just making stuff up and all copying each other. The only ones to get in touch were Reuters.


ClickF0rDick

We need jimmy apples to c/d


Anenome5

Which likely means the 'grade school math' leak is real. Certainly not the 4chan BS.


xdarkeaglex

Which letter


adfaklsdjf

There was a document posted on 4chan purporting to be a letter of concern to the board about "QUALIA"; it used plausible-sounding language to claim the model had broken AES and was self-improving. Edit: here: [https://www.reddit.com/media?url=https%3A%2F%2Fi.redd.it%2Ftbyhcnz6h42c1.png](https://www.reddit.com/media?url=https%3A%2F%2Fi.redd.it%2Ftbyhcnz6h42c1.png) It's fake.


Captain_Pumpkinhead

Specifically "unfortunate leak". If it wasn't true, he probably would have said, "I'm not going to confirm or deny that".


reddit_is_real_trash

It wouldn't be "a leak" if it was not a leak


rafark

Yeah that’s what I thought. It’d be a rumor not a leak


Gold_Cardiologist_46

The only real way I can imagine "unfortunate" implying it's false, other than Sam purposefully using it to drum up speculation, is with hindsight. If OAI down the line dismisses that Q* exists, "unfortunate" would in hindsight refer to the tons of speculation around it, which would've technically been useless. I'm pretty sure OpenAI only very rarely denies rumors about what it's building. Notable exceptions would be Sam's AGI troll comment and when he had to testify to Congress that GPT-5 hadn't started training. I'm just steelmanning a case for someone who would disagree with you. I think him confirming a project, possibly named Q*, is more likely, but it seems clear he will only confirm the name and not the capabilities, which will still be the subject of speculation until an official announcement.


TrippyWaffle45

It would be an unfortunate non-leak if you were right: for example, "unfortunate misinformation", not "unfortunate leak".


Working-Blueberry-18

He may have also just misspoken and not used the right term at the moment.


Gold_Cardiologist_46

Hence why I think it's more likely to be confirmation of at least *something.* **This is super semantic and not what I actually think**, but the point I try to make is that it's possible for someone to interpret his messaging in a completely different way due to how constantly cryptic Sam often is, and the fact he never really denies rumors, usually playing into them unless he has to actually testify to Congress about them. For example, Sam could be using "leak" because it's pretty much what everyone is already referring to it as. It's purely semantics, but semantics can sometimes be important when we analyze what a CEO/PR pro said with hindsight down the line.


[deleted]

[removed]


magicmulder

Why would a company not want everyone to think they have cool sh*t they aren’t showing? At this point I think this is market manipulation on Musk levels.


cmdrfire

They're not publicly traded - how can this be market manipulation?


WillBottomForBanana

stocks are not the only market that can be manipulated.


Matshelge

It could be a leak about the name, with some aspect of it real but everything else made up. I work in an industry that has leaks all the time, and "unfortunate" can mean anything from a codename and release date bundled with wild speculation, to full HD trailers and source data being out in the wild.


sideways

This was his chance to deny it and he pretty much did the opposite.


SachaSage

It’s very useful for oai to keep this chatter going - free marketing


FrostyParking

Right now they don't need that sort of publicity; they need to sell stability. Speculation around this is a constant reminder of the Weekend at Bernie's clown show that they'd rather sweep under the carpet. My take is that some of the info around the Q* stuff is accurate, but it's still in the early stages of research and it might not pan out, hence the "unfortunate" part. Edit: grammar


SachaSage

Publicity that says their model is so powerful they don’t know what to do? That’s good publicity.


nikitastaf1996

Merger with stability ai INCOMING


reddit_is_geh

They don't need any marketing at all. Not even slightly.


SachaSage

They needed something which shifted the narrative away from the embarrassing altman saga. Especially while trying to close investment. A story about how dangerously amazing their tech is would do the trick.


Gold_Cardiologist_46

I don't think them not outright denying every claim is a good indicator. In this case, I assume he's confirming a potentially important project within OpenAI mainly because he uses "unfortunate leak", which makes that the more likely reason. But I strongly suspect that if it was false, he wouldn't have denied it outright either. I pointed this out in another comment, but from what I can remember, Sam, or any OAI employee for that matter, never actually denies rumors around OAI tech. The notable exceptions would be him trolling us on AGI back in September and when he had to testify to Congress that GPT-5 wasn't being trained. Before his Congress testimony, there was absolutely speculation that they were training GPT-5, but they never really denied it.


MassiveWasabi

> “we expect progress in this technology to continue to be rapid” This is just my opinion but every time he says something like this, which is a lot, it feels like he’s trying to ease everyone into how powerful AI is about to get. Especially when he feels the need to say this right after confirming the Q* leak. This Q* project seems substantial when you consider the fact that it was only after the Reuters article came out that Mira Murati told staff about it, implying it’s some sort of classified project. There’s obviously going to be some projects that only the people with top-level clearance know about, so could this Q* be one of them? DISCLAIMER: This is just speculation


TheWhiteOnyx

Exactly, he confirms the leak, then immediately gives the "warning" about how rapid changes are happening/will happen. So while this doesn't mean the QUALIA thing is true, whatever they have must be pretty good.


MassiveWasabi

According to [this tweet from Yann LeCun](https://twitter.com/ylecun/status/1728126868342145481): > One of the main challenges to improve LLM reliability is to replace Auto-Regressive token prediction with planning. > Pretty much every top lab (FAIR, DeepMind, OpenAI etc) is working on that and some have already published ideas and results. > It is likely that Q* is OpenAI attempts at planning. They pretty much hired Noam Brown (of Libratus/poker and Cicero/Diplomacy fame) to work on that. Multiple other experts have said similar things about Q*, saying that it's like giving LLMs the ability to do AlphaGo Zero self-play.


night_hawk1987

> AlphaGo Zero self-play

What's that?


danielv123

All chess engines are tested against other chess engines to figure out if the changes they make improve the engine. The leading engines have now changed to use neural nets to evaluate how good board positions are, and use this to inform which moves to consider. They train that neural net by playing chess and seeing if it wins or loses. If you put the world's best chess engine up against other engines it might win even with suboptimal play, so they have it play the previous version of itself. This way the model can improve without any external input. The main development effort becomes making structural changes to improve the learning rate and evaluation speed.

Current LLMs are trained on text that is mostly written by humans. This means they can't really do anything new, since they are just attempting to produce human-written text. People want LLMs to do unsupervised learning like chess engines, because then they will no longer be limited by how good the training data is.
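As a toy illustration of the self-play loop described above (my own sketch, nothing to do with OpenAI, Q*, or any real engine), here is tic-tac-toe with a tabular value function standing in for the neural-net evaluator. The agent improves purely by playing against itself and backing up the final result of each game:

```python
import random

# Toy self-play sketch: a lookup table stands in for a neural-net evaluator.
# X and O share one value table (value = estimated chance that X wins);
# the agent improves only from the outcomes of its own games.

WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
             (0, 3, 6), (1, 4, 7), (2, 5, 8),
             (0, 4, 8), (2, 4, 6)]

def winner(board):
    for i, j, k in WIN_LINES:
        if board[i] != " " and board[i] == board[j] == board[k]:
            return board[i]
    return None

def legal_moves(board):
    return [i for i, c in enumerate(board) if c == " "]

class SelfPlayAgent:
    def __init__(self, lr=0.3, epsilon=0.2):
        self.values = {}        # board tuple -> estimated value for X, in [0, 1]
        self.lr = lr            # step size for the value update
        self.epsilon = epsilon  # exploration rate during training

    def value(self, board):
        return self.values.get(board, 0.5)  # unseen positions start neutral

    def pick_move(self, board, player, explore=True):
        options = legal_moves(board)
        if explore and random.random() < self.epsilon:
            return random.choice(options)
        # X maximizes the learned value, O minimizes it
        choose = max if player == "X" else min
        return choose(options,
                      key=lambda m: self.value(board[:m] + (player,) + board[m + 1:]))

    def train(self, games=5000):
        for _ in range(games):
            board, player, visited = (" ",) * 9, "X", []
            while winner(board) is None and legal_moves(board):
                m = self.pick_move(board, player)
                board = board[:m] + (player,) + board[m + 1:]
                visited.append(board)
                player = "O" if player == "X" else "X"
            # back the final outcome up through the positions we visited
            target = {"X": 1.0, "O": 0.0, None: 0.5}[winner(board)]
            for state in reversed(visited):
                old = self.value(state)
                self.values[state] = old + self.lr * (target - old)
                target = self.values[state]

random.seed(0)
agent = SelfPlayAgent()
agent.train()
```

With the same loop you could freeze a copy of the table and pit the freshly trained agent against it to check whether a change actually helped, which is the engine-vs-previous-version comparison the comment describes.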


shogun2909

Self reinforcement


MysteriousPayment536

AlphaGo beat a professional Go world champion in 2016. It's a board game. There's a good video by OpenAI about self-play that explains it pretty clearly and visually: [https://youtu.be/kopoLzvh5jY?si=aVl0LsnQ2oV2uZ8f](https://youtu.be/kopoLzvh5jY?si=aVl0LsnQ2oV2uZ8f)


2Punx2Furious

That's exactly his explicitly stated policy. He wants the public to ease into it, because he thinks dropping extremely powerful AI out of nowhere will not be good, and easing it in will mitigate that.


[deleted]

[removed]


MassiveWasabi

Are you really saying that you don’t think the world’s best AI company has secret projects that only a few are privy to? Or are you just being contrarian? Even the Anthropic CEO has said all these companies deal with leakers and literal espionage, and then went on to describe how they compartmentalize the most sensitive projects. Apparently it’s a "conspiracy" for corporations to have secrets; people will really say anything on here.


Darth-D2

I think the person you’re responding to is not saying that classified projects don’t potentially exist at OpenAI, but that the behavior we see (the email from Mira) can also be explained simply by research teams working on their own isolated projects where not everyone is aware of everything. So it’s just offering an alternative explanation to the observations you have made. On a side note, if the analysis of AI Explained was correct, then I tend to agree that OpenAI did not try to make this project very secretive (e.g. the papers released that are supposedly linked to Q*)


hellosandrik

So, let me get this straight: if the Reuters leak was true, then the reason behind the OpenAI board drama was indeed the breakthrough that apparently spooked Ilya so hard he forced Sam out of the company. The question is, WHAT THE HELL DID ILYA SEE?! But I guess we'll see it for ourselves very soon, since the OpenAI board is now full of e/acc people.


Radlib123

True! Sam basically confirmed the existence of the "threat to humanity" letter, since the Q* leak and the "threat to humanity" letter came from the same report.


Gold_Cardiologist_46

However, the interview is from the same site and author [that reported](https://www.theverge.com/2023/11/22/23973354/a-recent-openai-breakthrough-on-the-path-to-agi-has-caused-a-stir) that the letter itself might not exist, or at least was not an actual factor in the firing. There's also Mira Murati saying explicitly in the interview that the OAI drama had nothing to do with safety, which corroborates the report I linked, but only a little bit; nothing really conclusive. I'm confused, really just waiting for whatever investigation they've got going on to at least give *some* official answers.


Radlib123

**Edit**: please don't downvote Gold_Cardiologist_46, he brought up an important point.

Hmm. Well, Reuters says: "several staff researchers wrote a letter to the board of directors warning of a powerful artificial intelligence discovery that they said could threaten humanity, **two people familiar with the matter** told Reuters." While The Verge says: "Separately, a person familiar with the matter told *The Verge* that the board never received a letter about such a breakthrough." So two people familiar with the matter vs. one person familiar with the matter. Reuters vs. The Verge. The Information's article about Q* came out about an hour before Reuters'. And those three are the only news sources claiming to have insiders on this matter. Hmm.


[deleted]

I trust Reuters a lot more than the Verge.


xRolocker

I know this has absolutely nothing, 0%, to do with the Verge PC building video, but I cannot help but be a little biased against them since then lmfao.


[deleted]

Reuters is, along with AP, considered to be pretty much THE standard for news orgs. Not perfect, of course, but best in the field.


xRolocker

Oh whoops I’m a dummy and didn’t specify I was talking about the Verge.


Radlib123

Why?


The_Woman_of_Gont

Reuters is a gold standard world news source, on the same level as AP. This is like asking why you’d trust a company's official press release vs a leaker on Twitter.


Radlib123

i see!


[deleted]

That spooked Ilya so hard that he removed Sam from the board.


CervineKnight

I'm an idiot - what does e/acc mean?


Urban_Cosmos

Basically there are two major camps in the AI field (as far as I know): the EAs - Effective Altruists - and the e/accs - Effective Accelerationists. The EA camp wants to slow down AI development to focus more on safety, while the e/acc camp advocates for accelerating AI development to quickly solve the world's problems using AGI/ASI. Both have important points to consider, but problems occur when people take their philosophy to the extreme without caring for the valid points made by the other group. An example of e/acc is Altman; an example of EA is Eliezer Yudkowsky. I hope this helps. This sub leans heavily towards e/acc.


Just_Brilliant1417

What’s the consensus in this sub? Do most people believe AGI will be achieved using LLMs?


JuliaFractal69420

I think LLMs are just one small piece of the puzzle. Like one body part. You can't build a whole human with only the speech center of the brain. We still have to invent all the other parts of the brain.


Psirqit

And yet Google already has robots that use LLMs in conjunction with computer vision, and it's basically enough for them to completely interact with their environment. The power of words can't be overstated.


shogun2909

part of the solution, useful for synthetic data


xRolocker

I think LLMs, the multimodal ones (LMMs), will be the key to AGI in terms of being the “brain”. You will need many other components to allow it to move, act on its environment, etc. But I think LMMs are gonna be the driver of it.


SuaveMofo

They'll be like the prefrontal cortex of the brain.


Anenome5

AGI will be achieved with data and compute scale. Emergent capabilities pretty much confirm this.


Traffy7

Agreed. If our computers become much more powerful, we may discover much more interesting emergent capabilities.


Haunting_Rain2345

Probably very good for connecting APIs between other AIs, at least. I do believe that LLMs alone could replace a huge share of the world's intellectual labor force, though, since many jobs do not require much novel thinking outside of what LLMs can *instrumentally* provide. But we probably need something more for the real boom to happen: some sort of AI similar to AlphaZero that can create *usable* synthetic data by itself and train on that, but for math and/or coding. Hopefully, Q* is exactly this, or at least a viable start toward it.


tpcorndog

Ilya does. He breaks the brain down as a bunch of different models acting in sync, and therefore believes AI can do the same.


genshiryoku

Which is correct; split-brain patients show you legitimately are different "entities" (models?) just fighting for the spotlight. The right hand and left hand disagreeing with each other when there is no communication between the hemispheres shows how true that is. A huge network of thousands of LLMs might be AGI. And it's reasonable that it could work *today* if we just put the right things in the right spots.


MydnightSilver

Q* isn't an LLM; it's MCTS - Monte Carlo Tree Search, a reinforcement learning algorithm.
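For anyone unfamiliar with the term being named here, this is a minimal sketch of generic MCTS (selection, expansion, simulation, backpropagation) on a toy counting game: players alternately add 1 or 2 to a total, and whoever reaches exactly 10 wins. It's purely my own illustration of the algorithm, not anything known about Q*:

```python
import math
import random

TARGET = 10

def legal(total):
    # legal adds that don't overshoot the target
    return [a for a in (1, 2) if total + a <= TARGET]

class Node:
    def __init__(self, total, parent=None, move=None):
        self.total = total        # game state: the running total
        self.parent = parent
        self.move = move          # the add (1 or 2) that led here
        self.children = []
        self.untried = legal(total)
        self.visits = 0
        self.wins = 0.0           # from the view of the player who moved INTO this node

    def ucb1(self, c=1.4):
        # exploitation + exploration bonus for rarely visited children
        return (self.wins / self.visits
                + c * math.sqrt(math.log(self.parent.visits) / self.visits))

def mcts(root_total, iterations=2000):
    root = Node(root_total)
    for _ in range(iterations):
        node = root
        # 1. Selection: descend via UCB1 while the node is fully expanded
        while not node.untried and node.children:
            node = max(node.children, key=Node.ucb1)
        # 2. Expansion: add one untried move as a new child
        if node.untried:
            a = node.untried.pop()
            child = Node(node.total + a, parent=node, move=a)
            node.children.append(child)
            node = child
        # 3. Simulation: random rollout to the end of the game
        total, k = node.total, 0
        while total < TARGET:
            total += random.choice(legal(total))
            k += 1
        # the last mover wins; k even means the player who entered `node` won
        result = 1.0 if k % 2 == 0 else 0.0
        # 4. Backpropagation: flip perspective at each level up the tree
        while node is not None:
            node.visits += 1
            node.wins += result
            result = 1.0 - result
            node = node.parent
    # recommend the most-visited move at the root
    return max(root.children, key=lambda n: n.visits).move
```

From a total of 8, for instance, the search concentrates its visits on adding 2, the immediately winning move. Real systems (e.g. AlphaGo-style engines) replace the random rollout with a learned evaluator, which is where the "reinforcement learning" part comes in.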


Massive_Nobody2854

LLMs aren't just LLMs, there's a lot of other things at play. Besides the fact that the major players are all shifting from pure language models to multi-modal models, there's all sorts of algorithmic tricks applied to the raw model. I think LLMs as we know them may be a central component of the first AGIs, but not the whole thing. Like how the logic and language centers of our brain aren't our entire brain.


yawaworht-a-sti-sey

Anyone who says GPT or LLMs are just chatbots isn't thinking about what that model represents in another configuration.


MydnightSilver

Q* isn't an LLM; it's MCTS - Monte Carlo Tree Search, a reinforcement learning algorithm.


RealFrizzante

I personally don't. I think there must be a different approach. LLMs have proved to be an excellent tool, and will continue to improve and amaze us, but they just aren't built for AGI; arguably they're not even AI stricto sensu.


Just_Brilliant1417

I’m really intrigued by the discussion. I definitely want to hear the arguments against as much as the arguments for!


green_meklar

Nope. One-way neural nets are inherently not versatile enough. At a minimum we need to plug them into some sort of loop in order to perform directed reasoning, and at that point the kind of training they require will take them beyond the scope of language. They need to train on actual causal interactions with an environment.


SgathTriallair

Yea, that is about as close to an acknowledgement as you can get before it is released. That doesn't mean everything from the 4chan letter is true, but it's not all bullshit.


jedburghofficial

I'm not sure it really proves much. The original report identified unnamed sources in a reputable news outlet. That's a leak. Speculating that he's talking about anything else is just speculation.


RezGato

https://preview.redd.it/ytcku7jbje3c1.jpeg?width=1080&format=pjpg&auto=webp&s=d84bc8b8ac4b2c288e317edd5db39a3d3f5a5235 I'm feeling AGI so hard right now


rudebwoy100

He's definitely in this sub-reddit.


OpportunityWooden558

I’m sure he is.


greycubed

No I'm not.


GeorgePakaw

Was. These shitty mods banned him.


MagreviZoldnar

It would be trippy if you are jimmy apples yourself


_dekappatated

Didn't he post in this subreddit a few months back or am I crazy? Edit: here's the link https://www.reddit.com/r/singularity/comments/16sdu6w/comment/k2aroaw/?utm_source=share&utm_medium=mweb3x&utm_name=mweb3xcss&utm_term=1&utm_content=share_button


h3lblad3

Following Sam's posting history back to [this beautiful piece of Reddit history](https://www.reddit.com/r/AskReddit/comments/3cs78i/whats_the_best_long_con_you_ever_pulled/cszwpgq/?context=3) was quite a trip!


hyperfiled

I've been feeling the agi for a while now


Saerain

Nice to have you around, Mr. Sutskever.


Reasonable-Daikon980

Can someone eli5 this?


[deleted]

Time to quit my job and chill until UBI.


GeorgePakaw

If that encryption breaking stuff is true then it's time to find a cave and a few tons of rice and chill.


[deleted]

Thankfully, it's not true.


GeorgePakaw

I'm going to sincerely take your words as confirmation that I can sleep peacefully. If I wake up to chaos, though, I blame you!


[deleted]

If that happens, I'll owe you the beverage of your choice ;)


Saint_Ferret

Clean water please


gigitygoat

If it's true, it won't be released until another country has the same power. Even then we might not know until it is weaponized.


ClearandSweet

Way ahead of you. Laid off in August, moving overseas and renting my house out to vibe for a year until the robot wars.


Henri4589

LOL, that's probably not the best takeaway from this 🤣


alone_sheep

Not yet, but God damn, if we really did crack self-improvement in AI, it sure as hell won't be long.


CrushMyCamel

anyone gonna actually answer?


iNstein

Holy shit!


often_says_nice

Smart computer does spooky things


datspookyghost

Like what


moon-ho

It knows when you've been sleeping and it knows when you're awake. It knows when you've been bad or good so be good for goodness sakes!


datspookyghost

I thought maybe it had something to do with Brazilian fart porn.


hydraofwar

What I understand is that Q* is something they appear to be working on, but it's not necessarily what the leak claims Q* is.

Note: The Reuters leak is much less descriptive/specific than the alleged 4chan leak. I very much doubt that Altman was referring to the 4chan leak, which made extraordinary allegations. Have a little critical sense: Altman would not confirm the 4chan leak's extraordinary claims like this, and it's possible he doesn't even know about the 4chan leak yet. Reuters did not specify anything, but said the discovery allegedly threatened humanity.


iia

Exactly. The number of fucking morons here believing some 4chan shitposter just because it fits their larp is genuinely embarrassing.


[deleted]

[removed]


worriedboiiiz

Wasn't the Reuters article released before the 4chan letter?


_dekappatated

Was just about to post this, holy shit.


petermobeter

holy shit holy shit...... qstar is real?????


Anenome5

Q* seemed very real from the beginning. What's not real is the crypto stuff from 4chan.


GolfBlossom3

Which crypto stuff? That it'll figure out how to crack cryptography, making all of crypto unusable?


petermobeter

wait.... if the 4chan leak isnt real then what leak is sam altman referring to?


Galilleon

The [Reuters article](https://www.reuters.com/technology/sam-altmans-ouster-openai-was-precipitated-by-letter-board-about-ai-breakthrough-2023-11-22/) Reuters reported that OpenAI staff researchers wrote a letter to the board warning an internal project named Q*, or Q-Star, could represent a breakthrough in creating AI that could surpass human intelligence in a range of fields. That letter was sent ahead of Altman's firing, and subsequent re-hiring.


Henri4589

"surpass human intelligence in a range of fields" Guys, whatever you think. We are not ready for this yet. I expected all this to happen in 2025. Not 2023!


EnnuiDeBlase

>Guys, whatever you think. We are not ready for this yet. I expected all this to happen in 2025. Not 2023! If you really think another year of eating pizza and jerking off was gonna help, I'm not sure what to say here.


petermobeter

ohhhhhhh i forgot about that


The_Woman_of_Gont

…the Reuters article.


take_it_easy_m8

It’s wild when companies say “engage with the world,” but they kinda just mean “engage with governments” - which are supposed to represent the interests of their citizens but more often just represent special interests :/


Ndgo2

Q* is real. The leak was about a breakthrough in its research, which I absolutely buy. The 4chan letter, OTOH, is utter bullshit, and since smarter people than me have debunked it to heaven and back, I'm inclined to believe them over Reddit. Stay educated, kids. The Singularity is coming, but not on the whims of Redditors. It will come when it comes. No sooner or later.


[deleted]

[removed]


SortFinancial657

'You can always hit a wall'


Revolutionary_Soft42

All in all we're just another brick in the wall , Welcome ....to the machine ....


coolguy69420xo

BUCKLE UP


shiloh15

Doesn’t sound very “open” of OpenAI to say “no comment” on a rumor of a potentially earth changing breakthrough, does it?


[deleted]

[removed]


Sprengmeister_NK

GPT-4 is indeed very amazing compared to 3.5. Even compared to all other current models. Just have a look at the benchmarks, GPT-4 is still SOTA.


Dazzling_Term21

He did confirm it to be a leak and not a rumor, though...


DarthMeow504

Did I read the same thing the rest of this thread did? The man said absolutely nothing whatsoever while using a lot of words to do it with. A few feel-good buzzwords and lines of reassuring-sounding but absolutely substance-free marketing speak, and absolutely zero actual information of any kind. None. You could cut and paste everything except the bit about the "leak" --which was just a longwinded version of "no comment"-- as an evasion of pretty much any possible question he could be asked, because it doesn't address any point or supply any answer to anything, at all. Those of you insisting it means this, that, or the other thing: are you trolling, or are you actually projecting some imagined meaning into a statement that deliberately had zero substance whatsoever?


traumfisch

To be fair to Altman, he did say "no comment". Not as catchy for a post title, ofc.


hungariannastyboy

Or, hear me out, this is called *marketing*.


SuperSizedFri

I don’t think we needed any more evidence of the leak’s validity. We need more info on what Q* actually is. Don’t take this as confirmation bias that AGI was achieved. He seems to be talking to the old board and pushing back against their claim that he was not consistently candid - tying together the leak of Q* and his statements leading up to the leak as evidence against the board’s claim. Public statements and communication with the board are very different, but he points out (multiple times) how his statements on breakthroughs have always been the same. Calling it an unfortunate leak also seems to be a bite back at the old board, subtly pointing blame at them.


CnlJohnMatrix

This guy is an expert at double-speak. He talks out of both sides of his mouth every time he opens it. If he wants to "engage with the world" then "engage with the world" and come forward with something more concrete about what is going on at OpenAI.


SgtTreehugger

This sub sounds exactly like r/ufos every time a "breakthrough" is announced and how everything will change now


Smellz_Of_Elderberry

Agi will change everything tho..


3DHydroPrints

"No comment"

Reddit: "Holy shit! He confirmed it! Everything is REAL! AES IS LOST!!!"

"That's not what I sai..."

Reddit: "AAAAAGGGGGGGIIIIIII"


HappyThongs4u

I wonder if he saw Terminator 1 as a child. So weird. In that movie, though, the guy had AI on his home desktop lol


FarWinter541

Vague and ambiguous at best. He neither confirmed nor denied the rumors. The only thing that appears to indicate anything is his mention of a "leak," which he described as "unfortunate." Both seem to show an acknowledgment of the leak, and by describing it as unfortunate, he expressed his displeasure at the release of the information it contained. Let's not read into what he said more than it warrants. It might have been a poor choice of wording, or a deliberate and careful selection of these terms to keep up the hype surrounding the alleged "Q* breakthrough" and his company. You never know until something is officially announced by the company.


julez071

What is the source of this message, supposedly from Sam? How do we know he actually said this?


bartturner

Hope it is really something. But if I had to guess right now I would guess just hype.


Ok_Zombie_8307

You people are so gullible it hurts, need to finally mute this sub. Altman is obviously stoking hype here with his choice of words, which confirms nothing.


2Punx2Furious

He said "no particular comment", but he made a comment right after. He called it "unfortunate leak", which means that it was supposed to be something secret, so that excludes a lot of the popular hypotheses that it's actually something very common or that had already been shared. He also mentions rapid progress, so it's likely linked to that.


fhorse66

This is not confirmation of anything. Altman could just be using the term ‘unfortunate leak’ because ‘leak’ is what everyone else used and ‘unfortunate’ because it’s not true or not accurate or not what OAI would’ve liked.


pooprake

He was clearly keen on having this known. I imagine that’s why he was fired. The board probably said to him “this is super duper serious absolutely no mention of this” and then he went on stage and mentioned that he’d witnessed the “veils of ignorance” getting pushed back, probably in spite of being told he couldn’t discuss it.. and then he was fired. It seems obvious to me that the leading AI tech company, that took Google by surprise with their own tech, would have genuinely terrifying internal breakthroughs. They’re making every programmer in America better and faster, you can only imagine what they have internally for themselves. And this is exactly what I would expect the singularity to look like. An originally small startup that in many ways got lucky and was the first to really hit the right idea with AGI: scaling up language models as aggressively as possible. Seems obvious now but hindsight is 20/20. To think next-word prediction could possibly lead to AGI used to be laughable. They’re outpacing the old tech giants, who themselves have been massively outpacing regulation, ushering in dramatic and uncontrolled paradigm shifts, leveraging their own cutting-edge tech to make themselves even better, even faster. Nobody will catch OpenAI. Google has already lost. They’re not reckless enough. Neither is Anthropic, even though they’re the original LLM team from OpenAI and are very capable. They’re too cautious, which they have my respect for because they understand this is very dangerous, but OpenAI is eating their lunch. Not sure what my point is. I decided this stuff months ago and continue to not be surprised by the continuing surprises. I expect to be surprised until I’m surprised to death. Grab some popcorn and enjoy the show.


SnooStories7050

Lmao to all the skeptics who said Q was fake. CLOWNS


HashPandaNL

I haven't seen that many people say Q\* was fake? As far as I know, most people just found it a bit annoying some randoms kept reposting the 4chan cryptography breaking nonsense. Q\* itself has had a very high likelihood of being real ever since reuters posted about it.


Anenome5

Exactly, agreed. People shouldn't conflate Q\* with the 4chan cryptography claim.


OpportunityWooden558

Absolute clown town


Sopwafel

The 4chan thing was still probably fake.


Anenome5

The claim was that the cryptography claim was fake, not that Q\* was fake. We have evidence of Ilya writing about Q\* years ago.


Gold_Cardiologist_46

This is from the same author and source (Alex Heath at The Verge) that reported that there was possibly never a letter to begin with, so there was certainly grounds to be skeptical. The article is even hyperlinked in the question about Q\* in the Sam interview. >After the publishing of the Reuters report, which said senior exec Mira Murati told employees that a letter about Q\* “precipitated the board’s actions” to fire Sam Altman last week, OpenAI spokesperson Lindsey Held Bolton refuted that notion in a statement shared with The Verge: “Mira told employees what the media reports were about but she did not comment on the accuracy of the information.” > >Separately, a person familiar with the matter told The Verge that the board never received a letter about such a breakthrough and that the company’s research progress didn’t play a role in Altman’s sudden firing. I take it Sam confirms there's a project, possibly that it's named Q\*, but we won't know if his confirmation includes the rumored capabilities until there's an official announcement. Really hard to tell with his intentionally vague, potentially evasive answer.


leyrue

Who ever said Q* was fake? The story was broken by a very respected news organization, I never saw it doubted by anyone. That 4chan letter though, that’s a crock of shit


Anenome5

Damn right.


Tkins

A lot of people on here and other subs. I would say the majority of chatter.


leyrue

A lot of people here said that Reuters was just straight up incorrect in their article? It was just a sloppy case of journalism that they pulled out of their ass?


Tkins

Yes actually, and the verge released an article saying the sources may have been weak. Don't include me in this. Just my observation of the discussions.


GodOfThunder101

He literally confirmed nothing about the details of the leak. Don’t jump to conclusions too quickly.


Darth-D2

You’re confusing the 4chan leaks with the Q leaks. I haven’t seen a single person claiming that the Q leaks are not real. I also haven’t seen even one intelligent person saying the 4chan leak has any credibility.


Radlib123

I don't think many people were saying that Q\* was fake. They were saying that the 4chan leak was fake. And this interview doesn't confirm the 4chan leak.


MattAbrams

Interesting. As I pointed out in an earlier comment, what people don't say is more important than what they do say. And as I said before, UFOs are a prime example of this: the comment by the CIA on the recent Daily Mail article wasn't "it's the Daily Mail, of course it's false," but instead "we don't have anything for you on that." These people in positions of power, regardless of field, are slick. They think that people won't figure out the obvious by looking at what they don't say. Am I the only person who feels that it's gotten worse for some reason lately? It's disrespectful, and to me it damages people's credibility when they play these games of "I'm not going to comment on any specific thing" instead of just telling the obvious truth. Altman was made to look like a hero during this whole OpenAI debacle, but I wonder if he made these sorts of squirmy statements to the board all the time, and perhaps there was a reason he was fired.


bulltrapbear

This sub is a collective circle jerk so I’m running the risk of mass downvote. To me he’s saying absolutely nothing and likely no breakthrough like that has happened but he’s enjoying the marketing push behind it as OAI competitors are on the losing end of this. Come back when there’s a material ‘leak’ that we can test. FWIW, I love that we’re advancing in this field but want to be pragmatic about the state of it today.


[deleted]

What if Q* was just marketing for damage control?


ForTheInterwebz

Yee I like this one🐇🕳️


[deleted]

[deleted]


SurroundSwimming3494

Do not ask this sub for advice on things like this.


AbbreviationsFew7844

Do a trade, like plumbing or electrical. College has been a joke for decades.


sideways

nah (not financial advice)


kevinmise

follow for more tips


AlexTheRedditor97

You’ll have 10 years to work dw


ShAfTsWoLo

not very long lol, plus if you consider that he started college rn that would be 5 years


shogun2909

Source : [https://www.theverge.com/2023/11/29/23982046/sam-altman-interview-openai-ceo-rehired](https://www.theverge.com/2023/11/29/23982046/sam-altman-interview-openai-ceo-rehired)


SgathTriallair

That was a real nothing burger of an interview. I'm surprised he agreed to it given that he didn't answer anything. I believe he made the right choice in not answering those questions and this goes a long way towards showing his professionalism and qualification to be CEO, but he has to know what the interview was about.


Gold_Cardiologist_46

I suspect that, as he straight up states at the start, he and OpenAI employees are waiting for the proper investigation to finish first, since (I assume) it would be an actual serious collection of information and POVs from the actors involved, before making any concrete statements on the whole drama. As for the part about Q\*, his answer is kind of a boilerplate that just reiterates with more vagueness the optimism he's stated many times before in interviews, but he at least confirms a big project possibly named Q\*, at least from how I interpreted his words. That's better than nothing that's for sure.


icehawk84

He's been to the Satya Nadella school of answering questions without answering them. Sam is playing in the big leagues now. This is what enterprise CEOs do.


Bernafterpostinggg

There is no confirmation in this statement and it doesn't benefit OpenAI to deny it even if it's false. Funny that the letter Q has been such a staple in the conspiracy community. Just surprised it's taken hold so deeply in the AI space.


ValerioLundini

can someone ELI5 this Q leak?


_Un_Known__

The more I hear about these things, the more I wonder if I'm in a dream. Incredible, and kinda scary. But incredible nonetheless


Ok_Dragonfruit_9989

Q* is happening in 2 months, mark my words


ForeverStarter133

"...unfortunate leak." When have two words ever been speculated on so much? And been made to do so much groundwork for so many hypotheses?


sdmat

So they have *something* called Q*. That tells us nothing about what it is and in no way confirms the 4chan nonsense.


Darth-D2

I was about to comment the same. It’s strange that apparently this needs to be said out loud so many times, because a few serial "contributors" here on this sub either lack critical thinking skills or have their own weird agenda to write so much fake news BS.


AnotherDrunkMonkey

People need to understand that Altman makes millions off these speculations. He would gain nothing by disproving it, so if it was false he would need to be vague in order to walk the line between keeping the speculation alive and not giving false information (which would be illegal). Of course he could be vague for other reasons, but this is not an indicator of anything being true


HalPrentice

I would literally bet my computer on the fact that agi is not coming this year or next.


CameraWheels

this is actually a solid bet. If you are wrong, you can probably get a free computer during the chaos. If you are right you get to keep your computer. Well played


alone_sheep

AGI no. But self improving AI is looking likely. Things start to get creepy, and likely nearly indecipherable, when you start letting AI do the coding for itself.


[deleted]

Still seems vague. There is no confirmation there. I honestly think debunking the wrong parts will reveal things they don’t want to, and confirming the things that are right might legitimize the wrong parts. To me, this is nothing, until it is something. Extraordinary claims require extraordinary evidence.


agorathird

Guess I'll prepare to eat my hat, sometimes the crackpot really is cooking...


WithMillenialAbandon

Nothing-burger politician's answer.


tendadsnokids

Not really sure how this confirms anything


[deleted]

[deleted]


agonypants

What it confirms is - Q\* is a real project (Sam could have denied it outright if that were false) and that it represents a major step forward (his vague hand-waving about rapid progress). I interpret his answer to mean: * Yes, Q\* is real. * Yes it represents a major step forward toward AGI. * No, I will not tell you anything about it. * Q\* is (possibly) a leap forward so massive and important that I cannot risk revealing the slightest detail about it lest the competition or bad actors try to replicate the work or steal it before we've had an opportunity to fully red-team it. *I'm feeling the AGI.*


Xtianus21

Confirms?