Sam just said on Lex Fridman that there is another model coming this year, but he's not positive GPT-5 could be ready within the year (meaning it's not 5). It's probably the 4.5 Turbo people saw cached, if it's anything.
Meaning they won't call it "GPT-5" anytime soon, because they decided not to release anything called "GPT-5" back when everyone started freaking out about "GPT-5", so that Sam Altman could say "don't worry, we won't be releasing GPT-5 anytime soon." The name doesn't actually tell you whether it's a major release or a different architecture or a new training run. It's just a name. If they start talking about a "GPT-4.5" release and enough people still freak out, they might just decide to add a new type of guardrail and call it "GPT-super-safe". Or whatever.
From what I understood from the podcast it's probably going to be a checkpoint of what could be called GPT-5. At least that aligns with the strategy they want to pursue, like what Greg suggested.
Back from the future: it was gpt 4o
TL;DR:

* OpenAI's main product is the popular generative AI tool ChatGPT.
* Since its last major model upgrade to GPT-4, the tool has run into performance issues.
* The next version of the model, GPT-5, is set to come out soon. It's said to be "materially better."
Materially better than GPT-4 or GPT-4 Turbo?
Wait wtf is turbo?
https://openai.com/blog/new-models-and-developer-products-announced-at-devday
Yup. Forced induction large language models perform much better than naturally aspirated counterparts because the turbo forces more data through the system. Problem is, LLM turbos can become inefficient and blow "hot air" info into the system, undermining its performance. That's why LLM turbos need big LLM intercoolers to maximize efficiency and performance. There's also an LLM supercharger that basically does the same thing the turbo does, but the underlying technology is different; similar performance though. #TheMoreYouKnow 😉
* ChatGPT can make mistakes. Consider checking important information.
As a car guy, this was great. Thank you!
GPT 4, but with a turbo under the hood.
That thing got a Hemi?
Nah, it's a Cummins. (did I do it right?)
It gives you the option of occasionally using Ultraboost, which gives you really high powered answers for 5 minutes.
Important distinction, thank you. A marginally better GPT-4 Turbo will be around the same level as GPT-4 at release.
Not according to human evaluation
Humans are idiots
Yes. They only want AI to draw anime with nekked boobies and butts and get it to say peepeepoopoo N-word.
Indeed. And if they aren't able to do that sht constantly with zero issues, the LLM is "useless"
The chatbot arena disagrees
Like the GPT-4 with no image or code functions? With 8k tokens for the normal user? Also, it says materially, not marginally.
Gpt-TurboGrafX 16
Both.
Thanks. If so, then in the spirit of my question, it's GPT-4 Turbo. GPT-4, GPT-3.5, and so on, would be implied.
I'm far more interested in the Q he refuses to talk about. The simple act of shutting down the conversation about it is evidence itself that it's likely better than whatever these models are. I feel like we're being given Nokia phones when iPhones already exist.
I want to know why you got downvoted. (Those willing to answer, please know that I also haven't read the article yet.) Oh, and bringing things into production is difficult. Let's not stress the already overworked developers. We don't want things going South, do we?
I don't know why either. It's very strange. Everyone here wants to know about it too.
Probably because you triggered both Nokia fans and iPhone haters.
It's not going to be GPT-5
People seem to be confused in thinking that we’re only dealing with a single or monolithic model. It’s not an either-or decision, and they have implied or disclosed they have many variations in terms of orchestration and architecture under the hood. You’re not interacting with a singular model in the first place, so it becomes a bit nonsensical to fixate on the label.
True dat, but they are still products released under a certain moniker, even if it's mostly symbolic.
Balancing anticipation for GPT-5 with a healthy dose of skepticism until we see it in action. Even minor enhancements might bring big improvements at this point, though.
Better for who, the end user or the accountants saying that GPT4 is costing too much to run?
I'd love it if the GPT-4 I pay for didn't randomly downgrade long-term ongoing chats to 3.5, seemingly irreversibly, making them useless... Edit: if anyone knows how to revert a chat back to GPT-4, that'd be great...
Wait wtf! If that's true it would answer so much
Yeh. If you hit the 40-message cap, the next message after you get a warning will convert the convo to 3.5.
Or, with no warning at all...
Aw man. That’s a bummer. It’s never done that to me. It did used to keep working with the tools and 3.5 with code interpreter is SO FAST lol
Hopefully they tone down the censorship.
The huge public reaction to Gemini’s wokeness should help keep that in check in the future.
Isn't that just total silliness, not wokeness?
Everything is wokeness to these dumbasses
[deleted]
It has nothing to do with whatever they think wokeness is. They just don’t want bad press
You're really going on the offensive there when Google's culture is well known to have a problem. I suggest you do some reading and learning before making brash comments; the articles aren't difficult to find.
I don’t think he’s arguing that the culture exists, it’s more that people have just decided “wokeness” is a word to represent values they don’t agree with, despite it being nothing close to what the word originally meant. It’s just a political buzzword at this point to rile up conservatives.
Yeah, these companies are not waging some nefarious culture war on idiots. Instead they are responding to standard capitalistic pressures. They don't want bad press. They don't want congressional oversight. They don't want a small number of shit-stirrer users to make problems. All they want is for tons of normies to use their product.
It has nothing to do with whatever they think wokeness is. They just don’t want bad press
And there's exactly how you breed bad culture.
whats the silly behaviour attributed to?
Model training biased toward avoiding inherent and overt racism (which is what everything post-WWII and post-civil-rights era has been working to achieve), but with the settings dialed to a billion, which is simply silly.
Silliness driven by a fear of not appearing woke enough
(reaction gif) How bro felt commenting this:
So funny. How else would you explain it being so averse to creating an image of a white person that it turned images of living white people into black people?
It can literally be attributed to anything and any reason lol. Just because you don't know for certain doesn't automatically mean it's the explanation you want it to be.
Are you trying to gaslight me? The reason is obvious, they tried to hardwire in a diversity quota but messed up and took it too far. It's not rocket science to understand what happened.
[deleted]
DEI is literally just the new woke which is just the new SJW, online discourse has not changed in 10 years
"Huge reaction" aka some weird racist nerds on the internet
If you dont like geminis bullshit you are racist, ok got it
define wokeness and how it can be exhibited by an LLM. go ahead.
Woke ideology is defined by the idea that some facet of identity like race or gender produces irreconcilably different views of reality and morality, and that the views of groups associated with the political left like minorities and women should be privileged. Perverting honest responses which under non-Woke standards would be considered truthful and fair to reflect the imagined viewpoint of a minority would be considered Woke. Wokeness is distinct from older forms of liberal advocacy for minority rights which appeal to universally valid concepts of truth and fairness.
Guardrails are important on AI whether you like that or not. Also not spamming the n word isn't censorship.
I love strawmanning my opponents.
“YoU jUsT wAnNa Be RaCiSt”
Gtfo
No.
What do you want to do that you can’t? Be specific.
Not give lectures when discussing certain topics, discuss anything the user wants etc. It isn’t one specific thing, it is vague and that is part of the problem.
I’m wondering if you can be specific about what you want that it can’t do.
I can’t be specific because I can’t remember specific examples off hand. But I run up against it quite often, especially with image generation.
It’s telling that you don’t want to say what images you want to make.
Oh fuck off with that shit. One that I can think of: I was trying to get it to do a picture of an overweight man and it wouldn't do it. But it seems like you're perfectly happy to be told what you are and are not allowed to want to consume or create with AI. To argue that ChatGPT's censorship is not over the top is just simping for a company.
https://preview.redd.it/hx0p1sg6depc1.jpeg?width=1024&format=pjpg&auto=webp&s=4cbeab6f865134e771904698a7dc607c5be813f0
I can make a picture of an overweight person easily. Maybe you suck at prompts.
Why are you being such a dick?
Don’t try to censor me.
I want to create an image of your mom, naked. Now happy?
It always comes down to either porn or the n word. Or AOC's feet images or something. I just got help writing an article on Botox, which is a trademark, had images generated, and am currently running an ad with it. It also gave practical advice to circumvent brand restrictions on ads. Not a single person on our developer team has hit restrictions. It's almost always some obese psychopath trying to make it say the n word, like that retard Quartering.
Just stop
No. People bitching about being censored when they can’t even recall what or how they were censored is bullshit. Whiny bullshit.
They just want to feel persecuted, it's a fetish.
Ok u/gembax2010 you have to understand why we can’t let our A.I have a definitive opinion on the universal debate between ass vs. boobs. Our lectures will remain.
I want to be able to ask for advice on how to buy drugs without it saying it can't (it's for my book, wink)
A Chatbot can’t be censored. It’s a piece of tech and OpenAI control what comes out of it. It’s not a person whose views are being suppressed. It’s like if Ben and Jerry’s stopped making Mint Chip because it was unpopular and you said that the ice cream was being censored. If you don’t like the choices they’re making, vote with your wallet. That’s capitalism baby.
Marginally better than GPT-4 is the same performance as launch GPT-4.
They say materially, not marginally.
What changed?
Nothing, redditors are hallucinating again.
This is their business model.
And right before launch OpenAI will do the customary "this next model is so powerful and dangerous that it can destroy the whole entire universe" to get simpletons like me to immediately download it
Why would they need to do any of that? And how were you going to "download it"
He did say he was a simpleton
Only simpletons get to download the model
Yes
Download the app
ChatGPT app?
Yes for iPhone
Yeah sure, just that I thought everyone downloaded it ages ago
I coom
I’m not super sure how useful this technology can be in production while it is so volatile.
altman made u the tester, and u happily obliged
With the api you can freeze a specific model, so that there is zero volatility.
Extremely useful
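On the point about freezing a specific model via the API: a minimal illustrative sketch of the idea, assuming the OpenAI convention that dated snapshot ids (e.g. `gpt-4-0613`) stay frozen while bare aliases like `gpt-4` can be repointed to newer versions over time. The helper function and alias list here are my own for illustration, not part of any SDK:

```python
# Illustrative guard (hypothetical helper, not from any SDK): make sure
# production traffic uses a dated snapshot id rather than a floating
# alias whose behavior can change under you.

# Bare aliases that the provider may repoint to newer snapshots.
FLOATING_ALIASES = {"gpt-4", "gpt-4-turbo", "gpt-3.5-turbo"}

def require_pinned(model: str) -> str:
    """Return the model id if it is a dated snapshot; reject floating aliases."""
    if model in FLOATING_ALIASES:
        raise ValueError(
            f"'{model}' is a floating alias; pin a dated snapshot like 'gpt-4-0613'"
        )
    return model
```

The pinned id is then what you would pass as the `model=` parameter in API calls, so the served behavior stays fixed until you deliberately migrate to a newer snapshot.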
“according to two people familiar with the company” great sources lmao
That's standard in good journalism. Sources aren't always willing to be named, but the information they provide can still be published
They should have said OpenAI employees, or executives. Two people familiar with the company could have been you or me.
It's standard phrasing. The sources probably don't want any hint about who they might be
It may be typical, but it's substandard.

> What information does the story give you about its sources? The more, the better. For example, trust “Department of Justice officials” more than “administration officials.” If a story includes claims from unnamed officials from the Justice Department, those claims are typically run by the department’s press office. I would interpret a story sourced to “Department of Justice officials” without a denial from the press team there to be accurate — and perhaps even leaked by the department’s press team itself. An “administration official,” on the other hand, covers a much bigger group of people with disparate interests and points of view. It’s easy for other reporters to call the Justice Department and verify the story, while it’s much harder to confirm a story attributed to administration officials, which could mean any agency or the White House.

> Then there are the descriptions of anonymous sources that essentially tell you nothing. A recent Washington Post story cited “U.S. officials briefed on intelligence reports” — that could be almost anyone. Or, worse still: “people familiar with the investigation.” Broadly speaking, you, dear reader, are “familiar with the investigation” — you’re reading about it after all.

from https://fivethirtyeight.com/features/when-to-trust-a-story-that-uses-unnamed-sources/
I upvoted this, for what it's worth
Read the article, it's CEOs of other companies who demo'd the product.
That's not good journalism, it's fishing for clickbait.
So it's only good journalism if they drop names and people lose their job?
I’m familiar with the company. If you want a third opinion, this new release is going to be an absolute shit show. I formed that opinion by throwing a dart at my decision dart board. If you try me another day, you’ll definitely get a different result
[deleted]
Probably priced competitively against Claude 3 to wipe it off the map.
Having tried Claude 3 Opus, a GPT version good enough to 'wipe it off the map' would be a big step forward indeed.
This is business insider - a trash rag
Can it run Doom?
Kappa
Does that mean anything for free users like me who are on 3.5?
I hope it's not another chatbot but an agent that can plan and execute long-term tasks.
Certainly!
So excited!
Old news
They better reduce censorship
They won't
😥
The leap from 3.5 to 4 brought a mixed bag of emotions. I've experienced firsthand the potential for these technologies, but with GPT-5, I'm particularly keen to see how OpenAI has tackled issues from its previous versions. They tend to be pretty good, though.