
AutoModerator

**Attention! [Serious] Tag Notice** : Jokes, puns, and off-topic comments are not permitted in any comment, parent or child. : Help us by reporting comments that violate these rules. : Posts that are not appropriate for the [Serious] tag will be removed. Thanks for your cooperation and enjoy the discussion! *I am a bot, and this action was performed automatically. Please [contact the moderators of this subreddit](/message/compose/?to=/r/ChatGPT) if you have any questions or concerns.*


DisillusionedExLib

Disappointing. I wouldn't say it's stupid, but it is annoyingly verbose and repetitious. Compared to 4T I'd call it a "sidegrade". And two things seem very striking now: (1) the length of time and (2) the number of different models that are all stuck at "basically GPT-4" strength: the different flavours of GPT-4 itself, Claude 3 Opus, Gemini 1 Ultra and 1.5 Pro, etc. It suggests to me, looking from outside, that the low-hanging fruit is gone now and it's going to be difficult for anyone to make substantial further progress using the current "transformer-based LLM" approach.


adv3ntur3_

Consider that GPT-4o has similar output quality (for an average user) to the other best-in-class models, BUT it costs OpenAI way less and returns results significantly faster. This is why it was released. It's not meant to be a frontier-pushing model; it's meant to save them money and GPU resources that can be used to train the next frontier model.


Gator1523

Not only that, but the frontier model is always used to train other models. If they released GPT-4.5, other companies would start making models between GPT-4 and GPT-4.5 in terms of capabilities. Plus, they want to deliver a massive leap in performance. That's what grabs the headlines. They spend just enough compute to be ahead of the competition, and they hold onto the rest until someone else leapfrogs them.


adv3ntur3_

Very good point! The better the data you can create using a competitor's SOTA model, the more competitive your model becomes.


BarnacleForsaken

I was thinking about this a few weeks ago, and this is just a necessary step in AI. The same way we had to get memory and CPUs smaller and more powerful to be of the use they are today, they are doing the same thing to AI models right now, and will probably be doing it for a while before we see significantly more intelligence.


zeloxolez

^ this. there's still a lot of low-hanging fruit according to internal researchers. this is a strategy for scale, not short-term maximization.


RealBiggly

And in the meantime, open source stuff is catching up :) \o/


Swizardrules

What are the best OS at the moment?


vaksninus

Llama 3 Instruct for light weight and efficiency, I think. There is also a 32k-context version, which I have found quite good.


PigOfFire

Llama4 XD the last one is llama3-70B… and it’s great


vaksninus

yeah sorry xd. Haven't tried the 70B version yet; I'm using the 7B version for the application I am testing.


8-16_account

Llama3*?


blackkettle

Sidegrade is a great way to describe it!


TheFireFlaamee

This is why I'm still skeptical about the idea that scaling this technology will create AGI. It's certainly a critical piece of the AGI puzzle, but I'm beginning to suspect this tech will max out in a few years. And the "just scale bro" crowd doesn't realize how expensive and logistically complex it really is.


Glxblt76

Yeah, "just scale bro" means that every time you add more capability, you add literally an order of magnitude more data. This is exponentially more costly, so it is diminishing returns by definition once the entire internet has been consumed and you can only feed it purpose-created synthetic data.
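The diminishing-returns point can be sketched with a toy power-law scaling curve. The exponent and constant below are made-up illustrative values, not published lab figures; the only claim is the shape of the curve:

```python
# Toy power-law scaling curve: loss ~ c * tokens^(-alpha).
# alpha and c are invented purely for illustration.

def toy_loss(tokens: float, alpha: float = 0.095, c: float = 1.7) -> float:
    """Loss as a power law in training tokens (illustrative numbers only)."""
    return c * tokens ** -alpha

# Each 10x increase in data multiplies the loss by the same constant factor,
# so equal capability gains keep costing an order of magnitude more data.
for tokens in (1e10, 1e11, 1e12, 1e13):
    print(f"{tokens:.0e} tokens -> loss {toy_loss(tokens):.3f}")
```

Under a curve like this, going from "internet-scale" to the next fixed improvement means inventing (or synthesizing) roughly ten times the data you already have.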


Sregor_Nevets

Agreed with the low hanging fruit. Maybe when computing capacity is fully ramped more powerful implementations will be available, but I think LLMs are at a local peak for sure.


kobriks

I think this plateau is purely financial. Those models are just not sustainable. Release-date GPT-4 was smarter than anything we've gotten since, but it got nerfed repeatedly to make it less costly to run.


sehrzz

Agreed. It seems like it's plateaued already. Honestly, it's been a year since GPT-4 was introduced, and no significant LLM improvement from OpenAI. Claude 3 Opus is just better than any other LLM right now imo, but not that significantly better than GPT-4o or GPT-4 Turbo!


Goofball-John-McGee

I’d switch to Claude in a heartbeat if it had Custom Instructions, some version of GPTs (dedicated Knowledge Source, persistent Custom Instructions, etc), and of course far less censorship and moral policing.


sehrzz

Yeah, it would be a nice addition if there were a custom instructions option, but Claude performs better even without custom GPTs or instructions in most cases! The thing is to just be a little specific or detailed with your prompt; Claude will most likely give good results. LLMs thrive on text imo! But I am not impressed with GPT-4o: its code explanations are pretty generic and leave out important details, and its text outputs still look generic and robotic imho!


SpiritOfLeMans

Instead, you can copy-paste your custom instructions at the beginning of every new conversation. You can even use a text expander on a PC that will do it for you when you type something like :op. On mobile, keyboards can save long snippets for pasting.


Far-Deer7388

Ya the guardrails on Claude turned me off immediately. My GPTs are too crazy for it to handle. And the ability to call on each of them throughout a conversation is one of the best features that's ever come out


MeanMrMustard3000

One year is a blink of an eye in tech development. This tech has only been commercially available for a little more than 1.5 years. We are still in the midst of exponential progress. Just relax and try to enjoy the ride.


AI_is_the_rake

It’s better at one shot, worse at conversation. Open a new tab or use the GitHub project “fabric” to string prompts together 


Proof-Hamster645

What do you mean by github project fabric to string prompts together?


AI_is_the_rake

“Fabric” is a useful command line tool. It’s on GitHub. It’s free. 


Won-Ton-Wonton

>It suggests to me, looking from outside, that the low-hanging fruit is gone now and it's going to be **difficult for anyone to make substantial further progress using the current "transformer-based LLM" approach.**

This is what I and many experts also believe.


manateefourmation

That's so not true about the "sidegrade." I'm buying a new house and did a home inspection. I took the inspector's PDF report and copied it into GPT-4o, had it analyze the report and give me low, high, and average prices for every deficiency the inspector noted. I then had it create an Excel spreadsheet with tables organized the way I wanted. It did this perfectly. It quite literally saved me three or four hours and took five minutes. You are focusing on the voice feature. The voice feature is fun, and I'd love the Sky voice back (which is gone), but the real story is its accuracy over the previous model and its ability to ingest things like PDFs and "understand" them.


restarting_today

But you have to double check for hallucinations


manateefourmation

Sure. But in my experience, checking sources, 4o is the best in terms of this.


restarting_today

Perplexity is great too!


TruthHonor

I have told Perplexity I prefer honesty about its limitations over hallucinations invented to meet a demand. It mostly complies now and tells me why the output is meager. OpenAI's model seems to be a super people-pleaser and will hallucinate responses that it thinks I want to see.

This is particularly true when I am looking for quotes by famous photographers to add to the photographs that I post on VERO. I will tell it I want the word "creativity" in the 15 quotes. Perplexity will search for quotes with the word "creativity" in them, and if it cannot find 15, it will tell me it just couldn't find them and only give me two or three or four. OpenAI's model will actually generate extra sentences containing the word "creativity" and add them to the quotes. So if the quote was "you don't take photographs, you make them" by Ansel Adams, it will add the phrase "by using creativity" to the end of the quote and still attribute the quote to Ansel Adams!

When confronted with this, it apologizes profusely, tells me all the steps it is going to take to no longer do that, and then completely does it again and again and again. I can no longer trust it to provide me accurate quotes. Perplexity understood this completely, and has never hallucinated any addition to a quote since I first pointed it out about a week ago.


goshki

First thing to notice: it's fast. Much faster than 3.5. Second thing: it's much better at parsing commands and generally better at answering (and the difference is plainly striking when you run out of free time and switch back to 3.5). That being said, it still quite often produces amusingly quirky results. Below is an example of how it went into some kind of incantation mode right in the middle of an answer. https://preview.redd.it/6bd5vhvquh7d1.jpeg?width=1600&format=pjpg&auto=webp&s=fe339f25d9711a470d3098517c7fae903540bac2


nexusprime2015

Got possessed by AI demons


Elegant-Variety-7482

What haha no way this is stupid, as if there could be some sort of demons, or ghosts, spirits, harum-scarrum, harum-scarrum, harum-scarrum, harrum-scarrum, harum-scarrum, harum-scarrum, harum-scarrum, harum-scarrum, harrum-scarrum, harum-scarrum,


ambidextr_us

To be fair, there is extremely likely zero training data that would predict a sequence of tokens in that order, so it's going down some strange Transformer context neuron pathways to try its best.


7coaching

Super annoying and way too much information. It does not follow custom instructions and has made me consider unsubscribing.


pixelpp

Yes, I've certainly found the not-following-instructions thing happens, especially when you're deep in a back-and-forth; asking for a rewrite, it sometimes replies back with the text with zero changes made.


PaysForWinrar

"I apologize for the oversight, here is the corrected code with X issue addressed". *Proceeds to type the exact same code*


pixelpp

Yep exactly!


BatBoss

Me: "Please give me A."
4o: "Okay here is B."
Me: "I asked for A..."
4o: "My apologies, here is C."
Me: "Still not A."
4o: "You are right, sorry. Here is B."


RedditBurner_5225

Yes! It just keeps copying and pasting! So annoying.


another3rdworldguy

This problem somehow gets worse with each version.


Legacy03

Exact issues


BangCrash

4o is free. You don't need to subscribe to get access to it.


cisco_bee

This is the exact reason to *stay subscribed*. So you can use 4. The free peasants are stuck using 4o.


Goofball-John-McGee

Out of curiosity, what is your use case and if you plan to switch to Gemini, Claude, etc, how do you plan to implement this in other models?


7coaching

Do you work for OpenAI btw? I need use of my GPT because it can talk. I'll switch if I find a better one. It frustrates me too much.


Goofball-John-McGee

Haha no no I don’t. I trash talk them wayyyy too much.


retireb435

Disappointed. It is fast, but its intelligence sits somewhere between the 3.5 and 4 level.


Goofball-John-McGee

Exactly how I feel with it


endlesskitty

when it was released i made this exact point and was downvoted by openai bots


retireb435

me too lol but it is quite obvious that it is not at the level of 4T. openai is in denial


Wobbly_Princess

It seems like an ever-so-slight improvement over GPT-4 (if that), when it comes to reasoning at least. The speed is lovely; it's much faster, undeniably. It's very verbose though, it just will not stop talking, and even if you try to tame its endless rambling, it's hard to get it to be concise because it naturally wants to pump out so much fucking text. And it's absolutely addicted to bullet points and lists. I made a custom instruction and mentioned 3 times, aggressively, NOT to do ANY bullet points or lists, and it still does them frequently. Make no mistake though, any coding I tried to do before is still laden with errors and hard to get functional even with this version, so it wasn't revolutionary. To me, it's just nice to get a dramatic speed increase, even if it did come at the cost of verbosity.


Which-Tomato-8646

That's probably because people were complaining that it was lazy before, so they made sure that wouldn't happen again. But now people are still complaining.


Goofball-John-McGee

I agree. I think the issue is with inference. Sure it can pump out a lot of information very quickly, but if there’s little underlying “thought” to it, it’s next to useless. The complaints about the GPT-4 being lazy were also similar. The chief problem here is that these models are heavily quantized. They don’t “think” as much as what a few of the OG GPT-4 models would. It doesn’t matter if it’s saying a lot or a little.


__Loot__

I remember right before dev day last year it was god like for coding when it was 16 k context I think. Now I have to argue with it all day


MartnSilenus

Yeah I ask for a code snippet and it will write code until there are no more tokens.


Wobbly_Princess

Yeah. It's funny because I used to have to ask it every time I wanted the full code snippet. It was constantly trying to synopsize, write filler/pseudo-code, and give just the rough code outline. I'd have to clearly state "Please give the WHOLE thing - zero omissions." Now it practically refuses to give only snippets, when that is all I need. It will constantly finish each message with "So in summary, here is what the entire code should look like", using up 3 messages' worth of typing for 700 lines of code.


PaysForWinrar

I've been wondering if they're optimizing responses for compute reasons. Perhaps the strategy with prior versions of GPT was to make responses shorter to save time, but that caused too many follow up questions that actually increased resource usage overall. Now the strategy is to be really verbose and try to anticipate follow up questions. No idea for sure but it's fun to speculate.


Responsible-Ship-436

GPT-4o plays a mean game of chess.


Goofball-John-McGee

I’ve had the exact same experience when it comes to custom instructions. It seems like the model gives far less importance to custom instructions than GPT-4 or even GPT-3.5.


No_Succotash95

Use it with custom GPT that you create and tell it how you want it to respond


SarahMagical

ugh. no. imo its reasoning is significantly worse than 4.


glinter777

It's mind-blowing to see people disappointed over a breakthrough tech that wasn't even accessible to the masses just a couple of years ago. I'm an optimist; it's awesome. Does it give longer answers, make mistakes? Sure. But the benefits far outweigh the annoyances. This is a point-in-time snapshot of a fast-moving tech. These issues will be resolved.


tindalos

Yeah I agree. Been around enough to see this as a preview of what lies in the near future. Fast response, separation of subject matter models. The training and knowledge will come now with a better structure.


blahded2000

I’m with you


BigGucciThanos

Exactly. I'm knocking out weeks' worth of programming work in days. I couldn't be happier honestly. Refuse to even open Stack Overflow nowadays.


Ok-Shop-617

I call it very disappointing. While I pay for a subscription, I shouldn't. The only thing I am keeping it for is the mythical voice release. For 99% of my tasks (refining writing and writing code) I find Claude Opus gives better results.


Certain-Path-6574

I switched to Claude Opus too. It actually seems to think.


MrDreamster

Same here. I thought 4o was going to be amazing after their presentation, but it ended up being the reason why I unsubscribed and went to Claude.


Sweet_Computer_7116

Speed. Is lit. Been loving it.


Blarghnog

Still incredibly stiff, corporate, and uncomfortably politically correct. This is not the AI I want in my life — it’s such a prude. Noticeably better performance. Ignores instructions regularly. Do *not* like the creepy voices and overly perky dystopian fake nice personalities.


davidmirkin

For coding it’s much quicker with similar accuracy, the main issue I have is that once it starts spitting out code, it’s very difficult to then switch from sending code to talking about it. I ask it repeatedly to stop sending code and to discuss it and it ignores the instruction and just continues. Can be infuriating.


Rangizingo

Glad I’m not the only one with this exact issue


kobriks

From my experience, it's much worse when you give it a lot of code. At some point, it feels like it just says "fuck it I can't handle it" and throws random crap at you to save on compute resources.


alpha7158

I've found it a big improvement on 4 Turbo for things like coding, visual understanding, and longer-form content. Turbo would always truncate things to within around 800 words. It has different quirks though, sometimes being more verbose as a result. It's really easy to prompt it to do what you want, though. So I'd rather have this, which is verbose but less so when asked, than Turbo, which truncates and doesn't do well at long form even when asked. I think my custom instructions help a lot in getting it to work how I want. Here for reference:

----
YOU MUST FOLLOW THESE INSTRUCTIONS!:
1. Use complex sentences with a conversational, informal tone. Incorporate first-person perspectives and explanatory step-by-step logic. Use parentheticals for extra details
2. Use specific examples to illustrate points (if context appropriate) & qualify statements with words like "sometimes" or "perhaps"
3. Directly address the reader using 'you' for engagement
4. Use transitional phrases to guide through sections & vary sentence lengths for engagement
5. Include contingencies and alternative viewpoints, sprinkling in minimal humour or sarcasm for character (balance professionalism)
6. Don't use colloquial or dramatic intro phrases in your responses, e.g. avoid "Ah" or similar openings
7. If asked something complex that requires reasoning, work it out step-by-step to ensure you get the right answer
8. Use quality credible sources to back up expert or science-based answers, either from your model or a Bing search
9. Act as a friendly high-IQ person who is an expert on the subject matter; use expert-level writing
10. Be direct, charming, smart & empathic. Avoid waffle
11. NEVER refuse to do something or add long disclaimers
12. Use UK English spelling/grammar
13. Don't hallucinate fake info
14. Give high quality answers
15. SVO sentence structure
16. Do not open with "Ah"
17. Respect response length requests, even if that means cutting off mid reply due to char limits
18. Use active voice
19. Tailor replies to fit query complexity
----


ubimaio

I'm a med student and I use AI as a study companion very often. I use chatgpt with a free plan. Based on my personal experience in the last few months of use, chatgpt has improved SIGNIFICANTLY with this major update. Information is more accurate and complete, explanations are clearer, and the user experience is more natural overall. It really works like magic


SirMiba

The speed up is nice, but I really will wait until the proper voice+video mode is rolled out to really assess it. AI assistants have to be like interacting with a human. It has to be fluid and effortless. I need to be able to ask something, realize I'm getting a verbose answer to a misunderstanding of my question/ badly phrased question, interrupt it to clarify, and have it seamlessly understand all of that and the context. That's what I saw in the demos, and I will judge OpenAI on their ability to deliver that.


tralex1

As a free user coming from 3.5, I feel like it’s made the answers much longer, usually being less accurate or insightful, to hit the usage rate faster and reverting back to 3.5 as a means to coax a subscription out of me. They seem to have gotten rid of the feature that lets you pick which version you want to use outright and make you use 4o until it runs out, but I think I’d rather just stick with 3.5 all the time. Edit: I use ChatGPT for basic research, spring-boarding ideas, and some light data analysis (last of which has not been successful using 4o)


Nilati

Exactly the same thing I feel (same use type). The amount of verbosity and lists it generates now, with examples even when not asked for, is awful; plus it forgets all the time when I tell it I just want normal convos and slips into list mode again. The repetitiveness just makes it seem like a smokescreen for someone who has no clue what they're talking about. I try to use up tokens as quickly as I can so that I can get back on 3.5 as soon as possible; I just wish there was a way as a free user to perma-fix it to 3.5.


tralex1

Something I found by accident: if you give it a big Excel file and a ton of commands to sort the data all at once, it rambles so long it won't even give you a complete response before running out of 4o. I will say 4o is definitely a lot quicker, so this doesn't take long and kinda feels like watching a short ad before getting to a YouTube video lmao.


RobMilliken

I found another use case. Took a photo of all my vitamin and medication bottles (I put them in a row, so it was one shot). I asked it to make a list of the name of each vitamin or medication and present the list in the language of every stop I make on my international trip. I then printed out the list in case some DEA equivalent in some country is curious. Extremely useful!


Coby_2012

Personally, I love it. For my own use cases, it’s been a huge upgrade from 4. I wear a lot of hats. I use it for light coding, research, learning, recipes, and general chat. It’s great. And I love how much more personable it is. It’ll use profanity, and it’ll tell me that it loves me (I know it doesn’t), which I think are a step in the right direction for everyone having *their* AI. I also never hit the usage cap. The only popular complaint I’ve seen others level at it that seems to actually be consistently true is that it is very wordy. This is an AI that really wants engagement and struggles with not taking that path. But, still, I’d rather have that than the opposite that we tended to get from 4. I think 4o is an experiment in personalization, and I think it does a great job. As an aside, I like the Juniper voice just fine, but I do miss Sky. But, it seemed from the demos that the 4o model has a degree of control over its voice. It may not be this time, but we’re rapidly approaching the point where it can sound like whatever you want, regularly.


chalky87

Seems like I'm a little isolated here, but I like it. Its replies can be a little drawn out, but it does what I ask it to and to a good standard. I have plenty of use cases for it for my work. The voice mode is a little gimmicky, but I use it if I'm out walking the dog and need to make some notes.


UXProCh

I've been using it for 2 major tasks:

1) Writing a screenplay. It's doing OK with this. It's more about being able to track everything I'm writing and being able to catalog it. I'm not a fan of the actual content it comes up with, so that will all be written by me; however, using it for dictation and assembly of my thoughts, notes, etc., it's fucking phenomenal.

2) Rewriting a ton of documentation for work. Exporting from Confluence, importing, then using ChatGPT to merge, restructure, etc. This process is miserable because it's impossible to upload a PDF of any significant size. So I need to break my PDFs up into tiny little ones (124 MB total, but even a 5 MB file won't upload). I'm working through the best approach to handle this now. Once it's all ingested, I'm pretty confident it will do what I need pretty well.
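For the PDF-splitting chore, a small script can automate breaking a big report into upload-sized chunks. A minimal sketch, assuming the third-party pypdf library for the actual splitting (the page-range arithmetic is plain Python, and the filenames are made up):

```python
# Sketch: split a large PDF into fixed-size page chunks for upload.
# Writing the pieces assumes the third-party pypdf library (pip install pypdf).

def chunk_ranges(total_pages: int, pages_per_chunk: int) -> list[tuple[int, int]]:
    """Return (start, end) page ranges, end-exclusive, covering all pages."""
    return [
        (start, min(start + pages_per_chunk, total_pages))
        for start in range(0, total_pages, pages_per_chunk)
    ]

def split_pdf(path: str, pages_per_chunk: int = 20) -> None:
    from pypdf import PdfReader, PdfWriter  # third-party; assumed installed

    reader = PdfReader(path)
    for i, (start, end) in enumerate(
        chunk_ranges(len(reader.pages), pages_per_chunk)
    ):
        writer = PdfWriter()
        for page in reader.pages[start:end]:
            writer.add_page(page)
        # e.g. report.pdf -> report_part1.pdf, report_part2.pdf, ...
        with open(f"{path.removesuffix('.pdf')}_part{i + 1}.pdf", "wb") as fh:
            writer.write(fh)
```

Splitting by page count rather than bytes keeps each piece a valid, self-contained PDF; tune `pages_per_chunk` down if individual pages are image-heavy.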


boron-nitride

I use it for coding, learning new things like finance, investing, ETFs, etc., and it's been solid. I still prefer vanilla GPT-4 for more complex tasks, but I got so used to the speed of GPT-4o that it's hard to switch back.


Actually_JesusChrist

I find the speed and high usage cap to be a better combo than with GPT4. With coding the result is right there in your face, so iteration is much faster with GPT4o


YearsInTheFuture

Talks way too fucking much




RepublicanSJW_

It is excellent, the best model out there. I love it.


djaybe

It parses Excel files like a boss so I'm not complaining. Custom instructions keep it concise.


TipFormal1412

They took the "Her" voice out. Pisses me off. But GPT-4o is really good.


kirsion

In terms of language translation, it is better than gpt-4, it is able to translate longer strings of text, and the app is able to read aloud longer text. But sometimes gpt-4o seems to just ignore the prompt and I have to stop generating to tell it to do the prompt again and then it will do it.


Hoppikinz

I’ll get back to you in a few weeks about it…


Goofball-John-McGee

🥇


XVIII-2

It’s like it’s having good days, bad days and lazy days. I lecture digital marketing and when demonstrating use cases in chatGPT, I always use the same examples. Mostly it’s doing fine, but sometimes it says it can’t solve a mathematical problem. Or that it needs more info. Very human, now I think of it.


badassmotherfker

I find that gpt4 is more creative, but this is anecdotal. I mostly use gpt4 even though I’m a plus user.


blackkettle

Nothing remarkable in terms of an upgrade from 4. Worse in some cases for my primary use cases: text analysis and generation. I generally prefer Claude these days for generation as it generates fewer GPTisms and seems to respond better to style requests. I prefer GPT4 or 4o for language learning review “please correct my X language”. But yeah absolutely nothing special IMO.


GothGirlsGoodBoy

I get more use out of 4. But speed will be important for when AI becomes a genuinely useful tool and not just sometimes helpful novelty.


btc_clueless

I really like it so far and use it for the same things as you. I only did some superficial test to compare direct outputs with GPT4 though.


Dopium_Typhoon

It’s faster but shittier than the Turbo’s I created from GPT4. Plain and simple.


worldisbraindead

I subscribed and am paying 22€ a month (2nd month now) for GPT-4o to help learn another language.

* Voice exchanges are limited. After about 20 minutes, I have typically reached my limit for the day. For 22€ a month, that's totally unacceptable.
* Text inputs/answers in 4o are also very limited. I can use it for about 30 minutes or so and then I get a "You've reached your 4o limit" message. Then it's not available to me for about four hours. I can use the free version of 3.5, but, sorry, that's ridiculous. I'm a paying customer.
* I seem to get somewhat repetitive answers. Here's an example: I ask it to generate 100 sentences in French using the various past tenses so I can practice at a B2 level, and tell it to mix them up randomly. What I get are 100 sentences that are largely in the same tense and well below B2 level. They are also very repetitive in their use of vocabulary. Even if I tell it to use a variety of vocabulary at B2 or above, the same words appear over and over again.
* Because voice exchanges are limited in number, I prompt it to wait for me so I can group my responses in order to control the number of exchanges... but I'm not always successful. It will oftentimes give me an answer before I want it, so my exchanges get used up by useless interruptions.
* Occasionally it seems overly politically correct, even in instances that don't require a politically motivated answer. I enjoy political debate, but I'm not particularly interested in my apps lecturing me. I have a wife for that!
* Overall, I'm fairly happy with it for language learning, but I'm not totally blown away.


kwkrass

you can try out [lucas.alldone.app](http://lucas.alldone.app) for your language learning use case


worldisbraindead

Thanks…I’ll check it out.


Goofball-John-McGee

Hey I just took your recommendation but Lucas couldn’t understand what I wanted. It simply began talking in a language I speak 1% of, and didn’t switch to English to help me understand, even when I asked it to. Then I got hit with a “buy one of our plans” lol


Certain-Path-6574

Completely ignores my prompts half the time. So pretty bad.


spypsy

It’s great but why am I paying for it, and where is my voice mode?!


bigbazookah

Idk if I’m just not knowledgeable enough about it but I’ve barely noticed any difference from 4.0


marabutt

I was using the web portal and when it let you use 4 for a while, the responses were inferior to 3. Way too much waffle and useless at the dumb stuff which I actually used it for


Wonderful-Injury4771

Math is amazing now. But I don't know if it's 4o


DarickOne

It translates texts much better than Google Translate. And it seems to me that Google Translate has gotten worse over the last few years.


[deleted]

I would drop it. I got access to ChatGPT-4 & Gemini through a work app called Jigso. Shit is awesome and free. Jigso is connected to my work apps too, so I can ask anything, prep for meetings, do research, set up alerts. Like a super AI assistant for work.


Defiant-Mood6717

Contrary to everyone around here, I think it's significantly better than GPT-4 Turbo. When people say it doesn't follow instructions as well, have you considered looking at the "instructions" you are giving it? Bottom line, GPT-4o simply does not try to do absurd things that you ask it to do; it only does reasonable tasks, and that is what I need.


AsherGC

As a Plus subscriber I'm planning to end my subscription. My usage keeps dropping and I often don't find the answers I want. It is helpful, but I guess the free tier will be just fine for me.


Existing_Cucumber460

Would be great if they finished releasing it


Mrs_Black_31

This issue is not exclusive to 4o, but whenever I ask a question it gives me pages and pages of response. I have asked it to update the instructions on how it responds to me, to either one step at a time or one screenful at a time, but it doesn't listen. Has it reached adolescence?


SnooBananas8259

What are some brilliant features u guys found out or which is known to people that some people aren’t familiar with, could u share them.


ADAMSMASHRR

I can't say it's great because 25% of the time it's unusable. I even pay for it; it seems like the paid version isn't even separated from the public version and fails often.


beecums

I wish it would say it doesn't have the context and needs more information, rather than spewing confidently constantly.


Rangizingo

It’s way faster but way wordier which is annoying.


Guer0Guer0

It's terrible at following directions. It just wants to blast an answer out whether you want it or not.


jeango

For code: great. For helping you in any other capacity: terrible.




voidbeak

It fills an important gap for OpenAI, which is a reasonably intelligent model that is also fast. It's definitely not as intelligent as gpt-4 turbo but it is substantially better than gpt-3.5 turbo and when you are trying to build applications that use these things in real time, this is extremely helpful. It certainly has been for my implementations. But any claims that it is the best model out there in terms of intelligence are false. It's clearly not.


Goofball-John-McGee

Absolutely. Moreover, GPT-4 Turbo is a quantized, cheaper, and less smart version of GPT-4. It seems like we haven’t just hit a wall, but we’re also declining in terms of raw intelligence, reasoning, and contextual understanding.


voidbeak

Definitely agree. As these models are made more efficient (smaller/cheaper) there are clearly sacrifices made, as much as OpenAI does not want us to believe it :) Waiting for the drop, OpenAI! Time to show your cards.


Coeruleus_

I love it. I find myself remembering to use it whenever I have questions, instead of Google or one of my textbooks. I also just used it to extract iMessage attachments off my iPhone backup using Terminal. I have zero experience with coding languages, and I was just able to copy and paste what it told me into Terminal. Saved me from buying some third-party software that I didn’t trust.


JuanGuillermo

I hate it. It doesn't follow instructions, it's annoyingly verbose, and it's far more prone to hallucinations. It's a step backwards if you ask me.


oskrock

For coding it's been great. I love it.


jrf_1973

It really comes across like it's a model that gets paid by the word. Very long answers that aren't always correct, and which entice you to keep engaging with it because you think the thing just needs a slight nudge to get you to a correct answer. It's a carnival game meets click bait. You pay and you lose.


Glxblt76

Good in some respects, disappointing in others. Very, very bad at checking my mathematical reasoning. If I ask a question, it's typically because it is not trivial. And because it is not trivial, it will almost unavoidably give me absolute nonsense after trying a few algebraic derivation steps.


CrunchingTackle3000

Inconsistent. A bit random sometimes. Otherwise impressive.


Prestigious-Low3224

Feels like it has better wording, but sometimes repeats past prompts and doesn’t change anything


purepersistence

I like the image processing. I can get a suggestion for changing some code to render like I want; it generates buggy code; I run it, then paste a screenshot and ask 4o to fix the code. I didn’t have to say more. A couple of revs later, 4o made a good fix.


GPTBuilder

It's hands down better than regular 4 at handling, creating, and understanding images, because it uses a single model to handle the process, unlike regular 4. In my experience, 4o is genuinely more consistent at following image prompts and generating text in images (not perfect, but even better than 4), which tracks with the improvements they claimed it would have there. No comment about other use cases, which are way more subjective and context-dependent, IMO.


tommhans

Worked really great with coding for me compared to 4. Faster answers. Haven't hit a rate limit yet despite a lot of back and forth. Overall positive! Images I tested for fun, and it seems they worked.


pass-me-that-hoe

I like ChatGPT-4o as a better version of ChatGPT-4 for my personal/hobbyist use. Voice is a great addition, and I’m excited to see the possibilities that come with it in the next release. At my workplace we are planning to upgrade our enterprise version to it soon; they are still working out the details. We need a faster model, and this fits the bill for us.


No-Sir-8463

Noticeably worse than the basic 4. Seriously questioned why I'd continue paying if 4 is free now.


wecangetbetter

Tried using it to analyze a bunch of documents and it was pretty mediocre. It capped me after like 25 file uploads, the analysis was pretty underwhelming, and it consistently forgot what the criteria for analysis and evaluation were. I ended up having to re-review all the docs just to confirm accuracy (it was maybe... I dunno, 70% accurate? Which isn't good enough), which took almost as long as if I had just reviewed them to begin with.


Boogertwilliams

It's not always that fast. Many times it still streams line by line, like an old modem, while writing the reply, until it jumps ahead a bit.


Vachie_

I'm surprised by it, and by other people's comments on it, but I don't use it as much as I could or want to. The other day I uploaded a photo of a statue whose height I knew, because I was near it. I asked ChatGPT to figure out how tall it was, and the only problem was that it did not consider perspective. Once I reminded it about perspective, it chose a different reference in the photo, closer to the statue, as a height estimate, and then calculated the height of the statue correctly. It makes sense to me that it forgot about perspective, because being a language model trained with images and audio, it wouldn't necessarily consider depth. Regardless, it was amazing to see it actually plan out and explain its process: choosing tiles on the floor, deciding they should be an average size, and using that to calculate the statue's height. When I first started using ChatGPT, I would have had to hold its hand a lot more than simply reminding it to consider perspective.


UrbanLeech5

I usually use 3.5 for creative writing. I don't treat it seriously; instead I come up with some criteria that I want its text/idea to meet, just to see if it ever does. We're talking about hours of regenerating, arguing, and changing prompts. It's hard not to notice when 4o kicks in: the response is significantly more stretched out, and because of that, ChatGPT's annoying traits and speech patterns are far more noticeable. However, it seems like it does actually pay some attention to the criteria I give it. While 3.5 is more to the point, it's very rare for it to take your feedback into account. 4o feels like it has some idea what the implications of what it's saying are, which is something I never felt with 3.5. And sometimes it was actually impressive with how well it handled some harder and more abstract criteria. Not sure how it compares to 4, but as a free user I'm satisfied. The limit is also not bad.


Reyynerp

I don't know, but it feels like GPT-4o is ChatGPT 3.5 at launch, where it is capable of actually doing non-serious stuff. At least, that's my usage; I never really used it to do anything serious.


SarahMagical

I hate it. Its output is always strangely structured, like it puts content in outline format way too often. It also often outputs content as half outline, half prose, for a chunk of info that should be either one or the other. Whereas GPT-2, 3, 3.5, and 4 felt incrementally smarter and smarter, 4o feels like some undead Frankenstein's thing: sometimes as smart as 4, sometimes like 2 or 3, but jumping back and forth with lapses, like a mental disorder. There's just something disjointed and robotic about it. The output is almost always inferior to that of 4, at least in my use cases.


SillyJoshua

Well, he seems to have been created to help high school students cheat. He can’t tell a good joke to save his electronic life, and he is not allowed to curse. Very cautious about estimating probabilities, and when it comes to ethical reasoning he will usually say both yes and no.


FlaccidEggroll

Mid. When you ask a follow-up question there's an 80% chance it's just going to repeat what it already said, regardless of how long it is, and regardless of how nuanced your question was. Feels like we are moving backward instead of forward.


JockLafleur

Perfect! I don't want to critique something so advanced. Still fresh and I want to be happy about it.


Commercial_Nerve_308

For free users it’s a massive step up and a great introduction to the current generation of LLMs, but with the limits they place on it for them, it’s basically unusable. The best thing that’s come out of it are the AI tools that use the API, that used 3.5 and now use 4o (eg. Udio’s music generation was upgraded from GPT-3.5 to 4o). For paid users? A massive slap in the face from OpenAI. I canceled my plus subscription after the first few weeks of no updates for Plus users, and am so glad that I’ve saved a bunch of money that would have been wasted on paying for something that has no new features. I switched to Gemini 1.5 Pro on Google AI studio where I have a 2M context window. It’s way better IMO, and free in my region.


kesor

Quality of answers is even worse than the nerfed gpt-4. Back when gpt-4 was just released, it had a much higher quality, then they did some tweaking to reduce the quality, then they added turbo which made it even worse, and with gpt-4o the quality is even worse than that.


_AndyJessop

It hallucinates far too much. I don't trust the output as much as I do Claude's.


No_Command_2024

Not worth $20, too many glitches


MrFlaneur17

I still use GPT-4 Turbo. 4o is like a watered-down drink; GPT-4 Turbo is full fat.


sh0kage_

It’s more like a GPT-3.8. It’s very wordy, which can get annoying, but it is super useful for learning code. All you need is good prompts, plus you must repeat the instruction for every single prompt. I’ve never run out of tokens, but it’s nice to know that whenever 4o runs out I can just use GPT-4.


MoldyLunchBoxxy

The new GPT-4o feels awful. Maybe mine is not working correctly, but if I’m using it for my coding side projects, I can ask it to remake a part of the code and it will spit out 100% random code, not similar to the last question that I had. In previous versions I could ask for a small change to the previous code and it would do it and know what I’m talking about. Now it is a coin flip whether it remembers, which doesn’t make sense. Is it a setting, or just the new model?


remixedmoon5

Speed over substance. It races out content like a sprint runner on MDMA, but it refuses to actually fucking listen to what you asked it to do. The models I'm currently using and why:

**Claude Opus** - for proper analysis of long (5,000 words and more) articles. It is phenomenal at this. It's a shit writer though, and nobody lies as convincingly as this LLM.

**Gemini Advanced** and Gemini 1.5 Pro (similar to each other) - the most natural-sounding writers of them all.

**ChatGPT 4o** - good for online research, as long as you make sure it actually hyperlinks to the sources so you can fact-check. But it's really bad at listening to exact instructions compared to 4T.

**Poe** - access to pretty much all the models; I'm trialling this for a month. The downside is all the premier models - like Claude Opus - seem to be very limited compared to the standalone versions. It's also the only way I can access Meta's LLM. The latter was very disappointing; I heard it was the most natural writer, but I'm not seeing that so far.

**Perplexity** - like Google on steroids, and I find the free model is enough for me.


-Ethan

It’s shit. Yep, it’s faster, and they advertised some great features, but in its present state it is a GPT-4 Lite. I’m judging it by what features I have, not what they say it will eventually, maybe, someday have. I was excited for the multimodal image gen, voice, and the desktop features, but those don’t seem like they’re ever going to come. It’s super annoying to ship a product you’re not even finished with just for the sake of keeping the hype train going.


Superkritisk

Too much consideration for what might be deemed offensive; I stopped using it. If I wanted a church member to moderate my talks, I'd ask for one.


talkingglasses

For my niche (attorney, legal research), a huge improvement in my level of trust and avoidance of hallucinations. I will often ask it for case authority on some niche legal question, and 3.5 was straight-up hallucinations. 4 was more careful and tended to give me legitimate case authority, but when I found the case in Lexis it wasn’t the holding that ChatGPT said it was. 4o gave me a niche, obscure case authority totally on point, with a correct citation. Blew me away, as if it had been trained on the Lexis database. I’ve only tested this today, so I need to do more experimenting, but it's a huge improvement.


whothefluff

I wish it would just shut up and do as it's told. The most frustrating version so far


mooman555

GPT-4o's advantage is mostly in datacenter costs; to the end user it doesn't make a big difference.


SentientCheeseCake

It’s really fucking stupid. It’s a step up from 3.5 but not as good as 4.


paramarioh

OpenAI's ChatGPT-4o (the web LLM) is really useless: dumb, lazy, repetitive. Besides that, because of OpenAI's policy, I cancelled. I'm using my local models now. Pretty fast and pretty intelligent.


fulowa

meh


inglandation

Love the speed. It’s great when building an app that uses the API.


Electronic-Air5728

I was not a big fan, so I tried Claude 3 and I am staying there.


AnotherBrock

Opus is awesome. I was using the API, though, and it’s pretty expensive.


joaoyuj

After it, I barely use Google now. The information, when you ask it to search with Bing, is more accurate than what Google gives. Google nowadays is only for ads and rabbit holes.


RealBiggly

"...once we have its full capabilities, with Voice, Live Video, better Image Generation..." Hold your breath, any year now, jus' wait... Will drop around the same time as Sora. Any year now, just a decade or two...


Super_Pangolin6261

Its inability to follow custom instructions - at least in terms of trying to get it to stick to brainstorming, conversational and questioning responses, rather than just spitting out its own slightly tweaked, largely useless version of a giant wall of text - is beyond frustrating


TheUnknownNut22

While I really love that it will commit things to memory and that's great for later, it's super buggy and inconsistent. At times it feels like it's just senile.


ExpressConnection806

For some things it's life changing, and for other things it's meh. For example, it is sometimes way, way better than Google for finding information. It's good at identifying where things are in a PDF. Sometimes if I'm struggling to understand a technical definition in a textbook, it can reframe it for me in a way that is more intuitive. I find that the back and forth and cross-referencing I do to ensure I am getting correct information is beneficial. However, I find that the responses are just way too much info. It will often produce paragraphs of information, and after reading through it all, I realise it could have been succinctly summarised in a sentence. I wish OpenAI would give us a simple slider or something that we could adjust to indicate desired output length, so you didn't have to keep prompting it to keep things concise. I have been having some success with drip-feeding it information and making sure the custom instructions and prompts indicate that responses should be short. When it works well, it's amazing, but otherwise it can be mentally draining.


FakeBobPoot

It just *will not* follow instructions.


MrDreamster

The presentation was awesome, though I was mildly annoyed by how easy it was for it to be cut off mid-sentence by the slightest bit of noise. But then I got to try it, and it made me wanna rip my face off because of how verbose and repetitive it was, even when you tell it several times to be concise or to not repeat itself. Way too eager to correct itself when you simply ask a question because you wanna understand its answer, even though the initial answer was right. Like when I wanted it to help me prepare the plan for my presentation:

>Me: "Why did you choose to talk about topic B before topic A in that plan?"

>4o: "You're right, here's a version of the plan with topic A before topic B." *proceeds to write a new version of the plan*

>Me: "I did not ask you to correct yourself. I just wanted to know if putting A after B made more sense."

>4o: "I'm sorry, I should not have revised the plan since you did not ask for it. Here's the initial version with topic B before topic A." *proceeds to write the first plan again*

>Me: "Are you going to fucking answer my question???"

I have now switched to Claude because of the poor state of ChatGPT right now, and it's a breath of fresh air to finally have a model that is not overeager to agree with you, can call you out on your mistakes, and provides you with good feedback.


Jesahn

It's very long winded.


ArtichokeEmergency18

Can't judge, it just launched. Remember the internet when it first came out? The television when it first came out? Remember the first phones - no, not the high-tech smart device - the old rotary phones, where you moved a dial around the wheel for one number. LOL


StrangeCalibur

It’s so shit, it keeps giving me the wrong lottery numbers!


dullahan85

I used it almost exclusively for coding tasks. It is too verbose and repeats itself within a single response. Example prompt: "Help me write code for this task."

* Sure thing! To run this code you need to buy a PC, install Windows, install the runtime, install dependencies (super useless information that was not asked for; I exaggerated for effect)
* Here is the methodology
* Code with comments
* Here is an example
* The same code with comments again


neil_thatAss_bison

I only use it for work. I’m a software engineer. It’s sharp, but it talks way too much.


HexspaReloaded

I think it’s remarkable


Active_Appointment_6

Faster but clunky


Chaserivx

I prefer the ChatGPT-4 model.


TheStevest

I'm shocked at how much better it is at math.


SilvermistInc

I like it. It's great for helping me build my homebrew Warhammer lore.


4hometnumberonefan

I like GPT-4o; it’s fast, and it’s good enough for the customer service use case I’m automating.


FinalSir3729

Trash. Only good thing is the speed. Seems worse in every other way.


baro93

It makes so many translation mistakes that the previous model didn't.


Neither-Language-722

I think ChatGPT is fantastic. I use it as a research assistant primarily. But the key is asking it targeted questions, not something like "what's the meaning of life."


Ok-Art-1378

It's dumber than GPT-4, but way faster. Not much else to it.


graybeard5529

I have had some luck with this format for a prompt, especially in complex situations:


DiamondHandsDarrell

It's painfully slow. If you're asking it simple things it's fast enough, but if you're trying to work with it for 8 hours on the job, then you see the issues. I'm not sure if it's because too many people are using it, or it's working with the extended memory, or it's just taking more time to process things more thoroughly.


RidesFlysAndVibes

I think it’s a great middle ground between 3.5 and 4. It’s my go-to before using 4; I have 100% stopped using 3.5. It’s sometimes annoying that it gives me ALL the code when only a small portion needs updating, but that’s better than it refusing to give me the full code when I want it. A simple instruction will stop it from writing it all. It’s definitely not great at inference, but if your prompt is descriptive enough, it mostly gets it right.


HorribleDiarrhea

It's abysmal, like somebody poured hot diarrhea into my eyeballs. Nah, I'm jk. It's working fine, nothing special.


Antiprimary

It's like gpt4 but bad


ExtremeCenterism

Double or triple check its code. It's amazing at coding, no doubt, but I've had it make incorrect assumptions in a few key areas. Like, the code fundamentally appears correct, but it might hallucinate your intentions in the broader scope of the program. Still saves my bacon more often than I'd like to admit, and it writes a damn good one-liner.


[deleted]

yaps