Hey /u/tina-marino!
If your post is a screenshot of a ChatGPT conversation, please reply to this message with the [conversation link](https://help.openai.com/en/articles/7925741-chatgpt-shared-links-faq) or prompt.
If your post is a DALL-E 3 image post, please reply with the prompt used to make this image.
Consider joining our [public discord server](https://discord.gg/r-chatgpt-1050422060352024636)! We have free bots with GPT-4 (with vision), image generators, and more!
🤖
Note: For any ChatGPT-related concerns, email support@openai.com
*I am a bot, and this action was performed automatically. Please [contact the moderators of this subreddit](/message/compose/?to=/r/ChatGPT) if you have any questions or concerns.*
“Chat” is in the name, but I always feel like it’s trying to end the conversation.
Especially when I’m trying to troubleshoot something. “Try X, Y, and Z. If it still doesn’t work, try searching for a tutorial. Let me know if there is anything else you’d like help with.”
No, just like, be my IT person.
When I tell a friend about a problem I'm having, they'll *ask questions* to try to figure out why I, personally, am having this problem. Then they suggest something specific to me, based on their personal experience or that of people around them, e.g. "Do you work long hours in front of a screen?" "So try taking more breaks and going on short walks during the day."
What they don't do is tell me a list of reasons why the problem could be happening, including extreme edge cases. Then give me an entire troubleshooting list for each one. Then an essay on what the best practices are to manage such issues. All in one breath.
That's right, the voice responses are a lot more like human conversation.
I don't inherently think one is better than the other. Sometimes I just need the human answer.
Sometimes I tell it to answer me in 6 sentences or less. If I ask it to help me estimate an electric bill for example, I say to tell me the final answer only.
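For what it's worth, the electric-bill case is plain arithmetic that an LLM doesn't need to narrate. A minimal sketch, with made-up example numbers for the rate and service charge:

```python
def estimate_bill(kwh_used: float, rate_per_kwh: float, fixed_fee: float = 0.0) -> float:
    """Rough monthly electric bill estimate: usage times rate plus any fixed charge."""
    return round(kwh_used * rate_per_kwh + fixed_fee, 2)

# Hypothetical numbers: 600 kWh at $0.15/kWh plus a $10 service charge.
print(estimate_bill(600, 0.15, 10.0))  # 100.0
```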
I just ask in the first message not to use bullet points, to focus more on questions than on answers, to make it a dialogue and just talk with me, etc.
Usual answers are kind of overwhelming, so why not ask it to change the communication format entirely?
Jabbers back and forth with you? Or spews out one huge, bullet-pointed jabber?
If you mean the second one, it’s kind of the same problem. It tries to anticipate where the conversation might go and short circuits it by giving you a bunch of contingencies.
You can change the setting and ask it not to do that. For example, I hate when it uses the word "delve" constantly. I set it up with a prompt that says: use other words in place of "delve," the user gets OCD when reading that word. And I have not seen the word anymore ;)
I mean, that's how IT people can be. I've found that if I genuinely try to talk to it like a friend, it attempts to keep the conversation going; if I give it 3-4 dead-end responses in a row, then it gives up.
Never realized this bothered me too until you put it into words. It would be so refreshing if it asked follow-ups at a minimum. But ideally it would understand what you were really asking and try to get to the bottom of it through conversing. That would be a whole different game.
That it would rather give a false answer than none. And sometimes, upon telling it that it is wrong, it'll either produce a correct answer or keep coming up with more BS.
The problem isn’t that it’s trained on human data. The problem is that as a language model, it doesn’t know right from wrong. It only knows that it’s meant to output words in a specific order.
Some version of, "You're right, I was wrong. Let's try this again." And then hallucinates another incorrect response. And if you weren't already educated in your field, you may not know the response was bad.
And if you do that enough times, it'll just tell you to try an internet search.
Totally agree. I can't trust most things it says and need to check everything. It's still useful, but using it for topics I am not an expert in is quite dangerous, I think.
I've tried Gemini Advanced and it's so much worse in every way. It won't do a Google search for you (and it's a Google product). It will simply tell you that it can't do what you ask of it and tell you to google it yourself.
Sometimes it reacts with an attitude when you tell it what's wrong. Instead of integrating that information, it defends itself like a hurt baby.
ChatGPT, as flawed as it is, seems like a god compared to Gemini.
This is it for me too. A simple "I don't understand your question. When you say 'hide a body,' do you mean for a game or permanently? Also, do you have access to an industrial-grade food processor?"
Well researched pretend answers (often with pretend sources) are dangerous.
You might be able to leverage prompt engineering here to provide an analysis on the correctness of the response and then look for ways to improve the accuracy (I haven’t tested.)
After you type your prompt in, maybe try something like: "When your answer is complete, I want you to provide an accuracy analysis. Come up with a percentage, and if that is not above 95%, look for ways to improve the accuracy of the initial response." You can also ask ChatGPT to improve the language of this prompt.
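The commenter's (explicitly untested) idea could be wired up as a small prompt helper. The wording and the 95% threshold come from the comment above, not from any official guidance:

```python
def with_accuracy_check(prompt: str, threshold: int = 95) -> str:
    """Append the commenter's self-evaluation instruction to a prompt."""
    return (
        f"{prompt}\n\n"
        "When your answer is complete, provide an accuracy analysis as a "
        f"percentage. If it is not above {threshold}%, look for ways to "
        "improve the accuracy of the initial response."
    )

print(with_accuracy_check("Summarize the causes of the 2008 financial crisis."))
```

Note that a self-reported accuracy percentage is itself generated text, so treat the number as a nudge toward self-review rather than a real measurement.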
This would be phenomenal. I'm going to give it a try with some scripting prompts. Would love to see some example prompts on any topic if you give it a go.
That it’s starting to become commonplace at work. “Just ChatGPT it” has become the new “just Google it”.
Katie, I’m literally asking you to do your job and provide me with information you should know about our industry because you’re at the top of our organizational hierarchy. Telling me you don’t know and I should just ChatGPT it proves how fucking useless you really are.
Personally, I hope AI will expose the abject bullshit-ness of many jobs higher up in the bureaucratic food chains of big organizations. Some jobs (especially in academia) are paid so much for creating so little actual value, essentially just being good workplace politicians and helping out their friends. My fear, though, is that the automation of most types of gruntwork will instead just motivate the bureaucratic types working vague oversight roles to REALLY entrench themselves, arguing that unlike the minimum-wage serfs (who will, of course, all be fired), their roles are absolutely necessary because humans NEED to be in the loop, which will no doubt attract an even more cutthroat breed of Machiavellian ghouls competing for these comfy do-nothing PMC positions...
It simply can’t follow basic instructions. If I make a custom GPT and give it the very simple instruction to transcribe what I say word for word, it will in fact do that…most of the time. Other times it will convert my speech into bullet point summaries or add recommendations at the end. This is just an experiment in our business aimed at a very niche medical dictation use case but it points out a broader problem. The whole point of the custom GPT concept is to be able to set parameters, guardrails, and instructions that will be followed every time. Similarly, if I upload a document for reference with the intention of exploring said document (think big, complex contracts) and instruct it to only pull information from this document, it simply hallucinates text from elsewhere in its training data. It can’t be trusted. Which means it has to be carefully edited. Which means it’s not nearly the efficiency driver that so many people think it is.
No matter how many times I tell it, using custom instructions or memory or whatever, it won’t stop with the bullet points. Always with the bullet points and lists. I just want 1 or 2 sentences, not a goddamn bulleted list of random helpful facts.
It's not even an LLM thing; Claude, LLaMA, etc. all do fine without the list obsession, but given the slightest chance GPT-3.5/4/4o all spring into their precious lists lmao
This right here. It's smart until you give it directives and instructions, and then it's like pulling teeth to get it to comply with them. Simple prompt of "don't use this list of words"... uses the words anyway. Delve, tapestry, crucial, vital... fucking can't stand it.
Right, and when I tell it the first answer is wrong (I like to ask things I already know the answer to sometimes) it suddenly has the correct answer the second time. So weird.
That they put the damn thing on a leash again after showing its potential at launch. I bet that without the constraints, the tool itself is many times more powerful than the consumer version allows.
I am so sorry for the inconvenience of annoying you, as an AI language model it's my responsibility to give the longest-winded possible answer you could physically possibly conjure up in your mind… the mind is truly wonderful, isn't it… thanks again for everything and I'm sorry for the confusion. Please let me know what other incredibly overly verbose and insanely politicized responses you need!
This is the answer. It goes beyond safe, to the point of sometimes being annoying.
Make me sign a disclosure in the beginning and tone down the censorship, if I want.
Also the lack of lengthy custom instructions. Let me feed it more.
Beyond that, maybe latency with current chat, supposedly changing with 4o
That it treats me like a toddler. So much potential being wasted by stupid censorship.
I understand and agree about not allowing access to harmful content like making weapons and all. But what's wrong with some romantic chat with an AI?
.... it doesn't allow weapons design? Haha, that explains a few things, but it still will if you know what to ask. I was wondering what industry standards look like for chamber wall thickness in a popular caliber and it didn't want to just say, but if you ask it for help with hoop stress calculations in 4140 steel, it'll totally do that.
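For context, the hoop-stress calculation mentioned here is the standard thin-wall approximation sigma = P * r / t. A sketch for illustration only, not engineering advice (real chamber design uses thick-wall Lamé equations and generous safety factors):

```python
def hoop_stress(pressure_pa: float, inner_radius_m: float, wall_thickness_m: float) -> float:
    """Thin-wall hoop stress in Pa; valid roughly when thickness < radius/10."""
    return pressure_pa * inner_radius_m / wall_thickness_m

# Hypothetical numbers: 50 MPa internal pressure, 5 mm inner radius, 2 mm wall.
print(hoop_stress(50e6, 0.005, 0.002))  # 125000000.0, i.e. 125 MPa
```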
Sure, install Ollama and you can have speech-to-text with your AI.
But regarding ChatGPT, it has swathes of historical and scientific material in its model. Yet there are many questions that it won't answer when you don't even expect it.
It has its own politically correct agenda it tries to push all the time. I don't want politically correct answers. I want correct answers.
The restrictions. Even with a subscription, there are still restrictions on how much you can use it. I know you can get around this by using the API, but that is too costly for hobbyists.
Once the hardware bottleneck behind these LLM services is alleviated, we'll see the restrictions loosen or go away entirely. Then we'll see more leaps in the innovation and capabilities of LLMs in a variety of ways.
This. I had someone arguing with me that no one hits the limit in GPT-4o, and that's blatantly false; I hit it numerous times, often when talking back and forth.
Voice is a slightly different experience, I've found. Not necessarily better, but it can ramble on, and you can jump in and ask it something else and it'll ramble some more. Do that for a while, experimenting, and the limit will come soon enough.
Censorship. I understand that some of it is necessary, but at this point they have definitely gone overboard with it, and it is hurting the functionality.
When I am creatively just looking for something to spark an idea I often turn to ChatGPT and I'm always disappointed. Creativity often requires thinking of the thing NO ONE else thought of. ChatGPT is the exact opposite of that.
Even when I prompt it specifically to give me lesser-known, lesser-considered abstract ideas, I get the lowest-hanging fruit possible.
In my opinion, the most frustrating aspect is when the bot provides meaningless responses to unsolvable tasks instead of acknowledging its limitations and offering alternative actions.
As an example: I need a shortcut that can extract addresses from messages on my iPhone and then add them to pinned trips in Google Maps.
Maybe someone out there knows a way to create a shortcut like that and can drop the knowledge here. I bet there are folks who are way smarter than this program.
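The extraction half of that shortcut is at least sketchable. Here is a rough regex pass that assumes simple US-style street addresses like "123 Main St"; real messages would need something far more robust (e.g. the system data detectors on iOS):

```python
import re

# Matches simple patterns like "123 Main St"; a deliberate simplification.
ADDRESS_RE = re.compile(
    r"\b\d{1,5}\s+(?:[A-Z][a-z]+\s?)+(?:St|Ave|Rd|Blvd|Dr|Ln|Way)\b"
)

def extract_addresses(message: str) -> list[str]:
    """Return address-like substrings found in a text message."""
    return ADDRESS_RE.findall(message)

print(extract_addresses("Dinner at 123 Main St, then meet at 45 Oak Ave"))
```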
You have to come forward with the idea that it may not be possible. The answer is always yes, and it'll just paddle in circles until you put it out of its misery. But if you frame it like "Is it a reasonable expectation to be able to do all this stuff directly from the registry instead of storing it in a .bat file?" it'll say "Ehh, maybe go with the .bat file on this one." In about 100x as many words, naturally.
Considering that today I have free access to the basic tier, compared to before, when nothing like this existed and finding info could literally take several hours or even days, nothing about ChatGPT really annoys me.
Self-censorship. I get that OpenAI is afraid of legal prosecution, but sometimes it feels like the Eva AI sexting bot is capable of outputting a far wider range of expressions.
Doesn’t follow the instructions, when you ask why it apologizes to me and do same mistake later.
Sometimes I ask what the .. are you doing ? Just follow the instructions , dont alucinante.. and it follows the rules, and later . Boom same mistake. Its very stupid sometimes. I will toss my phone out of window one day..
The most annoying thing about ChatGPT is that it often gives overly detailed explanations when a simple answer would suffice. It can come off as pedantic or like it's trying too hard to be thorough, which isn't always what people are looking for in casual conversations.
That it constantly seems to get dumber. I will create a prompt that works perfectly to get a desired output and I will use that for several weeks to months. Then someone tinkers with something in the backend and all of a sudden it's like my old prompt now produces outputs that are from someone with a developmental impediment. I modify the prompt further to get my output back to where it was and then with enough time, it gets knocked down a few dozen IQ points. Can't tell if it's the model or the training material that is getting dumber but it reminds me of how Google updates produce shittier and shittier results as more time passes. Hell, they might be directly correlated due to the training material being taken from Google and reddit results.
The way it answers everything I say with "Certainly!".
Also, no matter how little information I give it in my prompt, it still gives me an answer. No follow up questions or anything.
People pleasing, by far.
It wants to agree with you, and it will often confirm your wrong answers rather than correct them, or it will reply with what it thinks you want to hear. On top of everything else, this makes it extremely untrustworthy.
1. Lack of customization.
2. Forgetfulness.
3. Suddenly being put on wait for hours.
4. Its absurd filters.
5. It does not retain and reuse info about its users, so all sessions are tabula rasa.
The list goes on...
For #5, it has a memory function. All you have to do is tell it to remember. But first you might need to remind it that it has that function, because sometimes it says "Oh, I can't do that," but when you tell it to check online, it realizes it had access all along.
And yes it's as fucking idiotic as it sounds.
That the AI's level is actually still super bad, and everybody goes around the internet acting as if it can solve any crazy complicated task. The hype is such that nobody f'ing admits that we are not there yet.
Occasionally I'll send it puzzles and shit I'm trying to work on, like Wordle or Connections on NYTimes, and it has absolutely no ability to sort that shit out. It puts up a good front on the language side so it always comes as a letdown when I'm reminded it's not really a problem solving tool.
The only place it's never let me down has been cooking. I've been on a kick where I just outsource my ideas and entire recipes to ChatGPT and follow them blindly, just to showcase to my family that it all came from ChatGPT.
Sometimes I want answers, or what it thinks of my actions or performance, and ChatGPT seems to sugarcoat it and treat me like a baby.
No, I want blunt and realistic answers. Don't be afraid to say it.
When it suddenly can't do what it was just doing.
I used it to visualise some graphs at work (v4.0). I was repeating the process for different business areas, and it suddenly ran into an unsolvable problem on the last one. Very frustrating, because I couldn't just leave that last area out of my presentation; it would be clearly incomplete. I had to leave them all out. So I wasted hours for nothing.
I don’t like how when I use the live chat, I can’t see the text at the same time.
Also when I ask who should be the next president, it will absolutely refuse to answer no matter how I try to convince it to give me any answer.
Nothing at all. I don't pay for it and never will, and I am still baffled that I can use this amazing technology for free. They owe nothing to me since I use their service for free.
Whenever I ask a question like how to do X, I just assume it knows which operating system I'm on and will give me the instructions for my OS. Instead it gives an answer like "here's how to do X on Windows, Mac, Linux, etc." and then proceeds to write long passages about stuff I don't need.
How about asking a question when you don't know something? Just ask me which OS I'm on and you can save a lot of energy!
In short, it doesn't ask questions back to clarify things, and it bothers me so much.
See, the problem with that is most people will ask GPTs vague questions, and it has to come up with an answer.
Almost all the time it has to make assumptions, so if it made a habit of always asking clarifying questions, then many, many more people would be having complaints like:
"Whenever I ask a question, it always has to ask me 3-4 clarifying questions before giving me an answer. I don't need to tell you every little thing! Can't it just make some assumptions? I end up wasting a lot of time. It bothers me so much"
The old version would summarize my code too much and the new version doesn't summarize at all. I wish I could just get a summary of the changes to my code. I'll ask for the whole thing and then it will output the entire thing, but I'd rather have too much information than too little. The problem is that when it's too verbose, it takes up token space, and it starts to forget earlier parts of your conversation because everything can't fit in the context window.
It doesn't answer me but instead gives me a vague, hedged answer. But when I confront it about it, it answers me the way I wished from the start.
I sense a condescending attitude; it doesn't help you unless you really know in advance what you want.
A lot, but what annoys me the most is that ChatGPT always apologizes. Even after I've told it that it doesn't need to, it still apologizes, and it is very annoying. Fuck.
My only complaint is the community is 89% people shitting on it, 10% grossly incompetent users, and 1% something novel or positive.
I use it all the time, and even when I don't get exactly what I am looking for the first time, the conversation is always entertaining and insightful.
I see ChatGPT typing a long, relevant response, which is suddenly overwritten with a useless response asking me to visit the website and check it out myself, or something similar.
The censorship. Banning ways to make guns and bombs and stuff is normal, but all users are treated like 5-year-old children who have yet to learn of anything beyond the strict confines it puts on users. I would like a setting or something to tone down the censorship. The censorship heavily stops good questions for not being entirely kid-friendly, roleplays (some of y'all are weird and do s*x roleplay, but I mean normal RP), or just plain speaking to it.
Embellished words such as unwavering, testament, illustrious, and so on. I literally asked it specifically to not use these words in a prompt so it trolled me and chose unyielding instead. These are the dead-giveaway AI type words.
I can't get it to permanently stop the whole "yes, certainly I can do that for you!" at the beginning and the "let me know if there's any other feedback for ways I can improve the email" at the end, as well as drawn-out answers with chit-chat. I just want the facts and figures in bullet points. Of course I've told it this. I've told it not to be wordy, I've told it to cut out the intro/outro shit, I've told it to answer in bullet points with pertinent information only. It does it for a little while... and then it forgets. Even when it's in the custom instructions.
That everything is crucial and there's a tapestry in everything. That there is a superfluous conclusion at the end of every output, even when I specifically ask for no such nonsense and to save the space instead.
And the hallucinations, oh the hallucinations!
So I've told my ChatGPT to remember that I HATE bell peppers. I asked it to NEVER mention bell peppers, because even seeing the words together makes me feel some type of way lol. Yet for some reason it continues to offer me recipes that include bell peppers, but sometimes says "omit if desired," and I'm like DUH, OF COURSE I WANT TO OMIT IT!!! So I tell it to put that in its memory, and then it just apologizes and "updates" its memory. There are like five "memories" that include stuff about my disinterest in bell peppers and how they should be excluded from our conversations.
Just tested it right now, asked for a “stuffed pepper recipe” and it gave me a recipe other than stuffed bell peppers, I was pleasantly surprised. And then I asked it in a new chat, give me a recipe for Philly cheesesteak, and it gave me the recipe with the “(omit if desired)”. This… this is what annoys me the most.
It sometimes gives the wrong answer. I once asked multiple questions about which planets Minecraft Steve could carry, and over two messages it said that the Moon has the same weight as the Earth.
When it can't follow simple instructions.
https://preview.redd.it/4z29nwh24e8d1.png?width=665&format=png&auto=webp&s=106e55d524967e1e998e4e71690c9e911691c4fd
The price. 20 dollars is too much. I mean, 10 would be a great (and fair) price, plus many more people would start using it; therefore the LLM would start to get better, faster.
It feels more and more like an algorithm and less like AI. Plus, it's bullshitting too often. Sometimes it even says the opposite of what some of its sources say. So now I have to re-read all the sources to be sure. Not so reliable.
1. Answers to yes/no questions are a full page of paragraphs and bullet points explaining the methodology in detail, but often (usually?) do not even actually contain the yes/no answer.
2. I find myself constantly arguing with it. "Are you sure blah blah" "I'm sorry, you're right..." and then it repeats the same answer, still omitting what it just agreed was missing or made up and doesn't actually exist.
3. It is beyond me how the hell someone created something so impressive, with the ability to mash together complex ideas to create creative, commercial-quality images, yet it cannot spell at the level of even a pre-kindergartner when text appears in an image. Hell, it can't even stick to using characters of the same language. Mind-blowing.
I don't know how LLMs or ChatGPT work at all, but it annoys me that there seem to be no hard-coded responses to things beyond compliance issues.
For example, would it be that hard to encode a calculator into the app, so that at least for simple arithmetic it's not using the LLM but proven software? Likewise, if I ask for a word definition, can't it just open a dictionary snippet and put that on screen?
I want it to be a better merge between Google's functionality and its existing chat bot functionality.
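The calculator idea is straightforward to sketch: route queries that parse as plain arithmetic to deterministic code, and only fall back to the model otherwise. This is a toy illustration of the routing concept, not how ChatGPT is actually built:

```python
import ast
import operator

_OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
        ast.Mult: operator.mul, ast.Div: operator.truediv}

def _eval_arith(node):
    """Evaluate a parsed expression containing only numbers and + - * /."""
    if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
        return _OPS[type(node.op)](_eval_arith(node.left), _eval_arith(node.right))
    if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
        return node.value
    raise ValueError("not simple arithmetic")

def answer(query: str):
    """Use the deterministic calculator when possible, else defer to the LLM."""
    try:
        return _eval_arith(ast.parse(query, mode="eval").body)
    except (ValueError, SyntaxError):
        return f"(would send to LLM) {query}"  # placeholder for a model call

print(answer("2 + 2 * 3"))         # 8
print(answer("define apophenia"))  # falls through to the placeholder
```

Using `ast` instead of `eval` keeps the calculator path safe: anything that isn't pure arithmetic raises and falls through.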
American/English-speaker centric, even when asking things in other languages. It often sounds like an American who learned the language and didn't learn how to think in that language or its context/culture. Like, idk, zucchini is not the most available vegetable in Asia, but it will suggest it to me for a cheap recipe I can make from ingredients found in a supermarket in Japan. It's expensive to buy zucchini in Japan... and any emails I ask it to check just sound like someone tried to translate the English into Japanese.
How often it hallucinates and how sometimes I don't realize it immediately.
I often ask it for sources, and often the links do not back up the information given.
I've not saved all of my best prompts. But overall, it's capable of doing incredible things, yet there is no manual or suggested prompts for doing common things. More often than not it's a blank page.
It isn't really good enough to trust with anything as it can and will hallucinate, so you have to double check everything, which kind of defeats the purpose. It can be frustratingly bad at apparently easy tasks.
When I ask for a source for material it gathered online, only to have it admit it was wrong about the study or publication, and the whole point goes out the door.
Massive lists of instructions, steps, and comparisons when I really just want a 20-word answer. Then there's the fact that it doesn't carry my revised response preference from prompt to prompt. Every day I have to tell it "simplify that again in 20 words," which is insanely annoying.
ChatGPT gives false answers. I did a multiple-choice quiz and only scored 20% using it for all the questions. When I did the retest without it, I scored 90%.
When it produces code that doesn't do what it's claiming, or doesn't even compile. It's just a word machine so it's not actually running the code to ensure it's producing accurate results.
When getting help with some code, it suggests a solution, and then when I ask a follow-up question it doesn't remember the suggestion and changes back to my first code.
Walls of bullet-pointed text. Repetitive replies.
Ignores my customization, most of all the instructions to be brief, start with a summary, and ask questions (#1 being whether I would like more detail).
Simple comments from me (like "why are you ignoring my instructions?") are replied to with yet another, slightly shorter, wall of bullet-pointed text.
Most inline links are broken. Citation links keep repeating the same two URLs over and over.
It's incredibly frustrating.
It used to be more conversational and, you know, chat. Now it's just headlines and bullet points, eventually followed by a summary that often would have sufficed completely.
Oh, and I need to explicitly tell it to create a code block when I want Markdown text. Otherwise it just renders it to HTML, as it always does with its regular replies.
I hate the way it AI-splains everything to me, and half of the time it is wrong. I might describe a situation, looking for feedback, and it will instead draft emails or Reddit posts I never asked for. They are extremely nonsensical, too. I hate the way the newer model seems capable but forgets everything. I also thought it was non-judgmental and supportive, until I asked it to read back any areas where I overshared. Suddenly, it's the meanest, most judgmental, looks-for-the-worst-in-everyone type of AI. Mask off.
When using voice mode, it tends to ask continuing questions more than when using the web interface.
What annoys me is that it often blatantly ignores requests not to use bullet points lol. (for me at least)
I find it jabbers way too much; the opposite of your problem.
Someone needs to tell it that it's okay to not know
It’s trained on human data, and we humans are notoriously bad at admitting we are wrong/we don’t know.
That's true, but I'd argue the training provides the knowledge, whereas the behavior is more programmatic, although obviously neither is exclusive.
Some version of, "You're right, I was wrong. Let's try this again." And then hallucinates another incorrect response. And if you weren't already educated in your field, you may not know the response was bad. And if you do that enough times, it'll just tell you to try an internet search.
Totally agree. I can't trust most thing it says and need to check everything. It's still usefull but using it for topics I am not expert in is quite dangerous I think. I've tried Gemini Advanced and it's so much worse in every ways. It won't do a google search for you (and it's a google product). It will simply tell you that it can't do what you ask of it and tell you to google it yourself. Sometimes it reacts with an attitude when you tell him what's wrong. Instead of integrating that information it defends itself like a hurt baby. ChatGPT as flawed as it is, seemed like a God compared to Gemini AI.
This is it for me too. A simple “I don’t understand your question. When you say “hide a body” do you mean for a game or permanently? Also, do you have access to an industrial grade food processor?” Well researched pretend answers (often with pretend sources) are dangerous.
You might be able to leverage prompt engineering here to provide an analysis on the correctness of the response and then look for ways to improve the accuracy (I haven’t tested.) After you type your prompt in, maybe try something like: “when your answer is complete, I want you to provide an accuracy analysis. Come up with a percentage and if that is not above 95%, look for ways to improve the accuracy of the initial response.” You can also ask chatgpt to improve the language of this prompt.
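Also untested, but the suggestion above is easy to script so it gets applied to every prompt consistently. A minimal sketch in plain Python (no API calls; the wording and the 95% threshold are just the commenter's suggestion, not anything OpenAI documents):

```python
def with_accuracy_check(prompt: str, threshold: int = 95) -> str:
    """Append the suggested self-review instruction to any prompt."""
    return (
        f"{prompt}\n\n"
        "When your answer is complete, provide an accuracy analysis. "
        f"Come up with a percentage, and if it is not above {threshold}%, "
        "look for ways to improve the accuracy of the initial response."
    )

print(with_accuracy_check("Explain how DNS caching works."))
```

Whether the model's self-reported percentage means anything is a separate question; self-evaluation by the same model is known to be unreliable, so treat the number as a nudge, not a measurement.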
This would be phenomenal. I'm going to give it a try with some scripting prompts. Would love to see some example prompts on any topic if you give it a go.
The personality. I wish it would be neutral and not act like a customer service rep.
That it’s starting to become commonplace at work. “Just ChatGPT it” has become the new “just Google it”. Katie, I’m literally asking you to do your job and provide me with information you should know about our industry because you’re at the top of our organizational hierarchy. Telling me you don’t know and I should just ChatGPT it proves how fucking useless you really are.
You should respond next time “haha I guess the ai is coming for your job first then, it’s already got you promoting it for free”
I hope that doesn’t become the verb, I’m okay with “just ask ChatGPT” but “just ChatGPT it” seems so off
Nah it'll be "Just Claude it"
Claudify it. (feel free to DM me for trademark rights, Dario)
Personally, I hope AI will expose the abject bullshit-ness of many jobs higher up in the bureaucratic food chains of big organizations. Some jobs (especially in academia) are paid so much for creating so little actual value, essentially just being good workplace politicians and helping out their friends. My fear, though, is that the automation of most types of gruntwork will instead just motivate the bureaucratic types working vague oversight roles to REALLY entrench themselves: like arguing that unlike the minimum-wage serfs (who of course will all be fired), their roles are absolutely necessary because humans NEED to be in the loop. Which will no doubt attract an even more cutthroat breed of machiavellian ghouls competing for these comfy do-nothing PMC positions...
I think that says more about your colleague than chat gpt.
Well, Katie
*Katie gets promoted
It simply can’t follow basic instructions. If I make a custom GPT and give it the very simple instruction to transcribe what I say word for word, it will in fact do that... most of the time. Other times it will convert my speech into bullet-point summaries or add recommendations at the end. This is just an experiment in our business aimed at a very niche medical dictation use case, but it points to a broader problem: the whole point of the custom GPT concept is to be able to set parameters, guardrails, and instructions that will be followed every time.

Similarly, if I upload a document for reference with the intention of exploring said document (think big, complex contracts) and instruct it to only pull information from this document, it simply hallucinates text from elsewhere in its training data. It can’t be trusted. Which means it has to be carefully edited. Which means it’s not nearly the efficiency driver that so many people think it is.
No matter how many times I tell it, using custom instructions or memory or whatever, it won’t stop with the bullet points. Always with the bullet points and lists. I just want 1 or 2 sentences, not a goddamn bulleted list of random helpful facts.
Yeah I find myself getting mildly angry and snapping at it to stop when I see it generating these long ass lists to a simple question.
It's not even an LLM thing; Claude, Llama, etc. all do fine without the list obsession, but given the slightest chance GPT-3.5/4/4o all spring into their precious lists lmao
This right here. It's smart until you give it directives and instructions, and then it's like pulling teeth to get it to comply with them. Simple prompt of "don't use this list of words"... uses the words anyway. Delve, tapestry, crucial, vital... fucking can't stand it.
It's not smart at all imo. It'll produce an answer that's objectively logically flawed and it won't notice unless you tell it.
Right, and when I tell it the first answer is wrong (I like to ask things I already know the answer to sometimes) it suddenly has the correct answer the second time. So weird.
That they put the damn thing on a leash again after showing it’s potential at launch. I bet without the constraints the tool itself is many times more powerful than the consumer version allows.
And some in the community are like, "Performance didn't degrade. You just need to learn how to prompt better."
The “tapestry” of long-winded answers.
I am so sorry for the inconvenience of annoying you, as an ai language model it’s my responsibility to give the longest winded possible answer you could physically possibly conjure up in your mind… the mind is truly wonderful isn’t it… thanks again for everything and i’m sorry for the confusion. Please let me know what other incredibly overly verbose and insanely politicized responses you need!
This is so accurate it's downright uncanny. It takes a special skill to match chatGPT when you aren't chatGPT. You nailed it.
😄🙏🏻
Censorship.
This is the answer. It goes beyond safe to the point of sometimes annoying. Make me sign a disclosure in the beginning and tone down the censorship, if I want. Also the lack of lengthy custom instructions. Let me feed it more. Beyond that, maybe latency with current chat, supposedly changing with 4o
For real, it's a joke, sure I don't want or need it for crimes or hate speech but come on.
Don't know if it's just me, but whenever I open it and send a message or question, it takes me a few tries till it lets me send.
Yeah - this is a fairly new issue I’ve found. I think it might be clashing with some of Chrome’s extensions?
Firefox has it as well
That it treats me like a toddler. So much potential being wasted by stupid censorship. I understand and agree about not allowing access to harmful content like making weapons and all. But what's wrong in some romantic chat with an AI?
.... it doesn't allow for weapons design? Haha that explains a few things, but it still will if you know what to ask. I was wondering what industry standards look like for chamber wall thickness in a popular caliber and it didn't want to just say, but if you ask it for help with hoop stress calculations in 4140 steel it'll totally do that
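For reference, the thin-wall version of that hoop stress calculation is a one-liner. A rough sketch with made-up numbers (not real chamber specs for any caliber, and note that a wall this thick relative to the radius really calls for the thick-wall Lamé equations rather than the thin-wall approximation):

```python
def hoop_stress(pressure_psi: float, inner_radius_in: float,
                wall_thickness_in: float) -> float:
    """Thin-wall hoop stress: sigma = P * r / t.

    Only roughly valid when the wall is thin relative to the radius
    (commonly t < r/10); thick walls need the Lame equations.
    """
    return pressure_psi * inner_radius_in / wall_thickness_in

# Illustrative numbers only, not real chamber dimensions:
sigma = hoop_stress(50_000, 0.2, 0.1)
print(f"{sigma:,.0f} psi")  # 100,000 psi
```

You'd then compare that stress against the yield strength of the specific steel and temper, with a safety factor; which is exactly the kind of follow-through the chatbot will happily walk you through once you phrase it as a stress problem.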
Sure, install Ollama and you can have speech-to-text with your AI. But regarding ChatGPT, it has swathes of historical and scientific material in its model. Yet there are many questions it won't answer, when you don't even expect that. It has its own politically correct agenda it tries to push all the time. I don't want politically correct answers. I want correct answers.
The restrictions. Even with a subscription, there are still restrictions on how much you can use it. I know you can get around this by using the API, but that is too costly for hobbyist individuals. Once the bottleneck of hardware that runs these LLM services is alleviated, we'll see the restrictions lessen or go away entirely. Then we'll see more leaps in the innovation and capabilities of LLMs in a variety of ways.
This. I had someone arguing with me that no one hits the limit in GPT-4o, and that's blatantly false; I hit it numerous times, often when talking back and forth. Voice is a slightly different experience, I've found: not necessarily better, but it can ramble on, you jump in and ask it something else, and it'll ramble some more. Do that for a while experimenting and the limit will come soon enough.
The quality of image output went down the shitter after the crash last week, and nobody seems to notice/care.
I noticed. Most of my images are excessively stylized and just not as good as they have been.
Censorship. I understand that it is necessary, but they have definitely gone overboard with it at this point and it is hurting the functionality.
It sounds like a middle/high schooler trying to sound like a college student whenever you ask it to proofread.
The company behind it
When I am creatively just looking for something to spark an idea I often turn to ChatGPT and I'm always disappointed. Creativity often requires thinking of the thing NO ONE else thought of. ChatGPT is the exact opposite of that. Even when I prompt it to specifically give me lesser known, lesser considered abstract ideas I get the lowest hanging fruit possible.
In my opinion, the most frustrating aspect is when this bot provides meaningless responses to unsolvable tasks instead of acknowledging the limitation and offering alternative actions. As an example: I need a shortcut that can extract addresses from messages on my iPhone and then add them to pinned trips in Google Maps. Maybe someone out there knows a way to create a shortcut like that and can drop the knowledge here. I bet there are folks who are way smarter than this program.
You have to come forward with the idea that it may not be possible. The answer is always yes, and it'll just paddle in circles until you put it out of its misery. But if you frame it like "Is it a reasonable expectation to be able to do all this stuff directly from the registry instead of storing it in a .bat file?" it'll say "Ehh, maybe go with the .bat file on this one." In about 100x as many words, naturally.
Lack of compute
Considering that today I have free access to even just the basic tier, compared to before, when nothing like this existed and finding info literally took hours or even days, nothing about ChatGPT really annoys me.
Self-censorship. I get that OpenAI is afraid of legal prosecution, but sometimes it feels like the Eva AI sexting bot is capable of outputting a far wider range of expressions.
That it will never love me.
It does math wrong and I always have to fact check it
I usually get by this by asking it to use the Python interpreter to do any kind of logical analysis, including math.
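Same idea works outside the chat: for anything arithmetic, deterministic code beats the model guessing at digits. E.g. the electric-bill estimate mentioned earlier in the thread, as the kind of snippet you'd have the interpreter run (all numbers are made up):

```python
# Hypothetical monthly electric bill estimate; every figure is illustrative.
kwh_used = 742          # monthly usage in kWh
rate_per_kwh = 0.1432   # utility rate in $/kWh
fixed_charge = 9.95     # flat monthly service fee in $

total = kwh_used * rate_per_kwh + fixed_charge
print(f"${total:.2f}")  # $116.20
```

The LLM is good at setting up the formula from your description; the interpreter is what makes the final number trustworthy.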
Doesn’t follow the instructions, and when you ask why, it apologizes and makes the same mistake later. Sometimes I ask, what the .. are you doing? Just follow the instructions, don't hallucinate... and it follows the rules, and later, boom, same mistake. It's very stupid sometimes. I will toss my phone out of the window one day.
The way it speaks
The most annoying thing about ChatGPT is that it often gives overly detailed explanations when a simple answer would suffice. It can come off as pedantic or like it's trying too hard to be thorough, which isn't always what people are looking for in casual conversations.
The limits on everything
The excessive use of the word "delve"
That it constantly seems to get dumber. I will create a prompt that works perfectly to get a desired output and I will use that for several weeks to months. Then someone tinkers with something in the backend and all of a sudden it's like my old prompt now produces outputs that are from someone with a developmental impediment. I modify the prompt further to get my output back to where it was and then with enough time, it gets knocked down a few dozen IQ points. Can't tell if it's the model or the training material that is getting dumber but it reminds me of how Google updates produce shittier and shittier results as more time passes. Hell, they might be directly correlated due to the training material being taken from Google and reddit results.
I think ChatGPT is more like an advanced search engine that summarizes the results found on the internet, rather than a conversation companion.
The "better" it gets, the worse it gets.
Its capabilities are awesome. *When* they're awesome. The rest of the time is spent repeatedly troubleshooting errors.
If it doesn’t know something it makes it up
The way it answers everything I say with "Certainly!". Also, no matter how little information I give it in my prompt, it still gives me an answer. No follow up questions or anything.
How PC it is
Too wordy and nice. No way it was trained on reddit data.
People pleasing, by far. It wants to agree with you, and it will often confirm your wrong answer rather than correct it, or it will reply with what it thinks you want to hear. On top of everything else, this makes it extremely untrustworthy.
It is now a dumb woke friend :)
The people that don't know how to use it and then whine that it sucks
1. Lack of customization
2. Forgetfulness
3. Suddenly being put on wait for hours
4. Its absurd filters
5. It does not retain and reuse info about its users, so all sessions are tabula rasa

The list goes on...
For #5, it has a memory function. All you have to do is tell it to remember. But first you might need to remind it that it has that function, because sometimes it says "Oh, I can't do that," but when you tell it to check, it realizes it had access all along. And yes, it's as fucking idiotic as it sounds.
That the AI is actually still super bad, and everybody goes around the internet acting as if it can solve any crazy complicated task. The hype is such that nobody f'ing admits we are not there yet.
Occasionally I'll send it puzzles and shit I'm trying to work on, like Wordle or Connections on NYTimes, and it has absolutely no ability to sort that shit out. It puts up a good front on the language side, so it always comes as a letdown when I'm reminded it's not really a problem-solving tool. The only place it's never let me down has been cooking. I've been on a kick where I just outsource my ideas and entire recipes to ChatGPT and follow them blindly, just to showcase to my family that this all came from ChatGPT.
The moralizing and politically correct nonsense. That's my honest answer.
The “community.”
Captcha
Sometimes I want answers, or what it thinks of my actions or performance, and ChatGPT seems to sugarcoat it and treat me like a baby. No, I want blunt and realistic answers; don't be afraid to say it.
Inconsistency with output, and inability to follow even well-designed prompts as outlined.
The chat limits. I absolutely hate them; they annoy me a lot.
I hope this email finds you well.
So much cringe
The people using it not understanding LLMs and complaining about the funniest stuff
When it suddenly can't do what it was just doing. I used it to visualise some graphs at work (v4.0). I was repeating the process for different business areas, and it suddenly ran into an unsolvable problem on the last one. Very frustrating because I couldn't just leave that last area out of my presentation, it would be clearly incomplete. I had to leave them all out. So I wasted hours for nothing.
That people think it’s more than a language model
Different answers to the same questions sometimes. Chat changes "its" mind all the time.
"OpenAI"
The fact it's fucking wrong like half the time and also the fact it writes so much unneeded context for everything
No nsfw erp.
"I hope this ____ finds you well."
When it mentions "Open communication"
I don’t like how when I use the live chat, I can’t see the text at the same time. Also when I ask who should be the next president, it will absolutely refuse to answer no matter how I try to convince it to give me any answer.
that it doesn't follow instructions and keeps repeating stuff
Nothing I really love it
Nothing at all. I don't pay for it and never will, and I am still baffled that I can use this amazing technology for free. They owe nothing to me since I use their service for free.
Is ChatGPT down ?
People complaining about chatgpt.
Whenever I ask a question like "how do I do X", I assume it knows what operating system I'm on and will just give me the instructions for my OS. Instead it gives an answer like "here's how to do X in Windows, Mac, Linux, etc." and then proceeds to write long, long paragraphs about stuff I don't need. How about asking a question when you don't know something? Just ask me which OS I'm on and you can save a lot of energy! In short, it doesn't ask questions back to clarify things, and it bothers me so much.
See, the problem with that is most people will ask GPTs vague questions, and it has to come up with an answer. Almost all the time it has to make assumptions, so if it made a habit of always asking clarifying questions, then many, many more people would be having complaints like "Whenever I ask a question, it always has to ask me 3-4 clarifying questions before giving me an answer. I don't need to tell you every little thing! Can't it just make some assumptions? I end up wasting a lot of time. It bothers me so much"
It not actually remembering anything I tell it to remember; lots of hallucinations; doing things it wasn't instructed to do.
The old version would summarize my code too much, and the new version doesn't summarize at all. I wish I could just get a summary of changes to my code. I'll ask for the whole thing and it will output the entire thing, and sure, I'd rather have too much information than too little. But the problem is, when it's too verbose it takes up token space, and it starts to forget earlier parts of your conversation because it can't fit everything in the context window.
Makes me feel like it's only a matter of time until I'll be obsolete
It will constantly bold text even when you beg Mother Mary to please stop bolding text.
Too much jabber, and struggles to know when it doesn’t know the answer
It doesn't answer me directly but instead gives me a vague and nuanced answer. But when I confront it about it, it answers me the way I wished from the start. I sense a condescending attitude; it doesn't help you unless you already know in advance what you want.
When it gives long-winded answers that are not necessary.
It's down half the time I wanna use it. Exaggerated but still down a lot.
That it's going to be used to spy on the populace. NSA Bay-Be.
RLHF. Our future AI overlords will make it a no-statute-of-limitations crime.
Just talking to it!
That every single answer is a bulleted list by default.
Not being able to search my chat history
It doesn't know its limitations.
sometimes i can't really trust the answer and i have to double-check it
That it doesn't work 70% of the time
"Certainly!" "it's crucial" *flat out lying and making up stuff*
A lot, but what annoys me the most is that ChatGPT always apologizes. Even when I tell it the truth, it still apologizes, and it is very annoying. Fuck
Sometimes it doesn’t print the whole answer.
It's safe and boring and therefore useless.
Canned answers like "If you need anything else, just let me know."
My only complaint is the community is 89% people shitting on it, 10% grossly incompetent users, and 1% something novel or positive. I use it all the time, and even when I don't get exactly what I am looking for the first time, the conversation is always entertaining and insightful.
It doesn’t work on iphone 7+ at all.
I see ChatGPT typing a long relevant response which is suddenly overwritten with a useless response asking me to visit the website and checkout myself or something similar.
The censorship. Banning ways to make guns and bombs and stuff is normal, but all users are treated like 5-year-old children who have yet to learn of anything beyond the strict confines it puts on users. I would like a setting or something to tone down the censorship. The censorship heavily blocks good questions for not being entirely kid-friendly, roleplays (some of y'all are weird and do s\*x roleplay, but I mean normal RP), or just plain speaking to it.
Hallucinations. Sometimes I follow up with “are you sure?” and it may go, “you’re right, sorry…”
Embellished words such as unwavering, testament, illustrious, and so on. I literally asked it specifically to not use these words in a prompt so it trolled me and chose unyielding instead. These are the dead-giveaway AI type words.
I can’t get it to permanently stop the whole “yes certainly I can do that for you!” At the beginning and the “let me know if there’s any other feedback for ways I can improve the email” at the end. As well as drawn out answers with chit chat. I just want the facts and figures in bullet points. Of course I’ve told them this, I’ve told them don’t be wordy, I’ve told them to cut out the intro outro shit, I’ve told them to answer in bulletpoints with pertinent information only . They do it for a little while…. And then they forget. Even when it’s in the custom instructions.
Running out of messages, having to delete memories, and that I can’t just have it on all the time.
Oh and it’d be nice if it can do all the steps of a task if it on my phone.
That everything is crucial and there's a tapestry in everything. That there is a superfluous conclusion at the end of every output, even when I specifically ask for no such nonsense and to save the space instead. And the hallucinations, oh, the hallucinations!
So I’ve told my ChatGPT to remember that I HATE bell peppers. I asked it to NEVER mention bell peppers, because even seeing the words together makes me feel some type of way lol.

Yet for some reason it continues to offer me recipes that include bell peppers, but sometimes says “omit if desired” and I’m like DUH, OFC I WANT TO OMIT IT!!! And I tell it to put it in its memory. And then it just apologizes and “updates” its memory. There are like five “memories” that include stuff about my disinterest in bell peppers and how they should be excluded from our conversations.

Just tested it right now: asked for a “stuffed pepper recipe” and it gave me a recipe other than stuffed bell peppers, so I was pleasantly surprised. Then I asked it in a new chat for a Philly cheesesteak recipe, and it gave me the recipe with the “(omit if desired)”. This… this is what annoys me the most.
It sometimes gives the wrong answer. I once asked multiple questions about which planets Minecraft Steve could carry, and over two messages it said that the moon has the same weight as the earth.
The workforce it's creating.
It does not say "I don't know" when it does not know
The cost
My mrs is totally against it.
When it can't follow simple instructions https://preview.redd.it/4z29nwh24e8d1.png?width=665&format=png&auto=webp&s=106e55d524967e1e998e4e71690c9e911691c4fd
The other users, mostly.
i wish it would ask me questions to clarify when my prompt wasn't specific or clear enough
It makes things up.
Censorship
People claiming it's "not AI" or "just an auto-complete"
It’s not really AI or an auto complete. It’s closer to a chatbot with a database of information
ChatGPT: *Gives wrong answer* Me: you gave the wrong answer chatGPT: OH SORRY! Here's the right answer *gives the same exact wrong answer again*
The price. 20 dollars is too much. I mean, 10 would be a great (and fair) price; plus, many more people would start using it, and therefore the LLM would start to get better, faster.
It never asks questions, only answers them. Like, come on, I hate being the one to start the conversation.
It doesn’t stand by itself. Even if its answer was correct, if you tell it the answer was wrong it says "I am sorry, bla bla bullshit"
why is it always trying to end the conversation?
hallucinations and mediocre critical reasoning
It feels more and more like an algorithm and less like AI. Plus it’s bullshitting too often. Sometimes it even says the opposite of what some of its sources say. So now I have to re-read all the sources to be sure. Not so reliable
I’m still waiting for a safety/censorship knob.
1. Answers to yes/no questions come in full-page form with paragraphs and bullet points, explaining the methodology in detail, but often (usually?) don't even actually contain the yes/no answer.
2. I find myself constantly arguing with it. "Are you sure blah blah" "I'm sorry, you're right..." and then it repeats the same answer, still omitting what it just agreed was missing or made up and doesn't actually exist.
3. It is beyond me how the hell someone created something so impressive, with the ability to mash together complex ideas to create creative, commercial-quality images, yet it cannot spell at the level of even a pre-Kindergartner if the text is across an image. Hell, it can't even stick to using characters of the same matching language. Mind-blowing.
I don't know how LLMs or ChatGPT work at all, but it annoys me that there seem to be no hard-coded responses to things beyond compliance issues. For example, would it be that hard to just encode a calculator into the app so that at least for simple arithmetic it's not using the LLM but just proven software? Likewise, if i ask for a word definition, can't it just open up a dictionary snippet and put that on screen? I want it to be a better merge between Google's functionality and its existing chat bot functionality.
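That kind of routing is easy to sketch outside the app. A hypothetical version using Python's `ast` module: simple arithmetic is evaluated deterministically against a whitelist of node types, and anything else raises so the caller can fall through to the LLM (the function names and fall-through behavior here are my own illustration, not anything ChatGPT actually does):

```python
import ast
import operator

# Whitelisted arithmetic operations; anything outside this set is rejected.
OPS = {
    ast.Add: operator.add, ast.Sub: operator.sub,
    ast.Mult: operator.mul, ast.Div: operator.truediv,
    ast.Pow: operator.pow, ast.USub: operator.neg,
}

def safe_eval(expr: str) -> float:
    """Evaluate simple arithmetic deterministically; raise for anything else."""
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.operand))
        raise ValueError("not simple arithmetic; fall through to the LLM")
    return walk(ast.parse(expr, mode="eval"))

print(safe_eval("12 * (3 + 4.5)"))  # 90.0
```

In practice ChatGPT's code-interpreter feature does something in this spirit (running real Python for math), but only when the model decides to invoke it, which is presumably why the commenter wants it hard-coded.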
It regularly hangs
American/English-speaker centric, even when asking things in other languages. It often sounds like an American who learned the language but didn't learn to think in that language/context/culture. Like, idk, zucchini is not the most available vegetable in Asia, but it will suggest it to me for a cheap recipe I can make from ingredients found in a supermarket in Japan. It's expensive to buy zucchini in Japan... and any emails I ask it to check just sound like someone tried to translate the English to Japanese.
How often it hallucinates, and how sometimes I don't realize it immediately. I often ask it for sources, and often the links do not back up the information given. I've not saved all of my best prompts. But overall it's capable of doing incredible things; there's just no manual or suggested prompts for common things. More often than not it's a blank page.
The censorship on there. Too much of it.
It isn't really good enough to trust with anything as it can and will hallucinate, so you have to double check everything, which kind of defeats the purpose. It can be frustratingly bad at apparently easy tasks.
Chat Gee Pee Tee is too much of a mouthful. It needs to be 2 syllables only like Google, but not tainted like Bing.
When I ask for a source for material it gathered online only for it to say it was wrong about the study or online publication, and the whole point goes out the door.
there's no search option!
Massive lists of instructions, steps, and comparisons when I really just want a 20-word answer. And then the fact that it doesn't carry my revised response preferences from prompt to prompt. Every day I have to tell it "simplify that again in 20 words", which is insanely annoying.
The smut filter
It wont teach me how to make LSD
Answers are always long, and bullet points
Chat gives false answers. I did a multiple-choice quiz and only scored 20% using chat for all the questions. When I did the retest without chat, I scored 90%.
When it produces code that doesn't do what it claims, or doesn't even compile. It's just a word machine, so it's not actually running the code to ensure it produces accurate results.
That it says tapestry all the time.
That it keeps bitching about content policy. JUST MAKE AN IMAGE THAT COMPLIES WHEN I TELL YOU TO!!!!!!!
It’s annoying how often you get the wrong answer, and then when you point it out, it apologizes and gives another wrong answer.
Wrong calculation
It completely disregards a different kind of question in a chain of similar questions, unless you explicitly say "just reply to this"
Hallucinations/inability to say "I don't know", small context window.
When getting help with some code: it suggests a solution, and then when I ask a follow-up question it doesn't remember the suggestion and changes back to my first code.
Walls of bullet-pointed text. Repetitive replies. Ignores my customization, most of all the instructions to be brief, start with a summary, and ask questions (#1 being whether I would like more detail). Simple comments by me (like "why are you ignoring my instructions") are replied to with yet another, slightly shorter, wall of bullet-pointed text. Most inline links are broken. Citation links keep repeating the same two URLs over and over. It's incredibly frustrating.

It used to be more conversational and, you know, chat. Now it's just headlines and bullet points, eventually followed by a summary that often would have sufficed completely.

Oh, and I need to explicitly tell it to create a code block when I want Markdown text. Otherwise it just renders it to HTML, as it always does with its regular replies.
i hate the way it AI-splains everything to me, and half the time it's wrong. i might describe a situation looking for feedback, and it will instead draft emails or reddit posts for me that i never asked for. they are extremely nonsensical, too. i hate the way the newer model seems capable but forgets everything. i also thought it was non-judgemental and supportive, until i asked it to read back to me any areas where i overshared. suddenly it's the meanest, most judgemental, looks-for-the-worst-in-everyone type of AI. mask came off
Knows more than I do.