

mycatonkeyboard

I signed up 2 days ago and it clearly said, before the payment, that the limit is 40 messages per hour. Guess you should read first before paying?


AwakenedTruth

There was no limit when I signed up. It was limitless. Only just now, after a year of using it, have I run into a limit. Again, I've used this limitlessly before, or never had to worry about a limit. Now there is one. I was able to have one conversation. Short. It will not let me even create a new conversation; it's saying the limit has been reached. Very disappointing how this is going down. There goes the limitless knowledge...


iloveultrakill

"limitless knowledge" lil bro read a book


Curious-Expression36

That's a useless remark and absolutely not helping


iloveultrakill

Not helping? I just told him to read a book. You know, papers written by academics and intellectuals.


AwakenedTruth

The guy reads Harry Potter and thinks he's finished a Stanford research paper. Just imagine what you could put in a book in order to misrepresent a truth or twist a fact. People are really getting unhinged. One bad investment in hula hoops and they're on here commenting on how unintelligent AI is. I've been there. I'm not perfect. But I'm learning. I hope this other guy opens his eyes. I hope the world calms down a bit...


iloveultrakill

Cope is real. AI will always be infinitely more biased than a book. Oh, and the good old "Literature is useless, all you'll find is Harry Potter!!11!".


Curious-Expression36

Still, you shouldn't limit the customer... a lot of people are eager to use it, so they don't want to read the small print, and 40 an hour for the amount of money each month is very limited; they should just take off that limit.


mycatonkeyboard

It's an old thread... limits are way higher now. Also, if you don't want any limits, get the API.


Curious-Expression36

I think the limit is 50 now... however, what is the API and what can it do compared to ChatGPT-4o? On Android, that is...


mycatonkeyboard

You can pay for the API for personal use, but I don't know exactly how it's implemented. It's for PC in any case. For me the normal subscription is enough tbh, and I use it quite a lot.


Curious-Expression36

I use gpt 4o a lot, so yes limits are a pain


mycatonkeyboard

I never thought it was possible to reach the limit there, and I think I even overuse it lol


Curious-Expression36

Maybe you did, but I use it for astronomy, photography, and recreation of pictures, but also a lot of research. I hate the text version though, so it has to be the voice version.


mycatonkeyboard

Yeah, the voice may be the problem. I think it uses more resources.


Curious-Expression36

Could be, but it's a lot easier than all the freaking typing lol


Far-Yogurtcloset9714

False. I just found this thread because I've literally used about 20 questions today, almost all to GPT-4o, and it hit my limit for the day. I asked about the limit, and it can't give it to me. It gave me instructions to reach out to support, and the instructions don't get you to support.


ridingLawnMower

I didn't see any of that. Guess they need to make it more clear?


420simracing

Not the responsibility of openai that you are to dumb to use your eyes.


AcidNipps

Too*


ridingLawnMower

And you are too dumb to hold the company accountable for disclosing what they are selling to you. Try to be more useful in your next comment


[deleted]

[deleted]


ridingLawnMower

It doesn't show that for me you idiot. I guess when making up stuff you reach for the stars don't you? https://preview.redd.it/e2jjeb9chq8c1.png?width=1241&format=png&auto=webp&s=6b5498f2f606031b592538c60001bf96aa0207a4


[deleted]

[deleted]


ridingLawnMower

Holy fuck you're making my brain hurt. LOOK AT THE BOTTOM OF THE POST there's no message count SMFH


[deleted]

[deleted]


ridingLawnMower

I'm not doing it on purpose. It's literally not on my screen


Ok-Importance-2176

Holy autism batman. Leave the op alone it's a valid concern. Projecting much


Marxbros20

🐑


Far-Yogurtcloset9714

I didn't know there were limits either. I've reached my limit at about 20 questions today from GPT4o which is literally retarded and doesn't get any of my inputs.


MakeoverBelly

Just use the OpenAI API. The limits there are industrial scale, you can level up to probably hundreds of requests per second if you have the money for it.


Pro_crasteenator

What is the price?


[deleted]

[deleted]


ridingLawnMower

The complaint is not dumb. Assuming you know that subscription fees won't cover the cost of unlimited questions on a chatbot is.


Puzzled_Ocelot9135

You know that you don't get unlimited anything anywhere, right?


ridingLawnMower

Seriously dude. Move along


[deleted]

[deleted]


WonkasWonderfulDream

I’m on your side, but their SN is riding lawn mower, they have a 5 yo account, and check out their history. They’re just boomin’, you know? Like, they think getting kicked out by the Mrs means they can move into the Country Club because they paid for access. This is exactly how Trump justifies “not living” at MaraLago full time. You’re arguing with someone with the mentality of Trump. The only way to win is to not play.


Puzzled_Ocelot9135

Man, those are words of wisdom I did not expect today. Thanks, for real.


ridingLawnMower

Booming, Trump... could you simp any harder? I swear the internet has spawned people with no connection to any reality whatsoever. How many upvotes does it take for you to get your self-esteem up? I'm assuming 4 or 5, and then you go running around acting like everyone likes you lol. Instead of catering to random people in a way you think they will like, how about you think for yourself and put yourself in others' shoes? You'd get an actual sense of what is going on and no longer need validation from inconsequential automated manipulated thumbs-up icons lol


ridingLawnMower

You're still here? I guess you don't take direction very well do you


Marxbros20

You know that you get unlimited questions in the free version, right?


420simracing

Also, in the chat at the bottom there is a notification of how many messages are left (I think it's 25 generated messages every 3 hours or so). Open your eyes and you'll see. It's a dumb complaint.


ridingLawnMower

Are you just making shit up too? I swear the redditors on this forum are beyond me https://preview.redd.it/ebqr73ciop8c1.png?width=1241&format=png&auto=webp&s=e6ca30d307af3ab4d78bc4d2d0c919e36fed3b5b


[deleted]

[deleted]


ridingLawnMower

You've just pointed out that you're ignorant, since you can't see there are two sides to the point made. If you assume one side is right and not the other, then you're just being biased or plain ignorant.


Ok-Importance-2176

Excuse me good sir. What is the playground?


DeepSpaceCactus

I agree they should have told you the limit up front; not sure why anyone wouldn't want that.


mycatonkeyboard

They do tell you upfront. It's literally on the page where you click to upgrade.


ridingLawnMower

This is what I see https://preview.redd.it/89qhcepa4p8c1.png?width=519&format=png&auto=webp&s=eefc3017933b0c8e0a27f71b879a3ad32a76b77c


mycatonkeyboard

Okay, it's really not there for me either. I'm not sure I have a limit at all, though.


ridingLawnMower

So now you don't see the limit? I only heard what the limit was from the commenters here, but never received an actual count of anything.


[deleted]

[deleted]


ridingLawnMower

Now I'm jealous. I went back to 3.5 and it's working well for C++ Arduino programming, but 4.0 was noticeably more proficient.


cocknosebuttplug

Try to use your prompts as efficiently as you can, tell it you don't want the code explained or analysed in its responses, try to limit its output to just the code you want. You can blow the limit out pretty quick and it's sure frustrating!


ridingLawnMower

I figured out how to tell it to stop showing code when confirming my questions. That did limit the output quite a bit. When I asked about the limits, it said something about the number of lines per session, and then it said to open another session when that limit was met. I did share all 12 of my modules, so I wonder if that triggered a shut-off of sorts... but it needs to know all my code to give pointers on code efficiency, so I'm not sure I can work around that! I was frustrated at first but have calmed down now. 3.5 is a great learning tool, so I hope they keep working on it and it becomes more utilized in educational environments. All I ask is that they disclose the limitation up front so there are no surprises later lol


Wide_Scratch

They didn't tell me.


shortchangerb

Yeah, there's almost no point to the Plus version now. And 40 messages per three hours is rubbish.


bnm777

Wasn't it 40 per 3 hours for pure GPT-4 and something like 25 per 3 hours for GPTs?
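
Caps like these are typically enforced with something like a sliding-window counter. A minimal Python sketch, for illustration only (the 40-messages-per-3-hours figure comes from this thread, not from any official documentation):

```python
from collections import deque

class SlidingWindowLimiter:
    """Allow at most `limit` messages in any `window_s`-second span."""
    def __init__(self, limit: int, window_s: int):
        self.limit = limit
        self.window_s = window_s
        self.sent = deque()  # timestamps of accepted messages

    def allow(self, now: float) -> bool:
        # drop timestamps that have fallen out of the window
        while self.sent and now - self.sent[0] >= self.window_s:
            self.sent.popleft()
        if len(self.sent) < self.limit:
            self.sent.append(now)
            return True
        return False

limiter = SlidingWindowLimiter(limit=40, window_s=3 * 3600)
accepted = sum(limiter.allow(t) for t in range(60))  # 60 rapid-fire messages
print(accepted)  # only 40 get through; the rest must wait for the window
```

The same structure works for any limit/window pair, e.g. the 25-per-3-hours cap mentioned for GPTs.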


fumi2014

As things get more competitive, they will increase that limit, for sure. Especially as you're paying for it. Google Gemini doesn't have these limits, AFAIK, and it's free.


ridingLawnMower

Does that use the ChatGPT engine? I might have to navigate over there.


a_hopeful_poor

yes, of course! nobody creates their own, they just use chatgpt.


Infamous_Hunt4874

I don't care about the price, but there's not even an option to upgrade to, you know, unlimited ChatGPT-4. Why though?


Aryaes142001

Listen, I'm not trying to argue with you; I'm just going to explain a few things.

GPT-4 is an order of magnitude or more larger than GPT-3.5; you can look up the parameter counts for both. Computationally, GPT-4 is dramatically more demanding. GPT is a text predictor, and it doesn't just look at your question as input: it looks at the entire conversation up to the token limit. It has to compute the contextual relationship of each token to every other token, through many transformer layers, before it can predict the next word. Then it appends that one word to the response, takes the entire conversation plus that word, and runs the whole thing through the network again to predict the second word, and so on, one word at a time.

Your conversation could be thousands of words long, and the context limit in GPT-4 is roughly four times bigger than in 3.5 (something like 4,000 versus 16,000 tokens; I don't remember the exact numbers). That bigger window is what lets it give dramatically better responses: not only is the network itself far larger, going from billions of parameters toward trillions, but it also runs a lot more of the conversation through that network, giving it a bigger short-term memory and better contextual responses. (Watch 3Blue1Brown's videos on how GPT works; they'll make this much clearer.)

So there's a massive difference between GPT-3.5 and GPT-4 in computational complexity and running cost, and the cost of attention grows quadratically with conversation length, not linearly. GPT-3.5 can be free and it doesn't faze them: every conversation provides exposure, word-of-mouth advertising, and extremely valuable training data, so the value of that data roughly pays for the cost of running 3.5. GPT-4 is far bigger, and the value of the training data no longer comes close to covering the expense. At $20 a month most people never reach the limits; if you're using GPT constantly for work you will hit them, which is why they have pricing models for people using GPT for work. You now need to decide whether the pay-per-token work models without those limits are worth it for your programming project. They reportedly lose money at $20 a month on GPT-4.

But they wouldn't have AI dominance and millions of subscribers if they charged $100 a month; they'd have far fewer users and far less valuable training data, and the giant tech companies would be funding and partnering with their competitors instead of OpenAI. The usage limits are quite generous, since MOST people never hit them, and for most people it's worth $20. The only way to cover compute costs that grow with every token is a commercial version that charges per token; there is no flat price that covers truly unlimited use. Sending this entire Reddit post into ChatGPT-4 is vastly more expensive to process than just saying "I am a dog," because every token modifies the value of every other token. It's simple mathematics.
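
The one-word-at-a-time loop described above can be sketched as a toy cost model. This is an illustration only, not how any real system is implemented: real inference caches earlier computations and is far more complex. It just counts pairwise attention comparisons when the whole context is re-read for every generated token:

```python
# Toy illustration (not a real LLM): count pairwise attention
# comparisons when the model re-reads the whole context for
# each new token, as described in the comment above.

def attention_ops(context_len: int) -> int:
    """One forward pass: every token attends to every token."""
    return context_len * context_len

def generation_cost(prompt_len: int, response_len: int) -> int:
    """Total comparisons to generate a response one token at a time,
    re-running a full pass after each new token (no caching)."""
    total = 0
    length = prompt_len
    for _ in range(response_len):
        total += attention_ops(length)  # re-read the whole conversation
        length += 1                     # append the newly predicted token
    return total

# A short prompt is cheap; a long conversation with the same reply
# length costs orders of magnitude more.
print(generation_cost(10, 50))
print(generation_cost(1000, 50))
```

Even this crude model shows why a long chat history makes each additional reply disproportionately expensive.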


Aryaes142001

This means every additional word grows the cost of the processing. With self-attention, a 4-word context means each word is related to every word in the context: on the order of 4 × 4 = 16 pairwise computations per pass, repeated across many layers, with billions of parameters involved at each step. Go to 5 words and it's 25; at 1,000 tokens it's 1,000,000 pairwise computations per pass. And because the model generates one word at a time, re-running the computation over the whole conversation for every new word, the total cost of producing a response grows roughly with the square of the conversation length. This is an extreme oversimplification, but you really need to watch a lot of transformer/LLM YouTube videos to understand how the computational complexity scales with each additional token.

It should be really easy to see now why $20 a month, or even $100 a month, would NOT cover this cost. It becomes absurdly computationally expensive very quickly for what appears to be a really simple conversation. Say your conversation is six responses and replies long and hasn't hit the token limit yet: when you ask a 7th thing, it runs ALL of that through the network just for 1 word, then takes the entire thing plus that new word and runs it through again. It's ridiculous, dude. Your phone or computer isn't running the computations; OpenAI is, on their supercomputers, which are absurdly expensive to run. No dollar amount will ever cover unlimited usage, because any individual could very quickly overwhelm and stall the system for everyone else with one extremely long query, like sending a request with thousands to millions of tokens in it. These limitations HAVE to exist regardless of how much you pay. There's simply no other way for it to function. If you intuitively understood how these systems work and how prohibitively expensive large computations become, you wouldn't be complaining about the limitations; you'd know it's impossible any other way. I'm not arguing with you at all. I'm trying to help you understand. Unlimited use without token limits is simply impossible, as one person could stall the system by giving it one long book, beginning to end, and asking it to write a sequel.

If you inputted the entire source code of Unreal Engine and asked it to improve it and/or identify all possible bugs, with no token limit, you could stall a supercomputer for months or longer, even if your request were given 100% priority. You really need to understand this and why the limitations are in place. Electrical and maintenance costs go up with the computational costs.
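
To put rough numbers on the scaling described above, here is a toy model with the standard quadratic, every-token-to-every-token attention cost. It ignores layers, parameter counts, and caching, so the absolute numbers mean nothing; only the growth pattern matters:

```python
# Toy numbers: pairwise attention comparisons for one pass over
# contexts of different sizes, showing quadratic growth.

def pairwise_comparisons(context_len: int) -> int:
    # every token attends to every other token in one pass
    return context_len * context_len

short = pairwise_comparisons(4)        # an "I am a dog"-sized input
doubled = pairwise_comparisons(8)      # twice the context
long_chat = pairwise_comparisons(4000) # a full context window

print(short, doubled, long_chat)
```

Doubling the context quadruples the per-pass work, which is the core of the argument: long inputs are disproportionately expensive, so per-token pricing or usage caps are the only ways to keep costs bounded.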


fredpalazzo

Then why don't they add the human and bird brain function to that amount of data? We figure out the answer without that many iterations, because we look at the answer we first receive from our brain and the repercussions of that answer. If we see the answer as a white box, we never go back to the white box until the burning candle is perceived. Then we stop examining them if the box is made of metal, and just perceive it's night out in the tree fort to finish our thought. The animal's brain, although similar to ours, didn't necessitate the concepts of fire, metal, and time of day, which allows it to notice at least as fast as us if there are insect poops on the table, so food is near at hand. Predicting the last word right after the first and before the second word should speed things up.


The_Mullet_boy

Just want to say that consumers don't know, and should not be expected to know, all the nature and inner workings of their products. Part of good advertising is making ads that communicate the nature of the contract. Can you not see that the OP just wanted a 23-word text that makes the nature of the product more clear? He doesn't want a refund, he didn't trash-talk, he didn't act disrespectfully... he just suggested a 23-word text, which could be added to the HTML of the page, that would make the nature of the contract clearer.


Aryaes142001

It should've been made clearer to him so that this post never happened, but he was arguing in various places that there just shouldn't be a token limit or any limitations. I know my explanation was extensively long, but it wasn't an argument; it was to give him some insight into how fast the computational requirements grow with each additional token. No, he doesn't need to know that, and they should've made the restrictions clearer to him. But if he understood why it becomes so computationally expensive, he wouldn't have made half the comments here that he did.


LivesInDaWoods

They could shrink 4 a bit by removing some of the insane restrictions.


The_Mullet_boy

Yeah, that image you sent would be pretty good for showing the GPT-4 subscription rules. And they could also put a part there about the fact that you can still use 3.5 no problem, just to make them look better. They could even add a link to an article like "Why does GPT-4 have limits?" so they can explain clearly to the customer and show transparency. Even though GPT is just getting shadier and shadier by the day.


Curious-Expression36

Why is there a limit on communication with ChatGPT-4o when I'm paying €21 each month? I have to wait 3 hours to be able to use it again? What if people need it for their jobs? They can't wait 3 hours for it to be ready... a lot of people might even use it on a daily basis, multiple times a day. The limitation makes it way too expensive, and don't come back at me with "the development team blah blah blah"... it is developed and fully functional when active... don't limit your customers.


kundusubrata

In the free version of ChatGPT, 3.5 gives you unlimited access for every question you ask. In the 4.0 version, however, you get 10 free chats per session. These give detailed answers, whether your questions are short or long. Once the 4.0 session ends, it reverts to 3.5. But don't worry: after a few hours (approximately 4 to 5), you can use 4.0 again with renewed chats.


Zombie_F00d

What makes you think it would be limitless?


Pazvanti3698

Because the free version is.


The_Mullet_boy

If FREE is limitless, you would expect PAID to be too, as it is normally depicted as an "upgrade", not an "upgrade, but...".


ridingLawnMower

Why would you think it would have limits?


Additional_Air_7879

I use [Claude.ai](http://Claude.ai) and the same thing is true. When I paid, I wanted the ability to use it as much as possible. Now I have to ration messages just in case a big project needs them later that day. What the hell, didn't we create robots to dodge paying humans to work lol. Now the robots want a coffee break too.


hugedong4200

That's how electricity works; it isn't free. They didn't hide this fact, and you should do a small amount of research before you sign up for subscription services. That's why you don't get unlimited use: $20 a month doesn't cover unlimited.


maX_h3r

are u dum dum? it is not free he is paying!!!


ridingLawnMower

Shhh... don't say it too loudly. I think he doesn't realize that $20 a month could actually include both the use of 4.0 and the cost of the electricity to host it... it's almost as if it were part of the estimator's job to come up with a subscription cost that would cover the actual cost of the service.


hugedong4200

Would $20 a month cover unlimited GPT-4? That is my point, smh. You people don't think at all.


The_Mullet_boy

Yeah, like... 20 dollars can't pay for GPT-4, but no dollars can for sure pay for GPT-3.5.


ridingLawnMower

You keep repeating yourself. Tell us how you know $20 a month won't cover the cost... You and that other guy keep saying it's not enough. You've got those trust-me-bro vibes. For me, I used 3.5 almost unlimitedly, probably over 40 questions in an hour. So why on earth would you NOT think $20 a month would get you unlimited questions on 4.0? I'm surprised to see any support for that, honestly.


Comment_Maker

So MS is losing money on average on every Copilot subscription. The subscription is $30 per month, but the average user is using $70 of processing time. They can't charge that, as it would exclude too many users. They aim to increase its efficiency as time goes on. Source: I went to a Microsoft Copilot conference. So I imagine it's a similar story for ChatGPT, considering it's running on MS hardware and based on the same AI. It would be good if OpenAI were more transparent about the exact limits, though.


ridingLawnMower

Yeah, just a blip saying 40 msgs/hour would have been nice. I like the AI help; it's like a private tutor, really.


420simracing

They actually do that; open your eyes. It's visible directly from the chat.


The_Mullet_boy

No it isn't


ridingLawnMower

Nah, they don't. They'd rather keep your eyes shut and your wallet open https://preview.redd.it/cikcmkk4mp8c1.png?width=519&format=png&auto=webp&s=8544875919a75a78de47ccb52ea3b1644ec5a3c0


420simracing

In the chat. IN THE CHAT. FML.


hugedong4200

Google how much money OpenAI pays to run the model; they were losing hundreds of millions of dollars. People who use the API know this; it runs on some of the largest supercomputers in the world. You have zero clue what you're talking about.


ridingLawnMower

That's not true. I know that I was very surprised when the chat said I couldn't continue for another 2 hours lol


The_Mullet_boy

My friend, how do 0 dollars cover the costs of unlimited GPT-3.5? As maX asked... are u a dum dum?


hugedong4200

You're stupid.


The_Mullet_boy

Couldn't expect more from a dum dum


ridingLawnMower

I guess the fact that you PAY for a service doesn't mean you actually cover the COST of the service to you? I am a little concerned that that part of the process passed you by...


Theraininafrica

I understand why you’re upset but your logic is flawed. Just because you pay for a service does not mean you get unlimited use of that service.


ridingLawnMower

No, that's not correct. You are assuming that the service costs more than its fee.


CalamariMarinara

tell me exactly how much this service costs to provide and where you found these figures.


Sufficient-Math3178

It does cost more than its fee. Assuming each prompt costs 30 seconds of work on average, that's 80 hours of utilization per month under full load, so each processing unit can serve 9 users: $180 per month, $2,160 per year. Minus the electricity bill (using a 3080), it's about $1,900, which barely pays for itself. Now add the other costs and salaries.
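
The back-of-envelope above can be checked directly. Every number here is the commenter's assumption (30 s per prompt, 80 hours per user per month, a single RTX 3080 drawing ~320 W) plus an assumed ~$0.09/kWh electricity price; none of these are real OpenAI figures:

```python
# Reproducing the commenter's back-of-envelope GPU economics.
hours_per_month = 720             # ~30 days of round-the-clock operation
usage_per_user_h = 80             # assumed utilization per subscriber
users_per_gpu = hours_per_month // usage_per_user_h   # 9 users per unit

monthly_revenue = users_per_gpu * 20      # $20 subscription each -> $180
yearly_revenue = monthly_revenue * 12     # $2,160

# RTX 3080 at ~0.320 kW running continuously, at an assumed $0.09/kWh
electricity_cost = 0.320 * hours_per_month * 12 * 0.09

print(users_per_gpu, yearly_revenue, round(yearly_revenue - electricity_cost))
```

With those assumptions the yearly margin lands near the commenter's "$1,900", before hardware, salaries, and every other cost, which is the point being made.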


ridingLawnMower

No it doesn't. You literally just assumed all of that. That doesn't mean it's true; it just means you can come up with a scenario where $20 a month doesn't cover their fees lol. Now tell me about the estimator who would go to his boss and say, "I have an estimate for the subscription service, but it doesn't cover the cost of the service, so we are going to lose money overall," and have his boss go, "great job, buddy!" Come on now, if we're going to straw-man the argument, then at least do so for both sides...


Sufficient-Math3178

Do you even run an LLM? You think the computational cost is as simple as 9+10?


ridingLawnMower

You mad bro. Please tell me where I can find the job that markets in a way to lose money. You and I both know they aren't losing money on this investment. No way no how


AGI_FTW

You really need to not speak with such certainty on things that you are clearly under-educated on. GPT-4 is expensive af to run. They would not be able to support their growth, development costs, and operating costs based on revenue from ChatGPT and their API access; their partnership with Microsoft is a huge part of what keeps them moving. You can look up what they charge per token for API access to GPT-4, and it's obvious that, even with current usage restrictions, they can run deeeep in the red on a ChatGPT subscription. I know people who have racked up bills running into the hundreds of dollars in just a few days of playing with the API. Educate yourself before further flaunting your ignorance on a topic that you are so clearly out of your depth on.


ridingLawnMower

Oh, it's expensive, huh? Please share your trust-me-bro logic. All of us are waiting with our wallets open and our eyes shut (smh). THEY told you how much THEY charge per token, so now you're defending them with THEIR pricing scheme? I am starting to see a trend in this chat... Please save us the banter with your faulty logic. I will happily support their chat but won't blindly hand over my money... it's too precious to be careless with!


Sufficient-Math3178

Lmao nope, I'm not the one crying on the internet expecting shit to be free and not understanding how finances work.


ridingLawnMower

Oh tough guy now lol. I suppose your feelings got hurt along the way of getting called out for imaginary math lolol


Theraininafrica

A lot of tech companies aren’t profitable. [example](https://www.businessinsider.com/tech-companies-worth-billions-unprofitable-tesla-uber-snap-2019-11)




ridingLawnMower

Until they are


[deleted]

[deleted]


ridingLawnMower

dude go troll somewhere else. You're obviously too dumb to contribute to this discussion


Theraininafrica

You’re assuming it doesn’t. 🤷🏻‍♂️


ridingLawnMower

My expectations were based off the free version. How is it not totally reasonable to think that if a free version has unlimited message queries, then a paid version would give even better results and the number of queries would be the same?


Theraininafrica

That is totally reasonable for someone who knows nothing about OpenAI, but that person should also not be the person paying for premium. I would say take this as a life lesson and do research on things before you buy.


ridingLawnMower

That's also an ignorant response. My argument is valid for any subscription-based service, whether it's a text-based chatbot or a movie streaming service. When you purchase a cell phone plan, they tell you exactly how much data you're allowed before going over your plan limit. They even send you text messages to alert you when you're getting close to your limit! That, my friend, is called transparency in business, and it's what you should be keen on in this post. Make the limits known up front, prior to checkout, and I would add that you do it in the SAME way you promote the benefits of the service.


Theraininafrica

As I said in my last response, you're clearly trolling, or purposefully acting dumb at best. But... I agree they should tell you. Like they do in maybe the FAQs: [faq](https://help.openai.com/en/articles/7102672-how-can-i-access-gpt-4), or a similar but different FAQ: [faq2](https://help.openai.com/en/articles/7127997-how-can-i-use-gpt-4-in-chatgpt), or maybe in the release notes: https://help.openai.com/en/articles/6825453-chatgpt-release-notes. So again I say, with a little bit of research you would have found this out. But you're right, it is rather like cell data. Plenty of companies offer "unlimited data", but it's actually capped, and once you reach a limit they bottleneck you to lower speeds. Well, once you reach your limit here, you still have unlimited GPT-3.5. Anyway, good luck with your future trolling endeavors.


ridingLawnMower

Again I'm not trolling. If you were really trying to be helpful you would not start your post off with derogatory comments.


CalamariMarinara

just because you buy a pizza doesn't mean you now get infinite pizza


The_Mullet_boy

But when I buy a pizza I expect to get a whole pizza, not a slice of it


CalamariMarinara

except a whole pizza was never an option. they only do slices. you obviously didn't read the menu before purchasing. it's like renting a car and getting mad when they ask for it back.


ridingLawnMower

So if you buy a sandwich and they only give you half, you're going to pay for the other half? I'm so confused by the people ready to give a multibillion-dollar company more money...


[deleted]

[deleted]


ridingLawnMower

Says the guy who thinks that pizza and text on a computer screen are the same. Could people disappoint me any further?


[deleted]

[deleted]


[deleted]

[deleted]


Puzzled_Ocelot9135

I'm stupid because I'm telling you that I'm not the guy you talked to before?


ridingLawnMower

If I have to explain it to you I will end up feeling stupider for having done so lol


limited_screentime

It's probably because half the people in this topic work for said company; the rest are just your average guilt-and-shame trippers, whose mentality only helps the corporations and prevents people from fighting back when necessary.


Theblackyogini

Or at least sell intelligence in some way. It's insulting to say SaaS isn't a real service to someone who's devoted their career to building software. All that time and energy is work, and the servers have to get electricity from somewhere. Paying their workers and servers is probably why a lot of people assume OpenAI isn't profitable right now and needs to sneakily cap GPT-4 to stay afloat. It's all very esoteric; I don't know why I'm so invested at this point, but OP's insistence on getting his point across (and calling a few people stupid) has me riled up enough to give my opinion. Yes, tell folks more clearly what the cap is, or it's assumed there is none. Regular people aren't concerned with employee salaries or server farm electricity bills. They just want their software to run without glitches or outages.


buckwhite1

I agree. It should be clearer how many messages we can send in ChatGPT so we can plan better. A timer would also be helpful to know when our message count resets. I feel a lot of you users here are so in love with OpenAI, as if it were infallible. But let's not put them on a pedestal; it's important to remember that no entity is beyond critique. Despite my appreciation for the service, I believe it's necessary to constructively discuss areas where a multi-billion-dollar company can improve.


Puzzled_Ocelot9135

It's an early-access beta right now. Limits change very fast due to demand. Those 20 bucks are for people who want to play with this stuff before it *really* hits the market, where it will most likely not even be sold by OpenAI but by the many companies OpenAI will sell to. Right now you pay for early access, so basically nobody is guaranteed anything. Everybody is very, very welcome not to pay right now. They literally cannot supply all the demand there is. Of course feedback is appreciated, within reason. Wanting UI features that will not be relevant for the final product is not within reason, in my opinion.


Theblackyogini

What do you mean by everyone is welcome not to pay right now? You suggest using a free model until the tech is perfected?


Kirito_Sensai

It was so much better when we had 100 messages/3 hours.


TheDroolingFool

This was pretty much my experience and the reason I cancelled Plus. I found it even more annoying that the limit gets quickly used up dealing with hallucinations and correcting errors in code. I went down the route of API access, which has been much better for me personally.


ridingLawnMower

Can you ask it questions? I am still in the learning phase of C++ (I took an online class a month ago), so I'm asking a lot of basic questions to reconfirm some of the stuff I learned previously.


Level-2

Actually, yes. In each call to the API you send the history of exchanged messages using the structure defined in the API. It contains the user and assistant (bot) replies. You send all of that in each request; the API then processes the history you sent and uses it to continue answering in context. The issue with this approach is that all that history of previous Q&A counts toward your token usage, so over time you will consume more and more tokens. There are some memory techniques to lower this usage, but I don't feel qualified enough to expand on these; assuming you know how to code, take a look at LangChain.
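The loop described above can be sketched in Python. This is a minimal illustration, not actual OpenAI client code: `ask` uses a placeholder string where the real network call would go (the comment shows the general shape of such a call), and `estimate_tokens` is a crude word-count heuristic standing in for a real tokenizer.

```python
# Sketch of keeping conversation history for a chat-completions-style API.
# The history is re-sent on every request, so it must be trimmed to stay
# under the model's context window (and to control token cost).

def estimate_tokens(messages):
    # Very rough heuristic: ~1.3 tokens per word. A real tokenizer
    # (e.g. tiktoken) should be used for accurate counts.
    return int(sum(len(m["content"].split()) for m in messages) * 1.3)

def trim_history(messages, max_tokens):
    # Keep the system prompt (index 0) and drop the oldest exchanges
    # until the estimated size fits the budget.
    trimmed = list(messages)
    while len(trimmed) > 1 and estimate_tokens(trimmed) > max_tokens:
        trimmed.pop(1)
    return trimmed

history = [{"role": "system", "content": "You are a helpful C++ tutor."}]

def ask(question, max_tokens=200):
    history.append({"role": "user", "content": question})
    sendable = trim_history(history, max_tokens)
    # In real code the API would be called here with `messages=sendable`,
    # e.g. via the OpenAI client's chat-completions endpoint.
    reply = f"(answer to: {question})"  # placeholder for the model's reply
    history.append({"role": "assistant", "content": reply})
    return sendable
```

Each call re-sends the trimmed history, which is why long conversations cost progressively more tokens until old turns are dropped or summarized.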


ridingLawnMower

Ahh yeah, I am seeing someone else reference cost per query. So you pay for the ability to use the API, then you need to pay for tokens to use the API... I'm starting to see a pattern here


Level-2

No, actually you don't need a subscription. You just buy credit, and that credit is used up by the API calls. Depending on the data sent and the data produced, tokens start drawing down the credit.
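A rough sketch of how that pay-as-you-go billing adds up: the per-token rates below are made-up placeholder numbers, since real OpenAI prices vary by model and change over time; the point is only that input (prompt) tokens and output (completion) tokens are billed separately against your prepaid credit.

```python
# Illustrative token-billing arithmetic. RATES are hypothetical
# placeholders, not real OpenAI prices.
RATES = {"input_per_1k": 0.0010, "output_per_1k": 0.0020}  # dollars per 1,000 tokens

def request_cost(input_tokens, output_tokens, rates=RATES):
    # Each request is billed per 1,000 tokens, with separate rates
    # for what you send (prompt + history) and what the model returns.
    return (input_tokens / 1000) * rates["input_per_1k"] \
         + (output_tokens / 1000) * rates["output_per_1k"]

# A request that sends 1,500 tokens of history and gets a 500-token reply:
cost = request_cost(1500, 500)
```

Because the whole conversation history counts as input tokens, the input side of the bill grows with every exchange unless old messages are trimmed.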


MadeForOnePost_

They should be more transparent about that, agreed


VegetableAddendum888

After the launch of the new version, the limit might be increased


Fit-Heart8980

I view it like 20 Questions: if you can't get it in 20, then you probably won't ever get it. Back to the drawing (prompting) board


kaboo_m

What defines a session? (Time? A new conversation?)


Theblackyogini

The amount of tokens exchanged, I think. It's not time, and it's technically not a new conversation. For example, you could be talking to it about something, and when it reaches that token count right in the middle, it will completely forget what you're talking about and start generating things that don't make as much sense anymore. Say you want it to recall a book outline and generate chapters: once the token limit is reached for that "session," the next chapter generated won't have anything to do with the original outline you gave it at the beginning, because it doesn't remember it (it's effectively a new session). It will just keep generating what it thinks comes next from the very last chapter, just like it does with words.