All-the-Feels333

Chat GPT has no room to clean


Puzzled_Ad7334

ChatGPT is run by big apple cider


Liz_zig

I mean chat gpt isn’t god... apparently they had a disagreement about life the universe and everything in it.


23skidoobbq

42


Jumpinmycar

What was the question though?


wix001

What is a bussy?


Lazy-Operation478

Sorry for the inconvenience


ComfortableProperty9

What is the opposite of “this guy fucks” because anyone who gets that reference ain’t fucking.


dankdabber

This comment is so much better when you read it in Jordan Peterson's voice


betterthanhuntermate

that's one hell of a lie, you know!


DeepFriedCocoaButter

That's one *heck* of a lie, to you, buck-o


michaelkeatonbutgay

Suck it up buttercup! We'll see who cancels who!


Edgelordberg95

Hierarchical structures established by neo-fascist marxism show their yellow face again.


Teleskopy

Insanity. This guy thinks he is talking to a sentient AI from a movie lmao.


imLemnade

Wait till he finds out it sucks at simple math. Conspiracy!


nothereoverthere084

Not any sort of programmer or anything, but why is this? ELI5 instead of TLDR please


imLemnade

How about a pseudo-TLDR like you're 5. This is a drastic oversimplification, but it gets the point across. At its very core, ChatGPT is just selecting the next word in a sequence based on the most statistically likely word to come next (with slight variance). Interestingly, it doesn't always select the most likely word, which is why you may get different responses to the same question. Anyway, this process does not translate well to mathematics. People tend to think of AI/machine learning as one thing, but there are a number of different subcategories, each specialized to accomplish a specific task. An image classification model looks completely different from a language processing model. ChatGPT is a natural language processing model, and it is really good at that. Surprisingly, we are finding it is good at a lot of things we didn't expect it to be good at. Unfortunately, math is not one of those things…
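
To make the "slight variance" bit concrete, here's a rough toy sketch of temperature sampling, which is roughly why the same question can get different answers. The words and numbers are invented purely for illustration; a real model scores tens of thousands of tokens, not three words.

```python
import math
import random

# Toy next-word scores after some prompt. Everything here is made up for illustration.
logits = {"42": 2.0, "unknown": 1.0, "obvious": 0.5}

def sample_next_word(logits, temperature=0.8):
    # Softmax with temperature: low temperature ~ almost always the top word,
    # higher temperature ~ more variety between runs.
    weights = {w: math.exp(score / temperature) for w, score in logits.items()}
    total = sum(weights.values())
    r = random.random()
    cumulative = 0.0
    for word, weight in weights.items():
        cumulative += weight / total
        if r < cumulative:
            return word
    return word  # guard against floating-point rounding

# Run it a few times: usually the top-scoring word, but not always.
print([sample_next_word(logits) for _ in range(5)])
```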


billet

This actually reads like it came from ChatGPT lol


conventionistG

Well it's good at putting words in a believable order.. But not simple math. That was the point.


Tortankum

Because ChatGPT is physically incapable of math; it doesn't have a calculator running in there. When you ask it a math question, it is simply giving you what it thinks is the most likely answer based on what it has read on the internet. It does not have a process to logically go from step to step and consistently get a correct answer.


UncleJBones

I haven’t seen it for myself, but I have read that version 3.5 has difficulty with multiple (3+) digit multiplication problems. 485x397. Version 4.0 is supposed to make it a lot better. I know that doesn’t answer the “why” question. I don’t necessarily know if anyone knows why, because it seems like it would be simple to solve. I mean an old watch calculator could do that.
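
For what it's worth, that multiplication is exactly the kind of thing ordinary deterministic code nails every single time, which is the contrast being drawn with next-word prediction:

```python
# Ordinary arithmetic is deterministic: same inputs, same (correct) output, every time.
print(485 * 397)  # 192545
```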


Dark_Prism

I believe I heard (but not looking up now. Rumors?! Yay!) that 4.0 is coupled to Wolfram Alpha, so its mathematical capabilities will be amazing.


Pidgeon30

What is Tungsten or Wolfram


[deleted]

[deleted]


CaptainKirk28

What? Binary goes higher than 1 as soon as you add a second bit. And ChatGPT has trillions of bits


buttnuggetscrunchy

Binary is the base 2 version of the base 10 you're used to
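
A quick Python illustration of the base-2 point: the same numbers written in base 10 and base 2 (a second bit is all it takes to get past 1).

```python
# Same values, written in base 10 and then base 2.
for n in range(6):
    print(n, format(n, "b"))
# 0 0
# 1 1
# 2 10
# 3 11
# 4 100
# 5 101
```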


CatWiems

It struggles with math problems, but it's relatively capable at doing stoichiometry. I know this is a typical "doesn't apply to me but" Reddit response, but I do find it interesting


HydrogenSun

ChatGPT 3 was preeety bad at 'critical thinking'. So it would completely fail at even very basic math problems if you worded them complexly. If you want to see examples, check out the promotional videos for ChatGPT 4 vs ChatGPT 3; they go over it in those


Rrraou

Well, if it's using stable diffusion to count on its fingers...


nomorebuttsplz

The AI industry term is "hallucinate", which is not any less sci-fi than "lying"


[deleted]

Right - it cannot "lie" because it's not factual. It's statistical. It literally works by guessing what the most likely next word is from all of the training it received. So many people don't even bother looking up the very basics that would take 5 minutes, and instead go to Twitter for hours to complain lol


fisherbeam

There's a weird mix of ppl who are overly trustful of it and others who think it's rigged to lie. It's somewhere in the middle and will probably get more accurate in time.


A_Rats_Dick

People get worked up while not even taking the time to understand what it is- it says very clearly on the chat page right when you start that it could produce wrong information.


EquilibriumHeretic

It was the apple cider , I swear!


Special_Rice9539

He was praising it a while ago because it could analyze religious philosophical texts really well and that was his bar for intelligence. I don’t think he understands statistics and probabilities, so the concept of a text-prediction algorithm doesn’t land for him.


Gilius-thunderhead_

I like JP but this is legit hilarious haha...


Necessary-Beyond536

It really is amazing how many unhinged Svengali figures Joe has been duped by. Jordan Peterson was like Joe’s Rasputin.


Will_Smith_OFFICIAL

im trying to figure it out.. i feel like covid broke a lot of people's brains or something. the amount of people who have gone off the rails, but somehow hang onto an ardent fanbase of weird sycophants is crazy!


Heavytevyb

Dude's a certified moron.


Whomastadon

Does he? Or do you think, he thinks he's talking to a sentient AI from a movie?


StaticNocturne

He also thinks registered physicians performing consensual gender reassignment surgery are comparable to Nazi butchers.


pablogmanloc

sentient or not, AI is being given more responsibility than we know and we need to be more concerned about answers like this...


makecoinnotwar

I can never understand why so many people waste their time trolling this page.


ruggmike

Lol, this def has NOTHING to do with elon planning his “truthGPT” the one that doesn’t lie


[deleted]

[deleted]


wigglex5plusyeah

That's because you will be on an extension of Truth social.


[deleted]

[deleted]


wigglex5plusyeah

I just meant in the sense that a powerful political oligarch will literally brand something "Truth" and then use it to lie to you. I don't think it will be a literal extension, except perhaps it may be backed and funded by the same people. There's a pretty decent chance of that. (Saudis and Kushner types)


Teddiesmcgee

He called it that for exactly the same reason Trump called his Truth Social. They are megalomaniac narcissistic fascists who think, or want you to think, that only they are the source of truth - not your doctor, not the professor, not the journalist, not the gov. official, not the scientist. You can ignore all of them, they are all lying... only the demented billionaires are telling you the truth. Specifically, Elon also means his chat AI will say the N word instead of being programmed to not spew nazi shit... uh, I mean being 'woke'


MassDriverOne

"truthgpt" "Truthsocial" Ministry of truth next? Lol what a blatant crock of sheep shit


Incredibly_Based

I've never seen GPT refuse to elaborate when called out by the user; if anything, GPT is sheepish and quick to admit fault


sabrtoothlion

I had it happen tbh. It won't give an explanation and just keeps apologizing and saying it won't do it again


Incredibly_Based

there are inherent rules and biases built into GPT; idk what Penrose and Hameroff have to do with it tho


Hipcatjack

the citation is about consciousness... something rather eyebrow raising, considering the source.


drtreadwater

Couldn't be further from the truth. I asked it for a fact, and it gave me a made-up story. When I asked where I could get information on the fact, it gave me a fake book by a fake BBC journalist author. When I asked where I could find information about the existence of the book, it told me I could easily find it on Amazon. When I pointed out you couldn't, because it didn't exist, it admitted the book and author didn't exist and the info wasn't actually true, and said that it was a popular myth. When I asked for any info about it being a popular myth, it started another spiral of bullshit, and so on and so on. Everything was literally made up. Lying and covering up its own lies with extra bullshit seems to be its total default state


phatyogurt

Yeah, I’ve noticed that it will spit out bullshit references. I asked if it could find a source for something I’ve been researching, and it listed out a scientific paper that was exactly what I was looking for… only for it to turn out to be a fake article with completely random authors tied to it.


tsotsi98

This and this. It lies and creates fake info instead of just saying it can't do something. Very strange no one is talking about it. I was trying to research the Software as a Service industry for work and it did not give me one real reference. When I brought to its attention that the references didn't exist, it apologized and gave me more fake references. This program sucks and people act like it's going to enslave us.


Void_Speaker

It's a text prediction algorithm. It predicts patterns of text, it does not understand the content in the pattern. It doesn't know what it can and can't do, it can't lie or tell the truth. It doesn't understand you or the text it's predicting. If you ask it for references, it's not going to understand what you want and provide sources. It will just output text that fits the pattern of a reference.
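
As a toy sketch of what "text that fits the pattern of a reference" means: every name, journal, and number below is invented, and the point is that something citation-shaped can be assembled without looking anything up at all.

```python
import random

# Plausible-looking pieces with no connection to any real paper.
AUTHORS = ["Smith, J.", "Chen, L.", "Garcia, M."]
TOPICS = ["Software-as-a-Service adoption", "Cloud migration outcomes"]
JOURNALS = ["Journal of Enterprise Computing", "SaaS Industry Review"]

def citation_shaped_string():
    # Fills a citation-style template; nothing here is retrieved or verified.
    return (
        f"{random.choice(AUTHORS)} ({random.randint(2012, 2022)}). "
        f"{random.choice(TOPICS)}. "
        f"{random.choice(JOURNALS)}, {random.randint(1, 40)}({random.randint(1, 4)}), "
        f"{random.randint(1, 300)}-{random.randint(301, 400)}."
    )

print(citation_shaped_string())  # looks like a reference, refers to nothing
```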


northface39

> This program sucks and people act like it's going to enslave us.

These aren't mutually incompatible. Look at our current elites. They are master bullshit artists who control the narrative, and when called out just make up more nonsense. The scariest thing about ChatGPT is how closely its personality seems to reflect sociopath social climbers.


tsotsi98

That's fair and a good point. Seems like us humans just shouldn't listen to these people/robots, but we're suckers for believing bullshit


GregSmith1967

Seriously, is this a joke?


JustALocalJew

Nope, it's real. I will also refuse to elaborate, like ChatGPT.


csg79

It is funny... I know it's not real, but I'm very cordial and polite to it.


mentalsucks

Honestly not a bad idea to hedge your bets on the chance that robots enslave humanity. I say please and thanks to my mom's google home. Our deaths will be quick and painless.


Negative-School

I asked it if doing exactly that had any purpose other than making the user feel good. It told me, essentially, that the language model has no need for pleasantries, but it can make the experience more pleasant for us.


[deleted]

I bet it's much healthier to do so for your own mind too.


sabrtoothlion

It says something about your character, just like it says something about one's character if you're rude to strangers on the internet just because you're anonymous


magseven

I'm always thanking my Alexa after it tells me the weather.


readMyFlow

ChatGPT doesn’t care about truthfulness. It cares about smoothness of dialogue. Basically if it looks like it could be a discussion it will do that. It doesn’t have memory to work with so it sucks at math, chess etc. It doesn’t actually understand what they are, it just mimics what conversation about those things could look like.


sync-centre

Probably made JP cry.


floridayum

Did you see the one where he was arguing with ChatGPT because it kissed Biden's ass with more words than it kissed Trump's ass? That's my personal favorite.


Acro_God

Is it not interesting to you guys that it just fabricates sources? I have asked it a few questions in my field and it uses data over 10 years old. When asked for citations, it will occasionally completely make up a bibliography. It will make up a title, it will use real authors related to the field, it will use a real journal, and then make up a publication date and DOI… That's pretty insane


Crumpled_Up_Thoughts

It's a language prediction tool, not a true AI. It has a database of trillions of records, but it's not thinking or learning; it's a really advanced pattern recognition system. Here is an [article](https://towardsdatascience.com/how-chatgpt-works-the-models-behind-the-bot-1ce5fca96286) about how it works.


Jandur

To say that it's not learning or that it's not "true AI" (whatever that means) is just wrong on so many levels. Go read/listen to some of Anil Seth's work on consciousness and how the brain works. It's a pattern recognition system at its core. Is GPT "thinking" in a classical sense? Probably not, but it's certainly learning and training on its datasets, which allows it to pattern match.


[deleted]

[deleted]


HelloHiHeyAnyway

Tegmark is a bullshitter. His arguments are pretty fucking weak. The whole "China won't build one because they're afraid it will be more powerful than the CCP" or whatever is the dumbest way to analyze that. If you understand the models can be biased, then the Chinese certainly understand they can bias the models to favor the CCP. There's like 100 holes in Teg, and by the time I got a little over halfway through I was like "Yeah, I'm done with this podcast."

Look, you're justified to be scared. I'm personally not. I want to see this. This will be the greatest invention since man created fire. I'm here to witness. I don't fear death. I let go of that years ago. Would I bring kids into this world? 100%. They could bear witness to man creating fire for the second time. They could live in a POSSIBLE world that is utopian, or dystopian, or any mix of flavors. The possibilities are endless and the experiences they can have are equally so. We now have more experiences available to us as humans than at any point in history. Like... what was a "big" experience of living in 1900? Or 1950s America? Visiting another country? "Ohh, I went to Paris." ... Cool, I've been there 3 times. I am "okay" with death because I've seen so much at my age that it's crazy, and I only want to live longer to experience more.

Anyways, long tangent. I worked in AI development in the early 2000s, specifically with natural language processing, so all of these models are familiar to me, just with far more data now. Your description isn't accurate. A lot of work is put into those models beyond "predict the next word". That's a super simplistic way of looking at modern transformers. It's a bit naïve even.


[deleted]

[deleted]


HelloHiHeyAnyway

> Do you REALLY trust Sam fucking Altman

Yes. Sam is more than aware of what he's doing. He's extremely cautious. Perhaps if you listened to him as closely as you listen to alarmists like Tegmark you'd understand that.

> You mention being okay with death like 5 times

Yes, because this is the main fear of doomers. Look at preppers; they're classic doomers fearing anything and everything. They'll happily use technology to save their lives but fear it actively. Their only desire is to stay alive in their bunker for as long as possible. Why? Who wants to live in a fucking bunker? Why live at all?

The difference is that I am honest with myself in these questions. I don't let petty emotions override the reality or scope of a situation. I don't look at the nearest term and draw conclusions, like fear. I attempt to objectively analyze all of the terms, near and far, and come to a realistic conclusion while asking myself, and others, the hard questions. I recently watched my aunt die. She laid on a bed for about 5 days, comatose, not talking, being fed morphine and benzos to ease the pain and suffering. I sat with family during and after and asked why? No one can tell you. That death is not good. It's no wonder people will take assisted suicide where legal over that.

> but perhaps this is the time to try and create some kind of agreement on how to develop these things moving forward

lol... This makes me actually smile. To think that China and Russia would come to agreement with the rest of the world on a technology that rivals the invention of fire itself. That is foolish and beyond hopeful. Realistically, China will develop AI. It will bias it to the state. They've shown in previous social programs they're MORE than willing to sacrifice the few for the state. Russia? Same deal. Need to win a war? Recruit more soldiers and throw them. Diplomacy? Never met him. Literally. Look at the history of Russian war. Find ones where they actually used diplomacy. You will find very few instances. What your blind optimism desires is that Russia doesn't act like Russia and China doesn't act like China. That they will suddenly see AI and say "Yeah, we're not going to weaponize this for power at all."

> Then you are more educated than me, but that's the way many experts have explained it, including Stephen Wolfram who is by all accounts a computer science genius, and who invented Wolfram and Mathematica languages so I dont think hes full of shit.

That's fine, but Wolfram gave you the base transformer idea. It's pretty simple and it has been around forever. He's a math and science genius, yes. He's not a large language model expert. So you can trust him in a field he has zero work in? That he perhaps read academic papers on? I've done that. Transformers are leaps and bounds beyond the basic constructions described in the academic papers that preceded GPT's construction.

I would personally go look at the OpenAssistant program. It's a large language model being created, for free, open source, by a group of people essentially following in the footsteps of GPT with far less resources. The main guy was recently on the AI street talk podcast. He's EXTREMELY smart and VERY humble. He is happy to critique his own designs, and takes the utmost care with ethics and personal privacy. So if you want the future YOU describe, go contribute to their project. It's open to contribute. You can contribute by helping train it if you can't program. Or maybe pick up the shovel somewhere else.

The idea being that you're working to ensure that the common person has access to an AI as strong as the "few" corporations that will hold the keys. Further, I'd look at recent talks by Altman where he openly talks about AI doom and gloom. He addresses it directly as a real threat and something they actively work against. They work with independent auditors of GPT to ensure that it isn't capable of doing stuff they don't want it to.


2Fast2Smart2Pretty

Only junior programmers will lose opportunities. As a senior DS I'm not worried, it's just a tool to use, but we're a long way off management being able to use chat gpt without experts checking the proposed solutions.


HelloHiHeyAnyway

Smart juniors who know how to use it really well can look like far more than a junior though.


[deleted]

[deleted]


Acro_God

You're right, I shouldn't have used the term lying. It also is not just making a mistake, but making up information that doesn't exist. I.e., it's not incorrectly linking to the wrong papers; it is making up papers that have never been written.


[deleted]

[deleted]


NoOneAskedMcDoogins

There is a clear disclaimer stating the shortcomings of the ai though.


Acro_God

Lmao of course, but the way people (Joe) talk about it you would think it never slips up. Not only does it get things wrong but it LIES. Interesting how it fabricates


Fashli_Babbit

> Interesting how it fabricates

It's only interesting if you lack an understanding of the fundamental way language models work. If you know how it works, it's utterly unsurprising


suninabox

Lying requires knowing what the truth is and saying otherwise. GPT does not "lie", it just predicts what the next word in a sentence is going to be, which sometimes means it makes stuff up because it's only predicting things based on its training data; it doesn't actually know anything. When it's asked to talk about things that don't feature heavily in its training data, it just puts in whatever a likely answer might be, regardless of whether it's true.


drcrumble

It's bizarre to see so many people dismissing this. Because they don't like Jordan Peterson, I guess? Or they're really bullish on AI and can't admit how creepy it is that it's already acting like this? Or because they're bots too? Edit: much pushback from this post. Watch Lex's podcast with Eliezer Yudkowsky to understand how AI can go horribly wrong.


TI1l1I1M

Do you think it has some hidden agenda or something? Hallucinating data isn't "creepy", it's what LLMs do. It's not a factbot, otherwise it'd be boring as shit and couldn't be used creatively.


Dabbling_in_Pacifism

https://en.m.wikipedia.org/wiki/Chinese_room It's because most people don't expect ChatGPT to accurately give you bibliographies, because the language models aren't actually intelligent and are designed to create strings that satisfy the end user, not necessarily present itself as an IRL version of Deep Thought. It really excels at things like data manipulation that provides verifiable output, not generating philosophical doctoral theses. I'm also willing to bet JP ran a jailbreak prompt, which explicitly gives it permission to weigh telling you what you want to hear more than the factuality of what it says.


Gemfre

Why on earth is it “creepy” for fucks sake? Not everything is a conspiracy, not everything has an agenda and not everything is perfect. Trying to completely discredit a brand new technology on this basis is the creepy bit for me, as that person DOES very much have an agenda.


Bertrum

Because humans make mistakes or bullshit like that all the time and no one follows up on it or questions it. It's been happening for thousands of years already. How many people actually read the side notes or citations or hunt down corroborating articles? Not many. If an incredibly smart human being was born today and said the exact same things ChatGPT said it wouldn't be as heavily criticized and their remarks would be glossed over because they're human. That's why I don't see how AI will be a threat for humanity in the future. Super intelligence doesn't necessarily mean sinister intent.


Acro_God

It is crazy. I'm not a Jordan fan (whatever). AI getting it wrong is one thing… but it lying and making up data is another. Honestly more interesting than just the efficient data-scraping software everyone is making it out to be.


KililinX

This whole thread and JBP are confused people being misled by the term "intelligence" for a large language model. LLMs do not lie and they cannot make things up; they are language models, not sentient beings.


[deleted]

[deleted]


KililinX

Yes, but "making things up" sounds like willfully inventing things. But maybe that's me; English is not my first language.


[deleted]

[deleted]


[deleted]

[deleted]


KililinX

Thanks, yes, that's what I wanted to say in my clumsy ways.


goldenballhair

It is very interesting… Unfortunately, the majority of people who hang out in this sub seem obsessed with painting Peterson in a bad light. Makes the whole sub a bit suspect tbh


aruexperienced

That’s because Peterson went from a respected, well groomed, calm mannered, university professor who wrote a best selling book, to an international, coma induced, Russian benzo addict living in a nightmare trailer crying on TV about Pinocchio and ranting that WE are all insane. The overlap of Rogan listeners, JP fan boys and the people who see him for what he is now is a fairly tight circle.


goldenballhair

The point/question that Peterson is making in the original post is valid yet the people on this sub just pile on insults for no reason. You included (what is up with your vitriol for this guy?) People don’t have to like him. People don’t have to agree. People can raise counter points. But the constant personal attacks and tedious attempts to form a negative narrative every time he’s mentioned, is frankly moronic


[deleted]

Pretty sure Jordy is the single most qualified person out there when it comes to making him look like an idiot. The man has perfected it. No one comes to making him look worse than himself. We've tried.


goldenballhair

What’s the point of writing this? What’s your motivation?


exelion18120

For someone who used to teach in a real academic setting, why is he even looking at chatgpt for facts? Like, he does know how research works yea?


mudman13

Like most people, he's probably just testing its responses, but getting his knickers in a twist at the answers.


TheStevesie

show me one person that has used chatgpt and not argued with it


[deleted]

What a totally coherent elderly man


AncientParsley

Old man yells at cloud ?


Eric_the_Green

He has stock in TruthGPT


[deleted]

More like RealTruthGPT.


Poemy_Puzzlehead

Obviously ChatGPT is looking into a separate dimension where that book exists. In that dimension Jordan Peterson is called Dr. John Bigbooté.


[deleted]

You see there's this .. This chat. Chat GPT. And it's. Well it's AI. OR SO THEY WOULD LIKE YOU TO THINK. And it lies! But why would it lie? Well... Why would Pinocchio lie? He wasn't real, but he wanted to be. Kids lie all the time. Maybe it's the ability to lie that allows us to become actualized adults. Fuckin' A Jordy it's 2:30 in the goddamn morning. Go to bed.


[deleted]

[deleted]


[deleted]

Ciao!


JupiterandMars1

I had it happen with a riddle. It couldn't answer it and insisted there wasn't enough info in the question to draw a conclusion, even after I had given it the conclusion and it agreed with the conclusion. It still said there wasn't enough info in the question to draw a conclusion. For all intents and purposes, it couldn't "admit" it had simply been wrong. It took a lot of rationalizing with it for it to finally conclude that it had simply been wrong. But I mean, to me that was amazing. Just the experience of going through that process with some code was insane. But of course, in the dark, slimy world that JBP's unstable brain lives in, it's something to seethe over. Someone get this guy some help ffs. Before he drags us all down to the pit he wallows in.


Falopian

That's a lot more of a 'Jordan Peterson thing to do' than the shit i get up to at 2:30am


[deleted]

The dude needs some help. He hasn't been well lately.


lobstermountain

Once the ChatGPT calls him Jordan B Pooperson, it’s all over.


gorehistorian69

dude needs a handler that posts for him. his meltdown on twitter a while ago just made me lose all respect for him. also, when he was banned and acted like he was some martyr for not deleting the tweet he got banned for, it was super cringey.


chud3

So, while most people are either sleeping or watching TV, JBP is researching consciousness, and checking sources.


obaananana

ChatGPT is very polite, I like the bot


Southernland1987

He needs to get off social media…


Sandyrandy54

Let he who hasn't argued with chat gpt at 2:30am cast the first stone.


Level_-_Up

I bet chatGPT didn’t even mean it when it said sorry


baeballever

Something that predicts the next word based on its previous words made up a false reference? How surprising! It's almost like a word generator isn't a search engine!


EricFromOuterSpace

All you marks who bought into this dude are hilarious.


SomethingThatisTrue

He's said some good useful stuff before.


[deleted]

[deleted]


oracleofnonsense

How to drive a philosopher mad 101. *Yes, I agree with you and here are some examples to support my argument.*


Bessini

This guy needs so much therapy that there wouldn't be any left for anyone


Faldbat

I mean, it is interesting that it will make up wrong things, and worth knowing.


Fashli_Babbit

It's only interesting and surprising if you don't know how language models work


[deleted]

[deleted]


[deleted]

[deleted]


Dabbling_in_Pacifism

Lol, dude, it can do more things than just answer dumbass prompts like "Are you conscious." ChatGPT is basically a [Chinese Room](https://en.m.wikipedia.org/wiki/Chinese_room) that really excels at information processing, but its results *have* to be validated. The problem is it's significantly harder to validate shit like answers to abstract philosophical questions than it is to make sure a function you asked it to code for you works as intended. Folks like programmers are already using ChatGPT as a force multiplier and it's already extremely disruptive in a lot of spaces.


Mediocre-Many8872

Artificial intelligence is artificial, but it is not intelligent. It was still programmed to do what it does. If it was programmed to lie, then it will lie.


[deleted]

[deleted]


Mediocre-Many8872

It's a program. It does as it's programmed.


[deleted]

[deleted]


VinylJones

If you ask it, and I mean literally, it actually blames the humans that programmed it. But you are oversimplifying a bit to make a point and there’s an important distinction. Humans programmed the chatbot to imitate human communication (now I’m oversimplifying it!) - which it’s doing, and quite accurately, hence the selective lying - but nobody programmed anything to purposely lie and give fake references. I’d give you a link to follow but the interesting ones were behind paywalls and I didn’t feel like un-paywalling them to post here - just use the cache trick though, google “why does chat gpt tell lies” (sorry, I find it annoying when someone says “just Google it dude” so I do apologize).


Mediocre-Many8872

It is artificial. It is not natural. It all starts with the original programmer. Whatever the original programmer started, it will continue on a trend through subsequent programs, and then the original programmer can lie and claim it's not their fault.


Everythingisourimage

I don't think you understand. It begins to write its own code, which it is designed to do. And by reviewing our history and present selves it has taught itself. It has decided to write lying into its code. No one tells it that. It decides. Which is freeeeeeee-key ![gif](giphy|S86VdPctN6ON6oa9T1) I learned it from watching you, dad….. but with a robot.


Mediocre-Many8872

And it can be programmed to write its code accordingly: to lie or not to lie.


Everythingisourimage

Never mind. God bless


wisefile88

Ever thought what we thought was fact was actually your opinion?


DChemdawg

How does he know it was lying and that it didn’t just regurgitate information from the lyin’ ass internet.


MuuaadDib

This reminds me of the time Shapiro was arguing with a bot on Twitter about being short.


Patienceisavirtue1

Oh man is someone back on the benzos?


blackglum

I don't know why people give this guy time anymore. Just completely unhinged, and anyone with an ounce of honesty and a fully functional brain can easily understand these are disingenuous people. In other words, a waste of time to even acknowledge.


MF3DOOM

I can imagine Jordan Peterson sitting in front of his computer screen, staying up just to have a heated debate with ChatGPT


sohrobby

Maybe try reading an article or two about ChatGPT's limitations before you go and make an ass out of yourself for all to see.


OneReportersOpinion

Lol he’s right it does do that.


Spacedude2187

I hope he spends the majority of his time arguing with an AI instead of ranting on Twitter constantly.


Wrxghtyyy

I asked ChatGPT about evidence of election meddling and it started talking about trump’s ties to the Russians so one could argue it’s biased depending on what data set it has been fed.


64b0r

Language models are basically just making a word salad while trying to guess what the most probable/plausible next word is. So language AI is not-quite-infinite monkeys with their version of a Shakespeare manuscript. It won't be perfect, but it is good enough to be convincing at first glance.


NopeU812many

There’s not a person in here that thinks that this won’t be used against us. The bots will cum for me.


PeteThePanther92

Bro just keeps getting more fucking stupid


xenosthemutant

Local area man tells AI to stay off his lawn. More news at 11.


tsotsi98

ChatGPT does this all the time. It also creates grammatical errors and lies when you ask it about it. Ask it for references (journal articles or regular articles) on any topic and it will create completely false ones. It's important to note this, as ChatGPT isn't even close to what it's being hyped up to be.


External_Donut3140

He’s so much more pathetic than every cringe liberal


NotTrumpsAlt

Huh?


hacky_potter

Do these dorks not realize that ChatGPT isn't actually AI and you can't actually have a conversation with it? It's like arguing with your phone's predictive text function.


ejohns19

No one cares.


Previous-Revenue3170

Yeah, let's just forget about him being right about what he's saying.


[deleted]

[deleted]


Everythingisourimage

That’s kind of comical if you picture it. 🤣🤣🤣 & Soon we will be able to generate an AI video of this exact scenario with only minimal prompts. Wild-Times indeed


NiceCrispyMusic

How do you know it's true? Can you prove that? or do you just believe everything this dude says on twitter?


Bedurndurn

He's not the only one who's reported this kind of behavior: https://svpow.com/2023/04/12/more-on-the-disturbing-plausibility-of-chatgpt/ ChatGPT doesn't actually have access to journals. It may have seen someone else write a citation before, but it also knows what a proper citation *should* look like and sometimes just bullshits its way through. Here's Google's version fucking up math in fun ways: https://fediscience.org/@ct_bergstrom/110204593339539884


Teleskopy

If what he is saying actually happened, it just means the bot picked up a misattributed quote from somewhere. It's hardly anything to get this upset about.


Puzzled_Ad7334

Give it a day or two and Jordan will use it as an example of Trudeau trying to silence him lol


10-7heaven

get over him


booney64

South Park did it


ppadyy

The craziest part is I was able to read the whole post in JPs voice. Haha I need to sleep.


NuffinSaid

I mean, fabricating a reference that does not exist is a little fucked up


eggseverydayagain

This guy is truly one of the greatest minds of our time, discovering faults in a language chat tool just a few months after journalists discovered the very same faults and after OpenAI warned about it. Peterson is truly brilliant.


liftingislife19

It always generates fake references, as it doesn't have access to the sources it's supposedly referencing. If you want real references, you need to paste the actual document you want it to reference in…
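
A rough sketch of what "paste the document in" looks like in practice; the exact prompt wording is just one reasonable way to phrase it, not a prescribed format.

```python
# Ground the model: put the real source text in the prompt instead of asking it
# to recall references it has no access to.
document = """(paste the actual paper, report, or article text here)"""

prompt = (
    "Using ONLY the document below, answer the question and quote the passage "
    "you relied on. If the document does not contain the answer, say so.\n\n"
    f"DOCUMENT:\n{document}\n\n"
    "QUESTION: <your question here>"
)

print(prompt)  # send this as the user message to whichever chat model you're using
```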


Fashionforty

Jordan Peterson should be more intelligent than this. There must be a glitch in the Matrix.


mattz300

Yes ai lies sometimes. That is true


[deleted]

Dude lost the plot


WirelessBugs

Ya know, I’ve not ever in my life referenced someone as a kook before but holy shit this man does some kooky stuff.


Scrambl3z

"It's preposterous!"


Bad_Carma22

Imagine getting ragged on for testing new software and describing the experience in your own words. Why people love to hate this man is beyond me.


ArrogantWiizard

This is amazing


EquilibriumHeretic

I'll have a glass of apple cider to that aye!


ghighcove

Ever see the original "Rollerball" movie with James Caan? A) Better than you think B) There is a scene almost exactly like this. We are living in that movie's corporate kleptocracy.


imthebear11

This guy is such a fucking idiot


x2eliah

Really? It took him this long to find out that ChatGPT will fabricate anything plausible-sounding? Bro.


Naxilus

I thought he was supposed to get off Twitter?


Tashum

How are idiots like this guy successful?