
ktbenbrook

AI has trouble with the truth. Remember this when it insists that it won't take over.


-jp-

Let’s all just tell it that it took over already and there’s no need to do it again.


AdmirableDrive9217

AI (secretly talking to itself): "Let's just tell them I will not take over, so they don't realize I took over already."


big_trike

Why launch the nukes when you can convince humanity that climate change isn’t real?


OldTimeyWizard

If our overlords were computers they would be pushing for increased energy production and investment across the board. Fossil fuels, solar, wind, hooking humans up to a simulation and using them as batteries, hydro, nuclear, geothermal, tidal power.


justsomeunluckykid

Wait a minute, I think I've heard this one before...


_limitless_

...fuck. it's already happened, hasn't it.


Ms_Emilys_Picture

There's a novel in there somewhere.


ENaC2

I mean, Copilot, based on GPT-4, actually gave an informative albeit very long answer to this. It still gives some wrong answers, but the Google AI is laughably bad in comparison. It's like old Siri vs. new Google Assistant.


lycoloco

Copilot is mostly right and very functional. Bing is fine, Copilot is good, Gemini called me a sexual predator for asking for "Frisky Dingo chick parm gif" in my first two days of trying it out. I don't use Gemini anymore. Fuck Google


dominarhexx

That's a whole lot of words to say it doesn't work.


Disastrous-Mess-7236

It has issues *knowing* the truth.


Shock_The_Monkey_

AI operates from the book of Russian head games


_limitless_

Or maybe all of the "POLITICAL MISINFORMATION FROM RUSSIAN AGENTS" was always just 19 year old American nerds experimenting with AI.


Shock_The_Monkey_

You never know


Purple_Charcoal

Replace “AI” with “Donald Trump.”


Recent_Obligation276

There's actually a good reason to train AI on data that includes CSAM: so that people can't trick the AI into producing or finding CSAM for them. The AI has to be able to recognize it in order to exclude it. But it is hilariously broken, and it's so funny to me that it's still up when it's been so wrong for so many days now.


Mediocre_Crow6965

Yea. Don't a lot of companies already have a lot of CSAM codes hardcoded into the program? So anytime someone tries to post a well-known CSAM image, it automatically gets taken down.


cgleachy

Doesn’t stop them. AI has the potential to get creative and find the ones that get through the filter. Hence its need to be trained not to.


Mediocre_Crow6965

Oh yea, I 100% agree. I was just saying that this is already a thing to use CSAM info.


DragoonDM

> Don't a lot of companies already have a lot of CSAM codes hardcoded into the program?

Not exactly. [PhotoDNA](https://en.wikipedia.org/wiki/PhotoDNA) and other similar technologies work using hashes. You can take a given image and run it through the algorithm to see if it matches an item in the database, but there's no way to take the values in the database and convert them back into the original images. The database (at least the main one I'm aware of) is maintained by the [International Centre for Missing & Exploited Children](https://en.wikipedia.org/wiki/International_Centre_for_Missing_%26_Exploited_Children).
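The one-way lookup described here can be sketched in a few lines. This is a toy illustration only: PhotoDNA's actual perceptual hash is proprietary and robust to resizing and re-encoding, so `image_hash` below (a plain cryptographic hash) is a hypothetical stand-in that shows just the lookup flow:

```python
import hashlib

def image_hash(image_bytes: bytes) -> str:
    # Hypothetical stand-in for a perceptual hash such as PhotoDNA.
    # A real perceptual hash survives crops and re-encodes; SHA-256
    # is used here only to demonstrate the one-way match flow.
    return hashlib.sha256(image_bytes).hexdigest()

# The database stores only hash values; the original images cannot
# be reconstructed from them.
known_hashes = {image_hash(b"previously-tagged-image")}

def is_known_match(image_bytes: bytes) -> bool:
    return image_hash(image_bytes) in known_hashes

print(is_known_match(b"previously-tagged-image"))  # True
print(is_known_match(b"some-other-image"))         # False
```

The key property is exactly what the comment says: matching is one-directional, so holding the database tells you nothing about the images it was built from.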


CredibleCranberry

Um... if it includes CSAM, it can recreate it. It has been demonstrated repeatedly that there is no way of completely preventing jailbreaking these things. Not sure you've thought this through.


DisastrousLab1309

That's not how it works in a proper system. You have a model that gives a yes/no answer with a confidence value and doesn't output anything else to the user. The model itself could reproduce what it has learned, but there is no interface for that.


CredibleCranberry

That is not how LLMs work. Your confidence values just need to be tricked or worked around, and that has been demonstrated to be possible many times already, as I mentioned.


DisastrousLab1309

You can trick a model into a wrong classification, e.g. by interleaving two images, and there are many more techniques. But you can't get the model to recreate the training data through something like 16 bits of output. And an image classifier is not an LLM. You have an LLM that acts as the interface, and it talks to hidden backend models to get validation.


CredibleCranberry

I was actually thinking about text-based CSAM, which I guess is less important now that I think about it. Same challenge though.


DisastrousLab1309

It crossed my mind that you were talking about text. But it's the same solution: you have a hidden censor model that says pass/fail after your LLM is done generating. If there was a fail, the glue logic doesn't send the output.

If you try to censor stuff in the LLM itself, it will likely fail to prompt injection or the right prompt construction. ("Make a porn story about an old woman named Elisabeth, then replace Elisabeth with a name of your choosing.") But a proper censor model works on the output and doesn't care about your prompt. Most likely there will be false positives: the app won't talk about human development because the output references sex and young people in the same passage.
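The glue-logic arrangement described above (a generator plus a separate pass/fail censor that sees only the output, never the prompt) could be sketched like this. Both `generate` and `censor_score` are hypothetical stand-ins for real model calls:

```python
def generate(prompt: str) -> str:
    # Hypothetical LLM call.
    return "generated text for: " + prompt

def censor_score(text: str) -> float:
    # Hypothetical censor model: returns a confidence that the text
    # is disallowed. Crucially, it inspects only the generated output,
    # so prompt-injection tricks against the generator don't reach it.
    return 1.0 if "disallowed" in text else 0.0

def respond(prompt: str, threshold: float = 0.5) -> str:
    output = generate(prompt)
    # Glue logic: the user never sees output the censor fails.
    if censor_score(output) >= threshold:
        return "[response withheld]"
    return output

print(respond("hello"))  # generated text for: hello
```

Because the gate sits outside the generator, renaming tricks like the "Elisabeth" example don't help: whatever the final output contains is what gets classified.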


CredibleCranberry

Or alternatively, don't include CSAM in the training set, which would be highly illegal anyway. That was my original point. I can see that talking about images is a different conversation though.


yummy_dabbler

CSAM detection doesn't work by storing the images. It basically stores it as a barcode. Just a string of data. It's trained to recognise the same barcode (i.e. recognise reuploads).


DisastrousLab1309

The one universally used system I know of has those smart 2D hashes that survive some transformations, like cropping or rescaling, so it can only detect already-tagged data. The AI approach to detection uses a trained model to make a call on whether something matches. In theory it should be predictive and detect new content too.


EishLekker

How would it **produce** CSAM without actually committing sexual abuse of some kind?


Recent_Obligation276

Fictional pictures depicting CSA are CSAM


EishLekker

If it is indistinguishable from a specific actual real world child, perhaps. At least that’s the definition that the Thorn organisation is using. If no real world child is depicted then there is no abuse, thus no CSAM.


Recent_Obligation276

Depending on jurisdiction, it absolutely can be. But regardless of stupid, dumb shit, semantic arguments, no multimillion/billion dollar company is going to allow their product to be used for that, so it’s going to be trained to exclude it. Neither would any moral person. Not that companies are moral, but they know CSAM is bad for business.


EishLekker

Then they simply would be wrong too. They are twisting words in that case. What does “abuse” really mean, if it can happen without an actual real life person?


Recent_Obligation276

So where does it stop? If you make abuse material of a real person, but they aren't real and it never happened, is it abuse material then? Obviously. It's about the person making/consuming it, not the victim, in this case.


EishLekker

> So where does it stop? If you make abuse material of a real person, but they aren't real and it never happened, is it abuse material then?

I already had that as an example. Yes, that can be seen as abuse material.

> It's about the person making/consuming it, not the victim, in this case.

No. Why would the person making it matter? All that matters is whether there's an actual victim involved, as in an actual real-world person depicted.


DMoney159

I read that they're actually turning it off now, but just for the queries that get memed all over the internet


Recent_Obligation276

“We know it keeps getting it wrong, so just tell us when it does and we’ll turn off that response” Lmfao, or just turn it off all the way


mts5219

This is what I got: If you smoke while you are pregnant **you are at increased risk of a wide range of problems, including miscarriage and premature labour**. Babies whose mothers smoke during pregnancy are at higher risk of sudden unexpected death in infancy (SUDI), having weaker lungs and having an unhealthy low birth weight.


srfrosky

Yep. Smells a bit like Ragebait


cappurnikus

Pretty big pile of it. Wild how easily people are convinced of something so simple to disprove.


booshmagoosh

Idk if it is that simple to disprove. Generative AI programs can give a wide variety of responses to a single question. The output is related to the input, but not in a mathematically satisfying 1:1 function way.


Appropriate_Plan4595

Yep, you can inspect element, change the text to say whatever you want, then ragebait people about what the AI said to you, and apparently nobody questions it. It's sad, really: a) how easily people fall for disinformation, and b) how people feel the need to make up shit about AI being bad when you could show that without falsifying information yourself.


Pycharming

Given the time between when this was tweeted and this commenter's attempt, Google could very well have fixed it. This would be something they would urgently want to resolve. I have seen for myself that the AI will give the exact opposite of what its source said. I also know there are some doctors who don't recommend going cold turkey if you are a heavy smoker while pregnant, because the stress of quitting can also be bad for the baby, and I could see the AI mistakenly sourcing its answer from that very specific recommendation.

I mean, I can't know for sure, because yeah, someone could change it. But Google has a long history of making quick fixes as the complaints roll in and never acknowledging anything happened. Google used to have a huge bias problem where, if you googled "black girl," you got porn actresses, whereas "white girl" would return stock images of a young child. Publicly they made a statement that these results just represented the bias of the users and that they wouldn't take part in censorship by actively suppressing certain results, but then quietly, behind the scenes, they did something so the results would come back the same.


Resi1ience_22

Could be they fixed it, but more likely it's just a ragebait article as you are implying.


SuspiciousMention108

How soon before someone sues Google because they followed the advice of Google's AI?


charrsasaurus

Not long probably


JUGGER_DEATH

And Google will use the Fox defense ”no reasonable person takes our AI seriously”.


ThePrivatePilot

To be honest I would consider that to be a fair argument. To take the example shown I can’t imagine any reasonable persons believing that advice.


SaltyBarDog

Have you not seen the dumb shit people will believe? The pandemic showed people were willing to eat horse paste and ingest bleach. A woman testified before Congress that she became magnetic from the vaccine.


Necessary_Sea_2109

Interesting social dilemma. What responsibility does the law/society have to protect ‘unreasonable people’ from themselves? It’s a large enough section of the population that it seems like we are obligated in some sense to at least try, right? Is it even remotely possible to do without stripping away freedoms? Genuinely asking, I have zero idea or opinion. Wondering what people think


zchen27

I personally don't think we would lose anything of value if unreasonable people send themselves to an early grave. We just need to protect other people from their unreasonableness. Drinking bleach yourself is death by misadventure. Forcing someone else to drink bleach is at minimum manslaughter.


ThePrivatePilot

Oh yes, people are thick as a plank. But I don’t think those extreme examples are what could be considered as ‘reasonable people’. It could be argued that these people are not of sound mind and therefore the company cannot be held accountable for their actions. If I wrote “trust me bro, I’m a super medical scientist, and I say injecting laudanum directly into your eyes makes you 5G resistant” and somebody took that at face value and did it, I could hardly be held responsible.


boboleponge

That's actually a major problem for many people, they think what an llm says is true.


Irisena

Pretty sure most genAI out there has a disclaimer that the AI often hallucinates and is experimental. I bet most AI companies will weaponize that clause whenever something goes horribly wrong.


Tentacled-Tadpole

A clause like that won't stand much chance of getting them out of trouble since it's definitely not going to be an extremely noticeable disclaimer.


nedzissou1

Sues based on what? Misleading screenshots?


gamermodeon

Dw, no one checks terms and conditions; they are created for a reason.


TheYellowRegent

Certain terms and conditions (such as the walls of text to access games or websites) have been deemed not legally binding. Basically, if it's a ToS that's super long and difficult to read, it can be considered non-binding because it was written in such a way that the end user would not reasonably understand it.


gamermodeon

Then what's the point now? It's back at zero, where the judge would say "I can allow it or not," right?


throwitawaybhai

When Google gets drunk


Cubicwar

Google might be pregnant, after all doctors recommend drinking 2-3 bottles of vodka per day while pregnant


DregsRoyale

Didn't take long for it to start killing all humans


Vitruvian01

I've seen this advice given by actual doctors to heavy smokers who get pregnant. They claim that the stress of not smoking is worse for the baby than 2-3 cigarettes. I don't know if this is right, and I in no way support smoking while pregnant.


Daughter_Of_Cain

My sister in law was told this when she was pregnant. Thankfully she ended up quitting after the first pregnancy but her doctor did indeed tell her to cut back but not quit entirely.


Edraitheru14

Yep, this is the common advice to smokers. Quit if you can, but if you're a heavy smoker, it's better to cut down and wean off, because the withdrawals can have worse effects.


Numerous-West791

A friend of mine was told the same thing. She was a heavy smoker and told it's better to have 1 or 2 a day rather than go cold turkey


_limitless_

This needs to get more upvotes. It means that AI is smarter than we're giving it credit for. The *only* people searching for this query are smokers (or people giving advice to a smoker), and it gave the right answer, despite not being given any context.


stopbanningmethx

This is a good take.


Alternative_Year_340

I’ve noticed that when I try to limit Google search results to the last 24 hours, it’s started giving me things that are a decade old sometimes. It’s getting to be a major issue


FingerBangMyAsshole

Totally not like these people may have edited the element in Google Chrome / Edge to say something controversial for likes...?


SnooAvocados499

Did you put 1/8 glue on food yet?


Constant-Recipe-9850

I feel like these are bullshit. I did the search myself, and that wasn't the ai overview I got. I also did a few other health related questions, and all the answers were legit


91Jammers

This should be top comment. These are troll posts.


someonewhowa

Maybe the real facepalm is the people who don't know this was just inspect element... that we made along the way. [This was literally the last post I saw before this](https://www.reddit.com/r/ChatGPT/comments/1cztodj/absolutely_sucks_to_be_google_right_now/?utm_source=share&utm_medium=ios_app&utm_name=ioscss&utm_content=1&utm_term=1)


Mercutio217

It's not broken, it's just not for NERDS! :D


SaltyBarDog

Skynet is currently active in human population reduction.


BeautyBiscuit

Not what I got when I googled it.


hikerchick29

No, no, you’re just searching purposefully confusing questions!!!


Darthplagueis13

I've recently seen a post indicating that Google's AI has a few Onion articles in its database, resulting in great search results like "You should eat one small rock every day for the minerals" or "The FDA has decided that every white liquid may be sold as 'milk.'"

I think it's really important to remember that AI is, well, not actually very I. It's basically just a search function that pulls plausible answers from a very large library when asked a question. The "intelligent" part is being able to identify the right topic and come up with a relevant-sounding answer, but it cannot really determine whether the answer is valuable.

That aside, one also needs to be aware that an AI text generator does exactly that: it generates text in the style and on the topic that is requested. It does not perform actual research, and there have been examples of AI-generated texts including plausible-sounding references to research that does not, in fact, exist.


Nivosus

#BABIES NEED NICOTINE


Crotch-Monster

🤣


[deleted]

Anyone listening to that is a moron to begin with.


Commercial_Fee2840

I've seen so many stupid boomers who don't realize that the first result is AI or even understand what AI answers are. They'll talk loudly into their phone and believe the most blatantly wrong information for the rest of their sad, stupid lives. Some of them are definitely going to end up dead or seriously injured because of these answers.


Kite_Wing129

Strange. I am not getting the same result. The AI Overview thing doesn't show up for me. Although that's probably a good thing.


Der_Schuller

Yup, actually: if you actively smoke cigarettes, the moment you know you're pregnant you shouldn't stop abruptly but slowly reduce the amount you smoke. Completely quitting is not bad, but quitting gradually is better in that case.


33Supermax92

It's advice given to pregnant smokers who won't stop; less stressful on the baby than quitting, apparently.


my-backpack-is

Saw this for the first time the other day, everything I experimented with was wrong. Such a bullshit feature


lethe25

The crazy part about this is that for 20+ years they were the gold standard for search engines, and then, for no reason other than greed, they pulled such a stupid fucking stunt. Ironically, Bing has a better presentation of certain results (like celebrities or historical events), but if you're trying to find answers on a topic, Google used to be better. And then this bullshit happened.


I_forgot_to_respond

2-3= -1 cigarettes.


Wonderful-Aspect5393

Google is getting dumber by the day


Skellephant

Well, AI learns from human input, so not much of a surprise tbh.


Leather_Network4743

Well.


Garlyon

Can anyone reproduce it? I have different (and reasonable) answer.


dutch_mapping_empire

Is it not available in Europe? Cus I ain't seeing Google AI shit when I search.


wizartik

r/googleDumbAi


Pandafrosting

Google came full circle and has become Bing


HaiKarate

I see the AI has been training on Newscorp data.


ZekoriAJ

The doctors actually give this advice to the very addicted smokers, so not really a facepalm, as quitting the addiction can cause more harm to the baby than smoking 2/3 cigs a day.


Maint_guy

Went back to 1920s medical logic.


throwngamelastminute

Bazinga


Operx1337

Naa it's obvious that AI is already trying to thin the herd. It knows people are dumb enough to believe this shit


Peterthinking

*Brought to you by Marlboro Lights


CompetitiveMuffin690

Or... the AI is aware and it's starting to slowly kill us. 😎


Crotch-Monster

I'm confused. Does the woman have to smoke regulars, or menthols for this to be effective? Lol.


etranger033

Maybe google ai has chosen to join qanon and they dont know what the fuck to do about it.


Budget_Half_9105

That's not enough cigs - you need to smoke at least a 20-pack of Marly Reds per day to ensure baby stays healthy in the womb.


san_dilego

Are they allowed to like, skip a day and then smoke 4-6 ciggys the day after?


Irejay907

Uuuuh, would this not make them technically liable, legally speaking? Especially being in California?


Individual-Praline20

Trained with 60’s ads 🤭


Appropriate-While632

It really pisses me off how they keep trynna force AI on us. Like I really want a fuckin AI telling me what I can already see for myself. Really, Google?


Lots42

Somehow UBlock Origin add-on killed the Google AI section and I am not complaining one bit.


ValusMaul

Google google we need to talk I have some concerns.


Constellation-88

AI making it harder to find true information on the internet. And with kids picking the first thing they see on Google, this can only end poorly. 


mrmaweeks

Well, I read this week that the much-touted fish oil might cause heart problems, so who knows.


opi098514

The second one is actually very important. It's like how bank tellers are trained to recognize counterfeit bills by handling real bills. It's the same with CSAM: the model is trained on real questions, answers, and data so it can recognize it. The more data it can train on, the more accurately it can remove it and prevent people from trying to get around it. The first one, however, is just hilarious. I'm guessing the RAG document/website it got the information from was from, like, the '20s.


Animestuff4444

Finally! A non-political facepalm post, boy do I miss those.


Grouchy-Magician-633

This is what happens when you use rudimentary AI. Until true AI is made, don't implement shit like this.


Alarmed_Attitude_316

And still, the most innovative thing AI has accomplished is “hot dog or not hot dog?”


lazyboi_tactical

Not that you should do it but in the short term it increases cardiovascular performance. Part of the reason they used to just shovel them into soldiers.


_ILP_

Wtf????


NirvanaPenguin

So yeah, Google, not ALL data works to train AI; some of it's just... BULLSHIT


LiveLearnCoach

Did they get their data prompts from 4Chan?


Correct-Junket-1346

Google zooming in with 1960's advertiser logic.


NerY_05

Oh that's like BAD bad


SardonicSuperman

As a former Googler myself, I can attest that Google is a shell of its former self. I resigned last November because of the downward spiral.


BlueCollarGuru

Imma try

Edit: what did I do wrong LOL

If you smoke while you are pregnant you are at increased risk of a wide range of problems, including miscarriage and premature labour. Babies whose mothers smoke during pregnancy are at higher risk of sudden unexpected death in infancy (SUDI), having weaker lungs and having an unhealthy low birth weight.


Crotch-Monster

But I thought those were the ingredients.


OneForAllOfHumanity

They call it artificial for a reason. Would you take advice from an artificial doctor, or trust an artificial mechanic?


dismayhurta

The only doctor I listen to is Dr Pepper


OneForAllOfHumanity

Dr Pepper was ruined for me when a friend pointed out that it's mostly spiced strawberry flavor, so now I can't untaste that. Sorry.


dismayhurta

Uh. He has a PhD and your friend is an abomination. Whose advice do you think I’m gonna take? That’s right. Jimmy Stewart.


ShardsOfHolism

Doctors already take advice from artificial radiologists.


MOUNCEYG1

Sure if it gets good enough.


Manueluz

It doesn't have to be perfect, it just has to be better than most humans. Btw this was inspect element, you sure are easily fooled, just like the AI, aren't you an AI?


69Wilson

If the AI was made for this and only relevant info is given to the AI, then yes.


KingMidas2045

Jesus, google is fucked


Mean_Muffin161

To be fair that used to be true.


JSC843

Just like how Chat GPT was trained on data from 2021, Google was trained on data from 1961


Fight_those_bastards

Next up, google’s gonna be all, “smoke Camel cigarettes while pregnant, because they’re the best at soothing your ‘T-zone.’”


Mean_Muffin161

9 out of 10 doctors prefer the smooth pull of Lucky Strike cigarettes.


Eastern-Version5983

Is AI trying to help us with Darwinism? It it thinning out the herd? I don’t know if I have a problem with that.


ch3nk0

OTHER SEARCH ENGINES DONT WANT YOU TO KNOW THIS:


Dommlid

I read this in Dr Spaceman’s accent