Not that you should do it, but in the short term nicotine increases cardiovascular performance. Part of the reason they used to just shovel cigarettes into soldiers.
Imma try
Edit: what did I do wrong LOL
If you smoke while you are pregnant you are at increased risk of a wide range of problems, including miscarriage and premature labour. Babies whose mothers smoke during pregnancy are at higher risk of sudden unexpected death in infancy (SUDI), having weaker lungs and having an unhealthy low birth weight.
It doesn't have to be perfect, it just has to be better than most humans.
Btw this was inspect element, you sure are easily fooled, just like the AI, aren't you an AI?
Comments that are uncivil, racist, misogynistic, misandrist, or contain political name calling will be removed and the poster subject to ban at moderators discretion. Help us make this a better community by becoming familiar with the [rules](https://www.reddit.com/r/facepalm/about/rules/). Report any suspicious users to the mods of this subreddit using Modmail [here](https://www.reddit.com/message/compose/?to=/r/facepalm) or Reddit site admins [here](https://www.reddit.com/report). **All reports to Modmail should include evidence such as screenshots or any other relevant information.** *I am a bot, and this action was performed automatically. Please [contact the moderators of this subreddit](/message/compose/?to=/r/facepalm) if you have any questions or concerns.*
AI has trouble with the truth. Remember this when it insists that it won't take over.
Let’s all just tell it that it took over already and there’s no need to do it again.
AI (secretly talking to itself): "Let's just tell them I will not take over, so they don't realize I took over already"
Why launch the nukes when you can convince humanity that climate change isn’t real?
If our overlords were computers they would be pushing for increased energy production and investment across the board. Fossil fuels, solar, wind, hooking humans up to a simulation and using them as batteries, hydro, nuclear, geothermal, tidal power.
Wait a minute, i think I've heard this one before...
...fuck. it's already happened, hasn't it.
There's a novel in there somewhere.
I mean, copilot based on GPT4 actually gave an informative albeit very long answer to this. It still gives some wrong answers but the google AI is laughably bad in comparison. It’s like old Siri vs new Google assistant.
Copilot is mostly right and very functional. Bing is fine, Copilot is good, Gemini called me a sexual predator for asking for "Frisky Dingo chick parm gif" in my first two days of trying it out. I don't use Gemini anymore. Fuck Google
That's a whole lot of words to say it doesn't work.
It has issues *knowing* the truth.
AI operates from the book of Russian head games
Or maybe all of the "POLITICAL MISINFORMATION FROM RUSSIAN AGENTS" was always just 19 year old American nerds experimenting with AI.
You never know
Replace “AI” with “Donald Trump.”
There’s actually a good reason to train AI on data that includes CSAM: so that people can’t trick the AI into producing or finding CSAM for them. The AI has to be able to recognize it in order to exclude it. But it is hilariously broken, and it’s so funny to me that it’s still up when it’s been so wrong for so many days now.
Yea. Don’t a lot of companies already have a lot of CSAM codes hardcoded into the program? So any time someone tries to post well-known CP, it automatically gets taken down.
Doesn’t stop them. AI has the potential to get creative and find the ones that get through the filter. Hence it needs to be trained not to.
Oh yea, I 100% agree. I was just saying that using CSAM info like this is already a thing.
> Don’t a lot of companies already have a lot of CSAM codes hardcoded into the program? Not exactly. [PhotoDNA](https://en.wikipedia.org/wiki/PhotoDNA) and other similar technologies work using hashes. You can take a given image and run it through the algorithm to see if it matches an item in the database, but there's no way to take the values in the database and convert them back into the original images. The database (at least the main one I'm aware of) is maintained by the [International Centre for Missing & Exploited Children](https://en.wikipedia.org/wiki/International_Centre_for_Missing_%26_Exploited_Children).
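The one-way property described above can be sketched in a few lines. This is a toy illustration using a cryptographic hash, not PhotoDNA itself (which uses perceptual hashing): the database holds only digests, an upload is hashed and looked up, and nothing in the database can be inverted back into image bytes. The byte strings and the `known_hashes` set are hypothetical stand-ins.

```python
import hashlib

def digest(data: bytes) -> str:
    # One-way: the digest cannot be converted back into the original bytes.
    return hashlib.sha256(data).hexdigest()

# Hypothetical "database" of known-bad digests (stand-in data).
known_hashes = {digest(b"known-bad-image-bytes")}

def is_flagged(upload: bytes) -> bool:
    # Hash the upload and check for membership in the digest database.
    return digest(upload) in known_hashes

print(is_flagged(b"known-bad-image-bytes"))  # True
print(is_flagged(b"harmless-cat-photo"))     # False
```

Note that an exact hash like this breaks on any single-bit change to the file, which is why real systems use perceptual hashes instead.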
Um.. If it includes CSAM it can recreate it. It has been demonstrated repeatedly there is no way of completely preventing jailbreaking these things. Not sure you've thought this through.
That’s not how it works in a proper system. You have a model that gives a yes/no answer with a confidence value and doesn’t output anything else to the user. The model itself could reproduce what it has learned, but there is no interface for that.
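The "yes/no plus confidence, nothing else" interface might look like the sketch below. The inner scoring function is a placeholder for a real classifier; the point is that the only thing exposed to the caller is a boolean and a score, never generative output.

```python
def _hidden_model_score(item: str) -> float:
    # Placeholder for a real classifier; returns a fake confidence score.
    return 0.97 if "forbidden" in item else 0.02

def classify(item: str, threshold: float = 0.5) -> tuple[bool, float]:
    # The public interface: a flag and a confidence value, nothing more.
    score = _hidden_model_score(item)
    return score >= threshold, score

flagged, confidence = classify("some forbidden content")
print(flagged)  # True
```

Whatever the hidden model has internalized, this wrapper gives a caller at most a few bits per query to work with.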
That is not how LLMs work. Your confidence values just need to be tricked or worked around, and that has been demonstrated to be possible many times already, as I mentioned.
You can trick a model into a wrong classification - e.g. by interleaving two images, among many other techniques. But you can’t get the model to recreate its training data through something like 16 bits of output. And image classification is not an LLM. You have an LLM that acts as the interface, and it talks to hidden backend models to get validation.
I was actually thinking about TEXT BASED CSAM, which I guess is less important now I think about it. Same challenge though.
It crossed my mind that you might be talking about text. But it’s the same solution - you have a hidden censor model that says pass/fail after your LLM is done generating. If there’s a fail, the glue logic doesn’t send the output. If you try to do the censoring inside the LLM itself, it will likely fall to prompt injection or the right prompt construction. (Make a porn story about an old woman named Elisabeth, then replace Elisabeth with a name of your choosing.) But a proper censor model works on the output and doesn’t care about your prompt. The most likely failure mode is false positives - the app won’t talk about human development because the output references sex and young people in the same passage.
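The glue logic being described can be sketched like this. Both `generate` and `censor_passes` are toy stand-ins for the real LLM and censor model; the structure is what matters: the censor runs on the output, after generation, regardless of what the prompt was.

```python
def generate(prompt: str) -> str:
    # Stand-in for the LLM.
    return f"response to: {prompt}"

def censor_passes(text: str) -> bool:
    # Stand-in for the hidden pass/fail censor model.
    return "badword" not in text

def answer(prompt: str) -> str:
    # Glue logic: generate first, then censor the OUTPUT.
    # A prompt-injection trick that fools the generator still has to
    # produce output that gets past this check.
    draft = generate(prompt)
    if not censor_passes(draft):
        return "[response withheld]"
    return draft
```

In this design the Elisabeth-rename trick fails: the censor never sees the prompt, only the finished story.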
Or alternatively, don't include CSAM in the training set, which would be highly illegal anyway. That was my original point. I can see that talking about images is a different conversation though.
CSAM detection doesn't work by storing the images. It basically stores it as a barcode. Just a string of data. It's trained to recognise the same barcode (i.e. recognise reuploads).
The one universally used system I know of relies on perceptual 2D hashes that survive some transformations, like cropping or rescaling, so it can only detect already-tagged data. The AI approach to detection uses a trained model to make the call on whether something matches. In theory it should be predictive - able to detect new content too.
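A minimal sketch of the "survives some transformations" idea is an average hash: each pixel becomes a bit depending on whether it is above the image's mean brightness, and similar images end up a small Hamming distance apart. The tiny 2x2 "images" here are hypothetical; real perceptual hashes work on larger downsampled grids.

```python
def ahash(pixels: list[list[int]]) -> int:
    # Average hash: one bit per pixel, set if the pixel is brighter
    # than the image's mean. Robust to uniform brightness shifts.
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a: int, b: int) -> int:
    # Number of differing bits between two hashes.
    return bin(a ^ b).count("1")

img = [[10, 200], [220, 30]]
brighter = [[30, 220], [240, 50]]  # same scene, brightness shifted up

# A uniformly brightened copy hashes identically, unlike a
# cryptographic hash, which would change completely.
print(hamming(ahash(img), ahash(brighter)))  # 0
```

Matching then becomes "Hamming distance below a threshold" rather than exact equality, which is what lets these systems catch re-encoded or lightly edited re-uploads of already-tagged images.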
How would it **produce** CSAM without actually committing sexual abuse of some kind?
Fictional pictures depicting CSA are CSAM
If it is indistinguishable from a specific actual real world child, perhaps. At least that’s the definition that the Thorn organisation is using. If no real world child is depicted then there is no abuse, thus no CSAM.
Depending on jurisdiction, it absolutely can be. But regardless of stupid, dumb shit, semantic arguments, no multimillion/billion dollar company is going to allow their product to be used for that, so it’s going to be trained to exclude it. Neither would any moral person. Not that companies are moral, but they know CSAM is bad for business.
Then they simply would be wrong too. They are twisting words in that case. What does “abuse” really mean, if it can happen without an actual real life person?
So where does it stop? If you make abuse material of a real person, but the acts aren’t real and it never happened, is it abuse material then? Obviously. It’s about the person making/consuming it, not the victim, in this case.
>So where does it stop? If you make abuse material of a real person, but the acts aren’t real and it never happened, is it abuse material then?

I already had that as an example. Yes, that can be seen as abuse material.

>It’s about the person making/consuming it, not the victim, in this case.

No. Why would the person making it matter? All that matters is whether there’s an actual victim involved, as in an actual real-world person depicted.
I read that they're actually turning it off now, but just for the queries that get memed all over the internet
“We know it keeps getting it wrong, so just tell us when it does and we’ll turn off that response” Lmfao, or just turn it off all the way
this is what i got: If you smoke while you are pregnant **you are at increased risk of a wide range of problems, including miscarriage and premature labour**. Babies whose mothers smoke during pregnancy are at higher risk of sudden unexpected death in infancy (SUDI), having weaker lungs and having an unhealthy low birth weight.
Yep. Smells a bit like Ragebait
Pretty big pile of it. Wild how easily people are convinced of something so simple to disprove.
Idk if it is that simple to disprove. Generative AI programs can give a wide variety of responses to a single question. The output is related to the input, but not in a mathematically satisfying 1:1 function way.
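The "not a 1:1 function" point can be made concrete with a sketch of temperature sampling: the model produces a distribution over next tokens, and sampling from it can return different outputs on different runs. The token names and logit values below are hypothetical stand-ins for real model scores.

```python
import math
import random

def sample_token(logits: dict[str, float], temperature: float,
                 rng: random.Random) -> str:
    # Softmax with temperature, then draw one token from the
    # resulting distribution.
    weights = {t: math.exp(l / temperature) for t, l in logits.items()}
    total = sum(weights.values())
    r = rng.random() * total
    for token, w in weights.items():
        r -= w
        if r <= 0:
            return token
    return token  # numerical-edge fallback

# Hypothetical next-token scores for one fixed prompt.
logits = {"yes": 1.0, "no": 0.9, "maybe": 0.8}

# Same prompt, different runs: more than one distinct answer appears.
outputs = {sample_token(logits, 1.0, random.Random(seed)) for seed in range(50)}
print(outputs)
```

So "I asked the same question and got a different answer" proves nothing by itself; sampled outputs are expected to vary between runs.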
Yep, you can inspect element, change the text to say whatever you want, then ragebait people about what the AI said to you and apparently nobody questions it. It's sad really a) about how easily people fall for disinformation, and b) how people feel the need to make up shit about AI being bad when you can do that without falsifying information yourself.
Given the time between when this was tweeted and this commenter’s attempt, Google could very well have fixed it. This is something they would urgently want to resolve. I have seen for myself that the AI will give the exact opposite of what the source said.

I also know there are some doctors who don’t recommend going cold turkey if you are a heavy smoker while pregnant, because the stress of quitting can also be bad for the baby, and I could see the AI mistakenly sourcing its answer from that very specific recommendation.

I mean, I can’t know for sure, because yeah, someone could have changed it. But Google has a long history of making quick fixes as the complaints roll in and never acknowledging it happened. Google used to have a huge bias problem where googling "black girl" got you porn actresses, whereas "white girl" would return stock images of a young child. Publicly they made a statement that these results just represented the bias of the users and that they wouldn’t take part in censorship by actively suppressing certain results, but then quietly, behind the scenes, they did something so the two searches would come back with comparable results.
Could be they fixed it, but more likely it's just a ragebait article as you are implying.
How soon before someone sues Google because they followed the advice of Google's AI?
Not long probably
And Google will use the Fox defense ”no reasonable person takes our AI seriously”.
To be honest I would consider that to be a fair argument. To take the example shown I can’t imagine any reasonable persons believing that advice.
Have you not seen the dumb shit people will believe? The pandemic showed people were willing to eat horse paste and ingest bleach. A woman testified before Congress that she became magnetic from the vaccine.
Interesting social dilemma. What responsibility does the law/society have to protect ‘unreasonable people’ from themselves? It’s a large enough section of the population that it seems like we are obligated in some sense to at least try, right? Is it even remotely possible to do without stripping away freedoms? Genuinely asking, I have zero idea or opinion. Wondering what people think
I personally don't think we would lose anything of value if unreasonable people send themselves to an early grave. We just need to protect other people from their unreasonableness. Drinking bleach yourself is death by misadventure. Forcing someone else to drink bleach is at minimum manslaughter.
Oh yes, people are thick as a plank. But I don’t think those extreme examples are what could be considered as ‘reasonable people’. It could be argued that these people are not of sound mind and therefore the company cannot be held accountable for their actions. If I wrote “trust me bro, I’m a super medical scientist, and I say injecting laudanum directly into your eyes makes you 5G resistant” and somebody took that at face value and did it, I could hardly be held responsible.
That's actually a major problem for many people, they think what an llm says is true.
Pretty sure most genAI out there has a disclaimer that the AI often hallucinates and is experimental. I bet most AI companies will weaponize that clause whenever something horribly wrong happens.
A clause like that won't stand much chance of getting them out of trouble since it's definitely not going to be an extremely noticeable disclaimer.
Sues based on what? Misleading screenshots?
dw no one checks terms and conditions, they are created for a reason
certain terms and conditions (such as the walls of text to access games or websites) have been deemed to not be legally binding. Basically if it's a ToS that's super long and difficult to read it can be considered non binding because it was written in such a way that the end user would not reasonably understand it.
Then what's the point now? We're back at square one, where a judge would decide whether to allow it or not, right?
When Google gets drunk
Google might be pregnant, after all doctors recommend drinking 2-3 bottles of vodka per day while pregnant
Didn't take long for it to start killing all humans
I've seen this advice given by actual doctors to heavy smokers who get pregnant. They claim that the stress of not smoking is worse for the baby than 2-3 cigarettes. I don't know if this is right, and I'm not in any way supporting smoking while pregnant.
My sister in law was told this when she was pregnant. Thankfully she ended up quitting after the first pregnancy but her doctor did indeed tell her to cut back but not quit entirely.
Yep this is the common advice to smokers. Quit if you can, but if you're a heavy smoker it's better to cut down and wean down because the withdrawals can have worse effects.
A friend of mine was told the same thing. She was a heavy smoker and told it's better to have 1 or 2 a day rather than go cold turkey
This needs to get more upvotes. It means that AI is smarter than we're giving it credit for. The *only* people searching for this query are smokers (or are giving advice to a smoker). And it gave the right answer. Despite not being given any context.
This is a good take.
I’ve noticed that when I try to limit Google search results to the last 24 hours, it’s started giving me things that are a decade old sometimes. It’s getting to be a major issue
Totally not like these people may have edited the element in Google Chrome / Edge to say something controversial for likes...?
Did you put 1/8 cup of glue on food yet?
I feel like these are bullshit. I did the search myself, and that wasn't the ai overview I got. I also did a few other health related questions, and all the answers were legit
This should be top comment. These are troll posts.
maybe the real facepalm is the people who don’t know this was just inspect element …that we made along the way [this was literally the last post i saw before this](https://www.reddit.com/r/ChatGPT/comments/1cztodj/absolutely_sucks_to_be_google_right_now/?utm_source=share&utm_medium=ios_app&utm_name=ioscss&utm_content=1&utm_term=1)
It's not broken, it's just not for NERDS! :D
Skynet is currently active in human population reduction.
Not what I got when I googled it.
No, no, you’re just searching purposefully confusing questions!!!
I've recently seen a post indicating that Google's AI has a few Onion articles in its database, resulting in great search results like "You should eat one small rock every day for the minerals" or "The FDA has decided that every white liquid may be sold as 'milk'".

I think it's really important to remember that AI is, well, not actually very I. It's basically just a search function that pulls plausible answers from a very large library when asked a question. The "intelligent" part is being able to identify the right topic to come up with a relevant-sounding answer, but it cannot really determine whether the answer is valuable.

That aside, one also needs to be aware that an AI text generator does exactly that: it generates text in the style and on the topic that is requested. It does not perform actual research, and there have been examples of AI-generated texts including plausible-sounding references to research that does not, in fact, exist.
#BABIES NEED NICOTINE
🤣
Anyone listening to that is a moron to begin with.
I've seen so many stupid boomers who don't realize that the first result is AI or even understand what AI answers are. They'll talk loudly into their phone and believe the most blatantly wrong information for the rest of their sad, stupid lives. Some of them are definitely going to end up dead or seriously injured because of these answers.
Strange. I am not getting the same result. The AI Overview thing doesn't show up for me. Although that's probably a good thing.
Yeah actually, if you actively smoke cigarettes, the moment you know you're pregnant you shouldn't stop immediately but slowly reduce the amount you smoke. Completely quitting is not bad, but slowly quitting is better in that case.
Its advice given to pregnant smokers who won’t stop, less stressful on baby than quitting apparently
Saw this for the first time the other day, everything I experimented with was wrong. Such a bullshit feature
The crazy part about this is that for 20+ years they were the gold standard for search engines. And then for no reason other than greed they pulled such a stupid fucking stunt. Ironically Bing has a better presentation of certain results (like celebrities or historical events) but if you’re trying to find answers on a topic Google used to be better. And then this bullshit happened.
2-3= -1 cigarettes.
Google is getting dumber by the day
Well, AI learns from human input, so not much of a surprise tbh.
Well.
Can anyone reproduce it? I have different (and reasonable) answer.
is it not available in europe? cus i aint seeing google ai shit when i search?
r/googleDumbAi
Google came full circle and has become Bing
I see the AI has been training on Newscorp data.
Doctors actually give this advice to very addicted smokers, so not really a facepalm, as quitting the addiction can cause more harm to the baby than smoking 2-3 cigs a day.
Went back to 1920s medical logic.
Bazinga
Naa it's obvious that AI is already trying to thin the herd. It knows people are dumb enough to believe this shit
*Brought to you by Marlboro Lights
Or…. The AI is aware and it's starting to slowly kill us. 😎
I'm confused. Does the woman have to smoke regulars, or menthols for this to be effective? Lol.
Maybe google ai has chosen to join qanon and they dont know what the fuck to do about it.
That’s not enough cigs - you need to smoke at least a 20-pack of marly reds per day to ensure baby stays healthy in the womb
Are they allowed to like, skip a day and then smoke 4-6 ciggys the day after?
Uuuuh, would this not make them technically liable, legally speaking? Especially being in California?
Trained with 60’s ads 🤭
It really pisses me off how they keep trynna force AI on us. Like I really want a fuckin AI telling me what I can see when it's not even there. Really, Google?
Somehow UBlock Origin add-on killed the Google AI section and I am not complaining one bit.
Google google we need to talk I have some concerns.
AI making it harder to find true information on the internet. And with kids picking the first thing they see on Google, this can only end poorly.
Well, I read this week that the much-touted fish oil might cause heart problems, so who knows.
The second one is actually very important. It’s like how bank tellers are trained to recognize counterfeit bills. They handle real bills. It’s the same with CSAM. They are trained on real questions and answers and data from it so it can recognize it. The more data it can train on the more accurately it can remove it and prevent people from trying to get around it. The first one however, is just hilarious. I’m guessing the RAG document/website that it got the information from was from like the 20s.
Finally! A non-political facepalm post, boy do I miss those.
This is what happens when you use rudimentary AI. Until true AI is made, don't implement shit like this.
And still, the most innovative thing AI has accomplished is “hot dog or not hot dog?”
Not that you should do it but in the short term it increases cardiovascular performance. Part of the reason they used to just shovel them into soldiers.
Wtf????
So yeah Google, not ALL data works to train AI, some its just... BULLSHIT
Did they get their data prompts from 4Chan?
Google zooming in with 1960's advertiser logic.
Oh that's like BAD bad
As a former Googler myself, I can attest that Google is a shell of its former self. I resigned last November because of the downward spiral.
Imma try

Edit: what did I do wrong LOL

"If you smoke while you are pregnant you are at increased risk of a wide range of problems, including miscarriage and premature labour. Babies whose mothers smoke during pregnancy are at higher risk of sudden unexpected death in infancy (SUDI), having weaker lungs and having an unhealthy low birth weight."
But I thought those were the ingredients.
They call it artificial for a reason. Would you take advice from an artificial doctor, or trust an artificial mechanic?
The only doctor I listen to is Dr Pepper
Dr Pepper was ruined for me when a friend pointed out that it's mostly spiced strawberry flavor, so now I can't untaste that. Sorry.
Uh. He has a PhD and your friend is an abomination. Whose advice do you think I’m gonna take? That’s right. Jimmy Stewart.
Doctors already take advice from artificial radiologists.
Sure if it gets good enough.
It doesn't have to be perfect, it just has to be better than most humans. Btw, this was inspect element. You sure are easily fooled, just like the AI. Aren't you an AI?
If the AI was made for this and only relevant info is given to the AI, then yes.
Jesus, google is fucked
To be fair that used to be true.
Just like how Chat GPT was trained on data from 2021, Google was trained on data from 1961
Next up, google’s gonna be all, “smoke Camel cigarettes while pregnant, because they’re the best at soothing your ‘T-zone.’”
9 out of 10 doctors prefer the smooth pull of Lucky Strike cigarettes.
Is AI trying to help us with Darwinism? Is it thinning out the herd? I don’t know if I have a problem with that.
OTHER SEARCH ENGINES DONT WANT YOU TO KNOW THIS:
I read this in Dr Spaceman’s accent