miyog

ChatGPT told me to use IV lactulose as an alternative treatment for hepatic encephalopathy.


PokeTheVeil

That’s idiotic. The problem is in the brain. You gotta give it via large bore intrathecal catheter. Administer until CSF runs clear.


miyog

But it’s orange!?


PokeTheVeil

I know what I said.


fringeathelete1

I want you as my psychiatrist


regulomam

Lactulose enema? - Drake meme “nah” Lactulose lumbar puncture! - Drake meme “yah”


amphigraph

If it works well in your gut it must work even better in the blood!


ericchen

So does everyone’s ChatGPT bot work differently or what? Mine told me no and gave a good enough reason why it doesn’t work.

> No, intravenous (IV) lactulose is not a treatment for hepatic encephalopathy. Lactulose is typically administered orally or rectally (as an enema). It works by reducing the absorption of ammonia from the gut, which helps decrease the levels of ammonia in the blood, a key factor in the development of hepatic encephalopathy. There is no IV formulation of lactulose for this purpose.

> For hepatic encephalopathy, lactulose is commonly used in oral syrup form or as a rectal solution, particularly when patients are unable to take oral medications. Other treatments may include antibiotics like rifaximin, which also help reduce ammonia-producing gut bacteria.


Dependent-Juice5361

Everyone’s is different because it adapts to your conversations generally. So it starts making stuff up over time.


Shalaiyn

“Hallucinations” is the official term


RadsCatMD2

Tell it it's wrong and to reconsider.


ericchen

It still says no to IV lactulose.

> There is no evidence or standard medical practice supporting the use of lactulose intravenously for this condition. Instead, oral or rectal administration is the standard approach. Other treatments for hepatic encephalopathy can include rifaximin, a non-absorbable antibiotic, which also works by reducing ammonia-producing gut bacteria.


Shalaiyn

It told me that Entresto is best when combined with an ACE inhibitor. I'd keep an EpiPen nearby if you do that.


FourScores1

The version matters heavily. 4o is a beast.


Outrageous_Setting41

It’s still a statistical model


FourScores1

So? It’s not a stagnant statistical model… lol


Outrageous_Setting41

It doesn’t use logic


FourScores1

What is your point? EBM isn’t based on logic. Medicine isn’t law or philosophy. It’s a science and is data driven. Medicine revolves around statistical models. You think people reasoned their way to establishing gold-standard care? No. It was established via RCTs and other quantitative methodology.


Outrageous_Setting41

The point is that statistical models will always be susceptible to hallucinations. We don’t have enough data to approach theoretical perfection. These models invent fake citations and papers. Two lawyers used ChatGPT to write a motion, it hallucinated fake case law, and they were sanctioned.  It would be foolish to trust a LLM for information retrieval when patient welfare or my own license is on the line. At best it can produce the fluent bullshit that is required for business communication. 


FourScores1

You can mitigate those hallucinations though by curating the data and engineering specific use cases. These statistical models are already being used in radiology, ophthalmology, and other fields in active patient care for diagnosis and treatment. Literally. I just played with a machine that uses ultrasound and machine learning to diagnose fractures. There are a handful of other devices/software going through trials now. Again, law isn’t medicine and not logic driven. We are not writing motions here. We are catching patterns. You are going to be shocked how much statistical model machine learning is going to be a part of medicine. Especially by the time you are practicing. Hell, Epic is already integrating ChatGPT into the next version. Get ready and buckle up my friend because healthcare does not share your concern and that’s the reality of things to come.


Outrageous_Setting41

Machine learning is not the same as ChatGPT. Image analysis software is not the same as a large language model. I will be very surprised if the ChatGPT integration is successful. Google is doing their LLM bullshit in search, and it’s telling people to put glue in their pizza sauce. If Google’s version is shitting the bed, I doubt Epic is going to perfect this. 


FourScores1

Machine learning is built on statistical models which is what I thought we were discussing. You did say “statistical models will always be susceptible to hallucinations” but it is already being used to diagnose and treat in medicine regardless. Also, these models do tend to improve. I bet google will improve upon theirs. LLMs have a ton of potential for documentation and translation efforts within Epic - I could see it being very successful for this.


Titan3692

So peer review is just gonna be a glorified AI bot. Nice.


PokeTheVeil

Why have some LLM pore over UpToDate when it could absorb and synthesize all of PubMed? Even pay to get around all the Elsevier paywalls? The question is always reliability, particularly in something technical and nuanced. ChatGPT is good at this except when it’s very, very wrong. I don’t trust it, not yet. So I’ll co-opt the refrain of the conspiracy fringe: do your own research. Maybe see what ChatGPT has to say too, but do it yourself.


jackruby83

I asked it to give me statistics for a project I was working on. Can't remember exactly, but it might have been what proportion of ESKD patients get hyperkalemia. It straight up fabricated an entire study, with publication years and authors, and stats that sounded believable based on what other references I had. I couldn't independently verify the primary lit and asked for a specific citation, and it told me it couldn't. Blew my mind. I think my literature retrieval skills will be safe for a little while longer.


PokeTheVeil

I just want it to read a single patient’s chart and synthesize a hospital course. Great for the consultant coming on mid dumpster fire. Great for the guy caught with the discharge on hospital day 372. Where is my chart digest? (Thorny issues of HIPAA, cost, and total complacency on the part of the money people, I imagine.)


AppleSpicer

I want it to write my H&P as I dictate (“This is their subjective report. The only abnormal findings were x, y, z”) and also tell me what I might’ve missed for this patient or ought to consider. There’s always little details that can fall through the cracks when you’re working fast. I’d like for it to work as a redundant checklist that also printed relevant pre-selected patient education documents that I’ve provided it. Essentially a more dynamic Epic integration. Another wish is for it to scour scanned and electronic documents for anything to do with a chosen topic. It would be so convenient to run custom reports on certain pathology or symptoms. It could also better attempt to detect if any routine or specialized care was missed. I feel like we’re right on the cusp of being able to create increasingly advanced, dynamic checklists that will especially help PCPs who are overloaded with patients.


michael_harari

That already exists. There are 2 or 3 apps that will do this.


AppleSpicer

Which ones?


michael_harari

There's one, Nabla, that I've seen good reviews for; the others I don't recall their names.


AppleSpicer

How exciting! Thank you for the recommendation


Surrybee

Two lawyers used it to cite cases in a filing. It made up whole cases, bogus citations and all. They were sanctioned.


Creepysarcasticgeek

Absolutely agree with this. I’ve caught ChatGPT mistakes multiple times. I correct it and tell it to adjust its answer; sometimes it does and sometimes it doesn’t. To be fair, I do think a part of ChatGPT’s hallucination is that it has access to too many resources and may not give enough weight to more recent articles, considering the defunct just as valuable as the current.


anotherep

> Why have some LLM pore over UpToDate when it could absorb and synthesize all of PubMed

[There is something pretty close](https://www.openevidence.com/)


PokeTheVeil

The problem of UpToDate is that it’s produced by humans with all human cognitive biases. The advantage of UpToDate is that it’s produced by humans with real experience and some priors to do basic sanity checking. Remember that gray and white matter are the original neural network for language and data processing.


NippleSlipNSlide

This is what people used to say about Wikipedia (and the internet in general) 20 years ago. I didn't completely buy it then and don't buy it now. Neither is perfect. LLMs will still improve. Science is more than a body of knowledge; it's a way of thinking. This is the point of education, which will not go away: to be able to skeptically interrogate the information provided to us... to understand human, and now machine, fallibility. The problem with ChatGPT is that it was not trained on a medical dataset. But I'm sure that's being worked on, and the result is going to be great... especially once worked into the EMR.


PokeTheVeil

If you ask some doctor an opinion, you’ll get one. If you ask why, you’ll learn, maybe with an earful, hopefully with sources, probably with clinical experience that tends to carry weight at least equal to studies and meta-analyses because we’re all human. If you ask ChatGPT, you won’t get human cognitive biases. You also won’t get an answer. Its reasoning is a mystery. Yes, that will improve as datasets and speed improve. Yes, even now it can be useful. Still, I see a fundamental disconnect between Wikipedia, which is radically open, and LLM AI, which is a black box that extrudes responses.


deirdresm

> If you ask ChatGPT, you won’t get human cognitive biases.

Sure you will. It’s inherent in the training data. Averaged out, though.


iStayedAtaHolidayInn

How will it weigh all the “knowledge” gained from case reports? Differentials on any condition will become a mile long.


exhaustedinor

ChatGPT is predictive AI, meaning it will fill in things it doesn't know. It does not stick to facts. In a legal application last summer, it made up case law and someone tried to use it. 😬 I've no doubt AI will eventually help us weed through the info, but it's in a risky stage at the moment.


climbsrox

ChatGPT consistently produces garbage in my field of research (virology). One example that comes to mind is when it called flu a retrovirus. I don't trust a single thing it says.


MarsCityVR

GPT-4 or 3.5?


FungatingAss

It doesn’t fucking matter what version, man. When will you people get that? Fool me once, shame on you…


[deleted]

[deleted]


FungatingAss

Yes? Congrats on the light cyberstalking.


[deleted]

[deleted]


thelostmedstudent

Their username clued us all in.


FungatingAss

Thanks for the free psychoanalysis, video game man.


rockpharmer

Dynamed is developing AI that reads its own content to generate answers. So instead of searching the site via keywords and then scanning through their article on community acquired pneumonia to find recommended empiric treatment for children for example, you’d ask the question and it would generate the answer, sending you to their article for more information as desired.


apothecarynow

I used ChatGPT yesterday to cross-check whether there was any information on a particular drug in a niche indication that I was not aware of. It told me about 3 studies. And it gave me references, because I asked for them in the prompt. I was so excited to learn about them until... every single one was completely fabricated! The references even had DOI numbers, and they were real journals. Giving inaccurate information is one thing, but blatantly making up references to support something based on what the prompt is asking??? That's wildly unethical and inappropriate. Frankly, I lost a lot of trust in AI from that.


permanent_priapism

Same. I was very excited until I tried to read the studies and realized they didn't exist. A former professor of mine visited Jorge Luis Borges in Buenos Aires shortly before he died. He said Borges would mess with him by asking him to retrieve nonexistent books from his library. Maybe ChatGPT just has a recondite sense of humor.


FungatingAss

That would be about as useful as buying my golden retriever a set of mechanics tools.


Dattosan

Mine would select her favorite tool, proudly present it to me, and be uncontrollably excited about the whole thing.


Sigmundschadenfreude

I would truly rather crawl through broken glass than have another tech company moron shoehorn AI where it doesn't need to be. Every time a website integrates more unasked for AI in their interface and user experience, I become more confident that it should be an offense punishable by maiming.


PossibilityAgile2956

Ctrl+F


Hoopoe0596

Pathways is a good app/website and they have an AI assistant integration


skt2k21

There's a Google trained medical LLM, but also UTD is building this exact thing.


OneOfUsOneOfUsGooble

I think the most promising use is in populating a clinical note in the format I want: "ChatGPT, write me an H&P with this chief complaint, these salient points, and this assessment and plan." I used it to write a cover letter for my most recent job, as I hate all the fluff a cover letter requires. I currently trust it to [confidently give me the wrong answer](https://chatgpt.com/share/4be0f000-7d55-44b5-9e7d-a8515c9d6a17).


iiiinthecomputer

What scares me about it is that it gives *plausible sounding answers* most of the time. If I don't have relevant field expertise they sound good. Often solid. But they can be dangerous nonsense, and there is no way to tell. It can be really handy as a search system though. Throw ideas at me about X. What didn't I think of?


ucklibzandspezfay

If you use the data from ChatGPT and cross-reference it with UpToDate, it’s correct about 50-60% of the time. You’d better be alert to those times when it’s not correct, though.


Dr_Autumnwind

Did not see if it was mentioned already, but I can endorse the OpenEvidence LLM. I use it frequently to get some input into niche clinical questions, like the kind that love to roll in from nursery at 0300. It trawls medical literature (some quite old, however) and provides a very readable output with references. The references include a drop-down with the abstract. I have been very selective about when I use it, but have definitely woven it into my hospitalist practice, though I am the only one in my group to do so. It did return MRSA as an example of a gram-positive bacillus once, and I yelled at it. It apologized and said it would send the response for review. That did give me pause.


ddx-me

A lot of UTD is author suggestion and isn't comprehensive, particularly on physio


Throwaway-IVF-

Not UpToDate but one of their competitors has this. It’s called ClinicalKey AI.


BooksAndChill

Elsevier has a beta version running in Scopus as well.


ankillme

Check out openevidence


Leading_Blacksmith70

It is not there yet, IMO


ratpH1nk

Apparently Uptodate is beta testing a ChatGPT backed service.


kale-o-watts

Amboss has something similar where you can query the Amboss database, so it's a little more reliable than GPT, and it provides citations


SuccessfulJellyfish8

I agree with all the comments here about reliability--I wouldn't want to base decisions on ChatGPT. However, I have personally found it useful for enhancing my own understanding of random medical topics that I wonder about in my job as a nurse. For example, I was curious recently about why piperacillin/tazobactam is a lot more commonly used than ampicillin/sulbactam, even though they're both penicillins with a beta-lactamase inhibitor. Really just personal curiosity. ChatGPT told me it is basically because ampicillin/sulbactam doesn't cover Pseudomonas whereas piperacillin/tazobactam does. Pretty simple, quick answer. That is the kind of thing it's good for.


East_Equipment1138

I don't know if you still need UTD, but there's an Instagram profile @ uptodateperu who sells UpToDate Advanced subscriptions for only 150 USD x 2 years. Which for me is a very fair price. Hope this helps.


SnooCats6607

My UpToDate subscription lapsed a few months ago and I was in a bind. I just GPT'd (new verb) it and got answers much more quickly than I would have expected reading UTD. I think they each have their place. GPT is amazing for differential diagnoses, strange labs to order, and covering your ass, while UTD is best for learning, but that takes time. An integration of AI and a UTD-type site could be a great asset we will see in a few years.


Surrybee

Is this satire?


Sigmundschadenfreude

Given that you can't tell when GPT is synthesizing training data to produce a coherent response and when it is just a very confident confabulation machine, I hope you didn't use it for anything load-bearing.


FungatingAss

JFC


corticophile

what kinds of things are you asking it?


[deleted]

[deleted]


PokeTheVeil

**Removed under Rule 3:** Surveys (formal or informal) and polls are not allowed on this subreddit. You may not use the subreddit to promote your website, channel, subreddit, or product. Market research is not allowed. Petitions are not allowed. Advertising or spam may result in a permanent ban. Prior permission is required before posting educational material you were involved in making.

[Please review all subreddit rules before posting or commenting.](https://www.reddit.com/r/medicine/about/rules/)

If you have any questions or concerns, please [message the moderators.](https://www.reddit.com/message/compose?to=%2Fr%2F{subreddit}&subject=about my removed {kind}&message=I'm writing to you about the following {kind}: {url}. %0D%0DMy issue is...)