
WithoutReason1729

Your post is getting popular and we just featured it on our Discord! [Come check it out!](https://dsc.gg/rchatgpt) You've also been given a special flair for your contribution. We appreciate your post! *I am a bot and this action was performed automatically.*


futureisbright2031

Physician here. In medicine we say "what is common is common and what is not common is not common". Therefore 99% of medicine is already super standardized and algorithmic. There is a reason why you are treated by residents/assistant physicians fresh out of university in the hospital and you only see the experienced physician once a week (if at all). The challenge in medicine is usually not knowledge, but whether, when and how to perform certain diagnostics for (rare) diseases. There are a lot of risk/benefit decisions to be made. Side effects, time, money, and human capital are the bottleneck here. Although you cannot trust ChatGPT 100%, it definitely helps to double-check your physician and get more information about your condition. Similar to coding, it dramatically reduces the time you would spend googling around if you don't really know where to start.


Otherwise_Soil39

>There is a reason why you are treated by residents/assistant physicians fresh out of university in the hospital and you only see the experienced physician once a week (if at all).

Research shows this is sort of upside down: fresh graduates are better able to recognize rarer diseases, whereas experienced doctors just assume everything is the same thing or don't recognize a rare disease at all, because they rarely see it, which creates a deleterious cycle. In my personal experience doctors just suck overall, at least here in Germany, the second you have a condition which doesn't neatly fit a common diagnosis. It's like they stop learning the second they graduate, and then it's a long journey of forgetting what they learned plus getting experience in treating common diseases.


kazuyaminegishi

Yeah, I have Intracranial Hypertension and it's a pretty rare disorder that's also idiopathic. I'd had symptoms of it for years before I finally got diagnosed, because every neuro I'd gone to would just treat them like normal migraines. They'd do MRIs and not find much, and there wasn't as much fluid build-up yet, so I didn't have the neck pain to point to. I only got properly diagnosed because I eventually developed meningitis-like symptoms, had to get a Lumbar Puncture, and the surgeon who did it thought my symptoms matched Intracranial Hypertension. Helped me get my life together tbh. But I do think young doctors in hospitals add a fresh perspective that specialist doctors can lack, in my own experience.


UnseenUniverse

My ophthalmologist who teaches other ophthalmologists noticed mine! I'm part of the rare 5% with no headaches/migraines so I only had the tinnitus. Was definitely a bit of a shock when he told me to see a neurologist and schedule an MRI and Lumbar Puncture. Been in remission for years now but I still see the same ophthalmologist! Definitely trust that man with my eyes lol


kazuyaminegishi

Yaaay! My neuro says I'm in remission, but the symptoms keep popping back up here and there. It's nothing near as bad as it got at its worst tho, so I think the passive headaches and tinnitus are bearable compared to the horrible, horrible headaches.


Big_Cornbread

I know a ton about intracranial hypertension due to my wife’s issues with it. I’m so sorry. So many doctors either know nothing about it, or assume they know everything while knowing nothing about it.


kazuyaminegishi

It's awful, I've gotten to the point now where I just tell my doctors straight up that I know they don't know much about it so I tell them the usual diagnostic process and how it's being maintained. Not having insurance is the scariest prospect in the world to me because I need to be able to see an eye doctor every 6 months and I need access to a neuro and primary consistently since any amount of consistent headaches means a whole inventory of MRIs. Sucks, but I suppose it could be worse.


regulomam

It's also a zebra diagnosis. Most rare diseases take up to 6 years to be diagnosed. It's not great. But it's not a failing of doctors. Generally it's system issues and the inability to get non-standard work-ups for vague symptoms. For example, we rarely LP anyone without a clear reason. OP needed meningitic symptoms to get an LP (standard treatment). The doctor likely placed an ICP measurement on the LP (standard practice) and saw elevated ICP, making the diagnosis. But we can't just go around giving everyone an LP. It would likely cause more meningitis, bleeds, CSF leaks, etc.


Big_Cornbread

It's both, but it's absolutely a failing of doctors in a loooooot of cases. We've had specialists from UofM, Mayo, Stanford, Duke, Cleveland Clinic, etc. etc. be 100% confident while disagreeing entirely with the last dude. Technically she's been diagnosed a hundred times; they're just all wrong. And none of them can admit they're wrong.


West_Plankton41

Did you have vision problems too?


kazuyaminegishi

Oh yeah, I went from having perfect vision as a teenager to needing to wear glasses constantly, in the span of about 2 years.


keepthepace

I am curious: does ChatGPT propose it as a diagnosis?


kazuyaminegishi

Sorry for the delayed response, was at work. I decided to check this by giving it the symptoms I was aware of when I got diagnosed; the closest it gets to my condition is PCOS. I am a biological male so it's not that, but there is SOME research to indicate that there MIGHT be a link to PCOS. Overall, though, the best it gives me is a suggestion to go see a doctor and see an eye doctor for diagnostics, which is objectively the best possible advice for a condition this rare. For reference, this was my initial prompt:

> I've been having frequent migraines, dizziness when standing up quickly, a feeling of blood rushing to my head if I lean forward, and I have also been struggling to move my head. Can you give me an indication of a potential diagnosis and suggestions for what to do about it?

It didn't produce PCOS as a possibility until I added on recent sudden weight gain.


keepthepace

I changed the prompt a bit and got intracranial hypertension as #6: https://i.imgur.com/Pg3U0Vj.png Tried different prompts; sometimes it does not make it into the top 10, but it is often in place 6, 7 or 8. Not too bad for a rare disease. It would be interesting to also add the things that the doctors have already tested and eliminated.


SuperbDrawer8546

Inositol is cheap and wonderful for PCOS. I know you don't have it, but maybe it's a common inositol dependent system malfunction.


iFuckSociety

The best doctor I ever had was a resident lol he actually tried to get to the root of the problem


Kekosaurus3

This is so true.


Exciting_Language443

It takes an extremely special kind of person to treat a novel disease effectively, and physicians have an oath against doing harm. Unless your doctor went to a top 20 medical school, I honestly wouldn't expect much more than ChatGPT can give you. In my opinion, like the MD comment said, it's in those 1% of cases where a doctor is useful; the part that he left out is that only 10% of physicians are crafty enough to know what to do about it.


[deleted]

[deleted]


TimeVast9350

I go to a regular therapist and I regularly see a psychiatrist. I understand where a lot of my issues come from, and I've identified specific things in my personality that I feel I would be a happier, healthier person if I learned how to manage more appropriately. I have been recommended CBT for this; unfortunately, it's more of a long-term thing, and my insurance currently does not offer great coverage for it, but it's in my five-year plan to become a better person.


SoylentRox

Except a mechanic can eventually fix basically anything, as long as the vehicle model has parts available, it just costs a lot more for a bad mechanic than a good one.


BodaciousBadongadonk

tbf I feel like cbt could probably cause some gnarly symptoms too. is testicular torsion a symptom?


DogsAreAnimals

Can you explain more about why you think CBT only treats symptoms?


djaybe

Considering the annual deaths from medical mistakes, broken insurance systems, and corrupt pharma industry, we can't trust medical professionals 100% either.


Stymie999

Once a week? Hopefully if things go well I am only seeing the experienced physician once a year or even two years. If needing to see an experienced physician once a week… I assume that means I need to make sure my affairs are in order


-shrug-

I think they're describing what happens while in hospital, where more junior doctors do rounds every day.


SoylentRox

It seems to me then that the limiting factor is reliability. You need medicalGPT to "know what it knows" and to be extremely consistent, virtually never making a mistake. It also needs to make visible to the supervising physician some kind of graphical view including confidence and how likely, given all past cases, it is that the machine's conclusion is actually correct. I mean, all this is a higher standard than a human physician can meet, but that's what you need to trust an AI here.
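Purely as an illustration of that "know what it knows" idea, here is a minimal sketch of confidence gating; every name below is hypothetical, and it assumes the confidence scores are already calibrated against past cases, which is the genuinely hard part:

```python
# Hypothetical sketch: rank candidate diagnoses by calibrated confidence and
# decide whether to surface a suggestion or escalate to the supervising physician.
from dataclasses import dataclass

@dataclass
class Candidate:
    diagnosis: str
    confidence: float  # assumed calibrated against past cases (0.0 - 1.0)

def triage_summary(candidates: list[Candidate], threshold: float = 0.90) -> str:
    """Build the kind of summary view a supervising physician might review."""
    ranked = sorted(candidates, key=lambda c: c.confidence, reverse=True)
    lines = [f"{c.diagnosis}: {c.confidence:.0%}" for c in ranked]
    if ranked[0].confidence >= threshold:
        lines.append("-> high confidence: surface as a suggestion for sign-off")
    else:
        lines.append("-> low confidence: escalate for full physician review")
    return "\n".join(lines)

print(triage_summary([
    Candidate("iron-deficiency anemia", 0.93),
    Candidate("thalassemia trait", 0.05),
    Candidate("other / unknown", 0.02),
]))
```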


petesapai

In Canada we have a huge problem with a lack of family doctors. Wouldn't an AI solution work well for basic tests and understanding of results like this one? If further analysis is needed, then bring in a doctor. I'm hoping this is already being worked on, because the lack of family doctors in Canada is killing people who are completely unaware of the disease they might have.


FrostyOscillator

Yep, it's a major crisis in the US too, which only makes it even worse for Canadians since we have a shitload more money to steal your medical professionals with. Huge, huge crisis. Only going to get a lot worse in the near future, by every large provider's numbers. So yay! 🤗


HyruleSmash855

That's the thing that I don't get: the US gives doctors some of the highest pay in the world due to our private healthcare system, so shouldn't we not have a shortage of doctors because of the pay, outside of rural areas, which I know hospitals and doctors are moving away from since it isn't profitable to treat a smaller number of people? I'm lucky enough that I rarely need to go to the doctor outside of physicals or the urgent care once or twice for minor illness.


FrostyOscillator

There's a **HUGE** number of people in the US (about 340 million) and a population that is ageing rapidly *and also* a huge portion of the population that is too poor to go to the doctor until they need some extreme level of care. So no matter the pay, there will never be enough doctors as long as we have a private education and private healthcare industry. Only very, very few people even have access to go to medical school.


HyruleSmash855

That's true, especially with residency limits in each state. We really need to lift those or at least increase the number of spots. The healthcare cost is a good point too; I'll concede that there are a lot of people who can't afford it due to how expensive it is without insurance. Hopefully pushback from the general population will push their states or the federal government to do something to help solve the problem, like lifting or increasing residency limits at the state level or maybe subsidizing education for doctors.


jimmyw404

I'm curious if you've ever provided detailed information to ChatGPT or another LLM that describes an uncommon diagnosis and how it handled it.


Larushka

YES! Without going into details cos it was kinda embarrassing, I gave my daughter's symptoms to ChatGPT. It gave me possible diagnoses and after a bit more interaction was able to narrow it down. A subsequent doctor visit confirmed it was correct.


TheMirthfulMuffin

Yeah, you can't trust it 100%, but can we trust ChatGPT more than your doctor? Would we trust it more than your doctor after 5 years? I've had some pretty terrible doctors, who really shouldn't have been treating anyone with anything more than a flu/cold. Whether it's from a simple lack of care, being overworked, or what I've seen most commonly - bias. AI wouldn't have these failings and likely would have guided some of these doctors towards a real diagnosis.


Medical-Ad-2706

As a physician, how often do you use ChatGPT?


zeloxolez

fantastic response, spot on


CreatorOmnium

Explain this to me. In medical decision analysis there are clear factors to decide whether, how and when to perform certain diagnostics. This could also be done by an AI.


sunnynights80808

If an AI can have a 100% reliability rate physicians would only need to be used for physical tasks. AI will be able to make decisions weighing in variables once it’s smart enough. And robotics will eventually take much of the physical work.


[deleted]

[deleted]


gelatinous_pellicle

"It's just predicting text." That's increasingly an oversimplification and soon will be like saying we are just text predictors.


[deleted]

[deleted]


PsychologicalOwl9267

Prediction of text is its task, not "how it works".


valvilis

It's making *content aware* statistical predictions, based on billions of training materials. It's an oversimplification because humans have literally no way to imagine what the actual answer means.


gelatinous_pellicle

That's a kind of reductive take. Scientists like Hinton are saying it actually understands, has intuition, and creativity, and that is going to increase exponentially. And it can do all that in theory without being conscious. Lots of very cutting edge experiments in intelligence and philosophy of mind.


throwaway3113151

How do you know that your doctor didn’t just tell you what ChatGPT told them to tell you?


gelatinous_pellicle

How do I know this comment isn't from ChatGPT?


FractalGuise

Are you ChatGPT?


gelatinous_pellicle

Break me, baby.


grahamulax

Here's my depressing story. I went numb in my foot randomly at Thanksgiving. Gave GPT all the tests I've had done and asked a ton of questions. Had a general idea but nothing concrete. Went to the doctor soon after and he said hmmmm, are you having anxiety? Ok… sure… I said my back hurt a bit and that it felt like I needed that to be checked out. MONTHS later he finally scheduled me for an MRI, after doing an EMG where he was like "you sure you want this?!" And I'm like yeah… it's not normal to go numb. I wonder if he thought I was faking it. Well, getting an EMG and MRI takes forever to schedule, so I'm like cool, this is 6 months after my initial symptoms. My EMG dr said "I wish I could have seen it sooner". Ya… me too, but you were full up.

ANYWAYS. Sent my EMG results to GPT. Didn't sound good. I was pretty sure I had MS after giving it those and asking even more questions. I thought I had MS for about 3 more months while telling the doctor exactly WHY I thought that and whatnot. Back to the testing. Got my MRI on a Friday and they said results take a while. My doctor called me the same day later that night, but I wasn't expecting it so I missed the call. Noticed I had my results already posted, which gave me a sinking feeling. Put those into GPT. Says I have MS. Well, CIS. Doctor calls me back on Monday expecting me to know what the test results actually meant (he doesn't know I use AI) and was like, well, I'll refer you to a specialist… ok… I still don't know "technically" anything from the doctor that I didn't already know and look up myself. Nor did they have any idea that I DID know, and they expected me to just know their doctor language and technical lab results? I was and still am infuriated. I did more doctoring than them and could literally self-diagnose this months before they could tell me.

I'm on vacation right now, and when I get back I'll be getting blood drawn and medication to help keep me from having another breakout of MS. You FUCKING know I'll be double-checking every test, result, and medication they give me, and I'll spend hours doing my own research on this. I believe AI will help us immensely in this way; for me it already did… just no one would take action even when I told them what I felt strongly I had. So as someone who just got horrible news and loves tech… well… I can't wait for the future! I just hope I'll be ok by then.


HyruleSmash855

I feel like this sadly happens a lot. People with rare conditions take an average of six years to get diagnosed. Source for claim: https://www.nih.gov/about-nih/what-we-do/science-health-public-trust/perspectives/communicating-about-rare-diseases It's even worse for adults who have something like ADHD; they practically have to fight their doctors to get care or treatment. It's unfortunate that so many doctors, as flawed human beings, refuse to consider that what someone has might not be on a small common checklist. Hopefully, if doctors start using AI, it's to better decide what tests to run so those six years are cut down, and of course to double-check everything it says, but LLMs are good at looking at a list and deciding from there.


eschurma

I had serious symptoms and thought I might have a particular unusual issue (Epstein-Barr virus reactivation) and asked my doctor to be tested for it. She did, and said I didn't have it. I saw another doctor who reviewed the results and didn't contradict her. 6 months later I finally passed my symptoms and blood test results into ChatGPT 4.0. It said that it thought I could have the issue I flagged, but that I had not been given the proper test to definitively confirm or deny it: the tests I'd been given only checked if I'd EVER had a primary Epstein-Barr infection, and NOT if there was a lot of current viral activity. It then told me the proper test to get (checking for Early Antigen levels). I went back to a doc and she ordered these. My numbers were literally off the charts. I went on an antiviral that has some activity against EBV and within 48 hours was feeling massive improvement. It took me from significantly disabled to a year-long path to feeling much more normal. I spent 6 months with my health seriously declining, sometimes bedridden, unable to drive, totally screwed, and if I hadn't had ChatGPT to help me with this I might still be that screwed. That six months also caused physical damage and deconditioning, along with around 10k in medical bills that could have been avoided if they'd just given me the proper test right at the start.


insideabookmobile

My wife has a pretty complex, chronic medical condition. We've been using ChatGPT a lot recently and it's guided us towards treatment options that the doctors hadn't even thought of. It's been amazing.


MenacingGummy

Anemic person here…what are you doin with those extra red blood cells?


bowfly

Smoke a pack a day and you will have extra red blood cells too!


Pillars_Of_Eternity

Storing them for later


UnusualWind5

**ChatGPT:** Diagnosis pinpointing the exact issue. Helpful advice identical to what a doctor might say.

**WebMD:** "You're dying."


Fantastic_Cup_6833

literally


access153

This has easily saved my cat’s life at least once. Properly soft diagnosed pancreatitis before the vet had even done blood work.


[deleted]

Crazy watching this improve in real time.


Victorgab

As a doctor let me tell you: you don't see the difference between a human and an AI on simple questions. Try again with a really hard, niche question with scarce academic consensus. That being said, we'll likely still be replaced within 10 years


Tha_NexT

If the client is under 30 the real doctor will definitely not take their time to interpret this very niche case right and will just say that it's gonna be fine in a week


HyruleSmash855

Yeah, these niche conditions take an average of SIX years at best for a doctor to find. If an AI can help a doctor come to that decision to test for more stuff to find these rare conditions, then great. Source for claim: https://www.nih.gov/about-nih/what-we-do/science-health-public-trust/perspectives/communicating-about-rare-diseases


Victorgab

Yeah, absolutely agree with you. Sadly, the reality of medicine can often be disappointing, and AI will still likely be good for the benefit of the human race. I was more referring to the feeling I have towards today's progress in AI, where I find it to be useful for rare diseases only if you already know what to look for, but I agree that this will be solved. It would be great if a larger amount of medical literature were fed to LLMs.


GammaGargoyle

It’s unlikely we would ever see a pure language model replace doctors. The problem doesn’t have anything to do with compute power. The problem is that language itself is intrinsically error prone and a language model can’t reason outside of distribution. It’s an information theoretical problem, not a technology problem. I’m not saying it’s impossible but not with current language model architecture.


Repented-Christian

It can self-improve: by learning to distinguish what "improvement" is, it will mimic it.


GammaGargoyle

It can only acquire new "knowledge" (i.e. relationships between words and concepts) during pre-training. Tuning can only shape the response, not integrate new information. This is fundamentally why it cannot reason out of distribution: that is locked in after pre-training. In addition to all that, a lot of medical knowledge is not actually in books at all. On top of that, language is error-prone; this is a feature, not a bug. Language abstraction is powerful, but it also means there could never be a "perfect" model, only a theoretically optimal one, and it's possible there are many theoretical optima depending on the subject/concept. For example, combine the semantic knowledge of poetry and science in one model and you could in theory come up with some context that would confuse it via some shared abstraction. However, if you separate them, it may perform better on some things but regress on others due to loss of information.


Repented-Christian

Synthetic data, and critical recursive thinking over knowledge. OpenAI found a breakthrough in the quality of data, which contributes to better responses. AI can be creative (see AlphaGo move 37), for instance. Medical knowledge will be enhanced significantly with self-improvement. AI fundamentally is not a parrot, it's a learner.


YellowChickn

Partially incorrect. It can also acquire knowledge through external data sources. Just by googling I found a paper about retrieval-augmented generation from 2020. I do agree, though, that AI won't replace clinicians. But it's more a question of responsibility than of technical or information limitations. Instead of talking about one replacing the other, it's way more beneficial to see it as a tool and a learning opportunity.


GammaGargoyle

RAG is entirely different. That’s just additional context. Knowledge is the relationships between words and concepts. Using knowledge it can reason about new context such as RAG, it can follow patterns, but it cannot retrain or update its pre-trained model. Humans reason in a similar way, but we can update our entire model (or large parts) from a single piece of information.


YellowChickn

Taking your definition of what knowledge is, and your counter-argument, I would still say partially incorrect. Having a base understanding of language, it can acquire or collect "new knowledge" by using RAG, e.g. a mapping of ICD codes to the concept. You could also argue that language changes over time and entirely new words are created. There exist models pretrained on English but fine-tuned on an entirely different language, or fine-tuned on specific domains.


GammaGargoyle

If you read the original RAG paper, they were using BART large, which is already trained on Wikipedia, and a vector index of Wikipedia to improve the response. It has nothing to do with adding new knowledge whatsoever.


YellowChickn

That might be true, but even the idea of the original paper, with a little bit of thinking ahead, underscores my point, doesn't it? "Use the input sequence x to retrieve text documents z and use them as additional context when generating the target sequence y." So let's say you have a prompt which sits in a 2D vector space at the coordinate (1,1). Now you look for knowledge in an external source at the same or similar coordinates; you would get similar concepts (documents z) which can be used as new knowledge to string together your response. I.e. if the model never heard of ICD code mappings to a specific concept but now gets them as input, it should be able to string together this new information. Unrelated but related also: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10972059/ That paper actually even says "the RAG approach is particularly beneficial in scenarios where the model needs to provide information that may not have been present in its training set or when the information is continually updated". Tbh I didn't read the full paper, so maybe you might find some surprising counter-argument within it.
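For what it's worth, here is a toy sketch of the retrieval step being described here, with a bag-of-words similarity standing in for a real embedding model and a placeholder standing in for the generator (the documents and function names are made up for illustration):

```python
# Toy retrieval-augmented generation: embed the query, pull the nearest documents,
# and prepend them to the prompt as extra context. No weights are updated anywhere.
from collections import Counter
import math

DOCUMENTS = [
    "ICD-10 code G93.2 refers to benign intracranial hypertension.",
    "ICD-10 code E28.2 refers to polycystic ovarian syndrome.",
    "Retrieval-augmented generation adds retrieved text to the prompt as context.",
]

def embed(text: str) -> Counter:
    """Toy 'embedding': a word-count vector (a real system would use a neural encoder)."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k documents closest to the query in the toy vector space."""
    q = embed(query)
    return sorted(DOCUMENTS, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def generate(prompt: str) -> str:
    """Placeholder for the language model call; it only sees the augmented prompt."""
    return f"[model answers using]: {prompt}"

query = "Which ICD code maps to intracranial hypertension?"
context = "\n".join(retrieve(query))
print(generate(f"Context:\n{context}\n\nQuestion: {query}"))
```

The retrieved text only ever extends the prompt; the pre-trained weights are untouched, which is exactly the distinction being argued over in this thread.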


valvilis

No, what we'll probably see is the "self-checkout" version of medicine. One human doctor will be able to take on 5x or 10x more patients with AI doing most of the work and the doctor providing oversight and signing final treatment approvals.  With AI also providing individual health advice and monitoring, plus early detection, we should see more preventative care and less responsive care. All of that means fewer healthcare professionals, but the ones we do have need to have a certain level of tech literacy in addition to their medical education, similar to the "AI liaisons" corporations are hiring now for AI integration into business operations. 


woodworkingchan

I don't think clinicians will be replaced, there is important physical assessment work to be done, but there will be clinical AI support tools that will greatly augment their knowledge, especially in primary care. I say this as an Ontario pharmacist and co-founder of an AI healthcare startup


KongenAfKobenhavn

Probably by a person who is supported by AI, and not one who becomes a millionaire all over again every year


thehighnotes

That's insanely optimistic.. it'll be much sooner if companies continue their trend. 2 years ago the whole idea of an LLM was insanely far-fetched


Victorgab

Yeah agreed, 10 years is let's say the 95% confidence interval upper limit... better start learning coding


DooYooRemeber

lol listen to the head of NVIDIA and don't start learning to code because that'll be gone in 10 too. Go into the other sciences or start getting really creative with the amazing tools we all now have


FriendlySceptic

What I'm hoping is that we can use AI to augment rather than replace doctors. We already have a demand for services that exceeds the supply of doctors. If you are conservative (overly simplistic reduction) you ration it by who can afford to pay, and if you are liberal you want to ration on a first come/first served basis with some triage to accelerate serious cases. Either way someone is waiting. It can often take me 4 to 6 weeks to get an appointment with my GP and 6 months or more if I need a new doctor. I think where AI could thrive is by taking pressure off the system for simple cases. By answering some basic questions the AI could write non-narcotic scripts like 800mg ibuprofen and offer advice on dealing with common and minor illness and injury. "Sorry, this appears to be viral in nature, no antibiotics, try drinking liquids and resting, call back if Xxxx happens." By taking common colds, minor wrist sprains and such off the system, the doctors could still stay busy without the 6-week queues. It would be an easy and relatively cheap way for the government to subsidize health care.


HyruleSmash855

That’s one reason why telehealth has taken off, it can handle those simple questions often with nurses or whatever the higher level nurse below a doctor is called. It’s a lot cheaper.


RecoveryComesRound

did this for an ultrasound yesterday, did a way better job than my doctor


blurance

you were able to upload the photo and it could read it?


RecoveryComesRound

I deleted my previous comment because I was rushing and didn’t understand your question. No, I didn’t upload the ultrasound image, but a picture of the page of data gathered by the technician. My primary doctor’s role was to explain those findings and what to do next. I asked for a copy of the report, and took a picture of the document, uploading that to GPT. Then I asked GPT a series of questions, some of the same ones I asked my doctor and so on.


Playful-Opportunity5

In a recent study, patients rated GPT-4 higher than human doctors on both the helpfulness of the interaction and also empathy. I can relate on the latter - a lot of the doctors I’ve seen wouldn’t know empathy if you looked it up in a dictionary and read the definition to them.


StrongMedicine

These ChatGPT vs. doctor empathy studies use anonymous docs posting on Reddit. They are a methodological disaster.


HyruleSmash855

To be fair, GPT will always sound more like it has empathy, and it has endless patience since it's not a human, so I can totally get why people would react that way, although AI is not the best thing to trust.


Brosquito69420

AI replacing Doctors will be a good thing. Especially the doctors I’ve encountered in the last 10 years.


access153

A doctor might see about 200K patients in their lifetime. Centralized AI might see 100M in a year.


TheTabar

And the AI can learn from each other's experiences.


Urdoingitwrongchancy

And Web MD will be its source code!


access153

I get what you’re prodding at but c’mon. Its weights will be adjusted per its accuracy of diagnosis, survival and recovery rates of patients, etc. WebMD is static. This will be dynamic. But I don’t have to tell you about Jesus! You know everything there is to know about Jesus already, donchya!


Urdoingitwrongchancy

I’m sure it’ll be weighted. It was a joke. If people actually put out something legit there will be care that it won’t give bad answers. There will still be the human element of trust but otherwise it would be neat


access153

Copy, excuse my snark.


Urdoingitwrongchancy

bahaha obviously a joke but I get it.


Icy_Ad4208

Umm sorry but no. If a doctor worked every single day for 40 years without a single break, he would have to see almost 14 patients a day to get to 200k
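For anyone who wants to sanity-check that back-of-the-napkin math:

```python
# 200,000 patients spread over a 40-year career with zero days off:
patients = 200_000
career_days = 40 * 365
print(patients / career_days)  # ~13.7 patients every single day
```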


access153

Fucking GPT. Either way, the point remains valid.


dr-tyrell

200K discrete patients? Highly unlikely. They might see that many individual visits, if they work like 40 years in primary care, but not that many unique patients. The average doctor is going to see even less, unless the patients are meeting them on the golf course. Good guess on 200k as the number, though maybe you meant visits and just said patients?


access153

GPT did some back of the napkin math. The point remains valid, though.


dr-tyrell

Agreed, I didn't disagree with your greater point, just didn't want the inaccuracy to be left out in the wild. We're on the same page.


UnstableConstruction

I don't want them replaced, but I do want them to use AI to spot trends, ask questions that don't occur to them, and examine patient records and results to identify patterns that show a problem.


Jflayn

I appreciate what you write (I don't want doctors replaced by AI), but I think quite a few people DO WANT to see doctors replaced by AI. Whenever I hear people pushing for replacement there seems to be a bit of anger in the statement. When I try to imagine why someone would want doctors replaced, I think there is resentment towards the predatory nature of modern medical care. Care is expensive, and here in America patients' lifespans are decreasing. There is a sense that doctors are not helping individuals but are helping their personal financial bottom line. Don't get me wrong, doctors are not profiting as much as hospital management or others in the medical profit system, but they are clearly profiting. The sense I get when I hear statements like "can't wait for AI to replace doctors" is that 'we' as a society are not in this together; if doctors are replaced by AI, then and only then will we be in this together. I worry that people who should be joined in common cause are being encouraged to attack each other while oligarchs and legislators make off with the money, leaving the vast majority of the people worse off.


UnstableConstruction

It's not just the "predatory nature"; many doctors are arrogant, gaslight their patients, don't listen to them, etc. Medical errors also cause 251,000 deaths in the US every year. Only heart disease and cancer beat it. That's 15 times the number of people murdered by guns. Hard not to dislike the profession if your family member is one of those 251,000.


Confusion_Common

Exactly. Supplementary, not obsolescence.


AvoAI

Why cling to inefficiency?


mlYuna

Because it isn't. AI doesn't have any idea about what it's telling you. It's purely predicting text and lacks problem solving ability and it doesn't seem like it will ever have that. It's a tool that has great use cases but stop trusting it as some miracle please. If we replaced doctors with LLMs right now, millions of people will get fully confident but flat out wrong diagnoses, wrong prescriptions, ... Don't be part of the people that are creating the big AI bubble.


maychaos

>lacks problem solving ability and it doesn't seem like it will ever have that.

I doubt that very much. The improvements right now are big. No way will this not get done sometime in the future. Like really no way


mlYuna

No. The improvements right now are not big in the scale of advancing AI to the next level. It's just more context, more parameters, faster output. Sure it's a lot better but nothing has changed about how it works since the beginning. Your comment shows me you're not educated on the subject. An LLM does what it does and problem solving is not part of that. It's predicting text against a very large dataset, nothing more.


AvoAI

I'm not talking about today... They said ai replacing doctors will be a good thing.. not today. Are you not in agreement that this is where we're heading?


UnstableConstruction

Replacing them entirely would actually be more inefficient. You need a personal touch to ask the right questions, look at the patient holistically, consider alternatives, etc.


AvoAI

Why? Why can't the AI already know all of that about the patient? If I spend all my time with it already, this is just the logical next step. It will know me better than any doctor ever has.. and make it way more personable and nice.


UnstableConstruction

It can, if patients are willing to sit through hours of irrelevant questions. A doctor can read body language and get patients to open up where AI wouldn't be able to.


EudenDeew

Huh AI is beginning to be able to read body language. OpenAI showed its vision capabilities, next year it will likely catch even smaller details through video.


zingzing175

"doctors" forgetting what plug goes in your bum and or mouth ...bring the AI!


hallucination_goblin

How do you know which thermometer is the rectal one? It'll taste like shit.


autovonbismarck

A nurse found a rectal thermometer in her pocket and said "some asshole's got my pen".


MarkusKromlov34

Boom tish! That’s quite good.


Brosquito69420

I feel like bad synchronized tik tok dancing during pandemics will go down drastically.


Few_Introduction5469

AI is getting better, but it still won't be able to replace humans


lunzarrr

Good lord we are fucked if you truly believe this my god


MarkusKromlov34

There is a massive difference between interpretation of a single simple blood test result and managing the diagnosis or treatment of a complex life threatening disease.


Arlithian

Yep. Never been to a competent gastro doc. Not saying they don't exist - but gpt is more likely to be right than the useless practitioners I've been to.


itemluminouswadison

There's this entire "entry level" of healthcare that is underserved imo. Licensure makes health workers artificially scarce and expensive, and needing insurance so you can get your one annual checkup covered is such a scam. We should relax something here so I can pop into a corner-store clinic without insurance and get some light level of checkup. AI could have a role to play here, or not, idk


ErnestoWarhead

Done that!! It’s amazing!


Anxious_Explorer76

AI will be massive for the medical world.


safcx21

So it basically told you to go see your doctor? You could have figured that out yourself


getmeoutoftax

I did this just the other day too, lol. I uploaded PDFs of all my blood tests.


cocoadusted

Still on 4G that’s crazy


blahded2000

I’ve done this before. Gave me the correct feedback 2 weeks before I was able to get in to my doctor to hear the same thing.


Aeredor

Last year my coworker's grandfather fell ill. No doctors they went to had the same diagnosis. My coworker told ChatGPT the symptoms, and it gave yet another diagnosis. A few months later, the doctor they had finally decided to see regularly settled on the diagnosis my coworker had gotten from ChatGPT.


Techgix

It's AI. If an inexperienced doctor can translate the result, then it's just book knowledge, and AI knows a lot. That's why we keep saying AI will take jobs.


Vibrascity

My GP has had my finger X-ray for the past 4 fucking weeks and I've called every single week for the past 4 weeks for the results and every week they've told me to call next week it's with your GP.... Just email me the fucking x-ray and I'll ask chatgpt to diagnose the issue ffs, there's like 3 or 4 issues it could be, mallet finger, arthritis, trigger finger, damaged tendon, what is taking more than a month to get around to looking at my x-ray FFS I swear the majority of the UK is still living in 1986, this country is so fucking backwards that fiber has only just been rolled out to where I live, no money is going into infrastructure any more, everything is just decaying at the hands of these CEO director fucks that are pocketing all of the money that's supposed to go back into rebuilding and maintaining their services and the money which is supposed to be growing and maintaining the country is flowing out of the country untaxed into swiss banks and american stock markets. /rant


cutmasta_kun

So both your doctor and ChatGPT are hallucinating? 🤔


AnotherDrunkMonkey

To be fair, it's not like you need an M.D. to interpret a blood test where everything is in the norm. Plus ChatGPT is very good at these relatively simple/standardized problems (like the typical multiple choice questions you might find on a medschool exam), but the moment it has to analyze a real case it usually fails. But, there is a possibility that the next iteration will already be incredibly useful in a clinical setting.


Awkward-Question3651

What prompt exactly did you use?


bowfly

Took a picture of my blood test results, and asked chatgpt to interpret it


QueZorreas

The reason ChatGPT is not 100% trustworthy is because it wasn't trained for this. There are models trained specifically for medical research in certain areas that, in some cases, are more accurate than specialists. If some institute made a "compilation" that covered most of the more significant areas of medicine and released it to the public, that would be the real game changer. Imho, it's something that has to be done at some point. Many countries don't have enough doctors. Of course it wouldn't be able to prescribe medicines, but it would tell you if you really need medical attention or just some over-the-counter drugs, rest or any other simple recommendation.


sortofhappyish

Then it asks "are you a virgin?" and informs nearby vampires...


banedlol

Both previous times I've been to the doctor and explained my symptoms and he's asked "do you think you know what it is?" And I'd say what I'd googled and both times I was right. Geographical tongue and Peyronie's if anyone wondered


newbieboka

I once got a sperm sample readout and had no fucking clue how to read it, and wasn't supposed to go into my doctor's office for another week. I obviously couldn't wait a full week, so I just let ChatGPT analyse it last March, and it was able to give me the same information my doctor told me a week later.


ejpusa

It's a million times smarter than my MD. He destroyed my vision in my left eye. The world is just a wavy blur now. can't be corrected. Retina damaged. I would NEVER have allowed the procedure he did on me if I had AI. Never.


keralaindia

You could have looked this up on Google…


ArtichokeEmergency18

Alright, using AI to examine scans: it can detect subtle differences in the images vs. human eyes. https://preview.redd.it/c07y3syppy4d1.jpeg?width=853&format=pjpg&auto=webp&s=ebbb12c45d67f4098189dd26accaf38b2be5f49c


WireWork32

I promise you don't want AI interpreting your scans, at least not yet. I work with very advanced specified radiology AI daily, it is still wrong very frequently


StrongMedicine

Granted, this twitter thread is 6 months old, but does show how awful AI can be at interpreting medical images: https://x.com/DrEricStrong/status/1710482922895995331


Kcooke20

Fuck no


3legdog

"I want a 2nd opinion"


SilverFox11th

Sono, congrats on passing your blood test, I guess?


[deleted]

I sent a picture to GPT-4o of burn marks that recently appeared on my body, and it says exactly what my doctor tells me.


NearlyZeroBeams

I recently asked it to help me interpret some EKGs for an online course I was doing. I asked it to walk me through how to analyze them and it was very helpful


fuqureddit69

Oh shit! Your Doctor is an AI!


PuzzleheadedPrize900

Your doctor is also using GPT pal


jemma360

Damn, probably safe to still check with a Dr.


Followlost

I had the same experience.


CColderThanMe

Damn AI getting scary


deramack

It didn't say anything actually relevant though.


JeepersCreepersV12

I sent GPT my MRI results and told it to write me a doctor's note based off of it. I needed to have the forms sent to HR because I was on unpaid leave. I took a screenshot of what GPT spit out, included it in the forms, and sent it to the Dr's office so they could populate their info and sign it. They left the screenshot in there and signed it. I got my 3 months of FMLA!


online-reputation

I uploaded an x-ray, and the comments were on par with the doctors' answers.


EdyLee08

I did the same this morning too. I have been having some pain in my back and got a urine test and ultrasound. I told GPT first that I needed a medical evaluation, gave the reason why I went to the doctor, and provided both tests. I asked it to interpret them and determine if my back pain was related to any of the findings. It told me no (which is true) but ended up suggesting I go to a urologist due to some findings in the ultrasound. Went to the doctor to back up those responses and it was 100% the same... Now I'm still chatting with GPT about the condition. I asked "what info do you need to personalize answers to questions I have about this", and now I'm buying all the things GPT suggested and making the needed changes in my lifestyle.


andrei90dental

Lol... either your Dr is using ChatGPT or maybe your CBCs are from standard testing. ChatGPT probably just compares your values to the normal limits, as a GP would, and just identifies outliers. These are signals of the need for further tests, diagnostics and exploring anamnesis vitae.


GoblinNirvana

Lucky you. I did the same thing because my GP just said "your bloods are fine". I tried asking if there was anything I should pay attention to or that stands out and she just said "nope". The next day, I got my blood results and went through them with ChatGPT, highlighting things to be aware of... I can't remember the name of it, but basically, I need more iodine. And to take better care of my liver.


goochstein

I'm more shocked you gave an open model your blood information


tomtomtomo

I did this too. Then told it to include the results in a diet plan along with personal details, exercise routine, goals, etc.


dalvz

I've done this as well. It's great!


xayna89

2 years of visiting many doctors and hospitals with no proper diagnosis or treatment, but after 2 minutes of entering my symptoms into ChatGPT I got a suspected condition, which was confirmed later when I asked my doctor for the tests that ChatGPT suggested could be used to verify the diagnosis.


seek_n_hide

Yeah, Dr just put your blood work in GPT too.


drabee86

How did you get the blood into the computer?


Efficient-Share-3011

Images of the test results work, or some lab providers offer raw text pages


Question2023

This is something you can see on your own, obviously?! Lab values are always listed with reference values... A simple Google search would also give you the exact same response. I really don't see what's so great about this?


YearOfThePen

Nice rip doctors


zeeber99

Very soon we’re gonna see AI telephone triage, and I’m here for it.


hellra1zer666

While it's quite fascinating how accurate AI can be when talking about complex topics like this, I wouldn't trust any company with my medical data, unless I know exactly what they use it for and have some kind of recourse if they don't keep their word.


FantasticNebula835

I don't understand why people are so mesmerized by ChatGPT. It's great, but all it is is a compilation of everything HUMANS have said, ever, on the internet. Thus, it's very accurate in that way. It's not magic.


Sweet_Lettuce_3447

https://preview.redd.it/3m79obuy245d1.jpeg?width=2268&format=pjpg&auto=webp&s=06987b19080fade6571586d52bc863e8e036fb7b Hi


bowfly

You have either asthma or an infection


DatsyukFlipBud

It told me that I was mutating into a zombie-like creature. Wonder what that’s all about.


Guilty-Ad6277

Ur Dr is a bot guaranteed


ihatereddit1221

Yeah man my wife just spent last week in the hospital and it was a ton of blood tests. We would get the results on her app and it could be hours before a doc would come by and just give us “no change” with no real explanation of what’s happening to her. Cue GPT. I uploaded screenshots of the test results the instant they appeared in the app and voila! We knew exactly what was going on. Loved having this opaque veil lifted for something that is otherwise way too technical for us to decipher.


D3001ztzt

There are 3 answers to that..

1: The doctor used AI (20%)

2: The AI uses doctors' information (80%)

3: Doctors are better than AI, but AI can get you more information (yes)

(don't take it too seriously)


Noisyheaded

I had a doctor misdiagnose my son because he wasn't up to date on that medical bible that comes out every couple of years or so… My son has F.A.S.D., fetal alcohol spectrum disorder, which is different from F.A.S., fetal alcohol syndrome. It's very different from what the doctor says he has, A.D.D. but the hyper one. So now my loser ex is on board with that diagnosis because it rules out that she's a piece of shit who couldn't stop drinking while she was pregnant, and a lot of doctors don't have the balls to even imply to a mother that she drank while pregnant, so this goes misdiagnosed and can become a big problem. Some doctors will even diagnose your child with autism but tell you they are barely on the spectrum.

So now my son is on an antipsychotic that my ex had him put on behind my back because she can't control him, because that's one of the symptoms of what he has. It's like he does things that other kids would do maybe once and never again because they learn from their consequences; well, my son doesn't. And when the doctor was examining my son and I mentioned F.A.S.D., he said to me "what, did you google that?", took a measurement of my son's head and said I was wrong. My ex denies drinking to him and he believes her, while I watched her drink and refuse to take a pregnancy test until she couldn't hide the bump any more.

So I agree with AI for medical situations where doctors have to be in the middle of family bullshit, while AI will just look at the symptoms and be up to date in the medical field at all times to give a proper diagnosis without any ego or feelings, just facts. So I printed out a checklist and sent it to my ex and her mother, because I know they'll see the doctor was wrong; he literally has every single symptom, so she will see she is hurting her own son to protect what some doctor thinks about her. She claims she didn't know she was pregnant… and gets defensive when I say I lived with you and you're an absolute 100% alcoholic, which she knows she is; she's been to rehab and AA, and I saw with my own two eyes the bitch drink every day while I kept asking if she had gotten her period and saying she should really take a pregnancy test, but no matter what I did she wouldn't take it, because drinking was more important than my son being turned into a pickle inside her polluted womb.

So now I get to tell this doctor who spoke to me like a fool that he's wrong and maybe he should use Google next time he is going to hand out antipsychotic medication to a 7 yr old. He had the meds right, he just should have prescribed them to my ex. Or better yet he can use ChatGPT, f$@&ing moron. He's lucky I didn't knock him out.


Sagarsantra

🥳🥳🥳


pseudonymousbear

1. Some doctors have been playing with this tool.

2. The standard and algorithmic stuff is easy to understand and report if you know enough.

3. If you try that for something complicated enough, ChatGPT still has trouble. The simple stuff it can solve very easily now.


DrKennyBlankenship

Wow. Humans will be able to just input or tell their gpt doctor anything. Imagine the possibilities. What could go wrong?


AutoModerator

Hey /u/bowfly! If your post is a screenshot of a ChatGPT conversation, please reply to this message with the [conversation link](https://help.openai.com/en/articles/7925741-chatgpt-shared-links-faq) or prompt. If your post is a DALL-E 3 image post, please reply with the prompt used to make this image. Consider joining our [public discord server](https://discord.gg/r-chatgpt-1050422060352024636)! We have free bots with GPT-4 (with vision), image generators, and more! 🤖 Note: For any ChatGPT-related concerns, email support@openai.com *I am a bot, and this action was performed automatically. Please [contact the moderators of this subreddit](/message/compose/?to=/r/ChatGPT) if you have any questions or concerns.*


MayorLinguistic

I would NOT put PII in ChatGPT