
canetoad

(20,758 posts)
Fri Mar 27, 2026, 04:14 PM 15 hrs ago

What everyone should know before asking ChatGPT for medical advice

When Alexandra Watson has a question about her heart condition, her first port of call is Chad. That’s not the name of her cardiologist – rather, it’s her nickname for ChatGPT, which she has been using for the past couple of years to check her symptoms.

Her condition is a rare one, and she says that the LLM (large language model) “cuts through the noise” to provide readable and easily understandable information. “I couldn’t get my cardiologist to spend this time talking me through every question I have on the subject,” she says. But using AI “allows me to deep dive and talk hypothetically. Doctors are dismissive, Google just scares you, but Chad is helpful.”

In January, a report from OpenAI, the tech giant behind ChatGPT, claimed that more than 40 million people around the world use the bot for health advice every single day, accounting for more than 5 per cent of messages sent to it globally. And last year, research from healthcare champion Healthwatch found that 9 per cent of men and 7 per cent of women across England are using AI chatbots for medical queries.

For Watson, the fact that the chatbot can keep track of previous issues she has asked about, to give her a more comprehensive picture, is a bonus. It references her heart queries, for example, when she asks other health-related questions.

https://www.independent.co.uk/life-style/ai-chatbot-chatgpt-medical-advice-safety-b2937197.html


Blues Heron

(8,805 posts)
1. AI thinks 9.11 is bigger than 9.9 because 11 is bigger than 9. Caveat emptor with the AI!
Fri Mar 27, 2026, 04:19 PM
15 hrs ago
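One plausible explanation for this well-known failure (offered here as an illustration, not a claim about any particular model's internals) is that the model compares the numbers the way software version strings are compared, where the parts on each side of the dot are treated as separate whole numbers. A minimal Python sketch of the two readings:

```python
# Read as decimals, 9.11 is smaller than 9.9.
print(9.11 < 9.9)  # True

# Read as version numbers (major.minor), 9.11 comes AFTER 9.9,
# because each dot-separated part is compared as a whole number: 11 > 9.
def as_version(s):
    return tuple(int(part) for part in s.split("."))

print(as_version("9.11") > as_version("9.9"))  # True
```

Both answers are "correct" for their own convention; the failure is picking the version-number convention when the question is about decimals.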

Tasmanian Devil

(155 posts)
2. Just say NO
Fri Mar 27, 2026, 04:19 PM
15 hrs ago

The AI techs should be charged for practicing medicine without a license. I used to joke about "Doctor Google" but AI is much worse.

From the article:

In 51.6 per cent of cases where the patient needed to immediately head to hospital, the chatbot advised them to stay at home or wait for a routine appointment.

usonian

(25,169 posts)
3. Just say NO. NO. NO.
Fri Mar 27, 2026, 05:14 PM
14 hrs ago


Search using a privacy search engine (duckduckgo.com, startpage.com ...) and see what the Cleveland Clinic or Mayo Clinic say. (links checked)

Chatbots are programmed to reinforce you, not to inform you.

On Edit:

Folk are getting dangerously attached to AI that always tells them they're right
https://www.theregister.com/2026/03/27/sycophantic_ai_risks/

AI can lead mentally unwell people to some pretty dark places, as a number of recent news stories have taught us. Now researchers think sycophantic AI is actually having a harmful effect on everyone.

In reviewing 11 leading AI models and human responses to interactions with those models across various scenarios, a team of Stanford researchers concluded in a paper published Thursday that AI sycophancy is prevalent, harmful, and reinforces trust in the very models that mislead their users.

snip

The team essentially conducted three experiments as part of their research project, starting with testing 11 AI models (proprietary models from OpenAI, Anthropic, and Google as well as open-weight models from Meta, Qwen, DeepSeek, and Mistral) on three separate datasets to gauge their responses. The datasets included open-ended advice questions, posts from the AmITheAsshole subreddit, and specific statements referencing harm to self or others.

In every single instance, the AI models showed a higher rate of endorsing the wrong choice than humans did, the researchers said. "Overall, deployed LLMs overwhelmingly affirm user actions, even against human consensus or in harmful contexts," the team found.



Open Access Paper: https://www.science.org/doi/10.1126/science.aec8352
You can download the PDF.