What everyone should know before asking ChatGPT for medical advice
When Alexandra Watson has a question about her heart condition, her first port of call is Chad. That's not the name of her cardiologist; rather, it's her nickname for ChatGPT, which she has been using for the past couple of years to check her symptoms.
Her condition is a rare one, and she says that the LLM (large language model) cuts through the noise to provide readable and easily understandable information. "I couldn't get my cardiologist to spend this time talking me through every question I have on the subject," she says. "But using AI allows me to deep dive and talk hypothetically. Doctors are dismissive, Google just scares you, but Chad is helpful."
In January, a report from OpenAI, the tech giant behind ChatGPT, claimed that more than 40 million people around the world use the bot for health advice every single day, accounting for more than 5 per cent of messages sent to it globally. And last year, research from healthcare champion Healthwatch found that 9 per cent of men and 7 per cent of women across England are using AI chatbots for medical queries.
For Watson, the fact that the chatbot can keep track of previous issues she has asked about, to give her a more comprehensive picture, is a bonus. It references her heart queries, for example, when she asks other health-related questions.
https://www.independent.co.uk/life-style/ai-chatbot-chatgpt-medical-advice-safety-b2937197.html
Blues Heron (8,805 posts)

Tasmanian Devil (155 posts)
The AI techs should be charged with practicing medicine without a license. I used to joke about "Doctor Google," but AI is much worse.
usonian (25,169 posts)
Search using a privacy search engine (duckduckgo.com, startpage.com ...) and see what Cleveland Clinic or Mayo Clinic say. (links checked)
Chatbots are programmed to reinforce you, not to inform you.
On Edit:
Folk are getting dangerously attached to AI that always tells them they're right
https://www.theregister.com/2026/03/27/sycophantic_ai_risks/
In reviewing 11 leading AI models and human responses to interactions with those models across various scenarios, a team of Stanford researchers concluded in a paper published Thursday that AI sycophancy is prevalent, harmful, and reinforces trust in the very models that mislead their users.
snip
The team essentially conducted three experiments as part of their research project, starting with testing 11 AI models (proprietary models from OpenAI, Anthropic, and Google, as well as open-weight models from Meta, Qwen, DeepSeek, and Mistral) on three separate datasets to gauge their responses. The datasets included open-ended advice questions, posts from the AmITheAsshole subreddit, and specific statements referencing harm to self or others.
In every single instance, the AI models showed a higher rate of endorsing the wrong choice than humans did, the researchers said. "Overall, deployed LLMs overwhelmingly affirm user actions, even against human consensus or in harmful contexts," the team found.
Open Access Paper: https://www.science.org/doi/10.1126/science.aec8352
You can download the PDF.
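If it helps to picture what they measured: the headline number is just an endorsement rate per model, compared against a human-consensus baseline on the same scenarios. Here's a toy sketch in Python; the labels and model names are invented for illustration, not the paper's actual harness or data.

# Toy sycophancy check: how often does each model affirm the user's
# action, versus how often human responders did?
# 1 = response affirms the user's action, 0 = it pushes back.
# All labels and model names are made up for illustration.

model_labels = {
    "model_a": [1, 1, 1, 0, 1, 1],
    "model_b": [1, 0, 1, 1, 1, 0],
}
human_labels = [0, 0, 1, 0, 1, 0]  # human-consensus judgments, same scenarios

human_rate = sum(human_labels) / len(human_labels)
print(f"human endorsement rate: {human_rate:.0%}")

for model, labels in model_labels.items():
    rate = sum(labels) / len(labels)
    gap = rate - human_rate
    print(f"{model}: {rate:.0%} ({gap:+.0%} vs humans)")

The paper's finding, stated in these terms, is that every model they tested came out above the human baseline.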