
Search using a privacy search engine (duckduckgo.com, startpage.com ...) and see what the Cleveland Clinic or the Mayo Clinic says. (links checked)
Chatbots are programmed to reinforce you, not to inform you.
On Edit:
Folk are getting dangerously attached to AI that always tells them they're right
https://www.theregister.com/2026/03/27/sycophantic_ai_risks/
AI can lead mentally unwell people to some pretty dark places, as a number of recent news stories have taught us. Now researchers think sycophantic AI is actually having a harmful effect on everyone.
In reviewing 11 leading AI models and human responses to interactions with those models across various scenarios, a team of Stanford researchers concluded in a paper published Thursday that AI sycophancy is prevalent, harmful, and reinforces trust in the very models that mislead their users.
snip
The team essentially conducted three experiments as part of their research project, starting with testing 11 AI models (proprietary models from OpenAI, Anthropic, and Google, as well as open-weight models from Meta, Qwen, DeepSeek, and Mistral) on three separate datasets to gauge their responses. The datasets included open-ended advice questions, posts from the AmITheAsshole subreddit, and specific statements referencing harm to self or others.
In every single instance, the AI models showed a higher rate of endorsing the wrong choice than humans did, the researchers said. "Overall, deployed LLMs overwhelmingly affirm user actions, even against human consensus or in harmful contexts," the team found.
Open Access Paper:
https://www.science.org/doi/10.1126/science.aec8352
You can download the PDF.