ChatGPT and the Boundaries of Trust – A Warning from Sam Altman

ChatGPT has revolutionized the way we interact with AI since its public release, serving as a personal assistant and tool for everything from writing emails and resumes to answering questions about health, law, relationships, and much more. But as the model's conversational abilities improve, people increasingly treat it like a trusted advisor, therapist, or friend, a development that has raised concerns among technology researchers.

In 2025, OpenAI CEO Sam Altman publicly cautioned users against over-relying on ChatGPT, particularly for sensitive matters involving mental wellbeing, legal questions, or deeply personal issues. His remarks, reported by Indian and international news outlets and widely shared on social media, sparked a much-needed conversation about AI misuse, privacy risks, and how we relate to machines.

The Trigger: Altman’s Comments in the Media

Indian outlets including The Hindu, The Times of India, The Indian Express, and NDTV carried Altman’s message loud and clear:
“Don’t use ChatGPT as a therapist. It’s not confidential.”

“I think that’s very messed up,” Altman said of people turning to ChatGPT for therapist-like advice.

He clarified that conversations could be accessed in legal proceedings, even though OpenAI does not sell user data.

He acknowledged that ChatGPT makes mistakes, and in legal, healthcare, or emotional contexts those mistakes can be very dangerous.

These comments landed amid growing concern about AI’s potential misuse in sensitive emotional and legal situations: a red flag for amateurs and professionals alike.

Why It Matters: Real-Life Examples of ChatGPT Getting It Wrong

Despite its impressive capabilities, ChatGPT can suffer from hallucination, a term for when an AI confidently presents false or fabricated information as fact. Below are some real examples where ChatGPT’s responses were wrong or misleading:

1. Legal Hallucination – The “Fake Case” Situation (Mata v. Avianca)
In 2023, a New York attorney used ChatGPT to help draft a legal brief. The software supplied six case citations that were wholly fabricated. The court confirmed that the cases did not exist, and the attorney, having cited non-existent legal authority, faced penalties.

Takeaway: Even in professional hands, ChatGPT can produce realistic-sounding but entirely made-up citations, with very serious legal and ethical implications.

2. Wrong Health Advice
In another case, a user asked whether two medications could be taken together, and ChatGPT said yes, overlooking a well-known contraindication between them. A pharmacist later flagged the advice as dangerous.

Lesson learned: ChatGPT’s health answers can sound plausible, but they are not vetted by medical professionals and lack real-world clinical context, assuming the model even correctly identifies the medications in question.

3. Outdated Information on Recent Events
ChatGPT models have training-data cutoffs (ranging from 2021 to 2023, depending on the version) and can deliver incorrect information about recent political events, tech launches, or entertainment. Asked about the results of the 2024 Olympics, it produced invented medal counts and athlete names.

Lesson: ChatGPT is not a real-time search engine. It makes educated guesses based on patterns rather than verified facts, making it unreliable for time-sensitive or factual questions.

Privacy Risks: Beyond Inaccuracy

Beyond hallucinations, Altman warned of even greater privacy risks in using ChatGPT for therapy, personal conversations, or sensitive business questions. Although OpenAI has made efforts to anonymize user data, the risks include:

  • Data logs being accessed by engineers, or by law enforcement if subpoenaed.
  • No doctor–patient or attorney–client privilege for ChatGPT conversations.
  • Users unwittingly sharing personal information in the belief that it is confidential.

With calls for stricter AI regulation growing more urgent, these issues underscore why users must exercise caution about what they divulge.

Emotional Dependence: When AI Becomes a Human Replacement

This point bears spelling out for anyone who has come to treat an AI chatbot as a friend or therapist: ChatGPT is friendly and polite, but it lacks genuine emotion, lasting memory, and any ethical obligation to you.

Altman’s point was not an insult: AI cannot replace a hug or a real conversation. However friendly or comforting it seems, AI is no adequate substitute for human contact or clinical care.

Industry Implications: Responsible Use Needed

Altman’s warning highlights an important need for:

  • User education about AI’s limitations.
  • Disclaimers and transparency where AI is applied in sensitive areas.
  • Regulation of AI, with clear rules on liability and data rights.
  • Ethical guidelines for the application of AI in healthcare, law, and mental health.

It is the responsibility of every developer, from start-ups onward, to build privacy protections, opt-out options, disclaimers, and data-deletion mechanisms into their platforms, and to remain accountable for user privacy.

Final Thoughts: Verify Everything

AI systems like ChatGPT are impressive, even remarkable, but they are not always accurate. Circling back to Sam Altman’s warning: we should greet AI with wonder, not blind faith.

Use ChatGPT for:
– Ideation
– Concept validation
– Tedious, repetitive tasks

But don’t treat it as a doctor, lawyer, or therapist, and don’t assume your chats are protected the way conversations with a human professional would be.

The more we know about the limits of AI, the more wisely we will deploy it—and the safer our online lives will be.
