Why You Shouldn’t Use ChatGPT for Therapy

News Desk

Think your ChatGPT chats are private? They are not protected by law. Find out how to protect your data and use AI more safely.

As more people turn to artificial intelligence for advice, support, and even emotional relief, a recent statement by OpenAI CEO Sam Altman is a stark reminder: your AI conversations are not confidential.

Speaking on Theo Von’s This Past Weekend podcast, Altman openly warned that personal chats with ChatGPT can be subpoenaed and used as legal evidence in court. “If you go talk to ChatGPT about your most sensitive stuff and then there’s like a lawsuit or whatever, like we could be required to produce that,” he said. “And I think that’s very screwed up.”

Altman’s comment has raised important questions about digital privacy, mental health, and how people are using AI tools for emotional support, often without realizing the risks. But instead of focusing only on the problem, let’s shift the spotlight to what really matters: what should users do to protect themselves when interacting with AI tools like ChatGPT?

Here’s a practical breakdown of steps you can take right now.

AI Is Not a Therapist

It’s crucial to start with this baseline truth: ChatGPT is not a therapist. While it can simulate empathy and give advice based on large datasets, it is not governed by medical ethics or bound to doctor-patient confidentiality.

Conversations with therapists are protected under laws such as HIPAA in the US or similar healthcare privacy regulations elsewhere. ChatGPT doesn’t fall under any of these. That means anything you share could, under certain circumstances, be accessed by third parties, especially in legal proceedings.

If you wouldn’t want something to appear in a court transcript or investigation, don’t share it with an AI chatbot.

Don’t Overshare

Many users feel safe sharing intimate thoughts with chatbots; after all, there’s no human on the other end to judge you. But emotional safety doesn’t equal data security.

Avoid entering specific personal details like:

  • Full names
  • Home or work addresses
  • Names of partners, children, or colleagues
  • Financial information
  • Descriptions of illegal behavior
  • Admissions of guilt or wrongdoing

Even if your conversation seems anonymous, metadata or patterns of usage could still connect it back to you.

Use ChatGPT for ideas, brainstorming, and general advice, not confessions, emotional breakdowns, or personal disclosures that you wouldn’t say in a public setting.
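
If you do paste longer text into a chatbot, it helps to strip out obvious identifiers before you hit send. Below is a minimal sketch of that idea in Python: it uses simple regular expressions to redact email addresses, phone numbers, and a short, hypothetical list of names before the text ever leaves your machine. The patterns and the KNOWN_NAMES list are illustrative assumptions, not a complete anonymization tool.

    import re

    # Hypothetical list of names you never want to send to a chatbot.
    KNOWN_NAMES = ["Jane Doe", "Acme Corp"]

    # Simple patterns for common identifiers (illustrative, not exhaustive).
    EMAIL_RE = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b")
    PHONE_RE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

    def redact(text: str) -> str:
        """Replace emails, phone numbers, and known names with placeholders."""
        text = EMAIL_RE.sub("[EMAIL]", text)
        text = PHONE_RE.sub("[PHONE]", text)
        for name in KNOWN_NAMES:
            text = re.sub(re.escape(name), "[NAME]", text, flags=re.IGNORECASE)
        return text

    if __name__ == "__main__":
        draft = "Hi, I'm Jane Doe (jane.doe@example.com, +1 555-123-4567) and I need advice."
        print(redact(draft))
        # Prints: Hi, I'm [NAME] ([EMAIL], [PHONE]) and I need advice.

A script like this won’t catch everything, but it forces you to notice what identifying details are in your message before it reaches a third-party server.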

Turn Off Chat History

OpenAI allows users to turn off chat history, which prevents conversations from being used to train future models or from being stored long term.

While this feature doesn’t offer absolute protection (some data may still be stored temporarily), it’s a strong step toward reducing what’s kept on file.

Here’s how you can disable it:

  • Go to Settings
  • Click on Data Controls
  • Turn off Chat History & Training

Disabling history gives you greater control over what’s retained, even if it doesn’t erase all risk.

Stay Anonymous

If you’re testing ideas or exploring sensitive topics through AI, avoid logging in through accounts that use your real name, email, or work credentials. This creates a layer of distance between your identity and the data.

For added safety, avoid discussing location-specific events or anything that could link your usage back to real-world situations.

The less identifiable your data, the harder it becomes to trace it back to you in legal or investigative scenarios.

Don’t Rely on AI When You’re Most Vulnerable

AI isn’t equipped to handle real-time emotional crises. While it might seem responsive, it’s not trained to recognize or escalate life-threatening issues like suicidal ideation, abuse, or trauma the way licensed therapists or crisis helplines are.

If you’re in a vulnerable place emotionally, it’s better to:

  • Call a crisis hotline
  • Speak to a therapist
  • Talk to a trusted friend or family member

Emotional support should come from trained professionals, not algorithms.

Read the Fine Print

It might not be thrilling reading, but OpenAI’s privacy policy spells out how your data is handled. Other platforms that use AI chatbots may have similar policies.

Important things to look for include:

  • How long your data is stored
  • Whether conversations are used to train the model
  • Under what conditions data may be shared with third parties
  • Your rights to delete your data

Knowing the rules helps you stay in control of your digital footprint.

Want Change? Push for AI Privacy Laws

As AI tools continue evolving, the legal system is lagging behind. There are no clear global standards on how AI conversations should be protected, especially when used for pseudo-therapeutic purposes.

If you believe these tools should be more private, support efforts to push for ethical AI frameworks, stronger data protection laws, and clearer consent structures. The more users demand transparency and protection, the more pressure there will be for tech companies and regulators to act.

Don’t just be a passive user. Be part of the change.

Before You Hit Send, Ask Yourself This

Sam Altman’s candid remark is more than just a caution; it’s a call to action for users to be informed and intentional. AI chatbots like ChatGPT can be helpful tools, but they’re not private, they’re not therapists, and they’re not above the law.

As tempting as it might be to treat AI like a journal or confidant, the digital trail you leave could have real-world consequences. By being aware of the risks and taking proactive steps, you can still benefit from the power of AI without putting yourself in a vulnerable legal or personal position.

So the next time you start typing out something deeply personal, pause for a second and ask yourself:
Is this something I’d be comfortable explaining in a courtroom?

If the answer is no, it’s better left unsaid, at least to a chatbot.