ChatGPT and Therapy: What's Real, What's Noise, and What Clinicians Should Actually Use

By The Team

Clinicians are experimenting with ChatGPT for notes. Here's why purpose-built clinical AI is safer, faster, and more effective.

The Elephant in the Room

Let's start with what everyone already knows: therapists are using ChatGPT for clinical documentation.

They're copying session notes into ChatGPT and asking it to restructure them into SOAP format. They're dictating summaries and asking for polished progress notes. They're using it to draft treatment plans, generate psychoeducational handouts, and brainstorm intervention strategies.

This isn't hypothetical. It's happening in private practices, clinics, and hospitals right now. And for good reason -- documentation takes too long, ChatGPT is free and fast, and the output is often surprisingly good.

But "surprisingly good" isn't the same as "clinically appropriate, legally compliant, and ethically sound." And the gap between those two things is where therapists face real professional risk.

What ChatGPT Gets Right

Credit where it's due. ChatGPT and similar large language models are genuinely capable tools:

  • They can structure unformatted text into clinical note formats (SOAP, DAP, BIRP) with reasonable accuracy
  • They understand mental health terminology and can generate clinically relevant language
  • They can summarize lengthy session transcripts into concise documentation
  • They're available immediately, require no training, and cost little or nothing

For clinicians drowning in documentation -- and 82% report that admin work contributes to their burnout (Google Cloud/Harris Poll, 2024) -- the appeal is obvious. Any tool that cuts into the 15-20 minutes of after-hours charting each note demands feels like a lifeline.

The Problems Nobody Talks About

Privacy and Compliance

This is the most critical issue, and it's often the least considered in the moment of "I just need to finish this note."

When you paste clinical session content into ChatGPT, that data is transmitted to OpenAI's servers. Under OpenAI's standard consumer terms of service, this data may be used for model training unless you opt out through ChatGPT's data controls (API and enterprise tiers are excluded from training by default). Even with opt-out, the data is still processed on servers that were not designed for healthcare compliance.

In Canada, PIPEDA (Personal Information Protection and Electronic Documents Act) requires that personal health information be protected with appropriate safeguards, used only for the purpose for which it was collected, and stored in a manner that ensures confidentiality. Provinces with additional legislation -- Ontario's PHIPA, Alberta's HIA -- add further requirements.

Entering identifiable clinical data into ChatGPT potentially violates:

  • Your obligation to protect client confidentiality
  • PIPEDA/PHIPA requirements for data handling
  • Your regulatory college's standards of practice
  • Your professional liability insurance terms

No Audit Trail

Clinical documentation is a legal record. If a note is ever reviewed in a licensing investigation, malpractice claim, or court proceeding, you need to demonstrate how it was created, when, and by whom.

ChatGPT conversations have no clinical audit trail. There's no timestamp linked to your clinical record system, no chain of custody, and no way to demonstrate that the AI-generated draft was appropriately reviewed and modified before being finalized.
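For contrast, here is the shape of the audit trail a purpose-built tool can maintain. This is a minimal sketch, assuming a simple append-only log in which each entry hashes its predecessor so tampering is detectable; the field names are illustrative, not any particular system's schema.

```python
import hashlib
import json
from datetime import datetime, timezone

def append_audit_entry(log: list[dict], actor: str, action: str, note_id: str) -> dict:
    """Append a tamper-evident entry; each entry hashes its predecessor."""
    prev_hash = log[-1]["entry_hash"] if log else "0" * 64
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "actor": actor,        # who acted: the AI scribe or a named clinician
        "action": action,      # e.g. "ai_draft_created", "clinician_edited", "finalized"
        "note_id": note_id,
        "prev_hash": prev_hash,
    }
    payload = json.dumps(entry, sort_keys=True).encode()
    entry["entry_hash"] = hashlib.sha256(payload).hexdigest()
    log.append(entry)
    return entry

# Every step from AI draft to final note leaves a verifiable, ordered trace.
log: list[dict] = []
append_audit_entry(log, "ai_scribe", "ai_draft_created", "note-123")
append_audit_entry(log, "clinician:jdoe", "clinician_edited", "note-123")
append_audit_entry(log, "clinician:jdoe", "finalized", "note-123")
```

Because each entry's hash depends on the one before it, editing or deleting any step breaks the chain -- exactly the chain of custody a ChatGPT conversation cannot provide.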

No Session Context

Every ChatGPT interaction starts from zero. It doesn't know your client's history, their treatment goals, previous session content, or the therapeutic modality you're using. You have to provide this context every time -- which means either typing extensive prompts (adding time rather than saving it) or accepting generic output that lacks clinical specificity.

No Clinical Note Standards

ChatGPT can approximate clinical note formats, but it doesn't enforce them. It may mix subjective and objective observations in a SOAP note, omit required elements in a DAP note, or generate recommendations that don't align with the actual session content. Without built-in clinical format validation, the clinician bears the entire burden of quality assurance.

What Purpose-Built Clinical AI Looks Like

The alternative to generic AI isn't going back to handwriting notes. It's using AI tools that are architecturally designed for clinical work. Here's what to look for:

Consent-First Design

Clinical AI should operate within a consent framework from the ground up. This means the following, sketched in code after the list:

  • Client data is collected only with explicit consent
  • The client controls what information is shared and can revoke consent at any time
  • All data processing occurs within a compliant infrastructure
  • Client anonymity is maintained -- no personal identifiers are required to use the platform
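A minimal sketch of what that consent gate can look like in code -- the class, scope names, and alias format are illustrative assumptions, not a real platform's API:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ConsentRecord:
    """Consent is explicit, scoped, and revocable at any time."""
    client_alias: str                        # alias only -- no identifying info
    granted_scopes: set[str] = field(default_factory=set)
    revoked_at: datetime | None = None

    def grant(self, scope: str) -> None:
        self.granted_scopes.add(scope)

    def revoke_all(self) -> None:
        self.revoked_at = datetime.now(timezone.utc)

    def permits(self, scope: str) -> bool:
        return self.revoked_at is None and scope in self.granted_scopes

# Every data-processing path checks consent before touching client data.
consent = ConsentRecord(client_alias="client-7f3a")
consent.grant("note_generation")
assert consent.permits("note_generation")
consent.revoke_all()
assert not consent.permits("note_generation")   # revocation takes effect immediately
```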

Client Anonymity by Design

This is a critical architectural difference. In purpose-built clinical AI, clients can be identified only by alias -- no names, no emails, no dates of birth, no health card numbers. The platform never needs to know who the client is. Identity information, when needed for EMR purposes, is stored in a separate encrypted vault and linked to clinical data only at the moment a professional needs to view it.

This means that even in a breach scenario, clinical data and identity data are never stored together.
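Here is the split-store idea in miniature. This sketch assumes the widely used `cryptography` package (pip install cryptography) for encryption at rest; the store layout, names, and sample identity are purely illustrative:

```python
import json
from cryptography.fernet import Fernet

# Clinical data is keyed by alias; it never contains identity fields.
clinical_store = {"client-7f3a": {"notes": ["Session 4: reviewed coping strategies."]}}

# Identity lives in a separate vault, encrypted with a key the clinical store never holds.
vault = Fernet(Fernet.generate_key())
identity_vault = {
    "client-7f3a": vault.encrypt(json.dumps({"name": "Jane Doe", "dob": "1990-03-12"}).encode()),
}

def view_record(alias: str, authorized: bool) -> dict:
    """Identity is joined to clinical data only for an authorized viewer, at view time."""
    record = {"alias": alias, "clinical": clinical_store[alias]}
    if authorized:  # e.g. the treating clinician, inside the EMR
        record["identity"] = json.loads(vault.decrypt(identity_vault[alias]))
    return record
```

An attacker who obtains the clinical store gets aliases and notes but no identities; one who obtains the vault gets ciphertext without the key.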

Built-In Clinical Formats

Rather than prompting an AI to "write this in SOAP format," purpose-built tools have clinical note formats embedded in their architecture. The clinician selects SOAP, DAP, BIRP, or progress note format, and the output conforms to the structure automatically -- with appropriate section headers, required elements, and clinical language conventions.
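A toy version of that validation step -- the SOAP section names are standard, everything else is illustrative:

```python
# Instead of hoping a prompt produces a valid SOAP note, required sections
# are enforced in code before the note ever reaches the clinician.
SOAP_SECTIONS = ("Subjective", "Objective", "Assessment", "Plan")

def validate_soap(note: dict[str, str]) -> list[str]:
    """Return a list of problems; an empty list means the note passes."""
    problems = [f"missing section: {s}" for s in SOAP_SECTIONS if not note.get(s, "").strip()]
    extras = set(note) - set(SOAP_SECTIONS)
    problems += [f"unexpected section: {s}" for s in sorted(extras)]
    return problems

draft = {"Subjective": "Client reports improved sleep.", "Objective": "",
         "Assessment": "Progress toward sleep goals.", "Plan": "Continue sleep hygiene plan."}
print(validate_soap(draft))   # ['missing section: Objective']
```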

Session Context and Continuity

Purpose-built clinical AI maintains context across sessions. It knows the client's treatment history, previous session themes, treatment goals, and therapeutic modality. This means the AI can generate notes that are contextually appropriate without the clinician providing extensive background every time.
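A sketch of how that context assembly might work. The record structure and field names are assumptions for illustration, not a specific product's schema:

```python
from dataclasses import dataclass

@dataclass
class SessionRecord:
    number: int
    themes: list[str]
    note_summary: str

def build_context(modality: str, goals: list[str],
                  history: list[SessionRecord], last_n: int = 3) -> str:
    """Condense treatment context into a preamble the note generator receives automatically."""
    lines = [f"Modality: {modality}", "Goals: " + "; ".join(goals)]
    for s in history[-last_n:]:
        lines.append(f"Session {s.number} ({', '.join(s.themes)}): {s.note_summary}")
    return "\n".join(lines)

history = [
    SessionRecord(3, ["sleep", "work stress"], "Introduced sleep hygiene plan."),
    SessionRecord(4, ["sleep"], "Client reports partial adherence; adjusted plan."),
]
print(build_context("CBT", ["reduce insomnia"], history))
```

The clinician never retypes this background; the system carries it forward from session to session.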

EHR Overlay Architecture

Rather than requiring clinicians to switch to a new system, effective clinical AI works as an overlay on existing EHR platforms. Notes generated by the AI can be reviewed, edited, and integrated into the clinician's existing documentation workflow without a copy-paste step.
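In miniature, the overlay workflow looks something like this. The EHR client and its write_note method are hypothetical integration points, not a real vendor API:

```python
class FakeEHRClient:
    """Stand-in for the clinician's existing EHR integration."""
    def write_note(self, note: dict[str, str]) -> None:
        print("written to EHR:", list(note))

def generate_draft(transcript: str) -> dict[str, str]:
    # Stand-in for the AI step: in practice this calls the note-generation model.
    return {"Subjective": transcript, "Objective": "", "Assessment": "", "Plan": ""}

def overlay_workflow(transcript: str, ehr, review) -> None:
    draft = generate_draft(transcript)   # AI produces a structured draft
    final = review(draft)                # clinician edits and approves in place
    ehr.write_note(final)                # approved note flows into the EHR, no copy-paste

overlay_workflow("Client reports improved sleep.", FakeEHRClient(), lambda d: d)
```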

AI Assists, Clinician Decides

One principle is non-negotiable across all clinical AI applications: the clinician is always the final decision-maker.

AI-generated notes are drafts. They require clinical review, editing, and approval before they become part of the medical record. The AI doesn't make clinical judgments, diagnoses, or treatment decisions. It handles the administrative task of structuring documentation so the clinician can focus their expertise on the clinical content.

This isn't a philosophical position -- it's a practical requirement. AI models can miss nuance, misinterpret tone, or generate language that doesn't accurately reflect the session. The clinician's review is what transforms an AI draft into a clinical document.
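That review requirement can be enforced as an invariant rather than a policy. A minimal sketch, with illustrative names:

```python
from enum import Enum

class NoteStatus(Enum):
    AI_DRAFT = "ai_draft"
    APPROVED = "approved"

class ClinicalNote:
    def __init__(self, body: str):
        self.body = body
        self.status = NoteStatus.AI_DRAFT
        self.approved_by: str | None = None

    def approve(self, clinician_id: str, edited_body: str) -> None:
        self.body = edited_body            # the clinician's edits win, always
        self.status = NoteStatus.APPROVED
        self.approved_by = clinician_id

def commit_to_record(note: ClinicalNote) -> None:
    """A note cannot enter the medical record without explicit clinician approval."""
    if note.status is not NoteStatus.APPROVED:
        raise PermissionError("AI drafts cannot be filed without clinician approval")
    ...  # here the approved note would be written to the medical record
```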

The Decision Framework

For therapists currently using ChatGPT for documentation, the decision isn't about whether AI helps -- it clearly does. The decision is about which AI tool manages the risk appropriately.

Continue using ChatGPT if:

  • You never enter any identifiable client information (no names, dates, identifying details) -- see the sketch after this list for why that's harder than it sounds
  • You treat the output as a rough starting point and substantially rewrite it
  • You're comfortable with the compliance risk under your jurisdiction's privacy legislation
  • You don't need session context, clinical format validation, or audit trails
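About that first point: here is a toy scrubber that shows why pattern-based de-identification is not a safety net. It catches obvious formats and misses names, workplaces, and contextual identifiers entirely -- do not rely on anything like this in practice:

```python
import re

PATTERNS = {
    "phone": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "date":  re.compile(r"\b\d{4}-\d{2}-\d{2}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def naive_scrub(text: str) -> str:
    """Replace only the identifiers that match a known pattern."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

sample = "Met Anna, a teacher at Riverside Elementary, born 1990-03-12."
print(naive_scrub(sample))
# -> "Met Anna, a teacher at Riverside Elementary, born [date]."
# The name, profession, and workplace -- all identifying -- sail straight through.
```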

Switch to purpose-built clinical AI if:

  • You work with clients whose data is protected under PIPEDA, PHIPA, HIPAA, or equivalent
  • You need clinically structured output that doesn't require extensive reformatting
  • You want session context maintained across encounters
  • You need an audit trail for documentation
  • You want to reduce documentation time without increasing compliance risk

The Bottom Line

ChatGPT is a remarkable general-purpose tool. It's also a poor fit for clinical documentation when privacy, compliance, and clinical accuracy matter -- which is always.

Purpose-built clinical AI tools exist precisely because the requirements of clinical documentation exceed what general-purpose AI can safely provide. They're not more expensive than the compliance risk of using ChatGPT with client data. They're not slower than the prompting and reformatting workflow that ChatGPT requires. And they're not harder to use -- they're typically simpler, because the clinical workflow is built in rather than bolted on.

The 2025 research on ambient AI scribes (see references) shows that AI scribing reduced burnout by 13 percentage points within 30 days and saved thousands of hours of documentation time. The benefits are real. The question is whether you access those benefits through a tool designed for the job or one that was designed for something else entirely.


References: Google Cloud/Harris Poll (2024); OpenAI Terms of Service; PIPEDA (Personal Information Protection and Electronic Documents Act); PMC Ambient AI Scribe Study (2025); Permanente Medical Group/AMA (2025).