More people are using ChatGPT as a sort of therapist, but that does not make their conversations confidential. OpenAI CEO Sam Altman explains that such conversations don’t carry the same legal safeguards you get with actual therapists, doctors, or lawyers.
Altman Highlights Legal Risks of AI Conversations
Altman told podcaster Theo Von, “So if you go talk to ChatGPT about your most sensitive stuff and then there’s like a lawsuit or whatever, we could be required to produce that, and I think that’s very screwed up.”
He went on, “Right now, if you talk to a therapist or a lawyer or a doctor about those problems, there’s like legal privilege for it. There’s doctor-patient confidentiality, there’s legal confidentiality,” according to a Business Insider report. “We haven’t figured that out yet for when you talk to ChatGPT.”
Call for Urgent Privacy Standards Around AI
Altman said there should be the “same concept of privacy for your conversations with AI that we do with a therapist,” and that the issue should be “addressed with some urgency.”
He said a growing number of people, particularly younger users, are turning to ChatGPT for therapy, life coaching, or relationship guidance. Altman added, “No one had to consider that even a year ago, and now I think it’s this massive problem of like, ‘How are we gonna treat the laws around this?'”
AI Conversations Lack End-to-End Encryption
Unlike messages sent through end-to-end encrypted apps such as WhatsApp or Signal, your chats with ChatGPT can be read by OpenAI. Employees occasionally review conversations to improve the AI or to watch for misuse.
OpenAI says chats deleted by Free, Plus, and Pro users are permanently erased within 30 days unless the company is required to retain them for “legal or security reasons.”
Last June, The New York Times and other news organizations asked a court to order OpenAI to preserve all user chats, including deleted ones, as part of a copyright lawsuit. OpenAI is appealing that order.