Is a conversation with ChatGPT on the same level as a search query, a confidence shared with a close friend, or something more?
A court order, issued as part of the New York Times' copyright infringement lawsuit against OpenAI, requires OpenAI to preserve all user chats with ChatGPT indefinitely. Two days ago, OpenAI appealed that order.
Sam Altman wrote on X: “Talking to an AI should be like talking to a lawyer or a doctor […] (maybe spousal privilege is a better analogy).”
In my case, it’s more like a digital brain or conscience than a conversation with a human being. I also wouldn’t be surprised if ChatGPT’s logs contained more confidential information than emails or documents, because the product is so effective that it encourages you to share everything with it.
We can blame all of this on end-users, or, as Sam Altman also hinted, we could redefine the relationship we have, as humans, with products like ChatGPT.
In most countries, in theory, correspondence is “protected.”
But it can be made public by a court. Everything that comes out of your mind and is stored, whether on paper or on a server, can be used against you.
Since a conversation with ChatGPT is potentially more intimate than one with a human, should it be protected differently?
From my point of view, this is a structural issue in digital technology, not something exclusive to AI. Courts will likely treat AI conversations similarly to personal diaries.
Despite the new nature of AI, I don’t expect that to change, especially for retail use of services like ChatGPT.
That’s why I understand the caution of large institutions and enterprises. They have carefully assessed how to adopt this technology while staying shielded from exactly this kind of legal drama.
And they were obviously right, because their clients delegate responsibility to them for handling critical data. You simply cannot jeopardize that, even when working with one of the most innovative companies of the past decade.
This is why we talk about governance, protection, and ethics, because once a piece of data is shared in the wrong place, the damage is permanent.
So what’s the best way to protect ourselves?
It is a recurring issue that began with the Internet. Your emails and your messages can already be read by the service provider.
Content you post on social media, even with restricted reach, can leak at any time.
We are all aware of this risk as retail users.
But when it comes to enterprise, you have to think about the problem differently.
That is why enterprise estimates of cost, delay, and risk often carry a margin of an order of magnitude, from 1 to 10: the loss associated with mismanaged data can run from 10 to 100.
So if you really want to control your data as an individual, do not let it out of your head.
And if you need to share what is in your head, maybe we should start thinking more and reacting less.
Not such a bad idea, after all.