Chat Data or Privacy? Anthropic Puts the Decision in Users’ Hands
Anthropic has announced a new policy requiring Claude users to decide by September 28 whether their conversations may be used to train its AI models. Previously, consumer conversations were deleted within 30 days unless they were flagged for policy violations, in which case they could be retained for up to two years. Under the new rules, conversations may be stored for up to five years unless users opt out. Enterprise offerings such as Claude for Work, Claude Gov, and API access are unaffected.
The company frames the change as a benefit to users, saying that shared data helps improve Claude’s safety mechanisms and enhances its capabilities in coding and reasoning. Yet the shift also ensures Anthropic can draw on the massive volumes of real-world chat data it needs to refine its models and stay competitive with rivals like OpenAI and Google.
The move also reflects a broader trend among AI companies as regulators scrutinize data-handling practices. OpenAI, for instance, is currently subject to a court order requiring indefinite retention of ChatGPT conversations, illustrating the growing tension between AI development and privacy protection. Many users may accept the new terms without fully understanding them, highlighting how difficult it has become to secure informed consent in the AI era.
Anthropic’s interface makes it easy to accept the new policy: a large “Accept” button sits above a smaller, pre-enabled toggle for training permissions. Experts caution that this design may lead users to share data unintentionally. As AI advances, balancing privacy with model development becomes an increasingly difficult challenge for companies like Anthropic.