Sharing Data with Public AI Platforms Can Extinguish Trade Secret Protection: Two Federal Courts Weigh In


Two recent federal district court decisions have established that sharing confidential business information with public AI platforms—without proper contractual safeguards—can permanently extinguish trade secret protection under U.S. law. The cases, decided in spring 2026, carry significant implications for any enterprise deploying generative AI tools in business operations.

In Trinidad v. OpenAI, a pro se plaintiff alleged that OpenAI had misappropriated proprietary AI frameworks she developed while using ChatGPT. The court dismissed the trade secret claims, finding that the plaintiff had not alleged “any reasonable measures” to maintain secrecy. By voluntarily uploading her frameworks to a platform operating under no confidentiality obligation to users, she forfeited trade secret status. The court applied Ruckelshaus v. Monsanto Co. (1984), which holds that disclosing information to parties “under no obligation to protect confidentiality” extinguishes the property right in a trade secret. ChatGPT’s standard terms of service provide no such confidentiality guarantee, meaning input into the platform is treated as a disclosure to the public at large.

In United States v. Heppner, a criminal defendant sought to invoke attorney-client privilege for documents prepared using Anthropic’s Claude AI. Judge Jed Rakoff of the Southern District of New York denied the claim in what he described as “a question of first impression nationwide.” The court held that communications conducted through a publicly available AI platform lack the confidentiality protection required for privilege to attach—pointing specifically to Anthropic’s privacy policy, which permits data collection, use for model training, and third-party disclosure. Judge Rakoff emphasized that attorney-client privilege requires “a trusting human relationship with a licensed professional,” a standard that a public AI system cannot satisfy.

The “Readily Ascertainable” Problem

Both decisions reflect an emerging judicial framework that treats information input into public AI platforms as potentially equivalent to publicly disclosed information. Under the Uniform Trade Secrets Act (UTSA) and the federal Defend Trade Secrets Act (DTSA), information qualifies as a trade secret only if it is not generally known or readily ascertainable and its holder has taken “reasonable measures” to maintain secrecy. Where a service provider’s terms permit broad data use and third-party sharing, information submitted to the platform may become readily ascertainable, and enterprises that fail to deploy enterprise-licensed, zero-data-retention AI solutions risk a judicial finding that no adequate secrecy measures were in place.

The problem is particularly acute in what practitioners call “Shadow AI” scenarios—employees using personal consumer-tier accounts for ChatGPT, Claude, Gemini, or similar tools to process confidential business data. In such cases, enterprises will struggle to demonstrate that they maintained control over the secrecy of the information involved.

Four Practical Steps for Legal and IP Teams

Commenting on the decisions, attorney Peter J. Toren outlined four immediate responses in an analysis published by IPWatchdog on April 5, 2026. First, organizations should deploy enterprise AI licenses with explicit zero-data-retention terms, ensuring that input data is neither used for model training nor shared with third parties. Second, enterprises should adopt and document formal AI governance policies that identify protected information categories and explicitly prohibit the use of personal consumer AI accounts for work purposes. Third, access controls and audit logs should be implemented to create a verifiable record of how AI tools are used and by whom. Fourth, targeted training should be provided to legal, engineering, and business development staff on the specific trade secret risks arising from generative AI use.

Implications for International Operations

While the decisions are grounded in U.S. law, the underlying risk structure has clear parallels in other jurisdictions. Under Japan’s Unfair Competition Prevention Act, trade secret protection similarly requires that information be “managed as confidential”—a formal requirement that uncontrolled AI use could undermine. For Japanese companies operating in the U.S., or entering joint development agreements with U.S. partners, compliance with U.S. trade secret standards now necessarily includes AI governance as a core component.

As generative AI adoption accelerates across industries, these decisions signal that courts are prepared to hold enterprises to rigorous standards when it comes to controlling access to sensitive information through AI channels. The cases are expected to be widely cited in future trade secret and privilege disputes involving AI-generated or AI-processed content.

The full analysis by Peter J. Toren is available at IPWatchdog.

About This Article

パテント探偵社 Editorial Team

An independent media outlet that reports on and analyzes developments in the world of intellectual property using journalistic methods. We aim to deliver articles that are accessible to non-specialists while accurately citing patent numbers, legal grounds, and party names. The content published here does not constitute legal advice.
