
Every time an employee pastes proprietary source code, a customer list, or a confidential business strategy into ChatGPT, Claude, or Google Gemini, they may be quietly dismantling the legal protections that make those secrets worth protecting. Courts and regulators are only beginning to grapple with this problem, and right now, the burden of preventing it falls squarely on employers.
The Legal Stakes
Under the federal Defend Trade Secrets Act (“DTSA”) and the Uniform Trade Secrets Act (“UTSA”) as adopted across most states, a trade secret plaintiff must show that the information at issue was subject to reasonable measures to maintain its secrecy. Courts have historically credited measures like confidentiality agreements, physical access controls, and employee training—but those safeguards were designed for a world of thumb drives and disgruntled employees. They were not built for a world where a well-meaning engineer can, in seconds, transmit an entire corpus of proprietary data to a third-party AI platform operating under terms of service that may permit the provider to use inputs for model training.
Reprinted courtesy of Kazim A. Naqvi, Sheppard and John V. Mysliwiec, Sheppard
Mr. Naqvi may be contacted at knaqvi@sheppard.com
Mr. Mysliwiec may be contacted at jmysliwiec@sheppard.com