
AI tools can be tremendously useful in business operations, but as the Heppner decision makes clear, using them for legal questions without guidance can create significant and unnecessary risk.
Imagine that you are an executive (who is not a lawyer) and are concerned about whether what your company plans to do is legal. You could call your lawyer, who might bill you for the call. Or you could ask an AI chatbot, such as Claude or ChatGPT, about the legal risk. The chatbot will likely compliment you on the incisive question, provide you with a highly confident answer (that may or may not be right), and will not bill you by the hour.
That is essentially what financial services executive Bradley Heppner did. It did not end well. A federal court recently ruled that Heppner's chats with the AI tool Claude were not protected by attorney-client privilege or the work-product doctrine. That means the other side (in this case, the federal government) could obtain his chatbot prompts, uploads, and responses, and learn a great deal about, for example, whether Heppner knew what he was doing was illegal.