Artificial intelligence (AI) systems captured considerable attention with OpenAI's release of a large language model chatbot, ChatGPT, in November 2022. On March 14, OpenAI unveiled GPT-4, a more powerful "multimodal" chatbot that responds to both text and images. And on March 21, Google launched its conversational computer program, Bard, to compete with GPT-4. These chatbots allow users to pose detailed queries or requests and receive prompt responses in complete sentences. Users are not forced to scroll through a list of results like those produced by search engines, and they can ask follow-up questions. AI systems have been touted for many years, and these new breakthroughs may drastically change the way we create content.
Notwithstanding their unprecedented capabilities, AI systems can produce imperfect results. The new chatbots, for example, can generate plausible-sounding but nonsensical, biased or false responses, so heavy fact-checking is necessary. OpenAI has warned that ChatGPT is prone to filling in replies with incorrect data when not enough information on a topic is available on the internet. Bard includes a website disclaimer that it "may display inaccurate or offensive information that doesn't represent Google's views." On March 20, a breach at OpenAI allowed users to see other people's chat histories before the service was shut down. Further, there is a real risk that courts will rule that certain content generated by these systems infringes the copyright or database rights of the owners of the materials and data on which the technologies relied. When entering into agreements with AI software providers, companies should also be concerned about other risks, including misappropriation of data, security, confidentiality, privacy and third-party claims.
Reprinted courtesy of Robert G. Howard, Pillsbury and Craig A. de Ridder, Pillsbury
Mr. Howard may be contacted at firstname.lastname@example.org
Mr. de Ridder may be contacted at email@example.com