The proposed rule would apply to lawyers and unrepresented litigants appearing before the court, requiring them to certify that filings drafted with the help of AI were reviewed for accuracy.
A federal appeals court in New Orleans is weighing a proposal that would require lawyers to certify either that they did not rely on artificial intelligence (AI) programs to draft their briefs or that a human reviewed the accuracy of any AI-generated text in their court filings.
In a notice issued on Nov. 21, the Fifth U.S. Circuit Court of Appeals unveiled what appears to be the first proposed rule among the nation’s 13 federal appeals courts aimed at governing lawyers’ use of generative AI tools, including OpenAI’s ChatGPT, in matters before the court.
The proposed rule would apply to lawyers and unrepresented litigants appearing before the court, requiring them to certify that, if an AI program was used to generate a filing, its citations and legal analysis were reviewed for accuracy. Attorneys who misrepresent their compliance with the rule could have their filings stricken and face sanctions, according to the proposal. The Fifth Circuit is accepting public comment on the proposal until Jan. 4.
The proposed rule arrives as judges nationwide grapple with the rapid spread of generative AI programs such as ChatGPT and weigh whether safeguards are needed as the technology makes its way into courtrooms. The risks of lawyers using AI drew national attention in June, when two New York attorneys were sanctioned for submitting a legal brief that contained six fake case citations generated by ChatGPT.
In October, the U.S. District Court for the Eastern District of Texas adopted a rule, effective Dec. 1, requiring lawyers who use AI programs to “evaluate and authenticate any computer-generated content.”
In notes accompanying the rule change, the court cautioned that “frequently, the output of such tools might be factually or legally incorrect” and stressed that AI technology “should never substitute for the abstract thinking and problem-solving capabilities of lawyers.”