Several executive orders signed by President Donald Trump have drawn scrutiny for errors and uneven quality, prompting speculation that artificial intelligence (AI) was used in their drafting. Legal experts have flagged a range of problems, including typographical errors, formatting inconsistencies, and awkward wording. Mark Joseph Stern of Slate, for example, pointed to repeated subsection numbering within an executive order concerning Alaskan land, as well as a bizarre passage renaming the Gulf of Mexico the Gulf of America. To many observers, these mistakes resemble the telltale flaws of AI-generated text, such as the articles once published under Sports Illustrated's name. The suspicions have been reinforced by further revelations and by experiments conducted by legal experts themselves.
Evidence of AI Involvement
Texas lawyer Raffi Melkonian's experiment with ChatGPT added to the questions about AI's role in drafting official documents. Melkonian found that ChatGPT could generate text nearly identical to portions of one of Trump's executive orders, suggesting the tool may have been used in their creation. The White House has not commented on the allegations, leaving many questions unanswered. The concerns extend beyond typos and inconsistencies to the broader question of whether AI is reliable or appropriate for drafting official government documents. Critics attribute the sloppy execution to the high turnover and chaotic environment of the Trump administration, which may have encouraged reliance on AI-generated content as a quick fix. It remains to be seen whether future administrations will adopt stricter measures to guarantee the accuracy and quality of official documentation and to keep human oversight paramount in such critical tasks.