
Government AI: Proceed with Caution on Use Cases
The government is diving headfirst into the world of generative AI, deploying ChatGPT-like tech across various federal agencies. While the idea of automating tasks and streamlining processes sounds appealing, some experts are urging caution. The real question is: are we rushing into this without fully understanding the consequences?
Imagine the Department of Veterans Affairs using AI to write code, or the U.S. Army employing "CamoGPT" to scrub documents of diversity references. It sounds futuristic, maybe even a little unsettling. And while the Department of Education envisions using AI to answer student questions about financial aid, the underlying concern remains: is this technology truly ready for such critical tasks?
The problem, as Meg Young, a researcher at Data & Society, points out, is that we're caught in an "insane hype cycle." While some government chatbots are currently limited to general tasks like drafting emails, the push to expand their responsibilities is already underway. And that's where things get tricky. The General Services Administration (GSA), for instance, wants to leverage AI for procurement, the complex process of government purchasing. Procurement involves intricate contract negotiations with stringent compliance requirements, and while AI could theoretically help with document searching, lawyers may find it too unreliable for high-stakes negotiations. In fact, it might be faster and safer to simply copy and paste existing, vetted language. The risk of error is just too high.
Then there's the issue of legal reasoning. A study revealed that even AI chatbots designed for legal research made factual errors a significant percentage of the time. In one particularly alarming example, a chatbot claimed that the Nebraska Supreme Court had overruled the U.S. Supreme Court. It's easy to see how errors like these could spread misinformation, and the consequences could be disastrous.
While the potential benefits of AI in government are undeniable, especially in areas like administrative task automation, we must exercise caution. A pilot program in Pennsylvania, for example, demonstrated significant time savings using ChatGPT for routine tasks. However, a measured, well-planned approach is crucial. We can't afford to let excitement cloud our judgment and lead to poorly implemented AI systems.
Ultimately, it's about finding the right balance. AI has the potential to revolutionize government operations, but it shouldn't come at the expense of accuracy, fairness, or transparency. As Joshua Blank, a law professor at the University of California, Irvine, rightly points out, a clear chain of command and robust oversight are essential to ensure these technologies are used responsibly.
Source: Gizmodo