AI in Local Government: Opportunities and Risks
The integration of AI in local government operations offers efficiency but raises concerns about transparency and accuracy. Recent reports highlight the reliance on AI tools like ChatGPT for drafting communications and policy analysis. As cities navigate this landscape, ethical guidelines and public trust remain paramount.
USAGE · FUTURE · POLICY · TOOLS
AI Shield Stack
9/6/2025 · 2 min read


The increasing integration of artificial intelligence (AI) into local government operations has sparked a complex dialogue about its benefits and potential pitfalls. Recent reports from cities like Bellingham and Everett in Washington state reveal a growing reliance on AI tools, particularly ChatGPT, to draft communications, analyze policies, and streamline administrative tasks. While these technologies can enhance efficiency, they also raise significant concerns regarding transparency, accountability, and the authenticity of government communication.
In Bellingham, Mayor Kim Lund's office utilized ChatGPT to draft a letter supporting funding for the Lummi Nation's crime victims coordinator position. Although the position was ultimately not funded, the letter was composed with the assistance of AI, highlighting the technology's role in shaping public communication. However, the lack of disclosure regarding AI's involvement in such official documents has prompted discussions about the need for transparency in government communications.
Records obtained through public records requests show that city officials across various departments have employed AI for a wide range of tasks—from generating social media posts to drafting policy documents and responding to constituent inquiries. The potential for AI to improve efficiency is undeniable, but the implications for public trust are more complex. As AI-generated content becomes commonplace, questions arise about the authenticity of these communications and their impact on civic engagement.
Moreover, the accuracy of AI outputs remains a critical concern. Instances of AI “hallucinations,” where the technology fabricates information or references non-existent documents, have been documented. This raises alarms about the reliability of AI-generated content, particularly when it is used to inform public policy or communicate with citizens. As local governments increasingly turn to AI for assistance, the risk of disseminating inaccurate or misleading information grows.
Mayor Lund and Everett Mayor Cassie Franklin acknowledge the challenges posed by AI while advocating for its use as a tool to enhance government efficiency. However, they also recognize that staff must review AI-generated content for bias and inaccuracies. The balance between leveraging AI's capabilities and maintaining public trust is delicate, and the need for ethical guidelines around AI use in government is becoming increasingly urgent.
As cities navigate this evolving landscape, the implementation of formal policies governing AI use is essential. Everett's IT department has issued guidelines recommending that AI-generated materials intended for public consumption be clearly labeled, but adherence has been inconsistent, and ensuring transparency in practice remains a significant challenge.
In conclusion, while AI presents opportunities for improved efficiency in local government, it also necessitates careful consideration of ethical implications and the importance of transparency. As cities like Bellingham and Everett continue to explore AI's potential, they must prioritize measures to maintain public trust and ensure the accuracy of information disseminated to constituents.
AI Shield Stack can assist local governments in navigating these challenges by providing tools and frameworks that enhance transparency and accountability in AI-generated content.