Navigating AI's Role in Local Government Communication
The use of generative AI in local government communication raises significant concerns about accountability and transparency. Incidents like the Bellingham snowplow complaint illustrate the risks of impersonal responses generated by AI tools. As cities adopt AI technologies, establishing clear guidelines and ethical standards is crucial for maintaining public trust.
Usage · Future · Policy · Tools · Work
AI Shield Stack
9/6/2025 · 1 min read


As local governments increasingly turn to generative AI tools like ChatGPT for efficiency, the trade-offs of such practices are coming into focus. In Bellingham, Washington, a recent incident involving a snowplow complaint highlighted the challenges of using AI-generated responses for citizen communication. Bre Garcia, a resident, expressed concern about the city's reliance on AI to respond to her email, feeling that the personal touch of human interaction had been lost.
The city's approach to handling constituent inquiries through AI tools raises questions about authenticity and accountability. Although Bellingham officials, including Mayor Kim Lund, view AI as a means to enhance efficiency, the lack of clear guidelines and ethical considerations is troubling. Because AI-generated responses often lack the nuance and empathy inherent in human communication, residents may feel dismissed or undervalued, as Garcia did.
Moreover, the rapid adoption of AI in government communication is outpacing the establishment of necessary safeguards. A nationwide survey indicated that nearly 80% of state and local IT directors are concerned about ambiguous regulations surrounding AI usage. This gap presents risks, including potential biases in AI-generated content and the inadvertent sharing of sensitive information.
While some cities, such as Everett, are taking a cautious approach and developing formal AI policies, Bellingham's permissive use of AI raises alarms. The city's draft policy, which reportedly incorporates content generated by ChatGPT, illustrates the challenge of balancing innovation with accountability. The situation underscores the need for transparency in AI-generated communications, especially when public trust is at stake.
As local governments navigate this complex landscape, the call for clear regulations and ethical guidelines becomes more urgent. The implications of AI’s role in public service extend beyond operational efficiency; they touch on fundamental issues of trust, privacy, and the human element in governance.
AI Shield Stack can assist local governments in establishing robust policies and frameworks for the ethical use of AI tools, ensuring that technology enhances rather than undermines public trust.