Artificial Intelligence

That content took ChatGPT seconds to generate, even as aspects of it seem born of the subjective care of a human thought process. Given the ramifications, and the cases where generative AI has gotten it wrong (Nickodem drew a reaction from the attorneys' conference by pointing to instances in which AI-assisted court filings included fictional or otherwise faulty content), some municipalities have already done what may soon be commonplace: written in-house policies on how city hall or town hall should interact with AI.

In March, the magazine Government Technology examined where some local governments in North Carolina stand in that effort. The Town of Chapel Hill, for one, has used generative AI to help rewrite documents and policies so they are easier for the public to understand. The magazine also quoted City of Raleigh Chief Information Officer Mark Wittenburg as saying it is "important for us, especially as IT leaders, to really explore what the technology can do. And then be very mindful, again, about the community, the impacts to the community, and positive and negative impacts that it can potentially have."

Nationally, top tech cities are projecting excitement, among them Seattle, Washington, home of Amazon and close neighbor to Microsoft. "Innovation is in Seattle's DNA, and I see immense opportunity for our region to be an AI powerhouse thanks to our world-leading technology companies and research universities," Seattle Mayor Bruce Harrell said in a November 2023 press release announcing a policy on how city employees can use generative AI. "Now is the time to ensure this new tool is used for good, creating new opportunities and efficiencies rather than reinforcing existing biases or inequities."

Seattle said the policy took six months of human work to shape. It lays out the elements of responsible AI use in municipal government, including having a human employee review all AI-generated content before it goes live and limiting the use of personal data as source material for the technology. "As a city, we have a responsibility to both embrace new technology that can improve our service while keeping a close eye on what matters—our communities and their data and privacy," Harrell said.

Other jurisdictions have taken a different approach. The state of Maine, for one, in mid-2023 imposed a temporary but full ban on executive branch employees using generative AI in their work. The directive, from the state's Office of the Chief Information Officer, notes that although generative AI may offer benefits, "the expansive nature of this technology introduces a wide array of security, privacy, algorithmic bias and trustworthiness risks into an already complex IT landscape. These systems lack transparency in their design, raising significant data privacy and security concerns. Their use often involves the intentional or inadvertent collection and/or dissemination of business or personal data. In addition, generative AI technologies are known to have concerning and exploitable security weaknesses, including the ability to generate credible-seeming misinformation, disseminate malware and execute sophisticated phishing techniques."

Generative AI has also made headlines recently for its ability to clone people's voices: taking a recording of someone, analyzing it sonically and generating entirely new sentences in a digitally concocted version of that voice, a pure, easy-to-fall-for fake created by a computer.
The Federal Trade Commission is now working to crack down on people who use AI to impersonate government agencies or representatives.

The potential applications are wide-ranging, and clearly no convention or uniformity has yet emerged for dealing with them. All the while, the concept itself continues to change and evolve, and it is far from new; it has been a favorite subject of both science and pop culture for roughly 75 years or more. Early notions came from English computer scientist Alan Turing in his 1950 paper, "Computing Machinery and Intelligence." The term "artificial intelligence" was coined in 1956, when computer scientist John McCarthy brought a group of top minds together to discuss the subject, deem it achievable and decide it was worth working toward. As computers grew in their ability to process information, elements of that goal came into real view and brought us to where we are today.

While sci-fi writers might still see years of creative fuel in a potential dystopia of AI gone wrong, or the ideal utopia of problem-solving technologies, government leaders are acting now to prevent runaway problems and harness the best opportunities. Email listservs are beginning to circulate draft policies. Information technology directors are eyeing the curve. The Council of State Governments, a national group based in Kentucky, is compiling state-level actions on generative AI as legislatures begin to churn out bills on the subject.

"Be sure that you are staying educated on these developments," said Nickodem. "The genie is kind of out of the bottle."

For more, with additional expert analysis from Nickodem, check out the related episode of Municipal Equation, the League's monthly podcast, at https://municipalequation.libsyn.com.