Beth Simone Noveck on the Untold Story of AI in Government

By Tyler Wells Lynch
April 26, 2024

As a child, Beth Noveck wanted to be a diplomat. The idea of traveling the world, meeting new people, and solving real-world problems was irresistible to a child who saw from an early age how technology could be used for good. For that, she has her mother to thank. In 1982, her mother bought the family an IBM PC, one of the first of its kind, and soon mastered it to further a career in the travel industry. That same affinity for technology also allowed her mother to provide for the family.

“Even before the travel business, my mother had a ticker tape machine in the house and traded stocks,” Beth says. “That early technology enabled her to both be a full-time caregiver homemaker and mother, and at the same time support the family with what we would now call a day trading business.”

Such a balance of work and home life instilled in young Beth an appreciation for technology’s social impact, how it could be used to empower and enable workers and citizens.

AI + Governance

Today, Noveck wears many hats: professor, author, researcher, and, yes, diplomat. She is the first Chief AI Strategist for the State of New Jersey, director of the Burnes Center for Social Change at Northeastern University, and director of The Governance Lab (GovLab). From 2009 to 2011, she led President Obama's Open Government Initiative as deputy chief technology officer for open government. All of these accomplishments define a career at what Beth calls the intersection of “artificial and collective intelligence.”

“Technology has been at the center of what we've done for many, many years,” she says. “Whether that's earlier technologies of the web and social media, whether it's data more broadly, or whether it's technologies of virtual worlds and immersive environments.”

The question she and her team are interested in is how that technology can be used to empower people and communities, amplifying their voices and their collaboration in the problem-solving process. One example is generative AI. While many are right to be skeptical of AI’s promises, Beth has made the case in op-eds and interviews for Wired, Fast Company, Mashable, Digiday, and elsewhere that this new technology holds great promise for civic engagement and learning.

“Understanding text as data,” she says, “we now have the ability to process large amounts of text, not just large amounts of numbers, and that means both generating content and analyzing content.”

Last year, Noveck penned an article in Wired praising the City of Boston’s embrace of generative AI as a way to alleviate the work of public officials. This “responsible experimentation approach” encouraged the use of tools like ChatGPT to produce routine memos, letters, and job descriptions; translate bureaucratic or legal language into common parlance; and summarize large text documents so as to better inform public officials and citizens on new policies and civic engagements.

But that’s just the tip of the iceberg. More recently, Beth wrote in Fast Company about how AI can reshape Congress for the better, highlighting how it is already being used in some offices to create first drafts of testimonies, witness questions, and speeches. Others are using AI to supplement teams of proofreaders, an overlooked and underappreciated job in Washington, and the Library of Congress is exploring ways to summarize pending bills.

There’s further potential in the use of AI to solicit and synthesize public comments, research relevant information, respond to constituent queries, and review policy proposals. All of these applications help promote transparency and effectiveness in government, as she told the Senate in 2024 testimony before the Homeland Security and Governmental Affairs Committee.

The Untold Story

Of course, Noveck is not flippant about the myriad risks associated with AI. It has already been shown to hallucinate factual errors and exacerbate misinformation, and worker concerns about job displacement are not misplaced. There’s also the growing concern about “deep fakes”—AI-generated images, quotations, audio, or video that are false or misleading. As with most policy discussions, it’s all about intent. 

“We have to start first with the intent behind which we want to use these tools,” Noveck explains. “We can use them to empower and support workers or we can use them to displace workers. We can use them to substitute for human insight or we can use them to support and enable and make more efficient the process of actually talking to actual humans and hearing their input.”

That’s why Beth favors the “human in the loop” approach of the Institute of Experiential AI. The idea, especially when it comes to AI and governance, is to start with defining a problem that matters to real people in real communities. “And that's driven by the needs of humans,” she adds, “not simply by what the technology can do.”

Furthermore, for AI to be deployed effectively in government, it has to go through the same democratic channels that people and policies go through. Those who adopt these tools must be accountable to and representative of the public. And when it comes to monitoring and measuring the impact of AI, there must also be a way to assess its effectiveness in solving a genuine problem. In short, it’s all about balance.

“There are an innumerable number of stories about the ways now in which technology is being used to deliver health care, to respond to climate change, to improve how we educate our youth, to improve how we govern and deliver services to residents,” Noveck says. “I think the untold story is the ways in which these tools are giving voice to communities and citizens, enabling us to participate better in our democracy.”

Learn more about the Burnes Center for Social Change and The GovLab, both of which Beth directs, as well as her ongoing research and advocacy.