AI Is Coming for Democracy
Recent advances in artificial intelligence have drawn a lot of media attention. But little of this has focused on how this new technology may affect democracy. Spencer Overton joins host Alex Lovit to discuss how AI may push the United States away from becoming an inclusive, multiracial democracy—or toward it.
Spencer Overton is the Patricia Roberts Harris Research Professor of Law and the director of the Multiracial Democracy Project at George Washington University Law School.

Alex Lovit: Every day there’s a new headline proclaiming how AI will transform our world, how it’s going to make our lives more efficient, how it’s going to take our jobs. But here at the Kettering Foundation, we realized not a lot of people are talking about how it’s going to influence our democracy. So today we’re asking how will AI change our elections, our systems of government, our social divisions, and our ability to trust one another? You are listening to The Context. It’s a show from the Charles F. Kettering Foundation about how to get democracy to work for everyone and why that’s so hard to do. I’m your host, Alex Lovit.
My guest today is Spencer Overton. Spencer is the Patricia Roberts Harris Research Professor of Law at George Washington University Law School. He also directs the Multiracial Democracy Project at GW Law. He's one of the leading thinkers working at the intersection of AI, race, and democracy, and he's here today to tell us how generative AI and other AI technologies could push the US away from becoming a multiracial, inclusive democracy or towards it. Spencer Overton, welcome to The Context.
Spencer Overton: Thanks so much for having me.
Alex Lovit: So you’re a lawyer, a law professor, and you’ve for a long time studied civil rights law and voting rights law. Why did you first get interested in AI? At what point did you think, oh, AI is something I’m going to need to pay attention to?
Spencer Overton: Great question. Really, I got involved in content moderation. So you remember in 2016, the Russians created fake Facebook pages and pretended they were African Americans just before the election. They put out a message saying, "Hey, no one cares about us. We should boycott the election and we should stay home." We've seen deceptive practices in the past, but generally it was something like a robocall or a flyer. We really hadn't seen social media weaponized like that before 2016.
I actually got into a provision called Section 230 of the Communications Decency Act, which basically says that platforms like Facebook are not liable for material that goes up on those platforms. And my argument was that Facebook was profiting off of this activity through selling ads, et cetera, and should be responsible in one way or another. And when I decided to return to GW and set up the Multiracial Democracy Project, I felt like AI was a really important emerging space. And some people had done work on AI and race and criminal justice or employment issues, et cetera, but there wasn’t a lot that had been done on democracy, and that’s basically how I got into it.
Alex Lovit: It’s always hard to speculate about the future and conversations about AI can verge into feeling like science fiction. Can you give some examples of how AI has already impacted elections and democracy in the US or elsewhere around the world?
Spencer Overton: Sure. I think the most obvious example that most people think about would be deepfakes: an inaccurate image that somebody creates using generative AI. And so we've seen in elections around the world the use of generative AI, sometimes just before elections, to push the needle in one direction. But AI extends beyond generative AI in many other ways, like tools used to authenticate signatures when we have a signature match for mail-in ballots, or tools used to identify who shouldn't be on a registration roll. Those are also areas where AI can be used, and has been used, in elections. So election officials can use AI for good; bad actors and political actors can use it for bad. Just as AI is really permeating all aspects of our society, it's permeating different aspects of politics, election administration, and democracy generally. And so we need to really understand it in this particular context.
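The signature-matching setup Overton mentions can be made concrete with a small, hypothetical sketch. The scoring function and thresholds here are illustrative assumptions, not any real election vendor's system; the design point is that the software only auto-accepts high-confidence matches and never rejects a ballot on its own.

```python
# Hypothetical sketch of AI-assisted signature matching with a human backstop.
# `similarity` stands in for the output of a real signature-comparison model,
# and the threshold below is illustrative, not an actual standard.

from dataclasses import dataclass

ACCEPT_THRESHOLD = 0.90  # illustrative cutoff for automatic acceptance


@dataclass
class Ballot:
    voter_id: str
    similarity: float  # model's signature-match score, 0.0 to 1.0


def triage(ballot: Ballot) -> str:
    """Auto-accept only high-confidence matches; everything else goes to a person."""
    if ballot.similarity >= ACCEPT_THRESHOLD:
        return "accept"
    return "human_review"  # the model flags; a human decides, never the model


ballots = [Ballot("A-001", 0.97), Ballot("A-002", 0.72), Ballot("A-003", 0.31)]
print({b.voter_id: triage(b) for b in ballots})
```

The key choice is that there is no automatic "reject" path: a low score routes a ballot to a human reviewer rather than disenfranchising the voter on a model error.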
Alex Lovit: So you just mentioned there are a couple of threats that AI could pose to democracy, including deepfakes, including the ways that elections administrators might use AI for signature matching or other purposes. And you’ve spent some time really thinking about all the ways that AI could affect democracy. Is there one that keeps you awake at night? Is there one kind of threat that you’re like, “Okay, this is the thing that I’m really worried about?”
Spencer Overton: I really think it's a holistic piece. I feel as though the media has focused on deepfakes because it's something we can understand. Politicians focus on deepfakes because they're afraid of a deepfake adversely affecting them just before an election. But I think people miss issues like what I call the homogenization problem with AI. For example, English is by far the dominant language in terms of large language models. Large language models are relatively good at translation, but they're much stronger in English than they are in other languages, and that results in some errors in those other languages. Now, if one is a believer that the goal here is assimilation, that the goal is that we should all just speak English, that everyone should have the same type of hair and dress and culture and religion, then there's not a problem, right?
But if someone is a pluralist and they believe that folks should have the freedom and liberty to practice their own religion or to have different thoughts, then that homogenization of AI is a significant problem. I think a lot of democratic theorists believe in pluralism, but they haven’t really connected it with AI, and so one thing that I really try to do is bring those two worlds together to analyze that as an issue with regard to AI, democracy, politics in America.
Alex Lovit: I want to ask you to spell that out a little bit. So if I have this right, and correct me if I have it wrong, AI is based on a data set of basically all the writing on the internet, which meant that that data set was mostly in English, and to a large extent by educated upper-class writers. I think that's the reason you're saying that AI has some limitations in the types of language it's good at speaking and the types it's not.
Spencer Overton: And let me give you a couple of examples of that to make it concrete. If the vast majority of demonstrations that Black Lives Matter protesters engage in involve people of all backgrounds, people from suburban areas, urban areas, rural areas, people of different ethnic backgrounds, but the media largely covers confrontations between a handful of demonstrators and police, then when AI basically goes online and scrapes sites for Black Lives Matter, those instances are going to be lifted up as a description of what Black Lives Matter is, as opposed to the total pool of actual interactions that exist.
Another example would be that the majority of Spanish that's online is actually machine translated rather than produced by native Spanish speakers, and as a result, that language is imperfect. So if AI is using that to basically translate and assess what Spanish is, et cetera, it is skewed, and it's not honed in on native speakers of Spanish. And Spanish is a dominant language in the United States, so we can only imagine the questions around less popular languages like Hmong, or tribal languages that do not have a written script, that are overlooked.
Just frankly, is AI an extension of conquest? So when we think about conquest from Europe and Christianity and we’re going to have manifest destiny, are we continuing that through AI, or are we saying, “Hey, we respect different human beings and different folks here and we want to design an AI that respects difference,” that’s an issue that is underexplored that I think we need to explore a lot more of.
Alex Lovit: So what we’re talking about here is the effects that AI might have on our culture, on our political culture, on how we talk with one another.
Spencer Overton: Correct.
Alex Lovit: If there’s a risk of AI homogenizing culture and undermining pluralism in that way, the flip side of that is something you’ve also written about, which is microtargeting, the way that AI might be used to very specifically talk to different individuals or different groups. Can you talk a little bit about what that risk is?
Spencer Overton: Sure. What's really interesting about this is I initially thought that, hey, if we just microtarget someone with an image, so if we use AI to create an image that is targeted toward, say, Latino women who like soccer in Austin, then the more specific we get, the more persuasive we get. It turns out that data shows that once you have a couple of elements, you have diminishing returns. However, research also says that chatbots that can reply in real time, and that understand your data and your background as a result of your digital footprint, can be not just persuasive but manipulative with regard to people. They know what to say in real time. They understand what will move you, what will nudge you. Some of these studies have shown that this has been used for what I would say would be good purposes.
So to dispel, for example, conspiracy theories by sharing the truth with people, such that even two, three months later, people had let go of the conspiracy theories as a result of these chatbots. But those same tools could presumably be used for nefarious purposes as well in terms of manipulating individuals. And right now, we don't have any legal protection against that. In Europe, the EU AI Act actually has a provision that prohibits behavioral manipulation; here in the United States, we don't have that.
And before I did this research, I really didn’t appreciate some of the importance of privacy legislation and data privacy, but I think that data privacy is incredibly important just in terms of the ability to manipulate others because you know what their grocery store list is because they enter their phone number and they get a little bit of discount by giving up data. All of that can be used by bots to manipulate people. And that is a concern for all of us generally, but it’s also a concern for those of us who are pluralists who believe that a dominant group shouldn’t just manipulate minority groups and assimilate everybody in a one way of thinking.
Alex Lovit: At the beginning of this conversation, you cited the Russian attempts to meddle in the 2016 election by impersonating different demographic groups, by spreading misinformation through social media that I believe was not really done using AI. Can you talk a little bit about how AI could be used by actors doing similar work?
Spencer Overton: You're absolutely right. So basically, the Russians hired people to create these posts and to obtain images online and to pretend that they were African Americans. At this juncture, with new technology, the Russians or anyone else don't need fluent English speakers. They're not reliant simply on posts and still images of Black people downloaded from Google Images. They can actually create images and videos of what seem to be Black people talking as a result of generative AI. And then the question is, will these new tools basically take that old tactic and make it even more effective? What layers will it put on these earlier challenges and problems?
Alex Lovit: So what you’re talking about there is what a malicious actor might do, someone who is overtly trying to disrupt democracy or skew election outcomes. Another category of malicious behavior that you’ve written about are ways that people might use AI to directly disrupt elections processes. Can you talk a little bit about what that would look like? What are you worried about there?
Spencer Overton: So right now, we have racially polarized voting in the United States. It's one of the most significant determinants of how people vote in the United States. And a bad actor could target one of the 400 counties that are majority people of color in this country. Generative AI basically makes it easier to create malicious code, and you could send a message impersonating a superior to an elections official and get them to click on and download malicious code. Or you could create tens of thousands of open records requests in these particular counties, and these aren't copy and paste, these are all original, so they differ from one another, but they look like legitimate requests for information.
And if you target these toward these communities and they have to use employee time basically responding to these records requests rather than processing voter registrations and administering elections, you could really gum up elections in particular areas and skew election outcomes, whether the outcomes in that particular county or, in a place like North Carolina or a place like Philadelphia, the outcome for the state as a whole or even a federal election for president. Another example is using these tools to dox and intimidate particular election officials.
Alex Lovit: Theoretically, these tactics could be used to target any group of voters, could be used to target any precinct or elections place. Are there reasons to think that this will particularly affect voters of color?
Spencer Overton: Just a quick note, we have seen election challenges targeted at particular counties in the past, and those challenges haven’t been statewide in some fair way. They’ve been deployed, targeted at particular counties with large numbers of folks of color, and so we’ve seen that in the past manually. And the question is, does AI allow for a turbocharging of that? My argument is yes.
Alex Lovit: You mentioned earlier that elections officials can use AI and that there are ways that they can use it that make you nervous, that might have negative effects, and then there might be some ways that could be a positive, could be a useful tool. Can you talk through that a little bit? What are you worried about and what would be ways that elections officials could effectively use AI?
Spencer Overton: Well, I think election officials solely relying on AI is a problem. For example, there was a chatbot in California. It could answer questions in English that it could not answer in Spanish, and this is in the Secretary of State’s office in California. Now, it’s great that that chatbot can answer questions 24 hours a day, including in the middle of the night to residents who happen to speak English. But the fact that Spanish speakers can’t get that same information is a problem, and it’s a problem not just from a moral standpoint, it’s a problem from a legal standpoint because Section 203 of the Voting Rights Act requires that California provide all information in Spanish that it provides in English, so that’s a concrete example of a danger that I see.
Now, on the other side of things, I don't know that there's a tool that's more effective than AI in low-cost translation. So certainly, there needs to be some improvement in terms of language translation, but a large language model is incredibly powerful in terms of translation. And so these tools can be used to make things more accessible in a low-cost way. So solely relying on AI, I do think, is a problem, but allowing AI to flag issues or do a first draft that can be reviewed by human beings, I think that's productive. I think that increases the value of election officials to democracy and to voters.
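The "AI drafts, humans review" workflow Overton describes can be sketched in a few lines. This is a minimal illustration under stated assumptions: `machine_translate` is a placeholder, not a real translation API, and the design point is simply that a raw machine draft is never published until a reviewer signs off.

```python
# Minimal sketch of an election-office translation pipeline where AI produces
# a first draft and a human gate controls publication. `machine_translate` is
# a hypothetical stand-in for a real translation model or service.

def machine_translate(text: str, target_lang: str) -> str:
    # Placeholder for a call to a real translation model.
    return f"[{target_lang} draft] {text}"


def publish_notice(text: str, target_lang: str, reviewer_approved: bool):
    """Generate a draft instantly, but publish only after human sign-off."""
    draft = machine_translate(text, target_lang)
    if not reviewer_approved:
        return None  # held for human review; a raw draft is never published
    return draft


# A Spanish draft exists immediately, but stays held until a person reviews it.
held = publish_notice("Polls open at 7 a.m.", "es", reviewer_approved=False)
approved = publish_notice("Polls open at 7 a.m.", "es", reviewer_approved=True)
print(held, approved)
```

The low cost of the draft is what makes Section 203 compliance more attainable; the human gate is what keeps a mistranslation from reaching voters.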
Alex Lovit: We’ve mostly been talking about direct ways that AI could affect democracy. A lot of the conversation about AI in this country has been focused on mass job loss or other big economic or social effects. When you think about those things, do you think that there are risks to democracy coming out of that?
Spencer Overton: I think that there are, and I think one risk that we have not grappled with fully is a phenomenon we see in Germany. The AfD, the Alternative for Germany, has its stronghold in East Germany. Basically, after the wall fell, a lot of people migrated out to West Germany. A lot of the talent, a lot of the companies in Germany are in West Germany and not East Germany, so you have an area, East Germany, that's frankly lower skilled, that is anti-immigrant, et cetera.
And if we talk about technology coming in and job loss, and what's the future, and do these folks have a pathway to the future, I definitely think that heightens the anxiety and antagonism of people. And as opposed to simply arguing that these people are somehow wrong, or here's how their argument is logically wrong, and they just need to go get a trade or learn how to code or use AI, I think we've got to really acknowledge what's coming and grapple with it and develop a strategy to ensure that everyone is a part of this multiracial democracy in the future.
Alex Lovit: We’ve been talking about a variety of different threats that AI might pose to democracy and ways that it might undermine a pluralistic democracy. Talk to me about how demographic changes in the US affect those risks.
Spencer Overton: I think because we're so close to it, it's difficult to see, Alex. But if we go back to 1790, there was an original immigration law that basically limited naturalized citizenship to White persons of good character, and that really essentially gerrymandered our population in a way that affects us even today. We had Chinese Exclusion Acts and a variety of other tools to maintain a particular racial population, and it really wasn't until the Immigration and Nationality Act of 1965 that we removed race as a factor in terms of immigration and eliminated quotas that heavily benefited Northern and Western Europe. As a result, our population has changed. In 1960, 15% of our population were people of color; today it's 40% and rising. In fact, it's over 40%. In 1960, 3% of the US House were people of color. Today, that number is 28%.
We’ve seen a significant change in our population, and there has been a pushback on that. There’s been some cultural anxiety, and that’s something that exists and we have to be honest about it. And it’s not just something in the United States. If you look at AfD, the Alternative for Germany, if we look at some of the nativist sentiment in Britain that motivated Brexit, Brazil, this has been an issue. So this is really a worldwide phenomenon, and I say that it’s important because we can’t think about AI in the context of democracy without acknowledging that that’s going on. It’s really difficult to understand the magnitude of these technology issues out of the context of those political and social and demographic changes in the United States.
Alex Lovit: So what you're talking about there is a pretty significant shift in the demographic makeup of the United States, and we are on the verge of, I hope, becoming the world's first truly multiracial, majority-minority democracy. So those demographic trends are going to continue; whether or not we'll remain a democracy remains to be seen. So you have that trend, and then at the same time, you have this other trend of the technological development of AI, and those things are separate.
Spencer Overton: They’re separate, but they’re interacting together. When we talk about deepfakes deployed in a particular way, they overlap. And so the question is, what tools can we create so that people can share power and make decisions together despite that difference, and they can communicate with one another. And there’s a question about can AI play a constructive role in that rather than a destructive role in that? And you’re right, we are on the verge of certainly our demographics becoming one in which there is no racial or cultural majority in the United States. And the question is, can we create a democracy so that we’re actually sharing power and different groups are fairly represented, and we’re making decisions together and we’re responding to the needs of different communities in a fair and inclusive way.
Alex Lovit: The stuff we’ve been talking about, racism, bias, these things are pretty baked into the United States culturally. They’ve been longstanding problems. So when we think about AI, is this really going to affect things? Can you give concrete examples of how introducing AI into our democratic system really is going to make things worse?
Spencer Overton: I really look to the past in terms of past technologies, whether it’s feature films, Birth of a Nation is basically about the rise of the South and the Ku Klux Klan here. Lynchings increased in the United States as a result of that film. I think about low cost radios that the Nazis distributed in order to further their propaganda in the 1930s. I think about message boards in the 1980s that White supremacists used to connect with one another. So I think that often we have seen ethnonationalists use technology to move things forward. I don’t see why AI should be any different.
Now, that said, I think that Dr. King, John Lewis, and others were masters in understanding the then-new technology of television and how to use it to really illustrate some of the challenges that were happening in the South and expose that and share that with the rest of the nation. When you have an Edmund Pettus Bridge Bloody Sunday that's on TV the same night as Judgment at Nuremberg, people are seeing the parallels, and they don't want to be Nazis.
So again, I think these technologies can be used for good ways, but we can’t just look at the technology and say, “Here’s this bright new shiny object. Isn’t everything good?”
I think we have to recognize what exists in our society and the challenges that we have had, and we continue to have and really think about, okay, what does this mean for the future of our multiracial democracy and how we interact together, and what can we do to ensure that this is used constructively so we can share power and make decisions together rather than simply as a tool for one group to dominate another.
Alex Lovit: Let me ask you about where we are right now in this political moment in the United States. President Trump has made some pretty significant changes in federal policy towards AI. What have those changes looked like? How has Trump changed federal policy towards AI?
Spencer Overton: Well, his first step was to repeal many of the anti-bias provisions with regard to AI in terms of the federal government. He did that basically on day one. On day three, he pronounces that preserving America's global AI dominance is our priority, and dismisses issues like equity and preventing bias as ideological issues that are basically preventing American innovation and development. After that, his OMB basically instructs agencies to dismantle anti-bias protections in the law. So we see that happening at the EEOC in terms of employment. We saw that happening at the Department of Labor, which had guidelines for federal contractors on how they hire people, how they promote people and use algorithmic tools, and ensuring that there's not bias in those tools. Those protections were removed. The Consumer Financial Protection Bureau had some anti-bias and algorithmic decision-making protections; those were removed.
Six months later, the administration came out with two big documents: an AI Action Plan, and an executive order against what it said was woke AI. With regard to the executive order, it basically said, "We're going to prevent the federal government from buying AI that's been fine-tuned to reduce bias." It described that as ideological bias. So if there's a company that has innovated in a way so that its AI is less biased, it's basically ineligible for purchase by the federal government. In terms of the AI Action Plan, it proposed that states that have laws perceived as impeding AI development, and that could be anti-bias laws, for example, not be eligible for federal AI funding. So my assessment is that the anti-diversity agenda of the administration is really furthered by its AI policy.
One other thing you'll remember is that the administration scaled back and repealed all of the language assistance provisions in terms of federal law, and made English the official language of the United States. It did not use AI to make government more accessible to non-English speakers. I think that's a problem, because there are about 28 million people in the United States who are limited in their English proficiency. About half of those people are US citizens, and they could really benefit from AI-assisted translations that are very low cost. That's a capacity of AI that's not being taken advantage of by this administration.
Alex Lovit: Let’s talk a little bit about policy solutions. You’ve mentioned some of the policies that the Biden administration had put in place that were then rescinded by the Trump administration. You’ve also mentioned some regulations in Europe. In your mind, are those model policies? Are there other things you’d like to see the US government do to protect against these risks from AI?
Spencer Overton: Certainly some degree of transparency, so some auditing, some disclosure, those types of things are very important. And one big problem we have is the black box nature of AI where we don’t know how decisions are made or when harms occur. So disclosure is incredibly important and transparency, and there are permutations of that. So if we think about giving immunity to whistle-blowers who come out and testify and talk about these issues, so there are a variety of tactics there. I think also data privacy protections, we talk about this in vague ways. I think that there’s some people who can appreciate it, and I think we all know to a certain extent with our phones, we’re being tracked and it feels a little uneasy, but it’s almost like we just give in and say, “Hey, this is the new world we are living in.” So some really serious, thoughtful data privacy protections.
I think that there’s some things that could be done in terms of ensuring that everyone has access to the benefits of AI. The long work we’ve done in terms of the digital divide, whether it’s ensuring that people have access to the internet or to devices, et cetera, and to AI itself, I think we need to continue to be vigilant in terms of ensuring widespread access, and that’s certainly from a fairness standpoint important, but it’s also important in terms of economic growth.
One problem we have right now is that frankly, with both race and technology, we can't get agreement in Congress, and so we seesaw between administrations that lean into things using OMB memos and executive orders. I think that we do need some statutory protections that ensure consistency across administrations in terms of AI: ensuring that the government only buys and uses facial recognition technology that's non-discriminatory, and some heightened protections for high-risk uses of AI. When I say that, I'm talking about AI that makes determinations about issues like health benefits or benefits from government or voting or law enforcement, and ensuring that we've got adequate transparency, disclosure, and regular auditing of those uses. That high-risk structure is actually the structure used by the EU AI Act to regulate AI. Those are some of the proposals I'm focused on from a regulatory standpoint.
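The "regular auditing" of high-risk AI uses Overton proposes often begins with a simple disparate-impact check. One common heuristic, sketched here in generic form (the data and threshold are illustrative, not drawn from any agency's actual audit), is the four-fifths rule borrowed from employment law: flag any group whose selection rate falls below 80% of the highest group's rate.

```python
# Generic sketch of a disparate-impact audit on an algorithmic decision tool,
# using the four-fifths (80%) rule. The decision records are made up for
# illustration; a real audit would pull logs from the deployed system.

from collections import Counter


def selection_rates(decisions):
    """decisions: iterable of (group, approved: bool) pairs -> rate per group."""
    totals, approved = Counter(), Counter()
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += ok
    return {g: approved[g] / totals[g] for g in totals}


def four_fifths_flag(rates):
    """Flag any group whose rate is under 80% of the best-off group's rate."""
    top = max(rates.values())
    return {g: r < 0.8 * top for g, r in rates.items()}


decisions = [("A", True)] * 80 + [("A", False)] * 20 + \
            [("B", True)] * 50 + [("B", False)] * 50
rates = selection_rates(decisions)       # A: 0.80, B: 0.50
print(four_fifths_flag(rates))           # B falls below 0.8 * 0.80 = 0.64
```

A flag here is not proof of illegal bias, only a trigger for the kind of closer human review and disclosure the transcript argues statutes should require.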
Alex Lovit: All this stuff feels pretty big. There’s a potentially pretty large and consequential technological shift with potentially large and consequential effects on our economy, and I’m just one guy. I’m just one citizen, and it feels like I don’t have much power over that. Do you have any advice for listeners about either how they should think about how they interact with AI in their own lives or how they could get involved in political advocacy in a way that might help our country evolve in a positive direction?
Spencer Overton: I want to get into the specifics of that, but I do want to lead off by noting that other people and other generations have faced big challenges, really big challenges, and they've been able to muscle their way through them. They haven't solved everything. Sometimes we feel like if we can't solve everything, is it even worth dealing with the challenge? But in terms of the specifics of technology, I think being serious about learning the tools and incorporating them is important. It's important not to avoid the tools but to actually embrace them, understand how they work, and use them to be more effective and to be creative and efficient, et cetera. I am not an AI skeptic or pessimist. I believe that people should learn how to use the tools. And I think another piece is not sidelining or segregating technology in some bucket like, "Oh, I'm focused on democracy, not technology."
Technology is all around us in terms of the economy, democracy, et cetera. It's not separate. I also think that activism is important, engagement at the local level and the national level. One thing we haven't talked about is the significant potential for AI to help in grassroots community organizing, whether it is producing content or assessing the sentiment of community members and ensuring you're hearing from everybody. There are a lot of applications of AI that can be used to facilitate community organizing and democracy as a whole: monitoring the city council, summarizing complex local policy proposals. There are just a lot of uses of AI, and it's about appreciating that that's the world we live in and embracing those tools.
Alex Lovit: Spencer Overton, thank you for joining me on The Context.
Spencer Overton: Thank you, I enjoyed it.
Alex Lovit: The Context is a production of the Charles F. Kettering Foundation. Our producers are George Drake Jr. and Emily Vaughn. Melinda Gilmore is our Director of Communications. The rest of our team includes Jamaal Bell, Tayo Clyburn, Jasmine Olaore, and Darla Minnich. We’ll be back in two weeks with another conversation about democracy. In the meantime, visit our website kettering.org to learn more about the foundation, or to sign up for our newsletter. If you have comments for the show, you can reach us at thecontext@kettering.org. If you like the show, leave us a rating or a review wherever you get your podcasts, or just tell a friend about us. I’m Alex Lovit. I’m a Senior Program Officer and Historian here at Kettering. Thanks for listening.
The views and opinions expressed in this podcast are those of the host and guests. They are not the views and opinions of the Kettering Foundation. The Foundation’s support of this podcast is not an endorsement of its content.
Male Voice: This podcast is part of The Democracy Group.