SXSW sessions are held in many locations. Several blocks north of the central hub at the Austin Convention Center, I found UK House, hosted by Downright Austin at East 10th and Red River Street. With musical events outdoors, serious sessions indoors, and a nice place to have a cup of tea with cake, it was a vibrant spot that few SXSW attendees ever ventured to find.

Avid readers of Sun News Austin will have noticed I have reported on many sessions dealing with artificial intelligence. UK House was part of this track, holding a session on electoral interference by AI.

There were five speakers: Dr. Alex Dimakis, Professor of Engineering at UT Austin; Ben Colman, CEO of Reality Defender; Dave Willner, former head of Trust and Safety at OpenAI; Kunal Khatri, Trade Commissioner of His Majesty’s Government; and Rebecca Harvey, Trust and Safety Lead for His Majesty’s Government.

Entitled “Elections: The Great British Fake-Off,” the session was hosted by Mr. Khatri. The key questions he posed to the panel were: Why are we worried about the risks of AI? Why is it different this year, with national elections poised to happen in both the UK and the US?

Colman responded first. “What makes this fundamentally different than any other time in history is that in previous scenarios, technology was only in the hands of experts, but now anybody is just one Google search away from deepfaking it: a person, or a voice, or a piece of media. There are over 100,000 free or very inexpensive tools online that are available to anybody, whether they have good or bad intentions.”

“Just to make it more contrarian and fun,” said Dr. Dimakis, “some of the research is saying that many academics are not worried. They say that actually only 20% of Americans are getting their news from social media, as opposed to newspapers and TV channels. The Economist is saying the deepfake problem, in the classical way, is probably not going to be as big a deal as we might expect. It was also saying that research indicates the people who do see the deepfakes are people who are already very polarized. Hence, there is more demand for it.”

Dimakis did express a more subtle concern, however. “I am worried about aspects of AI affecting the election. Staffers are using AI to target in more intelligent ways and write emails that are more personalized. I even saw a video where a candidate is speaking personally to me and knows about my family. There will be infinite uses; also, bots amplifying specific posts on Twitter. I think this is where the network effect is changing the algorithms about what is going up and down, which has a huge influence on fundraising. I think this is a tremendously important thing that’s not as obvious.”

Harvey warned that “the danger as we get towards the elections later this year is that people have so much distrust in information that they don’t trust any information out there. And therefore, they either create their own theories or join some of the more popular extremist viewpoints. I think we’ve got to be really careful that while deepfakes and AI-generated content have the ability to influence the election, we might worry too much about their inputs.”

Deepfake phone calls from your boss are entirely possible, telling you, for example, that you are needed in the office when you should be voting. Willner said that “an even scarier version of the ‘boss phone call’ is targeting mid-level election officials on election day. We haven’t seen a ton of direct interference with the administration of elections, but we shouldn’t take that to mean it’s not a thing people will try.”

In the UK, there is a Defending Democracy Taskforce which, Harvey explained, “does everything from ensuring our politicians have good cyber security to working with social media companies that might be spreading disinformation. The UK has recently introduced the Online Safety Act, which will hopefully have a lot of impact on requiring platforms to make sure they’re not spreading illegal content. It will apply to AI as well.”

Dimakis said elections can be influenced “by things that are not explicitly faked. That’s the confusing line. It can be very targeted, it can be amplifying angles of extremist views. It’s not going to be a fully generated video of Biden talking. That kind of attack, in the space of possibilities, is not likely. There are so many more nuanced aspects of finding and targeting specific individuals that are more dangerous.”

Photo by C. Cunningham. (l to r) Harvey, Dimakis, Willner, Colman, Khatri.

By Dr. Cliff Cunningham

Dr. Cliff Cunningham is a planetary scientist and the acknowledged expert on the 19th-century study of asteroids. He is a Research Fellow at the University of Southern Queensland in Australia. He serves as Editor of the Historical & Cultural Astronomy book series published by Springer, and as Associate Editor of the Journal of Astronomical History & Heritage. Asteroid 4276 was named in his honour by the International Astronomical Union on the recommendation of the Harvard-Smithsonian Center for Astrophysics. Dr. Cunningham has written or edited 15 books. His PhD is in the History of Astronomy, and he also holds a BA in Classical Studies.