Article Details

Scrape Timestamp (UTC): 2024-04-02 15:05:41.709

Source: https://www.theregister.com/2024/04/02/microsoft_election_ai_fakes/

Original Article Text


Microsoft warns deepfake election subversion is disturbingly easy

Simple stuff like slapping on a logo fools more folks and travels further.

As hundreds of millions of voters around the globe prepare to elect their leaders this year, there's no question that trolls will try to sway the outcomes using AI, according to Clint Watts, general manager of Microsoft's Threat Analysis Center.

It's unlikely that AI deception and misinformation will be as sophisticated as some fear, he reassured – but that doesn't mean they will be any less effective. Because simple plans work.

"In 2024, there will be fakes," Watts declared during an event hosted by the Aspen Institute and Columbia University. "Some will be deep. Most will be shallow. And the simplest manipulations will travel the furthest on the internet."

Watts said his team spotted the first Russian social media account impersonating an American ten years ago. "And we used a tool – Microsoft Excel – which is incredible," he quipped.

Initially, Redmond's threat hunters (using Excel, of course) tracked Russian trolls testing their fake videos and images on locals, then moving on to Ukraine, Syria, and Libya. "It was battlefields, and then it was taking it on the road to all of the European and US elections," Watts recalled.

"Our monitoring list in 2016 was Twitter or Facebook accounts linking to Blogspot," he noted. "In 2020 it was Twitter or Facebook, and a few other platforms, but mostly linking to YouTube. And today, it's all video, any threat actor."

Watts' team tracks government-linked threat groups from Russia, Iran, and China, plus other nations around the world, he explained. He also revealed that about nine months ago his team conducted a deep dive into how these groups are using AI to influence elections.

"In just the last few months, the most effective technique that's been used by Russian actors has been posting a picture and putting a real news organization logo on that picture," he observed. "That gets millions of shares."

Let's play a game: AI? Or not AI?

Watts also pointed out some indicators to help distinguish between real and fake news. The first, he said, is the setting: is the video or photo staged in a public or private place? Videos set in public, with a well-known speaker at a rally or an event attended by a large group of people, are harder to fake.

"We've seen a deepfake video go out, and crowds are pretty good, collectively, at saying no – we've seen a video of this before, we've seen the background, he didn't say this or she didn't say this," Watts explained.

However, AI-generated content featuring an elected official – often purporting to be set in their office, home, or some other private setting – is much easier to pass off as legitimate.

"The second part is, in terms of AI, the medium matters tremendously," Watts explained. While deepfake videos are the most difficult to make and AI-generated text is the easiest, "text is hard to get people to pay attention to," he noted. "With video, people like to watch."

However, audio is the medium that people "should be worried about," according to Watts. "AI audio is easier to create because your dataset is smaller, and there's no contextual clues for the audience to really evaluate."

Viewers of AI-generated videos can better judge their authenticity based on how the subject moves or talks, he explained. But with audio-only recordings, those clues aren't as obvious.
Watts mentioned the Biden robocalls in the US and the fake Slovakia election audio recordings as examples. "The most effective stuff is real, a little bit of fake, and then real," blending it in "to change it just a little bit – that's hard to fact check," he lamented. "When you're looking at private settings and audio, with a mix of real and fake, that's a powerful tool that can be used" for election interference.

Daily Brief Summary

MISCELLANEOUS // Microsoft Alerts on Deepfake Risks to Global Election Integrity

Microsoft's Threat Analysis Center warns that trolls will try to use AI, including deepfakes, to sway elections worldwide this year.

Simple deceptive tactics, such as adding legitimate news logos to images, are proving effective at spreading misinformation.

The threat center has tracked the evolution of Russian troll tactics over the past decade, from testing fakes on battlefields to interfering in elections, noting a shift from linked blog posts toward video content.

AI-generated content purporting to show officials in private settings is identified as particularly concerning, since viewers lack the contextual clues needed to verify authenticity.

AI audio fakes are easier to produce and harder to detect than video fakes, making them a particularly potent medium for misinformation.

Microsoft's team underscores the difficulty of fact-checking content that subtly blends real and fabricated material, a mix Watts described as a powerful tool for election interference.