AI and misinformation rank among the biggest risks in the WEF cybersecurity outlook. AI can supercharge both the creation and dissemination of disinformation, making it all too easy for bad actors to sow doubt and division and putting the reliability and integrity of elections at risk. One of the most disconcerting threats is deepfakes, which are becoming increasingly sophisticated - and detection tools aren't yet up to the task.

Deepfakes have already been used to generate false clips of politicians in attempts to influence voters. In Slovakia's September election, the far-right party spread a fake audio recording of the Progressive party leader in which he appeared to discuss plans to rig the election.

What can be done about deepfakes?

Tech companies are starting to respond - Google, Meta and X have all said that political ads which use AI must clearly disclose that fact. Automated tools for detecting fakes exist, but they can't keep up with the techniques being used - it's too easy to replicate the voices and images of people in the public eye. These measures will pale in the face of widespread use of the technology, which is available to anyone with an internet connection. And with so much of our data out there, it's not just public figures who can be imitated - it's all of us.

Unfortunately, existing solutions remain hazy. The most viable approach is to keep fundamental cybersecurity practices in place and to adopt an 'assumed breach' mindset.