In the crucial New Hampshire primary in the United States earlier this year, a voice purporting to be that of US President Joe Biden called Democratic voters over the phone and told them to stay at home because “your vote makes a difference in November, not this Tuesday.” It was fake.
The fake Biden message, created with artificial intelligence voice-cloning technology, went viral before it was detected and shut down. This is an election year in many key democracies around the world: India, the United States and the United Kingdom are among the countries holding elections in 2024.
More than 50 countries will go to the polls this year, and the biggest challenge confronting them all is deep fakes aided by artificial intelligence, which can seriously undermine democracy. The images of candidates can easily be manipulated, their voices can be cloned; almost anything is possible with the new technologies. The threat of disinformation is alarming.
Implementation is an issue
A 2024 report by the World Economic Forum found that 53 per cent of global experts named AI-generated misinformation and disinformation as the second leading risk in today’s risk landscape. It is not difficult to generate fake images using AI, and to the naked eye these are almost impossible to distinguish from the real thing.
The world’s leading tech companies, including OpenAI and Meta, have pledged to combat this problem, especially during elections. This includes automatically labelling AI-generated images and videos so that people immediately know that what they are seeing is not real.
The real action will have to come from these companies, which have the best expertise in dealing with this new threat. The elections this year will be their first real test. Some companies have said they will not allow their AI technology to be used to make images of real people, such as candidates contesting elections. But implementation is clearly an issue.
‘The Guardian’ newspaper reported that when it asked one tech company, Midjourney, to generate images of Joe Biden and his rival Donald Trump in a boxing ring, the request was denied. However, when the newspaper asked for the same images of UK Prime Minister Rishi Sunak and Labour leader Sir Keir Starmer, the tool generated them.
This raises the question: how serious are US-based tech companies about dealing with AI-driven disinformation in countries outside the US? Governments also have an important role to play.
Clampdown on AI technology?
In the US, Biden signed an executive order in October 2023 that requires leading AI developers to share safety test results and other information with the government. But experts say it does not go far enough, largely because the US may be worried that a major clampdown on AI technology could stifle innovation.
The European Union has taken the toughest line so far on dealing with the threat of the misuse of AI. The 27-nation bloc has just passed the Artificial Intelligence Act, which will take effect by the end of the year. The first comprehensive AI law in the world, it classifies the technology into four risk categories: prohibited, high risk, limited risk and minimal risk.
So, for example, anything that violates human rights through mass surveillance will be banned. High risk includes things like biometric identification and facial recognition technology, which will have to meet strict requirements.
On deep fakes, the law classifies them as “limited risk” and says they need to be labelled as fake. The Act states that “deployers of an AI system that generates or manipulates image, audio or video content constituting a deep fake, shall disclose that the content has been artificially generated or manipulated.”
The EU law is not perfect, but it is an important beginning. Ultimately, the real challenge lies not just in making pledges to combat deep fakes but in enforcing them and in keeping up with a technology that is constantly changing.