As I sat down to write this column, social media and newspaper headlines here in India were buzzing with the news of a “deep fake” video that had gone viral, showing the face of an Indian actor morphed on to another woman’s body.
The actor, Rashmika Mandanna, took to ‘X’, where she said “something like this is extremely scary not only for me, but also for each one of us who is vulnerable because of how technology is being misused”.
What this story reflects is that we have a problem, a deep problem, with the misuse of artificial intelligence, and AI-generated misinformation is going to need a robust response from all countries. As a woman in the public eye, I am particularly concerned about the dangers of the misuse of AI against women. Most tech platforms are hostile spaces for women in any case; the rise of deep fake videos adds a whole new dimension to this problem.
Moreover, with elections due next year in India, the United States and the UK, one can only begin to imagine the kind of chaos and disruption the misuse of AI could cause.
Which is why it was significant that the UK’s Prime Minister, Rishi Sunak, hosted the world’s first ever AI Safety Summit at Bletchley Park earlier this month, in which 28 countries, including the United States, China and India, participated along with the EU. It is also significant that big tech executives were present at the summit, including OpenAI’s Sam Altman and the world’s richest man and X owner, Elon Musk.
Hard work starts now
The big focus was on “Frontier AI”, defined as highly capable AI models that could possess dangerous capabilities posing severe risks to public safety. The Bletchley Summit agreed on a declaration that global action is needed to tackle the potential risks of AI, identifying risks of shared concern and building a scientific understanding of them.
The declaration noted the “potential for serious, even catastrophic, harm, either deliberate or unintentional, stemming from the most significant capabilities of these AI models”, as well as risks beyond frontier AI, including those of bias and privacy.
Getting everyone, including China, to agree to a declaration was in itself an important first step. But the hard work starts now. President Biden has just signed an important executive order requiring AI companies to share the results of safety tests of their new products with the US government before making them available to consumers.
The aim is to ensure that new products do not pose a threat to users or the public at large. This means the US government can force a developer to tweak or abandon a product. “These measures will ensure AI systems are safe, secure, and trustworthy before companies make them public,” the White House has said.
Real challenge for governments
This is in line with what Rishi Sunak announced at Bletchley Park: that AI companies had agreed at the Summit to give governments early access to their models to perform safety evaluations. Two more AI Safety meetings are already in the pipeline: one to be held virtually in South Korea within the next six months, and the next in-person summit in France, about a year from now.
The issue of misuse of AI has also reignited another debate — should tech platforms be held accountable for the content they carry? In India, as outrage grows over the deep fake video of actor Rashmika, the country’s Information Technology Ministry has sent a notice to social media companies.
Under Indian law, even executives of these companies can go to jail if they don’t take down the content, such as morphed images. The minister in charge has called deep fakes a “dangerous form of misinformation”.
Of course, AI has huge potential for innovation, and governments will have to balance this with tackling misuse. How they do so will be the next big challenge of this century.