International information warfare & the digital arms race
Organised social media disinformation campaigns are being staged worldwide on an enormous scale, and the trend looks set only to grow.
In June this year Twitter announced the removal of 23,750 accounts, along with 150,000 ‘amplifier’ accounts, which were part of a co-ordinated online disinformation campaign run by China against protesters in Hong Kong.
In September, Facebook announced the dismantling of a Chinese disinformation campaign focused on the Philippines which had attracted an audience of at least 130,000 followers.
These incidents are only the latest in an unwelcome trend of state misinformation and influence campaigns over the past few years, conducted at a staggering scale.
In 2019 Facebook removed a total of 6.2 billion, yes billion, fake accounts. Many were crime-related, but many others may have been created by political and state actors. As with other digital trends, the pandemic may have amplified the problem, driving growing numbers of people to spend even more time on the Internet.
Researchers from the Computational Propaganda Research project at Oxford University have identified more than 70 countries where they believe “organised social media manipulation campaigns” are being conducted by actors within the state. Of these, seven countries have conducted operations against foreign audiences: China, Russia, Iran, Venezuela, India, Pakistan and Saudi Arabia.
Although the threat from China is clearly growing, Russia has long had the highest profile and appears undaunted by the revelations of its activities. It has continued to attempt to interfere in the 2020 US election despite the well-publicised Mueller investigation into the operations it ran in 2016, while a recent Times investigation exposed a Russian disinformation campaign designed to undermine and spread fear about the Oxford University coronavirus vaccine trials.
Both Russia and China see themselves as being in an ideological war against the West in which information is a weapon, and from their perspective it can be an effective one. Unfortunately, far from abating, this international ‘information warfare’ may be taken to another level by emerging technology.
A number of factors are now converging. The first is the growth of the Internet itself. According to Cybersecurity Magazine there will be 6 billion Internet users by 2022 (75% of the projected world population of 8 billion). As potential audiences increase across the world, even populations in remote areas can now be engaged online.
At the same time, artificial intelligence (AI) is emerging as a powerful capability on both sides. Facebook and Twitter have been able to identify and remove so many accounts by using increasingly capable AI-enabled algorithms. Along with other social media platforms, they appear to have woken up to the threat and are making strenuous efforts to counter disinformation. Through the use of AI they are clearly having some success, but the numbers involved suggest the ‘war’ is far from won.
Meanwhile AI is driving the production of ever more realistic fake content. Deepfake videos emerged only a few years ago in crude forms that could easily be identified as fake. So called because they are created using AI ‘deep learning’ techniques to replace a person in an existing video with someone else’s likeness, deepfakes have evolved rapidly.
The quality of such videos has now reached the stage where many are indistinguishable from the real thing. Such is the concern that deepfakes could be used to generate highly convincing ‘fake news’ that DARPA, the US defence research agency, has begun researching how to detect them. Whatever techniques are developed, deepfakes are likely to continue to evolve to counter them.
Similarly, AI can now generate text and images that are indistinguishable from human writing and real photographs. AI may also be driving the creation of ever more realistic fake social media accounts, built to look and behave like genuine human accounts. We are now at the point where everything we see, hear or read can be artificially simulated, and anything can be faked.
The technologies used in ‘weaponising’ social media and AI-generated deepfakes, images and text all have their origins in the commercial world rather than government or defence. In the digital arms race between the major social media platforms and malign state actors, ironically both sides are being equipped with the same technological weapons. The digital ‘arms manufacturers’ of FAAMG (Facebook, Amazon, Apple, Microsoft, and Alphabet’s Google) now have research and development budgets well in excess of those of the major defence companies and most governments. It will be commercial innovation that drives the technologies underpinning the information weapons of the future.
Today state-level information warfare is rampant and worldwide. It takes place in the ephemeral environment of cyberspace, but it is being fiercely fought. With states willing and able to resource activities at scale and exploit emerging technology, particularly AI, the battles look set to intensify.
Ian Tunnicliffe
Director
Accordance Associates