Artificial Intelligence – how to protect the world against an existential risk
Powerful Artificial Intelligence systems should only be developed once we are confident that their effects will be positive and the risks are manageable, particularly given their potential to increase the sophistication of cyber attacks.
Ransomware Sophistication
DDoS / MITM / SQL Injection / Zero-Day Exploit / DNS Tunnelling / BEC / Cross-Site Scripting. Most of those reading the words above probably could not say what each of these is, what it means, or what it can achieve.
These are, at present, among the most effective and evasive cyber-attack methods used to cripple and destroy organisations, both reputationally and financially. For almost three days, the global operations of JBS – the world’s largest meat processor – were hobbled by a ransomware attack targeting its IT systems, just weeks after the May 2021 Colonial Pipeline incident, in which a ransomware attack took down a key oil artery on the US east coast.
The perpetrators of the JBS attack have long been known to cyber-security experts: since February of that year alone, the REvil group had been connected to almost 100 targeted ransomware attacks.
Extortion and ransomware attacks have soared in popularity in recent years, partly because the business model works. In the Colonial Pipeline attack, attributed to a group named DarkSide, the company paid £3.1m to regain access to its own infrastructure.
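To make one of the attack classes listed above concrete, here is a minimal sketch of a SQL injection flaw and its standard fix, using Python’s built-in sqlite3 module. The table, user and attacker input are hypothetical, purely for illustration.

```python
# A minimal, self-contained sketch of a SQL injection flaw and its
# standard fix. The table and data are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0)")

user_input = "alice' OR '1'='1"  # attacker-controlled value

# VULNERABLE: the input is pasted straight into the SQL string,
# so the quote characters change the meaning of the query.
query = f"SELECT * FROM users WHERE name = '{user_input}'"
print(conn.execute(query).fetchall())  # returns rows it should not

# SAFE: a parameterised query treats the input as data, never as SQL.
print(conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)
).fetchall())  # returns nothing
```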
The Role of Artificial Intelligence
Artificial Intelligence (AI) is changing the sophistication of attacks and paving the way to a new era of cyber manipulation. The same capability that lets defenders scan networks automatically, rather than manually, in search of weaknesses also lets threat actors launch attacks that evade traditional security measures. In the Colonial Pipeline case, the elliptic-curve encryption used by the ransomware was effectively unbreakable; as such, the ransom was paid within a matter of hours.
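As a rough illustration of what automated, rather than manual, scanning means in practice, the sketch below sweeps a single host for open TCP ports concurrently. The target address is a hypothetical documentation-range IP; real AI-assisted tooling goes far further, prioritising targets and adapting to what it finds, and scanning should only ever be run against systems you are authorised to test.

```python
# A minimal sketch of automated reconnaissance: probing a host's
# "well-known" TCP ports concurrently instead of by hand.
import socket
from concurrent.futures import ThreadPoolExecutor

TARGET = "192.0.2.10"    # hypothetical, documentation-range IP
PORTS = range(1, 1025)   # the well-known ports

def probe(port):
    """Return the port number if a TCP connection succeeds."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(0.5)
        if s.connect_ex((TARGET, port)) == 0:
            return port
    return None

with ThreadPoolExecutor(max_workers=100) as pool:
    open_ports = [p for p in pool.map(probe, PORTS) if p is not None]

print(f"Open ports on {TARGET}: {open_ports}")
```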
Artificial Intelligence Milestones
In March 2023 the tech firm OpenAI launched GPT-4, the model behind its chatbot ChatGPT and the most powerful AI then commercially available.
The launch has already sparked massive controversy, as more than a thousand AI experts put their names to an open letter published by the Future of Life Institute, a research organisation dedicated to protecting the world against existential risk.
The letter warns that, without such a pause, the out-of-control race to develop and deploy ever more powerful digital minds, which no one, not even their creators, can understand, predict or reliably control, could lead to catastrophic outcomes.
These powerful AI systems should only be developed once we are confident that their effects will be positive and the risks are manageable, the central risk here commonly being known as “the alignment problem”. The alignment problem rests on the idea that once you create a system intelligent enough to improve itself, it can quickly go from merely human-level intelligence to god-like intelligence. This creates a “flywheel effect”, whereby changes and improvements happen at an unquantifiable rate and leave the AI no longer behaving as its designers expected.
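A toy numerical sketch of that flywheel (the figures are arbitrary and model nothing real) shows why it is alarming: if each round of self-improvement also speeds up the next round, progress looks gentle for a while and then runs away.

```python
# A toy illustration of the "flywheel effect": a system whose rate of
# improvement grows with its current capability. The numbers are
# arbitrary; the point is the shape of the curve, not the values.
capability = 1.0          # start at "human level" (arbitrary units)
improvement_rate = 0.01   # initial gain per cycle

for cycle in range(1, 11):
    # Each cycle, the system improves itself...
    capability *= 1 + improvement_rate
    # ...and a more capable system improves itself faster next time.
    improvement_rate *= 2
    print(f"cycle {cycle:2d}: capability = {capability:10.2f}")

# Growth is slow at first, then explosive: by cycle 10 the gain from a
# single cycle dwarfs everything that came before it.
```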
Steve Wozniak and Elon Musk are amongst the signatories demanding a voluntary hiatus in the development of giant AI systems, which they define as any AI more complex than GPT-4 (the high-water mark of AI creation at the time of writing). Moreover, if the pause is not entered into voluntarily, they argue, governments should enforce it.
There is no easy way of persuading every government in the world, and a huge number of extremely powerful corporations, to voluntarily limit their own power; restraint is not a virtue at which humanity excels. There are exceptions, however, the most obvious being the Nuclear Non-Proliferation Treaty of 1968, which has dramatically limited the spread of nuclear technology. The flipside is that, until very recently, nuclear technology has been solely the purview of governments; it is not something you can embark upon in your living room. That is not true of AI, where fundamentally all you need is computing power and some graphics cards.
Misinformation
Image-generating systems such as Midjourney are extremely good at producing plausible fake imagery, ushering us into a new world where seeing is not necessarily believing.
That is as far as imagery is concerned; for text, we now have a world in which a machine can generate millions of pieces of plausibly human-written, unique text in seconds. Spam and misinformation will evolve to a new level of sophistication, becoming ever harder to tell apart from genuine, human-created material.
The Future
If, like me, you were born in the early eighties, there have been some significant, sweeping changes in our lifetime, such as the World Wide Web and the iPhone. I was born into a world where these creations were science fiction, and yet today they are in my pocket.
Automation is already beginning to change the landscape of physical managed guarding. We are seeing huge successes in roving drone patrols, asset tracking, dynamic dashboards and visitor management systems, negating the need for large teams of operatives. We will inevitably reach a saturation point where AI supersedes PI (physical intervention).
There will be some incredible, positive and groundbreaking moments in our future with AI, and getting there will undoubtedly make our world seem unusual and turbulent at times. Governance, however, is key to this radical, flywheel-like technological change: developments must be assessed meticulously so that we can accept and manage the risks that follow.
Gavin Gilbert
CIS Security, Contract Manager