* A version of this opinion piece was published by News24 on 22 April 2025.
Until recently, we thought that artificial general intelligence (AGI) – AI systems that match human intelligence – was at best a theoretical possibility. Certainly not something that would become a reality in our lifetime.
By all indications, that has changed.
Among developers and researchers at the world’s largest technology companies, there is now consensus that we will probably achieve AGI within the next five years.
But should we be afraid of this new paradigm of AI?
Levels of AGI
We should first clarify what AGI is.
Typically, a distinction is drawn between narrow AI and more general forms of AI, which include AGI. Narrow AI refers to systems trained to perform specific tasks; in executing those tasks, they simulate human intelligence.
In 2024, a group of researchers at Google DeepMind identified six principles that define AGI. The two principles they use to define levels of AGI are relevant here. Performance, the first principle, relates to the depth of an AI system’s capabilities compared to those of humans. Generality, the second, refers to the range of tasks the AI system can perform at a target performance level.
Using these principles, they define five levels of AGI, which help make the concept of human-level AI more tangible. Level 0 represents the absence of AI. Level 1 corresponds to an AI system that is equal to or slightly better than an unskilled human. Level 2 describes an AI that matches or exceeds the capabilities of at least the 50th percentile of skilled adults. Level 3 indicates performance at or above the 90th percentile of skilled adults. Level 4 denotes an AI system that performs at or beyond the 99th percentile of skilled adults. Finally, Level 5 refers to a system that outperforms all humans, often called artificial superintelligence, a term that is increasingly used.
Interestingly, for narrow domains (where tasks are clearly defined), the researchers believe we may already be seeing signs of Level 4 AGI. But for general applications, which include ‘metacognitive’ tasks such as learning new skills, we are still at Level 1.
Meaning of AGI
It is tempting to compare AGI to the creature created by Victor Frankenstein in Mary Shelley’s famous gothic novel. For those unfamiliar with the story, Frankenstein assembles body parts to create a living being. This creature is rejected by both its creator and society, and ultimately ruins Frankenstein’s life by killing his loved ones. Frankenstein himself dies exhausted and in despair.
Although Frankenstein is the name of the creator, not the creature, the term is often associated with a monstrous human invention capable of destroying its maker. Some argue this is an apt analogy for AGI: humanity creating an entity that could ultimately threaten our own survival.
Yet, despite real and extremely challenging risks associated with frontier AI systems, this is not the best way to think about AGI. As we mark World Creativity and Innovation Day, we should recognise that AGI is also a reflection of human innovation. And for millennia humans have created technologies with the potential both to transform and to destroy.
Take fire: more than a million years ago, we harnessed this dangerous proto-technology. It could kill and destroy, yet it revolutionised how we prepared food, made tools and survived in harsh environments.
Since then, we have repeatedly created useful technologies that exceed our capabilities while also carrying risks. Consider wheeled vehicles, invented around 5,000 years ago. They transformed food production and transport but also introduced the risk of injury or death from fast movement.
In our own time, examples abound. The most dramatic is nuclear technology. Nuclear power plants generate vast amounts of low-carbon energy. But nuclear weapons pose the risk of annihilating entire cities, or even humanity itself.
In some respects, AGI is simply the latest of these technologies humanity has developed, with the potential to transform society.
Unknowable era
Many people do not realise that the subtitle of Shelley’s Frankenstein is The Modern Prometheus. This refers to the Greek myth in which Prometheus steals fire from the gods and gives it to humanity. Fire enabled civilisation, but it also disrupted the human-divine relationship and brought serious risks.
There are clear parallels with AGI: these systems could ignite a wave of human creativity. AGI could revolutionise scientific discovery by largely autonomously generating hypotheses and designing experiments. In medicine, AGI could provide, among many other things, personalised treatment recommendations. And in education, AGI could adapt to learners’ individual learning styles, making customised curricula and real-time feedback possible at scale.
But there is a fundamental difference between the myth of Prometheus and the reality of AGI. Fire was received. AGI is created.
If predictions hold, we are on the verge of building a technology that matches our own intelligence. In other words, we are close to creating a system that simulates, or perhaps even emulates, the very thing we believe sets us apart from other life forms and objects on Earth.
From one angle, this is a testament to human ingenuity and innovation. From another, it marks the beginning of a fundamentally unknowable era.
Should we be afraid of AGI? I think it is too early to know.