The release of OpenAI’s ChatGPT has generated a flood of commentary, in the media and scientific circles, about the potential and risks of artificial intelligence (AI).
At its core, ChatGPT is a powerful version of the large language model known as GPT. GPT stands for generative pre-trained transformer: a type of machine learning model that extracts patterns from a vast body of training data (much of it scraped from the Internet) to generate new data composites (such as chunks of text) using the same patterns.
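To make that idea concrete, here is a deliberately simplified sketch in Python. It is not GPT's actual transformer architecture – just a toy bigram word model – but it illustrates the same basic move: 'pre-train' by counting patterns in a body of text, then 'generate' new text by sampling from those patterns.

```python
import random
from collections import defaultdict

# Toy illustration (not a transformer): a bigram model that "trains" on a
# corpus by recording which word follows which, then "generates" new text
# by sampling successors from those recorded patterns.

corpus = (
    "the model extracts patterns from text and the model generates "
    "new text from the same patterns"
).split()

# Training: for each word, record the words observed to follow it.
following = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word].append(next_word)

# Generation: start from a seed word and repeatedly sample a plausible
# successor, producing a recombination of the training data's patterns.
random.seed(0)
word = "the"
output = [word]
for _ in range(8):
    word = random.choice(following.get(word, corpus))
    output.append(word)

print(" ".join(output))
```

GPT does something analogous at vastly greater scale, predicting the next token with billions of learned parameters rather than a table of word counts – which is why its output can look fluent without involving anything like understanding.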
CEOs of AI companies, politicians and prominent AI researchers are now publicly sounding alarms about the potential of tools like GPT to pose an existential threat to humanity. Some claim that GPT may be the first ‘spark’ of artificial general intelligence, or AGI – an achievement predicted to entail the arrival of sentient, conscious machines whose supreme intellects will doom us to irrelevance.
But as many more sober AI experts have observed, there’s no scientific basis for the claim that large language models are, or ever will be, endowed with subjective experiences – the kind of ‘inner life’ that we speak of when we refer to conscious humans or other creatures for whom intelligence and sentience go hand in hand, such as dogs, elephants and octopuses.
Everything we know about sentience is incompatible with a large language model, which lacks any coupling with the real world beyond our text inputs. Sentience requires the ability to sense and maintain contact with the multidimensional, spatiotemporally rich, flowing world around you, through sensorimotor organs and an embodied nervous system that’s coupled with the physical environment. Without this coupling to reality, there’s nothing to feel, nothing to grasp, and no world within which to take shape as a stable subject.
Does that mean AI is nothing to worry about? Some AI leaders, like Prof Yann LeCun, draw that conclusion. Their view is that today’s AIs are unlikely to lead to AGI, so they pose no grave threat to humanity. Unfortunately, these techno-optimists are also wrong. The threat is very much there. We’ve just fundamentally misunderstood (or, in some cases, perhaps willfully misrepresented) its nature.
AI without the capacity to think is more dangerous than AI with it. What threat could AI possibly pose without a mind or agency? In asking this question, we forget that an impostor can be more dangerous than a competitor.
Human survival is not endangered by AI, at least not for reasons involving machine sentience. But extinction is not the only risk. Losing the humane capacities that make our mode of existence worth choosing and preserving is another.
Imagine that aliens landed tomorrow and offered us a choice:
Option A: They invade Earth and we take our chances resisting.
Option B: They leave the planet alone, but only after replacing us with doppelgangers that carry on all the usual human-like activities (eating, talking, working) with no capacity for independent thought or creative vision, no ability to break from the patterns of the past and no motives beyond the efficient replication of the existing order.
Is Option B the better choice? Or is it worse than the peril of extinction?
I’m not worried that today’s AIs will turn into these mindless doppelgangers. I’m worried that we will. We’re already willingly giving up the humane capacities that ChatGPT lacks.
Boosters of AI-powered writing apps are advertising, as a benefit, the chance to surrender the most important part of storytelling – envisioning where a story might go – to a bot that will simply present us with plausible preformed plot twists to choose from. People are lining up to thank the ‘innovators’ who show us how to train ChatGPT to write like we would, so that we may be liberated from the task of forming and articulating our thoughts.
The philosopher Hans Jonas warned us of the existential risk of a future ‘technopoly’ that celebrates the “quenching of future spontaneity in a world of behavioural automata,” putting “the whole human enterprise at its mercy.” He didn’t make clear whether these automata would be machines or people. I suspect the ambiguity was intended.
Companies are now replacing scriptwriters, artists, lawyers and teachers – people who have honed their talents over decades – with machines that produce output that’s ‘good enough’ to pass for the labours of our thinking. The replacement is worrying, but far more concerning is the increasingly common argument that thinking is work we should be happy to be rid of. As one Twitter user put it: what if the future is merely about humans asking the questions, and letting something else come up with the answers?
That future is an authoritarian’s paradise. Self-governance – not just the ability, but the desire and will to author our own stories – is the enemy of unaccountable power. Forcibly suppressing human agency is a lot harder than convincing people that the treasure they hold is worthless.
AI is causing real problems, right now, from algorithmic discrimination and disinformation to growing economic inequality and environmental costs. And these demand urgent solutions. But in terms of a future without humanity, AI isn’t a threat. In fact, a future with sentient machines who think with us could, in principle, be every bit as good – and humane – as a future without them.
The question is what kind of moral, intellectual and political value system the economic power behind today’s AI will be used to sustain. One where thinking matters? Or one where it doesn’t?
The talk about existential risk from AGI is a magician’s distraction from what’s going on right in front of us – not a mechanical uprising, but a silent campaign to devalue the political and cultural currency of humane thought.
That’s the endgame.
Our humanity is the stake.