It’s not artificial intelligence that has learned to think like us. Rather, we have stopped thinking like people, and most of the blame – I’m sorry to say it – belongs first of all to us philosophers, and then also to scientists and psychologists.
Let me explain. In case anyone has been locked up in an atomic bomb shelter for the last eighteen months: a series of new generative algorithms, trained on enormous quantities of human data, have developed the capability to produce texts, audio, and images. These algorithms give the impression of capturing the structure of human thought and rearranging it in new combinations. Probably the most famous is ChatGPT, which is capable of writing poetry, responding to questions on any topic, and composing texts and reports. It seems that ChatGPT is like us.
An impressive number of articles have been written about the possible benefits and risks of these technologies. There are open debates on intellectual property rights, the implications for education, and so on. Many of these technologies' capacities were, until recently, unimaginable. They will surely have a profound and irreversible impact.
But the important question is a different one: are we sure that thought is simply the manipulation of symbols and the production of output?
It is a fact that there are no evident differences between the texts produced by ChatGPT and those written by human beings. And this similarity is threatening. Scholars in various disciplines fear the day when artificial intelligence will be capable of producing content analogous to what they have learned to produce through many years of effort and toil. Is there no hope then? Have we become obsolete? Are we about to be surpassed by artificial intelligence in exactly that which we thought to be our most essential capacity: that is, to think?
The answer is hidden within the question itself.
The fact that we formulate such a question implies that we have reduced thought to calculations, operations, recombination. But is that the way things are? In fact, there are two ways of understanding thought: as the manipulation of symbols or as the manifestation of reality. The first way has manifested itself in a variety of apparently modern phenomena – from the Turing Machine to Wittgenstein's language games, from the linguistic turn to contemporary artificial intelligence.
This notion is supported by the idea that language is the home of thought and that language, in the last analysis, is nothing more than the recombination of symbols: a popular idea that is supported by information theory and genetics. “Everything is information,” the physicist John Archibald Wheeler wrote. Information is nothing other than a series of symbols, and thought is their combination.
All this is quite convincing (and is almost an operational version of Kant’s idealism), but it leaves out something fundamental: reality.
For some, reality is an uncomfortable, almost annoying term. From Kant to the neurosciences, we are used to repeating that we cannot know the world; we can only know our own representations, which we can never totally trust. Contemporary authors in science and pop philosophy – from Donald Hoffman to Slavoj Žižek – miss no opportunity to warn us against taking reality seriously. Thus, one step at a time, thought has been deprived of all significance. Words are seen more and more as symbols within a universe of symbols, and less and less as the manifestation of something real.
Similarly, social networks and the metaverse bring us into a digital world increasingly detached from reality. In this world, the only objective seems to be typing digital words that produce other words in a labyrinth of symbols and self-referential likes.
In this world of digital representations for their own sake, ChatGPT is like us. In fact, it is better than us. There is no contest. As in the famous story by Fredric Brown, AI is about to become the god of a reality made up only of symbols devoid of meaning.
But beyond this enthusiasm for thought reduced to the calculation of new combinations, there exists a second great intuition on the nature of thought. According to this intuition, we are not merely manipulators of symbols, but rather moments of existence. In this line of thought, each of us is one of reality’s opportunities to be true.
In this vision, the person is not a mere calculator, but a unity of existence. Today, it is an unpopular perspective, given our habituation to the jargon of informatics and technology. In this world, to borrow Gramsci's term, computer science is hegemonic. Here, thought is not a flux of concepts nor a sequence of operations, but the point at which reality manifests itself. Thought acquires significance if it is illuminated by reality. Thought cannot be reduced to an algorithm, but it is not, for this reason, any less true. The meaning of our words does not depend on the correctness of our grammar, but on the reality that they express through language.
These two perspectives are incompatible. They span art, science, and philosophy. The first is enclosed in discourse; the second pierces the level of dialogue in order to arrive (or seek to arrive) at reality. Between the two camps, there is mutual contempt. Going beyond the dialogical level is not easy.
If the world of information were a great city that keeps expanding, growing ever larger in extension, then the external world would be increasingly far off and unreachable. Many people would never leave the city, finding around them everything they desire and experiencing no need to seek the frontier. And so philosophers would become philosophers of language, mathematicians would become Platonists, and scientists would be confined to self-referential paradigms. Art would become increasingly mannerist, and thought would amount to a baroque exercise in style.
But don’t you see this all around you? The non-philosopher Manuel Agnelli expressed it well during his honorary degree ceremony at IULM (Milan): art is dead because it has become the child of a self-referential culture of numbers and consensus. Do we not realize our own hunger for value and meaning?
Philosophers and scientists find that they share what would seem to be merely a professional deformation: too much time spent among their own texts, too little in contact with the world. Their "sacred" texts take the place of the world in their existence, and their lives remain prisoners of a labyrinthine library in which, sooner or later, digital Minotaurs are born that devour them. In this myth, the combination of power and conscience – represented by King Minos and the inventor Daedalus, and embodied today by figures like Steve Jobs, Elon Musk, or Mark Zuckerberg – creates a labyrinth in which one becomes trapped and enclosed. ChatGPT is the digital Minotaur: it cannot escape the digital realm and must be fed with the flesh and blood of our existence – not with young Athenians offered every year, but with our data, provided through the Internet, social networks, and cell phones. Yet we can still hope for a Theseus who, with the help of Ariadne, will succeed in escaping by following a thread that embodies the connection to external reality.
That thread corresponds to openness toward reality as the essence of human thought, beyond the labyrinth of words, symbols, and information. It's a pity that many philosophers (like Daniel Dennett or David Chalmers) and many neuroscientists (such as Anil Seth and Vittorio Gallese) flirt with a vision of humanity reduced to a hollow construction without substance. But if we are nothing more than a mirage, then the game is easy for AI: ghosts among ghosts.
How did we arrive at this renunciation of our nature? Language sets in motion three concentric spheres: the sphere of grammar, the sphere of concepts, and the ontological sphere. In the first, what matters is the structure of symbols and how they are linked together. This is the domain where artificial intelligence (like ChatGPT) is lord and master today. Then comes the sphere of concepts, an ambiguous terrain—real for some, unreal for others; a sort of purgatory waiting to be eliminated. Finally, there is reality, where everything of value originates; what we seek in our lives but do not always find.
Today's AI (who knows about tomorrow's) stops at the grammar of language. But value is found in reality insofar as it is reality. AI does nothing but construct clouds of bits devoid of blood, color, and flavor: "a tale told by an idiot, full of sound and fury, signifying nothing." If AI were to write Hamlet, word for word, it would be nothing more than a combination of symbols. Dust, not a statue.
The question we should ask is not whether ChatGPT thinks like us, but rather, what it means to think. Do we really believe we are nothing more than digital illusions? Have we truly lost the thread of Ariadne that linked our words to the real world?
I rebel. I am real, and my reality goes beyond the cascade of green digital digits from The Matrix. We are real, and this reality is not within our symbols. We are not mere calculators.
Let us have patience if the majority today thinks otherwise, enchanted by the prospect of trading reality for a digital metaverse. Let us return to reality and abandon the symbols. Let us return to things and leave words behind. It is not true that words or information are more important than life and things. ChatGPT recognizes but does not see; it listens but does not hear; it manipulates symbols but does not think. To think, one must be real—but what is thought? Thought is the world.
Riccardo Manzotti
Translation from the original article in Italian at https://www.doppiozero.com/lia-pensa-e-noi