By: Joseph de Weck / The Guardian
Translation: Telegrafi.com

This summer, I was stuck in traffic on the scorching streets of Marseille. At an intersection, my friend in the passenger seat told me to turn right toward a place known for its fish soup. But the navigation app Waze was guiding us to continue straight. Tired, and in a Renault that felt like a sauna on wheels, I followed Waze's advice. Moments later, we were stranded at a construction site.

A trivial moment, perhaps. But one that sums up the defining question of our era, in which technology touches almost every aspect of life: whom do we trust more – other people and our instincts, or machines?

The German philosopher Immanuel Kant defined the Age of Enlightenment as “man’s emergence from self-inflicted immaturity.” Immaturity, he wrote, is “the inability to use reason without the guidance of another.” For centuries, this “other” who guided man’s thought and life was often the priest, the monarch, or the feudal lord—those who claimed to act as God’s voice on Earth. In trying to understand natural phenomena—why volcanoes erupt, why the seasons change—people turned to God for answers. In shaping the social world, from economics to love, religion served as our guide.

According to Kant, people have always had the ability to reason. They just haven’t always had the courage to use it. But with the American Revolution and later the French Revolution, a new era was dawning: reason would replace faith and the human mind, freed from authority, would become the engine of progress and a more moral world. “Sapere aude” or “Dare to use your mind” – Kant urged his contemporaries.

Two and a half centuries later, we might wonder if we are slipping back into immaturity. An app that tells us which path to take is one thing. But artificial intelligence [AI] risks becoming our new “other” – a silent authority that guides our thoughts and actions. We risk giving up our hard-earned courage to think – and this time not to gods or kings, but to code.

ChatGPT was launched just three years ago, and yet a global survey published in April found that 82 percent of respondents had used AI in the past six months. Whether it was deciding whether to end a relationship or whom to vote for, people were turning to machines for advice. According to OpenAI, 73 percent of user requests concern topics unrelated to work. Even more intriguing than our dependence on AI's judgment in everyday life is what happens when we let it speak for us. Writing is now among the most common uses of ChatGPT, right after practical requests like DIY or kitchen tips. The American author Joan Didion once said: "I write only to discover what I think." But when we stop writing, do we stop discovering?

What's troubling is that some evidence suggests the answer may be yes. A study from the Massachusetts Institute of Technology [MIT] used electroencephalography (EEG) to monitor the brain activity of essay writers who had access to AI, to search engines like Google, or to no help at all. Those who could rely on AI showed the lowest cognitive activity and had difficulty accurately quoting from their own work. More worryingly, after a few months, the participants who used AI became increasingly lazy, copying entire blocks of text into their essays.

The study is small and imperfect, but Kant would have recognized this pattern. “Laziness and fear,” he wrote, “are the reasons why such a large proportion of people ... remain immature throughout their lives, and why it is so easy for others to be set up as their guardians. It is so easy to be immature.”

Of course, the appeal of AI lies in the convenience it offers. It saves time, eliminates effort, and – most importantly – offers a new way to shift responsibility away from oneself. In his 1941 book, Escape from Freedom, the German psychoanalyst Erich Fromm argued that the rise of fascism could be explained in part by people's willingness to surrender their freedom in exchange for the security of submission. AI offers a new way to surrender the burden of thought and personal decision-making.

The greatest appeal of AI is that it can do things that our minds cannot – process oceans of data, and do so at unprecedented speed. Sitting in the car in Marseille, this was, after all, why I chose to trust the machine over my friend (a decision she took as an insult). With all its access to data, I thought, surely the app should know better.

The problem is that AI is a black box. It produces knowledge, but without necessarily deepening human understanding. We don't really know how it reaches its conclusions – even the programmers themselves admit this. And we can't verify its reasoning against clear, objective criteria. So when we follow AI's advice, we are not guided by reason. We are returning to the realm of faith. "When in doubt, trust the machine" could become our guiding principle in the future.

AI could be a powerful ally to humans in rational inquiry. It could help us invent medicines, or free us from "futile work" like tax paperwork – things that require little thought and offer even less pleasure. So much the better. But Kant and his contemporaries did not champion reason over faith just so that people could organize better or have more free time. Critical thinking was never just about efficiency – it was a practice of human freedom and emancipation.

Human thought is messy and fallible, but it forces us to debate, to doubt, to test ideas against one another—and to recognize the limits of our knowledge. It builds self-confidence, both individually and collectively. For Kant, the exercise of reason was never just about knowledge; it was about enabling people to become agents of their own lives and to resist domination. It was about building a moral community based on the shared principle of reason and debate, not blind faith.

For all the benefits that AI brings, the challenge is this: how can we harness its promise of superhuman intelligence without compromising human reason – the cornerstone of the Enlightenment and of liberal democracy itself? This may be one of the defining questions of the 21st century. And it's a question we shouldn't delegate to a machine.