RECLAIMING OUR HUMANITY IN THE AGE OF AI


By Silva Kantareva and John Bell

***

The Montréal Review, May 2025



Humans have always made tools. Their range and sophistication define us as a species; they are extensions of what we are. The spear and club extend our arms, the horse and buggy our legs, and the clock our sense of natural time. Every time we invent a new tool, our world changes and we change with it, and so it must be.

It has become a well-established mantra to posit that we are living through a moment of unprecedented change, a sentiment that holds allure in every era. Today, however, we are creating a new tool, AI, that will affect core aspects of being human in ways that no previous instrument has. AI holds transformative potential on a scale unlike anything humanity has seen before. Comparisons to industrialization or mechanization fall short, not because those weren't seismic shifts, but because AI doesn't just change how we work; it changes who we are and how we learn, relate, and think. It touches everything at once: labour, identity, governance, even our sense of reality itself. (One need look no further than instances of humans falling in love with ChatGPT.) This is not merely another technological shift; it is a fundamental change in what it means to be human in a world where intelligence of a certain kind is no longer uniquely ours.

To make matters worse, today's politics are more divided than ever, and technology is racing ahead faster than most people, or governments, can keep up. Quite simply, our political and economic systems do not match our technological advances, except in one key way: politicians have quickly grasped the power of social media. Rather than serving to deepen civic engagement or build trust, it has too often become a tool for outrage, polarization, and performance.

The gap between the speed of innovation and the slower rhythm of human systems is fuelling instability and a growing sense that matters are spiralling out of our control, despite the irony that we are creating that very spiral. It is therefore not surprising that the Vatican, and the newly elected Pope Leo XIV, have identified AI as one of the most critical issues facing humanity. We may not respond well to a world where machines might replace millions of jobs and learning itself could be outsourced. People will be culturally and personally lost at sea, anxious, and unsure what to believe in. Beyond the idolizing of technology lies the threat of diminishing the unique dignity of the individual endowed with moral reason and spiritual worth.

AI, of course, isn't all bad. It has the power to boost productivity, revolutionize drug discovery, accelerate research, and open up new opportunities across the board. At its best, and in theory, AI can liberate us from the drudgery of repetitive labour - a Marxist vision realized - freeing us to pursue what makes us most uniquely human: creativity, meaningful connection, and shared purpose.

But there is another side to this story, and we are already seeing the unintended consequences. In hospitals, young surgeons are getting fewer chances to train because AI-assisted procedures are taking over. These systems may be efficient, but they are also quietly cutting off pathways for learning. The same dynamic is unfolding in educational institutions, where students' growing reliance on AI threatens to erode the foundations of meaningful learning: struggle, inquiry, and, hopefully, independent thought. AI can help fill critical gaps in the educational system and serve as a valuable complement to existing learning frameworks. Ultimately, however, humans are wired to learn in relationship with other people. Human skills that take years to build - intuition, problem-solving, hands-on knowledge - are being sidelined before we have figured out how to preserve them.

Thinkers such as the late Henry Kissinger have forewarned of the dangers awaiting us. Beyond the prospect of a geopolitical AI race and unintended mistakes, or even the annihilation of the human race by a superior Artificial General Intelligence (an AI with abilities matching or exceeding those of a human being), AI threatens to eclipse human cognition and reshape our understanding of reality - or so it seems. In Kissinger's words, what is at stake is the Enlightenment's reason-based progress: AI's data-driven "thinking" reduces human experience to mathematical optimization, potentially eroding creativity, ethics, and governance.

Indeed, if we hand over too much, too soon, we risk losing more than jobs. We risk dulling our curiosity, our drive to understand, to make things, and to grow over a lifetime. AI threatens to cut into a core dynamic of life: all living things grow through effort and challenge within their environment; nothing is a smooth, straight line to health and fulfilment. Facing the unknown, drawing out answers, and gaining knowledge are core to becoming most fully human, and AI threatens to shortcut and handicap this imperative. However much we pretend otherwise, most of us will take that shortcut, losing a critical learning opportunity and a chance to become more fulfilled human beings.

We cannot turn back the clock, but we need to be clear-eyed about the spectre ahead and the preparation needed to address it. More often than not, current responses to AI culminate in the inevitable call for collaboration between technologists, politicians, and humanists to develop ethical frameworks before AI's exponential growth outpaces the human capacity to respond. Yet no concrete result has emerged so far, and AI use is galloping ahead. No government is willing to impose serious restraints on the rapid rise of AI, fearing it would undermine its competitiveness relative to others. Ironically, it may be the AI "laggards" whose creative and human potential remains most intact. The European Union has taken an initial step by approaching the issue from a risk-based perspective, though it falls short of a truly human-centric approach.

What we need right now isn't more tech for tech's sake. We need to put people - real, complex, flawed, emotional human beings - back at the centre. Politics, that is, societal decision-making, should be about helping us live meaningful and balanced lives, not just more efficient ones. That means ensuring that our socio-cultural and economic contexts make space for learning, care, and community, not just profit, speed, thrill, or dominance. Simply put, it is people first and tech second. Even if the odds of addressing what we have unleashed feel slim, we do not have the luxury of giving up.

The work of the scholar and scientist Iain McGilchrist offers a crucial framework for understanding this challenge. His research on the brain's two hemispheres reveals how modern society has become dangerously imbalanced toward left-brain thinking - narrow, grasping, explicit, mechanical, abstract, and decontextualized. AI represents the ultimate expression of this tendency. The right hemisphere's view, by contrast, is broad, open, metaphorical, fresh, unique in outlook and never fully certain, always full of potential. It helps us scan our context, to know 'where we are' in every sense of that phrase. In McGilchrist's account, healthy societies strike a fruitful balance between the two hemispheres; when matters drift towards an ever-increasing reliance on the left hemisphere - as today, with our obsession with technology, regulation, control, and fragmented stimulation - that is the harbinger of decay.

It is the balanced use of the hemispheres, guided by the wisdom generated by the right brain, that we need today. In that sense, we can be grateful to our left-brained lopsidedness, and to AI, for revealing a truth: we are enthralled by machines because we are not in touch with the wonder and capacity of being fully human. It is in the untapped potential of each person that the answers lie.

Indeed, there is hope in unexpected places. Radiologists at the Mayo Clinic see AI as a collaborator rather than a replacement, having concluded that effective medical practice crucially requires the "strange and useful imperfections" of human judgment, not just technology.

This recognition points us in the right direction. The challenge is not only to imagine a different future, but to build the scaffolding that makes it attainable—where AI enhances human capabilities without diminishing what makes our judgment irreplaceable. This shift demands more than technical solutions; it calls for governance models with human flourishing at their core, and a cultural renaissance that revalues right-hemisphere capacities like creativity and empathy. Without these deliberate mechanisms, even the noblest philosophies risk remaining abstract ideals.

Call such paradigms capsules of wisdom, if you like. Even if no one can predict the exact skills future generations will need, these are the foundations they will rely on—not just to survive, but to make sense of life in a world increasingly shaped by algorithms that may defy comprehension and subtly rewire our brains. In the end, no matter how efficient any nation becomes or how powerful our tools grow, it is the societies that preserve these foundations—curiosity, connection, care—that will endure, adapt, and renew themselves best.

***

Silva Kantareva is a policy advisor and John Bell is the Director of The Conciliators Guild, an organisation dedicated to highlighting the importance of human needs in political decision-making.

***

 





 

 

The Montréal Review © All rights reserved. ISSN 1920-2911