
***

IN AI WE TRUST

***

By Helga Nowotny

***

The Montréal Review, October 2021

***

IN AI WE TRUST: POWER, ILLUSION AND CONTROL OF PREDICTIVE ALGORITHMS
By Helga Nowotny (Polity Press, 2021)

***

The wish to know the future is as old as humanity. In cultures around the globe our ancestors consulted signs from the heavens and from nature, believing they could interpret what these revealed about their destinies or about which decision to take. In ancient China, for example, oracle bones made from the shoulder blades of sheep or cattle, or from turtle shells, were held over fire, and specially trained diviners "read" from the cracks in the bones what the future held in store for the supplicant. It is now thought that these divinatory practices may have been the origin of Chinese characters and thus the basis of writing.

In the 21st century our high-tech digital civilization seems far removed from consulting the entrails of animals, the patterned flights of birds or the sanctuary of the god Apollo in the ancient Greek city of Delphi, where the Pythia delivered oracles in an altered state of consciousness, probably induced by fumes emerging from the rocks in the cave. And yet the ability to see further ahead and to predict the future has not lost any of its appeal. It seems more urgent than ever at a crucial moment in history, when humanity faces the unprecedented challenges of climate change and further losses of biodiversity. We now draw on numerous foresight exercises and on graphs and figures scrupulously extrapolated and assessed by scientists from around the world, like the recent IPCC report, to be better prepared against all kinds of natural and human-made disasters. The writing is on the wall, be it about floods or droughts, hurricanes, supply-chain disruptions, food insecurity or, as we have been warned, the next pandemic.

But it is not only the urgency to be better prepared that brings the future closer into the present. Humanity has long since entered the digital age. Digital devices in the form of smartphones, laptops and the latest gadgets and apps have become our daily companions, tracking our movements and moods, whom we are in contact with and how, and which information and goods we consume. The data thus collected, and the devices that collect them, are part of a vast, mostly invisible digital infrastructure largely owned by the big digital corporations. These corporations continue to expand their profit-making business models beyond individual behaviour, which tells us what to buy next, into the workplace, where digitalization will create and destroy jobs. Decision-making based on algorithms is already spilling over into institutions such as the health and justice systems and the military, where algorithm-guided drone strikes and autonomous weapons are eagerly adopted. We witness how the public sphere and the political system, also in liberal democracies, are invaded by ever more refined messages sent to micro-targeted groups, while public discourse is perverted by the circulation of fake news and conspiracy theories. Deep-fakes, digital alterations of the faces, body language and speech of historical figures that make them say and do things they never said or did, are merely the latest creatures crawling out of Pandora's digital box.

Pythia, 2015 (Acrylic on wood panel) by Dean Monogenis

At the heart of these and other technological feats and developments lies the power of digital algorithms. An algorithm is nothing mysterious. In its simplest form it is a set of rules to be followed in calculations and other problem-solving operations. As machine-readable instructions, however, trained on enormous amounts of data and learning to train themselves while following ever more sophisticated rules, machine learning and deep learning algorithms based on neural networks have attained awesome predictive power. For the general public, the most stunning achievements are the defeats of the best human players of chess and Go by the digital machine and its invisible algorithms. Likewise, in some fields of medical diagnostics predictive algorithms already outperform medical experts in pattern recognition, and for the scientifically difficult problem of how proteins fold, predictive algorithms have provided a solution in a strikingly short time.

However, these impressive achievements all lie in well-defined, rule-based domains. When we move into the realm of social behaviour, a significant change occurs in our perception of what predictive algorithms can do. Many people begin to attribute agency to algorithms. With this comes the belief in their power to do things that far exceed human capabilities, as well as the belief that what algorithms predict will actually come true. Because they rest on mathematical calculations enshrouded in a whiff of alleged greater scientific "objectivity", and because most algorithms operate as black boxes where even experts do not know what actually goes on inside, algorithms are attributed a kind of superior epistemic status. We then get the feeling that they know us better than we know ourselves. We begin to believe that what a predictive algorithm says about our risk of developing a certain disease will actually happen, forgetting completely that everything a predictive algorithm comes up with is couched in probabilities.

This is why we fall for the illusions we have created about the power of predictive algorithms. By believing their predictions, we begin to change our behaviour accordingly and to adapt in anticipation of what we expect to happen. This is the risk of self-fulfilling prophecies, which confirm a prior expectation and bring about social situations, turning a mere possibility into reality. Yet even more is at stake: our outlook on what the future is can change dramatically. For the largest part of human history people believed that the future was predetermined, by God or the gods, by destiny or chance. Only a few centuries ago, when the amazing achievements of modern science and technology became widely visible, when their benefits began to percolate through society, and under the influence of ideas from the Enlightenment, did people begin to realize that their future was not necessarily static. They were no longer bound to a future that was a mere repetition of the past. Rather, the horizon of the future was open, and the aspiration of having a future became, as Arjun Appadurai called it, a cultural fact.

In my book In AI We Trust: Power, Illusion and Control of Predictive Algorithms I point to a paradox that lies at the heart of our trust in AI: we leverage AI to increase our control over the future and over uncertainty, while at the same time the performativity of AI, its power to make us act in the ways it predicts, reduces our agency over the future. This happens when we forget that we humans have created the digital technologies to which we attribute agency. As we try to adjust to a world in which algorithms, robots and avatars play an ever-increasing role, we need to better understand the limitations of AI and how its predictions affect our agency, while at the same time having the courage to embrace the uncertainty of the future.

***

Helga Nowotny is former President of the European Research Council. She is Professor Emerita of Science and Technology Studies at ETH Zurich.

***

Copyright © The Montreal Review. All rights reserved. ISSN 1920-2911