



By Bruce Fleming


The Montréal Review, May 2017


A Man Who Knew When To Let The World Pass By (oil on panel, 2016) by Neil Macpherson (Portal Painters Gallery, London)


Protagoras was right. Man is still the measure of all things. Yet nowadays we’ve forgotten this. The happy illusion of our age is that we have transcended the mere human with information, which can be stored in huge quantities (so huge we can’t even imagine them) in machines that, unlike old-fashioned storage bins like museums and libraries, don’t take up room and don’t require constant upkeep and maintenance. Information is cumulative: we get more and more of it to the point where humans are dwarfed by its magnitude, and that is good. It never topples over, it never collapses under its own weight, it never gets moth-eaten or decays. And it’s all available to us, recallable to our fingertips by super-fast search engines.

This is the form of our contemporary optimism, akin to the happy optimism of the imperial nineteenth century, when faith in the ability of science to solve all problems merged with faith in the civilizing mission of western intervention in the “primitive” world. More information is more knowledge, which is greater power.

But this isn’t true. It’s still knowledge stored and used by humans, which means that no matter how much raw data there is out there, all of it has to pass through the same old bottleneck of our human interest: our databases can be brimming, but if we don’t access them, the result is the same as if our groaning library shelves did nothing but gather dust. The data themselves can’t make us use them; it is still the human interest factor that controls all.

Humans are still the same limited creatures we’ve always been: we don’t live forever, our productive years are limited, and we have to want to see the data. What if the data are stored and nobody accesses them? This isn’t Bishop Berkeley’s insistence that things actually cease to exist when we aren’t looking at them; rather, it is the fact that even if they don’t, if we’re not interested in them they might just as well not be there.

Of course, it’s still possible we can make Frankenstein’s monsters through Artificial Intelligence: robots we create that wrest control from our hands and come to control us, just as indeed every baby born can do. And it seems odd that we are so interested in whether we can create machines that are human when we can already create, or at least produce, humans who are human and who can cause us the same amount of harm (or good). The fascination with AI seems to be based on the questionable assumption that all machines would band together against their human creators, a sort of Us vs. Them. Do we want to make the Them? But given that we are capable as a race of producing Hitler and Pol Pot, why is this so pressing a question?

We humans are still the measure of all things: we have to be alive and functional to access information, and we have to want to do so. All is controlled by our births and deaths, our whims and interests, our limited attention spans that flit from subject to subject like bees among flowers. Information is controlled by the most basic fact that we cease to use it when we die, or when we grow bored, or doze off, or even just go out to dinner.

Plus we don’t consider all data at once. Whatever we are interested in fills our mental screen, so that we cease by definition to be aware of all the things we aren’t interested in: all we see are the individual entities we’re concerned with, not those we aren’t. And this leads us to over-estimate the light at the expense of the dark. We think of ourselves as constantly flickering with the electricity of considered and accessed data, without considering the vastly greater reams of data that sit in silence and darkness.

What we need to get an accurate view of our state is the corrective “yes but” of the Romanticism that ran simultaneously underneath the nineteenth century’s optimism and warned of the isolation of the individual, and of the negative effects of the increasing complexity of the social web on that increasingly unhappy unit, the single human being, caught in it. From Coleridge’s “Dejection: An Ode” and the suicide of young Werther to Freud’s “Civilization and Its Discontents,” from Rousseau’s yearning for the pre-civilized state of “natural man” to Arthur Schnitzler’s tragedies of Viennese manners, Romanticism warned us that more things of greater complexity were not making us happier but unhappier, and that the paths of glory led but to the grave.

In the twenty-first century, our optimism is linked to computers, probably because they are relatively new. Mark Zuckerberg and now Steve Jobs have their biopics, all the Mission: Impossible movies involve computer miracles—especially the most recent one, “Rogue Nation”; espionage is usually an attack on “cyber-security,” identity theft is the crime of the moment, and the 2015 hack by China of the US database of federal workers has exposed the feet of clay of the most powerful nation on Earth—not to mention apparent Russian interference in the 2016 US election and its internet traffic. It turns out others are changing our Internet DNA. But does it dent our optimism that we exist because we can put everything online? Not a bit.

To get a more accurate view of our state, we need a dose of Romantic Agony, to use the title of Mario Praz’s book about the sado-masochistic world of torture and predatory ghouls that was central to Romanticism—a reminder that our miraculous labyrinth of information, to which we believe we hold Ariadne’s thread in the search engine, is nothing without us. We believe that we have transcended the sorrows of the individual. But we haven’t. It’s simply not true that man is no longer the measure of all things: she continues to be so. More information doesn’t change the fact that it all has to funnel through a single person with a limited life, a short attention span, and a consciousness that can only choose one thing at a time from the melee.

We still only eat one dinner, even if the fridge is bursting. A larger house doesn’t make us grow. Indeed it may even have negative effects on what we can do. The art historian Kenneth Clark wondered in his BBC series “Civilisation,” standing in the palace of Versailles, “if a single thought that has helped forward the human spirit has ever been conceived or written down in an enormous room.” Bigger isn’t more; at some point it reverses itself and becomes a bind. Ultimately we want a world that’s more our size, the way cats seek out boxes to curl up in.

In fact, more things to choose from is frequently a bad thing rather than a good one.  After the fall of the Berlin Wall, the East Germans spoke poetically of “die Qual der Wahl”—the pain of choice. In the GDR, there were limited versions of things; in the West the choice was overwhelming. And what about the “we’ll do it some day” attitude of New Yorkers toward must-see tourist sites like the Statue of Liberty? How many locals have actually looked at the paintings in the Metropolitan Museum of Art that out-of-towners travel hours and even days to see? The comforting thought that they’re there being taken care of removes the urgency from the project of going to see them, just as does the feeling that everything is in the computer and we don’t actually have to go get it.

Our contemporary optimism is based on having forgotten this basic law of the local: what you have doesn’t matter as much to you as what you don’t. If Google knows everything, we don’t have to know anything, just as calculators took away many people’s ability to do long division. If it’s there we’ll go someday, which usually means never. It’s almost always outsiders who find fabulous the things locals take for granted.

With brick-and-mortar storage facilities, like museums, we have actual places that can serve as destinations for tourists, regardless of how much or little they actually take in. New York’s Museum of Modern Art is full of Europeans just in from JFK, and everybody swings by the Metropolitan Museum. At least we can see things. Nobody goes to libraries as destinations. And we don’t browse the shelves of the data universe: we wait for the guided tours—ten celebrities who haven’t aged well; ten most liveable towns; ten foods to avoid. Much attention has been paid, primarily in the works of Nicholas Carr, to the way one hyperlink can lead ad infinitum to another, but little to the fact that we are following someone else’s umbrella, like tourists after the guide in a strange city, or in the Louvre.

What makes the world of computer data so miraculous is that they don’t take up room: they don’t occupy several city blocks, don’t need the pigeon droppings cleaned off, the rooms guarded, or the floors swept. The world of more-is-more computer data seems different only because it’s miniature, at least compared to the world of humans. It’s there but doesn’t take up room—the ubiquity of personal computers notwithstanding. So it seems we can have our cake and eat it too: more paintings but not more buildings. The medieval conundrum of how many angels could dance on the head of a pin has come back as a real question: if they lack extension, as angels ought to, the answer is an infinity. An infinity of data can dance on the head of a pin, even if it requires something friendly to fingers and eyes to see them: the microscope was the way to access Leeuwenhoek’s world of wee beasties as well. All we have to do is find the right lens. So computers seem to have given us die Wahl without the Qual.

What we’ve forgotten is that we humans still have to bring to all this choice the desire and motivation to choose: our storehouses, whether large museums with guards and swept floors, or the Internet that opens up underneath the microscope of our personal computer, are our storehouses. They have no purpose by themselves. Their proof is in their use, not in their mere existence.

More pictures make a bigger museum, as more data make a bigger database to search. If we have one particular painter in mind, we have a greater chance of finding her in a larger museum. And the same seems to justify piling up more data. So it would seem there’s virtue in numbers. Yet this already presupposes the human factor: our interest, which in all but a tiny minority of cases simply isn’t there. And our interest is limited.

Just as we can only look at one picture at a time, and usually because it’s hanging next to the one we just looked at, and get tired after a few hours, more pictures aren’t necessarily more pictures concentrated on. With machines that can store more and more and more, we multiply the number of things to choose from, but not necessarily the number or quality of things chosen. We still have to choose. If the little multi-faceted polyhedron in our childhood Magic 8 Ball has more facets, still only one can float to the top at a time. A library with a thousand volumes has more choices than one with a hundred, but we can still only read one book at a time, and there’s no guarantee that we’ll concentrate as much on that one in a thousand. If we bring the human interest, more to choose from is more. But more data can’t wag the dog by creating interest. So everything there is still passes through the bottleneck of the human, the measure of all things.

This isn’t pessimistic, it’s a fact. Humans create things to serve humans, which seem to (but don’t) transcend humans. Our brains don’t hold all that’s in databases—they are in this sense better than we are. But we still have to decide to access them. They can’t access us. These are the limits of the greater-than-human.

A scholar, the person most focused on databases, has perhaps forty productive years. During this time, she does many things other than focus on (say) the development of one Sienese painter: she eats, sleeps, goes to the bathroom, and perhaps drives children to school. During this time she publishes articles and possibly books that others may or may not read and which eventually go out of print. Then she retires, and ultimately, she dies. And there’s no way to ensure that anybody else is ever interested in her painter, or reads her articles.

Death and the subjectivity of interest are the final proof that machines serve us, not the reverse. X is the world’s greatest expert on Y, and then she’s dead. And what if she takes to drink at age 50? Or decides to devote herself to something else, like peonies? All that lovely knowledge sitting around unused—like, in fact, most of the lovely knowledge in the world. Indeed we may say that unless somebody is knowing it, it isn’t knowledge at all: a bell’s not a bell ’til you ring it, as the song by Oscar Hammerstein II for the stage version of “The Sound of Music” notes.

Philosophers have confronted this conflict of subjective and objective before. Berkeley fused the two, announcing that the tree didn’t fall in the forest if nobody heard it, but then, like Descartes, had to appeal to God as the ultimate computer—it’s all there somewhere; it’s just that we’re limited. So too Kant’s division of what we as limited creatures perceive, and the unperceivable realm of the noumenal. Computers are our noumenal realm, and just as inaccessible as Kant’s: indeed most intellectual historians summarize the Romantic despair of the stranded individual as the result of realizing that we humans can never reach the great computer that underlies it all. In the early 21st century, we have yet to make that realization; it’s time we did.

What about Aristotle’s more hopeful formulation of potential? It isn’t X now, but it is potentially X: all that data is potentially readable. Most of the time we do adopt such a hopeful formulation, rather than the black/white of Berkeley or the intellectually convoluted version of Kant. We don’t think the objects in museums disappear at night; indeed recent movies suggest they come to more vibrant life. If nobody looks at the paintings in Ottawa’s National Gallery of Canada at night, we’d probably say they still exist. Any given work is still a canvas if we don’t see it—but is it, in that slumbering state, art? Or how should we ask this question?

We can transcend the unalterable givens of life, death, interest and perception for a short period of time, but ultimately they bring us back to Earth. More data don’t make a different world. Consider more physical beauty—a postulated world where everybody is gorgeous. This is the world of fashion models as it exists on the pages of magazines. But it’s the ordinary people who buy the magazines. Is more more? Hollywood movies—recently the stylish Guy Ritchie romp “The Man From U.N.C.L.E.” with its gorgeous protagonists in hyper-chic period clothes (the period is the early 1960s)—suggest it can be. But once again it’s Kenneth Clark who brings us back to Earth, questioning the artistic preference for “magnificent physical specimens” in paintings and sculptures as the doers of heroic deeds: he proposes that the insistence on having beautiful people as heroes “became a deadening influence on the European mind. It deadened our sense of truth, even our sense of moral responsibility.” So beauty remains beauty—but it only works if we realize it’s still the exception. We can’t actually make more of it, or if we do so, it is at our peril. So too, perhaps, our current fetishization of the stored factoids that computers allow us. More is more, but it’s still the normal that controls the world. We can only eat one dinner, and a proliferation of what seems objective data makes us more and more dependent on subjective search engines, what we call other people.

This may be clearest when we consider the phenomenon I call “data demotion.” The first bullock cart I saw in Bombay (as it still was then) enthralled me. By the tenth I had lost interest, and now I am not even annoyed when they stop traffic: they just are. The first scratch on my new car caused me anguish; I tried to cover it up. Now there are so many I don’t even see them. Tens of thousands of new books are published in North America every year, but only the same few are read by book clubs or talked about. We rely on others’ recommendations. Is there a masterpiece among the others? We wouldn’t know, because nobody reads them. If we aren’t looking for something specific in a museum, we burn out after a few rooms. More of X just means we move the unit of our interest to 100X or 10000X, or allow others to pick among the more.

I yam what I yam, says Popeye the Sailor Man. And so are we all.

This in turn suggests that we never make progress with anything, as everything has to be subsumed under our biological givens, which don’t change past a certain point: we can increase our life span, but does this mean we read more books? If so, do we know more? Or do we just read more books, as we eat more meals?

What about this one: do we make progress in art, say, or for that matter anything else? T.S. Eliot, in “Tradition and the Individual Talent,” musing on the astonishingly life-like, many-thousand-year-old cave paintings of the Magdalenian draughtsmen, suggested we don’t in art. And the artistic movements of more accessible centuries and millennia suggest a pendulum-like movement, going back and forth rather than forward. For all artistic movements seem inevitably, after a decent interval during which people espouse them and then get tired of them, to call out their opposites: the highly polished academic art of the 19th century begat the “primitivism” of the Modernists, such as Picasso’s undeniably ugly (but this is the point) mask-like faces in the seminal “Les Demoiselles d’Avignon,” set in a bordello. Now it’s rare for artists to show a mastery of drawing, the bare minimum of competence in the 19th century. Tourists to museums of contemporary art object, of course: “my three-year-old could do that” is the most usual objection. And as far as technique goes, they’re right. Artists have no response to this because the technique of contemporary art is conceptual technique: calling something art that hasn’t been called art before. So they respond by ignoring the objection. Philistines! They just don’t get it. But this suggests that soon we’ll be back to demanding technical competence. Where else is there to go?

What this means is that we have to be ready for particular objects of art or thought. And that means they have to link to what came before in some way, and we have to be tired of the earlier. Things don’t so much have value on their own as value as alternatives to what we already know.

Buildings with wavy cloth-like roofs of shiny metal? This is only possible after the cool glass and steel of Modernism, as the “primitivism” of the Modernists that became the “I don’t have to be able to draw” foundation of contemporary art only makes sense as a reaction to the hyper-refined surfaces of 19th century academic art, or the way Mies only made sense after the ornamentation of Beaux-Arts buildings.

It’s not just art that is built and then unbuilt, a house of cards going up and then being knocked down, only to be set up anew.  All thought is constructed by reacting to what went before, frequently to annihilate it. Historically, of course, it just adds to the pattern of the dominoes  and so is the joy of professors, who need something to teach. But for the thinkers involved, it grows and then is destroyed, grows and is destroyed anew.  What is new presupposes what it reacts against, and so presupposes and underlines the importance of that earlier thing. The twentieth century’s philosophical rebellion against Cartesian dualism presupposes Descartes; Heidegger’s rejection of Socrates presupposes Plato and Xenophon. Modernism presupposes Romanticism, and post-Modernism presupposes Modernism. Shorter hemlines are the reaction to the longer ones of last year or the previous decade, and watches have gotten bigger only because they used to be smaller. In rejecting the past we merely add on to the timeline, and make it certain that we ourselves will be rejected in our turn.

But computers are different, it seems. They add more and more and more. Or do they? It seems a bit cheap to point out that computers crash and can be hacked: see the 2015 Chinese hack of the data of countless US government workers, or the August 2015 shutdown of the Baltimore-Washington airports due to a software problem in the airplane routing system. When they’re down they’re down. Computers: we can’t live without them, it seems, but increasingly, it’s hard to live with them. We know their weaknesses but we use them anyway, because so far at least their weak moments aren’t frequent enough to conclude they aren’t worth it. When they work they work: we can put a term into a search engine and get the result in what for humans is no time at all. And they have searched billions of entries! Surely this is progress?

Sophisticates will tell us that this isn’t objective information, because search engines privilege the information they’re paid to privilege, and base their responses on what we’ve searched before. Plus their ability to compensate for different ways of saying the same thing is limited: put the adjective before the noun and you get one set of results; put it after, another; use a synonymous adjective and it’s something else entirely. Plus you have to do it their way: despite autocorrect, which leads us down strange pathways, you have to put in what they are programmed to respond to, not what you want to.

These are valid objections to computers, but they are irrelevant given the basic reason we love them, their more fundamental allure: namely, that they give us the most recent illusion of access to permanence, a way to outlive ourselves. We know all too well that we are transitory, fragile creatures; computers allow us to believe that though we are as the chaff in the wind, the kernels are stored. They seem the ultimate storage machines, a way of ensuring our immortality. We are painfully aware of the limitations of our memory and ability to process, but no worries! We don’t need to live forever, because the databases will. We feel like squirrels storing up an incalculably large hoard of acorns. The problem with acorns is that they require space, as books in libraries did before computers—and they rot. And because we don’t actually see the data pile, it seems infinite to us. Computers have all the advantages of museums with, it seems, none of the disadvantages. And their data take up no room at all.

Partly this belief that computers are the ultimate guarantor of our immortality is based on the fact that computers are so new. Like the man who jumps off the Empire State Building and cheerily reports as he falls past the 80th floor “so far so good!,” we haven’t lived long enough with computers to see whether they are a good storage medium or not. We should be more worried than we are, perhaps. Certainly we know files become inaccessible when we can no longer read their formats, and that if we don’t back up our computers we lose all our work, as we do if a lightning bolt fries things, and so on. Similarly, people thought video would last forever; it turns out it disintegrates. Even paper gets acidic and falls apart. And if you don’t have anything to read the microchip, it’s useless: the Mission: Impossible movies have played with this conceit—in one, the computer whiz (played by Ving Rhames) finds a single reader of an outmoded technology. Otherwise the message wouldn’t get through. That too is a miracle. It’s like falling back on a typewriter when the computer fails, or on pen and paper. (It’s been pointed out that more complex passwords mean people write them down in one place on paper; security for computers comes down to the same old “Purloined Letter” problem of how to hide the piece of paper.)

So computers may or may not ultimately be a good thing; we don’t know. What is certain is that mankind has always looked for ways to become more powerful than her fragile self, and to pass things on to the next generation. Books had the same function, and were the computers of their day. The Dewey Decimal system made them searchable and there they sat until searched and read. They weren’t permanent either; they just seemed so. The library of Alexandria was burned, and Dresden firebombed.  Beowulf survived in a single copy, and we have no idea how many other works simply did not.

However, we don’t even need spectacular conflagrations or near-misses to remind ourselves that our storage systems, while possessing obvious advantages with respect to flesh and blood, are themselves weak. Libraries don’t have to burn; people just have to not read the books. Or read them so quickly they have no idea what they’ve read. Or read them with a personal filter so that they end up saying something quite subjective. The last is the source of what we call creativity. Literary theory in the 1970s and ’80s liked to insist, under the influence of Harold Bloom, that all reading was in fact misreading; this is a claim that is more cute than true, as we can all make the distinction between someone who can correctly summarize what she read but draws different conclusions, and someone who says that Grandma ate the wolf in “Little Red Riding Hood” rather than the reverse. There are errors and there are knowledgeable re-directions. But in any case we have to read the books.

The same is true of computers: we can only eat one dinner. And we have to choose that dinner. We have to bring the interest to the precise terms we are searching for. We are still frail people who can simply fail ever to call up the bazillions of data bits that exist somewhere, or not be interested in that subject, or do a bad job of synthesizing it because we are tired or hung over or simply not that good at what we do. The fact that the data are there doesn’t mean people will ever use them, or use them well. Or even correctly. And maybe they aren’t there at all: the system has become corrupted, the way a machine can get rusty, or requires a part that’s no longer made and can’t be jury-rigged. We rest secure thinking they are there, but when we go to get them, we can’t.

But even if we can, we have to want to. And we all have limited hours in the day and even more limited hours of concentration.  And if we’re focused on one subject all the others slumber in darkness.

So no, contrary to what people repeat and his biopic suggests, Steve Jobs did not change our lives. Discovering a new continent doesn’t make people any different, it just changes the places they can go. Space travel doesn’t change people, as it has to take into account the givens of the human body and mind. And computers don’t have to crash or their files become unreadable for us to be aware of the fundamental fact that all of this has to pass through the eye of a needle, which is the individual human being.

We’re born, we achieve sexual maturity, we reproduce or not,  and we die. We can extend our range of vision, our memory, our geography—but these are all extensions to what is given. A trampoline can get us further off the ground than just jumping, but in both cases we come back to Earth.

I was reminded of this recently in an eye doctor’s office—a retinal specialist’s, to be exact. The gel behind my retina had contracted through the natural ageing process, releasing blood into my eye—the question for the specialist was, was it just blood, or had it torn my retina? (Just the former, not the latter.) But there I was in an unfamiliar doctor’s office, a factory with multiple patients at the same time, who were shown to various waiting rooms, processed by technicians, ushered into another waiting room, and then called to be ready for the expert to show up for her five minutes.

One of the patients was a 90-year-old man. I know his age because he was very verbal. And very loud. And he was very clear about his birth year and a lot of other things to the out-of-shape but much younger man who had brought him, and, I assume unwittingly, to everyone around him. I had no choice but to listen. He was loud in the way that older people who don’t know they’re hard of hearing are loud. I looked at him only out of the corner of my eyes, and so can say only that he seemed remarkably good-looking and fit for a 90-year-old. But he was hard of hearing, and his unwillingness (?) to get a hearing aid to bring him back in line with others seemed consistent with his personality as I could sketch it in the almost two hours in the waiting room, full of other people who came and went, yet which he dominated simply by the force of his voice.

I spent the first few minutes trying to figure out what the relationship was between this relatively spry old man and his middle-aged puffball of a keeper. Without the younger man, the old man would never have known when his name was called. The younger man told him patiently. The most logical relationship was father and son—but the younger man wasn’t a son, as the information I learned about the old man wasn’t the sort that a father would have to share with a son. And the younger man was in his 40s, not the 60s he would have been if he’d been a son.

The information the older man shared was far more basic than what one has to share with a child, indeed so basic I guessed that the younger man was not even a neighbor but somehow a random do-gooder, perhaps someone from the same church. My mother, also 90, had a neighbor from Albany, where she grew up, who after his divorce lived alone in a trailer in Fort Ticonderoga, New York, and had a much younger Marine—Ray, the neighbor, had been in the Marines in World War II—who took him to doctors’ appointments. Perhaps something like this? The old man was former military, that was clear. He announced loudly to the receptionist, and unintentionally to us all, that his insurance company was “Uncle Sam,” as someone on the military’s Tricare would do. And he seemed like someone out of the musical “South Pacific,” with the habit of qualifying almost everything with “goddam” (as in “all the goddam way,” “half the goddam time”—as the sailors sing, “now ain’t that too damn bad”) that we associate with the salty military of World War II, to whose “Greatest Generation” he clearly belonged.

It was precisely his apparent lack of close relationship with his companion that allowed him to share with us all, willy-nilly, so many aspects of his life. It was relentless. The man never shut up, as old people do not shut up when they have a chance to talk, at least those who are completely compos mentis but don’t have anyone to talk to. Just by studying my Spanish and waiting—and I waited and waited, then saw a technician, then went to another waiting room on the other side for people who had seen the technician, where I was soon joined by the old man and his keeper, and where I waited some more—I got more of the uninterrupted flow, and learned many things about him.

Here’s what I know about this old man: his wife, whose name he said but that I’ve forgotten, had gotten Alzheimer’s disease some years before, and was now dead. He had moved to Washington in 1945. (Here the keeper responded: his own father had moved to Connecticut Avenue in the Maryland suburbs in the 1960s.) He had grown up in a small town near X, which he named, but without the state, which his keeper apparently knew, so it was hard to place it geographically. However I do know that there was only an itinerant doctor who came around once a week, so patients had to wait. His father had a Model T. He went to X to do Y.

His stories were interesting in the sense that they were a window on a bygone era. Model Ts, itinerant doctors, what they had to eat—it was all there, a sort of time capsule of the time before the war. There were no war stories, which probably would have come next had I not, mercifully, been called in to see the doctor. But the stories I heard were—and this is the point—really well told, if I had been interested, as no one in the room was, neither the monosyllabic younger man nor anyone else. I found myself unable to concentrate on the imperfect of the Spanish subjunctive for admiring the narrative flow of this man’s font of reminiscences. They were punctuated with the wry chuckles a good storyteller uses, along with the “you know”s meant to involve his audience—as in “my father had the Model T to go to town, you know.” No, we didn’t know, but it’s a useful verbal insertion meant to draw us in. And it would have drawn us in, if he’d actually had an audience. Only he didn’t. That was the sad thing. His was a performance in a theater where nobody sat in the seats, only the janitor and people taking a shortcut to the restroom. And a bang-up performance at that.

Not only were his stories perfectly delivered, they had the endearing veneer of his time and position in style as well as substance—the forthright “I assume you want to know this” pushiness of a confident man used to dominating the room (that was the military officer; clearly he’d been an officer), and all the period epithets now so out of fashion. And he was still dominating the room.

Only there was no one in the room. At least not to hear him. He was chuckling and “you know”-ing away to a room that regarded him as an irritation.

So, it was clear, did his companion, albeit one who probably regarded him with pity—he sat silent, and once every four or five times, when a response from somebody actually involved might be expected, said a word or two to indicate he was still there. But he did nothing to encourage the stories: they came out in a stream, not a flood—a well-controlled, well-crafted stream, one that nobody had asked for and nobody was listening to.

There he was, my own personal Norma Desmond, playing (as the William Holden character says in “Sunset Boulevard”) to a world that had ceased to exist—or perhaps the Norma Desmond of all the other people in the two waiting rooms that day, not all of whom had to wait as long as I did. (For me it was a first appointment rather than a follow-up, which, the checkout lady later told me, always took longer; two hours, the time I spent there, wasn’t uncommon.)

This man knew a lot about his time. And for what we now call “oral historians,” interested in documenting the particular pre-World War II small-town Norman Rockwell world he described, he would have been a godsend. Clearly nobody he lived with—did he still live alone? It was possible, given his degree of spryness; or in an assisted-living facility where all the old people had similar caches of memories and no one to tell them to—listened to him, or he would not have unloaded so relentlessly on a man who was clearly doing him a favor by accompanying him to the doctor, and who was giving him so little encouragement to share his memories, despite the fact that they were well told and that the man had clearly been someone of note.

So my take-home point was this: nobody cared about what this man knew, and what he would so willingly have shared with the world. And in a countable number of years he’d take it all to the grave. Maybe, though not probably, an oral historian would get it down in some form—video? Transcript? But what if nobody was interested in that? It would just sit there, or perhaps (depending on the form) disintegrate, or recede into a software format nobody could access any more, or succumb to mildew. Or simply sit unaccessed.

It’s pathetic to point out that this old man is each of us—if we’re lucky enough to have been someone in our time, that is. He had the style, the content, the vocabulary—he had the gift of gab and the stories to gab about. But we all take it to the grave—our charisma, our style, our knowledge, our panache. Perhaps we offload it at some point—we hope, to people who care, but who at best will remember a story or two (“as my dad used to say…”) for perhaps one generation. But the offloading has value only for us, the offloaders, as it allows us to think it has not perished with us. Not only does someone have to care about our stories now—and we have to be in a position to tell them—but the next group of people have to care about them as well, and the next, and the next. And that just isn’t going to happen. For with each generation, even for the small handful of people who leave any residue at all, the residue diminishes or petrifies into a phrase or two, a story or two.

The Greeks were obsessed with posterity knowing their names. So, fine: we still read the Iliad, with all those names. What do we know about the (fictional) men? At most whose sons they were, and how they died. Is that really immortality?

We can’t offload knowledge. At least not in a way that escapes the fundamental fact that somebody has to care about it: all information, no matter how copious, bows to the constraints of human interest. And that is a limitation of computers more fundamental than the fact that they crash and their hardware grows old: we can fill them with all the data in the world, but somebody has to want to access them.

Databases are responses to our fear of annihilation. And we see the process ass-backwards: we run a search engine and come up with something specific, and it seems like magic, as we would never have found it otherwise. But aside from not knowing the bias of the search engine, and what it didn’t pull up, we’ve already brought the interest that the data merely match. Having a lot of data and a fast way to comb them is as useful now as it was when the Dewey Decimal System made it possible to look for a book about Mark Twain by going to that shelf rather than randomly looking at all the books. Similarly, we can ask the old man what he remembers about Model T Fords—it’s a form of primitive (or perhaps very effective) search engine, one we use all the time. If we ask someone for the time, we expect the time; if we type “Mark Twain early works” into Google, we expect to get Mark Twain’s early works and not the products of a specific Chinese province.

To all things we bring the interest, and the life, two things that usually aren’t there: of course we’re thrilled that we hooked something from the depths. What we forget, however, is that most data never get hooked, because nobody goes fishing for them. And we hook what we hook: the sensation of pleasure is the same whether the fish was one of three or one of three million so long as we don’t know what’s down there.


Bruce Fleming is the author of over a dozen books and many articles, listed at www.brucefleming.net. His degrees are from Haverford College, the University of Chicago, and Vanderbilt University. He taught for two years at the University of Freiburg, Germany, and for two years at the National University of Rwanda. Since 1987 he has been an English professor at the US Naval Academy, Annapolis.


Copyright © The Montreal Review. All rights reserved. ISSN 1920-2911
