The End of Human Life, Again

The Internet is making us stupid. Automation deprives us of the pleasures of accomplishment and hard labor. People no longer read. Civilization is ending. We’re becoming less and less human. Such are my paraphrases of the complaints recently expressed (by which I mean over the last five years, even though that may seem like a long time in our age of inattention) in online magazines, books (print and digital), and blogs about the increasing prominence of automated computational technology in our lives and our reliance on it.

Nicholas Carr adds to the lamentations with his new book The Glass Cage, on the history of automation and how it is making us less human. Jonathan Franzen expressed similar reservations in a 2013 interview with Joe Fassler about his obsession with the sardonic, 19th-century Austrian Karl Kraus, whose suspicion of mass-market publishing and journalism, and whose disdain for their deleterious effect on high culture, Franzen saw as particularly relevant to the transition in media technology already well underway. We must resist becoming "Ein Teufelswerk der Humanität," an infernal machine of humanity. The writer must defend the spirit of humanity against the mere mechanism that threatens to consume it. (Why culture must be so closely associated with print is another issue.)

Kraus railed against the media industry of late 19th-century Austria (i.e., its newspapers and journals) and was suspicious of the claim that science held the panacea for all human ills; Franzen sees this as mirroring his own distrust of the ongoing belief that data processing and the Internet will solve complex social problems. His main concern seems to be that human will plays a smaller and smaller role in this world: the Borg-like machine tells us human individuals what to think.

Already, in the ’90s, it seemed like machines were beginning to command us with their logic, rather than serving us.

[…]

What I find particularly troubling about our own technological moment is that I hear people saying again and again—happily and proudly and excitedly—that computers are changing our notion of what it means to be a human being. The implication of all those excited people is that we’re changing for the better. Whereas, when I look at social media, it seems like a world that once had adults in it is being changed into the 8th grade junior-high cafeteria. When I look at Facebook, I see a video-poker room in Vegas.

[…]

Kraus was very suspicious of the notion of progress, the idea that things are just getting better and better. In 1912, when he was writing the essays that are in my new book, people were very optimistic about what science was going to do for the world. Everyone was becoming enlightened in a straightforward scientific sense, politics was liberalizing, and the world was going to be a much, much better place—the story went. Well, two years later the most horrible war in the history of humankind broke out, and was followed by an even worse war 25 years after that. Kraus was right about something: He was right to distrust the people who were telling us that technology was going to serve humanity and make things better and better.

There are two threads here: computers are undermining what it means to be human, i.e., to be a freely choosing, self-directed individual; and science and technology can't solve all our problems. I can agree with both, except for the negative connotation of computers changing what it means to be human, for the human being Franzen most respects is the quintessentially liberal one who chooses, acts, and thinks independently, like an island apart from any society. Contemporary libertarians are the most extreme exponents of this thinking. I don't think it's an accurate or sustainable way to conceptualize our identity; it works mostly as a useful fiction for the legal system.

Unsurprisingly, for Franzen, (literary) fiction writers are the only bulwark against this flood of computational taskmasters.

When I first met Don DeLillo, he was making the case that if we ever stop having fiction writers it will mean we’ve given up on the concept of the individual person. We will only be a crowd. And so it seems to me that the writer’s responsibility nowadays is very basic: to continue to try to be a person, not merely a member of a crowd.

However, targeted marketing and the new platforms for self-publication that blogs, social media, and other networked communications make possible mean that the single-minded crowd of the 20th-century fascist states no longer exists. It has been replaced by something more protean and less tractable, yet still not the individual of old. Whereas the crowd of West's Day of the Locust, or of Franzen's fears, had no means of expressing itself except through mob action, violence, and almost animal cries, the Internet-enabled crowd has a myriad of voices—all publishing themselves and speaking simultaneously. There's too much to read. Most of it, alas, is not worth reading, in Franzen's estimation.

But why does the true individual have to be a reader of fiction? Why not a speaker? A debater? Doesn't the individual need an interlocutor or audience to affirm his or her individuality? For all the talk of humanity and individuality, Franzen never acknowledges how much he yokes those concepts to printed text and to the interior, silent monologue we readers carry on when engaging with those dead letters. Eric Havelock and Walter Ong would certainly agree that our sense of self is greatly enhanced, if not created, by the process of writing down our thoughts, of having an inner dialogue with ourselves. Friedrich Kittler specifically targets the 19th-century novel as the vehicle for archiving that sense of self and reaches the extreme conclusion that the self is therefore an epiphenomenon, a product of print technology, nothing more. That Self is indeed dead if the rapid-fire missives we regularly digest have eclipsed long, narrative pieces, fictional or otherwise.

Franzen mourns the passing of that print-culture Self without realizing that it replaced oral culture just as ruthlessly as digital communication is now eradicating it. Nicholas Carr has at least acknowledged that while each shift in media technology requires us to sacrifice some abilities, or ways of life, the benefits have outweighed the costs. He spent several chapters of The Shallows tracing a cursory history (via Ong and others) of the dashing entrance writing made into oral culture, comparing Socrates' and Plato's attitudes toward the new media technique: Socrates claimed it would induce forgetting and spread rumors and misunderstanding, while Plato saw the old oral culture as an obstacle to rational discourse (i.e., philosophy) and therefore to the pursuit of Truth and the Good. The introduction of the printing press similarly caused much consternation among the priestly elite, since they could no longer control access to, and therefore interpretations of, the Bible for the vernacular-speaking masses.

Carr's earlier book pointed out the losses the Internet might inflict on culture and asked whether the gains would be worth the price; it never gave a clear answer but strongly implied (by its very existence) that they were not. His latest work, The Glass Cage, presumably advances that argument by shifting the focus to the more interesting, in my opinion, question of automation—particularly the automation of thinking, judgment, and other human intellectual functions. He's still a bit late to the party: Norbert Wiener, one of the fathers of cybernetics, which fused mathematics, physics, and engineering to bring about the first truly autonomous apparatuses, worried in a very sorcerer's-apprentice way about his invention's (the automaton's) ramifications for manual laborers. Writing before the digital computer was a widespread reality, Wiener foresaw two problems: 1. machines could make decisions beyond the oversight of humans; and 2. automated workers might supplant factory laborers and thereby precipitate a mass unemployment crisis. He hadn't fully accounted for the outsourcing of such tasks to cheaper human labor on the other side of the globe.

Wiener's trepidations were at least informed by his experience designing and constructing such cybernetic apparatuses. He wanted to make clear to others that such devices, however much they might be able to perform human-like tasks, including thinking, would not be human and would therefore not share the same emotional, empathetic, or other fuzzy concerns that drive our ethical decisions (and even our supposedly logical ones). That, to him, was the danger of a machine making decisions. Such machines should not be outlawed as Samuel Butler's fictional society had done in his 1872 dystopian novel Erewhon, nor would they necessarily treat humans as mere material to be worked over, à la Martin Heidegger's anti-technology philosophy.

I have not yet had a chance to read Carr's book, but based on interviews he has given, I fear it will be less Wiener and more the anti-technology sentiment that has dominated humanistic critiques from Heidegger to Lewis Mumford, Jacques Ellul, Mark Poster, and many more. Speaking with Lauren Kirchner in The Baffler (admittedly, in our alienated, digital world, it is possible Carr and Kirchner did not talk in person so much as allow video-conferencing software to translate their images and voices into digital signals, or an email service to shoot their typed words through the ether), Carr combines fears of automation with worries about income inequality and unsatisfying (presumably corporate, à la The Office) work mediated by automated apparatuses (particularly software).

In the interview, Carr reveals that his critique is partly motivated by increasing income inequality, which he links to automation:

Instead, we’re seeing them concentrate through increased profits in the hands of a relatively small number of people. We’ve seen a kind of hollowing-out of middle-class work, and more and more polarization in income and in wealth.

[…]

I think we can expect that we’ll see more erosion in the number and quality of jobs, and so I think we’re right to be concerned over the long run about a continued erosion of the middle class.

Karel Čapek’s play R.U.R. presented a similarly bleak future in which robots (the word derives from the Czech robota, forced labor) perform all labor and policing, leaving the owners of the robot-manufacturing corporation fabulously wealthy and ensconced on a private island, while the majority of the population is unemployed and agitating for revolution. When did Čapek write his play? 1920. We’ve been waiting nearly 100 years for the automation-induced labor apocalypse, and while automation has eliminated some jobs, I don’t see the apocalypse actually happening. Indeed, if we look at manufacturing, outsourcing labor overseas has had a far more detrimental effect on American employment than automation. Just look at how many Chinese workers are still employed to manufacture Apple’s products; they have yet to be replaced by machines.

Even when automation does begin seriously imperiling manual and intellectual labor, can we really blame the technology? Should we be agitating for the right to work 40 hours a week in a cubicle? Wouldn’t a more progressive tax system, funding social welfare and other projects needing human skills, be a better alternative? Perhaps that’s naïve, but the hordes of the hungry and unemployed would probably convince the government to do something.

Even if automated technology allows us to work less and enjoy more leisure time, Carr would still not be satisfied, thanks to his nostalgia for the meaningful work that existed before automation. Consider Kirchner’s concern that Carr romanticizes physical labor when he spends a section of the book on a close reading of Robert Frost’s “Mowing”:

But to push back on that a little bit, I couldn’t help thinking about how so much physical work is really terrible, and painful, and, historically, unsafe. Now we’re at a time in American history where no one has to pick cotton by hand, because we have machines that do that now, or empty chamber pots, because we have plumbing, and in the future, even the physically-demanding job of being a runner in an Amazon warehouse will probably be replaced by robots, so…. 

Well, I’m certainly not in favor of inhumane working conditions. That part is less about technology’s influence on the number and mix of jobs (even though that’s an extremely important subject) than it is about the quality of jobs and the quality of life for people who become dependent on computers. Which is a different subject from, but somewhat related to, workers’ rights and so forth.

I couldn’t help but think of Heidegger’s paean to Van Gogh’s A Pair of Shoes (1885), which he naturally assumes are a peasant’s, in “The Origin of the Work of Art” [“Der Ursprung des Kunstwerkes”] (written between 1935 and 1937). Heidegger has a painting and Carr has a poem, but they are both Kunstwerke, or artworks, and in the former’s intricate and sophisticated analysis, the artwork is not just an expression of culture and human society but an integral part of forming that culture and society. One cannot have meaning without the other; indeed, the artwork draws upon and adds to the rich, unspoken, unarticulated background that makes life meaningful. For Heidegger, technology was an inauthentic way of revealing that being or meaning, and so it appears to be for Carr, though he uses new, American jargon like “quality of jobs […] and life.”

Kirchner’s pushback is well taken: what exactly was so good about these physical labors? Such labor precluded people from enjoying the leisure time and security necessary even to have poetic thoughts, which were reserved mostly for the wealthy elite or those subsidized by them, like artists and university professors. Were it not for the technology (automated and otherwise) and infrastructure that made modern agriculture possible, we wouldn’t have the abundance of food (in Western countries) needed to obviate constant worry about our next meals (and whether there will be any). A world of amateur gardeners and landscapers would hardly provide enough sustenance for us, however “fulfilling” the work might be.

There's also the problem with the claim that it's bad we no longer understand technology, now that it is a black box whose inner workings are foreign and ultimately inaccessible to us mere flesh-and-blood mortals.

Sure, I mean, there’s computer software, and then there’s indoor plumbing, and those have different roles in our lives.

And there’s an example of the way technology used to work: you used it, but you kind of understood what was going on. You may not have been a plumber or an engineer, but you kind of knew what the pipes in your house did. As we’ve become dependent on software, something very different is happening: we don’t see or understand how the algorithms work, and that raises the danger of manipulation. You use a software program, and your own intentions are being shaped by the people writing the software, but because it’s all hidden from you, you don’t fully grasp how. So we begin to see an erosion of agency.

If Carr wants a meaningful, integrated world, then this trend should be a positive development. The world in which we live is populated by apparently magical entities that work together and communicate in ways utterly alien to us. Everything is brought back together into a unified whole: the inscrutable black box. This incomprehensible, automated technology has re-enchanted the world, has undone the disenchantment (Entzauberung) that Friedrich Schiller in On the Aesthetic Education of Man declared was modernity’s defining aspect. Max Weber would borrow the phrase and the argument in tracing the contours of the rational society that arose in the West with the industrial revolution, in which everything is analyzed in the literal sense of being broken into its constituent parts, counted, processed, and measured.

Carr’s problem is likely that the black boxes now do all the analysis and understanding for us. The concern about our not understanding how so-called technology works just emphasizes how much we’ve allowed the machines themselves to do that analysis, counting, and measurement on our behalf. It’s a task we hand off with good reason, for our brains are simply not complex enough to keep track of all the information, to consider all the influencing factors, and to come to a conclusion in enough time to make a difference. To claim that simpler technology and mathematical models are better because humans can understand them, change them, or use them more effectively presupposes that we can hope to have complete control over something, especially an artifact we created, and by extension over the world those tools allow us to build. Such control is an illusion. We were no better at making ethical decisions about the use of technology when steam engines tore up the landscape than we are now, when computers silently blast one another with continuous streams of data and calculate missile trajectories.

The problem Carr and other critics of “technology” have is not with the technology itself but with the fact that our new machines think and make sense of the world in ways unfamiliar to us humans (or at least to most of us). It takes significant training and experience to program computers and to design circuits, because they function in a way that runs counter to the narrative, meaning-laden way we are trained to understand ourselves and the world as we learn language and writing. Our counting machines prefer a, shall we say, statistical view of the world: yes, reduce everything to atoms, but then gather them back up again; sort, parse, and transform them until patterns emerge. This stumbling about and looking for patterns, rather than approaching the world with a theme or meaning in mind (and then interpreting everything to fit that story), is the dumbness that Carr is really talking about and fears.

To then say, as Carr does in the interview, that “[Automation] also raises some basic philosophical and sociological questions about our urge to turn over complex analyses and judgment-making to computers, simply because they’re fast and efficient” slightly misrepresents the situation, for the concern here isn’t automation writ large but automated (i.e., computational) thinking. Speed and efficiency are not necessarily such automation’s benefits, since computers use far more energy to think than human brains do while being unable to perform tasks that come easily to their wetware cousins. It’s simply that they think differently and that that thinking is becoming the dominant mode. He calls this type of thinking “hyper-rationalism,” as opposed, I suppose, to the regular rationalism of Plato, Descartes, Leibniz, and others. The actual question is what place human thinking has in this new world populated with different types of cognition.

2 comments

  1. “I have not yet had a chance to read Carr’s book …”

    Wow. If you have this much to say about a book you haven’t read, I can’t wait to see what you have to say after you’ve actually read it! (I mean that sincerely, not sardonically.) Thanks, Nick

    1. Thanks for reading the post!

I was thinking of data mining it, like digital humanists seem to be doing these days, instead of reading it … More seriously, in academia I learned pretty quickly that you can get the gist of a book's argument by reading the intro and conclusion and looking through the works cited/bibliography.

I'm looking forward to reading it and re-reading Wiener's The Human Use of Human Beings along the way with some other independent academics I connected with via the Twitter-verse. Most likely, we agree on the major points: that these new automated tools mean we (humans) will lose some skills or abilities, and that the latest wave of automated technologies is displacing not just manual skills but intellectual ones. Whether or not that is a bad thing remains an open question for me, because whatever answer we give will depend on how we answer “What does it mean to be human?”

People who have labeled you a “techno-phobe” are wrong, but I understand why that label might seem appropriate to less thoughtful critics (and why it requires less explanation for their audience). We'll see how my thinking changes after I've read The Glass Cage, but I suspect your most important concern is more about how to control thinking machines and whether we can trust them to make decisions—especially ones that are increasingly ethical or moral.
