Technopoly and Antihumanism

Back in May, Nicholas Carr wrote a sharp blog post critically examining Moira Weigel and Ben Tarnoff's "Why Silicon Valley Can't Fix Itself." 

The first half of Carr's response engages an earlier piece by Tarnoff and another by Evgeny Morozov that take for granted the data mining metaphor and deploy it in an argument for public ownership of data.

Carr is chiefly concerned with the mining metaphor and how it shapes our understanding of the problem. If Facebook, Google, etc. are mining our data, that in turn suggests something about our role in the process: it conceives of the human being as raw material. Carr suggests we consider another metaphor, not very felicitous either, as he notes: that of the factory. We are not raw material; we are producers: we produce data by our actions. Here's the difference:

"The factory metaphor makes clear what the mining metaphor obscures: We work for the Facebooks and Googles of the world, and the work we do is increasingly indistinguishable from the lives we lead. The questions we need to grapple with are political and economic, to be sure. But they are also personal, ethical, and philosophical."

This then leads Carr into a discussion of the Weigel/Tarnoff piece, which is itself a brief against the work of the new tech humanists.

Carr's whole discussion is worth reading, but here are two selections that were especially well put. First:

"But Tarnoff and Weigel’s suggestion is the opposite of the truth when it comes to the broader humanist tradition in technology theory and criticism. It is the thinkers in that tradition — Mumford, Arendt, Ellul, McLuhan, Postman, Turkle, and many others — who have taught us how deeply and subtly technology is entwined with human history, human society, and human behavior, and how our entanglement with technology can produce effects, often unforeseen and sometimes hidden, that may run counter to our interests, however we choose to define those interests.

Though any cultural criticism will entail the expression of values — that’s what gives it bite — the thrust of the humanist critique of technology is not to impose a particular way of life on us but rather to give us the perspective, understanding, and know-how necessary to make our own informed choices about the tools and technologies we use and the way we design and employ them. By helping us to see the force of technology clearly and resist it when necessary, the humanist tradition expands our personal and social agency rather than constricting it."

And:

"Nationalizing collective stores of personal data is an idea worthy of consideration and debate. But it raises a host of hard questions. In shifting ownership and control of exhaustive behavioral data to the government, what kind of abuses do we risk? It seems at least a little disconcerting to see the idea raised at a time when authoritarian movements and regimes are on the rise. If we end up trading a surveillance economy for a surveillance state, we’ve done ourselves no favors.

But let’s assume that our vast data collective is secure, well managed, and put to purely democratic ends. The shift of data ownership from the private to the public sector may well succeed in reducing the economic power of Silicon Valley, but what it would also do is reinforce and indeed institutionalize Silicon Valley’s computationalist ideology, with its foundational, Taylorist belief that, at a personal and collective level, humanity can and should be optimized through better programming. The ethos and incentives of constant surveillance would become even more deeply embedded in our lives, as we take on the roles of both the watched and the watcher. Consumer, track thyself! And, even with such a shift in ownership, we’d still confront the fraught issues of design, manipulation, and agency."

The discontents of humanism (variously understood), the emergence of technopoly (as Neil Postman characterized the present techno-social configuration), and the modern political order are deeply intertwined. Humanism, of course, is a complex and controversial term; it can be understood in countless ways. There is more affinity than is usually acknowledged between anti-Humanism, understood as opposition to a narrow and totalizing understanding of the human, and the anti-humanism exemplified by the misanthropic visions of the transhumanists and their Silicon Valley acolytes.

If we are incapable of even a humble affirmation of our humanness then we leave ourselves open to the worst depredations of the technological order and those who stand to profit most from it.

Technology, Law, and Ethics

It is frequently observed that developments in technology run ahead of law and ethics, which never quite catch up. This may be true, but not in the way it is usually imagined. What follows is a series of loosely related considerations that might help us see the matter more clearly.

When people claim that technology outstrips law and ethics, they are usually thinking more about the rapid advance of technology than about the structures of law and ethics. Unpacked, the claim runs something like this: new technologies, which empower us in novel ways and introduce unprecedented capacities and risks, emerge so quickly that existing laws and ethical principles, both of which are relatively static, cannot adapt fast enough to keep up.

Framed in this way, however, the claim misses the real pressure point. It is not merely the case that new technologies emerge for which we have no existing moral principles or laws to guide and constrain their use; this is only part of the picture. Rather, it is also the case that modern* technologies, arising in tandem with modern political and economic structures, have undermined the plausibility of ethical claims and legal constraints, weakened the communities that sustained and implemented such claims and constraints, and challenged the understanding of human nature upon which they depended.

To put the matter somewhat more succinctly, contemporary technologies emerge in a social context that is ideal for their unchecked and unconstrained development and deployment. In other words, technology appears to outstrip ethics and law only because of a prior hollowing out of our relevant moral infrastructure.

Social and technological forces contribute to the untethering and deracination of the human person, construing her primarily and perhaps even exclusively as an individual. However valuable this construal may be, it leaves us ill-equipped to cope with technologies that necessarily involve us in social realities.

From the ethics side of the ledger, it is also the case that modern ethics (think Kant, for example) construed morality chiefly as a matter of the individual will: a project undertaken by autonomous and rational actors without regard for moral and political communities. Political philosophy (Locke, et al.) and economic theory (Smith, etc.) follow similar trajectories.

So, in theory (political, philosophical, and economic) the individual emerges as the basic unit of thought and action. At the center of this modern theoretical picture is a novel view of freedom as individual autonomy. The individual no longer bends their will to the shape of a moral and communal order; they now bend the world to the shape of their will.

In practice, material conditions, including new technologies, sustain and reinforce this theoretical picture. Indeed, the material/technological conditions likely preceded the theory. Moreover, technology evolves as a tool of empowerment that makes the new understanding of freedom plausible and seemingly attainable. Technology is thus not apprehended as an object of moral critique; it is perceived, in fact, as the very thing that will make possible the realization of the new vision of the good life, one in which the world is the field of our own self-realization.

While certain social and material realities were isolating and untethering the individual, by the mid-19th century technologies arose that were, paradoxically, embedding her in ever more complex technical systems and social configurations.

Paradoxically, then, the more we took for granted our own agency and assumed that technology was a neutral tool of the individual autonomous will, the more our will and agency were being compromised and distributed by new technologies.

Shortest version of the preceding: Material conditions untether the individual. Modern theoretical accounts frame this as a benign and desirable development. Under these circumstances, technology is unbridled and evolves to a scale that renders individual ethical action relatively inconsequential.

Moreover, the scale of these new technologies eclipsed the scale of local communities and traditional institutions. The new institutions that arose to deal with the new scale of operation were bureaucracies; that is to say, they themselves embodied the principles and values implicit in the emerging technological milieu.

It may be better, then, to say that it is the scale of new technologies that transcends the institutions and communities which are the proper sites for ethical reflection about technology. The governing instinct is to scale up our institutions and communities to meet the challenge, but this inevitably involves a reliance on the same technologies that generate the problems. It never occurs to us that the answer may lie in a refusal to operate at a scale that is inhospitable to the human person.

Something other than individual choices and laws is necessary: something more akin to a renewal of cultural givens about what it means to be a human being and how the human relates to the non-human, givens which inform ethical choices and laws but cannot be reduced to either, together with the emergence of institutions that embody and sustain individual lives ordered by these givens. It is hard, however, to see how these emerge under present circumstances.

______________________________________________________

*Throughout the post I use "modern" to refer to Western modernity emerging c. 1600 or so (which date is certainly subject to a great deal of debate).

The Center for the Study of Ethics and Technology is Relaunching

The Center for the Study of Ethics and Technology is renewing its work at an auspicious time. Throughout the past year we have witnessed a surge of interest in the ethical and political consequences of technology. This interest has been driven by a variety of factors: revelations about misuse of user data by social media companies, widespread and systematic dissemination of disinformation, the confessions of former Silicon Valley executives about media platforms designed for addiction, fears about lack of AI accountability, anxiety about automation and unemployment, as well as concern about the negative physical, mental, and developmental health consequences of an always-on culture.

This wave of critical attention is a welcome development, but there is much work to be done.

CSET aims to advance this work by providing substantive commentary on modern technology’s ethical consequences in a variety of formats and fostering communities of reflection and practice devoted to living wisely and faithfully in a technological age.

In order to do this work well, CSET’s renewed efforts will be more explicitly grounded in our theological and ecclesial commitments. This move to foreground our theological convictions reflects our understanding that the best technology criticism flows out of a substantive understanding of the human person and of what constitutes human flourishing. We know that these are contested understandings, but it is, in our view, better to own our convictions and invite rigorous and honest debate than to veil them and undermine the critical rigor of our work. Too much of the work now being undertaken to understand and assess the ethical and political consequences of technological change flounders precisely because it knows only what it is against and not what it is for. It is inspired neither by any communal commitments nor by any explicit account of the good life.

In working from within our Christian tradition, we are in the company of some of our best thinkers about technology and modern society including such luminaries as Jacques Ellul, Ivan Illich, Albert Borgmann, Romano Guardini, Marshall McLuhan, Paul Virilio, and Walter Ong.

Our Christian commitments, however, do not preclude serious engagement with other traditions of thought or with the work of scholars outside the tradition; quite the opposite. We welcome all thoughtful and principled discussions of technology, and our conversations and discussions will reflect our desire to seek wisdom and insight wherever it may be found. We trust, as well, that those outside the tradition will find our work valuable and irenic.

The pace of digital culture tends to discourage serious reflection and encourage superficial responses. CSET will aim to be both timely and enduring in its analysis. This will be just one of the ways that we seek to embody the principles of the critique and alternative we will offer. This will often mean a willingness to abide unresolved tensions or be content simply to raise the right questions. We will resist the tyranny of the instantaneous and the temptation to offer neat solutions to the challenges raised by contemporary technology.

Our efforts will also reflect our commitment to thinking historically about technology. Again, under the temporal pressures of digital culture, we fail to think very far beyond our present moment. The proper temporal horizon of understanding for a given technology, however, may be decades or even centuries in the past. Without taking this long view, we are unlikely to get very far in our efforts to make sense of our technological situation. If our relationship to technology, broadly understood, is disordered, it is because of social, economic, political, and cultural patterns and trajectories that have been unfolding since at least the dawn of modernity if not before.

We must also measure current developments by their likely future consequences to the degree that these can be reasonably discerned. So we will couple this long view into the past with a long view into the future. We do not believe that there exist quick fixes to our situation. Rather, we believe that what is needed is a deep renewal of our understanding of what it means to be human. This is not the work of months or even a few years. We must take the long view. This is work worth undertaking, and we hope you will find it helpful.

You can follow our work by subscribing to our blog (email/RSS), signing up to receive our forthcoming newsletter, or following us on Twitter. In the coming days and weeks, look for news about our new research associates, a podcast, and events on the ground.

Moralizing Technology: A Social Media Test Case

Where do we look when we’re looking for the ethical implications of technology? A few would say that we look at the technological artifact itself. Many more would counter that the only place to look for matters of ethical concern is to the human subject. As noted in an earlier post, the philosopher of technology Peter-Paul Verbeek argues that there is another, perhaps more important, place for us to look: the point of mediation, that is, the point where the artifact and human subjectivity come together to create effects that cannot be located in either the artifact or the subject taken alone.

Verbeek would have us consider the ethical implications of how technologies shape our perception of the world and our action into the world. Take the following test case, for example.

In a witty and engaging post at The Atlantic, Robinson Meyer assigned each of the seven (+2) deadly sins to a corresponding social network. Tinder, for example, gets paired with Lust, LinkedIn with Greed, Twitter with Wrath, and, most astutely, Tumblr with Acedia. Meyer mixed in some allusions to Dante, and the end result was a light-hearted discussion that nonetheless landed a few punches.

In response, Bethany Keeley-Jonker questions the usefulness of Meyer’s essay. While appreciating the invocation of explicitly moral language, Keeley-Jonker finds that the focus on technology, in this case social media platforms, is misleading.

In her view, as I read her post, moral blame or praise can only ever be assigned to people. One thing she appreciates about Meyer’s essay, for instance, is that “it locates our problems where they’ve always been: in people.” “Why the fixation, then,” she wonders, “on the ways our worst impulses show up in social media?”

She goes on to explain her reservations this way:

“I am not so sure that Facebook increases our desire for approval so much as it broadcasts it. That broadcasting element is the second reason I think people worry a lot about social media. Folks have engaged in the same kinds of bad behavior for centuries, but in the past it wasn’t so easy to search, archive and share your vices with a few hundred of your friends, family and acquaintances.”

Recalling Verbeek’s discussion, we recognize in Keeley-Jonker’s analysis an instrumentalist approach that appears to take the technology in question to be a morally neutral tool. The ethical dimension exists entirely on the side of human subjectivity. The behavior is historically constant; in this case, social media just exposes to public view what would’ve been going on in any case.

Consider one more of Keeley-Jonker’s examples:

“Plenty of pixels have been spilled over the way Pinterest sparks envy (and Instagram, for that matter), but I’ve also seen it spark connection and sharing. I’ve seen it reproduce something that’s happened between women for decades or centuries in low-tech ways: here’s that recipe I was telling you about; here’s how I made this thing; here’s where I bought that thing; here’s the secret to chocolate chip cookies.”

Same old activity, new way of doing it. The technology, on this view, leaves the activity essentially unchanged. There is a surface similarity, certainly, in the same way that we might say a hurricane is not unlike a cool breeze.

Of course, we do not want to suggest that a social media platform can itself be guilty of a vice; that would be silly. Nor is it the case that moral responsibility does not attach to the human subject. But is this all that can be said about the matter? Is it really misleading to consider the role of social media when talking about virtue and vice? What if, following Verbeek’s lead, we focused our attention on the point of mediation? How, for example, does each of these platforms mediate our perception?

Verbeek turns to the work of philosopher Don Ihde for some analytic tools and categories. Among the many ways humans might relate to technology, Ihde notes two relations of “mediation.” The first of these he calls “embodiment relations” in which the tools are incorporated by the user and the world is experienced through the tool (think of the blind man’s stick). The second he calls a “hermeneutic relation.” Verbeek explains:

“In this relation, technologies provide access to reality not because they are ‘incorporated,’ but because they provide a representation of reality, which requires interpretation […] Ihde shows that technologies, when mediating our sensory relationship with reality, transform what we perceive. According to Ihde, the transformation of perception always has the structure of amplification and reduction.”

Verbeek gives us the example of looking at a tree through an infrared camera: most of what we see when we look at a tree unaided is “reduced,” but the heat signature of the tree is “amplified” and the tree’s health may be better assessed. Ihde calls this capacity of a tool to transform our perception “technological intentionality.” In other words, the technology directs and guides our perception and our attention. It says to us, “Look at this here, not that over there” or “Look at this thing in this way.” This function is not morally irrelevant, especially when you consider that this effect is not contained within the digital platform but spills out into our experience of the world.

What, then, if we consider social media platforms not merely as new tools that let us do old things in different ways, but as new ways of perceiving that fundamentally alter what it is that we perceive and how we relate to it? In the case of social media, we might say that what we ordinarily perceive are things like our own self reflected back to us, other people, and human relationships. Perhaps it is in the nature of the unique architecture of each of these platforms to activate certain vices precisely because of how they alter our perception.* Is there something about what each platform allows us to present about ourselves or how each platform manipulates our attention that is especially conducive to a particular vice?

Again, it is true that apart from a human subject there would be no vice to speak of, but it would be misleading to say that the platform was wholly irrelevant to, innocent even of, the vice it helps to generate. We might do well, then, to distinguish between an ever-present latent capacity for vice (or virtue) and the technological mediations that potentially activate the vice or, to stick with the moral vocabulary, constitute a field of temptation where there was none before.

And we have not yet addressed how the platforms might be conceived of as engines of habit formation – generating addiction by design, to borrow Natasha Dow Schüll’s apt formulation – and thus incubators of moral character.

The first of Melvin Kranzberg’s useful laws of technology states, “Technology is neither good nor bad; nor is it neutral.” Let us conclude with a corollary: “Technology is neither moral nor immoral; nor is it morally neutral.”

___________________________________

*A recent post by Alan Jacobs provides an illustration of this dynamic from an earlier era and its own emerging media landscape. Of Martin Luther and Thomas More, Jacobs writes, “To put this in theological terms, one might say that neither More nor Luther can see his dialectical opponent as his neighbor— and therefore neither understands that even in long-distance epistolary debate one is obligated to love his neighbor as himself” (emphasis mine).

Work, Technology, and How We Understand Human Dignity

Certain technologies generate a degree of anxiety about the relative status of human beings or about what exactly makes human beings “special”–call it post-humanist angst, if you like.

Of course, not all technologies generate this sort of angst. When it first appeared, the airplane was greeted with awe and a little battiness (consider alti-man). But it did not result in any widespread fears about the nature and status of human beings. The seemingly obvious reason for this is that flying is not an ability that has ever defined what it means to be a human being.

It seems, then, that anxiety about new technologies is sometimes entangled with shifting assumptions about the nature or dignity of humanity. In other words, the fear that machines, computers, or robots might displace human beings may or may not materialize, but it does tell us something about how human nature is understood.

Is it that new technologies disturb existing, tacit beliefs about what it means to be a human, or is it the case that these beliefs arise in response to a new perceived threat posed by technology? It is hard to say, but some sort of dialectical relationship is involved.

A few examples come to mind, and they track closely to the evolution of labor in Western societies.

During the early modern period, perhaps owing something to the Reformation’s insistence on the dignity of secular work, the worth of a human being gets anchored to their labor, most of which is, at this point in history, manual labor. The dignity of the manual laborer is later challenged by mechanization during the 18th and 19th centuries, and this results in a series of protest movements, most famously that of the Luddites.

Eventually, a new consensus emerges around the dignity of factory work, and this is, in turn, challenged by the advent of new forms of robotic and computerized labor in the mid-twentieth century.

Enter the so-called knowledge worker, whose short-lived ascendancy is presently threatened by advances in computers and AI.

This latter development helps explain our present fascination with creativity. It’s been over a decade since Richard Florida published The Rise of the Creative Class, but interest in and pontificating about creativity continues apace. What I’m suggesting is that this fixation on creativity is another recalibration of what constitutes valuable, dignified labor, which is also, less obviously perhaps, what is taken to constitute the value and dignity of the person. Manual labor and factory jobs give way to knowledge work, which now surrenders to creative work. As they say, nice work if you can get it.

Interestingly, each re-configuration not only elevated a new form of labor, but it also devalued the form of labor being displaced. Manual labor, factory work, even knowledge work, once accorded dignity and respect, are each reframed as tedious, servile, monotonous, and degrading just as they are being replaced. If a machine can do it, it suddenly becomes sub-human work.

It’s also worth noting how displaced forms of work seem to re-emerge and regain their dignity in certain circles. I’m presently thinking of Matthew Crawford’s defense of manual labor and the trades. Consider as well this lecture by Richard Sennett, “The Decline of the Skills Society.”

It’s not hard to find these rhetorical dynamics at play in the countless presently unfolding discussions of technology, labor, and what human beings are for. Take as just one example this excerpt from the recent New Yorker profile of the venture capitalist Marc Andreessen (emphasis mine):

Global unemployment is rising, too—this seems to be the first industrial revolution that wipes out more jobs than it creates. One 2013 paper argues that forty-seven per cent of all American jobs are destined to be automated. Andreessen argues that his firm’s entire portfolio is creating jobs, and that such companies as Udacity (which offers low-cost, online “nanodegrees” in programming) and Honor (which aims to provide better and better-paid in-home care for the elderly) bring us closer to a future in which everyone will either be doing more interesting work or be kicking back and painting sunsets. But when I brought up the raft of data suggesting that intra-country inequality is in fact increasing, even as it decreases when averaged across the globe—America’s wealth gap is the widest it’s been since the government began measuring it—Andreessen rerouted the conversation, saying that such gaps were “a skills problem,” and that as robots ate the old, boring jobs humanity should simply retool. “My response to Larry Summers, when he says that people are like horses, they have only their manual labor to offer”—he threw up his hands. “That is such a dark and dim and dystopian view of humanity I can hardly stand it!”

As always, it is important to ask a series of questions: Who’s selling what? Who stands to profit? Whose interests are being served? Etc. With those considerations in mind, it is telling that leisure has suddenly and conveniently re-emerged as a goal of human existence. Previous fears about technologically driven unemployment have ordinarily been met by assurances that different and better jobs would emerge. It appears that pretense is being dropped in favor of vague promises of a future of jobless leisure. So, it seems we’ve come full circle to classical estimations of work and leisure: all work is for chumps and slaves. You may be losing your job, but don’t worry, work is for losers anyway.

To sum up: Some time ago, identity and a sense of self-worth got hitched to labor and productivity. Consequently, each new technological displacement of human work appears to those being displaced as an affront to their dignity as human beings. Those advancing new technologies that displace human labor do so by demeaning existing work as below our humanity and promising more humane work as a consequence of technological change. While this is sometimes true–some work that human beings have been forced to perform has been inhuman–deployed as a universal truth, it is little more than rhetorical cover for a significantly more complex and ambivalent reality.

Truth and Trust in the Age of Algorithms

A great deal has been written in the last few days about how Facebook determined which stories appeared in its "Trending" feature. The controversy began when Gizmodo published a story claiming to reveal an anti-conservative bias among the site's "news curators":

Facebook workers routinely suppressed news stories of interest to conservative readers from the social network’s influential “trending” news section, according to a former journalist who worked on the project. This individual says that workers prevented stories about the right-wing CPAC gathering, Mitt Romney, Rand Paul, and other conservative topics from appearing in the highly-influential section, even though they were organically trending among the site’s users.

Several former Facebook “news curators,” as they were known internally, also told Gizmodo that they were instructed to artificially “inject” selected stories into the trending news module, even if they weren’t popular enough to warrant inclusion—or in some cases weren’t trending at all. The former curators, all of whom worked as contractors, also said they were directed not to include news about Facebook itself in the trending module.

Naturally, the story generated not a little consternation among conservatives. Indeed, a Republican senator, John Thune, was quick to call for a congressional investigation.

Subsequently, leaked documents revealed that Facebook's "Trending" feature was heavily curated by human editors:

[...] the documents show that the company relies heavily on the intervention of a small editorial team to determine what makes its “trending module” headlines – the list of news topics that shows up on the side of the browser window on Facebook’s desktop version. The company backed away from a pure-algorithm approach in 2014 after criticism that it had not included enough coverage of unrest in Ferguson, Missouri, in users’ feeds.

The guidelines show human intervention – and therefore editorial decisions – at almost every stage of Facebook’s trending news operation [...]

The whole affair is not inconsequential because Facebook is visited by over one billion people daily and is now widely regarded as "the biggest news distributor on the planet." In her running commentary on Twitter, Zeynep Tufekci wrote, "My criticism is this: Facebook is now among the world's most important gatekeepers, and it has to own that role. It's not an afterthought."

Along with the irritation expressed by conservatives, others have criticized Facebook for presenting its Trending stories as the products of a neutral and impersonal computational process. As the Guardian noted:

“The topics you see are based on a number of factors including engagement, timeliness, Pages you’ve liked and your location,” says a page devoted to the question “How does Facebook determine what topics are trending?”

No mention there of the human curators, and this brings us closer to what may be the critical issue: our expectations of algorithms.

First, we should note that the word algorithm is itself part of the problem. In his thoughtful discussion of the Facebook story, Navneet Alang called the algorithm the "organizing principle" of our age. For this reason, we ought to be careful in our use of the term; it does both too much and too little. As Tufekci tweeted, "I *do* wish there were a better term than algorithm to mean 'complex and opaque computation of consequence'. Language does what it does."

Secondly, Rob Horning is almost certainly right in claiming that "Facebook is invested in the idea that truth depends on scale, and the size of their network gives them privileged access to the truth." To borrow a phrase from Kate Crawford, Facebook wants to be the dominant force in a "data driven regime of 'truth.'"

Thirdly, it is apparent that in its striving to be the dominant player in the "data driven regime of truth," Facebook is answering a widely felt desire. "Because they are mathematical formulas," Alang observed, "we often feel that algorithms are more objective than people." "Facebook’s aim," Alang added, "appears to have been to eventually replace its humans with smarter formulas." 

We want to believe that Algorithms + Big Data = Truth. We have, in other words, displaced the old Enlightenment faith in neutral, objective Reason, which was to guide democratic deliberation in the public sphere, onto the "algorithms" that structure our digital public sphere.

False hopes and subsequent frustrations with "algorithms," then, reveal underlying technocratic aspirations: the longing for technology to do the work of politics. It is a longing that may be understandable, given the frustrating, difficult, and sometimes even dangerous work of doing politics, but it is misguided nonetheless.

Our desire for neutral, truth-revealing algorithms can also be framed as a symptom of a crisis of trust. If we cannot trust people or institutions composed of people, perhaps we can trust impersonal computational processes. Not surprisingly, we feel badly used upon discovering that behind the curtain of these processes are only more people. But the sooner these false hopes and technocratic dreams are dispelled, the better.

Practice and Counter-practice

From Albert Borgmann’s Power Failure:

“… for a long time to come technology will constitute the common rule of life. The Christian reaction to that rule should not be rejection but restraint … But since technology as a way of life is so pervasive, so well entrenched, and so concealed in its quotidianity, Christians must meet the rule of technology with a deliberate and regular counterpractice.

Therefore, a radical theology of technology must finally become a practical theology, one that first makes room and then makes way for a Christian practice. Here we must consider again the ancient senses of theology, the senses that extend from reflection to prayer. We must also recover the ascetic tradition of practice and discipline and ask how the ascesis of being still and solitary in meditation is related to the practice of being communally engaged in the breaking of the bread. The passage through technology discloses a new or an ancient splendor in ascesis. There is no duress or denial in ascetic Christianity. On the contrary, liberating us from the indolence and shallowness of technology, it opens to us the festive engagement with life.”

The “rule of technology” engraves itself on us by shaping the routines and habits of daily life so that it is both pervasive and unnoticed. In other words, it is not enough to merely desire or will to live well with technology. Borgmann’s crucial insight, for Christians and non-Christians alike, is the necessity of deploying deliberate and intentional counterpractices that embody and instantiate an alternative form of life.

Virtue and Technology

“Questioning is the piety of thought,” or so Martin Heidegger would have us believe. It is with that line that he closed his famous essay, “The Question Concerning Technology.” Indeed, the right question or a new question can lead our thinking to fresh insights and deeper reflections.

With regard to the ethics of technology, we typically ask, “What should I or should I not do with this technology?” and thus focus our attention on our actions. In this, we follow the lead of the two dominant modern ethical traditions: the deontological tradition stemming from Immanuel Kant, on the one hand, and the consequentialist tradition, closely associated with Bentham and Mill, on the other. In both traditions, a particular sort of moral subject or person is in view—an autonomous and rational individual who acts freely and in accord with the dictates of reason.

In the Kantian tradition, the individual, having decided upon the right course of action through the right use of their reason, is duty bound to act thusly, regardless of consequences. In the consequentialist tradition, the individual rationally calculates which action will yield the greatest degree of happiness, variously understood, and acts accordingly.

If technology comes into play in such reasoning by such a person, it is strictly as an instrument of the individual will. The question, again, is simply, “What should I do or not do with it?” We ascertain the answer either by determining the dictates of subjective reasoning or by calculating the objective consequences of an action; the latter approach is perhaps more appealing for its resonance with the ethos of technique.

We might conclude, then, that the popular instrumentalist view of technology—a view which takes technology to be a mere tool, a morally neutral instrument of a sovereign will—is the natural posture of the sort of individual or moral subject that modernity yields. It is unlikely to occur to such an individual that technology is not only a tool with which moral and immoral actions are performed but also an instrument of moral formation, informing and shaping the moral subject.

It is not that the instrumentalist posture is of no value, of course. On the contrary, it raises important questions that ought to be considered and investigated. The problem is that this approach is incomplete and too easily co-opted by the very realities that it seeks to judge. It is, on its own, ultimately inadequate to the task because it takes as its starting point an inadequate and incomplete understanding of the human person.

There is, however, another older approach to ethics that may help us fill out the picture and take into account other important aspects of our relation to technology: the tradition of virtue ethics in both its classical and medieval manifestations.

In Moralizing Technology, Peter-Paul Verbeek comments on some of the advantages of virtue ethics. To begin with, virtue ethics does not ask, “What am I to do?” Rather, it asks, in Verbeek’s formulation, “What is the good life?” We might also add a related question that virtue ethics raises: “What sort of person do I want to be?” This is a question that Verbeek also considers, taking his cues from the later work of Michel Foucault.

The question of the good life, Verbeek adds,

“does not depart from a separation of subject and object but from the interwoven character of both. A good life, after all, is shaped not only on the basis of human decisions but also on the basis of the world in which it plays itself out (de Vries 1999). The way we live is determined not only by moral decision making but also by manifold practices that connect us to the material world in which we live. This makes ethics not a matter of isolated subjects but, rather, of connections between humans and the world in which they live.”

Virtue ethics, with its concern for habits, practices, and communities of moral formation, illuminates the various ways technologies impinge upon our moral lives. For example, a technologically mediated action that, taken on its own and in isolation, may be judged morally right or indifferent may appear in a different light when considered as one instance of a habit-forming practice that shapes our disposition and character.

Moreover, virtue ethics, which predates the advent of modernity, does not necessarily assume the sovereign individual as its point of departure. For this reason, it is more amenable to the ethics of technological mediation elaborated by Verbeek. Verbeek argues for “the distributed character of moral agency,” distributed, that is, among the subject and the various technological artifacts that mediate the subject’s perception of and action in the world.

At the very least, asking the sorts of questions raised within a virtue ethic framework fills out our picture of technology’s ethical consequences.

In Susanna Clarke’s delightful novel, Jonathan Strange & Mr. Norrell, a fantastical story cast in realist guise about two magicians recovering the lost tradition of English magic in the context of the Napoleonic Wars, one of the main characters, Strange, has the following exchange with the Duke of Wellington:

“Can a magician kill a man by magic?” Lord Wellington asked Strange. Strange frowned. He seemed to dislike the question. “I suppose a magician might,” he admitted, “but a gentleman never would.”

Strange’s response is instructive and the context of magic more apropos than might be apparent. Technology, like magic, empowers the will, and it raises the sort of question that Wellington asks: can such and such be done?

Not only does Strange’s response make the ethical dimension paramount; he also approaches the ethical question as a virtue ethicist. He does not run consequentialist calculations, nor does he query the deliberations of a supposedly universal reason. Rather, he frames the empowerment availed to him by magic with a consideration of the kind of person he aspires to be, and he subjects his will to this larger project of moral formation. In so doing, he gives us a good model for how we might think about the empowerments availed to us by technology.

As Verbeek, reflecting on the aptness of the word subject, puts it, “The moral subject is not an autonomous subject; rather, it is the outcome of active subjection” [emphasis his]. It is, paradoxically, this kind of subjection that can ground the relative freedom with which we might relate to technology.

Ethics of Technological Mediation

Early on in Moralizing Technology: Understanding and Designing the Morality of Things (2011), Peter-Paul Verbeek briefly outlines the emergence of the field known as “ethics of technology.” “In its early days,” Verbeek notes, “ethical approaches to technology took the form of critique (cf. Swierstra 1997). Rather than addressing specific ethical problems related to actual technological developments, ethical reflection on technology consisted in criticizing the phenomenon of ‘Technology’ itself.” Here we might think of Heidegger, critical theory, or Jacques Ellul.

In time, “ethics of technology” emerged “seeking increased understanding of and contact with actual technological practices and developments,” and soon a host of sub-fields appeared: biomedical ethics, ethics of information technology, ethics of nanotechnology, engineering ethics, ethics of design, etc.

This approach remains “merely instrumentalist.” “The central focus of ethics,” on this view, “is to make sure that technology does not have detrimental effects in the human realm and that human beings control the technological realm in morally justifiable ways.” It’s not that these considerations are unimportant, quite the contrary, but Verbeek believes that this approach “does not yet go far enough.”

Verbeek explains the problem:

“What remains out of sight in this externalist approach is the fundamental intertwining of these two domains [the human and the technological]. The two simply cannot be separated. Humans are technological beings, just as technologies are social entities. Technologies, after all, play a constitutive role in our daily lives. They help to shape our actions and experiences, they inform our moral decisions, and they affect the quality of our lives. When technologies are used, they inevitably help to shape the context in which they function. They help specific relations between human beings and reality to come about and coshape new practices and ways of living.”

Observing that technologies mediate both perception (how we register the world) and action (how we act into the world), Verbeek elaborates a theory of technological mediation, built upon a postphenomenological approach to technology pioneered by Don Ihde. Rather than focus exclusively on either the artifact “out there,” the technological object, or the will “in here,” the human subject, Verbeek invites us to focus ethical attention on the constitution of both the perceived object and the subject’s intention in the act of technological mediation. In other words, how technology shapes perception and action is also of ethical consequence.

As Verbeek rightly insists, “Artifacts are morally charged; they mediate moral decisions, shape moral subjects, and play an important role in moral agency.”

American Technological Sublime: Our Civil Religion

David Nye is the author of American Technological Sublime (1994), a classic work in the history of technology. Except that it is not a work of history in the strict disciplinary sense. Nye draws promiscuously from other fields — citing Burke, Kant, Durkheim, Barthes, and Baudrillard, among others — to present a wide-ranging and insightful study of the American character.

The concept of the technological sublime was not original to Nye. It had first been developed by Perry Miller, a prominent mid-twentieth-century scholar of early American history, in his study The Life of the Mind in America. There Miller noted in passing the almost religious veneration that sometimes attended the experience of new technologies in the early republic.

Miller found that in the early nineteenth century “technological majesty” had found a place alongside the “starry heavens above and the moral law within to form a peculiarly American trinity of the Sublime.” Taking the steamboat as an illustration, Miller suggests that technology’s cultural ascendancy was abetted by a decidedly non-utilitarian aspect of awe and wonder bordering on religious reverence. “From the beginning, down to the great scenes of Mark Twain,” Miller explains, “the steamboat was chiefly a subject of ecstasy for its sheer majesty and might, especially for its stately progress at night, blazing with light through the swamps and forests of Nature.”

Leo Marx also employed the technological sublime, but again only in passing. It fell to David Nye, a student of Marx’s, to develop a book-length treatment of the concept. Nye looks to Edmund Burke and Immanuel Kant in order to fill out the concept of the sublime, but it is apparent from the start that Nye is less interested in the philosopher’s solitary experience of the sublime in the presence of natural wonders than in the popular and often collective experience of the sublime in the presence of technological marvels.

Nye, with a historian’s eye for interesting and compelling sources, weaves together a series of case studies that demonstrate the wonder, awe, and not a little trepidation that attended the appearance of the railroads, the Brooklyn Bridge, the Hoover Dam, the factory, skyscrapers, the electrified cityscape, the atomic bomb, and the moon landing. Through these case studies Nye demonstrates how Americans have responded to certain technologies, either because of their scale or their dynamism, in a manner best described by the category of the sublime. And perhaps more importantly, he argues that this experience of the technological sublime, laced throughout American history, has acted as a thread stitching together the otherwise diverse and divided elements of American society.

If the philosophers provided Nye with the terminology to name the phenomenon, he takes his interpretative framework from the sociologists of religion. Nye’s project is finally indebted more to Emile Durkheim than to either Burke or Kant. Nye notes early on that “because of its highly emotional nature, the popular sublime was intimately connected to religious feeling.” Later he observes that the American sublime was “fused with religion, nationalism, and technology” and ceased to be a “philosophical idea”; instead, it “became submerged in practice.”

This emphasis on practice is especially important to Nye’s overall thesis, and it is on the practices surrounding the technological sublime that he concentrates his attention. For example, with each new sublime technology he discusses, Nye explores the ceremonies that attended its public reception. The 1939 World’s Fair, to take another example, appears almost liturgical in Nye’s exposition, with its carefully choreographed exhibitions featuring religiously intoned narration and a singular vision of a utopian future.

This attention to practices and ceremonies was signaled at the outset when Nye cited David Kertzer’s “Neo-Durkheimian view” that “ritual can produce bonds of solidarity without requiring uniformity of belief.” This functionalist view of religious ritual informs Nye’s analysis of the technological sublime throughout. In Nye’s story, the particular technologies are almost irrelevant. They are significant only to the degree that they gather around themselves a set of practices. And these practices are important to the degree that they serve to unify the body politic in the absence of shared blood lines or religion.

All told, Nye has written a book about a secular civil religion focused on sublime technologies and he has presented a convincing case. Absent the traditional elements that bind a society together, the technological sublime provided Americans a set of shared experiences and categories around which a national character could coalesce.

Nye has woven a rich, impressive narrative that draws technology and religion together to help explain the American national character. There’s a great deal I’ve left out that Nye develops. For example: the evolving relationship of reason to nature and technology as mediated through the sublime or the diminishing active role of citizens, and especially laborers, in the public experience of the technological sublime. But these, in my view, are minor threads.

The take-away insight is that Americans blended, almost seamlessly, their religious affections with their veneration for technology until finally the experience of technology took on the unifying role of religion in traditional societies. Historically, Americans have been divided by region, ethnicity, race, religion, and class. Americans share no bloodlines, and they have no ancient history in their land. What they have possessed, however, is a remarkable faith in technological progress that has been periodically rekindled by one sublime technology after another, all the way to the space shuttle program and its final mission.

The question we're left with is this: What happens when the technological sublime runs dry? As Nye points out, unlike the natural sublime, it is non-renewable. In other words, the sublime response wears off and must find another object to draw it out. If Nye is right — and I do think it is possible to overreach here, so I want to be careful — there is not much else that serves as well as the technological sublime to bind American society together. Perhaps, then, part of our recent sense of unraveling, our heightened sense of disunity, the so-called culture wars — perhaps these are accentuated by the withdrawal of the technological sublime. Perhaps, but that would take another book to explore.