Friday, September 16, 2011

Transhumanism: It's impossible to predict whether a technology will be a "good" or "bad" thing.

It's Impossible To Predict Whether a Technology Will Be a "Good" or "Bad" Thing

Posted Wednesday, Sept. 14, 2011, at 12:36 PM ET

This conversation is part of Future Tense, a partnership between Slate, the New America Foundation, and Arizona State University. On Thursday, Sept. 15, Future Tense will be hosting an event in Washington, D.C., on the boundaries between humans and machines. RSVP here to join us for "Is Our Techno-Human Marriage in Need of Counseling?"

Wow. So Nick and Kyle have raised some great points; they make me want to write another book (not!). There are a couple of issues here I think are fairly easy, a number that are much harder, and one or two that I want to suggest require reframing.

To take a simple one first, an obvious question is whether people actually want enhancement, and if so, of what kinds. Here, I think the first-off answer is pretty easy: Give them the opportunity and see what they do! The problem with that approach, of course, is that it allows one to avoid worrying about network effects: If one person enhances, I'm normal (OK, maybe I am not, but let's be generic here); if 25 percent enhance, I'm still competitive; if 75 percent, I'm the new subnormal. Moreover, it sidesteps the "abortion effect": What is appropriate when a majority perhaps accepts an enhancement, but to a minority it represents an absolutely reprehensible choice? And, conversely, when is it OK for society to stop people from doing what they want as individuals? I think most of the folks who are seriously thinking about these issues acknowledge that glib generalities are inadequate, no matter how we tussle over specifics and fight our corners based on our individual perspectives.

But here we begin to get into some of the harder issues. For one thing, we need to be very clear that technologies that are potent enough to worry about are also potent enough to destabilize existing institutional, cultural, economic, and social systems. This can be either good or bad and often involves significant distributional effects, which are very hard to generalize. Facebook helped power the Arab Spring: Is that a "good" or a "bad" thing? Isn't it too early to tell, at least in terms of outcome? Railroads destroyed a rural America and much of the extant ecology of the American Midwest; again, is that "good" or "bad"? UAVs (unmanned aerial vehicles) used in combat in Af-Pak: "good" or "bad"? When considering these questions, which I think are generally not answerable in any rigorous sense, it is important to differentiate between what is already here and what is speculative. In most cases, the immediate effects of a technology are known and are positive, at least for a substantial number of people; otherwise, of course, why would one bother to introduce it? (We call this Level 1 in The Techno-Human Condition.)

But technology systems of any power also will have social, institutional, economic, cultural, and other effects. These are very real, but they are seldom known a priori; in fact, they can probably not be known a priori, because the systems involved are complex, adaptive, and rapidly shifting. Thus, for example, I doubt very much that Gottlieb Daimler looked at his new engine in 1885 and thought to himself, "Darn! If I go any further, global climate change will undoubtedly be a problem, and I really don't want to even think about the Middle East, oil, and the Israeli-Palestinian conflict!" One must necessarily, then, treat the potential implications of human enhancements, and indeed of all technology systems, as speculative, whether one is a utopian or a dystopian.

This is an important point. These are all complex adaptive systems, and we really don't know what they'll do in the future or how they will play out. It is easy to spin utopian and dystopian hypotheticals, and it can be useful, to the extent we use them as aids to think about possible scenarios, including what institutions and society might do to respond. But it is a category mistake to attribute reality to a hypothetical future and use it as a basis for policy in the present. If we really could know how a human enhancement would play out, sure, a response now might be appropriate. But the difficult reality is that we don't have the faintest clue what the systemic institutional, cultural, and social implications of such interventions are liable to be. I think we need to get a lot better at perceiving and speculating about these systems; that's part of building our adaptive capability for an increasingly contingent, uncertain, and unstable future. But it is a step too far to try to argue for specific actions now based on hypothetical dangers or benefits that "will" accrue in the future, because we don't know how the future is going to play out. Thus, for example, some environmental groups argued strenuously a couple of years ago for a "ban on nanotechnology" based on a pretty dystopian set of hypotheticals that, in urging significant and immediate policy change, they wanted accepted as predictions of real futures. At the time, of course, virtually all electronics operated within the accepted nanotechnology scale (below 100 nanometers), and new technologies, quantum computers, for example, were specifically being developed based on nanoscale physical structures and behaviors. (In recent months, an anti-nanotechnology group or individual has sent bombs to nanotechnology experts in Mexico.) Hypotheticals are very useful thinking tools, and scenarios are a powerful way to prepare for contingent futures, but in both cases it is critical to remember that they are aids to more powerful thinking, not predictions about what is "really going to happen."

And Nick, you're right here: Each technology is not only going to be part of larger systems but also something that will raise its own little conundrums. A high-level discussion like this doesn't get into that depth, but every one (from, say, lethal autonomous robots to drugs that enable memory manipulation) has its own charms and snares. But in either case (the large tech systems, and the individual developments), I don't think we have nearly the knowledge, or understanding, or even perception of these systems that we think we do.

And there's a further caution, I think. We worry a lot about physical and cultural imperialism. But there is also such a creature as temporal imperialism. We like the world we live in, and its values, and our perspective on "right" and "wrong" and other moral issues. But suppose that someone early in the European Enlightenment, convinced that medicine was a dangerous anti-religious plot by atheistic scientists, had gained the power to stop medicine, ensuring that we all died at 30 to 35, as God intended? (This example seems more relevant to me with every passing year.) Suppose that we froze the prevalent moral view throughout much of human history that there were "people" and there were "barbarians" and that treating the latter like the former was not only unnecessary, but uneconomic? If one is dealing with issues that live and die in several years, ethics and morality can usually be regarded as fixed (if not universal), but is it not a dangerous hubris to impose our ethics on the further future by trying to freeze and shape its potential before future generations even speak? This does not call for that straw man, relativism, but it does suggest that a certain humility that reflects our status as temporary representatives of a far longer collective might not be inappropriate.

But I'd like to return to a final observation. A lot of this dialogue presupposes 1) that we have some sort of ability to control the evolutionary paths of these technologies, and 2) that we know enough about them to be able to do so rationally, ethically, and responsibly. Both assumptions merit further research, but I think they are, at best, whistling in the dark. Do we really control where nanotechnology, or biotechnology, or information and communication technology, is going and what it will eventually give us the power to do? After all, many medical advances are "dual use" in the sense that, used for someone requiring medical attention, they cure; used for someone who is otherwise healthy, they enhance. Are we going to stop medical enhancements? Or regulate them? Our regulation of off-label cognitive enhancers is failing at every campus I'm familiar with, and not just with the students. And hasn't the EU done its best to stop any agricultural genetically modified organisms from being planted anywhere, even if, like "golden rice," they potentially would avoid a lot of harm and loss of life? And failed? And didn't the previous American administration try to limit stem cell research, only to spawn state-based programs and drive leading researchers to other countries, where they continued their work? Historically, did the ban on gunpowder that the Japanese and, more informally, the Chinese implemented, and which they no doubt saw as ethically justified, work? Or did eschewing this technology just open the way for British (and American) gunboats to sail up the Yangtze from the 1840s onward, and Commodore Perry's "black ships" to force open Japan in 1853? Put another way: If the United States and the European Union decide not to pursue a particular enhancement technology for ethical reasons, why is there any reason in the world to expect other, competing cultures to do so as well, especially when their religious or philosophic traditions, and their balancing of the risks and benefits of enhancement, may well be very different? I realize this poses a difficult conundrum: Do we really want to give other countries a veto over our moral decisions? And yet the fact that a decision against an enhancement technology may do nothing but weaken a society over time, because even if "we" stop, "they" won't, cannot be immaterial.

Nick is worried that I am assuming that it's already "too late." Well, yes and no. I think when that ape picked up the bone, reconceptualized it as a weapon, and developed a culture in which bones and stones were used as weapons, it was in a meaningful sense already "too late," if you wanted to avoid human technological enhancement. On the other hand, "too late" is a normative judgment, and, frankly, I don't even get that far. I'm just trying to understand what's out there, and what it says to me is that a) rapid and accelerating technological evolution across the entire technological frontier is here, and it has already created psychological, social, and cultural changes we haven't begun to understand; b) because these systems are powerful, and grant personal and cultural authority, and have significant military and security implications, they are going to be hard to modify or stop; and c) even if we could modify them, they are sufficiently complex that at this point, with our existing institutions and worldviews, we are clueless as to whether we are doing something ethical and rational, or not. Tough world. But I'm not saying that it's good or bad. Just that it's already here, and it is, indeed, tough. I also don't say we can't adjust to do much better. But, again, I don't see that happening.

And finally, I do like philosophy, really! But I appreciate it more when it perceives what's actually out there and speaks to real issues in ways that reflect the real world (yeah, I know: obviously American vs. Continental). Too many of our current discussions (not this one, and not by you, Nick!) seem to revolve around worlds that might be, wistful dreams of sustainability and adroit defenses of past verities, or lapse into speculation about angels, dancing with stars, and heads of pins. We do need philosophic framing, but we need it to be grounded in the world we have in front of us now, to perceive that world, and to help us grapple with the complexity of that world, rather than to offer up erudite Kool-Aid for the cognoscenti.

Brad


Source: http://feeds.slate.com/click.phdo?i=cb84611ab303e6bef14d28bc27ffc2fe
