Explanation/Instructions

This is a solo practice activity. Students should read all of the cards as they typically would. For the cards under the first block title, students should read the red text (this replaces highlighting). For the second block title, students should read the highlighting. All of the cards until the last three have a mirroring effect applied to the text. Take note of whether it is easier (and how much easier) to read the last three cards.

Extinction!

Our first priority should be reducing existential risk — even if the probability is low.

Anissimov 4 — Michael Anissimov, science and technology writer specializing in futurism, founding director of the Immortality Institute—a non-profit organization focused on the abolition of nonconsensual death, member of the World Transhumanist Association, associate of the Institute for Accelerating Change, member of the Center for Responsible Nanotechnology's Global Task Force, 2004 (“Immortalist Utilitarianism,” Accelerating Future, May, Available Online at http://www.acceleratingfuture.com/michael/works/immethics.htm, Accessed 09-09-2011)

The value of contributing to Aubrey de Grey's anti-aging project assumes that there continues to be a world around for people's lives to be extended. But if we nuke ourselves out of existence in 2010, then what? The probability of human extinction is the gateway function through which all efforts toward life extension must inevitably pass, including cryonics, biogerontology, and nanomedicine. They are all useless if we blow ourselves up. At this point one observes that there are many
working toward life extension, but few focused on explicitly preventing apocalyptic global disaster. Such huge risks sound like fairy tales rather than real threats - because we have never seen them happen before, we underestimate the probability of their occurrence. An existential disaster has not yet occurred on this planet.The risks worth worrying about are not pollution, asteroid impact, or alien invasion - the ones you see dramaticized in movies - these events are all either very gradual or improbable. Oxford philosopher Nick Bostrom warns us of existential risks, "...where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential." Bostrom continues, "Existential risks are distinct from global endurable risks. Examples of the latter kind include: threats to the biodiversity of Earth’s ecosphere, moderate global warming, global economic recessions (even major ones), and possibly stifling cultural or religious eras such as the “dark ages”, even if they encompass the whole global community, provided they are transitory." The four main risks we know about so far are summarized by the following, in ascending order of probability and severity over the course of the next 30 years:Biological. More specifically, a genetically engineered supervirus. Bostrom writes, "With the fabulous advances in genetic technology currently taking place, it may become possible for a tyrant, terrorist, or lunatic to create a doomsday virus, an organism that combines long latency with high virulence and mortality." There are several factors necessary for a virus to be a risk. The first is the presence of biologists with the knowledge necessary to genetically engineer a new virus of any sort. The second is access to the expensive machinery required for synthesis. Third is specific knowledge of viral genetic engineering. Fourth is a weaponization strategy and a delivery mechanism. These are nontrivial barriers, but are sure to fall in due time.Nuclear. A traditional nuclear war could still break out, although it would be unlikely to result in our ultimate demise, it could drastically curtail our potential and set us back thousands or even millions of years technologically and ethically. Bostrom mentions that the US and Russia still have huge stockpiles of nuclear weapons. Miniaturization technology, along with improve manufacturing technologies, could make it possible to mass produce nuclear weapons for easy delivery should an escalating arms race lead to that. As rogue nations begin to acquire the technology for nuclear strikes, powerful nations will feel increasingly edgy.Nanotechnological. The Transhumanist FAQ reads, "Molecular nanotechnology is an anticipated manufacturing technology that will make it possible to build complex three-dimensional structures to atomic specification using chemical reactions directed by nonbiological machinery." Because nanomachines could be self-replicating or at least auto-productive, the technology and its products could proliferate very rapidly. Because nanotechnology could theoretically be used to create any chemically stable object, the potential for abuse is massive. Nanotechnology could be used to manufacture large weapons or other oppressive apparatus in mere hours; the only limitations are raw materials, management, software, and heat dissipation.Human-indifferent superintelligence. 
In the near future, humanity will gain the technological capability to create forms of intelligence radically better than our own. Artificial Intelligences will be implemented on superfast transistors instead of slow biological neurons, and eventually gain the intellectual ability to fabricate new hardware and reprogram their source code. Such an intelligence could engage in recursive self-improvement - improving its own intelligence, then directing that intelligence towards further intelligence improvements. Such a process could lead far beyond our current level of intelligence in a

relatively short time. We would be helpless to fight against such an intelligence if it did not value our continuation.So let's say I have another million dollars to spend. My last million dollars went to Aubrey de Grey's Methuselah Mouse Prize, for a grand total of billions of expected utiles. But wait - I forgot to factor in the probability that humanity will be destroyed before the positive effects of life extension are borne out. Even if my estimated probability of existential risk is very low , it is still rational to focus on addressing the risk because my whole enterprise would be ruined if disaster is not averted . If we value the prospect of all the future lives that could be enjoyed if we pass beyond the threshold of risk - possibly quadrillions or more, if we expand into the cosmos, then we will deeply value minimizing the probability of existential risk above all other considerations.If my million dollars can avert the chance of existential disaster by, say, 0.0001%, then the expected utility of this action relative to the expected utility of life extension advocacy is shocking. That's 0.0001% of the utility of quadrillions or more humans, transhumans, and posthumans leading fulfilling lives. I'll spare the reader from working out the math and utility curves - I'm sure you can imagine them. So, why is it that people tend to devote more resources to life extension than risk prevention? The follow includes my guesses, feel free to tell me if you disagree:They estimate the probability of any risk occurring to be extremely low.They estimate their potential influence over the likelihood of risk to be extremely low.They feel that positive PR towards any futurist goals will eventually result in higher awareness of risk.They fear social ostracization if they focus on "Doomsday scenarios" rather than traditional extension.Those are my guesses. Immortalists with objections are free to send in their arguments, and I will post them here if they are especially strong. As far as I can tell however, the predicted utility of lowering the likelihood of existential risk outclasses any life extension effort I can imagine .

I cannot emphasize this enough. If a existential disaster occurs, not only will the possibilities of extreme life extension, sophisticated nanotechnology, intelligence enhancement, and space expansion never bear fruit, but everyone will be dead , never to come back . Because the we have so much to lose, existential risk is worth worrying about even if our estimated probability of occurrence is extremely low .It is not the funding of life extension research projects that immortalists should be focusing on. It should be projects that decrease the risk of existential risk. By default, once the probability of existential risk is minimized, life extension technologies can be developed and applied. There are powerful economic and social imperatives in that direction, but few towards risk management. Existential risk creates a "loafer problem" — we always expect someone else to take care of it. I assert that this is a dangerous strategy and should be discarded in favor of making prevention of such risks a central focus.
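A rough sketch of the arithmetic Anissimov says he will spare the reader, using only the card's own illustrative figures (a 0.0001% reduction in extinction risk and a future population of "quadrillions"; both numbers are placeholders from the card, not independent estimates):

\[
\Delta EU \;\approx\; \underbrace{10^{-6}}_{0.0001\%\ \text{risk reduction}} \times \underbrace{10^{15}\ \text{lives}}_{\text{“quadrillions”}} \;=\; 10^{9}\ \text{future lives saved in expectation.}
\]

On these assumptions, even a one-in-a-million reduction in extinction probability carries an expected value on the order of a billion lives, which is why the card treats it as outclassing any direct life-extension investment.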

Reducing existential risk by even a tiny amount outweighs every other impact — math. Bostrom 11 — Nick Bostrom, Professor in the Faculty of Philosophy & Oxford Martin School, Director of the Future of Humanity Institute, and Director of the Programme on the Impacts of Future Technology at the University of Oxford, recipient of the 2009 Eugene R. Gannon Award for the Continued Pursuit of Human Advancement, holds a Ph.D. in Philosophy from the London School of Economics, 2011 (“The Concept of Existential Risk,” Draft of a Paper published on ExistentialRisk.com, Available Online at http://www.existentialrisk.com/concept.html, Accessed 07-04-2011)

Holding probability constant, risks become more serious as we move toward the upper-right region of figure 2. For any fixed probability, existential risks are thus more serious than other risk categories. But just how much more serious might not be intuitively obvious. One might think we could get a grip on how bad an existential catastrophe would be by considering some of the worst historical disasters we can think of—such as the two world wars, the Spanish flu pandemic, or the Holocaust—and then imagining something just a bit worse. Yet if we look at global population statistics over time, we find that these horrible events of the past century fail to register (figure 3).

[Graphic Omitted] Figure 3: World population over the last century. Calamities such as the Spanish flu pandemic, the two world wars, and the Holocaust scarcely register. (If one stares hard at the graph, one can perhaps just barely make out a slight temporary reduction in the rate of growth of the world population during these events.)

But even this reflection fails to bring out the seriousness of existential risk. What makes existential catastrophes especially bad is not that they would show up robustly on a plot like the one in figure 3, causing a precipitous drop in world population or average quality of life. Instead, their significance lies primarily in the fact that they would destroy the future. The philosopher Derek Parfit made a similar point with the following thought experiment:

I believe that if we destroy mankind, as we now can, this outcome will be much worse than most people think. Compare three outcomes:

(1) Peace.
(2) A nuclear war that kills 99% of the world’s existing population.
(3) A nuclear war that kills 100%.

(2) would be worse than (1), and (3) would be worse than (2). Which is the greater of these two differences? Most people believe that the greater difference is between (1) and (2). I believe that the difference between (2) and (3) is very much greater. … The Earth will remain habitable for at least another billion years . Civilization began only a few thousand years ago . If we do not destroy mankind, these few thousand years may be only a tiny fraction of the whole of civilized human history . The difference between (2) and (3) may thus be the difference between this tiny fraction and all of the rest of this history . If we compare this possible history to a day, what has occurred so far is only a fraction of a second . (10: 453-454)

To calculate the loss associated with an existential catastrophe, we must consider how much value would come to exist in its absence. It turns out that the ultimate potential for Earth-originating intelligent life is literally astronomical. One gets a large number even if one confines one’s consideration to the potential for biological human beings living on Earth. If we suppose with Parfit that our planet will remain habitable for at least another billion years, and we assume that at least one billion people could live on it sustainably, then the potential exist for at least 10^18 human lives. These lives could also be considerably better than the average contemporary human life, which is so often marred by disease, poverty, injustice, and various biological limitations that could be partly overcome through continuing technological and moral progress.

However, the relevant figure is not how many people could live on Earth but how many descendants we could have in total. One lower bound of the number of biological human life-years in the future accessible universe (based on current cosmological estimates) is 10^34 years.[10] Another estimate, which assumes that future minds will be mainly implemented in computational hardware instead of biological neuronal wetware, produces a lower bound of 10^54 human-brain-emulation subjective life-years (or 10^71 basic computational operations).[11] If we make the less conservative assumption that future civilizations could eventually press close to the absolute bounds of known physics (using some as yet unimagined technology), we get radically higher estimates of the amount of computation and memory storage that is achievable and thus of the number of years of subjective experience that could be realized.[12]

Even if we use the most conservative of these estimates, which entirely ignores the possibility of space colonization and software minds, we find that the expected loss of an existential catastrophe is greater than the value of 10^18 human lives. This implies that the expected value of reducing existential risk by a mere one millionth of one percentage point is at least ten times the value of a billion human lives. The more technologically comprehensive estimate of 10^54 human-brain-emulation subjective life-years (or 10^52 lives of ordinary length) makes the same point even more starkly. Even if we give this allegedly lower bound on the cumulative output potential of a technologically mature civilization a mere 1% chance of being correct, we find that the expected value of reducing existential risk by a mere one billionth of one billionth of one percentage point is worth a hundred billion times as much as a billion human lives.

One might consequently argue that even the tiniest reduction of existential risk has an expected value greater than that of the definite provision of any “ordinary” good, such as the direct benefit of saving 1 billion lives. And, further, that the absolute value of the indirect effect of saving 1 billion lives on the total cumulative amount of existential risk—positive or negative—is almost certainly larger than the positive value of the direct benefit of such an action.[13]
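A quick arithmetic check on the figures in the preceding card (a sketch only, taking the card's own estimates of 10^18 Earth-bound lives and 10^52 lives of ordinary length at face value):

\[
10^{18}\ \text{lives} \times \underbrace{10^{-8}}_{\text{one millionth of one percentage point}} \;=\; 10^{10}\ \text{lives} \;=\; 10 \times \bigl(10^{9}\ \text{lives}\bigr),
\]
\[
\underbrace{\bigl(10^{52}\ \text{lives} \times 10^{-2}\bigr)}_{1\%\ \text{credence in the estimate}} \times \underbrace{10^{-20}}_{\text{one billionth of one billionth of one percentage point}} \;=\; 10^{30}\ \text{lives} \;\gg\; \underbrace{10^{11} \times 10^{9}}_{\text{a hundred billion} \times \text{a billion}} \;=\; 10^{20}\ \text{lives}.
\]

Both lines reproduce the card's claims ("at least ten times the value of a billion human lives" and "worth a hundred billion times as much as a billion human lives," the latter with room to spare).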

Non-existential impacts aren’t a big deal because humanity can recover. Extinction is forever. Bostrom 2 — Nick Bostrom, Professor in the Faculty of Philosophy & Oxford Martin School, Director of the Future of Humanity Institute, and Director of the Programme on the Impacts of Future Technology at the University of Oxford, recipient of the 2009 Eugene R. Gannon Award for the Continued Pursuit of Human Advancement, holds a Ph.D. in Philosophy from the London School of Economics, 2002 (“Existential Risks: Analyzing Human Extinction Scenarios and Related Hazards,” Journal of Evolution and Technology, Volume 9, Number 1, Available Online at http://www.nickbostrom.com/existential/risks.html, Accessed 07-04-2011)2 The unique challenge of existential risksRisks in this sixth category are a recent phenomenon . This is part of the reason why it is useful to distinguish them from other risks . We have not evolved mechanisms, either biologically or culturally, for managing such risks. Our intuitions and coping strategies have been shaped by our long experience with risks such as dangerous animals, hostile individuals or tribes, poisonous foods, automobile accidents, Chernobyl, Bhopal, volcano eruptions, earthquakes, draughts, World War I, World War II, epidemics of influenza, smallpox, black plague, and AIDS. These types of disasters have occurred many times and our cultural attitudes towards risk have been shaped by trial-and-error in managing such hazards . But tragic as such events are to the people immediately affected, in the big picture of things – from the perspective of humankind as a whole – even the worst of these catastrophes are mere ripples on the surface of the great sea of life . They haven’t significantly affected the total amount of human suffering or happiness or determined the long-term fate of our species.

Predictions about existential risk are possible and necessary. Bostrom 9 — Nick Bostrom, Professor in the Faculty of Philosophy & Oxford Martin School, Director of the Future of Humanity Institute, and Director of the Programme on the Impacts of Future Technology at the University of Oxford, recipient of the 2009 Eugene R. Gannon Award for the Continued Pursuit of Human Advancement, holds a Ph.D. in Philosophy from the London School of Economics, 2009 (“The Future of Humanity,” Geopolitics, History and International Relations, Volume 9, Issue 2, Available Online to Subscribing Institutions via ProQuest Research Library, Reprinted Online at http://www.nickbostrom.com/papers/future.pdf, Accessed 07-06-2011, p. 2-4)We need realistic pictures of what the future might bring in order to make sound decisions . Increasingly, we need realistic pictures not only of our personal or local near-term futures, but also of remoter global futures . Because of our expanded technological powers, some human activities now have significant global impacts . The scale of human social organization has also grown, creating new opportunities for coordination and action, and there are many institutions and

individuals who either do consider, or claim to consider, or ought to consider,

possible long-term global impacts of their actions. Climate change , national and international security , economic development , nuclear waste disposal, biodiversity, natural resource conservation, population policy,

and scientific and technological research funding are examples of policy areas that involve long time-horizons. Arguments in these areas often rely on implicit assumptions about the future of humanity . By making these assumptions explicit , and subjecting them to critical analysis , it might be possible to address some of the big challenges for humanity in a more well-considered and thoughtful manner.The fact that we “need” realistic pictures of the future does not entail that we can have them. Predictions about future technical and social developments are notoriously unreliable – to an extent that have lead some to propose that we do away with prediction altogether in our planning and preparation for the future. Yet while the methodological problems of such forecasting are certainly very significant, the extreme view that we can or should do away with prediction altogether is misguided . That view is expressed, to take one [end page 2] example, in a recent paper on the societal implications of nanotechnology by Michael Crow and Daniel Sarewitz, in which they argue that the issue of predictability is “irrelevant”:preparation for the future obviously does not require accurate prediction; rather, it requires a foundation of knowledge upon which to base action, a capacity to learn from experience, close attention to what is going on in the present, and healthy and resilient institutions that can effectively respond or adapt to change in a timely manner.2

Note that each of the elements Crow and Sarewitz mention as required for the preparation for the future relies in some way on accurate prediction. A capacity to learn from experience is not useful for preparing for the future unless we can correctly assume (predict) that the lessons we derive from the past will be applicable to future situations. Close attention to what is going on in the present is likewise futile unless we can assume that what is going on in the present will reveal stable trends or otherwise shed light on what is likely to happen next. It also requires non-trivial prediction to figure out what kind of institution will prove healthy, resilient, and effective in responding or adapting to future changes.

The reality is that predictability is a matter of degree , and different aspects of the future are predictable with varying degrees of reliability and precision.3 It may often be a good idea to develop plans that are flexible and to pursue policies that are robust under a wide range of contingencies. In some cases, it also makes sense to adopt a reactive approach that relies on adapting quickly to changing circumstances rather than pursuing any detailed long-term plan or explicit agenda. Yet these coping strategies are only one part of the solution . Another part is to work to improve the accuracy of our beliefs about the future (including

the accuracy of conditional predictions of the form “if x is done, y will result”). There might be traps that we are walking towards that we could only avoid falling into by means of foresight. There are also opportunities that

we could reach much sooner if we could see them farther in advance. And in a strict sense, prediction is always necessary for meaningful decision-making.4

Predictability does not necessarily fall off with temporal distance. It may be highly unpredictable where a traveler will be one hour after the start of her

journey, yet predictable that after five hours she will be at her destination. The very long-term future of humanity may be relatively easy to predict , being a matter amenable to study by the natural sciences, particularly cosmology (physical eschatology). And for there to be a degree of predictability, it is not necessary that it be possible to identify one specific scenario as what will definitely happen . If there is a t least some scenario that can be ruled out, that is also a degree of predictability . Even short of this, if there is some basis for assigning different probabilities [end page 3] (in the sense of credences, degrees of belief) to different propositions about logically possible future events, or some basis for criticizing some such probability distributions as less rationally defensible or reasonable than others, then again there is a degree of predictability. And this is surely the case with regard to many aspects of the future of humanity. While our knowledge is insufficient to narrow down the space of possibilities to one broadly outlined future for humanity, we do know of many relevant arguments and considerations which in combination impose significant constraints on what a plausible view of the future could look like. The future of humanity need not be a topic on which all assumptions are entirely arbitrary and anything goes. There is a vast gulf between knowing exactly what will happen and having absolutely no clue about what will happen. Our actual epistemic location is some offshore place in that gulf.5

Extinction?

Dramatizing impacts as existential risks replaces risk assessment with worst-case thinking.

Furedi 10 — Frank Furedi, Professor of Sociology at the University of Kent at Canterbury, holds a Ph.D. from the School of Oriental and African Studies at London University, 2010 (“Fear is key to irresponsibility,” The Australian, October 9th, Available Online at http://www.theaustralian.com.au/news/opinion/fear-is-key-to-irresponsibility/story-e6frg6zo-1225935797740, Accessed 10-18-2010)

In the 21st century the optimistic belief in humanity’s potential for subduing the unknown and to become master of its fate has given way to the belief that we are too powerless to deal with the perils confronting us.

We live in an era where problems associated with uncertainty and risk are amplified and , through our imagination, mutate swiftly into existential threats . Consequently, it is rare that unexpected natural events are treated as just that.

Rather, they are swiftly dramatised and transformed into a threat to human survival. The clearest expression of this tendency is the dramatisation of weather forecasting.Once upon a time the television weather forecasts were those boring moments when you got up to get a snack. But with the invention of concepts such as “extreme weather”, routine events such as storms, smog or unexpected snowfalls have acquired compelling entertainment qualities.This is a world where a relatively ordinary, technical, information-technology problem such as the so-called millennium bug was interpreted as a threat of apocalyptic proportions, and where a flu epidemic takes on the dramatic weight of the plot of a Hollywood disaster movie.Recently, when the World Health Organisation warned that the human species was threatened by the swine flu, it became evident that it was cultural prejudice rather than sober risk assessment that influenced much of present-day official thinking.In recent times European culture has become confused about the meaning of uncertainty and risk. Contemporary Western cultural attitudes towards uncertainty, chance and risk are far more pessimistic and confused than they were through most of the modern era. Only rarely is uncertainty perceived as an opportunity to take responsibility for our destiny. Invariably uncertainty is represented as a marker for danger and change is often regarded with dread.Frequently, worst-case thinking displaces any genuine risk- assessment process . Risk assessment is based on an attempt to calculate the probability of different outcomes. Worst-case thinking—these days known as precautionary thinking—is based on an act of imagination . It imagines the worst-case scenario and demands that we take action on that basis.

This causes serial policy failure — acting based on worst-case possibilities ruins decision-making. Evans 12 — Dylan Evans, Lecturer in Behavioral Science at University College Cork School of Medicine, holds a Ph.D. in Philosophy from the London School of Economics, 2012 (“Nightmare Scenario: The Fallacy of Worst-Case Thinking,” Risk Management, April 2nd, Available Online at http://www.rmmagazine.com/2012/04/02/nightmare-scenario-the-fallacy-of-worst-case-thinking/, Accessed 10-10-2013)

There’s something mesmerizing about apocalyptic scenarios. Like an alluring femme fatale, they exert an uncanny pull on the imagination. That is why what security expert Bruce Schneier calls “worst-case thinking” is so dangerous. It substitutes imagination for thinking, speculation for risk analysis and fear for reason.

One of the clearest examples of worst-case thinking was the so-called “1% doctrine,” which Dick Cheney is said to have advocated while he was vice president in the George W. Bush administration. According to journalist Ron Suskind, Cheney first proposed the doctrine at a meeting with CIA Director George Tenet and National Security Advisor Condoleezza Rice in November 2001.Responding to the thought that Al Qaeda might want to acquire a nuclear weapon, Cheney apparently remarked: “If there’s a 1% chance that Pakistani scientists are helping Al Qaeda build or develop a nuclear weapon, we have to treat it as a certainty in terms of our response. It’s not about our analysis…It’s about our response.”By transforming low-probability events into complete certainties whenever the events are particularly scary, worst-case thinking leads to terrible decision making . For one thing, it’s only half of the cost/benefit

equation. “Every decision has costs and benefits, risks and rewards,” Schneier points out. “By speculating about what can possibly go wrong, and then acting as if that is likely to happen, worst-case thinking focuses only on the extreme but improbable risks and does a poor job at assessing outcomes.”

Hyperbolic extinction impacts undermine effective decision-making. Gross and Gilles 12 — Mathew Barrett Gross, New Media Strategist who served as the Director of Internet Communications for Howard Dean's 2004 presidential campaign, and Mel Gilles, Director of Sol Kula Yoga and Healing, 2012 (“How Apocalyptic Thinking Prevents Us from Taking Political Action,” The Atlantic, April 23rd, Available Online at http://www.theatlantic.com/politics/archive/2012/04/how-apocalyptic-thinking-prevents-us-from-taking-political-action/255758/, Accessed 10-10-2013)Flip through the cable channels for long enough, and you'll inevitably find the apocalypse. On Discovery or National Geographic or History you'll find shows like MegaDisasters, Doomsday Preppers, or The Last Days on Earth chronicling, in an hour of programming, dozens of ways the world might end: a gamma ray burst from a nearby star peeling away the Earth's ozone layer like an onion; a mega-volcano erupting and plunging our planet into a new ice age; the magnetic poles reversing. Turn to a news channel, and the headlines appear equally apocalyptic, declaring that the "UN Warns of Rapid Decay in Environment" or that "Humanity's Very Survival" is at risk. On another station, you'll find people arguing that the true apocalyptic threat to our way of life is not the impending collapse of ecosystems and biodiversity but the collapse of the dollar as the world's global currency. Change the channel again, and you'll see still others insisting that malarial mosquitoes, drunk on West Nile virus, are the looming specter of apocalypse darkening our nation's horizon.How to make sense of it all? After all, not every scenario can be an apocalyptic threat to our way of life -- can it? For many, the tendency is to dismiss all the potential crises we are facing as overblown: perhaps cap and trade is just a smoke screen designed to earn Al Gore billions from his clean-energy investments; perhaps terrorism is just an excuse to increase the power and reach of the government. For others, the panoply of potential disasters becomes overwhelming, leading to a distorted and paranoid vision of reality and the threats facing our world -- as seen on shows like Doomsday Preppers. Will an epidemic wipe out humanity, or could a meteor destroy all life on earth? By the time you're done watching Armageddon Week on the History Channel, even a rapid reversal of the world's magnetic poles might seem terrifyingly likely and imminent.The last time apocalyptic anxiety spilled into the mainstream to the extent that it altered the course of history -- during the Reformation -- it relied on a revolutionary new communications technology: the

printing press. In a similar way, could the current surge in apocalyptic anxiety be attributed in part to our own revolution in communications technology?The media, of course, have long mastered the formula of packaging remote possibilities as urgent threats , as sociologist Barry Glassner pointed out in his bestseller The Culture of Fear. We're all familiar with the formula: "It's worse than you think," the anchor intones before delivering an alarming report on date-rape drugs, stalking pedophiles, flesh-eating bacteria, the Ebola virus (née avian flu cum swine flu). You name it (or rename it): if a threat has even a remote chance of materializing, it is treated as an imminent inevitability by television news. It's not just that if it bleeds, it leads. If it might bleed, it still leads. Such sensationalist speculation attracts eyeballs and sells advertising, because fear sells -- and it can sell everything from pharmaceuticals to handguns to duct tape to insurance policies. "People react to fear, not love," Richard Nixon once said. "They don't teach that in Sunday school, but it's true."Nothing inspires fear like the end of the world, and ever since Y2K, the media's tendency toward overwrought speculation has been increasingly married to the rhetoric of apocalypse. Today, nearly any event can be explained through apocalyptic language, from birds falling out of the sky (the Birdocalypse?) to a major nor'easter (Snowmageddon!) to a double-dip recession (Barackalypse! Obamageddon!). Armageddon is here at last -- and your local news team is live on the scene! We've seen the equivalent of grade inflation (A for Apocalypse!) for every social, political, or ecological challenge before us, an escalating game of one- upmanship to gain the public's attention . Why worry about global warming and rising sea levels when the collapse of the housing bubble has already put your mortgage underwater? Why worry that increasing droughts will threaten the supply of drinking water in America's major cities when a far greater threat lies in the possibility of an Arab terrorist poisoning that drinking supply, resulting in millions of casualties?Yet not all of the crises or potential threats before us are equal, nor are they equally probable – a fact that gets glossed over when the media equate the remote threat of a possible event, like

epidemics, with real trends like global warming.Over the last decade, the 24-hour news cycle and the proliferation of media channels has created ever-more apocalyptic content that is readily available to us, from images of the Twin Towers falling in 2001 to images of the Japanese tsunami in 2011. So, too, have cable channels like Discovery and History married advances in computer-generated imagery with emerging scientific understanding of our planet and universe to give visual validity to the rare and catastrophic events that have occurred in the past or that may take place in the distant future. Using dramatic, animated images and the language of apocalypse to peddle such varied scenarios, however, has the effect of leveling the apocalyptic playing field , leaving the viewer with the impression that terrorism, bird flu, global warming, and asteroids are all equally probable . But not all of these apocalyptic scenarios are equally likely , and they're certainly not equally likely to occur within our lifetimes – or in our neighborhoods. For example, after millions of Americans witnessed the attacks of 9/11 on television, our collective fear of terrorism was much higher than its actual probability; in 2001, terrorists killed one-twelfth as many Americans as did the flu and one-fifteenth as many Americans as did car accidents. Throughout the first decade of the 21st century, the odds of an American being killed by a terrorist were about 1 in 88,000 -- compared to a 1 in 10,010 chance of dying from falling off a ladder. The fears of an outbreak of SARS, avian flu, or swine flu also never lived up to their media hype.

This over-reliance on the apocalyptic narrative causes us to fear the wrong things and to mistakenly equate potential future events with current and observable trends . How to discern the difference between so many apocalyptic options? If we ask ourselves three basic questions about the many threats portrayed apocalyptically in the media, we are able to separate the apocalyptic wheat from the chaff. Which scenarios are probable? Which are preventable? And what is the likely impact of the worst-case model of any given threat?

Worst-case thinking structurally prevents rational decision-making. Schneier 10 — Bruce Schneier, internationally renowned security technologist who was described by The Economist as a “Security Guru,” currently works as the Chief Security Technology Officer for BT—a global telecommunications services company, holds an M.A. in Computer Science from American University, 2010 (“Worst-case thinking makes us nuts, not safe,” CNN, May 12th, Available Online at http://www.cnn.com/2010/OPINION/05/12/schneier.worst.case.thinking/, Accessed 10-18-2010)At a security conference recently, the moderator asked the panel of distinguished cybersecurity leaders what their nightmare scenario was. The answers were the predictable array of large-scale attacks: against our communications infrastructure, against the power grid, against the financial system, in combination with a physical attack. I didn't get to give my answer until the afternoon, which was: "My nightmare scenario is that people keep talking about their nightmare scenarios ."

There's a certain blindness that comes from worst-case thinking. An extension of the precautionary principle, it involves imagining the worst possible outcome and then acting as if it were a certainty.

It substitutes imagination for thinking, speculation for risk analysis, and fear for reason. It fosters powerlessness and vulnerability and magnifies social paralysis . And it makes us more vulnerable to the effects of terrorism.

Worst-case thinking means generally bad decision making for several reasons. First, it's only half of the cost-benefit equation. Every decision has costs and benefits, risks and rewards. By speculating about what can possibly go wrong, and then acting as if that is likely to happen, worst-case thinking focuses only on the extreme but improbable risks and does a poor job at assessing outcomes.

Second, it's based on flawed logic. It begs the question by assuming that a proponent of an action must prove that the nightmare scenario is impossible. Third, it can be used to support any position or its opposite. If we build a nuclear power plant, it could melt down. If we don't build it, we will run short of power and society will collapse into anarchy. If we allow flights near Iceland's volcanic ash, planes will crash and people will die. If we don't, organs won’t arrive in time for transplant operations and people will die. If we don't invade Iraq, Saddam Hussein might use the nuclear weapons he might have. If we do, we might destabilize the Middle East, leading to widespread violence and death. Of course, not all fears are equal. Those that we tend to exaggerate are more easily justified by worst-case thinking. So terrorism fears trump privacy fears, and almost everything else; technology is hard to understand and therefore scary; nuclear weapons are worse than conventional weapons; our children need to be protected at all costs; and annihilating the planet is bad. Basically, any fear that would make a good movie plot is amenable to worst-case thinking.

Decision-making is impossible under their framework. Mueller and Stewart 11 — John Mueller, Professor and Woody Hayes Chair of National Security Studies at the Mershon Center for International Security Studies and Professor of Political Science at Ohio State University, holds a Ph.D. in Political Science from the University of California-Los Angeles, and Mark G. Stewart, Professor of Civil Engineering and Director of the Centre for Infrastructure Performance and Reliability at the University of Newcastle (Australia), holds a Ph.D. from the University of Newcastle (Australia), 2011 (“Assessing Risk,” Terror, Security, and Money: Balancing the Risks, Benefits, and Costs of Homeland Security, Published by Oxford University Press, ISBN 9780199795758, p. 15-17)Analyst Bruce Schneier has written penetratingly of worst-case thinking. He points out that it

involves imagining the worst possible outcome and then acting as if it were a certainty. It substitutes imagination for thinking, speculation for risk analysis, and fear for reason. It fosters powerlessness and vulnerability and magnifies social paralysis . And it makes us more vulnerable to the effects of terrorism.

It leads to bad decision making because it’s only half of the cost-benefit equation. Every decision has costs and benefits, risks and rewards. By speculating about what can possibly go wrong, and then acting as if that is likely to happen, worst-case thinking focuses only on the extreme but improbable risks and does a poor job at assessing outcomes. [end page 15]

It also assumes “that a proponent of an action must prove that the nightmare scenario is impossible,” and it “can be used to support any position or its opposite. If we build a nuclear power plant, it could melt down. If we don’t build it, we will run short of power and society will collapse into anarchy.” And worst, it “validates ignorance” because, “instead of focusing on what we know, it focuses on what we don’t know—and what we can imagine.” In the process, “risk assessment is devalued” and “probabilistic thinking is repudiated in favor of possibilistic thinking.”6

As Schneier also notes, worst-case thinking is the driving force behind the precautionary principle, a decent working definition of which is “action should be taken to correct a problem as soon as there is evidence that harm may occur, not after the harm has already occurred.”7 It could be seen in action less than a week after 9/11, when President George W. Bush outlined his new national security strategy: “We cannot let our enemies strike first… [but must take] anticipatory action to defend ourselves, even if uncertainty remains as to the time and place of the enemy’s attack. To forestall or prevent such hostile acts by our adversaries, the United States, will, if necessary, act preemptively…. America will act against such emerging threats before they are fully formed.”8 The 2003 invasion of Iraq, then, was justified by invoking the precautionary principle based on

the worst-case scenario in which Saddam Hussein might strike. If, on the other hand, any worst-case thinking focused on the potential for the destabilizing effects a war would have on Iraq and the region, the precautionary principle would guide one to be very cautious about embarking on war. As Sunstein notes, the precautionary principle “offers no guidance —not that it is

wrong, but that it forbids all courses of action, including regulation.” Thus, “taken seriously, it is paralyzing , banning the very steps that it simultaneously requires.”9 It can be invoked in equal measure to act or not to act.There are considerable dangers in applying the precautionary principle to terrorism: on the one hand, any action taken to reduce a presumed risk always poses the introduction of countervailing risks, while on the other, larger, expensive counterterrorism efforts will come accompanied by high opportunity costs.10 Moreover, “For public officials no less than the rest of us, the probability of harm matters a great deal, and it is foolish to attend exclusively to the worst case scenario.”11A more rational approach to worst-case thinking is to establish the likelihood of gains and losses from various courses of action, including staying the current course.12 This, of course, is the essence of risk assessment. What is necessary is due consideration to the spectrum of threats, not simply the worst one imaginable , in order to properly understand, and coherently [end page 16] deal with , the risks to people, institutions, and the economy. The relevant decision makers are professionals, and it is not unreasonable to suggest that they should do so seriously . Notwithstanding political pressures

(to be discussed more in chapter 9), the fact that the public has difficulties with probabilities when emotions are involved does not relieve those in charge of the requirement, even the duty , to make decisions about the expenditures of vast quantities of public monies in a responsible manner .

Existential Risk Credibility DA — hyping up a low-probability existential threat undermines the credibility of all existential risk mitigation. This turns their impact. Anissimov 8 — Michael Anissimov, science and technology writer specializing in futurism, founding director of the Immortality Institute—a non-profit organization focused on the abolition of nonconsensual death, member of the World Transhumanist Association, associate of the Institute for Accelerating Change, member of the Center for Responsible Nanotechnology's Global Task Force, 2008 (“Ideas for Mitigating Extinction Risk,” Accelerating Future—Michael Anissimov’s futurism blog, September 23rd, Available Online at http://www.acceleratingfuture.com/michael/blog/2008/09/ideas-for-mitigating-extinction-risk/, Accessed 09-09-2011)

As I see it, there are three main categories of risk: bio, nano, and AI/robotics. These man-made risks make up the vast majority of the threat magnitude over the coming century and deserve most of the attention. Threats of low probability include asteroid strikes, supervolcano eruptions, alien invasions, simulation getting shut down, and many others. Though there is disagreement on whether nuclear war, particle accelerator disasters, or runaway climate change deserve to be counted as substantial-probability extinction threats over the coming century, I would say they are not. A word on focusing on low probability threats alongside higher probability threats. Mentioning low probability threats just for the sake of comprehensiveness is rhetorically damaging. It distracts from the central thrust by introducing superfluous information. Worse, it can damage credibility of the entire message. Whether fair or unfair, we have seen the doom-worriers of the Large Hadron Collider heavily maligned by both scientists and laypeople in print and online. Even if an x-risk mitigator thought there was some probability of planetary doom due to the LHC, say one in a hundred thousand, the credibility sacrifice of pushing the issue is bound to detract from one’s ability to advocate mitigation of other, much higher-probability threats. So it should be avoided. Of course, if the LHC occupies a dominant portion of the risk pie in one’s personal estimate, it would be rational to devote attention to that, despite the credibility penalty.

The “one-percent doctrine” is incoherent and destroys decision-making. Meskill 9 — David Meskill, Assistant Professor of History at Dowling College, holds a Ph.D. in Modern European History from Harvard University, 2009 (“The "One Percent Doctrine" and Environmental Faith,” A Little Knowledge—David Meskill’s blog, December 9th, http://davidmeskill.blogspot.com/2009/12/one-percent-doctrine-and-environmental.html, Accessed 09-06-2011)Tom Friedman's piece today in the Times on the environment (http://www.nytimes.com/2009/12/09/opinion/09friedman.html?_r=1) is one of the flimsiest pieces by a major columnist that I can remember ever reading. He applies Cheney's "one percent doctrine" (which is similar to the environmentalists' "precautionary principle") to the risk of

environmental armageddon. But this doctrine is both intellectually incoherent and practically irrelevant. It is intellectually incoherent because it cannot be applied consistently in a world with many potential disaster scenarios. In addition to the global-warming risk, there's also the asteroid-hitting-the-earth risk, the terrorists-with-nuclear-weapons risk (Cheney's original scenario), the super-duper-pandemic risk, etc. Since each of these risks, on the "one percent doctrine," would deserve all of our attention, we cannot address all of them simultaneously . That

is, even within the one-percent mentality, we'd have to begin prioritizing, making choices and trade-offs . But why then should we only make these trade-offs between responses to disaster scenarios ? Why not also choose between them and other, much more cotidien, things

we value? Why treat the unlikely but cataclysmic event as somehow fundamentally different , something that cannot be integrated into all the other calculations we make? And in fact, this is how we behave all the time . We get into our cars in order to buy a cup of coffee, even though there's some chance we will be killed on the way to the coffee shop. We are constantly risking death, if slightly, in order to pursue the things we value. Any creature that adopted the "precautionary principle" would sit at home - no, not even there, since there is some chance the building might

collapse. That creature would neither be able to act, nor not act, since it would nowhere discover perfect safety .

Their risk framing is an excuse to make bad decisions. Yglesias 6 — Matthew Yglesias, political columnist, blogger, and writer, former editor-in-chief of The Harvard Independent, current columnist for The Atlantic Monthly, and B.A. from Harvard University, 2006 (“The One Percent Doctrine,” Talking Points Memo Café, July 7th, Available Online at http://www.tpmcafe.com/blog/yglesias/2006/jul/07/the_one_percent_doctrine, Accessed 10-29-2007)Suskind tells a story in which you have an administration that, at the beginning, simply doesn't know very much about international terrorism and doesn't care to learn much more about it. They think other things are more important, they have a limited amount of time, and so they're focused on that. Then comes 9/11 – a devastating event. And, of course, at that time nobody knew what else might be coming down the pike. Decisions needed to be made, and quickly, by people who weren't especially well-versed in the situation.Thus, for reasons that under the circumstances you have to consider at least understandable, the decision was made to radically reduce the evidentiary threshold for taking action and the procedural constraints on action .I think you can see why this seemed like a good idea. The US government has all these giant security institutions at its disposal, and the basic first instinct was that in the wake of 9/11 they needed to be unleashed or unshackled or the metaphor of your choice. The trouble is that, in practice, you need evidentiary standards to make sure that your institutions are doing something meaningful with their time. Saidi's case doesn't make it into the book, but it's totally typical of the sort of stuff that wound up going down. Under the "one percent" theory of threat-mitigation lots and lots of time was spent interrogating people who didn't really know anything. "Agressive" interrogation methods were approved. So you had lots of people who didn't really no anything being leaned on extremely hard to say something which naturally took you from a medium-sized pool of detainees who were often the wrong people, to a giant pool of basically useless information.Then people had to go track that down. And so on and so forth – wastes of time.The absence of procedural constraints and the perceived need for secrecy, meanwhile, let all sorts of dirty dealing enter into the picture. When things are done that don't work, they don't come to light. Positive spin is put on actions, sometimes by the White House sometimes by the mid-level figures taking the actions. Things that were set in motion in the panicked days of September and October 2001 don't get corrected or altered as new information becomes available.What's more, the "one percent" principle is , when you get down to it, totally useless as a guide to action . You simply can't take decisive action to counter everything that has a 1-in-a-100 chance of being a threat. It's meaningless . But it's a good excuse – a rationalization – for not acquiring better information , for not learning about the problem , for not thinking about the long-term viability of anything , and for basically proceeding on the basic of knee-jerk reactions and longstanding beliefs . Anything or nothing can be justified from within this framework , and operating outside the boundaries of law and oversight nobody needs to know about it even though the whole thing's gone horribly awry.

Bostrom doesn’t support their framing. Bostrom 13 — Nick Bostrom, Professor in the Faculty of Philosophy & Oxford Martin School, Director of the Future of Humanity Institute, and Director of the Programme on the Impacts of Future Technology at the University of Oxford, recipient of the 2009 Eugene R. Gannon Award for the Continued Pursuit of Human Advancement, holds a Ph.D. in Philosophy from the London School of Economics, 2013 (“Interview with Nick Bostrom,” H+ Magazine, March 12th, Available Online at http://hplusmagazine.com/2013/03/12/interivew-with-nick-bostrom/, Accessed 10-16-2013)You need to care about getting things right . Even then, it’s quite difficult to do it . But if it’s in an arena where the discourse is shaped by a lot of other things than the quest for truth , if

people use the arena for political purposes or for self promotion purposes, or to tell interesting stories, to make money… If there are all these other roles that our statements play, then the truth is a very weak signal and it will be drowned out by this noise and distortion.