
US Government Computer Penetration Programs and the Implications for Cyberwar

Edward Hunt
College of William & Mary

The US Department of Defense was the driving force behind the development of sophisticated computer penetration methodologies. By analyzing the security of the nation's time-sharing computer systems, security analysts developed an expert understanding of computer penetration. Eventually, the US and its intelligence agencies utilized computer penetration techniques to wage offensive cyberattacks.

In January 2011, the journalists William Broad, John Markoff, and David Sanger reported in the New York Times that a computer worm called Stuxnet, "the most sophisticated cyberweapon ever deployed," had caused significant delays to Iran's nuclear enrichment program.1 "The sheer size of the code was staggering," reported Sharon Weinberger in Nature, noting that Stuxnet featured "some 15,000 lines of code, representing an estimated 10,000 person hours in software development."2 A team of security analysts at the software company Symantec, which published a thorough technical analysis of the cyberweapon, believed that "Stuxnet is of such great complexity—requiring significant resources to develop—that few attackers will be capable of producing a similar threat."3

Given the cyberweapon's capability to implement "direct-attack attempts on critical infrastructure," the Symantec analysts concluded that "Stuxnet is the type of threat we hope to never see again."3

US President Barack Obama came to a different conclusion after the initial public disclosure of the Stuxnet weapon. According to the journalist David Sanger, President Obama "decided that the cyberattacks should proceed."4

In fact, Obama had "secretly ordered increasingly sophisticated attacks on the computer systems that run Iran's main nuclear enrichment facilities," expanding a covert program code-named Olympic Games that had begun during the Bush administration.4

Of course, American involvement should come as no surprise, particularly since Richard Clarke and Robert Knake had acknowledged in 2010 in their book Cyber War that the US "has perhaps led [the world] in cyber espionage and the creation of cyber war tools" and today "very likely possesses the most sophisticated offensive cyber war capabilities."5 As an example, Clarke and Knake pointed to the "malicious code" that the Central Intelligence Agency (CIA) had created in the early 1980s to sabotage the trans-Siberian pipeline in the Soviet Union, leading to "the most massive non-nuclear explosion ever recorded, over three kilotons."6

Interestingly, the attack against the Soviets, unknown to the world until only a few years ago,7 coincided with another major report in the New York Times in late 1983 by William Broad, who sought to implicate the Soviets as a major threat to US computer security. According to Broad, Reagan administration officials had suspected that in 1981 "the Russians managed, by getting onto an international computer network, to break into two large private computers in the West."8 The allegations, which leading experts quickly dismissed as sensational9 and that now appear tame compared to the attack against the Soviet pipeline, suggest that the US has indeed led the world in the creation of cyberwar tools. In his article, Broad nicely summarized the groundbreaking US efforts to assess computer security vulnerabilities:

[T]echnical threats . . . were first described in detail in the early 1970s by Dr. [Willis] Ware of the RAND Corporation . . . The [Ware] report showed how spies could actively penetrate computers, steal or copy electric files and subvert the devices that normally guard top-secret information.

The study touched off more than a decade of quiet activity by elite groups of computer scientists working for the Government who tried to break into sensitive computers. They succeeded in every attempt.8

IEEE Annals of the History of Computing, 1058-6180/12/$31.00 © 2012 IEEE. Published by the IEEE Computer Society.

Undoubtedly, Broad meant to describe the efforts made by "government" scientists to understand the threats to US systems, always with the hope of improving their security. At the time, few analysts openly suggested that the "careful insertion of a few well-written trap doors and Trojan horses into the software (or wired into the hardware) of computers sold to potentially hostile countries would be a reliable and virtually undetectable intelligence asset," as Navy Lieutenants Peter Grant and Robert Riche had done in July 1983.10 For the most part, specialists from academia, government, industry, and the media remained silent about the use of computer subversion as a weapon.

In recent years, a few noticeable exceptions to the continuing silence have appeared in the popular media. In the late 1990s, the journalist David Fulghum highlighted the US government's growing interest in cyberwar in his numerous reports for Aviation Week & Space Technology. In one article, Fulghum described how the US had used "[o]ffensive computer warfare . . . as a precision weapon during the Kosovo conflict" in 1999.11 A radar specialist at the Pentagon had informed Fulghum that the US had probably "put false targets into [Serbia's] air defense network through its communications links."11 John Arquilla, an associate professor at the Naval Postgraduate School and an analyst at RAND, similarly informed the PBS series Frontline for its 2003 episode "Cyber War!" that "some means may have been used to distort the images that the Serbian integrated air defense systems were generating."12 Arquilla also hinted that "we did some things to the systems of the Iraqis" during the first Gulf War.12 In fact, Fulghum had reported in November 1999 that senior military officials had confirmed that the penetration of a country's air defense system "was first attempted against Iraq during the 1990–91 Persian Gulf war."13

A series of articles in the New York Times from mid-2009 highlighted the Obama administration's interest in cyberwar, a topic that reached "religious intensity," according to military historian Daniel Kuehl.14 Some of "the latest in attack software . . . was developed by cryptologists at the N.S.A. [National Security Agency]," journalists Corey Kilgannon and Noam Cohen reported, making it no secret that the agency featured "most of the government's talent for breaking and making computer codes."15

Christopher Drew and John Markoff also reported that US military and intelligence agencies had turned to contractors such as Raytheon, Northrop Grumman, and General Dynamics to develop offensive capabilities.12

Nevertheless, few analysts have ever comprehensively described the US's cyberwar capabilities, perhaps "because so much of the subject matter is secret," as Clarke and Knake reasonably speculated in their book Cyber War.16

The emphasis on secrecy has changed little over the years, especially among those involved in the highest levels of security analysis. In the early 1970s, leading analysts Roger Schell and Paul Karger complained that "most reports of [computer] penetrations have severe (and often unjustified) distribution restrictions leaving very few documents in the public domain."17 Decades later, another highly respected security expert, Clark Weissman, still found "general access to this past experience . . . often restricted."18 Not until the mid-1990s did security specialist Mathew Bishop successfully organize and distribute a collection of the early, landmark computer security studies.19

Bishop's collection, supported by numerous academic papers already in the public domain, offers an illuminating look into the early history of the creation of cyberwar tools. According to Bishop, "virtually all of the important papers were produced under government contract."19 Indeed, the records suggest that the defense establishment, which includes the Department of Defense (DoD) and its closely linked allies in industry and academia, engineered many of the techniques used to break into computer systems through a constant stream of security testing.20

In this article, I have attempted to provide meaningful academic research into this early history, hoping to establish a more open dialogue, just as Clarke and Knake called for in their book Cyber War.21 More specifically, I intend to show that the defense establishment pioneered and created many of the tools used in modern-day cyberwarfare.22


July–September 2012


Notes on Secondary Literature

Scholars have basically ignored the formative role played by the defense establishment in the development of subversive computer techniques, leaving only a few exceptions in the literature. By the mid-1970s, computer crime had even become a serious topic of interest, leading to several popular books on the subject. Having documented close to 400 computer-related crimes, Donn Parker of the Stanford Research Institute (SRI) highlighted some of the more sensational incidents in his 1976 book Crime by Computer. Although Parker acknowledged the existence of "professional computer penetrators who test the security of computer systems," he barely explored their findings, focusing more on the acts committed by civilians.23

In the 1978 book Computer Capers, journalist Thomas Whiteside reinforced much of Parker's alarmist narrative, although Whiteside suggested that computer criminals who broke into government or corporate systems "have not employed highly sophisticated approaches."24 Instead, Whiteside found that "the more advanced techniques . . . perhaps observable in various exercises . . . have been carried out within the defense establishment."24 Officially sanctioned teams of security analysts attempted to "penetrate some of the most complex and supposedly secure computer systems," he reported.25 In one case, Whiteside referred to a study completed by the Naval Research Laboratory (NRL) that detailed "the successful, covert subversion of a Univac 1108 Exec VIII operating system" utilized by the military.26 The NRL penetration team had easily exploited a design problem publicly documented by Univac to embed trap doors in the system, granting the team covert access to encrypted, classified information. In spite of the obvious questions that might have arisen about the offensive potential of such practices, analysts such as Donn Parker instead turned the spotlight on the alleged criminals, whom he introduced to the media as "hackers."

Throughout the 1980s, scholars continued to ignore official penetration programs, especially as the popular media sensationalized the hacker phenomenon. Parker had relayed the hacker concept to the popular media in an interview with Time magazine in August 1983, warning about "a whole epidemic of malicious system hacking" taking place.27

Of course, Parker had also noted in his 1983 book Fighting Computer Crime that "there is no proof of a system hacker epidemic."28

Instead, the miscreants, as he described them, "flare up from time to time," giving them the characteristics of "a disease."29

Offering an altogether different perspective on hackers, the journalist Steven Levy romanticized their origins in his 1984 book Hackers. According to Levy, the idealist programmers consisted of "adventurers, visionaries, risk-takers, [and] artists" who shared "a common philosophy . . . of sharing, openness, decentralization, and getting your hands on machines at any cost—to improve the machines, and to improve the world."30

Undoubtedly, Levy held the minority view, even among scholars. In late 1985, law scholar Diana Smith contributed an article to the Criminal Justice Journal, asking, "Who is calling your computer next? Hacker!" Seizing upon a common view in the media, Smith described the typical hacker as "a high school dropout of better than average intelligence" who attempted to "access the computer networks of large corporations" as a sort of intellectual challenge.31

"Certainly, the hackers are the ones with expert capabilities," Smith claimed.32 Just a couple of years later, one journalist writing for the New York Times went even further, describing hackers as "electronic terrorists" who used Trojan horses to corrupt computer data.33 In spite of such sensational propaganda about the mysterious and amorphous hackers,34 the increased attention on computer vulnerabilities led to far more serious consideration of potential security threats.

For anyone seriously interested in assessing security vulnerabilities, officially sanctioned studies provided a reasonable starting point. The security analysts Deborah Russell and G.T. Gangemi, Sr. offered a rare glimpse into some of the early, publicly funded studies in their 1991 book Computer Security Basics. Notably, the authors described key events from the 1960s and 1970s that had led to the creation of tiger teams, or "government- and industry-sponsored teams of crackers who attempted to break down the defenses of computer systems in an effort to uncover, and eventually patch, security holes."35 The authors even attributed the hacker practice of probing systems for vulnerabilities to the formally developed "tiger team methodology."36

In late 1993, Wayne Madsen, a security specialist with ties to the NSA, offered a far more cautionary perspective about government and industry involvement. In an article published by the International Journal of Intelligence and Counter-Intelligence, Madsen warned about "state-sponsored and corporation-initiated digital eavesdropping schemes" initiated by nations all over the world, including the US.37 The threat of the "often youthful hacker" paled in comparison to the "dedicated and well-financed intelligence agencies," Madsen believed.38

Today, Pentagon officials would likely agree, believing that "the most-sophisticated computer attacks require the resources of a government," according to a recent report by Siobhan Gorman and Julian E. Barnes in the Wall Street Journal.39 In their report, the journalists noted that officials believed that "the weapons used in a major technological assault, such as taking down a power grid, would likely have to be developed with state support."39

Clearly, a hostile government with vast amounts of resources at its disposal poses a more serious threat to system security than any of the so-called computer hackers. Nevertheless, few scholars have explored the US defense establishment's offensive capabilities. The leading scholar on the history of computer security, Donald MacKenzie, discussed many of the early, officially sanctioned security studies in a chapter of his 2001 book Mechanizing Proof, but he largely ignored the formative role that the publicly funded tiger teams had played in creating successful penetration techniques.40 "RAND had done some penetration studies (experiments in circumventing computer security controls) of early time-sharing systems on behalf of the government," he only briefly acknowledged.41 Jeffrey Yost of the Charles Babbage Institute offered a similarly abbreviated account in his 2007 article, "A History of Computer Security Standards." According to Yost, "the RAND Corporation, and its spin-off, . . . the System Development Corporation (SDC) . . . [had] engaged in some of the first so-called 'penetration studies' to try to infiltrate time-sharing systems in order to test their vulnerability."42 Like MacKenzie, Yost focused his narrative on the development of computer security and standards. A number of leading penetration experts, including Roger Schell, acknowledge that even "today the subversion threat is rarely discussed" by security analysts, never mind historians.43

In recent months, the issue of subversion has taken on special urgency, especially since the Wall Street Journal reported in mid-2011 that computer subversion "coming from another country" could lead "the U.S. to respond using traditional military force."39 With military officials threatening to "put a missile down one of your smokestacks" in response to a cyberattack, it has become increasingly important to understand the cyber capabilities of the US as well.39

Penetration Defined

The available documentary record indicates that in June 1965, the SDC, a major government contractor with roots in the US Air Force and RAND,44 hosted one of the earliest, most influential conferences on computer security. Held in Santa Monica, California, the event united some of the country's leading computer experts, who represented institutions such as IBM, RAND, and the Lawrence Radiation Laboratory. At the time, Robert von Buelow, the Head of Laboratory Development and Operations Staff at SDC, explicitly warned his colleagues about the security of time-sharing computer systems, which granted external users access to their resources through communications lines. RAND's Willis Ware repeated von Buelow's warning, describing the unauthorized access of data as "the big problem with time-shared systems."45 In fact, SDC had recently converted its IBM Q-32 computer into a time-sharing system.

With the security implications in mind, SDC officials instructed one of their "experts" to "[s]it down and figure out all the ways you think you might be able to violate memory protection."46 Before long, SDC's expert had discovered more than a dozen ways to undermine the Q-32's safeguards. As a possible long-term solution, von Buelow envisioned an official agency that would one day collect and distribute information about "all the ways a system can be violated."46 Even more important, the conference marked one of the first occasions when the country's leading computer experts together requested "studies to be conducted in such areas as breaking security protection in the time-shared system."47

Shortly after the Santa Monica conference, security specialists introduced the "language of penetration" to describe an attack against a computer system. Ware, who headed RAND's Computer Sciences Department, organized a special paper session on computer security for the Spring 1967 Joint Computer Conference, hoping to engage the broader user communities. Alongside his colleagues Harold Petersen and Rein Turn, both of RAND, and Bernard Peters of the NSA, Ware warned that "deliberate attempts to penetrate" both military and nonmilitary systems "must be anticipated."48 Petersen and Turn agreed, citing "deliberate penetration" as a major risk to "remotely accessible on-line, time-shared information systems."49

In particular, Peters argued that "a penetrating program" could potentially compromise "large amounts of information" stored in time-sharing systems.50

Ware later credited these newly open discussions about security threats to the highly secretive NSA, which had taken a special interest in time-sharing technology. Since 1964, Ware had sat on the NSA's Scientific Advisory Board, a collection of the nation's premier technological experts from government, industry, and academia.51 Through his connections, he had grown familiar with "the development within the National Security Agency (NSA) of a remote-access time-sharing system, . . . running on a Univac 494 machine, and serving terminals and users not only within the [NSA] headquarters building . . . but also worldwide."52 With these types of systems in mind, Ware and his colleagues had grown increasingly concerned that the country's "systems might not be able to protect themselves and their data against intrusive and destructive attacks."52

In addition to introducing the threat of computer penetration at the Spring 1967 conference, Ware's special session established the foundation for security research, including the most fundamental methods of breaking into computer systems. Communications lines "are the most vulnerable part of the system," Petersen and Turn noted in a paper at the conference, citing "the well-developed art of listening in on voice conversations."53

Similarly, Ware warned that eavesdroppers could bug a system, enabling them "to pirate information."54 A more sophisticated attack, Ware believed, involved an "ingenious user who skillfully invades the software system," making changes that "give him 'ins' to normally unavailable information."55 Petersen and Turn described these special entry points as trap doors, or covert channels that granted "unscrupulous programmers" access to a system.56 In a pair of comprehensive tables, Petersen and Turn outlined their recommended countermeasures as well as numerous other threats, such as browsing for restricted files, masquerading as legitimate users, and piggy-backing into a system.57
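The trap door that Petersen and Turn described can be illustrated with a minimal, hypothetical sketch: an authentication routine into which a system's own programmer has quietly wired a hidden bypass. The function names, user table, and credentials below are invented for illustration and come from no historical system.

```python
import hashlib

# Hypothetical table of legitimate users (password hashes, not plaintext).
USER_TABLE = {
    "alice": hashlib.sha256(b"alice-secret").hexdigest(),
}

# The trap door: a credential wired in by the programmer, invisible
# to anyone who only audits the user table.
_TRAP_DOOR_USER = "maint"
_TRAP_DOOR_PASS = "letmein"

def authenticate(user: str, password: str) -> bool:
    """Check a login attempt against the user table."""
    # Covert bypass: grants access regardless of the user table.
    if user == _TRAP_DOOR_USER and password == _TRAP_DOOR_PASS:
        return True
    expected = USER_TABLE.get(user)
    if expected is None:
        return False
    return hashlib.sha256(password.encode()).hexdigest() == expected

# A legitimate login and the covert entry both succeed;
# ordinary wrong guesses still fail.
assert authenticate("alice", "alice-secret")
assert authenticate("maint", "letmein")      # the trap door
assert not authenticate("alice", "wrong")
```

The point of the example is that the safeguard itself behaves correctly for every ordinary user, which is precisely what makes such entry points so hard to detect from the outside.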

Ware depicted the possible points of attack in a special diagram in one of his papers, which included a systems programmer incorporating covert "ins" into a system.58

A few years after the conference, Turn reflected that the RAND papers had "established much of the vocabulary" for computer security research.59 By 1972, at least 30 reports and articles had reviewed the same threats and safeguards.59

Questions about Security

Even before the spring 1967 conference, computer programmers and electronic specialists had well understood the risks associated with time-sharing systems. In the mid-1960s, IBM and Remington Rand Univac had feared that competitors would attempt to monitor the information that the two companies transmitted through their systems. Both companies had considered hiring a "decoy computer programmer" to transmit false, misleading data.60

A month prior to the conference, the Washington Post had reported that "a Senate Judiciary Committee eavesdropping expert" believed that time-sharing systems would soon play an influential role in industrial espionage.61 A few years after the Washington Post's report, one computer programmer accessed confidential data stored in a competitor's time-sharing system, marking one of the earliest documented cases of computer penetration.52 The programmer relied on a simple method, taking a little over a month to guess the commands necessary to access the targeted information.63
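The guessing technique attributed to the programmer can be sketched as a brute-force loop that tries candidate commands against a gatekeeper until one is accepted. The secret command and the acceptance check below are hypothetical stand-ins; the source says nothing about the actual commands involved.

```python
import string
from itertools import product
from typing import Optional

# Hypothetical gatekeeper standing in for the remote time-sharing
# system: it accepts exactly one secret access command.
SECRET_COMMAND = "get"

def system_accepts(command: str) -> bool:
    """Stand-in for the target system's response to a tried command."""
    return command == SECRET_COMMAND

def guess_command(max_len: int = 3) -> Optional[str]:
    """Exhaustively try short lowercase commands until one is accepted."""
    for length in range(1, max_len + 1):
        for letters in product(string.ascii_lowercase, repeat=length):
            candidate = "".join(letters)
            if system_accepts(candidate):
                return candidate
    return None

assert guess_command() == "get"
```

Even a small search space like this, worked through manually at the terminal speeds of the era, could plausibly occupy an attacker for weeks, which is consistent with the month the attack reportedly took.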

The security problem gained perhaps its most widespread attention when officials at a small company called Information Systems Design (ISD) discovered an unauthorized user masquerading as a client in the company system, copying a program.64 ISD soon linked the breach to Hugh Jeffery Ward, a computer programmer working for a competitor, the University Computing Company (UCC). Reporting on the incident in March 1971, one local newspaper ran the headline, "Computer's Secrets Stolen by Telephone."65

The Paris edition of the International Herald Tribune more sensationally declared, "Computer Raped by Telephone."65 A defense lawyer for the UCC offered a more likely explanation, suggesting that programmers at both companies routinely tapped into each other's systems, just as officials at IBM and Remington Rand Univac had foreseen.66

As the security issue gained more prominent attention, the US government, acting primarily through the Pentagon, initiated the first major study into time-sharing system security. Essentially, US officials wanted to know if they could safely sell unclassified, commercial access to systems that also hosted classified information.67 With RAND selected to "provide the leadership" on the project, DoD officials tapped Ware to chair a specially organized task force.68 An impressive array of experts from the CIA, NSA, Massachusetts Institute of Technology (MIT), ARPA, and Lockheed joined the project, including SDC's Robert von Buelow. Over the course of the study, the task force focused on what it called "the most difficult security control situation," namely, "a time-sharing system serving geographically distributed users" that processed "the most sensitive information."69

Drawing upon the papers that Ware and his colleagues presented at the spring 1967 conference, the task force reaffirmed many of the same threats, such as wiretaps and trap doors. In one case, the task force speculated that "covert monitoring devices can be installed within the central processor," a technique "easy for a knowledgeable person" to accomplish, making "detection very difficult."70 Additionally, the task force warned about loopholes, or inadequacies in a system's software. A cleverly exploited loophole could lead to "a security breach, a suspension or modification of software safeguards (perhaps undetected), or wholesale clobbering of internal programs, data, and files," the task force speculated.71 Although US officials classified the "Ware Report" at the time of its initial publication in February 1970, computer specialist Peter Browne openly described it as "the definitive document" on computer security in a 1972 bibliographic paper.72
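A loophole in the report's sense is simply an inadequacy in a system's software that an attacker can turn into a security breach. A minimal modern illustration of the idea (entirely hypothetical, not drawn from the Ware report) is an access check that blacklists a restricted filename but forgets to normalize its input before checking:

```python
# Hypothetical file store with one restricted entry.
RESTRICTED = {"secret.dat"}

FILES = {
    "readme.txt": "public notes",
    "secret.dat": "classified material",
}

def read_file_naive(name: str) -> str:
    """Flawed safeguard: compares the raw name against the blacklist."""
    if name in RESTRICTED:
        raise PermissionError(name)
    # The lookup normalizes the name, but the check above did not:
    # that mismatch is the loophole.
    return FILES[name.strip().lower()]

def read_file_fixed(name: str) -> str:
    """Closes the loophole: normalize first, then check."""
    canonical = name.strip().lower()
    if canonical in RESTRICTED:
        raise PermissionError(canonical)
    return FILES[canonical]

# The loophole: a differently cased name slips past the check.
assert read_file_naive("SECRET.DAT") == "classified material"
```

As the task force observed, the safeguard here is not absent but merely inconsistent, and a clever user can exploit the gap without tripping any alarm.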

Theories of Attack

After the task force completed its detailed investigation into the many threats to online systems, a number of organizations within the defense establishment began formally developing their penetration techniques and strategies. In the same month that the media sensationalized the Ward case, RAND published a secret, ARPA-sponsored penetration study.73 During the study, a team of RAND researchers had informed ARPA that it could demonstrate a penetration by crashing a targeted system or sending unauthorized messages to that system's terminals.74 Another team of RAND analysts, including Rein Turn, summarized the results in a follow-up paper, observing that the "relatively small ARPA-sponsored RAND effort demonstrated the ease of penetrating a large computer system," even if it featured supposedly adequate safeguards.75

In a second, more challenging test, RAND's penetration team succeeded in stealing system passwords and installing a trap door, all without detection.76 During the exercises, RAND team members James Anderson, Richard Bisbey, and Dennis Hollingworth had "demonstrated the practicality of system-penetration as a tool for evaluating . . . data security safeguards," Turn noted in yet another paper.77 Furthermore, RAND specialist R. Fredrickson had initiated a related program into "[t]he further development of system-penetration techniques," hoping to develop "tools for security system certification."78

ARPA sponsored these efforts through initiatives such as the Computer Security Assurance project, which involved "test[ing] a system['s] security through penetration."79

Turn, Fredrickson, and Hollingworth all believed that penetration testing offered several benefits that justified its study, such as the penetrator's "diabolical frame of mind . . ., which is difficult to emulate" with other testing methods.80 They recommended the "development of methodology and tools" to test system security, including "penetration test techniques" and "aids to penetration test data management and analysis."81 Although Turn, Fredrickson, and Hollingworth supported the development of penetration methodologies for the purposes of security testing, they also foresaw computer penetration "as an extension of electronic warfare," a potentially major weapon that governments could use to gather intelligence or crash systems "at critical moments."82

July–September 2012

The US Air Force (USAF), RAND's leading sponsor, wasted little time in taking advantage of the company's pioneering efforts, which Ware later described as "a vigorously growing seed" planted "for others to nourish."83 In mid-1971, the USAF Electronic Systems Division contracted with James P. Anderson & Co. to analyze the security of its Data Services Center (ACS) at the Pentagon, a major system that serviced six organizations involving hundreds of people.84,85 The USAF required that its system, which consisted of a Honeywell 635 computer and GECOS III operating system, provide secure access to "users located virtually anywhere," similar to the NSA system.

Based on his analysis of the "opportunities to mount a penetration attack," Anderson warned about a number of penetration scenarios that could compromise the Honeywell system.86 By using a secondary computer to relay commands through an unknowing, legitimate user, a penetrator could engage in "a convincing dialog with the user" while "simultaneously inject[ing] the attack program [in]to the system," Anderson warned.87

This "store and forward" attack, just one of many, represented a specific phase of what Anderson more generally outlined as an "attack sequence."88 The approach involved several prearranged steps, including, "1. Find an exploitable vulnerability. 2. Design an attack around it. 3. Test the attack. 4. Seize a line in use for ACS Operations. 5. Enter the attack. 6. Exploit the entry for information recovery."88 Before implementing the attack, penetrators would also have to collect intelligence on the targeted system, including the "exact location(s) of unclassified terminals, phone numbers and actual lines used, location of junction boxes and other places taps can be placed, details of security measures that exist, table of organization," and other measures.88 Similar to the Ware report and some early RAND exercises, US officials classified Anderson's study, which added to the growing body of secret work on penetration.
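For the modern reader, Anderson's attack sequence and its intelligence prerequisites can be rendered as a short illustrative sketch. Nothing below comes from Anderson's report beyond the phase wording and the list of intelligence items he names; the function, field names, and error handling are invented for illustration.

```python
# Illustrative sketch of Anderson's six-phase "attack sequence" (1972).
# The phase wording paraphrases his report; everything else is hypothetical.

ATTACK_SEQUENCE = (
    "find an exploitable vulnerability",
    "design an attack around it",
    "test the attack",
    "seize a line in use for operations",
    "enter the attack",
    "exploit the entry for information recovery",
)

# Prerequisite intelligence Anderson lists for the targeted system.
REQUIRED_INTELLIGENCE = {
    "terminal_locations", "phone_lines", "tap_points",
    "security_measures", "table_of_organization",
}

def plan_attack(intelligence: set) -> tuple:
    """Return the ordered phases, refusing to proceed with intelligence gaps."""
    missing = REQUIRED_INTELLIGENCE - intelligence
    if missing:
        raise ValueError(f"intelligence gaps: {sorted(missing)}")
    return ATTACK_SEQUENCE
```

The point of the sketch is only that Anderson treated penetration as a disciplined, ordered procedure gated on prior reconnaissance, not improvisation.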

In early 1972, the USAF again contracted with Anderson's company, requesting that it produce a plan for addressing the USAF's unresolved security problem. By organizing a special study panel similar to the task force, the USAF united a smaller team of experts from business, government, and industry, including the NSA, Mitre, SDC, and Case Western Reserve. Notably, Anderson wrote the section of the study panel's report that dealt with penetration techniques, or the many different classes of attacks.89 He attributed one type of attack, the "asynchronous attack," to USAF Major Roger Schell, who had overseen both Anderson's Pentagon study and the study panel's report.90 The NSA's Dan Edwards, who had served on both the task force and the study panel, had identified an even more serious attack, Anderson noted, citing the Trojan horse, or "the quintessence of the malicious threat against contemporary systems."91 Penetrators implemented the attack by presenting a gift program embedded with trap doors to unknowing, legitimate users. In fact, Anderson and Edwards had introduced the Trojan horse method to computer specialists at an earlier workshop sponsored by the NSA-linked Institute for Defense Analysis (IDA). The NSA's Dennis K. Branstad, who later reported on the IDA meeting, noted that the attack "was coined a 'Trojan Horse'" during a group discussion session.92

In the study panel report, Anderson called attention to the Trojan horse technique and other methods of hostile penetration to describe how penetrators implemented their attacks, intending to provide "familiarity with how the problem appears to the penetrator."93 By listing the actual code that had compromised the Pentagon's Honeywell system during a test "scavenge attack," Anderson demonstrated how "the user-IDs and passwords recovered permit its user to masquerade as any other user of the system."94

The study panel, which completed its report in late 1972, concluded that contemporary systems remained especially vulnerable to penetrators. "It is a commentary on contemporary systems that none of the known tiger team efforts has failed to date," the study panel noted, referring to the results of specially organized teams of penetration testers.95 Shortly before the panel released its findings, Roger Schell had similarly informed several colleagues that no "major multiuser computer system" had yet "withstood serious attempts at circumvention by determined and skilled users."96

US Government Computer Penetration Programs and the Implications for Cyberwar
IEEE Annals of the History of Computing

Given the growing awareness about the computer security problem, the country's leading experts convened in Rancho Santa Fe, California, in December 1972, hoping to establish "fundamental principles of application and implementation" for computer security.97 NSA official Douglas L. Hogan chaired the workshop's planning committee, which had drawn more than 60 specialists to the event. The NSA's Hilda Faust, one of the study panel members and one of the few women to attend the workshop, participated in an eight-member measurements working group that also included the USAF's Roger Schell and RAND's Rein Turn. Tasked with establishing a measure of system security, the working group attempted to quantify what it called an "intrusion work factor," a concept that Petersen and Turn had discussed at the 1967 Joint Computer Conference.98 The team members began by outlining an attack sequence similar to the one that Anderson had described in his Pentagon study. In general, the steps involved collecting "sufficient information about the target system," developing "an acceptable penetration plan," achieving "access to the target system," and finally "[p]enetrating the data bank and escaping detection."99

The measurements group also noted that different penetration techniques varied in their degree of difficulty, an important consideration for estimating the work factor. For example, some penetrators might make trial-and-error attempts to infiltrate a system, whereas others might employ a wiretapping or eavesdropping approach. A subjective estimation of a system's work factor would likely involve iterative methods of "experimentation and/or simulation," the analysts speculated, calling for a completed "penetration once a suspected error has been found."100
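The working group's idea can be suggested with a deliberately simple sketch: estimate the effort each step of the outlined attack sequence demands and treat the total as a crude measure of the system's resistance. The step names below paraphrase the workshop's outline; the hour-based scale and the function itself are invented for illustration.

```python
# Toy illustration of an "intrusion work factor": sum a per-step effort
# estimate over the workshop's four-step attack outline. The step names
# paraphrase the report; the numeric scale is hypothetical.

PENETRATION_STEPS = (
    "collect sufficient information about the target system",
    "develop an acceptable penetration plan",
    "achieve access to the target system",
    "penetrate the data bank and escape detection",
)

def intrusion_work_factor(effort_hours: dict) -> float:
    """Total estimated effort; every step must carry an estimate."""
    missing = [step for step in PENETRATION_STEPS if step not in effort_hours]
    if missing:
        raise ValueError(f"no estimate for: {missing}")
    return sum(effort_hours[step] for step in PENETRATION_STEPS)
```

The group itself conceded that such estimates would be subjective and would likely require iterative experimentation or simulation to refine.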

Incidentally, the DoD began distributing its official security manual for resource-sharing computer systems shortly after the workshop concluded. In its manual, the DoD required that security analysts complete a verification process, which included "the actual on-line system penetration" of all DoD systems.101 Although the DoD had begun developing its manual well before the workshop, much of the developmental work in the field of computer security "can be traced to the workshop," the chairman of the measurements group, Peter Brown, later reflected.102 With the publication of the DoD manual, the defense establishment had formally adopted computer penetration as a tool for further study and development.

Security Does Not Exist

Regardless of its role in security testing, the penetration method appeared far better at exploiting vulnerabilities. For example, corporate spies and saboteurs had begun regularly employing penetration techniques to attack time-sharing systems, often as a routine business practice. SRI's Donn Parker reported in a major study in March 1973 that "it is common practice" for companies "to gain legitimate or unauthorized access to competitors' systems."103 Parker, who had also attended the workshop in Rancho Santa Fe, found that corporate spies routinely "take copies of programs and data files" and "subvert the operating system making subsequent attacks simple."103 He based his observations on conversations with several officials of time-sharing companies, including a small number of captured penetrators. Parker noted that one company had hired a "young, bright systems programmer" to patch the holes that he had previously embedded in the company's system on behalf of a competitor.103

In January 1974, Parker and his colleague Susan Nycum published a related article in the magazine Datamation, relaying Parker's findings to a much wider audience. In their report, they noted that employees of time-sharing companies regularly tried to obtain programs, customer lists, and user files from competitors in a practice that "is common among commercial time-sharing companies' employees."104 Working on behalf of their employers, programmers even attempted to crash competitors' systems. Although some in the industry viewed the tactics as "industrial espionage and sabotage," Parker and Nycum detected "a growing feeling among certain computer professionals that such activity is ... a legitimate business technique."104

Given the unresolved security problem, security analysts expressed hope that a time-sharing system under development since the mid to late 1960s, known as Multics, would lead to a possible solution.105 The study panel had suggested that systems such as Multics "appear to offer the best vehicle for implementing a secure system."106 Programmers had developed Multics "with data security as a fundamental design criteria," RAND analyst Dennis Hollingworth noted in a separate report.107 Still, Hollingworth believed that all systems required "several iterations of system penetration testing/hardening to eliminate the majority of penetrable errors," especially because "so called 'tiger teams'" had "almost invariably" succeeded in undermining systems with added safeguards.108 In fact, a special project run by the USAF and Mitre code-named ZARF, which specialized in "cracking 'uncrackable' computers," had formed a tiger team to perform a security evaluation of Multics.109

The USAF, the main impetus behind the security evaluation, had planned to use Multics for its Data Services Center in the Pentagon, the same center that James Anderson had recently assessed. Roger Schell and Paul Karger, both of the USAF, took formal credit for the project, but study panel member and Mitre analyst Steven Lipner also played a prominent role.

These three ZARF team members set up their operation in the basement of Schell's home in Concord, Massachusetts, relying on a terminal that they had connected to Schell's personal telephone line.110 The men began their analysis by studying a Multics system at the Rome Air Development Center in New York, developing a penetration approach for their real target, a system at MIT.111 During their attempt to break into the MIT system, or "the most secure operating system available in the industry,"112 the ZARF members successfully utilized a number of well-established penetration techniques, leading them to conclude that "a malicious user can penetrate the system at will with relatively minimal effort."113 For instance, the ZARF team embedded a trap door in the MIT computer, demonstrating "the feasibility of inserting and distributing trap doors" to other sites.114

After the study, Honeywell unknowingly distributed a version of the compromised system, installing it in the USAF's Data Services Center in the Pentagon.115 "This trap door was small (fewer than 10 instructions out of 100,000)," Schell later acknowledged, noting that the "manufacturer could not find it, even when he knew it existed and how it worked."116 It took Honeywell about a year after Karger and Schell had published their findings to discover and disable the trap door. The journalist Thomas Whiteside, who referenced the Multics study in a later report for the New Yorker, suggested that the ZARF team had "modified the system so thoroughly that even if particular flaws which had allowed the original penetration were to be discovered and corrected, the penetrators would continue to have full access to the system."117 Clearly, the ZARF team had little trouble undermining the Multics system.
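The mechanics of such a trap door can be suggested with a deliberately crude modern sketch: a few extra lines hidden inside an otherwise ordinary access check. Nothing here reflects the actual Multics code; the function, the password store, and the covert credential are all invented.

```python
# Hypothetical trap door hidden in an access check: a handful of extra
# lines that grant entry to any account when a fixed covert credential
# is presented. Invented for illustration only.

def check_access(user: str, password: str, password_db: dict) -> bool:
    # The legitimate check against the stored credential.
    if password_db.get(user) == password:
        return True
    # The trap door: a fixed covert value opens every account.
    if password == "zarf-covert-key":   # invented magic value
        return True
    return False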

Sounding the Alarm

Following the disappointing results of the Multics security evaluation, the popular media began slowly sounding the alarm on computer security. In September 1974, W. Thomas Porter, Jr. dramatized computer vulnerabilities in an article in the New York Times Magazine, borrowing a headline from the Ward case, "Computer Raped by Telephone."118 Unless "the Department of Defense will soon include an appropriation for a completely protected security system," Porter feared that the US would face utter destruction. He envisioned "a computer-trained guerrilla group" called "Those Who Threaten World Annihilation" penetrating a US missile control system and then announcing, "We now control your system. Your missiles are aimed to destroy you."119 Porter based his fears on the work conducted by companies such as RAND, where penetration tests had demonstrated that "no major defense system has withstood a dedicated attack."119

The journalist Tom Alexander made the same point in a similarly alarmist article in Fortune magazine, commenting that "no major system has yet withstood a dedicated attack by a tiger team."120 Interviewed for the Fortune article, Willis Ware worried especially about the "million programmers in the country right now," warning that "if only 1 percent of them were inclined to be dishonest, that's ten thousand dishonest programmers."120 Even the magazine American Scientist published a comparably cautionary RAND study that Ware had coauthored with his colleague Rein Turn. "Despite more than nine years of operational use and continuous testing, occasional errors in MULTICS are still found," Ware and Turn acknowledged.121 The two RAND analysts relayed the same general point as Alexander and Porter, noting that "every operating system now in use that has been tested has been found to contain numerous errors."121

In addition to Fortune, the joint USAF-Mitre ZARF team shared the results of its penetration exercises with a number of other leading media outlets. Based on a study completed in December 1974, the Washington Post journalist Bonnie Ginzburg declared, "Military computers containing top-secret national security information can be penetrated at will, according to an Air Force report."122 The study itself, which Ginzburg quoted, stated that "projects aimed at testing the security of operating systems by penetration" have resulted in "total success for the penetrator" every time.122 As part of her report, Ginzburg also interviewed Mitre's Steven Lipner, who informed her that a penetration team had remotely accessed "the password files that people use to log in" to sensitive military systems, doing it "over a six-month period in one case without detection."122

The journalist Thomas Whiteside described and quoted from the same December study in his series of articles about computer crime for the New Yorker.123 In fact, Lipner and Schell had provided Whiteside with "several Air Force and MITRE internal papers," including the Multics security evaluation.124 Despite the numerous studies, a possible solution to the security problem remained out of reach, perhaps due to "what the Air Force calls low-level funding in the area of computer security," Ginzburg suggested in her report for the Post.122 Years later, the ZARF team members discussed the sense of urgency that they had felt at the time, noting that "with the Cold War raging, our assumption was that the most immediate professional penetrators would be foreign espionage agents and that the defense establishment would be the target of choice," with "commercial penetrators to follow."125

Still, numerous organizations performing penetration research received abundant Pentagon funding, reflecting the serious attention given to the security problem. By mid-1974, a number of institutions, including the Lawrence Livermore Laboratory, Information Sciences Institute, IBM Corporation Research Division, Air Force Electronic Systems Division, SDC, and CIA, openly acknowledged that they had been conducting officially sponsored penetration exercises.126 MIT professor Jerome Saltzer, who had worked on Ware's task force, summarized the projects in a brief article in the newsletter Operating Systems Review, explaining that "Much of the work ... is government sponsored, and that work is largely under the Department of Defense."127

Coinciding with all the research programs, many penetration studies began appearing in the scholarly literature. In September 1974, the same month that Porter had published his article about computer-trained guerrilla groups, the Honeywell Computer Journal described computer penetration in a comprehensive article, "Penetration of Computer Systems: An Overview."128 The author, Honeywell's R.D. Lackey, based his discussion on "many examples of actual system penetration," although he listed no references.128 IBM also published the results of one of its security studies in its IBM Systems Journal in late 1974. The author, W.S. McPhee, who had reviewed IBM's OS/VS2 system, warned that "total system integrity, or security, does not exist anywhere in the world. If someone is willing to spend enough and risk enough, any security system can be broken."129 Certainly, McPhee's point fit well with the overall consensus that no system remained safe from hostile penetration.

Penetration Methodologies

As tiger teams demonstrated their ease in penetrating even the securest systems, the DoD began funding research that sought to establish systematic methodologies for analyzing computer security. Because they were less glamorous than the tiger team testing, the media often overlooked these more sophisticated approaches, which some RAND analysts described as drudgery.130 The measurements working group at the Rancho Santa Fe workshop had made a strong push for these more formulaic evaluation tools, calling for iterative, heuristic searches for system flaws that also took into account historical data. In fact, Donn Parker had begun his exploration of computer-related crime as part of the Research in Secured Operating Systems (RISOS) project at the University of California's Lawrence Livermore Laboratory.131 The project team members, consisting of programmers, statisticians, and other researchers, had contracted with ARPA to "test and evaluate the security of selected computer systems, as specified by the Department of Defense," according to the project's director, Robert Abbott.132 Jerome Saltzer, who had summarized many of the ongoing penetration studies in Operating Systems Review, described the RISOS project as an attempt to establish "systematic methodologies" for testing computer security, noting that one RISOS computer would store "a catalog of techniques which have permitted successful system penetration."133


Notably, ARPA invested in several of these team-directed approaches, including the comparable Protection Analysis (PA) project undertaken by analysts at the University of Southern California's Information Sciences Institute. Like the RISOS project, the PA project revolved around the development of what Saltzer described as "system-penetration expertise."134 In the PA project's final report, lead researchers Richard Bisbey and Dennis Hollingworth, who had both worked alongside James Anderson on RAND's early penetration studies, distinguished PA from RISOS and its attempt "to systematize penetration activities."135 Instead of requiring "individuals who themselves would make good 'penetrators' of a given target system," Bisbey and Hollingworth hoped to "identify automatable techniques for detecting vulnerabilities in existing system software," a process that required "limited expertise" from users.135 Like the RISOS team, the PA team had also built a database of system threats, but ARPA eventually cut funding from the highly challenging project.

Analysts at the SDC, the contractor that had made one of the initial pushes for penetration studies, developed perhaps the most successful strategy for assessing system security, the Flaw Hypothesis Methodology (FHM). Clark Weissman, a study panel member, had originally outlined the approach in an internal SDC paper in 1973.136 He later described FHM penetration testing as a "holistic method" that could "expose as many flaws as established as a test goal."137 Weissman's colleague Richard Linde openly discussed FHM at the May 1975 Joint Computer Conference, describing it as a "successful penetration methodology," or "a formal strategy for penetrating an operating system."138 During four basic steps, penetration analysts reviewed system manuals, theorized about possible weaknesses, tested for the alleged flaws, and finally classified the discovered errors.139 The penetration analysts discovered most of the flaws "by 'thought' testing," with actual test penetrations involving "a minimum amount of thought once the flaw is proved," Linde explained.140
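Linde's four steps have an obvious loop structure, which the skeleton below sketches for the modern reader. The ordering of desk-check "thought" testing before live testing follows his description; the callables, data shapes, and driver are invented for illustration.

```python
# Illustrative skeleton of the Flaw Hypothesis Methodology's four steps:
# generate flaw hypotheses from the system documentation, desk-check them
# ("thought" testing), live-test only the flaws that survive, and classify
# whatever is confirmed. The interfaces here are hypothetical.

def flaw_hypothesis_method(hypotheses, thought_test, live_test, classify):
    """hypotheses: theorized weaknesses drawn from the system manuals;
    thought_test: the desk-check that resolves most candidates;
    live_test: an actual test penetration, run only for proved flaws;
    classify: assigns each confirmed flaw to a generic category."""
    confirmed = []
    for hypothesis in hypotheses:
        if not thought_test(hypothesis):   # most flaws resolved "by thought"
            continue
        if live_test(hypothesis):          # minimal effort once flaw is proved
            confirmed.append((hypothesis, classify(hypothesis)))
    return confirmed
```

The structure makes plain why Linde could call FHM a formal strategy rather than an art: each hypothesized flaw moves through the same generate-confirm-test-classify pipeline.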

In addition to 26 generic system flaws, Linde described 18 generic operating system attacks, including denial of access, Trojan horses, and wiretapping. Notably, Linde suggested that "Richard Bisbee [sic], Dan Edwards, and several of the Multics people, whom I have not referenced herein, should be given credit for devising a number of the penetration attacks."141 Years later, Weissman also credited a number of analysts with devising penetration attacks, citing "the legendary breakpoint attack of Linde/Phillips," which Weissman considered one of the most successful penetrations.142 The attack enabled penetrators to insert code into a targeted system, giving them "arbitrary control" over it.142 In fact, Phillips had participated in a self-described "team of penetrators" that utilized FHM to test the security of IBM's VM/370 system, enlisting the guidance of both Weissman and Linde.143 Phillips and his team published their findings in the IBM Systems Journal, noting that they had carried out "several penetrations, some of which enabled the penetrators to seize control of the system."144 FHM "provided a systematic and reasonably comprehensive approach" for their analysis, they concluded.145

Eventually, FHM had become such a widely accepted methodology that the NSA required security analysts working at its Computer Security Evaluation Center (CSEC) "to be conversant with the 'flaw hypothesis' or equivalent security testing methodology" in order to conduct evaluations of systems that offered a basic level of security.146 One NSA operative later explained that the X division at NSA "is the secure software group. We test secure computers. X-1 are the mathematical folks who test software theoretically—trying to find holes in its design. X-2 people sit at the computer, trying to break software once it's written."147 Instead of relying on simple penetration tests, the NSA could now follow a carefully constructed penetration methodology to thoroughly assess the vulnerabilities of online computer systems.

Conclusion

Shortly after Linde had discussed the FHM methodology at the May 1975 Joint Computer Conference, US political leaders found themselves embroiled in a major controversy over an emerging technology that linked numerous computer systems through telecommunications lines, the Arpanet. During congressional hearings on the security of this precursor to the modern-day Internet,148 California Senator John V. Tunney called attention to the "recent charges that the intelligence community has developed the capability of using computers to create dossiers on American citizens."149 At the time, NBC's Ford Rowan had accused US intelligence agencies of accumulating and distributing information on American citizens through computer networks. Testifying before Congress in June 1975, Deputy Assistant Secretary of Defense David O. Cooke denied that the DoD maintained a system of monitoring information transmitted over the Arpanet. Stephen T. Walker, who had recently moved to ARPA from the NSA,150 similarly testified that "there is no way to monitor the data being transferred on the network."151 In other words, Cooke and his colleagues implied that the computer networks remained entirely secure. Nobody could request information from the Arpanet "without our knowledge," Cooke declared.152

Even Paul Armer, who had spent 10 years as the head of RAND's Computer Science Department, testified that computers could not steal data from other computers, calling the idea hogwash.153 Nevertheless, Armer acknowledged that "I don't mean to imply that computers today are not penetrated by individuals with malevolent intent."153 Given the possibility, he suggested that "[i]f there is concern about the FBI computer being programmed to penetrate the Social Security computers and the Census Bureau computers," then the government should "treat the files of Social Security and the Census like classified information."154

Of course, none of the DoD officials had anything to say about the consistent success of their professional tiger teams or the new penetration methodologies under their development. Officials like Walker chose to wait a few years before acknowledging that the DoD had in fact generated "long lists of the ways penetrators used to break into systems" by the early 1970s.155 "The 'Tiger Team' system penetration efforts record of success in penetrating all commercial systems attempted, led to the perception that the integrity of computer systems [sic] hardware and software could not be relied upon to protect information from disclosure to other users of the same computer system," he admitted years after the hearings.155

Undoubtedly, the tools and strategies required to successfully implement offensive attacks had already been thoroughly studied and well understood, well in time for some discussion at the 1975 congressional hearings.

As the journalist David Sanger recently observed, "there has never been a real debate in the United States about when and how to use cyberweapons."

Even without a readily available subversion handbook, potential subverters would not have to look much beyond the years of studies sponsored by the defense establishment to discover the essential principles, tools, and methodologies used to break into computer systems. Philip Myers, who used many of the officially sponsored security studies from the early 1970s to research and write his 1980 master's thesis on computer subversion, recognized his work's offensive potential, noting that he did not intend "to provide a handbook of subversion for subverters."156 Myers even suggested that computer penetration "would not be a sound 'business' practice" because it depends "on a design oversight or an implementation flaw that might be eventually corrected."157

For the "professional that is in the business of gathering intelligence over a long period of time," the reliance on system flaws presented a major problem.158 A more sophisticated approach, Myers believed, involved the thorough modification of a system, ensuring "results over the long haul."158

The computer subverter, as opposed to the computer penetrator, "constructs his own mechanisms that are inserted into the hardware or software during one of the various phases of a computer systems [sic] life cycle," Myers explained, highlighting trap doors and Trojan horses as the essential tools.157 Myers further distinguished computer subverters from computer penetrators by noting that "any reference to the subverter is meant as a reference to the subversive organization."159 According to Myers, subverters carried out "field operations" by acting "with the guidance of all the expertise that might be available" from an organization.160

Roger Schell, who served as Myers's thesis advisor, later described subverters in similar terms. In a collaborative paper, Schell and several of his colleagues described subversion as "a type of attack" typically conducted by "a nation state (or state-sponsored organization), organized crime group, or large corporation involved in corporate espionage" for the purposes of information warfare.161 Although Myers and Schell clearly understood offensive actions from the perspective of subverters, it still remains difficult to establish an indisputable link between the years of official penetration studies in the US and modern-day cyberwar.

Regardless, recent news reports have highlighted a number of potential links between security testing and offensive cyberwar. As the journalist Siobhan Gorman reported in the Wall Street Journal in June 2012, the Idaho National Laboratory helped lay the groundwork for the attack against Iran by identifying cybervulnerabilities in systems that manage everyday operations in the US. "Idaho National Lab has a cadre of researchers who investigate vulnerabilities in computerized control systems that run critical infrastructure," Gorman reported. As part of its partnership with the CIA's Information Operations Center, the security researchers at the lab had assessed the vulnerabilities of the systems that Iran used for its enrichment program.162

In another similarity with security testing, US officials overseeing the development of the Stuxnet weapon conducted a phase of what they called "destructive testing."4 According to the New York Times journalist David Sanger, the US constructed "a virtual replica of Natanz," enabling it to conduct "the test over several of the Energy Department's national laboratories to keep even the most trusted nuclear workers from figuring out what was afoot."4 The push for destructive testing strongly mirrored the approach that Karger and Schell had taken during their security evaluation of MULTICS. As Karger and Schell have acknowledged, "a professional penetrator would do as we did—debug the penetration programs on a duplicate machine... before attacking the desired target."163 In his 1971 study of the Data Services Center in the Pentagon, James Anderson had also emphasized that a hostile penetrator's "attack sequence" would involve a period of testing.164 Myers made the same point in his thesis, insisting that "the subverter needs to insure [sic] that any techniques and mechanisms used in the field have been perfected at a safe computing site."159

The expert computer scientist Robert Morris, whom the New York Times recently eulogized as a “Pioneer in Computer Security” and the Washington Post included “among the top U.S. computer security experts,” perhaps best symbolizes the transition from penetration testing to offensive cyberwar.165 In the early 1980s, Morris had worked at AT&T’s Bell Laboratories, which had authorized him “to attempt to act as a tiger team” to test the security of the company’s systems.166 “I don’t make any modifications,” Morris explained at the time, “but simply inform the company of the cases in which I have been able to get access.”166 Eventually, Morris joined the NSA’s CSEC as its chief scientist, having known “all of the senior staff” for a number of years.167 In the 1989 book The Cuckoo’s Egg, Clifford Stoll described Morris as the NSA’s “top guru” on computer security.168 In one notable exchange, Morris informed Stoll that “I’ve got three good password cracking programs” that cracked passwords by systematically trying the words in a dictionary, calling the approach “child’s play.”169 At CSEC, Morris primarily focused on security issues, “work[ing] for the agency in protecting government computers and in projects involving electronic surveillance and online warfare,” according to New York Times journalist John Markoff.170 Similarly, T. Rees Shapiro reported in the Washington Post that “Morris was the digital gatekeeper of the American government’s secrets,” leading a team at NSA that “defended the military’s networks from outside attack.”171
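The dictionary attack Morris described to Stoll is simple enough to sketch in a few lines. The following is an illustrative reconstruction only, not Morris's actual programs: his tools targeted Unix crypt() password hashes, whereas this minimal Python sketch assumes a SHA-256 hash and a hypothetical short word list.

```python
import hashlib

def dictionary_attack(target_hash, wordlist):
    """Systematically hash each candidate word and compare it with the
    stolen password hash; return the matching word, or None."""
    for word in wordlist:
        if hashlib.sha256(word.encode()).hexdigest() == target_hash:
            return word
    return None

# A weakly chosen password falls immediately to a small word list.
stolen = hashlib.sha256(b"dragon").hexdigest()
print(dictionary_attack(stolen, ["secret", "password", "dragon"]))  # prints: dragon
```

The approach is “child’s play” precisely because it requires no cryptanalysis at all; it simply exploits the human tendency to choose passwords that already appear in a dictionary.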

Before long, Morris made the transition from security guru to cyberwarrior. According to John Markoff’s report in the New York Times, Morris had “played an important clandestine role in planning what was probably the nation’s first cyber war,” the cyberattacks against Iraq during the first Gulf War.172 “Although details are still classified,” Markoff noted, “the attacks, along with laser-guided bombs, are believed to have largely destroyed Iraq’s military command and control capability before the war began.”172 Another report by MIT News placed Morris with “a special team” that had worked on “nullifying the Iraqi defense system” at the time, noting that Morris “was detailed to the Joint Chiefs of Staff.”172 Incidentally, Morris had previously worked with the joint chiefs on securing the nation’s military systems during his time at CSEC.173 Certainly, Morris would have made an ideal candidate for one of the nation’s earliest teams of cyberwarriors.


16 IEEE Annals of the History of Computing


During another round of congressional hearings on computer security in late 1983, Morris’s colleague Willis Ware, who had helped sound the alarm on security issues in the mid-1960s, commented on the particularly blurry line separating security testing from cyberwar. “The computer security issue must be seen as analogous to the classical offense/defense situation,” he noted in his prepared statement.174 “As the defenses of computer security safeguards get better,” Ware argued, “the offenses ... against them will become more sophisticated, and that cycle ... will repeat again and again and again.”175 Certainly, figures such as Robert Morris not only ranked high in the upper echelons of what Ware called “the country’s expertise” on the “sophisticated technical threat[s]” but also exemplified the offense/defense cycle inherent to security research.176

Given the issue’s importance, including the high level of security and capabilities now achieved by the US, “we must initiate a broad public dialogue about cyber war,” Clarke and Knake recently suggested in Cyber War.177 Indeed, “there has never been a real debate in the United States about when and how to use cyberweapons,” the journalist David Sanger recently observed.178

At the very least, I hope this research will contribute to the ongoing push for a more open, democratic discussion.

References and Notes

1. W.J. Broad, J. Markoff, and D.E. Sanger, “Israeli Test on Worm Called Crucial in Iran Nuclear Delay,” New York Times, 16 Jan. 2011; www.nytimes.com/2011/01/16/world/middleeast/16stuxnet.html.
2. S. Weinberger, “Is This the Start of Cyberwarfare?” Nature, vol. 474, 2011, p. 143; www.nature.com/news/2011/110608/full/474142a.html.
3. N. Falliere, L.O. Murchu, and E. Chien, “W32.Stuxnet Dossier, Version 1.4,” white paper, Feb. 2011, p. 55; www.symantec.com/content/en/us/enterprise/media/security_response/whitepapers/w32_stuxnet_dossier.pdf.
4. D.E. Sanger, “Obama Order Sped Up Wave of Cyberattacks Against Iran,” New York Times, 1 June 2012; www.nytimes.com/2012/06/01/world/middleeast/obama-ordered-wave-of-cyberattacks-against-iran.html.
5. R.A. Clarke and R.K. Knake, Cyber War: The Next Threat to National Security and What to Do about It, HarperCollins Publishers, 2010, pp. 145, 259.
6. Clarke and Knake, Cyber War, p. 93.
7. W. Safire, “The Farewell Dossier,” New York Times, 2 Feb. 2004; www.nytimes.com/2004/02/02/opinion/the-farewell-dossier.html; and T.C. Reed, At the Abyss: An Insider’s History of the Cold War, Random House, 2005, pp. 266–270.
8. W.J. Broad, “Computer Security Worries Military Experts,” New York Times, 25 Sept. 1983.
9. J. Walsh, “‘Lack of Reciprocity’ Prompts IIASA Cutoff,” Science, vol. 216, no. 4541, 1982, p. 35.
10. P. Grant and R. Riche, “The Eagle’s Own Plume,” Proc. United States Naval Inst., vol. 109, no. 7, 1983, p. 34.
11. D.A. Fulghum, “Yugoslavia Successfully Attacked by Computers,” Aviation Week & Space Technology, 23 Aug. 1999.
12. J. Arquilla, interviewed by Frontline, 4 Mar. 2003; www.pbs.org/wgbh/pages/frontline/shows/cyberwar/interviews/arquilla.html.
13. D.A. Fulghum, “Telecom Links Provide Cyber-Attack Route,” Aviation Week & Space Technology, 8 Nov. 1999.
14. C. Drew and J. Markoff, “Contractors Vie for Plum Work, Hacking for U.S.,” New York Times, 31 May 2009; www.nytimes.com/2009/05/31/us/31cyber.html.
15. C. Kilgannon and N. Cohen, “Cadets Trade the Trenches for Firewalls,” New York Times, 11 May 2009; www.nytimes.com/2009/05/11/technology/11cybergames.html.
16. Clarke and Knake, Cyber War, pp. 261–262.
17. P.A. Karger and R.R. Schell, MULTICS Security Evaluation: Vulnerability Analysis, ESD-TR-74-193, vol. II, L.G. Hanscom Air Force Base, Electronic Systems Division, June 1974, p. 6; http://csrc.nist.gov/publications/history/karg74.pdf.
18. C. Weissman, “Security Penetration Testing Guideline,” tech. memo 5540:082A, Handbook for the Computer Security Certification of Trusted Systems, Naval Research Laboratory, 1995, p. 23; www.dtic.mil/cgi-bin/GetTRDoc?AD=ADA390673.
19. Computer Security Division, Computer Security Resource Center, “Early Computer Security Papers, Part I,” Nat’l Inst. of Standards and Technology; http://csrc.nist.gov/publications/history.
20. For more insight into what organizations should be included in the “defense establishment,” see K. Flamm, Targeting the Computer: Government Support and International Competition, The Brookings Institution, 1987, pp. 42–78, 93–124. For a specific example, see W.H. Ware, Security Controls for Computer Systems (U): Report of Defense Science Board Task Force on Computer Security, R-609-1, RAND, 11 Feb. 1970, pp. xi–xii; http://csrc.nist.gov/publications/history/ware70.pdf.

July–September 2012 17

Organizations involved in the Ware report include RAND, SDC, Lockheed, DoD, Case Western Reserve, CIA, NSA, ARPA, DCA, Mitre, IBM, and MIT.

21. Clarke and Knake, Cyber War, p. 263.
22. I first reported on my research findings in my master’s thesis, “Computer Penetration: The Pioneering Role of the United States Government and Its Alliance with Industry and Academia, 1965–1985,” Univ. of Massachusetts, 2010.
23. D.B. Parker, Crime by Computer, Charles Scribner’s Sons, 1976, p. 283.
24. T. Whiteside, Computer Capers: Tales of Electronic Thievery, Embezzlement, and Fraud, Thomas Y. Crowell Company, 1978, p. 115.
25. Whiteside, Computer Capers, p. 116.
26. Whiteside, Computer Capers, p. 155.
27. J. Murphy, P. Elmer-DeWitt, and M. Krance, “Computers: The 414 Gang Strikes Again,” Time, 29 Aug. 1983; www.time.com/time/printout/0,8816,949797,00.html.
28. D.B. Parker, Fighting Computer Crime, Charles Scribner’s Sons, 1983, p. 183.
29. Parker, Fighting Computer Crime, pp. 130, 183.
30. S. Levy, Hackers: Heroes of the Computer Revolution, Dell, 1984, p. 7.
31. D. Smith, “Who Is Calling Your Computer Next? Hacker!” Criminal Justice J., vol. 8, no. 1, 1985, p. 94.
32. Smith, “Who Is Calling Your Computer Next? Hacker!” p. 95. Smith’s emphasis.
33. M. McCain, “Computer Users Fall Victim to a New Breed of Vandals,” New York Times, 19 May 1987; www.nytimes.com/1987/05/19/nyregion/computer-users-fall-victim-to-a-new-breed-of-vandals.html.
34. S. Brand, “Keep Designing: How the Information Economy Is Being Created and Shaped by the Hacker Ethic,” Whole Earth Rev., May 1985, pp. 44–55.
35. R. Lehtinen, D. Russell, and G.T. Gangemi Sr., Computer Security Basics, O’Reilly Media, 2006, p. 29.
36. Lehtinen, Russell, and Gangemi, Computer Security Basics, p. 30.
37. W. Madsen, “Intelligence Agency Threats to Computer Security,” Int’l J. Intelligence and CounterIntelligence, vol. 6, no. 4, 1993, p. 413.
38. Madsen, “Intelligence Agency Threats to Computer Security,” p. 414.
39. S. Gorman and J.E. Barnes, “Cyber Combat Can Count as Act of War,” Wall Street J., 31 May 2011; http://online.wsj.com/article/SB10001424052702304563104576355623135782718.html.
40. D. MacKenzie, Mechanizing Proof: Computing, Risk, and Trust, MIT Press, 2001, p. 160.
41. MacKenzie, Mechanizing Proof, p. 156. See also D. MacKenzie and G. Pottinger, “Mathematics, Technology, and Trust: Formal Verification, Computer Security, and the U.S. Military,” IEEE Annals of the History of Computing, vol. 19, no. 3, 1997, pp. 41–59.
42. J.R. Yost, “A History of Computer Security Standards,” The History of Information Security: A Comprehensive Handbook, K. de Leeuw and J. Bergstra, eds., Elsevier, 2007, pp. 601–602.
43. E.A. Anderson, C.E. Irvine, and R.R. Schell, “Subversion as a Threat in Information Warfare,” J. Information Warfare, vol. 3, no. 2, 2004, p. 52; http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.105.505&rep=rep1&type=pdf.
44. C. Baum, The System Builders: The Story of SDC, System Development Corp., 1981.
45. R.L. Dennis, Security in the Computer Environment, SP 2440/000/01, System Development Corp., 18 Aug. 1966, p. 11; http://handle.dtic.mil/100.2/AD640648.
46. Dennis, Security in the Computer Environment, p. 7.
47. Dennis, Security in the Computer Environment, p. 30.
48. W.H. Ware, Security and Privacy in Computer Systems, P-3544, RAND, Apr. 1967, p. 1; www.rand.org/pubs/papers/2005/P3544.pdf.
49. H.E. Petersen and R. Turn, System Implications of Information Privacy, P-3504, RAND, Apr. 1967, p. iii; www.rand.org/pubs/papers/2005/P3504.pdf.
50. B. Peters, “Security Considerations in a Multiprogrammed Computer System,” Proc. AFIPS Spring Joint Computer Conf., ACM Press, 1967, p. 285.
51. J. Bamford, The Puzzle Palace: A Report on America’s Most Secret Agency, Houghton Mifflin Company, 1982, pp. 339–340.
52. W.H. Ware, “Foreword,” Security in Computing, 3rd ed., C.P. Pfleeger and S. Lawrence Pfleeger, Prentice Hall, 2003, p. xix.
53. Petersen and Turn, System Implications, p. 3.
54. Ware, Security and Privacy, pp. 8, 10.
55. Ware, Security and Privacy, pp. 10–11.
56. Petersen and Turn, System Implications, p. 5.
57. Petersen and Turn, System Implications, pp. 4–5.
58. Ware, Security and Privacy, p. 6.
59. R. Turn, A Brief History of Computer Privacy/Security Research at RAND, P-4798, RAND, Mar. 1972, p. 3.
60. R.M. Greene Jr., ed., Business Intelligence and Espionage, Dow Jones-Irwin, 1966, p. 224.
61. “Industrial Spies to Turn to Laser Beam, Computer Snooping,” Washington Post, 16 Mar. 1967.


62. D.B. Parker, Threats to Computer Systems, Lawrence Livermore Laboratory, Mar. 1973, p. 111.
63. Parker, Threats to Computer Systems, p. 56.
64. D.B. Parker, Crime by Computer, Charles Scribner’s Sons, 1976, p. 87; J. Millar Carroll, Computer Security, Butterworth-Heinemann, 1996, p. 53.
65. Parker, Crime by Computer, p. 94.
66. Parker, Crime by Computer, p. 102.
67. MacKenzie, Mechanizing Proof, p. 158.
68. W.H. Ware, RAND and the Information Evolution: A History in Essays and Vignettes, CP-537-RC, RAND, 2008, p. 153; www.rand.org/pubs/corporate_pubs/2008/RAND_CP537.pdf.
69. W.H. Ware, Security Controls for Computer Systems (U): Report of Defense Science Board Task Force on Computer Security, R-609-1, RAND, 11 Feb. 1970, p. 10; http://csrc.nist.gov/publications/history/ware70.pdf.
70. Ware, Security Controls for Computer Systems, R-609-1, p. 7.
71. Ware, Security Controls for Computer Systems, R-609-1, p. 8.
72. P.S. Browne, “Computer Security: A Survey,” SIGMIS Database, vol. 4, no. 3, 1972, p. 12; http://doi.acm.org/10.1145/1017536.1017537.
73. J. Anderson et al., Computer Security Experiment, WN-7275-ARPA, RAND, Mar. 1971. I have not been able to obtain a copy of this paper.
74. Ware, RAND and the Information Evolution, pp. 153–154.
75. R. Turn, R. Fredrickson, and D. Hollingworth, Data Security Research at the RAND Corporation: Description and Commentary, P-4914, RAND, Mar. 1972, p. 2.
76. Turn, Fredrickson, and Hollingworth, Data Security Research at the RAND Corporation, p. 9.
77. Turn, A Brief History, pp. 4–5.
78. Turn, A Brief History, p. 5.
79. Turn, Fredrickson, and Hollingworth, Data Security Research, p. 2.
80. Turn, Fredrickson, and Hollingworth, Data Security Research, p. 10.
81. Turn, Fredrickson, and Hollingworth, Data Security Research, p. 20.
82. Turn, Fredrickson, and Hollingworth, Data Security Research, p. 11.
83. Ware, RAND and the Information Evolution, p. 154.
84. J.P. Anderson, AF/ACS Computer Security Controls Study, ESD-TR-71-395, L.G. Hanscom Field, HQ Electronic Systems Division, Nov. 1971, p. 1.
85. E. Spafford, “James P. Anderson: An Information Security Pioneer,” IEEE Security and Privacy, vol. 6, no. 1, 2008, p. 9; http://dx.doi.org/10.1109/MSP.2008.15. Anderson, who had participated in Ware’s task force and RAND’s penetration exercises, had organized his company in the late 1960s following stints at companies such as Univac, Burroughs, and Auerbach.
86. Anderson, AF/ACS Computer Security, p. 2.
87. Anderson, AF/ACS Computer Security, p. 17.
88. Anderson, AF/ACS Computer Security, p. 27.
89. J.P. Anderson, Computer Security Technology Planning Study, vol. II, ESD-TR-73-51, L.G. Hanscom Field, HQ Electronic Systems Division, Oct. 1972, p. 59; http://seclab.cs.ucdavis.edu/projects/history/papers/ande72.pdf.
90. Anderson, Computer Security, vol. II, p. 63.
91. Anderson, Computer Security, vol. II, pp. 62–63.
92. D.K. Branstad, “Privacy and Protection,” SIGOPS Operating Systems Rev., vol. 7, no. 1, 1973, p. 13.
93. J.P. Anderson, “Information Security in a Multi-User Computer Environment,” Advances in Computers, vol. 12, M. Rubinoff, ed., Academic Press, 1972, p. 4; Anderson, Computer Security, vol. II, p. 58.
94. Anderson, Computer Security, vol. II, p. 65.
95. J.P. Anderson, Computer Security Technology Planning Study, vol. I, ESD-TR-73-51, L.G. Hanscom Field, HQ Electronic Systems Division, Oct. 1972, p. 4; http://seclab.cs.ucdavis.edu/projects/history/papers/ande72a.pdf.
96. G.J. Popek and C.S. Kline, “Verifiable Secure Operating System Software,” Proc. AFIPS Nat’l Computer Conf. and Exposition, ACM Press, 1974, p. 145; http://doi.acm.org/10.1145/1500175.1500204.
97. D.K. Branstad and S.K. Reed, eds., Controlled Accessibility Workshop Report, tech. note 827, US Dept. of Commerce and Nat’l Bureau of Standards, May 1974, p. 1.
98. Branstad and Reed, Controlled Accessibility, p. 68; Petersen and Turn, System Implications, p. 13.
99. Branstad and Reed, Controlled Accessibility, p. 68.
100. Branstad and Reed, Controlled Accessibility, p. 74.
101. ADP Security Manual: Techniques and Procedures for Implementing, Deactivating, Testing, and Evaluating Secure Resource-Sharing ADP Systems, DoD 5200.28-M, US Dept. of Defense, Jan. 1973, p. 13; http://handle.dtic.mil/100.2/ADA268995.
102. P.S. Browne, “Computer Security: A Survey,” Proc. AFIPS Nat’l Computer Conf. and Exposition, ACM Press, 1976, p. 58.
103. D.B. Parker, Threats to Computer Systems, Lawrence Livermore Laboratory, Mar. 1973, p. 14.
104. D.B. Parker and S. Nycum, “The New Criminal,” Datamation, Jan. 1974, p. 58.
105. K. Flamm, Targeting the Computer: Government Support and International Competition, The Brookings Institution, 1987, p. 57.


106. Anderson, Computer Security, vol. I, p. 12.
107. D. Hollingworth, Enhancing Computer System Security, P-5064, RAND, Aug. 1973, p. 10; www.rand.org/pubs/papers/2006/P5064.pdf.
108. Hollingworth, Enhancing Computer System Security, pp. 10, 5.
109. T. Alexander, “Waiting for the Great Computer Rip-off,” Fortune, July 1974, p. 143.
110. T. Whiteside, “Annals of Crime: Dead Souls in the Computer—II,” New Yorker, 29 Aug. 1977, p. 60.
111. Karger and Schell, MULTICS Security Evaluation, p. 17.
112. P.A. Karger and R.R. Schell, “Thirty Years Later: Lessons from the Multics Security Evaluation,” Proc. 18th Ann. Computer Security Applications Conf. (ACSAC), IEEE CS Press, 2002, p. 119.
113. Karger and Schell, MULTICS Security Evaluation, p. 59.
114. Karger and Schell, MULTICS Security Evaluation, p. 53.
115. Karger and Schell, “Thirty Years Later,” p. 121.
116. R.R. Schell, “Computer Security: The Achilles’ Heel of the Electronic Air Force?” Air Univ. Rev., vol. 30, no. 2, 1979; www.au.af.mil/au/cadre/aspj/airchronicles/aureview/1979/jan-feb/schell.html.
117. Whiteside, “Annals of Crime,” p. 61.
118. W.T. Porter Jr., “‘Computer Raped by Telephone’ ... and Other Futuristic Felonies by Electronic Con Men Who Leave No Footprints,” New York Times Magazine, 8 Sept. 1974.
119. Porter, “‘Computer Raped by Telephone,’” p. 43.
120. Alexander, “Waiting for the Great Computer Rip-off,” p. 146.
121. R. Turn and W.H. Ware, “Privacy and Security in Computer Systems,” American Scientist, March/April 1975, p. 201.
122. B. Ginzburg, “Military Computers Easily Penetrable, AF Study Finds,” Washington Post, 8 Aug. 1976.
123. Whiteside, “Annals of Crime,” pp. 59–60.
124. Whiteside, “Annals of Crime,” p. 59.
125. Karger and Schell, “Thirty Years Later,” p. 121.
126. J.H. Saltzer, “Ongoing Research and Development on Information Protection,” SIGOPS Operating Systems Rev., vol. 8, no. 3, 1974, p. 9.
127. Saltzer, “Ongoing Research and Development on Information Protection,” p. 8.
128. R.D. Lackey, “Penetration of Computer Systems: An Overview,” Honeywell Computer J., vol. 8, no. 2, 1974, p. 81.
129. W.S. McPhee, “Operating System Integrity in OS/VS2,” IBM Systems J., vol. 13, no. 3, 1974, p. 251.
130. Alexander, “Waiting for the Great Computer Rip-off,” p. 146; D. Hollingworth, S. Glaseman, and M. Hopwood, Security Test and Evaluation Tools: An Approach to Operating System Security Analysis, P-5298, RAND, Sept. 1974, p. 13; www.rand.org/pubs/papers/2009/P5298.pdf.
131. Parker, Threats to Computer Systems, p. viii.
132. Parker, Threats to Computer Systems, p. vii.
133. Saltzer, “Ongoing Research,” pp. 11, 12.
134. Saltzer, “Ongoing Research,” p. 12.
135. R. Bisbey II and D. Hollingworth, Protection Analysis: Final Report, ISI/SR-78-13, Information Sciences Inst., May 1978, p. 3; http://csrc.nist.gov/publications/history/bisb78.pdf.
136. C. Weissman, System Security Analysis/Certification Methodology and Results, SDC SP-3728, System Development Corp., 8 Oct. 1973. I have not been able to obtain a copy of this paper.
137. C. Weissman, “Security Penetration Testing Guideline,” NRL tech. memo 5540:082A, Handbook for the Computer Security Certification of Trusted Systems, Naval Research Laboratory, 1995, p. 6; http://chacs.nrl.navy.mil/publications/handbook/PENET.pdf.
138. R.R. Linde, “Operating System Penetration,” Proc. AFIPS Nat’l Computer Conf. and Exposition, ACM Press, 1975, pp. 361, 365.
139. Linde, “Operating System Penetration,” p. 361.
140. Linde, “Operating System Penetration,” p. 363.
141. Linde, “Operating System Penetration,” p. 366.
142. Weissman, “Security Penetration,” p. 34.
143. C.R. Attansio, P.W. Markstein, and R.J. Phillips, “Penetrating an Operating System: A Study of VM/370 Integrity,” IBM Systems J., vol. 15, no. 1, 1976, pp. 103, 115.
144. Attansio, Markstein, and Phillips, “Penetrating an Operating System,” p. 110.
145. Attansio, Markstein, and Phillips, “Penetrating an Operating System,” p. 114.
146. Trusted Computer System Evaluation Criteria, DoD 5200.28-STD, US Dept. of Defense, Dec. 1985, pp. 83, 84; http://csrc.nist.gov/publications/history/dod85.pdf.
147. C. Stoll, The Cuckoo’s Egg, Doubleday, 1989, p. 258.
148. J. Abbate, Inventing the Internet, MIT Press, 2000.
149. Senate Subcommittee on Constitutional Rights of the Committee on the Judiciary and the Senate Special Subcommittee on Science, Technology, and Commerce of the Committee on Commerce, Surveillance Technology, 94th Cong., 1st sess., 23 June, 9 Sept., and 10 Sept. 1975, p. 2.
150. MacKenzie, Mechanizing Proof, p. 175.
151. Senate Subcommittee, Surveillance Technology, p. 41.
152. Senate Subcommittee, Surveillance Technology, p. 43.


153. Senate Subcommittee, Surveillance Technology, p. 56.
154. Senate Subcommittee, Surveillance Technology, p. 57.
155. S.T. Walker, “The Advent of Trusted Computer Operating Systems,” Proc. Nat’l Computer Conf., ACM Press, 1980, p. 655.
156. P.A. Myers, “Subversion: The Neglected Aspect of Computer Security,” master’s thesis, Naval Postgraduate School, 1980, p. 106; http://csrc.nist.gov/publications/history/myer80.pdf.
157. Myers, “Subversion,” p. 35.
158. Myers, “Subversion,” p. 74.
159. Myers, “Subversion,” p. 41.
160. Myers, “Subversion,” pp. 40–41.
161. E.A. Anderson, C.E. Irvine, and R.R. Schell, “Subversion as a Threat in Information Warfare,” J. Information Warfare, vol. 3, no. 2, 2004, p. 58.
162. S. Gorman, “U.S. Team and Israel Developed Worm,” Wall Street J., 1 June 2012; http://online.wsj.com/article/SB10001424052702304821304577440703810436564.html.
163. Karger and Schell, “Thirty Years Later,” p. 122.
164. Anderson, AF/ACS Computer Security, p. 27.
165. J. Markoff, “Robert Morris, Pioneer in Computer Security, Dies at 78,” New York Times, 30 June 2011; www.nytimes.com/2011/06/30/technology/30morris.html; T.R. Shapiro, “Robert Morris, A Developer of Unix, Dies at 78,” Washington Post, 30 June 2011; www.washingtonpost.com/local/obituaries/robert-morris-a-developer-of-unix-dies-at-78/2011/06/30/AG5PwbsH_story.html.
166. House Subcommittee on Transportation, Aviation and Materials of the Committee on Science and Technology, Computer and Communications Security and Privacy, 98th Cong., 1st sess., 26 Sept., 17 Oct., and 24 Oct. 1983, p. 508.
167. House Subcommittee, Computer and Communications Security and Privacy, p. 524.
168. Stoll, Cuckoo’s Egg, p. 255.
169. Stoll, Cuckoo’s Egg, p. 252.
170. Markoff, “Robert Morris,” New York Times.
171. Shapiro, “Robert Morris,” Washington Post.
172. P. Richards, “NSA’s Morris Gives Warnings on Information Encryption,” MIT News, 26 Nov. 1997; http://web.mit.edu/newsoffice/1997/morris-1126.html.
173. Stoll, Cuckoo’s Egg, p. 256.
174. House Subcommittee, Computer and Communications Security and Privacy, p. 456.
175. House Subcommittee, Computer and Communications Security and Privacy, p. 446.
176. House Subcommittee, Computer and Communications Security and Privacy, p. 461.
177. Clarke and Knake, Cyber War, p. 261.
178. D.E. Sanger, “Mutually Assured Cyberdestruction?” New York Times, 3 June 2012; www.nytimes.com/2012/06/03/sunday-review/mutually-assured-cyberdestruction.html.

Edward Hunt is a doctoral student in American Studies at the College of William & Mary in Williamsburg, Virginia. His research interests include the role of government in technological development; the struggle of communities for economic democracy; and the operations of empire, markets, and resistance in recent US history. Hunt has an MS in statistics from the University of Massachusetts at Amherst and an MA in history from the University of Massachusetts at Boston. Contact him at [email protected].
