
CSC 482/582: Computer Security Secure Design


Introduction

Topics

Designing-In Security

Convenience and Security

Security By Obscurity

Open vs. Closed Source

A Game of Economics

Slides adapted from "Foundations of Security: What Every Programmer Needs To Know" by Neil Daswani, Christoph Kern, and Anita Kesavan (ISBN 1590597842; http://www.foundationsofsecurity.com). Except as otherwise noted, the content of this presentation is licensed under the Creative Commons 3.0 License.

In the previous chapter, we covered a number of high-level requirements that more secure systems strive to provide. In this chapter, we will discuss a number of design principles that security architects typically keep in mind when building secure systems.

Designing-In Security

Design features with security in mind
Not as an afterthought
Hard to add on security later

Define concrete, measurable security goals. Ex:
Only certain users should be able to do X. Log the action.
Output of feature Y should be encrypted.
Feature Z should be available 99.9% of the time.

Bad Examples: Windows 98, the Internet

If you are building a system that needs to be secure, it is important to think about security up-front as part of its design; that is, security should be designed into the system from the start. Systems are often built to meet a set of functionality and performance criteria, and software developers then attempt to make them secure only as an afterthought. That is the wrong way to design a system. Experience has shown that it is very hard to add security onto a system after it has been developed.

For example, consider the design of the Windows 98 operating system (OS). Microsoft's highest-priority goals were to pack as much functionality as possible into the OS and to ship the software on time. Providing security and an access control mechanism was not among the primary design criteria, as evidenced by the many security holes in the product. For instance, the OS includes a feature that allows a PC to boot up into a safe or diagnostic mode without the entry of a username or password at all. A user may simply hit the F8 key as the boot sequence starts to have the PC boot up in this diagnostic mode, thereby bypassing the required entry of a username and password. The problem with this is that an attacker who gains physical access to the system can read the contents of the user's entire disk by pressing the F8 key during the boot sequence.

A better design of this feature, one that kept security in mind from the start, would require the user to enter a username and password to enter the diagnostic mode. The only user who should be authorized to boot the PC in diagnostic mode, with full access to the disk, is the system administrator.

Examples:

F8 on Windows boot-up bypasses password entry and puts the computer into diagnostic mode.
TCP/IP has no accountability: anyone is free to send packets, and denial-of-service attacks occur because bad guys can send out many packets at no cost.

Security added as afterthought usually results in a turtle architecture: hard outer shell, but very soft inside.

Examples of concrete, measurable security goals:
1) Only certain users should be authorized to use this feature.
2) This feature should be available 99.9% of the time (i.e., dial-tone availability), even under a denial-of-service attack.
3) Information output by this feature should be encrypted.
4) A log or audit trail should be kept every time this feature is used.
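To make goals like these concrete, here is a minimal Java sketch of goals 1 and 4 together: an authorization check plus an audit-trail entry around a sensitive action. The class name, user names, and logger are hypothetical illustrations, not part of the lecture's code.

import java.util.Set;
import java.util.logging.Logger;

public class SensitiveFeature {
    private static final Logger auditLog = Logger.getLogger("audit");
    /* goal 1: only these users are authorized (hypothetical list) */
    private static final Set<String> AUTHORIZED = Set.of("alice", "bob");

    public void doX(String user) {
        if (!AUTHORIZED.contains(user)) {
            auditLog.warning("DENIED doX to " + user);
            throw new SecurityException("not authorized");
        }
        auditLog.info("doX invoked by " + user);  /* goal 4: audit trail */
        // ... the feature's actual work goes here ...
    }
}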

March 2002: the .NET Server ship date was delayed due to security vulnerabilities. That had almost never happened before, and it could only happen because security was defined as part of the server's feature set.

Security often requires architectural changes.

Systems that add on security later are like turtles. They may employ some type of hard shell added on top of the system to defend it, but if an attacker can find a way around the shell, the system is easy to attack.

Adding security later on may require disabling newer features anyway, so why build features at all unless you build them securely to begin with?

Securing an application later on may require significant changes (e.g., no longer using certain library functions such as strcpy), and changing things too late in the development of the application may introduce 1) risk and 2) pressure to get the changes done before the deadline.

Windows 98 Diagnostic Mode:
Accessed through the 'F8' key when booting
Can bypass password protections, giving an attacker complete access to hard disks & data

Username/Password Security was added as an afterthought

Should have been included at the start, and then required for entering diagnostic mode.

The Internet

All nodes were originally university or military (i.e., trusted), since the network grew out of DARPA.

With commercialization came lots of new hosts, all allowed to connect to existing hosts regardless of whether they were trusted.

Deployed Firewalls: allow a host to let in only trusted traffic.
Loopholes: spoofing IPs, tunneling via allowed ports.

IP Whitelisting & Spoofing

IP Whitelisting: accepting communications only from hosts with certain IP addresses.

IP Spoofing attack: the attacker mislabels (i.e., lies about) the source address on packets and slips past the firewall.

The response to a spoofed packet is sent to the legitimate host, not the attacker.
Requiring multiple communication rounds makes the attack harder.
The attacker may mount a DoS against the legitimate host to prevent it from responding.
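A rough illustration of such whitelisting (a hypothetical sketch, not part of the original SimpleWebServer): the accept loop checks the peer's address against an allowed set. Note that this is exactly the kind of check a spoofed source address can slip past.

import java.net.ServerSocket;
import java.net.Socket;
import java.util.Set;

public class WhitelistingAcceptLoop {
    /* hypothetical list of trusted addresses */
    private static final Set<String> ALLOWED = Set.of("10.0.0.5", "10.0.0.6");

    public void run(ServerSocket serverSocket) throws Exception {
        while (true) {
            Socket s = serverSocket.accept();
            String peer = s.getInetAddress().getHostAddress();
            if (!ALLOWED.contains(peer)) {
                s.close();          // drop connections from unlisted addresses
                continue;
            }
            // processRequest(s);   // handle the trusted client as before
        }
    }
}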

IP Spoofing & Nonces

Nonce: a one-time pseudo-random number.
Attaching a nonce to a reply and requesting that it be echoed back can guard against IP spoofing: the attacker won't know what reply to fake (see the sketch below).
Spoofing is easier for non-connection-oriented protocols (e.g., UDP) than for connection-oriented ones (e.g., TCP).
TCP sequence numbers should be random; otherwise an attacker can predict them and inject packets into the conversation.

M&M Architectures

An inherently insecure system protected by another system mediating access to it.
Ex: Firewalls guard vulnerable systems within.
Ex: The Death Star: strong outer defense, but vulnerable inside.

A hard outer shell should not be the sole defense.
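Returning to the nonce bullet above, here is a minimal Java sketch of the challenge-echo idea; the wire format and method names are made up for illustration. A spoofing attacker who never sees the reply cannot echo the nonce.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.OutputStreamWriter;
import java.net.Socket;
import java.security.SecureRandom;

public class NonceCheck {
    private static final SecureRandom rng = new SecureRandom();

    /* Returns true only if the peer echoes the nonce we sent. */
    public static boolean challenge(Socket s) throws Exception {
        long nonce = rng.nextLong();                     // one-time random value
        OutputStreamWriter out = new OutputStreamWriter(s.getOutputStream());
        out.write("NONCE " + nonce + "\n");
        out.flush();
        BufferedReader in = new BufferedReader(
                new InputStreamReader(s.getInputStream()));
        String reply = in.readLine();                    // a spoofer never sees the nonce
        return reply != null && reply.equals(Long.toString(nonce));
    }
}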

Convenience and Security

Sometimes inversely proportional:
More secure → less convenient
Too convenient → less secure

If security is too inconvenient, users will find the system unusable and work around it insecurely.
Ex: users may write down passwords.

Good technologies increase both: relative security benefit at only slight inconvenience

Until now, we have talked about how to design security into our systems. We must, however, acknowledge that security comes at a price to users. Typically, the more security technology that is deployed, the less convenient a system becomes for users. For example, if we require our users to have passwords and we allow them to use any password they would like, this may lead to security vulnerabilities, since some users will choose passwords that are very easy to guess. On the other hand, if we assign complicated, hard-to-guess passwords to users, then our system will be more secure, but it will also be less convenient: users may forget the passwords we assign them.

The interaction between security and convenience is not one-way, either. If the passwords we assign are complicated enough, users may decide to write them down (even if we tell them not to), possibly in a place that a hacker might be able to access. Passwords written down where a hacker can find them can lead to an overall less secure system than if they were never written down at all. What this shows is that if a security policy is too inconvenient, users may do unexpected things (or simply not listen to us), and we will end up with an insecure system anyhow.

In summary, more security is usually more inconvenient, but more inconvenience does not mean more security. A good security policy or technology will increase both security and convenience, though it is typically hard to achieve both. For example, if we allow users to choose their own passwords, but check that each password is not easy to guess and ask the user to choose another one if it doesn't satisfy our check, we achieve better security at the cost of, hopefully, only a minor inconvenience.
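As a sketch of such a check (the specific rules and the weak-password list here are illustrative assumptions, not a vetted policy):

import java.util.Set;

public class PasswordCheck {
    private static final Set<String> WEAK = Set.of("password", "123456", "qwerty");

    public static boolean acceptable(String pw) {
        if (pw.length() < 8) return false;                   // too short
        if (WEAK.contains(pw.toLowerCase())) return false;   // on a known-bad list
        boolean letter = false, digit = false;
        for (char c : pw.toCharArray()) {
            if (Character.isLetter(c)) letter = true;
            if (Character.isDigit(c)) digit = true;
        }
        return letter && digit;                              // require some mix
    }
}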

(Requiring stronger passwords means that more users will have trouble selecting one. Think about examples of technologies that increase both convenience and security.)

Hypertext Transfer Protocol

[Diagram: the web browser sends "Request document at URL" to the web server; the server returns the requested document.]

HTTP Request

GET http://www.google.com/ HTTP/1.1
Host: www.google.com
User-Agent: Mozilla/5.0 (Windows NT 5.1) Gecko/20060909 Firefox/1.5.0.7
Accept: text/html, image/png, */*
Accept-Language: en-us,en;q=0.5
Cookie: rememberme=true; PREF=ID=21039ab4bbc49153:FF=4

(First line: method, URL, protocol version. Then headers, then a blank line. There is no data for the GET method.)

HTTP Response

HTTP/1.1 200 OK
Cache-Control: private
Content-Type: text/html
Server: GWS/2.1
Date: Fri, 13 Oct 2006 03:16:30 GMT

... (page data) ...

(First line: protocol version, HTTP response code. Then headers, a blank line, and the web page data.)

Simple Web Server (SWS)

To illustrate what can go wrong if we do not design for security in our web applications from the start, consider a simple web server implemented in Java.

Only serves documents using HTTP

Walk through the code in the following slides

SWS: main

/* This method is called when the program is run from the command line. */

public static void main (String argv[]) throws Exception {
    /* Create a SimpleWebServer object, and run it */
    SimpleWebServer sws = new SimpleWebServer();
    sws.run();
}

Now we walk through the code.

main() creates a SimpleWebServer object and calls its run() method. The run() method is just an infinite loop that waits for a connection from a client and then attempts to process the request.

SimpleWebServer Object

public class SimpleWebServer {
    /* Run the HTTP server on this TCP port. */
    private static final int PORT = 8080;

    /* The socket used to process incoming connections
       from web clients */
    private static ServerSocket dServerSocket;

    public SimpleWebServer () throws Exception {
        dServerSocket = new ServerSocket (PORT);
    }

    public void run() throws Exception {
        while (true) {
            /* wait for a connection from a client */
            Socket s = dServerSocket.accept();

            /* then process the client's request */
            processRequest(s);
        }
    }

Here is the SimpleWebServer object. First we initialize a variable that holds the port number the web server should listen on for connections from clients.

Then we initialize a ServerSocket.

Socket: the method of directing data to the appropriate application in a TCP/IP network. The combination of the IP address of the station and a port number makes up a socket.

Think of this like an electrical socket. A web server and a web client both have a virtual power strip with many sockets on it. A web client can talk to a server by selecting one of its sockets, and then selecting a server socket and plugging a virtual wire into each end.

The run() method has an infinite loop waiting for a connection from a client. The call to ServerSocket's accept() method returns a Socket object that corresponds to a unique socket on the server. This allows the server to communicate with the client.

Once the communication is established, the client's request is processed.

SWS: processRequest (1)

/* Reads the HTTP request from the client, and responds with
   the file the user requested or an HTTP error code. */

public void processRequest(Socket s) throws Exception {
    /* used to read data from the client */
    BufferedReader br =
        new BufferedReader (new InputStreamReader (s.getInputStream()));

    /* used to write data to the client */
    OutputStreamWriter osw =
        new OutputStreamWriter (s.getOutputStream());

    /* read the HTTP request from the client */
    String request = br.readLine();
    String command = null;
    String pathname = null;

processRequest() takes the client socket as input. It uses this socket to create BufferedReader and OutputStreamWriter objects. Once these communication objects are created, the method attempts to read a line of input from the client using the BufferedReader. We expect this line of input to be an HTTP GET request (as discussed earlier).

SWS: processRequest (2)

    /* parse the HTTP request */
    StringTokenizer st = new StringTokenizer (request, " ");
    command = st.nextToken();
    pathname = st.nextToken();

    if (command.equals("GET")) {
        /* if the request is a GET, try to respond with the
           file the user is requesting */
        serveFile (osw, pathname);
    }
    else {
        /* if the request is NOT a GET, return an error saying
           this server does not implement the requested command */
        osw.write ("HTTP/1.0 501 Not Implemented\n\n");
    }

    /* close the connection to the client */
    osw.close();
}

The StringTokenizer object is used to break up the request into its constituent parts: the command (GET) and the pathname of the file the client would like to download.

If the command is a GET, we call the serveFile() method; otherwise we issue an error. Then we close the connection to the client.

SWS: serveFile (1)

public void serveFile (OutputStreamWriter osw,
                       String pathname) throws Exception {
    FileReader fr = null;
    int c = -1;
    StringBuffer sb = new StringBuffer();

    /* remove the initial slash at the beginning
       of the pathname in the request */
    if (pathname.charAt(0) == '/')
        pathname = pathname.substring(1);

    /* if there was no filename specified by the
       client, serve the "index.html" file */
    if (pathname.equals(""))
        pathname = "index.html";

The first if removes the initial slash at the beginning of the pathname, and the second if sets the file to be downloaded to index.html if no other file was specified.

SWS: serveFile (2)

    /* try to open file specified by pathname */
    try {
        fr = new FileReader (pathname);
        c = fr.read();
    }
    catch (Exception e) {
        /* if the file is not found, return the
           appropriate HTTP response code */
        osw.write ("HTTP/1.0 404 Not Found\n\n");
        return;
    }

Now the method attempts to open the file and read it into the web server's memory. If the FileReader object is unable to open the file and read a byte from it, it issues an error message.

SWS: serveFile (3)

    /* if the requested file can be successfully opened
       and read, then return an OK response code and
       send the contents of the file */
    osw.write ("HTTP/1.0 200 OK\n\n");
    while (c != -1) {
        sb.append((char)c);
        c = fr.read();
    }
    osw.write (sb.toString());
}

If the file was successfully opened, the method sends the "HTTP/1.0 200 OK" message and then enters a while loop that reads bytes from the file and appends them to a StringBuffer until the end of the file is reached. Then this StringBuffer is sent to the client.

Security in Software Requirements

Robust, consistent error handling
Share requirements w/ QA team

Handle internal errors securely: don't provide error messages to potential attackers!

Validation and Fraud Checks
Security or Bust Policy

Error Handling Requirements

Vulnerabilities often due to bad error handling

Example: DoS on SWS makes it unavailable

Just send a carriage return as the first message instead of a properly formatted GET message

Causes exception when breaking input into tokens
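Here is a minimal sketch of what that malformed request looks like on the wire, assuming SimpleWebServer is listening on localhost:8080:

import java.io.OutputStreamWriter;
import java.net.Socket;

public class CrashProbe {
    public static void main(String[] args) throws Exception {
        Socket s = new Socket("localhost", 8080);
        OutputStreamWriter out = new OutputStreamWriter(s.getOutputStream());
        out.write("\r\n");   // no method, no pathname: just a blank line
        out.flush();
        s.close();           // the unpatched server dies tokenizing ""
    }
}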

DoS on SWS Example

processRequest():

/* read the HTTP request from the client */
String request = br.readLine();   // empty string
String command = null;
String pathname = null;

/* parse the HTTP request */
StringTokenizer st = new StringTokenizer (request, " ");
command = st.nextToken();   // EXCEPTION: no tokens!
                            /* SERVER CRASHES HERE: DENIAL OF SERVICE! */
pathname = st.nextToken();

Trace the code, assuming a CR sent from the client.

We read the line of input from the client. When we tokenize, the line command = st.nextToken(); results in an exception. Control is returned to run(), which does not handle the exception; then control is returned to main(), which does not handle the exception either. Java terminates the application.

How Do We Fix This?

Requirement: The web server should immediately disconnect from any web client that sends a malformed HTTP request to the server.

The programmer needs to carefully handle exceptions to deal with malformed requests.

Solution: Surround the susceptible string-tokenizing code with a try/catch block.

Try/Catch Solution

/* read the HTTP request from the client */
String request = br.readLine();
String command = null;
String pathname = null;

try {
    /* parse the HTTP request */
    StringTokenizer st = new StringTokenizer (request, " ");
    command = st.nextToken();
    pathname = st.nextToken();
}
catch (Exception e) {
    osw.write ("HTTP/1.0 400 Bad Request\n\n");
    osw.close();
    return;
}

Close the connection to the client, rather than crash the server.

Sharing Requirements with QA

Both developers & testers should get the requirements.

Should have test cases for security too: Does it malfunction when provided bad input?

Ping-of-Death: sending a single malformed packet of data can cause a server to crash.
Ex: the DoS attack on SimpleWebServer.
Ex: the Nokia GGSN crashes on a packet with the TCP option field set to 0xFF.

Handling Internal Errors Securely

Error messages and observable behavior can tip off an attacker to vulnerabilities.

Fault Injection: Providing a program with input that it does not expect (as in the DoS attack against SimpleWebServer) and observing its behavior

Ethical hackers often hired to find such bugs

Validation and Fraud Checks

Requirements should specify which error cases & threats to handle.

Credit Card Example:
Luhn Checksum: the sum of the digits, with every second digit doubled, must be divisible by 10 for the number to be valid (see the sketch below)
CVC: guards against fraudsters who have stolen the card number but don't know the CVC
A full credit-card check might be too costly and intrusive

Measurable Security Requirements

Access Control Security: Only certain users can perform specified functions
Auditing: Maintain a log of users' sensitive actions
Confidentiality: Encrypt certain functions' output
Availability: Certain features should be available almost always (99.99%) or within a specified time

Include these requirements in design docs!
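As a concrete sketch of the Luhn checksum from the Validation and Fraud Checks slide (a validity check only; a real system would do much more):

public class Luhn {
    public static boolean valid(String number) {
        int sum = 0;
        boolean dbl = false;               // double every 2nd digit from the right
        for (int i = number.length() - 1; i >= 0; i--) {
            int d = number.charAt(i) - '0';
            if (dbl) { d *= 2; if (d > 9) d -= 9; }
            sum += d;
            dbl = !dbl;
        }
        return sum % 10 == 0;              // e.g. valid("4111111111111111") == true
    }
}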

Security or Bust

Should not ship code unless it's secure.

Advantage gained by launching earlier could be lost due to vulnerabilities that tarnish brand and lead to lost revenue

Ex: Microsoft delayed the ship of .NET Server in '02 because security requirements were not met by the code-freeze deadline.

Security by Obscurity

Trying to be secure by hiding how systems and products work (to prevent the info from being used by an attacker).

Ex: The military releases information on a "need to know" basis.

May be necessary, but not sufficient to prevent determined attackers.

Flaws in the Approach

What assumptions should we make about the adversary?
Does the adversary know our algorithms, or not?
Are algorithms shipped only as binaries really secret?

Attackers can probe for weaknesses:
reverse engineer executables
observe behavior in normal vs. aberrant conditions (use fault injection)
Fuzzing: trying random input strings to find an exploit
blackmail insiders

Now that we have added security requirements to the requirements documents of our information systems, let's talk about how we should go about implementing mechanisms that enforce those security requirements. Many organizations practice security by obscurity; that is, they attempt to keep things secure by keeping them secret. For example, companies keep many trade secrets, and sometimes don't even tell their customers how their products work. Military organizations only disseminate information to people on a need-to-know basis. In both cases, an organization is trying to keep information secure by hiding it from others. While it is possible to achieve some level of security by hiding information, that is, through obscurity, it may not always make sense to assume that the attacker does not know how the system works.

For example, one might assume that a user will not be able to understand how a program works because it is deployed as an executable binary file (i.e., a .exe file). However, an attacker can easily disassemble, decompile, or reverse engineer the executable. Also, the attacker can derive information about how the program functions simply by observing its behavior under normal conditions, and/or its behavior on inputs that the attacker selects (a simple example of such probing appears below). In addition to the technical approaches above, the attacker may also be able to blackmail or coerce those who do know how the system works into disclosing details about it.
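As a simple example of such probing, here is a crude fuzzing sketch that throws random inputs at a server, assumed here to be SimpleWebServer on localhost:8080, and notices when it stops answering:

import java.io.OutputStream;
import java.net.Socket;
import java.util.Random;

public class TinyFuzzer {
    public static void main(String[] args) throws Exception {
        Random rnd = new Random();
        for (int trial = 0; trial < 1000; trial++) {
            byte[] junk = new byte[1 + rnd.nextInt(64)];
            rnd.nextBytes(junk);                        // a random input string
            try (Socket s = new Socket("localhost", 8080)) {
                OutputStream out = s.getOutputStream();
                out.write(junk);
                out.write('\n');
                out.flush();
            } catch (Exception e) {
                System.out.println("server unreachable at trial " + trial);
                break;                                  // did the last input kill it?
            }
        }
    }
}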

To be conservative, we may therefore want to assume that the attacker knows exactly how the system to be attacked works. As such, we may want to avoid practicing security by obscurity if a better option exists. In the following, we will talk about how it may be possible to build a secure system whose design could be public knowledge, where the security of the system does not depend upon hiding design details, but instead depends on certain keys being secret. By keys, we mean relatively short sequences of bits (e.g., 1024-bit keys). It is usually much easier to keep a few keys secret than to keep all of the information about how a system functions secret.

Interesting article about the topic at: http://news.com.com/2010-1071-980462.html?tag=fd_nc_1

Secret Keys

Kerckhoffs' doctrine (1883): assume that the method used to encipher data is known to the opponent; security must lie in the choice of key.
Assume the worst case!
Obscurity alone is not sufficient.

Compromised key can be changed without re-designing the system.
Key is smaller and easier to keep secret.

The concept of assuming that the attacker knows how the system functions, and that the security of the system should depend upon a key, goes back to Kerckhoffs' doctrine of 1883. We will see various algorithms whose details are completely public, yet secure, in the next chapters on applied cryptography. For now, just keep in mind that hiding the details of how your system works does not typically provide an acceptable level of security. In fact, if the design of a system is not reviewed by a third party, it is more likely than not that it will contain security holes that the original designer did not conceive of but that will be relatively obvious to third parties. (This is true unless, of course, the original designer is an experienced, expert cryptographer!)

SWS Obscurity

Just distributing the Java bytecode of SWS (and not the source code) is not enough for security.

Can be disassembled or decompiled (e.g. Mocha, Jad) to produce rough source code

Even disassembling can reveal the DoS exploit in the vulnerable tokenization process.

Disassembling SWS

public void processRequest(java.net.Socket) throws java/lang/Exception
  Code:
     0: new 25;            //class BufferedReader
     3: dup
     4: new 26;            //class InputStreamReader
     7: dup
     8: aload_1
     9: invokevirtual 27;
    12: invokespecial 28;
    15: invokespecial 29;
    18: astore_2
    19: new 30;            //class OutputStreamWriter
    22: dup
    23: aload_1
    24: invokevirtual 31;
    27: invokespecial 32;
    30: astore_3
    31: aload_2
    32: invokevirtual 33;
    35: astore 4
    37: aconst_null
    38: astore 5
    40: aconst_null
    41: astore 6
    43: new 34;            //class StringTokenizer
    46: dup
    47: aload 4
    49: ldc 35;            //String
    51: invokespecial 36;
    54: astore 7
    56: aload 7
    58: invokevirtual 37;
    61: astore 5
    63: aload 7
    65: invokevirtual 37;
    68: astore 6
    70: aload 5
    72: ldc 38;            //String GET
    74: invokevirtual 39;
    77: ifeq 90
    80: aload_0
    81: aload_3
    82: aload 6
    84: invokevirtual 40;
    87: goto 101
    90: aload_3
    91: ldc 41;
    93: invokevirtual 42;
    96: goto 101
    99: astore 8
   101: aload_3
   102: invokevirtual 44;
   105: return
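A listing like the one above can be generated with the JDK's own disassembler by running javap -c SimpleWebServer against the compiled class; decompilers such as Mocha or Jad go further and produce rough Java source.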

Things to Avoid

Don't invent your own encryption algorithm!

Don't embed keys in software! Nor in the Windows Registry, which is readable by all.

Don't Forget Code Reuse: reuse well-tested software known to be reliably secure instead of doing the same thing from scratch.

The idea of not relying on security through obscurity entails many rules of thumb that one should keep in mind when designing secure systems. Here, we state a few such rules of thumb and give examples of things that designers should stay away from.

For instance, system designers should not attempt to invent their own algorithms for encryption. Designing encryption algorithms is a tricky and challenging business that should be left to cryptographers. If someone does decide to invent a new encryption algorithm, they should keep in mind that it will not be secure simply because no one else knows how it works. A good cryptanalyst can take advantage of modern cryptanalytic techniques to break the security of most ad hoc encryption schemes. If one does attempt to invent a new encryption algorithm, its design should be reviewed by others, in particular by the community of cryptographers. The security of a good encryption scheme should depend on well-chosen secret keys, not on the obscurity of the algorithm itself.

In addition, it is not recommended that one develop new implementations of existing encryption algorithms. Coming up with new encryption algorithms that are secure is hard, and getting the implementation of a cryptographic algorithm (new or old) right is just as hard: a slight bug in the implementation can be the door to a serious security hole. There are many well-known implementations of encryption algorithms (we will cover some of these later in the course), and a system designer should opt to reuse an existing, tested implementation that has already been looked over and tested by many smart people. While reuse of software is generally a good idea, it is an especially good idea when it comes to encryption and other cryptographic algorithms.
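In that spirit of reuse, here is a minimal sketch that derives an encryption key from a user's password using the JDK's built-in PBKDF2 implementation rather than anything hand-rolled; the salt size and iteration count are illustrative choices:

import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;
import java.security.SecureRandom;

public class KeyFromPassword {
    public static byte[] derive(char[] password) throws Exception {
        byte[] salt = new byte[16];
        new SecureRandom().nextBytes(salt);
        // a real program must also store the salt; it need not be secret
        PBEKeySpec spec = new PBEKeySpec(password, salt, 100_000, 128);
        SecretKeyFactory f = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256");
        return f.generateSecret(spec).getEncoded();   // 128-bit key material
    }
}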

Another rule of thumb that follows from the weakness of security by obscurity is that one should not, whenever possible, embed secret keys in software. Keep in mind that software is compiled into binary code, and binary code can be disassembled, decompiled, and/or reverse engineered. Secret keys will not stay secret simply because binary files are hard to read. In fact, a good secret key is typically a random sequence of bits, while binary code containing machine instructions has predictable patterns. So if you attempt to hide a secret key in a binary executable, the key will probably exhibit more randomness (entropy) than the rest of the code. As a result, an encryption key buried in a binary executable can be easily discerned precisely because it is a good key (a random sequence of bits) that is sufficiently long (another property of a good key). The moral of the story is that secret keys should not be stored in program code if possible.

A secure program will also not attempt to achieve security by simply storing secret information in hard-to-reach places. For instance, some programs attempt to protect secret information by storing it in the Windows Registry, the part of the Windows operating system that applications can use to store configuration information. Most users of Windows PCs do not know how to read the information in the registry, but hackers certainly will. A program that stores a password in the Windows Registry in the hope of keeping it secret is doomed to be hacked: a hacker can simply run the regedit command at the Windows prompt, or use the Windows API, to read the password out of the registry. So if information is to be stored in the Windows Registry for security purposes, it should be encrypted. Of course, based on our prior rule of thumb, the encryption key should not be stored in a susceptible location (such as the program that uses the key) either. The encryption key should ideally be derived from a password that the end user does not write down anywhere and only enters into the program that needs it when necessary.

Open vs. Closed Source

Is open-source software secure?
Open:
Some people might look at the security of your application (if they care)
They may or may not tell you what they find
Closed:
Not making code available does not hide much
You still need diverse, security-aware code reviews
A business decision: not a security one!

There are a plethora of companies that need to secure their software, are aware of the security-by-obscurity problem, and decide to make their software open source in order to secure it. When a company makes a piece of software open source, it makes the source code available to the entire world for review. The reasoning is that if a piece of software can only be made secure by subjecting it to an open review process, why not make it open to the entire world, so that people can point out problems and the company can simply fix them? While the company might have good intentions, it is making a yet more detailed set of assumptions if it believes that it can create more secure software by open-sourcing it.

The first additional assumption the company would be making is that, by open-sourcing its software, others would actually look at the source code, and specifically at the sections of code that might lead to security flaws. If the source code is hard to read, not very understandable, or uninteresting, it will probably not be read at all. In addition, an open-source developer who does look at the code might only be interested in the piece whose functionality they want to modify for their own purposes: they might want to change the GUI, or adapt some part of the functionality to serve a specific customer request. Security may or may not be on the open-source developer's agenda. Finally, even if the open-source developer is interested in the security of the program, there is no assurance that he or she will actually report any security vulnerabilities to the author of the code; the developer may be malicious, looking to attack some deployed version of the software in question. For all these reasons, the simple act of making a piece of software open source will not automatically increase its security.

On the other hand, keeping a piece of software proprietary (closed source) does not ensure its security either, for all the reasons we discussed in the security-by-obscurity section. Releasing only the binary code of an application does not hide much from the attacker, who can still exploit security holes by studying the behavior of the running program. Even if a company keeps its code proprietary, the code should be reviewed by security experts looking for vulnerabilities.

What this all means is that if you want to ensure the security of an application, you need to spend time reviewing the code for security vulnerabilities. You can't simply open-source it in the hope that others will find security flaws for you, and you can't hope that it will be secure just because you don't release the source code. You need to spend time reviewing the security of your application yourself if you indeed want it to be secure.

One might argue that by open-sourcing a piece of software, you are making the hacker's job a little easier. This is possible, but a determined hacker doesn't need the source code, and this does not contradict what we said about security by obscurity: hiding the source code of an application does not make it much harder to attack.

At the end of the day, the decision to open-source a piece of software or keep it closed source should be a business decision. Is open-sourcing or keeping the code proprietary more complementary with the business model under which the software is intended to generate revenue?

(Good discussion of this topic in Building Secure Software by John Viega and Gary McGraw.)

A Game of Economics

All systems are insecure; the question is: how insecure?
What is the cost to break the system? What is its weakest link?

For every $ that the defender spends, how many $ does the attacker have to spend?

If (Cost to break system >> Reward to be gained)
Then the system is secure
Otherwise the system is NOT secure

Raise the bar high enough.
Security is about risk management.

Another important thing to realize before embarking on the design of a secure system is that security can be viewed as a game of economics. In fact, all systems can be considered insecure, and the real question becomes how insecure. Specifically, the insecurity of an application can be measured by the expected cost an attacker would need to incur to carry out an attack. This expected cost is made up of the amount of time the attacker needs to spend, the materials and technology the attacker would need to acquire, and the risk that the attacker thinks he or she might get caught. When we study applied cryptography in the next chapter, we will see that the amount of security any particular cryptographic algorithm offers is partially a function of how many bits long the secret key used with that algorithm is: each additional bit of key doubles the amount of time an attacker may have to spend on a brute-force attack. Another interesting question to ask from a security standpoint is: for every dollar that the defender spends, how many dollars does the attacker need to spend? While it is interesting to think about security in these terms, it is typically hard to come up with quantitative answers to these questions. Nevertheless, posing them is useful for getting our heads around the nature of the problem of security.

In this view of the world, where we consider all systems to be insecure, the next relevant question is: what does it take to make a system secure enough? We might define "secure enough" to mean that the expected cost to break into the system is much greater than the potential reward to be gained by the attacker. If a system is secure enough, then there is not enough economic incentive for the attacker to break into it. In chapters 6 through 8 of this course, we will study a number of defenses one can employ to make a system secure enough against an average hacker (where "average" means that the attacker does not have any special vendetta against, or special reward for breaking into, your system).

The bottom line is that security is about risk management. How much technology does one need to employ to manage the risk that an attack might achieve its goals? If no technology is employed, the risk that an attack will be successful is high. If some technology is employed, the risk that the average hacker will successfully mount an attack might be drastically reduced. If a lot of technology is employed (and it is targeted at addressing the most relevant threats, and there aren't any physical or policy-oriented holes, etc.), then it is likely that all but the most sophisticated hackers will be unable to mount a successful attack, and the risk is low.

Economics Example

Two ways to break a system with an L-bit key:
1) Brute-force search for the key, at a cost of C per try.
2) Pay off an employee (earning salary S yearly for Y years, at interest rate r) for the key, at a cost of P = Σ_{i=0}^{Y} S(1+r)^(Y-i) dollars.

Brute-Force Total Cost:
On average, the attacker must try half the keys:
Cost = (C/2)(2^L) = 2^(L-1) C

Ex: Say P = $5 million, L = 64, and C = $3.4e-11 per try; the brute-force cost is then > $300 million (better to pay off the employee).

Break-even point: 2^(L-1) C = Σ_{i=0}^{Y} S(1+r)^(Y-i)
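A quick sketch checking the arithmetic above. The salary S, interest rate r, and term Y below are hypothetical values chosen so the payoff lands near the slide's P of roughly $5 million, and C is taken in dollars per try so the $300 million figure works out.

public class BreakEven {
    public static void main(String[] args) {
        int L = 64;
        double C = 3.4e-11;                          // dollars per key tried
        double bruteForce = Math.pow(2, L - 1) * C;  // try half the keyspace on average
        System.out.printf("brute force: $%.3e%n", bruteForce);  // ~ $3.1e8

        double S = 1_000_000, r = 0.05;              // hypothetical salary and interest
        int Y = 4;
        double payoff = 0;
        for (int i = 0; i <= Y; i++)                 // P = sum of S(1+r)^(Y-i)
            payoff += S * Math.pow(1 + r, Y - i);
        System.out.printf("payoff:      $%.3e%n", payoff);      // ~ $5.5e6: cheaper
    }
}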

Good Enough Security

Alpha Version: security should be just good enough.
You won't have much to protect yet.
It is difficult to predict the types of threats.
But still set up a basic security framework and hooks.

Beta Version: throw away the alpha.
Design in security to deal with threats discovered during testing.

Key Points

SimpleWebServer: need to design in security from the start to be successful.

Economics Game (cost >> reward for attacker)

Good Enough Security: Design Incrementally From Beginning
