
DRAFT 1.00

Department of Energy
Cyber Security Grass Roots Community

Talaris Grass Roots Roundtable on Cyber Security

Breakout Session Notes

Authors:
Barbara Endicott-Popovsky
Sarah Alkire
Catherine Call
Julia Narvaez
Thomas Ng
Lucas Reber
Tamara Stewart

Seattle
November 1, 2009


Contents

Executive Summary
1. Trust and Trustworthiness
   1.1. Executive summary
   1.2. Session objectives
   1.3. Discussion focus points
      Human trust models
      Computer Systems Trust
      Trustworthiness and Risk relationship
      Trust and Assurance relationship
      Systems trusting other systems
      Identity verification
      Data
      Formula for trust
   1.4. Recommendations
2. Complex Systems
   2.1. Executive summary
   2.2. Session objectives
   2.3. Discussion focus points
      How we got here
      Modeling and Simulation
      Keys to using trust/complexity
      Composing Systems
      Design for a hostile environment
      Engineering discipline
      Wake up already
   2.4. Recommendations
3. Scientific Evaluation
   3.1. Executive summary
   3.2. Session objectives
   3.3. Discussion focus points
      Lack of Scientific Evaluation in Computer Science
      Lack of Common Guidelines for Scientific Evaluation in Computer Science
      Educational Efforts
      Implementing a Grand Challenge Event or Mini-Challenge Event
      Encouraging Repeatability
      Common Datasets for Research
   3.4. Recommendations
Appendix
   Appendix A. Trust and Trustworthiness Breakout Session Notes
   Appendix B. Complex Systems Breakout Session Notes
   Appendix C. Scientific Evaluation Breakout Session Notes


Executive Summary

The Round Table meeting of the DOE Cyber Security Grass Roots Community was held at the Talaris Conference Center in Seattle, Washington, on August 27 and 28, 2009. Perspectives were shared by researchers and practitioners from the Department of Energy (DOE), other government agencies, academia, and industry. The primary purpose of the meeting was to discuss ideas that have the potential to transform cybersecurity research through a science-based approach.

Trust Executive Summary:
The discussion goal was the exploration of new metaphors, models, and architectures for trust and trustworthiness to support emerging social and business interaction modes of cyber-physical systems. The starting point was the discussion of human trust models, which led to analyzing factors that affect trust between humans and computer systems, as well as trust among computer systems. Focus points discussed include:
• Trustworthiness and risk relationship, which are context dependent
• Trust and assurance relationship: the level of trust is derived from the level of assurance
• Identity verification to assess confidence, risk, and trust
• Trust from the data/data sender's perspective to protect the integrity and use of data
Conclusions emphasized the importance of risk analysis to cyber-security and of more rigorous approaches across the board. Recommended areas of research include how the human trust model bears on what and how we build systems, exploration of an end-to-end trust model, system usability, the relationship between trust and decision making, and trust reevaluation.

Complexity Executive Summary:
The discussion on complexity centered on how to take advantage of complex systems to make them more secure rather than being overwhelmed by their complexity, and on how to tie in trustworthiness and scientific-evaluation principles to better understand complex systems in accordance with the scientific method. Keys to using trust/complexity include sound architecture, modularity, and abstraction. Complexity does not reveal what is possible, only what is not; the only way to prove something is to simulate it. Furthermore, the group wants to address the following research needs:
• Principled development
• Systems engineering in the cyber domain
• Data-oriented approaches
• Educational awareness
Primarily, the discussion group found that the industry needs a cultural/paradigm shift, an interdisciplinary scientific approach, and a break in the chain of irresponsibility, all of which would lead to systems that tolerate internal and external corruption while operating securely in a hostile environment.

Scientific Evaluation Executive Summary:
The discussion on scientific evaluation focused on encouraging scientific evaluation in the classroom, adopting guidelines throughout the field, implementing a grand challenge (using a methodology much like the DARPA Grand Challenges), and establishing repeatable and reliable experiments that can be used as educational resources. Suggestions that were proposed and agreed upon included:
• Establish norms across the board
• Appoint a jury of peers to critique research
• Research applicable courses and recommend those that meet the guidelines
• Include guidelines from the list of books
• Create a collective space for literature, problems, applications, case studies, etc. from the field
• Offer forums at conferences
• Create a common scientific evaluation guideline for use across the discipline
The primary conclusion reached in the discussion was that Grass Roots efforts should give the sense that there is a community to encourage change.


Background

The Round Table meeting of the DOE Cyber Security Grass Roots Community was held at the Talaris Conference Center in Seattle, Washington, on August 27 and 28, 2009. Perspectives were shared by researchers and practitioners from inside the Department of Energy (DOE), from other government agencies, and from academia and industry. The primary purpose of the meeting was to discuss ideas that have the potential to transform cybersecurity research through a science-based approach.

The main focus areas of the roundtable were discussed in three breakout sessions:
• Trust and Trustworthiness
• Complex Systems
• Scientific Evaluation

1. Trust and Trustworthiness

Moderators: Aaron Tombs, Alex Nicoll
Note takers: Julia Narvaez, Tamara Stewart

1.1. Executive summary
The discussion goal was the exploration of new metaphors, models, and architectures for trust and trustworthiness to support emerging social and business interaction modes of cyber-physical systems. The starting point was the discussion of human trust models, which led to analyzing factors that affect trust between humans and computer systems, as well as trust among computer systems. Focus points discussed include:

• Trustworthiness and risk relationship, which are context dependent
• Trust and assurance relationship: the level of trust is derived from the level of assurance
• Identity verification to assess confidence, risk, and trust
• Trust from the data/data sender's perspective to protect the integrity and use of data

Conclusions emphasized the importance of risk analysis to cyber-security and of more rigorous approaches across the board. Recommended areas of research include how the human trust model bears on what and how we build systems, exploration of an end-to-end trust model, system usability, the relationship between trust and decision making, and trust reevaluation.

1.2. Session objectives
Discussion of the following topics:

• Which new metaphors/models/architectures for trust/trustworthiness should we explore to support newer forms of social, business, and other interactions, particularly with the addition of cyber-physical systems?
• Are there things we have tried or are using that may be more effective?
• What concrete actions can we take to improve the effective use and study of trust/trustworthiness in our discipline?
• If we do all this, is it sufficient?

1.3. Discussion focus points
The discussion focused on the following aspects of trust:


Human trust models

What is needed to build, maintain, and validate trust? Trust relies on level of perception (a human attribute), which contributes to confidence. The following are aspects of trust when humans interact among themselves or with systems:

• Trusted systems: Characterized by high confidence. There is the illusion that we know who is in the conversation, e.g., two people in a face-to-face conversation. In such a conversation there is continuous assertion, and it might be affected by an emotional experience. Characteristics:
  o Self-contained and simple
  o Adaptive

• Belief Trust: Belief in somebody's integrity or ability. Some people might not want to be in a position of believing that somebody is trustworthy; they would rather have assurance. If they are distanced from belief trust, then they have less reliance on the analysis of trustworthiness.

• Reliance Trust: Trust in the form of the DoD trusted system. You trust a system that you rely on to perform a task. This has different mathematical properties than belief trust.

• Transitivity: Trust can be transferred along a chain: when A trusts B and B trusts C, A may come to trust C, e.g., an implementation of an email filter where people trust each other. Trust is transferable, but it is necessary to define under what conditions transitivity holds.

• Lessons from physical security:
  o Trust is a conditional state, situational and dynamic. The dynamics can change very quickly.
  o Trust can be measured. People need to know how much trust there should be and how much there actually is in order to make adjustments. People decide to take action when the assurance level is "X".

• Spectrum of characteristics that could be combined into a level of trustworthiness:
  o Combination of physical characteristics
  o Expand conditional (based on "cultural" norms/standards/verification) models to include tech norms/standards and human norms/cultural standards, because cyber war is against people, using technology as the means.

• Risk assessment: The willingness (for a human) to extend "trust" is based on the level of assessed "risk" for a particular situation.

Reference: Ken Thompson, "Reflections on Trusting Trust"

Computer Systems Trust

A combination of several dimensions provides trust: human factors, technology, system dependency, and intuition.

Context: Trust depends on the purpose of the system, e.g., a financial system or a health system requires different trust levels than other systems.

Complexity: When a system is too complicated, it is difficult to know everything about it. You decide how much you need to know in order to accomplish your objective. The system will not be perfect; you need to be prepared for failure so that the failure does not jeopardize the objective that you are trying to accomplish with the system.

Fidelity: Fidelity is determined by how the information is presented and how intuitive it is, no matter how complex the system is. For example, in the military, the general says: "do not give it to me complex" (Common operational picture).

Culture dependency: Trust is culture dependent. Cultural experience creates such confidence in how something works that it becomes intuitive, e.g., turning a light switch on or off.


Analogy of how humans recognize images (vision recognition methods): Humans can filter an image. Humans do not see the whole picture, but certain points. They do not look for boundaries, but for particular points. In a similar way, there might be certain discrete characteristics that provide trust.

Modeling of factors: There is a combination of factors depending on the system and the decision process. For physical systems, the containment is important. For cyber systems, human factors and physical factors are applicable.

What are the alternatives?
  o Layers of protection: Layers of protection can be implemented if an individual chooses, for example, using a virtual machine to access a banking system.
  o Visual cues: The system should have accurate and intuitive visual cues.

Trustworthiness and Risk relationship

What does trust mean in decision making? Do we even care about trustworthiness if there is no (or little) risk to the mission?

• Trustworthiness and risk relationship is context dependent: To contextualize trust, it is necessary to understand the risk associated with the kind of trust that the individual is trying to extend; e.g., is one person's "trust threshold" the same as another's?

• Risk analysis: Apply an established field by bringing mathematical analysis of risk to the cyber-security world. The extensive literature on risk and decision making in other fields could be very fruitful.

• Risk assessment: Zero risk is unachievable. What is the risk compared with the consequences of failure?

• Derivative value: Trust is a derivative value, a function of:
  o Risk value: quantifiable
  o Psychological impact: not quantifiable

Trust and Assurance relationship

Trust vs. assurance: Some people prefer to talk about assurance because assurance is quantifiable.

Testing for assurance: Complete testing is an instrument to provide assurance. Given a level of assurance, you can "derive" a level of "trust"; e.g., a formally evaluated chip with known performance can be a "trusted" device. However, if it is not installed correctly, or installed and then operated with incorrect signals, the deployed result can lose all the built-in trust. Does this imply that the deployment instance needs to be evaluated?

Systems trusting other systems

In the OSI model, layers provide services to other layers. What about the other layers, where we develop systems to trust other objects? Do we design a system to trust or not to trust?

• Binary trust approach: Is trust a binary decision? Implementations can be other than binary. For example, the voting system in computers is designed not to develop binary trust. It is based on four special-purpose systems, architecturally designed by two different companies. To sustain one vote, there are four different systems that receive different inputs.

• Trust more or less approach: There are ways to implement a system so that trust is not only yes or no, but more or less. The concept for decision making is to trust more or less.

• Use of metrics: Use well-specified metrics to avoid getting lost in details. E.g., in the Orange Book and the Red Book there is a Trusted Database Interpretation. However, the application of metrics is questionable due to the difficulties of interpretation. For example, if one were to say that a trust level of 85% is the goal, does it mean that the database is going to trust the OS 85%? Or is the database going to ignore 15% of the authentication information that the OS is passing it?

• End-to-end trust: Trust is a conscious decision; both parties decide the measures to put in place:
  o Evidence: There is the issue of gathering and processing evidence. Systems should provide very solid evidence to allow an objective decision.
  o Context: For example, given a set of 5 machines, the same input information might lead to different trust decisions by each machine based on the context of the recipient machine.

• Assurance model: There are two models:
  o Static assurance: There is a mechanism in which a machine signs off on something that has been evaluated under some circumstances. Under the same circumstances, other machines might trust it since it has already been signed off on.
  o Dynamic assurance: We are facing a situation where dynamic assurance is used to make automatic trust assessments instantly: a machine will need to make a decision based on specific conditions in order to make an automatic trust assessment. There is an important historic component.
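The static/dynamic distinction above can be sketched in code. This is an illustrative sketch only; the names (`Evaluation`, `static_trust`, `dynamic_trust`) and the freshness threshold are assumptions, not anything specified in the session.

```python
# Illustrative sketch of static vs. dynamic assurance (not from the session notes).
# Static assurance: trust a prior sign-off if the evaluated conditions still hold.
# Dynamic assurance: additionally re-check the historic component (age of sign-off).
from dataclasses import dataclass
import time

@dataclass
class Evaluation:
    """A sign-off: an evaluator vouches for a component under stated conditions."""
    evaluator: str
    conditions: frozenset  # e.g. frozenset({"airgapped", "firmware-v2"})
    issued_at: float       # epoch seconds

def static_trust(evaluation, current_conditions):
    # Trust only while every evaluated condition is still among the current ones.
    return evaluation.conditions <= current_conditions

def dynamic_trust(evaluation, current_conditions, max_age_s=86_400.0, now=None):
    # Make the assessment at decision time: conditions must hold AND the
    # sign-off must still be fresh enough to count.
    now = time.time() if now is None else now
    fresh = (now - evaluation.issued_at) <= max_age_s
    return fresh and static_trust(evaluation, current_conditions)
```

Under the static model, an old sign-off is trusted forever while its conditions hold; under the dynamic model, the same sign-off lapses once it is older than the chosen freshness window.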

Identity verification

Look at the event and the information about the real person (the human) to assess confidence, risk, and trust. Even though an individual authenticates in a system, there is no assurance that this was the authorized individual. How can we improve authentication?

• Evaluation of additional patterns: To improve authentication, the system can evaluate additional patterns and inputs to determine if the authentication is valid. For example, the system knows that a particular individual would not log into the system from Kazakhstan.

• Personal identifying patterns: Are there any personal identifying patterns that could be used to confirm identity (e.g., voice, gait, other sensory input)?

• Privacy: Are we restricted from using authentication mechanisms because of privacy issues?

• Use post analysis: Use evaluation and degree of knowledge to authenticate; again, the trust more or less approach.

• Trust vs. consequence: Different rules can be applied to scenarios that have different conditions; e.g., financial transactions from Kazakhstan can have different transaction limits than transactions from the United States.
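The pattern-evaluation and trust-vs.-consequence ideas above can be sketched as a simple additive risk score over context signals. The signal names, weights, and transaction limits below are illustrative assumptions, not values proposed in the session.

```python
# Illustrative additive risk scoring over authentication-context signals.
# Signal names, weights, and limits are assumptions made for this sketch.
RISK_WEIGHTS = {
    "unusual_country": 40,  # e.g. a login from Kazakhstan for a user who never travels
    "new_device": 25,
    "off_hours": 10,
    "known_isp": -15,       # an authorized ISP lowers the assessed risk
}

def risk_score(signals):
    # Unknown signals contribute nothing rather than raising an error.
    return sum(RISK_WEIGHTS.get(s, 0) for s in signals)

def transaction_limit(signals):
    # "Trust vs. consequence": the riskier the context, the lower the limit.
    score = risk_score(signals)
    if score >= 50:
        return 0        # block the transaction outright
    if score >= 25:
        return 500      # allow only small transactions
    return 10_000       # normal limit
```

The point of the sketch is the "trust more or less" idea: rather than a binary accept/reject, the same identity gets graduated privileges depending on the context of the authentication event.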

• Identification of "the equation": How do we identify the equation to formulate the conditions that lead to trust? Considerations for identifying the equation:
  o How do we analyze consequences through the inherent and apparent dependencies?
  o How do we discover the dependencies that are not apparent?
  o How do we discover what is being trusted?
  o How do we sub-analyze those factors and combine them into a final decision that allows us to proceed with confidence or not?

• System dependencies: The effectiveness of a trust decision in the system depends on whether or not the designer accounts for all the dependencies in that system.

• Digital certificates: Digital certificates that include identities using biometrics are supposed to be very secure, more secure than regular venues. However, if an individual presents false credentials and obtains a good digital certificate, suddenly he is trusted everywhere. In some ways, this makes the system more brittle rather than more robust.


Data

Trust for the purpose of protecting the integrity, exposure, and use of the data has to do with privacy policies, cryptography, and access control.

• Types of data trust:
  o Resilience: self-healing data (from the integrity viewpoint)
  o Trust that data is used properly by the right people
  o Trust that data is interpreted correctly by the people using it
  o Consider the sender's trust requirements as well as the receiver's trust requirements

• Consideration of the insider threat model: There is a trust issue once a person sees the data.
  o Analysis of the person's behavior: Again, we have the question of how to decide what is trustworthy.
  o Combining socio-technical data from different sources gives a better prediction of whether a person will be an insider threat in the future.
  o There has been work processing large amounts of email that has been accurate in terms of the insider threat. There is no deterministic solution, and it will not be binary.

• Research recommendations: Improve components leading to integrity trust:
  o Data modification/destruction
  o Operational use of data/metadata
  o User identity/authorization
  o Path or pedigree of how trust was acquired

Note: In the real world, there are recourses when your trust is violated, i.e., the law.
References: X.509; PNNL studies of the insider threat model.

Formula for trust

A formula for trust includes the following elements:

• Context: Systems using decisions in context.

• Consequences: Direct and indirect consequences. Some are direct and obvious, and others are difficult to identify, such as the cost of a security breach impacting stock prices.

• Risk appetite

• Alternatives: Some might lead to a high opportunity cost.

• Building trust into a system: Includes how to lose trust, e.g., a rating on eBay; usability, feedback, validation.

• Feedback: Systems do not provide enough feedback for people to make a trust decision. When the conditions change, the system could provide feedback.

• Time component: A system starts with some assurance or trustworthiness, which then deteriorates. There could be a continuous evaluation to tell if the input can be trusted.

• Increasing trust: Based on a set of characteristics, we can determine what the reality is.

• How to create/build trust on the internet: Studies about how people build person-to-person trust on the internet indicate that the mind fills in the blanks left by the lack of face-to-face interaction. Cues in the interface with the computer are used to build trust.
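The time-component and feedback elements above can be combined into a toy trust score. The exponential-decay model and the weights here are assumptions made for illustration, not a formula agreed on in the session.

```python
# Toy trust score with time decay and feedback (assumed model, not from the notes).
class TrustScore:
    def __init__(self, initial=0.8, half_life_s=3600.0):
        self.score = initial          # assurance the system starts with
        self.half_life_s = half_life_s

    def tick(self, dt_s):
        # Time component: trustworthiness deteriorates without re-evaluation;
        # here it halves once per half-life.
        self.score *= 0.5 ** (dt_s / self.half_life_s)

    def feedback(self, positive, weight=0.1):
        # Feedback (e.g. an eBay-style rating) nudges the score toward 1 or 0.
        target = 1.0 if positive else 0.0
        self.score += weight * (target - self.score)
```

For example, a system deployed at 0.8 decays to 0.4 after one half-life, and a single positive rating then lifts it partway back, capturing the idea that trust both deteriorates with time and is rebuilt through feedback.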

1.4. Recommendations

• Rigorously apply risk analysis to cyber-security.
• Research how to effectively (and correctly!) communicate trust information to users.
• Determine how to make more use of available context to assess trust. What new information could be made available that could increase the ability to assess trust?
• Engineering practices: clear articulation of what a system depends on, what will compromise its intent, and to what degree.
• More rigorous approaches across the board.
• Establish large amounts of data on trusted entities to help better decision making.
• Depending on the objective, sacrifice anonymity for authentication.
• Robust policy control for information access: would that eliminate the need for trust?

Questions and research areas:

• How to build trust in a large computer system? How do we recognize which characteristics of trust are the most valuable? It is necessary to identify ways to collect metrics. Two phases are proposed:
  o The need to trust from the beginning: Trust is built into some systems because of their nature, such as the Smart Grid. They start at a high level of trust.
  o Trust reevaluation: Over time, there needs to be a feedback loop to manage clues to evaluate whether there are reasons to downgrade the trust level.

• Does the human trust model have applications to the system-to-system trust model?
  o Human trust is built through cognitive learning; it gets broken and relearned. Human beings make decisions based on a number of factors. There can be trust extension; e.g., a non-human entity is connected to the internet, and "John" signed it. That entity might be trusted by another machine now that it has been signed by "John". In this case, the trust in the machine is elevated, but there is not absolute trust.
  o Human beings make decisions using a number of factors, not a single factor. If an entity is validated by a number of other entities, and hence has more clues, the level of trust increases. The number of validations by other entities can be used to elevate the trust level.
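The point above, that multiple validations elevate trust without ever making it absolute, can be sketched as a saturating update. The formula below is an assumption chosen only to keep the score strictly below 1.0; it is not a model proposed in the session.

```python
# Sketch: each independent validation closes part of the remaining gap to full
# trust, so trust rises with the number of validators but never reaches 1.0.
def elevated_trust(base, validations, per_validation=0.5):
    trust = base
    for _ in range(validations):
        trust += per_validation * (1.0 - trust)
    return trust
```

With a base of 0.5, one validation raises the score to 0.75 and a second to 0.875, reflecting diminishing returns: each new validator adds less than the last, and absolute trust is never reached.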

• How to recognize, from the many factors of trust, the one that deserves more attention?
  o It is necessary to know what the system is designed to accomplish. By defining the purpose of a system from the beginning, we can impose constraints.
  o System success needs to be defined: how to measure the level of "goodness" of what the system accomplishes.

• End-to-end trust model: Assume that the cloud around us is untrusted. Ignore what is in the middle and only build trust end-to-end. Consider sending a message not to a final destination, but to the next stream up, e.g., to the telephone line, cell-phone line, hub to hub, and so on, authorizing at every point.

• What is the universe of criteria that we need to consider?
  o Situation, e.g., major events (a terrorist attack, a hurricane, etc.)
  o Does the "natural background noise" equivalent play a role? How do we characterize and include it in assessments?
  o How do we know that trust has been violated? If you build trust, you need to continuously re-evaluate it, or recognize when it needs to be re-evaluated.
  o How do we develop or restructure programs so that their constructor systems give appropriate feedback (e.g., error messages)?
  o Combination of visibility, usability, and feedback. For example, if a user has logged into the system from "Turkey", the system should inform the user of this the next time he/she logs in.

• Is "end-to-end" trust a viable alternative?
  o This would be a conscious decision to "accept risk" at a "certain level".
  o What measures would you use to determine the risk you accept?
  o How do you determine what risk level is appropriate at any given time?
  o How do you gather and record evidence of risk decisions for comparison, analysis, and future decisions?


Trust and decision making: We agree that we need to be able to collect some information in order to make a good decision; the research question is what information we need to collect.
  o How do we determine when we lose trust?
  o What is important to restore (and to what degree) when a system becomes degraded or shut down?
  o How "clean" should the system be? How much risk/trust must be restored?
  o When can the system be "trusted" to resume operation?

Application of current areas of research: There are studies on how to write secure software. Similarly, there needs to be a field of study on how to design trust parameters into a system.

Trust parameters: How do you build a dashboard with preferences, such as Google preferences? How do you decide if you want to be totally safe or totally open?
o How do you want to value the parameters?
o How do you correlate the value of the parameters behind the scenes? There needs to be complete understanding of all the criteria, everything that we can measure, and it needs to be dynamic, so that the system does not need to ask the user.
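One minimal sketch of such a dashboard, assuming three hypothetical parameters and a single "openness" preference slider (0 = totally safe, 1 = totally open). The parameter names and weights are illustrative only, not something the group specified.

```python
# Hypothetical trust parameters and weights -- illustrative only.
WEIGHTS = {"known_device": 0.4, "usual_location": 0.3, "recent_auth": 0.3}

def trust_score(observations: dict, openness: float) -> float:
    """Fold weighted parameter values into one score in [0, 1].

    `openness` is the dashboard preference: 0.0 = totally safe
    (only the evidence counts), 1.0 = totally open (trust regardless).
    The correlation happens here, behind the scenes, so the system
    does not need to ask the user at decision time.
    """
    evidence = sum(w * observations.get(name, 0.0)
                   for name, w in WEIGHTS.items())
    return min(1.0, evidence + openness * (1.0 - evidence))
```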

Identity verification: Establishing extensive information about the entities to trust is a huge area of research. Hypothesis: anonymity cannot exist in the presence of strong authentication; they are linked. The challenge is that we cannot preserve either the anonymity or the authentication that we have right now. We need to sacrifice some privacy.
o We can create a rich identity verification package that we can pack and send to the receiver to assure that it is the actual person. There might be privacy issues.
o There are patents describing non-conventional authentication mechanisms, e.g., evaluation of the user's IP address to identify the country, whether the user is at a business, home, or an authorized ISP. These features are not available through the browser, but there are commercial tools that provide that information. Combining all those pieces of information creates a super-category of information to look at.
o The communication between the trustee and the trusted entity needs to be trusted. If it is not, confidence in the identity verification is reduced. On the down side, in the last year, botnets have literally inserted themselves between the two ends in a transaction. However, the probability of this happening on a larger scale is still low.
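A toy sketch of combining IP-derived pieces of information into one "super-category". The signal names are hypothetical, and real products derive them from commercial geolocation/ISP databases rather than from the browser.

```python
def identity_confidence(signals: dict) -> str:
    """Combine IP-derived checks into a coarse confidence label.

    The signal names are hypothetical; commercial tools derive them
    from geolocation/ISP databases rather than from the browser.
    """
    checks = ("country_matches_profile", "known_isp", "authorized_network")
    passed = sum(1 for c in checks if signals.get(c))
    if passed == len(checks):
        return "high"
    return "medium" if passed >= 1 else "low"
```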

Usability: When computers make trust-based decisions, how do you convey their security status to the human user who will make a trust decision?

Resilience: If we can replace trust in electronic elements with sufficiently robust controls and policies, can we eliminate the concept of trust? Hypothesis: Trust can be eliminated with sufficient resilience.

Trust reevaluation after a disaster event: How can trust be reevaluated after a disaster event?
o Do you assess the trust criteria first?
o Do you establish sufficient criteria first?
o Do you ensure that the system is rebuilt/retrofitted with trust criteria monitoring/auditing capability?
o When/how much data is enough?
o How can that be instrumented?
o How can industry be directed to build that instrumentation into all components?

2. Complex Systems

Moderators: Robert Armstrong and Joanne Wendelberger
Note takers: Catherine Call, Thomas Ng


2.1. Executive summary
The discussion on complexity centered on how to take advantage of complex systems to make them more secure rather than being overwhelmed by their complexity, as well as how to tie trustworthiness and scientific evaluation principles together to better understand complex systems in accordance with the scientific method. Some keys to using trust/complexity include sound architecture, modularity, and abstraction. Complexity does not reveal what is possible, only what is not. The only way to prove something is to simulate it. Furthermore, the group wants to address the research needs of:

- Principled development
- System engineering in the context of cyber
- Data-oriented approaches
- Educational awareness

Primarily, the discussion group found that the industry needs a cultural/paradigm shift, an interdisciplinary, scientific approach, and a break in the chain of irresponsibility. All of this would lead to creating a system that would tolerate internal and external corruption and remain secure in a hostile environment.

2.2. Session objectives
Spend more time on these questions:
- How can we take advantage of complex systems so that they are more secure, rather than being overwhelmed by their complexity?
- How can we tie trust/trustworthiness to scientific evaluation principles to better understand complex systems?
Spend less time on these questions:
- Defining the specific nature and peculiar challenges of complex systems (can include everything from enterprise through SCADA and including human, psychosocial, cultural, and political, etc.)
- What concrete actions can we take to improve the effective use and protection of complex systems?
- And if we do all this, is it sufficient?

2.3. Discussion focus points
The session started with the agreement that complexity should be looked at in terms of the scientific method, and also that complexity makes things difficult. Complexity needs to be leveraged and managed. The preliminary questions from the discussion were: What problem are we trying to solve? Are we looking for an architecture, an approach such as decomposition, or reduction? Are we looking for a solution or a refinement?

How we got here

In the past, when computer science was a science, there were systems such as Multics that were formally, rigorously, and efficiently specified. These systems addressed security at the system level. With the advent of mass-market PCs, smart phones, service-oriented architectures and, most recently, the "cloud," time-to-market economic pressures have created a demand for convenience and utility at the expense of formal design, scrutiny, and system security. There is little liability for bad software. During this process, the manufacturers have been as surprised as anyone at the creativity of system misuses and malware. Security has also been commoditized and driven by


market forces: the new threat gets attention. Problems such as DDoS have been discussed since the 1980s, but there are many attacks and attack vectors; we have been picking the low-hanging "sexy" fruit and brushing aside the hard problems. By and large, we have accepted a victim mentality and the role of playing a catch-up game against the hackers. Even now we are looking at how we can secure PCs, but if in five years we are all using smart devices, what does it matter if a PC is secure? One conclusion drawn was that current economics does not support science. The good news is that the government is beginning to realize we cannot tolerate insecure systems.

Modeling and Simulation

Modeling and simulation need to be combined with experiments, precise requirements and architecture. We need modularity, compartmentalization, component encapsulation, abstraction, well-defined interfaces, and precise requirements. We also need to anticipate and analyze emergent behaviors. We cannot model small pieces and expect to understand the composed system. Model checkers only check properties described by the model and not unanticipated emergent properties. Scale is an important factor, especially as it concerns trust. We must take a holistic and systemic approach and consider factors other than code. Simulation may be efficient if the components are small, the environmental parameters are well-controlled, and/or if the distribution is well understood.

Keys to using trust/complexity

Sound architecture, modularity and abstraction are key for using complexity. Airplanes are examples of complexity management. They are very complex, and everything is specified by requirements and must be monitored (e.g., when something was installed, by whom, and where). Simulation in this case is done very carefully, with well-controlled environment parameters or a good understanding of environmental distributions.

Composing Systems

This subject was raised in the context of a "divide and conquer" approach to managing complexity: if we have modular composition and loosely coupled systems, can we not confer (conserve and preserve) properties and/or compose for one or more selective emergent behaviors? There was no consensus on this and much debate. On the one hand, there is the idea that we can guarantee small components and control how they are combined so that we preserve desirable properties (e.g., robustness). On the other hand, there is the argument that composition creates unexpected, emergent behaviors, new vulnerabilities, and new risks. When we compose, we are not only preserving properties, but preserving vulnerabilities due to these properties, and/or the uncertainty of the properties themselves. Examples of this include:
- In a system composed of thermodynamic and hydraulic subcomponents, the interactions of the thermo and hydraulic systems are irreducible. How you couple/compose (e.g., proximity of the components) affects performance, as does the setup (e.g., configuration parameters and constraints) and the operating environment (e.g., temperature, humidity). Different behaviors will emerge depending on these factors. If an emergent behavior of the meta-system is undesirable, there is a design flaw.
- eBay and PayPal together are more fertile grounds for hacking than either by itself.
- Putting payroll onto the same system as the lab creates greater risk for an organization and greater opportunity for a hacker.
Several research questions were brought up:


- Define how we compose: how can we apply rigor and better understand the limitations of composing systems
- Define the contexts under which we expect our systems to run
- Define perspicuous interfaces and how we configure systems
- Design for evolving systems: we will compose and configure systems differently over time as we adapt to different circumstances.
One conclusion reached was that we need to adopt a systems viewpoint: any system is composed of code, libraries, hardware, humans, etc. KLOCs (thousands of lines of code) should not be the metric for complexity; it is not just the code but all the interactions and ensuing side effects that create complexity. We must also accept that we have to make decisions in an imperfect world and that we may need to accept approximate solutions that work well.

Design for a hostile environment

Malware is designed to survive and/or degrade gracefully in a hostile environment. The systems we build, however, are largely designed to run in protected environments and are brittle when used in circumstances for which they were not intended. Hackers use many of the same components as the good guys, but they are composed differently. A good place to start would be to design and build hostile environments, and to design, test, and compose for hostile environments. We do not do this currently. Red team exercises have many "do not touch" rules (no social engineering, do not do that or you will break the simulation, etc.). Measuring adversary creativity would be another good place to start: hackers read journals to take advantage of new knowledge and technology, they operate under a different set of economic factors, and they also make mistakes.

Engineering discipline

There are good examples of engineering discipline, process, and well-controlled systems in other fields, and we should draw analogies from successful engineering practices and add discipline to the cyber field. We need to ensure we are looking at whole systems; we need to look beyond the software and take into account both system configuration and the operating context. When a system is put into an environment for which it was not intended, we lose control. For example, attacks often move our systems out of our models and assumptions. We need firm requirements not only for the individual components but for the composed, "top layer" system. This is especially important in implementing new requirements. Without these requirements, we do not know what we need to analyze. And if we cannot analyze, we cannot test. This can be restated as saying we need to include risk management in our process: we need to look beyond systems analysis and also consider the vulnerabilities and risk in terms of meeting the requirements and accomplishing the mission.

Wake up already

We have discussed the same problems that have been on the table for many years. We keep making the same recommendations. We need a massive culture change and to break the chain of irresponsibility if we expect to make progress. We need to identify good tools and methodologies and understand the glue by which we compose systems. We need strong requirements and to bring risk management into the analytical process.

2.4. Recommendations
Research Needs: Principled Development
- Taxonomy of definitions, principles, foundations


- Abstraction, modularity, encapsulation, composition, and evaluation of properties
- Emergent behavior of cyber systems, anticipated or unanticipated, intentional or unintentional
- Irreducibility of some emergent behavior
- Analysis of properties at higher levels of abstraction
- Separation of data types
- Limits of attribution (of properties) due to complexity
Research Needs: System Engineering in the Context of Cyber
- Development of requirements
- Fidelity to specifications
- Assessment of vulnerabilities and risks
- Sensitivity and survivability to changes
- Designing in a malicious environment (HW, SW, communications, people)
- Evolution
Research Needs: Data-oriented Approaches
- Analysis on complex data structures
- Signal-to-noise (needle in a haystack) issues associated with complexity
- Gathering information on emergent behavior and importance of context
- Collection and analysis of massive, streaming data from multiple sources
- Sensor network type approaches that incorporate concepts such as statistical physics and robustness
- Grids, clouds, and beyond
Educational Awareness
- Foundational issues
- Techniques for rigorous statement and solution of problems
- "Best practices" for code development and cyber defenses
- Incorporation of scientific approaches for comparing alternatives
Action Items
- Develop and post white papers on technical topics.
- Complete summary of sessions on SIAM Mini-symposium on Mathematical Challenges in Cyber Security.
- Explore opportunities for future technical forums on complexity.
- Respond to National Cyber Leap Year Summit Report from a complexity perspective.
Address the Problem
- Need cultural change/paradigm shift.
- The complexity of cyber security requires an interdisciplinary, scientific approach.
- Break the chain of irresponsibility (cuts across the cyber industry: Microsoft, education).

3. Scientific Evaluation

Moderators: Glenn Fink, Clifford Neuman
Note takers: Lucas Reber, Sarah Alkire

3.1. Executive summary
The discussion on scientific evaluation focused on encouraging scientific evaluation in the classroom, adopting guidelines throughout the field, implementing a grand challenge (using methodology much like the DARPA grand challenges), and establishing repeatable and reliable experiments that can be used as educational resources. Some of the suggestions that were proposed and agreed upon were:


- Establish norms across the board
- Appoint a jury of peers to critique
- Research applicable courses and recommend those that meet the guidelines
- Include guidelines from the list of books
- Create a collective space for literature, problems, applications, case studies, etc., from the field
- Offer forums at conferences
- Create a common scientific evaluation guideline for use across the discipline

The primary conclusion reached in the discussion was that there should be Grass Roots efforts that give the sense that there is a community to encourage changes.

3.2. Session objectives
Scientific evaluation is a hallmark of the scientific process. This process is taught and used throughout the sciences to ensure that experimentation is documented in a known, verifiable format. Although this is standard in the biological sciences, formal scientific evaluation is not as well established in computer science, and especially in the field of cyber security. This has resulted in a class of research with limited ability to reproduce results through follow-up experimentation. The question, then, is: if there is value in scientific evaluation, how do we encourage its use in the field of cyber security research?

3.3. Discussion focus points
The discussion over two sessions focused on the following aspects of scientific evaluation:
- Encouraging scientific evaluation in the classroom
- Adopting a manual or guidelines for scientific evaluation in cyber security
- Implementing a grand challenge event or mini-challenge events
- Spurring re-experimentation through tracks in common conferences
- Common datasets available for repeatable experimentation
- Implementing educational resources in the area of cyber security

Lack of Scientific Evaluation in Computer Science

A common theme over the two sessions was how to encourage the adoption of formal scientific evaluation in standard computer science curricula. It is generally accepted that scientific evaluation is a basis of the hard sciences but appears to be curiously absent in the field of computer science and cyber security research. This can be attributed to a number of factors: a relatively young field, a lack of understanding of the value of the method, and, more importantly, the fact that it has not been required to get published.

This lack of a requirement for proper scientific evaluation has resulted in a field of study with very few experiments that are repeatable. Having repeatability in the process enables researchers to review experiments and determine if they receive the same results under similar conditions. To put this in perspective, one attendee noted that if the lack of scientific evaluation requirements in computer security applied to physics, we would all believe that cold fusion works, based on the Fleischmann and Pons experiment.

There should not be constraints on methodologies in cyber security, but there is a need to include the observational and scientific but non-experimental methods that are out there. However, researchers have a hard time finding subjects.


Lack of Common Guidelines for Scientific Evaluation in Computer Science

A common point of concern was identifying a standard guideline that can be applied to computer science and cyber security research. Attendees noted that the concept of proper scientific evaluation needed to be taught and applied in the classroom and in the review process for scientific journals, but there was no consensus on a common guideline. This prompted discussion on the possibility of creating a standard guideline for the community and the possibility of establishing a list of the top twenty books in the field.

It was also noted that one of the attendees is publishing a book on the topic and would be open to receiving input from the community on the book.

Further discussions indicated the need for a boilerplate for papers. This boilerplate for experimental papers should include instructions to authors and reviewers. This would be a valuable tool for committees that are reviewing papers since it will ensure papers have merit and are on topic.

Educational Efforts

An increase in educational efforts was proposed and agreed upon. Some of these efforts included ideas such as:
- Establish book and community norms.
- Establish a jury of peers to help with experimental design, so that members of the cyber security community can help each other.
- Investigate the prospect of local university juries – not just one.
- Research methods courses – friendly jury of peers (one for cyber security or individual).
o What we want to get out of this is the "low hanging fruit" – just do something.
- Include guidelines from the list of books so we can add the good points into one book. Many don't go into the psychosocial part of cyber security.
o Does yet another book need to be written though?
o Can we just take the information from these books and recap what is out there on a page in the Wiki?
- In general, it would be nice to have an application that includes examples from other cyber literature.
- One good thing about the Wiki is that if any of us have cyber-security examples (papers, studies, paragraphs about parts, etc.), either observational or scientific, we can put them there to help.
- Throw out hard problems and several different applications like case studies, etc.
- Need something separate for a discussion area other than the Wiki.

Implementing a Grand Challenge Event or Mini-Challenge Event

Attendees noted that the concept of a grand challenge that has stricter scientific evaluation requirements and requires open datasets may entice participation from the community. It was generally decided that a full "Grand Challenge" level event may not be appropriate for a start. However, a number of attendees also chair committees, provide reviews for journals, and are active in the larger community. It was discussed that one way to help spur adoption of scientific evaluation in the community would be to add the requirements to the journals and groups that they participate in. One aspect of this that could be explored would be creating mini-challenges in some of the common conferences that the community attends.


Encouraging Repeatability

Another area of interest would be encouraging repeated research to verify results by creating a sub-track in these conferences for repeated experiments. Although this is extremely important to the understanding of the field as a formal science, it is generally accepted that it is a difficult area to achieve funding in. By having the sub-track as an option, it may also provide a new area for beginning researchers to gain entry and experience in the community as well as providing a valuable addition to the foundations of the discipline.

Common Datasets for Research

Throughout the course of the session a common theme of contention and concern was the availability of common datasets to use for cyber security research. Attendees noted that there are relatively few large scale datasets to use for cyber security research. In many cases, the researchers are relying on datasets generated from in house resources (locally managed sensors and instrumentation) or on data sets that may have inherent failings such as the Lincoln Labs DARPA data. It was generally considered that the more data sets researchers have available, the better. The problem then is how we acquire valid datasets for use by researchers.

Developing a small database, along with the problems from that data and a list of sources, was recommended. If someone went through it, it would save others time in the future.

This question spurred a discussion on some of the types of data sets available and what could be of use. There are very few repositories of data sets for use by the community. This is a concern since it affects how repeatable an experiment can be. If outside researchers do not have access to the original dataset, it will be quite difficult to have repeatable and reliable results. This problem spurred the discussion of the possibility of a centralized clearing house for this type of data. There are a few sites available to the community at this time that could be expanded upon, or it may require creating a site that allows researchers to point to existing datasets in essentially a registry format. There was discussion about the security of such a database, but there was a general consensus that community agreement would need to be reached to provide security for the information.

3.4. Recommendations

- The community needs to encourage the adoption of formal scientific evaluation in the classroom, in the journals, and in the conferences.
- A common scientific evaluation guideline for use across the discipline should be developed for adoption. This guideline should also include a boilerplate for conferences and journals.
- Encourage re-experimentation by offering forums at conferences.
- Encourage scientific evaluation through the use of mini-challenges based on methodology similar to the DARPA Grand Challenges.
- It is important that these Grass Roots efforts give the sense that there is a community to talk to so there can be recommendations.


Appendix

Appendix A. Trust and Trustworthiness Breakout Session Notes

Note taker: Julia Narvaez

Note taker: Tamara Stewart

Appendix B. Complex Systems Breakout Session Notes

Note takers: Catherine Call and Thomas Ng


Appendix C. Scientific Evaluation Breakout Session Notes

Note Takers: Sarah Alkire, Lucas Reber


Appendix A. Trust and Trustworthiness Breakout Session Notes

Note taker: Julia Narvaez

Note taker: Tamara Stewart


Session Moderators: Aaron Tombs, Alex Nicoll

Background
The Talaris Grassroots Roundtable was held at the Talaris Conference Center in Seattle, Washington, on August 27 and 28, 2009.

Session Description
Discussion of the following topics: What new metaphors/models/architectures for trust/trustworthiness should we be exploring to support newer ways of social/business/etc. interaction, particularly with the addition of cyber-physical systems? Are there things we have tried or are using that may be more effective? What concrete actions can we take to improve the effective use and study of trust/trustworthiness in our discipline? If we do all that, is that sufficient?

Focus Points
The discussion focused on the following aspects of trust:

1. Human Trust

2. Computer systems trust

3. Risk aspect

4. Systems trusting other systems

5. Authentication

6. Data

7. Formula for trust

8. Questions and research areas

9. Ways forward

1. Human Trust
The following are aspects of trust when humans interact among themselves or with systems:

What is a trusting system: characterized by high confidence and the illusion that we know who is in the conversation, e.g., two people in a face-to-face conversation. There is a continuous assertion, it looks convincing, and it might be affected by an emotional experience. Characteristics:


o It is self-contained and simple.
o There is knowledge of the characteristics of the system.
o There is a high degree of understanding of the system.

Trust vs. assurance: some people prefer to talk about assurance because assurance is quantifiable.
Believe Trust: belief in somebody's integrity or ability. Some people might not want to be in a position of believing that somebody is trustworthy; they would rather have assurance. If they move away from believe trust, then they place less reliance on the analysis of trustworthiness.

Reliance Trust: trust in the form of the DOD trusted system. You trust a system that you rely on to perform a task. This has different mathematical properties than Believe Trust.

Transitivity: when somebody trusts you, you are trusted by somebody else, e.g., an implementation of email filtering where people trust each other. Trust is transferable, but it is necessary to define under what conditions there is transitivity.

Trustworthiness: often called trust in the literature, but it is not really the same.
Lessons from physical security:
o Trust is situational and dynamic. The dynamics can change very quickly.
o It can be measured. It is measured when the asset is there.
o Need to know what the measurement should be and what it actually is in order to make adjustments. People decide to take action when the assurance level is "X". It is never perfect.
Reference: Kenneth Lane Thompson, "Reflections on Trusting Trust".
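The transitivity point above admits a simple toy model: treat each direct trust value as a number in [0, 1] and let trust along a chain be the product of the hop values, so trust weakens with every hop. This is only one possible model, not something the session prescribed.

```python
def chained_trust(direct: dict, path: list) -> float:
    """Trust along a chain as the product of direct trust values.

    `direct[(a, b)]` is how much a trusts b, in [0, 1]; multiplying
    makes derived trust weaken with every hop, one way of capturing
    the idea that transitivity needs defined conditions.
    """
    value = 1.0
    for a, b in zip(path, path[1:]):
        value *= direct[(a, b)]
    return value

# If Alice trusts Bob 0.9 and Bob trusts Carol 0.8, Alice's derived
# trust in Carol is 0.9 * 0.8 = 0.72 under this model.
```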

2. Computer Systems Trust
Both technology and people need to be considered. There is a combination of several dimensions that provide trust: human factors, technology, system dependency, and intuition.

Context: Trust depends on the purpose of the system, e.g. different trust levels are required for a financial system or for a health system.

Complexity: when a system is too complicated, it is difficult to know everything about it. You decide how much you can know in order to accomplish your objective. The system is not going to be perfect; you need to be prepared for failure, so that the failure does not jeopardize the objective that you are trying to accomplish with the system.

Fidelity: Fidelity is determined by how the information is presented and how intuitive it is, no matter how complex the system is. An example from the military (common operational picture): the general says, "do not give it to me complex".

Culture dependency: trust is culture dependent. The experience in the culture creates such confidence on how something works that it becomes intuitive, e.g. how to operate a light switch.

Analogy of how humans recognize images (vision recognition methods): humans can do some filtering on an image. Humans do not see the whole picture, but certain points. They do not look for boundaries, but for particular points. In a similar way, there might be certain discrete characteristics that provide trust.

Modeling of factors: snippets of information can be modeled mathematically. There is a combination of factors depending on the system and the decision process to go through. For physical systems, the containment is important. For cyber systems, human factors and physical factors are applicable.

What are the alternatives?
Layers of protection: There are layers of protection that can be implemented if an individual chooses them, for example, using a virtual machine to access a banking system.
Visual cues: The system should have accurate and intuitive visual cues designed in. An example of a misleading visual cue is online shopping, where people have been instructed to look for HTTPS and for the lock symbol in the browser. These cues indicate how the data is going to be transmitted


through the browser, but there is no indication of how the data is going to be submitted. The shopper would have to look at the post URL in the source page to see if the post action is encrypted. In this case, there are no visual cues of what is going to happen when the data is posted, and hence there are no designed visual cues to help people understand how secure the transaction is.
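The manual check described above (reading the page source to see where a form posts) can be automated. A minimal sketch using Python's standard html.parser; it ignores relative action URLs and JavaScript-driven submissions, which a real tool would have to resolve.

```python
from html.parser import HTMLParser

class FormActionCollector(HTMLParser):
    """Collect every form's 'action' URL from a page's source."""
    def __init__(self):
        super().__init__()
        self.actions = []

    def handle_starttag(self, tag, attrs):
        if tag == "form":
            self.actions.append(dict(attrs).get("action", ""))

def post_targets_encrypted(page_html: str) -> bool:
    """True only if the page has forms and every one posts over HTTPS."""
    collector = FormActionCollector()
    collector.feed(page_html)
    return bool(collector.actions) and all(
        a.startswith("https://") for a in collector.actions)
```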

Testing for assurance: Complete testing is an instrument to provide assurance. Assurance yields trust.

3. Risk aspect
What does trust mean in decision making? To contextualize trust, it is necessary to understand the risk associated with the kind of trust that the individual is trying to extend. If there is no risk, there is no need to care about the decision.

Risk analysis: application of an established field. Go back to the mathematical analysis of risk and bring it to the cyber-security world:
o There is extensive literature on risk and decision making in other fields, which could be very fruitful.

Risk assessment: Zero risk is unachievable. What is the risk compared with the consequences of failure? The hard part is not to make the risk assessment, but to make a human understand the impact of the failure.

Derivative value: Trust is a derivative value, a function of:
o Risk value: quantifiable
o Psychological impact: non-quantifiable

4. Systems trusting other systems
In the OSI model, layers provide services to other layers. What about the other layers, where we develop systems to trust other objects? Do we design a system to trust or not to trust?

Binary trust approach: Is trust binary? Implementations can be other than binary. For example, the voting system in computers is designed not to develop binary trust. It is based on four special-purpose systems, architecturally designed by two different companies. To sustain one vote, there are four different systems, which are provided with different input.

Trust more or less approach: There are ways to implement a system to make it not only yes or no, but to make it trust more or less. The concept for decisions is to trust more or less.

Use of metrics: Use well-specified metrics to avoid getting lost in details. In the Orange Book and the Red Book there is a trusted database interpretation. However, the application of metrics is questionable due to the interpretation. For example, suppose the goal is a trust level of 85%. What does this mean? Does it mean that the database is going to trust the OS 85%? Or is the database going to ignore 15% of the authentication information that the OS is passing to it?
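One hypothetical way to make "trust more or less" concrete, rather than a yes/no gate, is to combine several evidence factors into a scalar trust level and compare it against a mission-specific threshold. The factor names and weights below are illustrative assumptions, not from the session:

```python
# Hypothetical sketch of scalar trust: combine evidence factors (each scored
# 0.0-1.0) into a single trust level via a weighted average. The resulting
# number is then interpreted against a threshold chosen for the mission,
# which is one possible answer to "what does a trust level of 85% mean?".
def trust_level(factors, weights):
    """Weighted average of evidence factors, clipped to [0, 1]."""
    total = sum(weights.values())
    score = sum(factors[name] * w for name, w in weights.items()) / total
    return max(0.0, min(1.0, score))

# Illustrative factors: how strongly the OS vouches for identity, how
# trusted the channel is, and how consistent past behavior has been.
weights = {"identity": 0.5, "channel": 0.3, "history": 0.2}
factors = {"identity": 0.9, "channel": 1.0, "history": 0.5}
print(trust_level(factors, weights))  # a scalar, not a binary verdict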

End-to-end trust: trust is a conscious decision; both parties decide the measures to put in place:
o Evidence: There is the issue of gathering and processing evidence. Systems should provide very solid evidence to allow an objective decision.
o Context: For example, given a set of 5 machines, the same input information might lead to different trust decisions on each machine, based on the context of the recipient machine.

Assurance model: there are two models:
o Static assurance: there is a mechanism by which a machine signs off on something that has been evaluated under some circumstances. Under the same circumstances, other machines might trust it since it has already been signed off.
o Dynamic assurance: We now face a situation of dynamic assurance: a machine will need to make a decision based on specific conditions in order to make an automatic trust assessment. There is an important historic component.


5. Authentication
Even though an individual authenticates to a system, there is no assurance that this was the authorized individual. How can authentication be improved?

Evaluation of additional patterns: To improve authentication, the system can evaluate additional patterns and inputs to determine if the authentication is valid. For example, the system knows that a particular individual would not log into the system from Kazakhstan.
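A minimal sketch of this kind of pattern evaluation, with all profile fields and thresholds as illustrative assumptions:

```python
# Hypothetical sketch of evaluating additional patterns at login: beyond the
# credential check, compare contextual signals against the user's history and
# flag authentications that do not fit (e.g. an unexpected country).
def evaluate_login(profile, attempt):
    """Return a list of anomaly flags; an empty list means nothing unusual."""
    flags = []
    if attempt["country"] not in profile["usual_countries"]:
        flags.append("unusual_country")
    if attempt["hour"] not in profile["usual_hours"]:
        flags.append("unusual_time")
    if attempt["device"] not in profile["known_devices"]:
        flags.append("unknown_device")
    return flags

profile = {"usual_countries": {"US"}, "usual_hours": set(range(8, 18)),
           "known_devices": {"laptop-1"}}
attempt = {"country": "KZ", "hour": 3, "device": "laptop-1"}
print(evaluate_login(profile, attempt))  # ['unusual_country', 'unusual_time']
```

Rather than denying access outright, the flags can feed a "trust more or less" decision, for example stepping up to a secondary challenge.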

Use post analysis: use evaluation and degree of knowledge to authenticate. Again, trust more or less.

Trust vs. consequence: Different rules can be applied to scenarios that have different conditions, e.g. financial transactions from Kazakhstan can have a different transaction limit than transactions from the US.

Identification of "the equation": The question is how to identify the equation that formulates the conditions that lead to trust. Considerations to identify the equation:
o How to analyze consequences through the dependencies, inherent and apparent?
o How to discover the dependencies that are not apparent?
o How to discover what is being trusted?
o How to sub-analyze those factors and combine them into a final decision that allows proceeding with confidence or not?

System dependencies: The effectiveness of a trust decision in the system depends on whether or not the designer accounts for all the dependencies in that system. The designer could design a system to trust an object based on a faulty idea of the different paths through that system; the system might then be dealing with a bigger problem than it was designed for.

Digital certificates: Digital certificates that include identities using biometrics are supposed to be very secure, more secure than conventional means. However, if an individual presents false credentials and obtains a good digital certificate, suddenly he is trusted everywhere because all the cryptography matches up. In some ways, this makes the system more brittle rather than more robust.

6. Data
Trust for the purpose of protecting the integrity of the data, exposure of the data, and use of the data. It is related to the value of the data. It has to do with privacy policies, cryptography, and access control.

Consideration of the insider threat model: there is a trust issue once a person sees the data.
o Analysis of the person's behavior: again the question of how to decide what is trustworthy.
o Combining socio-technical data, such as data from human resources, arrests for DUI, what the person says, etc., gives a better prediction of whether a person is going to be an insider threat in the future.
o There has been work done processing large amounts of email, and it has been accurate in terms of the insider threat. There is no deterministic solution and it is not going to be binary.

Note: In the real world, there are recourses when your trust is violated, i.e. there is the law.

References: X.509. PNNL studies of the insider threat model.

7. Formula for trust
A formula for trust includes the following elements:

Context: As systems interact with human beings, if people do not have the context of the conditions that the system uses to make the decision, then the human decision might be compromised.

Consequences: Some are direct and obvious, and others difficult to identify, e.g. the cost of a security breach impacting the stock price of a company, and the high consequences in a societal model.

Risk appetite (see risk section in this document)


Alternatives: Some might lead to a high opportunity cost.

Building trust into a system: and how to lose trust, e.g. ratings on eBay; usability, feedback, validation.

Feedback: Systems do not provide enough feedback to people to make a trust decision. When the conditions change, the system could provide feedback.

Time component: a system starts with some assurance or trustworthiness, and then this deteriorates. There is extrapolation of trust, i.e. there is a continuous evaluation of whether the input can be trusted.

Increasing trust: based on a set of characteristics, we can determine what the reality is.

How to create/build trust on the internet?: Studies on how people build person-to-person trust on the internet indicate that the mind fills in the blanks left by the absence of face-to-face interaction. For example:
o Some people fall in love with someone whom they have just met on the internet. That is the same problem with deception in chat rooms. Predators know the cues and use them.
o In an online class, relationships developed between the professor and the students, and a sense of community was built. After the course, when the students were asked to describe each other, the students made up things about each other, even physical characteristics. There were cues in the interface with the computer.

8. Questions and research areas
Software systems are not designed with the concept of trust built in, but many systems will need it.

How to build trust in a large computer system? How to recognize which characteristics of trust are the most valuable? It is necessary to identify ways to collect metrics. Two phases are proposed:
o The need to trust from the get-go: trust is built into some systems (because of their nature), such as the Smart Grid. They jump-start at a high level of trust.
o Trust re-evaluation: Over time, there needs to be a feedback loop to manage clues to evaluate whether there are reasons to downgrade the trust level.

Does the human trust model have applications to the system-to-system trust model?
o Human trust is built on cognitive learning. Human beings make decisions based on a number of factors. There can be trust extension, e.g. there is a non-human entity connected to the internet, and "John" signed it. That entity might be trusted by another machine now that it has been signed by "John". In this case, the trust in the machine is elevated, but there is not absolute trust.
o Human beings make decisions based on a number of factors, not a single factor. The number of clues that we process is large. If an entity is validated by a number of other entities, and hence has more clues, the level of trust increases. The number of validations by other entities can be used to elevate the trust level.
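The idea that validations by other entities elevate trust without ever making it absolute can be sketched with a simple update rule. The function and its constants are illustrative assumptions, not a model proposed in the session:

```python
# Hypothetical sketch of trust extension through endorsements: each
# independent validation closes part of the remaining gap to full trust,
# scaled by how much we trust the endorser. Trust is elevated with
# diminishing returns and never reaches absolute (1.0) trust.
def endorsed_trust(base, endorser_trusts):
    """Start from a base trust level; each endorsement adds a fraction
    of the remaining gap to 1.0, weighted by the endorser's own trust."""
    trust = base
    for e in endorser_trusts:
        trust = trust + (1.0 - trust) * 0.5 * e
    return trust

# An unknown entity, then signed by "John" (trusted 0.8) and two others.
print(endorsed_trust(0.1, [0.8]))                  # elevated, not absolute
print(endorsed_trust(0.1, [0.8, 0.6, 0.9]) < 1.0)  # True: never absolute
```

More endorsements (more "clues") raise the level, mirroring how humans weigh many factors rather than a single one.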

How to recognize, from the many factors of trust, the one that deserves the most attention?
o It is necessary to know what the system is designed to accomplish. By defining the purpose of a system from the beginning, we can impose constraints. However, the larger the system, the more difficult it is to answer that question, because its functions can multiply. For example, the purpose of the internet is to move information from point A to point B. From that perspective, the internet is functioning the way it was designed. The problem is the applications that utilize that function.
o Context of the goals of the system; expectations of the user about what the system should do.

End-to-end trust model: Assume that the cloud around us is untrusted. Ignore what is in the middle and only build trust end to end. Consider sending a message not to a final destination, but to the next stream up, e.g. to the telephone line, cell phone line, hub to hub, and so on, authorizing at every point. The allocation of resources depends on the level of priority. A similar model has been implemented in cell towers, where 911 calls have priority.


What is the universe of criteria that we need to consider?
o Situation, e.g. major events (a terrorist attack, a hurricane, etc.)
o What is natural background noise?
o How do we know that trust has been violated? If you build trust, you need to continuously re-evaluate it, or recognize when it needs to be re-evaluated.
o Does the re-evaluation mean that the criteria were wrong?
o How to develop or restructure programs so that their constructor systems give appropriate feedback? e.g. error messages. Determine where it is no longer physics but computer science.
o Combination of visibility, usability, and feedback. For example, if a user has logged into the system from "Turkey", the system should inform the user of this the next time he/she logs in.

Trust and decision making: We agree that we need to be able to collect some information in order to make a good decision. The research question is what information we need to collect.
o How to determine when we lose trust?
o Where do we gather enough information and enough criteria to make good decisions?
o The first step is to start designing the parameters necessary to measure trust, i.e. build algorithms against which we would make a decision if we were in the system.

Application of current areas of research: There are studies on how to write secure software; likewise, there needs to be a field of study on how to design trust parameters into a system.

Trust parameters: How to build a dashboard with preferences, such as Google preferences, deciding whether you want to be totally safe or totally open.
o How do you want to value the parameters?
o How to correlate the value of the parameters behind the scenes? There needs to be a complete understanding of all the criteria, everything that we can measure, and it needs to be dynamic, so the system does not need to ask the user.

Identity verification: Establishing a lot of information about the entities to trust is a huge area of research. Hypothesis: anonymity cannot exist in the absence of strong authentication; they are linked. The challenge is that we cannot preserve either the anonymity or the authentication that we have right now. We need to sacrifice some privacy.
o We can create a rich identity verification package that we can pack and send to the receiver to assure that it is the actual person. There might be privacy issues.
o There are patents describing non-conventional authentication mechanisms, e.g. evaluation of the IP address of the country where the user is logging in, whether the user is logging in from a business or from home, an authorized ISP, the browser, the person's title, and so on. These features are not available through the browser, but there are commercial tools that provide that information. Combining all those pieces of information, there is a super-category of information to look at.
o The communication between the trusting entity and the trusted entity needs to be trusted. If it is not, the identity verification is less confident. On the downside, in the last year, botnets have literally inserted themselves between the two ends of a transaction. However, the probability of this happening on a great scale is still low.

Usability: When computers make trust-based decisions, how do they convey their security status to the human user who is going to make a trust decision?

Resilience: If we can replace trust in electronic elements with sufficiently robust controls and policies, can we eliminate the concept of trust? Hypothesis: Trust can be eliminated with sufficient resilience.


9. Recommendations
Rigorously apply risk analysis to cyber-security.
Research how to effectively (and correctly!) communicate trust information to users.
Determine how to make more use of available context to assess trust. What new information could be made available that could increase the ability to assess trust?
Engineering practices: clear articulation of what a system depends on, what will compromise its intent, and to what degree. More rigorous approaches across the board.
Establish large amounts of data on trusted entities to help better decision making.
Depending on the objective: sacrifice anonymity for authentication.
Robust policy control for information access: would that eliminate the need for trust?

Note taker: Tamara Stewart

DOE Grassroots Roundtable on Cyber Security
Trust Discussion Group – Day 1, Aug 27, 2009
Topic Leaders: Alex Nicoll, Aaron Tomb
Topic Description: Trust & Trustworthiness

Discussion Objectives:
Metaphors/Models/Architectures for Trust and for Trustworthiness
What Trust is required to support emerging Social/Business interaction modes of cyber-physical systems
What concrete actions can improve inclusion of Trust/Trustworthiness in the conduct and improvement of our individual research disciplines? Examples:
o Mathematics and Analysis (Predictive Awareness)
o Information Management Systems (Self-protective data; software)
o Platform and Composable System Architecture

Topic A: Discuss current Trust models where improvements can be made
Topic B: Is "Trust," as a concept, useful to Cyber Researchers?
Topic C: Does the Human Trust model have any bearing on what/how we need to build our systems?

Discussion "Points of Focus" (lots of side discussions occurred in our session, but there were some focal points to the overall session):
Focus Point 1: Trust relies on Level of Perception (a Human attribute?), which contributes to Confidence (a twin could fool you into trusting him…)
Focus Point 2: "Trust" is a Conditional State; dynamic and transitory…
Focus Point 3: How do you equate "Trust" between entities?
Focus Point 4: Trustworthiness and Risk relationship is Context Dependent
Focus Point 5: Trust and Assurance relationship
Focus Point 6: Visual Cues to differentiate Assurance Levels or states?
Focus Point 7: Is the question we are really asking: are our systems Effective or Not (rather than can we trust them)?
Focus Point 8: How do you trade off or compare Mission Success vs. Consequences?
Focus Area 9: Trust Evaluation Research Questions

Topic A – Discuss current Trust models where improvements can be made


Ex #1: A Trusted System = a room with 2 people who know that each is the person they intend to talk to, with the door closed.
o High degree of trust
o Continuous trust (during the conversation, there is physical presence and no leaving the room for a substitute masquerader to have an opportunity to interject)
o Adaptive (if one has a cast on their arm, they are still recognized as the same person by the other human in the conversation)

Vs. if the conversation is done by WebCam:
o Every Admin between both parties may be listening in, or has the ability to
o The image provided by the WebCam may not be a real person (it may be an Avatar masquerading as the intended person)

Focus Point 1: Trust relies on Level of Perception (a Human attribute?), which contributes to Confidence (a twin could fool you into trusting him…)

-- in MLS discussions, we throw the notion of “trust” out the window and focus on “Assurance” instead.

-- [Ken Thompson’s paper]: “Chain of Trust”: If A trusts B, and B trusts C, is it true that A trusts C? If someone trusts you, and you trust someone else, can the first person trust the third person? Not necessarily, but sometimes.

[critical assumption: conditions where transitivity can be true must exist/be verified; anti-patterns or profiles would need to be identified as well, to help characterize the “boundary of trust”]

-- Different Aspects or Characteristics [or uses of the term] of Trust:
o Belief – trusting some other entity’s integrity (data, process, etc.)
  Further discussion on how much knowledge is really needed to “Trust” a behavior vs. Believe it will continue tomorrow as it has in past days (like gravity)
o Reliance – DoD trusted system definition
  Further discussion on how more knowledge of a system lends “trust” to it; cultural norms and standards provide Reliable “trust boundaries” (light switches are in generally the same location on a wall)
o Trustworthiness – a metric of a system; how much do you Trust a system. Literature often calls this “trust” when it’s really a measure of how much, not whether (a scalar, not binary).
o Containment (“Boundary of Trust”)

Perhaps a spectrum of characteristics exists that could culminate, in some combination, as a level of Trustworthiness (index?):
- combination of physical characteristics
- conditional, based on “cultural” norms/standards/verification models (to include tech norms/standards and human norms/cultural standards), because:
- Cyber war is against People, using Technology as the means.

Note: Perhaps an analogy can be drawn from visual recognition. Human visual recognition of shapes in a context uses “gaze points,” the collection of gaze points define the information set necessary to identify the shape in the presence of (noise/clutter/other shapes).

Like “gaze points,” maybe there are discrete elements of Trust that matter, and others that really aren’t important (like the remainder of the shape is not necessary in order to recognize it).


Focus Point 2: “Trust” is a Conditional State; dynamic and transitory…
-- from Physical Security: situations are ever changing, so the characteristics that make up Trust are in flux
-- there may be no absolute measurement of “Trust”; all you know is “relative Trust”:
Is there enough Trust to accomplish the Mission?
Ex: Who cares if there’s $30 unaccounted for in the budget if you still have enough to operate?
-- in complex systems [and in all non-trivial systems?] how do we consider/trust:
a) what we do know [observed or derived or evolving pattern?]
b) what we don’t know [emergent cause/source?]

Focus Point 3: How do you equate “Trust” between entities?
-- is one person’s “trust threshold” the same as another’s?
-- Ex: Totally dependent on context. Yes, if the context and the mission need for trust are the same; but not necessarily if the mission needs are outside the “trust boundaries” of each other’s mission needs/contexts.

Focus Point 4: Trustworthiness and Risk relationship is Context Dependent
-- do you even care about Trustworthiness if there is no (or little) Risk to your mission?
-- Risk considerations: risk to mission; risk to human life
Ex: when Risk is HIGH, you care about the Trustworthiness of a system.
When Risk is LOW or ZERO, does the concept of system Trust have any meaning to you?
-- Are there Properties to understand (like Risk, or others) before you can know the target Trust Level you want/need?
Note: Risk Assessments are a well-understood body/discipline; they are not hard COMPARED to getting the Human to make the right Risk-based Decisions…

Focus Point 5: Trust and Assurance relationship
-- Given a level of Assurance, you can “derive” a level of “trust…” (more related to human emotions – does the analogy transfer?)
-- Ex: A formally evaluated chip with known performance can be a “Trusted” device, but if it is not installed correctly, or installed and then operated with incorrect signals, the deployed result can lose all the “Trust” built in.
[Note: Does this imply that Assurance, as well as Trust, requires context as well as deployment instance to be evaluated?]
-- Given known Trust levels, you can assign resources with appropriate levels of Assurance.
-- Ex: use one machine for things I really don’t care about – casual writing, games [maybe never connected to external networks?]. Use a more trusted machine for banking activities connected to the external public network…

Focus Point 6: Visual Cues to differentiate Assurance Levels or states?
-- humans need some kind of cue
-- current Cues are both insufficient and, in some circumstances, wrong (Belief in them is wrongly placed)
Ex: https just tells you that info comes to you securely; it does not guarantee a) your response goes out securely or b) that you are sending it where you believe you are sending it


-- some other measures or habit patterns with visual alerts that “this is not your normal routing for transactions” would be a useful Cue.

Topic B: Is “Trust,” as a concept, useful to Cyber Researchers?
-- or is it anything other than Binary (have trust; not have trust)?
-- does Trust look any different, or cause us to act any differently, at each of the OSI layers?

Ex: On the Shuttle systems, NASA “trusts” a fault reading if 4 voting systems agree that a fault exists (consensus)

Focus Point 7: is the question we are really asking: are our systems Effective or Not (rather than can we trust them)?

-- if a Trusted DB only trusts the OS 85%, what does that mean? Trust or effectiveness?

-- Authentication: can have errors (Type 1 and Type 2), but while we can’t always know what is right, we can tell when it is very wrong (a matter of degree?)
Ex: Grandma NEVER logs in from Kazakhstan.

-- an Evaluation of Access needs to be based on more than User ID/Password (statistical or historical use patterns?)
-- but there needs to be a mechanism to allow for people on vacation
-- or link suspect evaluations to consequences? (less access or restricted access?)
Ex: in Europe => get $100 from the ATM
In Kazakhstan => get $50 from the ATM
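The ATM example above, linking a suspect evaluation to a consequence rather than a denial, can be sketched as a tiered rule. The regions and amounts echo the example; the function itself is an illustrative assumption:

```python
# Hypothetical sketch of consequence-linked trust: instead of a yes/no
# access gate, an unusual context reduces the allowed transaction limit.
def withdrawal_limit(country):
    """Return the ATM limit for a transaction originating in `country`."""
    home_regions = {"US", "EU"}  # contexts considered usual for this account
    return 100 if country in home_regions else 50

print(withdrawal_limit("EU"))  # 100
print(withdrawal_limit("KZ"))  # 50
```

This keeps the vacationing customer operational while bounding the consequence of a wrong trust decision.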

Focus Point 8: How do you trade off or compare Mission Success vs. Consequences?
-- How do you: discover, analyze, and decide the consequences or benefits of Trust?
-- How do you: determine if a decision was “faulty” [maybe your logic was incomplete?]
-- How do you: prevent consequences, including Societal-level effects, when Trust breaks down?

Note: avoid single decision points; use cumulative decisions? Formulas of Consensus?

-- Need research on: Systems using Decisions in Context; Sensitivity Analysis of those decisions (in Context)
- to know when it’s OK to ignore “trust problems”
- and when it’s OK to derive trust, etc.
- to recognize Low- and High-consequence problems (and ignore, as appropriate)

Where: Elements under consideration include:
- Context Alternatives
- Direct Consequences
- Indirect Consequences
- Risk Appetite
- Time component
- Monitoring/Feedback
- Cues/Alerts
- Frame of Reference

And the starting point, where 100% Trust is invested, is in “You”
-- and it degrades outward
-- perhaps looking at what you DON’T trust as well, for comparison


All with the goal of Building Smart Systems (like Smart Grid) that have Trust mechanisms built-in…

Recommendations for “Next Steps”
Topic C: Does the Human Trust model have any bearing on what/how we need to build our systems?

-- Human “Trust” is learned. It gets broken and relearned.
-- systems are an extension of Humans, so it’s not a “black & white” answer
-- this may be an open question…

Focus Area 9: Trust Evaluation Research Questions
-- exhaust all paths & permutations to start with, in simulation
-- over time, you need a feedback mechanism

Note: Trust of a single channel, network, or connection isn’t important if you have 1000 redundant channels.

Note: System Success needs to be defined; level of “Goodness” of what the system accomplishes.

-- Purpose helps to define Success of system

Note: and 90% of the world doesn’t care about anyone other than themselves and their universe.

Note: we all apply human “trust” criteria differently; systems may need to apply trust criteria differently (based on context, mission need, other risk factors, over time) as well.

Q: Can HW systems trust each other?
-- depends on how much they know about each other’s architecture and components
[Note: what does THIS say about Trust and “composable” systems…?]

Q: Can evaluating the results from a system be sufficient for making a Trust decision?
-- without understanding the mechanisms within a system, how would you know if you are looking at an intermediate result or a final result?
-- Genetic Algorithms are goal-oriented, but otherwise intermediate results aren’t necessarily always converging on the ultimate solution…

Q: Is “End-to-End” Trust a viable alternative? (another way of stating the previous question)
-- this would be a conscious decision to “accept risk” at a “certain level”
-- but what measures would you use to determine the Risk you accept?
-- and how to determine what risk level is appropriate at any given time?
-- How to gather and record evidence of Risk decisions for comparison, analysis, and future decisions?

Q: Are Risk Decisions you make as a result of evidence a separate question? [Back to “how are Trust and Risk related”]

Q: How can Risk or Trust be combined/assessed when you have a series of separate machines?
-- [or, for a set of machines with individual Risk/Trust, what is the combined Risk/Trust of a specific “assembly”… series, parallel, combination?]
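One plausible starting point, borrowed from reliability engineering (which the notes elsewhere suggest importing into cyber-security), treats each machine's trust as a value in [0, 1] and composes assemblies the way reliability composes components. This is an illustrative assumption, not an answer the session settled on:

```python
# Hypothetical sketch of combining per-machine trust values for an assembly.
# Series: the path depends on every machine, so trust multiplies (and drops).
# Parallel: any one machine suffices, so combined trust is the complement of
# all of them failing to warrant trust simultaneously.
from functools import reduce

def series_trust(trusts):
    """All machines must hold: product of the individual trust values."""
    return reduce(lambda a, b: a * b, trusts, 1.0)

def parallel_trust(trusts):
    """Any machine may hold: 1 minus the product of the distrusts."""
    return 1.0 - reduce(lambda a, b: a * (1.0 - b), trusts, 1.0)

machines = [0.9, 0.9, 0.8]
print(round(series_trust(machines), 3))    # 0.648
print(round(parallel_trust(machines), 3))  # 0.998
```

Mixed series/parallel assemblies compose by nesting the two functions, which gives one concrete handle on the "series, parallel, combination" question above.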


Q: And take that answer (which is a “Static Assurance” answer), and consider how to assess when the assembly and operational states are Dynamic instead…
-- how are Risk/Trust assessed then?
-- similarly, or with a different approach?
-- and consider the historical track record of the Risk/Trust as an input to the current assessment of Risk/Trust

Note: “Certification” or assessment of Risk/Trust of a system [or component?] should not be a “snapshot” but a continuous process… [as should the basis for the assessment in a dynamic context?]

Q: How can “Confidence” in a system occur?
-- do “confidence intervals” have the same interpretation? Are our old assumptions limiting for this new way of applying the concept of “confidence”?
-- if we assume “The Cloud is Hostile,” how can you increase “confidence” in a composable system?
- redundant streams? And comparisons of their results?

Q: Have we neglected research areas because of past technology limitations (memory constraints, processing power constraints) which are now no longer valid?
-- research question: how do we document our scoping decisions for research areas (decision logic and decisions to “prune” branches off)?
-- do we ever go back to revisit?
-- [allow SMEs from other domains to review and revisit for reuse, or tackle the “pruned” branches of inquiry?]

Q: What, and to what degree, is important to restore when a system becomes degraded/shut down?
-- how “clean” or how much Risk/Trust must be restored?
-- when can the system be “trusted” for resumed operation?
-- does a “natural background noise” equivalent play a role: how to characterize and include it in assessments?

Q: How can Trust be Re-Evaluated after a disaster event?
-- assess the trust criteria first?
-- establish sufficient criteria first?
-- ensure that the system is rebuilt/retrofitted with trust criteria monitoring/auditing capability?
-- when/how much data is enough?
-- how can that be instrumented?
-- how can industry be directed to build it into all components?
-- dashboards?

Q: Does it still come down to the Human liability in the end?

DOE Grassroots Roundtable on Cyber Security
Trust Discussion Group – Day 2, Aug 27, 2009
Topic Leaders: Alex Nicoll, Aaron Tomb
Note taker: Tami Stewart
Topic Description: Trust & Trustworthiness

Discussion Objectives: Metaphors/Models/Architectures for Trust and for Trustworthiness


What Trust is required to support emerging Social/Business interaction modes of cyber–physical systems

What concrete actions can improve inclusion of Trust/Trustworthiness in the conduct and improvement of our individual research disciplines? Examples:
o Mathematics and Analysis (Predictive Awareness)
o Information Management Systems (Self-protective data; software)
o Platform and Composable System Architecture


Topic D: Human Trust – what is needed to build it, maintain it, validate it?
Topic E: Computer Trust – what needs to be built in, can it be built upon or extended from legacy or existing resources, and what are the issues of scope?
Topic F: Trust from the Data/Data Sender's Perspective

Discussion "Points of Focus" (lots of side discussions occurred in our session, but there were some focal points to the overall session):
- Focus Point 1: Trust relies on Level of Perception (a Human attribute?), which contributes to Confidence (a twin could fool you into trusting him…)
- Focus Point 2: "Trust" is a Conditional State; dynamic and transitory…
- Focus Point 3: How do you equate "Trust" between entities?
- Focus Point 4: The Trustworthiness and Risk relationship is Context Dependent
- Focus Point 5: Trust and Assurance relationship
- Focus Point 6: Visual Cues to differentiate Assurance Levels or states?
- Focus Point 7: Is the question we are really asking whether our systems are Effective or Not (rather than whether we can trust them)?
- Focus Point 8: How do you trade off or compare Mission Success vs. Consequences?
- Focus Point 9: Trust Evaluation Research Questions
- Focus Point 10: Willingness [for a human?] to extend "Trust" is based on the level of "Risk" assessed [for a situation?... since we have dynamic conditions?…]
- Focus Point 11: The Problem Space includes the dimension of "How to Trust Data"
- Focus Point 12: Can I replace "Trust" with a sufficiently robust set of controls (policy, electronic, procedures, etc.)?

Topic D: Human Trust – what is needed to build it, maintain it, validate it?

Focus Point 10: Willingness [for a human?] to extend "Trust" is based on the level of "Risk" assessed [for a situation?... since we have dynamic conditions?…]

Question: Does this leave "Human Trust" out of the equation? It might be useful to work on quantifying "Risk":
o There is a lot of literature, so this is potentially an "easily" fruitful area to address
o Yes – BUT – most of the literature is less than rigorous in quantifying "Risk"


Research Recommendation: go back to the roots of Risk Assessment and take a rigorous approach from a mathematical perspective.

The counterpart of Risk is Confidence:

o Concern that we need to include looking at the EVENT and include information about the Real Person (the Human) [...to assess Confidence, Risk, Trust]
o EX: For identity verification, is a signal about where that person "could have been," based on known patterns, useful? (Granny might log in from Paris, but not from Paris one minute and Kazakhstan the next.)
o Are there any personally identifying patterns that could be used to confirm identity (voice, gait, other sensor input)?
o What would stop someone from emulating all the reams of profile information?
  - Information (at varying degrees of reliability) could be pulled together [to provide accumulating evidence of identity]
  - Information could be broadcast across different channels to ensure that all copies arrive at the destination identically (otherwise, compromise is suspected).
o What about privacy – are we restricted from using authentication mechanisms because of privacy issues? Are we forced into using certain authentication mechanisms because of protecting privacy?
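The "Granny in Paris" point can be made concrete with a simple impossible-travel check: flag a login if the great-circle distance from the previous login implies an implausible travel speed. A minimal sketch; the function names and the speed threshold are illustrative assumptions, not anything proposed in the session:

```python
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_KM = 6371.0
MAX_PLAUSIBLE_SPEED_KMH = 1000.0  # illustrative threshold, roughly airliner speed

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * EARTH_RADIUS_KM * asin(sqrt(a))

def is_impossible_travel(prev, curr):
    """prev and curr are (lat, lon, unix_time). Flag if the implied speed is implausible."""
    dist_km = haversine_km(prev[0], prev[1], curr[0], curr[1])
    hours = max((curr[2] - prev[2]) / 3600.0, 1e-9)  # guard against zero elapsed time
    return dist_km / hours > MAX_PLAUSIBLE_SPEED_KMH

# Paris at t=0, then Kazakhstan (Astana) one minute later: flagged.
paris = (48.8566, 2.3522, 0)
astana = (51.1694, 71.4491, 60)
print(is_impossible_travel(paris, astana))
```

Paris followed by Paris a day later would not be flagged; such a signal would be one input among many, accumulated with the other evidence sources above rather than used alone.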

Topic F: Trust from the Data/Data Sender's Perspective

Focus Point 11: The Problem Space includes the dimension of "How to Trust Data"

- Data integrity is one type of "Trust"
  o Resilience = self-healing data (from the integrity viewpoint)
  o Research recommendation: improve the components leading to integrity Trust:
    - Data modification/destruction
    - Operational use of data/metadata
    - User identity/authorization
    - Path or pedigree of how "Trust" was acquired
- "Trust" that data is used properly by the right people is another type
- "Trust" that data is interpreted correctly by the people using it is another type
- Consider the Sender's "Trust" requirements as well as the Receiver's "Trust" requirements
  o Consider the "trust" given once someone is allowed access to data
- Some tools (PACMAN?) use socio-technical data to determine if someone may be a potential insider threat

Focus Point 12: Can I replace "Trust" with a sufficiently robust set of controls (policy, electronic, procedures, etc.)?

- Need to consider the "environment" you're creating, not a specific machine or perimeter:
  o Can I believe the data I put in?
  o Will it be the same data when I go to retrieve it?
  o EX: pirated movies and the suppression of the FBI warning.
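The question "will it be the same data when I go to retrieve it?" maps directly onto cryptographic integrity checks. A minimal sketch using a keyed HMAC; the key and record values are illustrative, the key handling is deliberately simplified (a real system needs key management), and this is an illustration rather than a recommendation from the session:

```python
import hmac
import hashlib

def protect(key: bytes, data: bytes) -> bytes:
    """Return an HMAC-SHA256 tag to store alongside the data."""
    return hmac.new(key, data, hashlib.sha256).digest()

def verify(key: bytes, data: bytes, tag: bytes) -> bool:
    """On retrieval, recompute the tag and compare in constant time."""
    return hmac.compare_digest(protect(key, data), tag)

key = b"example-key"          # illustrative only; real keys need management
record = b"payroll row 42"
tag = protect(key, record)

print(verify(key, record, tag))          # unmodified data verifies
print(verify(key, record + b"!", tag))   # any modification is detected
```

Note what this does and does not give you: it detects modification/destruction (the first integrity component listed above), but says nothing about whether the data is used properly or interpreted correctly, which the notes identify as separate types of "Trust."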


Appendix B. Complex Systems Breakout Session Notes

Note takers: Catherine Call and Thomas Ng
Topic Leaders: Robert Armstrong and Joanne Wendelberger
Characterization: This was a small but good representation of the larger group

Workshop Objectives

Spend less time on these questions:
- Defining the specific nature of and peculiar challenges of complex systems (can include everything from enterprise through SCADA, and including human, psychosocial, cultural, political…)

Spend more time on these questions:
- How can we take advantage of complex systems so that they are more secure, rather than being overwhelmed by their complexity?
- How can we tie trust/trustworthiness and scientific evaluation principles to better understand complex systems?
- What concrete actions can we take to improve effective use and protection of complex systems?
- And if we do all this, is it sufficient?

Discussion

We began by agreeing that our charge was to look at complexity in terms of the scientific method and that complexity makes things difficult: we need to both leverage and manage complexity. We asked ourselves the questions:

- What problem are we trying to solve?
- Are we looking for an architecture, or an approach such as decomposition or reduction?
- Are we looking for a solution or a refinement?

The remainder of this section is arranged around the major points that arose over the two sessions.

Lexicon/Taxonomy: There are a number of terms that arise when we discuss complex systems for which we have not yet agreed on precise definitions.

- Complexity: not synonymous with undecidability
- Composability
- Composition: does not mean preservation of properties; that's composability
- Compositionality
- Emergent (properties): there are both known and unknown emergent properties, both controlled and uncontrolled, and both expected and unexpected. Emergent properties can be good or bad, but we need to be cognizant that good and bad are relative and depend on the context and requirements.
- Formal Methods: misunderstood in the context of approaching complexity scientifically. Alternative terms include formal approach, principled development, and analytic techniques.
- Irreducibility: often used roughly synonymously with unpredictability, and to describe a state in which properties are not analytically derivable from the lower layers and are not representative of those layers
- Obfuscation (apparent complexity): the group generally agreed that it's a fantasy to believe we can design in complexity so that the enemy is defeated while we ourselves fully understand the system.
- Undecidability: not synonymous with irreducibility
- Unpredictability

How we got here: In the past, when computer science was a science, there were systems such as Multics that were formally, rigorously, and efficiently specified. These systems addressed security at the system level. With the advent of mass-market PCs, smart phones, service-oriented architectures and, most recently, the "cloud," time-to-market economic pressures have created a demand for convenience and utility at the expense of formal design, scrutiny, and system security. There is little liability for bad software. In this process, the manufacturers have been as surprised as anyone at the creativity of system misusers and malware. Security has also been commoditized and driven by market forces: the new threat gets the attention. Problems such as DDoS have been discussed since the 1980s, but there are many attacks and attack vectors; we've been picking the low-hanging "sexy" fruit and brushing aside the hard problems. In the large, we've accepted a victim mentality and the role of playing a catch-up game against the hackers. Even now we're looking at how we can secure PCs, but in five years, when we're all using smart devices, what will it matter whether a PC is secure? One conclusion drawn was that current economics doesn't support science. The good news is that the government is beginning to realize we cannot tolerate insecure systems.

Modeling and Simulation: Modeling and simulation need to be combined with experiment, precise requirements, and architecture. We need modularity, compartmentalization, component encapsulation, abstraction, well-defined interfaces, and precise requirements. We also need to anticipate and analyze emergent behaviors. We can't model small pieces and expect to understand the composed system. Model checkers only check properties described by the model, not unanticipated emergent properties. Scale is an important factor, especially as concerns trust. We must take a holistic and systemic approach and consider factors other than code. Simulation may be efficient if the components are small, the environmental parameters are well controlled, and/or the distribution is well understood.

Composing Systems: This subject was raised in the context of a "divide and conquer" approach to managing complexity: if we have modular composition and loosely coupled systems, can we not confer (conserve and preserve) properties and/or compose for one or more selected emergent behaviors? There was no consensus on this, and much debate. On the one hand is the idea that we can guarantee small components and control how they are combined so that we preserve desirable properties (e.g., robustness). On the other is the argument that composition creates unexpected, emergent behaviors, new vulnerabilities, and new risks. When we compose, we're not only preserving properties but preserving vulnerabilities due to these properties and/or the uncertainty of the properties themselves. Examples of this include:

- In a system composed of thermodynamic and hydraulic subcomponents, the interactions of the thermo and hydraulic systems are irreducible. How you couple/compose (e.g., the proximity of the components) affects performance, as do the setup (e.g., configuration parameters and constraints) and the operating environment (e.g., temperature, humidity). Different behaviors will emerge depending on these factors. If an emergent behavior of the meta-system is undesirable, it's a design flaw.
- eBay and PayPal together are more fertile ground for hacking than either by itself.


- If you put payroll onto the same system as the lab, you've created greater risk for yourself and greater opportunity for a hacker.
- Electronic devices on airplanes may represent an unintended covert channel.
- Voting machines: lots of opportunity for bad things to happen.

Several research questions were brought up:
- Define how we compose: how can we apply rigor and better understand the limitations of composing systems?
- Define the contexts under which we expect our systems to run.
- Define perspicuous interfaces and how we configure systems.
- Design for evolving systems: we will compose and configure systems differently over time as we adapt to different circumstances. It must be noted that we had some fun here thinking about system "evolution" and "intelligent design."

One conclusion reached was that we need to adopt a systems viewpoint: any system is composed of code, libraries, hardware, humans, etc. KLOCs (thousands of lines of code) should not be the metric for complexity: it's not just the code but all the interactions and ensuing side effects. We must also accept that we have to make decisions in an imperfect world and that we may need to accept approximate solutions that work well.

Design for a hostile environment: Malware is designed to survive and/or degrade gracefully in a hostile environment. The systems we build, on the other hand, are largely designed to run in protected environments and are brittle when used in circumstances for which they were not intended. Hackers use many of the same components as the good guys, but they compose them differently. A good place to start would be to design and build hostile environments, and to design, test, and compose for hostile environments. We don't do this currently. Red team exercises have many "don't touch" rules (no social engineering, don't do that or you'll break the simulation, etc.). Measuring adversary creativity would be another good place to start: hackers read journals to take advantage of new knowledge and technology, they operate under a different set of economic factors, and they also make mistakes.

Engineering discipline: There are good examples of engineering discipline, process, and well-controlled systems in other fields, and we should draw analogies from successful engineering practices and add discipline to cyber. We need to ensure we are looking at whole systems; we need to look beyond the software and take into account both system configuration and the operating context. When a system is put into an environment for which it was not intended, we lose control. For example, attacks often move our systems outside our models and assumptions. We need firm requirements, not only for the individual components but for the composed, "top layer" system. This is especially important in implementing new requirements. Without these requirements, we don't know what we need to analyze. And if we can't analyze, we can't test. This can be restated as saying we need to include risk management in our process: we need to look beyond systems analysis and also consider the vulnerabilities and risk in terms of meeting the requirements and accomplishing the mission.

Wake up already: We've discussed the same problems that have been on the table for many years. We keep making the same recommendations. We need a massive culture change, and to break the chain of irresponsibility, if we expect to make progress. We need to identify good tools and methodologies and to understand the glue by which we compose systems. We need strong requirements, and to bring risk management into the analytical process.

Recommendations

Research Needs: Principled Development
- Taxonomy of definitions, principles, foundations
- Abstraction, modularity, encapsulation, composition, and evaluation of properties
- Emergent behavior of cyber systems, anticipated or unanticipated, intentional or unintentional
- Irreducibility of some emergent behavior
- Analysis of properties at higher levels of abstraction
- Separation of data types
- Limits of attribution (of properties) due to complexity

Research Needs: System Engineering in the Context of Cyber
- Development of requirements
- Fidelity to specifications
- Assessment of vulnerabilities and risks
- Sensitivity and survivability to changes
- Designing in a malicious environment (HW, SW, communications, people)
- Evolution

Research Needs: Data-oriented Approaches
- Analysis on complex data structures
- Signal-to-noise (needle in a haystack) issues associated with complexity
- Gathering information on emergent behavior and the importance of context
- Collection and analysis of massive, streaming data from multiple sources
- Sensor-network-type approaches that incorporate concepts such as statistical physics and robustness
- Grids, Clouds, and beyond

Educational Awareness
- Foundational issues
- Techniques for rigorous statement and solution of problems
- "Best practices" for code development and cyber defenses
- Incorporation of scientific approaches for comparing alternatives

Action Items
- Develop and post white papers on technical topics.
- Complete summary of sessions on the SIAM Mini-symposium on Mathematical Challenges in Cyber Security.
- Explore opportunities for future technical forums on complexity.
- Respond to the National Cyber Leap Year Summit Report from a Complexity perspective.

Address the Problem
- Need cultural change/paradigm shift.
- The complexity of Cyber Security requires an interdisciplinary, scientific approach.
- Break the chain of irresponsibility (cuts across the cyber industry: Microsoft, education).

Questions and Discussion

Q: Can signal-to-noise be an asset?
A: Yes, we can hide things in the noise.

Q: What's an example of an emergent behavior?
A: Take a thermal system and a hydraulic system. How you compose them is important. The design specification says how to compose them (therefore the system is reducible), but with computer systems you get unexpected effects. It's not just A + B but the vulnerabilities of both (compounded or not), as well as the "chemical reaction" or cross-contamination of A + B, which is the part you didn't know about. We can't avoid it, but we can manage it better.

Q: Did we talk about mission inconsistency?
A: No, but this is part of the process and must be designed in. It's an emergent property, as is process inconsistency. Either might be good or bad. Someone mentioned the Dilbert Principle.

Additional Discussion Points
- Not only is application context important, but we need to take mission and risk management into account.
- We need to stop thinking about cyber security as an ops/technical issue. It's an interdisciplinary science problem and a business issue.
- We need to create and design systems that can tolerate internal and external inconsistencies, corruptions, etc., and design for a hostile environment.

Note taker: Thomas Ng

Complexity Breakout Notes

Topic Description
- Complexity does not tell you what is possible; it tells you only what is not. The only way to know what a complex system will do is to simulate it.
  ◦ Example, botnets: you cannot figure out emergent behavior based on a complete description of current behavior.


Points of Focus

Topic: Simulation, in relation to complexity (accurate? possible?)
◦ What is meant by finding out what it will "do"? (The problem is not well defined.)
  ▪ What level of abstraction? How specific?
  ▪ Some systems are trivial to simulate.


Abstraction is fundamental to dealing with complexity
◦ Opinions:
  ▪ It is very hard to simulate; how can you tell if it's accurate?
  ▪ Add a human among the bots, and now it is a different problem.
  ▪ Simulation is not the answer to complexity, but it is a tool.
  ▪ Either you can sit and wait, or do a simulation to figure it out (e.g., Conficker).
◦ Ambiguity about the actual problem being solved right now
  ▪ One opinion: trying to solve the problem via leveraging of mathematical data over large scale
    - Some amount of disagreement about specifics
◦ Vertical complexity: each layer requires different requirements/dependencies for the simulation
  ▪ Applying simulation technologies is workable when the scope is limited to a layer

Overarching topic: How can complexity be leveraged to make cyber systems more secure?
◦ Example: Conficker – made complex to make it difficult to fix
◦ Example: scalable trustworthiness
  ▪ Scaling is very hard unless it is anticipated
  ▪ When using simulation, you end up simulating the artifacts that you based your system on
◦ Use complexity to advantage – use an approach that adds complexity for attackers while remaining understandable to yourself (obfuscation)
  ▪ Is it possible to add complexity/obfuscation in a real sense?
    - One says no:
      ◦ Apparent complexity is not useful
    - One says yes, example:
      ◦ Chain of trust vs. statistical mass of trust (Google model)
      ◦ The second is more robust(?)
      ◦ More agreement on this view than on the "no" view (by body language)

Keys to using trust/complexity:
◦ Sound architecture
◦ Modularity
◦ Abstractions are key to using complexity (do all agree? It seems yes)
◦ Example: airplanes, very complex
  ▪ Everything is specified by requirements
  ▪ Everything must be monitored (when installed, by whom, where, etc.)
  ▪ Cyber security is a little different
  ▪ This example is an analogy for the management of complexity
  ▪ Simulation in this case is done very carefully
    - Well-controlled environment parameters, or a good understanding of environmental distributions

Two opposing ideas so far:
◦ Composition
  ▪ Must have some means of composition to make each 'box' manageable, then combine boxes
  ▪ Example: Russian hackers that Boeing helped catch
    - Perl scripts generated e-mail accounts, eBay (and auctions), etc.
  ▪ Point: modularization is difficult, but could exist (one says there are examples, lists a few, such as PSOS)
    ◦ Marketing prevented these solutions from moving forward
  ▪ Counter-example: cultural shift from not using computers to the necessity of them
  ▪ Also, did not anticipate the fact that these machines would be put into an adversarial environment (personal use 'ruined' secure development of larger-scale things)
  ▪ Composition summary: make small modules that are able to be understood, and then combine them
    - Problem with this: doesn't necessarily scale up

Code complexity
◦ Protecting against more difficult attacks is harder with complex, undecidable code.
◦ Almost unanimous disagreement with this slide:
  ▪ Complexity != undecidability; complexity != KLOCs.
  ▪ It is difficult (impossible) to quantify how difficult something is to defend against.
◦ Point of the slide: with verifiable code composed with other verifiable modules, verification is preserved.
  ▪ Unanimous disagreement on this as well.

When composing two modules, you must now take into account the emergent properties of the composition.

Composition and compositionality are different.
◦ In the end, composition is very limited.
  ▪ Positive side – it is a research area, but still not figured out; a discipline that is yet to be developed.
  ▪ Aside: hardware vs. software security
    - Hardware cost when it fails: higher; software cost when it fails: much lower.
    - This is a reason why software security failures are so frequent.
◦ Example: Boeing puts together planes on computers, and it works.
  ▪ How can this translate to composition of components with respect to security?
  ▪ Another example: two systems are each 99.9 percent secure; when combined, the two 0.1 percent insecurities combine. Aggregate, and it gets worse and worse.
  ▪ Model checking (checking the composition of pieces) only checks that a given property is present, not the absence of other, new properties (i.e., it cannot guarantee a lack of properties).
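The "99.9 percent" point can be made concrete: if each component independently resists attack with probability p, and the system holds only if every component holds, security degrades multiplicatively as components are composed. A back-of-the-envelope sketch, with the caveat that the independence assumption is itself questionable, as the group's discussion of emergent vulnerabilities suggests:

```python
def composed_security(p: float, n: int) -> float:
    """Probability that all n components hold, assuming each independently
    holds with probability p (serial composition, no emergent effects)."""
    return p ** n

# Two 99.9%-secure components are ~99.8% secure together;
# a hundred of them drop to roughly 90%.
print(composed_security(0.999, 2))
print(composed_security(0.999, 100))
```

If anything, this model is optimistic: composition can also create new, correlated vulnerabilities that no per-component probability captures, which was exactly the objection raised against preserving verification under composition.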

Malware is designed to run in a hostile environment
◦ Other things are not designed to work under attack.
◦ Malware writers know how to make their code survive, in the aggregate.

How to build systems to work in hostile environments?
◦ Testing in a hostile environment is hard – it is impossible to find every single possible method of attack, so it is very hard to make an accurate hostile environment.

Example: graphs of internet traffic
◦ Emergent behaviors were easy to predict once the ideas were understood, but not possible before.
◦ What would be nice: develop a model that would allow these predictions to be made beforehand.

Conclusion:
◦ Hard problem, but still doable (example: hurricane prediction).

Build security in from the ground up, and then the problem is (much more nearly) solved.
◦ New techs that will be the 'next thing' – cell phones, etc.
  ▪ "Who cares if the PC is insecure; it's not as important."

Recommendations Made (Next Steps)

Steps moving forward:
◦ Composition of components, abstractions: not enough, but still necessary
  ▪ Modularity, etc., still necessary
  ▪ i.e., better software engineering
  ▪ Especially considering the cloud
◦ Research questions:
  ▪ How to represent a hostile environment?
  ▪ What properties can be composed and still be guaranteed?
    - Composability of vulnerability
    - Compartmentalization and isolation
    - Is it possible to cross two things so that the composition of the unknown crosses some space?
      ◦ The attacker doesn't know what the unknown is either, so doesn't know the vulnerability
  ▪ Can I guarantee any subsystem by imposing a relationship with an external system?

Possible Research Topics
1. Modularity and properties – a composition scheme for modules that would conserve/preserve properties
   a. Emergent properties: good and bad
      i. Increased functionality
      ii. Vulnerabilities
      iii. Contextually dependent
2. Ignore value judgments; study emergent behavior
3. Possibly avoid the term "emergent"; it is too broad
   a. Irreducibility? Undecidability? Unpredictability?
      i. Use metaphors from other disciplines
   b. This is to explain the interaction between systems, not the systems themselves
4. Seek to adopt constraints, like other disciplines, that allow for better overall processes
   a. Look at other successful processes; apply them to software development
   b. "System engineering in the context of cybersecurity," and application context
5. Statistical models when composition causes too much complexity
   a. Counterpoint – economics is largely statistical, and often fails
6. How to compose components to promote an emergent behavior? It is possible; it has been done
   a. Can still accomplish the final mission as long as emergent behaviors are within bounds
   b. Investigate the system engineering process to see what causes emergent behaviors: trends
   c. Uniform taxonomy to discuss this issue
   d. Culture change, to acknowledge complexity

Analytic techniques (instead of formal methods), principled development (formal approach)


Appendix C. Scientific Evaluation Breakout Session Notes

Note Takers: Sarah Alkire, Lucas Reber
Session Moderators: Glenn Fink, Clifford Neuman

Background: The Talaris Grassroots Roundtable was held at the Talaris Conference Center in Seattle, Washington, on August 27 and 28, 2009.

Session Description
Scientific evaluation is a hallmark of the scientific process. This process is taught and used throughout the sciences to ensure that experimentation is documented in a known, verifiable format. Although this is standard in the biological sciences, formal scientific evaluation is not as well established in computer science, and especially in the field of cyber security. This has resulted in a class of research with limited ability to reproduce results through follow-up experimentation. The question, then, is: if there is value in scientific evaluation, how do we encourage its use in the field of cyber security research?

Focus Points
The discussion over the two sessions focused on the following aspects of scientific evaluation:

- Encouraging scientific evaluation in the classroom
- Adopting a manual or guidelines for scientific evaluation in cyber security
- Implementing a grand challenge event or mini-challenge events
- Spurring re-experimentation through tracks in common conferences
- Common datasets available for repeatable experimentation
- Implementing educational resources in the area of cyber security

Lack of Scientific Evaluation in Computer Science
A common theme over the two sessions was how to encourage the adoption of formal scientific evaluation in standard computer science curricula. It was generally accepted that scientific evaluation is a basis of the hard sciences but appears to be curiously absent in computer science and cyber security research. This can be attributed to a number of factors (a relatively young field; a lack of understanding of the value of the method), but, more importantly, it has not been required to get published. This lack of a requirement for proper scientific evaluation has resulted in a field of study with very few experiments that are repeatable. Repeatability enables researchers to review experiments and determine whether they get the same results under similar conditions. To put this in perspective, one attendee noted that if the lack of scientific evaluation requirements in computer security applied to physics, we would all believe that cold fusion works, based on the Fleischmann and Pons experiment.

There shouldn't be constraints on methodologies in cyber security; we should also include the observational and scientific but non-experimental methods that are out there. However, it is hard to find subjects.

Lack of Common Guidelines for Scientific Evaluation in Computer Science
A common point of concern was identifying a standard guideline that could be applied in computer science and in cyber security research. Attendees noted that the concept of proper scientific evaluation needs to be taught and applied in the classroom and in the review process for scientific journals, but there was no consensus on a common guideline. This prompted discussion of the possibility of creating a standard guideline for the community. Possibly establish a list of the top twenty books in the area? It was also noted that one of the attendees is publishing a book on the topic and would be open to receiving input from the community on the book. Further discussion indicated the need for a boilerplate for papers. This boilerplate for experimental papers should include instructions to authors and reviewers. It would be a valuable tool for committees reviewing papers, since it would help ensure that papers are sound and on topic.

Educational Efforts
An increase in educational efforts was proposed and agreed upon. Some of these efforts included ideas such as:

- Establish book and community norms.
- Establish a jury of peers to help with design – to help each other in the cyber security community.
  o This would be how to do experimental design.
- Investigate the prospect of local university juries – not just one.
- Research methods courses – a friendly jury of peers (one for cyber security, or individual).
  o What we want to get out of this is the "low-hanging fruit" – just do something!
- Include guidelines from this list of books so we can add the good points into one book. Many don't go into the psychosocial part of cyber security.
  o Does yet another book need to be written, though?
  o Can we just take the info from these books? We should recap what's out there and put a page on the Wiki from the books that are already out there for people to use.
- In general, it would be nice to have an application that includes examples from other cyber literature.
- One good thing about the Wiki is that if any of us have cyber security examples (papers, studies, paragraphs about parts, etc.), either observational or scientific, we can put them there to help.
- Throw out hard problems and several different applications, like case studies, etc.
- Need something separate for a discussion area other than the Wiki.


Implementing a Grand Challenge Event or Mini-Challenge Event

Attendees noted that a grand challenge with stricter scientific evaluation requirements and a requirement for open datasets might entice participation from the community. It was generally agreed that a full "Grand Challenge" level event would not be an appropriate starting point. However, a number of attendees also chair committees, review for journals, and are active in the larger community, and it was suggested that one way to spur adoption of scientific evaluation would be to add these requirements to the journals and groups in which they participate. One option to explore would be creating mini-challenges at some of the common conferences the community attends.

Encouraging Repeatability

Another area of interest was encouraging repeat research to verify results by creating a sub-track for repeated experiments at these conferences. Although such work is extremely important to establishing the field as a formal science, it is generally accepted that it is a difficult area in which to obtain funding. Offering the sub-track as an option may also give beginning researchers a new way to gain entry and experience in the community, while providing a valuable addition to the foundations of the discipline.
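Repeatability of the kind discussed above depends on an experiment recording enough detail to be re-run by an outside group. A minimal sketch of such a record follows; the field names and example values are hypothetical, not drawn from the session.

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class ExperimentRecord:
    """Minimal metadata needed to attempt a repeat of a cyber security experiment."""
    title: str
    dataset_id: str          # points at a shared dataset, not an in-house capture
    method: str              # e.g. "controlled experiment", "observational study"
    parameters: dict = field(default_factory=dict)   # knobs that affect the outcome
    environment: dict = field(default_factory=dict)  # OS, tool versions, sensor setup
    metrics: list = field(default_factory=list)      # what was measured

record = ExperimentRecord(
    title="IDS false-positive rate under replayed traffic",
    dataset_id="darpa-lincoln-1999",
    method="controlled experiment",
    parameters={"replay_rate": "1x", "ruleset": "v2.9"},
    environment={"os": "Linux", "ids_version": "2.9.0"},
    metrics=["false_positive_rate", "detection_rate"],
)

# Serializing the record makes it easy to publish alongside the paper,
# which is what a repeated-experiments sub-track could require of submissions.
print(json.dumps(asdict(record), indent=2))
```

A sub-track requiring something like this record, plus access to the named dataset, would give a second team everything needed to attempt a repeat.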

Common Datasets for Research

Throughout the session, a recurring theme of contention and concern was the availability of common datasets for cyber security research. Attendees noted that there are relatively few large-scale datasets to draw on. In many cases researchers rely on datasets generated from in-house resources (locally managed sensors and instrumentation) or on datasets with inherent failings, such as the Lincoln Labs DARPA data. It was generally agreed that the more datasets researchers have available, the better; the problem is how to acquire valid datasets for research use.

Developing a small database, along with the problems found in that data and a list of sources, was recommended; if someone worked through it once, it would save others time in the future. This question spurred discussion of the types of datasets available and what could be of use. There are very few repositories of datasets for community use, which is a concern because it affects how repeatable an experiment can be: if outside researchers do not have access to the original dataset, it is very difficult to produce repeatable and reliable results. This led to discussion of a possible centralized clearing house for such data. A few sites available to the community today might be expanded, or it may require creating a site that lets researchers point to existing datasets in essentially a registry format. There was discussion about the security of such a database, and a general consensus that a community agreement would need to be reached to protect the information.
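The registry idea above can be sketched very simply: rather than hosting data centrally, the registry points researchers at existing datasets and records the known failings of each one. All names, the URL, and the entries below are illustrative placeholders, not real registry contents.

```python
# Minimal sketch of a dataset registry: pointers plus documented failings.
registry = {}

def register_dataset(dataset_id, location, description, known_issues=None):
    """Add a pointer to an existing dataset, with any documented failings."""
    registry[dataset_id] = {
        "location": location,
        "description": description,
        "known_issues": known_issues or [],
    }

def find_datasets(keyword):
    """Return the ids of datasets whose description mentions the keyword."""
    return [d for d, meta in registry.items()
            if keyword.lower() in meta["description"].lower()]

register_dataset(
    "darpa-lincoln-1999",
    "https://example.org/datasets/darpa-1999",  # placeholder URL
    "Synthetic network traffic for intrusion detection evaluation",
    known_issues=["synthetic traffic criticised as unrepresentative"],
)

print(find_datasets("intrusion"))  # → ['darpa-lincoln-1999']
```

Recording `known_issues` alongside each pointer is what would save later researchers the time the session participants described: the failings of a dataset are discovered once and documented for everyone.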


Takeaways

- The community needs to encourage the adoption of formal scientific evaluation in the classroom, in the journals, and in the conferences.
- A common scientific evaluation guideline should be developed for adoption across the discipline. It should also include a boilerplate for conferences and journals.
- Encourage re-experimentation by offering forums for it at conferences.
- Encourage scientific evaluation through mini-challenges modeled on the DARPA Grand Challenges.
- It is important that these grassroots efforts convey that there is a community to talk to, so that recommendations can emerge.
