
Page 1

Security Architecture and Design: Part II

Chao-Hsien Chu, Ph.D.
College of Information Sciences and Technology
The Pennsylvania State University
University Park, PA 16802
[email protected]

IST 515

Learning by Doing

Theory → Practice

Page 2

The Transformation Process

Security Policy → Security Model → Programming → Product or System

• Security Policy: abstract objectives, goals, and requirements; rules or practices; framework.

• Security Model: mathematical relationships and formulas; specifications; data structures.

• Programming: computer code; GUI elements such as check boxes.

• Product or System.

The security policy provides the abstract goals, and the security model provides the do's and don'ts necessary to fulfill those goals.

Page 3

Security Policy

• Outlines the security requirements for an organization.

• Is an abstract term that represents the objectives and goals a system must meet and accomplish to be deemed secure and acceptable.

• Is a set of rules and practices that dictates how sensitive information and resources are managed, protected, and distributed.

• Expresses what the security level should be by setting the goals of what the security mechanisms are supposed to accomplish.

• Provides the framework for the system's security architecture.

Page 4

Security Model

• Is a symbolic representation of a policy, which:
  – Outlines the requirements needed to support the security policy and how authorization is enforced.
  – Maps the abstract goals of the policy to information system terms by specifying explicit data structures and techniques that are necessary to enforce the security policy.
  – Maps the desires of the policymakers into a set of rules that a computer system must follow.

• Is usually expressed in mathematical and analytical terms, which are then mapped to system specifications and developed by programmers through program code.

Page 5

Security Policy: "Subjects need to be authorized to access objects."

Security Model Example:

• Derive the mathematical relationships and formulas explaining how x can access y only through outlined specific methods.

• Develop specifications to provide a bridge to what this means in a computing environment and how it maps to components and mechanisms that need to be coded and developed.

• Write the program code to produce the mechanisms that provide a way for a system to use access control lists and give administrators some degree of control. This mechanism presents the network administrator with a GUI representation, like check boxes, to choose which subjects can access what objects within the operating system.
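To make the last step concrete, the following is a minimal Python sketch of the kind of access control list (ACL) mechanism described above; the subjects, objects, and function names (acl, can_access) are illustrative assumptions, not part of the course material.

```python
# Hypothetical sketch of the policy-to-code step: an access control list (ACL)
# mapping each object to the subjects and operations it authorizes.
acl = {
    "payroll.db": {"alice": {"read", "write"}, "bob": {"read"}},
    "audit.log":  {"carol": {"read"}},
}

def can_access(subject: str, obj: str, operation: str) -> bool:
    """Return True only if the subject is authorized for the operation on the object."""
    return operation in acl.get(obj, {}).get(subject, set())

# A GUI check box for "bob / payroll.db / write" would simply add or remove
# "write" from the corresponding set.
print(can_access("alice", "payroll.db", "write"))  # True
print(can_access("bob", "payroll.db", "write"))    # False
```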

Page 6

Security Models

Security models:
• Bell-LaPadula Model (1973)
• Biba Model (1977)
• Clark-Wilson Model (1987)
• Access Control Matrix
• Information Flow Model
• Noninterference Model
• Chinese Wall Model
• Lattice Model

Security requirements:
• Confidentiality
• Integrity
• Availability

Page 7

Bell-LaPadula Model

• Funded by the U.S. government, the Bell-LaPadula model is the first mathematical model of a multilevel security policy. It was developed because users with different clearances use the same system, and the system processes data with different classifications.

• Is a state machine model that enforces the confidentiality aspects of access control, but is not concerned with integrity or availability.

• Is an information flow security model, as it ensures information does not flow in an insecure manner.

• All mandatory access control (MAC) models are based on the Bell-LaPadula model.

Page 8

Bell-LaPadula Model Properties

• The Simple Security Property (ss Property) states that a subject at a given security level cannot read data that resides at a higher security level (No Read Up).

• The * (Star) Security Property states that a subject at a given security level cannot write information to a lower security level (No Write Down).

• The Strong Star Property states that a subject with read and write capabilities can only perform those functions at the same security level, nothing higher and nothing lower. For a subject to be able to read and write to an object, the subject's clearance and the object's classification must be equal.
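As a rough illustration of how these rules translate into checks, here is a minimal Python sketch assuming numeric security levels (a higher number means higher secrecy); the level names and function names are illustrative, not from the slides.

```python
# Hypothetical sketch of Bell-LaPadula access checks.
LEVELS = {"unclassified": 0, "confidential": 1, "secret": 2, "top_secret": 3}

def blp_can_read(subject_level: str, object_level: str) -> bool:
    # Simple Security Property: no read up.
    return LEVELS[subject_level] >= LEVELS[object_level]

def blp_can_write(subject_level: str, object_level: str) -> bool:
    # * (Star) Property: no write down.
    return LEVELS[subject_level] <= LEVELS[object_level]

def blp_can_read_write(subject_level: str, object_level: str) -> bool:
    # Strong Star Property: read/write only at the same level.
    return LEVELS[subject_level] == LEVELS[object_level]

print(blp_can_read("secret", "top_secret"))    # False: no read up
print(blp_can_write("secret", "confidential")) # False: no write down
print(blp_can_read_write("secret", "secret"))  # True: equal levels
```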

Page 9

The Bell-LaPadula Model

[Diagram: the Simple Security, Star (*), and Strong Star (*) Properties shown between a layer of lower secrecy and a layer of higher secrecy; reads that would expose higher-level secrets and writes that would divulge secrets to lower levels are blocked, while reads, writes, and read/writes in the permitted directions are allowed.]

Page 10

Biba Security Model

• Developed in 1977, the Biba integrity model mathematically describes read and write restrictions based on integrity access classes of subjects and objects. It is the first model to address integrity.

• Is an information flow model, as it is concerned with data flowing from one level to another.

• The model looks similar to the Bell-LaPadula Model; however, the read-write conditions are reversed.

Page 11

Biba Integrity Model Axiom

• The Simple Integrity Axiom: States that a subject at one level of integrity is not permitted to observe (read) an object of a lower integrity. No Read Down.

• The * (Star) Integrity Axiom: States that a subject at one level of integrity is not permitted to modify (write to) an object of a higher level of integrity. No Write Up.

• Invocation property states that a subject at one level of integrity cannot invoke (call up) a subject at a higher level of integrity.
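For comparison with the Bell-LaPadula sketch above, here is a minimal Python sketch of the Biba axioms, assuming numeric integrity levels (a higher number means higher integrity); the level names and function names are illustrative assumptions.

```python
# Hypothetical sketch of Biba integrity checks.
INTEGRITY = {"untrusted": 0, "user": 1, "system": 2}

def biba_can_read(subject_level: str, object_level: str) -> bool:
    # Simple Integrity Axiom: no read down.
    return INTEGRITY[subject_level] <= INTEGRITY[object_level]

def biba_can_write(subject_level: str, object_level: str) -> bool:
    # * (Star) Integrity Axiom: no write up.
    return INTEGRITY[subject_level] >= INTEGRITY[object_level]

def biba_can_invoke(subject_level: str, target_level: str) -> bool:
    # Invocation Property: only invoke subjects at the same or a lower level.
    return INTEGRITY[subject_level] >= INTEGRITY[target_level]

print(biba_can_read("system", "untrusted"))  # False: no read down
print(biba_can_write("user", "system"))      # False: no write up
print(biba_can_invoke("user", "system"))     # False: cannot call up
```

Note that the comparisons run in the opposite direction from the Bell-LaPadula checks, which matches the earlier remark that the read-write conditions are reversed.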

Page 12

The Biba Model

[Diagram: the Simple Integrity and Integrity Star (*) Properties shown between a layer of lower integrity and a layer of higher integrity; reads from below are blocked so a subject does not get contaminated, and writes upward are blocked so a subject cannot contaminate higher-integrity data.]

Page 13

The Invocation Property

• The Biba model can be extended to include an access operation called invoke. A subject can invoke another subject, such as a software utility, to access an object.

• The subject cannot send messages (logical requests for service) to subjects of higher integrity. Subjects are only allowed to invoke utilities or tools at the same or a lower integrity level (otherwise, a dirty subject could use a clean tool to access or contaminate a clean object).

Page 14

Clark-Wilson Integrity Model

The model uses the following elements:

•Users: Active agents.

• Transformation Procedures (TPs): Programmed abstract operations, such as read, write, and modify.

•Constrained Data Item (CDI): A data item whose integrity is to be preserved. Can only be manipulated by TPs.

•Unconstrained Data Item (UDI): Data items outside of the control area of the modeled environment such as input information. Can be manipulated by users via primitive read and write operations.

• Integrity Verification Procedure (IVP): Checks the consistency of CDIs with external reality.
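As a rough illustration of how these elements fit together, here is a minimal Python sketch of Clark-Wilson style enforcement; the TP, CDI, and access-triple names are illustrative assumptions, not part of the course material.

```python
# Hypothetical sketch: a user may touch a constrained data item (CDI) only
# through a certified transformation procedure (TP), and only if a
# (user, TP, CDI) access triple authorizes it.
account_balance = {"value": 100}                       # a CDI
access_triples = {("alice", "deposit", "account_balance")}

def deposit(cdi, amount):
    cdi["value"] += amount                             # a well-formed transaction (TP)

def run_tp(user, tp_name, cdi_name, tp, cdi, *args):
    if (user, tp_name, cdi_name) not in access_triples:
        raise PermissionError("access triple not authorized")
    tp(cdi, *args)

def ivp(cdi):
    # Integrity Verification Procedure: check the CDI against external reality,
    # here simply that the balance is never negative.
    return cdi["value"] >= 0

run_tp("alice", "deposit", "account_balance", deposit, account_balance, 50)
print(account_balance["value"], ivp(account_balance))  # 150 True
```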

Page 15

Elements of Clark-Wilson Model

[Diagram: users invoke TPs, which operate on CDIs (CDI 1, CDI 2, CDI 3, and a log CDI); UDIs enter from outside the controlled area; an IVP verifies the CDIs.]

Page 16

Clark-Wilson Model

The three main goals of integrity models:
• Prevent unauthorized users from making modifications.
• Prevent authorized users from making improper modifications (separation of duties).
• Maintain internal and external consistency (well-formed transactions).

The Clark-Wilson model addresses each of these goals; the Biba model addresses only the first goal.

Page 17

Clark-Wilson Model

• Developed by Clark and Wilson in 1987, the model addresses the integrity requirements of applications.

• The Clark-Wilson model enforces the three goals of integrity by using access triples (subject, software TP, and object), separation of duties, and auditing. Integrity is enforced through well-formed transactions (via the access triple) and separation of user duties.

Page 18

Information Flow Model

• The information flow model deals with any kind of information flow; it helps architects and developers make sure their software does not allow information to flow in a way that can put the system or data in danger.

• One way that the information flow model provides protection is by ensuring that covert channels do not exist in the code.

• The Bell-LaPadula model focuses on preventing information from flowing from a high security level to a low security level.

• The Biba model focuses on preventing information from flowing from a low security level to a high security level.
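A lattice of labels is one common way to state such flow rules precisely. The sketch below is a hypothetical Python illustration in which a label is a (level, categories) pair and information may flow only to a label that dominates its source, as in the confidentiality (Bell-LaPadula) direction; the labels are invented for the example.

```python
# Hypothetical sketch of a lattice-based information flow check.
def dominates(b, a):
    # b dominates a when b's level is at least a's and b's categories are a
    # superset of a's (for Python sets, >= means superset).
    b_level, b_cats = b
    a_level, a_cats = a
    return b_level >= a_level and b_cats >= a_cats

def flow_allowed(source, destination):
    # Confidentiality-style rule: information may only flow upward in the
    # lattice, never from a high label to a low one.
    return dominates(destination, source)

secret_nato = (2, {"NATO"})
top_secret_noforn = (3, {"NATO", "NOFORN"})

print(flow_allowed(secret_nato, top_secret_noforn))  # True: flows up
print(flow_allowed(top_secret_noforn, secret_nato))  # False: would leak down
```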

Page 19

Other Models

• The state machine model is an abstract mathematical model that uses state variables to represent the system state. If a state machine fails, it should fail into a secure state.

• Non-interference model ensures that actions at a higher level (domain) cannot interfere with actions at a lower level.

• The Graham-Denning Model defines a set of eight primitive protection rights in terms of commands that a specific subject can execute on an object.

• Brewer and Nash Model (Chinese Wall Model) allows for dynamically changing access controls to protect against conflicts of interest.
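To show what dynamically changing access controls mean in practice, here is a minimal Python sketch of a Brewer-Nash (Chinese Wall) check; the conflict-of-interest classes and company names are illustrative assumptions.

```python
# Hypothetical sketch: a subject who has accessed one company's dataset is
# dynamically denied access to competing datasets in the same
# conflict-of-interest (COI) class.
COI_CLASSES = {"oil": {"OilCo-A", "OilCo-B"}, "banks": {"Bank-X", "Bank-Y"}}
history = {}  # datasets each subject has already accessed

def chinese_wall_access(subject, dataset):
    accessed = history.setdefault(subject, set())
    for members in COI_CLASSES.values():
        # Deny if the subject already touched a *different* dataset in the
        # same conflict-of-interest class as the requested one.
        if dataset in members and accessed & (members - {dataset}):
            return False
    accessed.add(dataset)
    return True

print(chinese_wall_access("alice", "OilCo-A"))  # True
print(chinese_wall_access("alice", "OilCo-B"))  # False: conflict of interest
print(chinese_wall_access("alice", "Bank-X"))   # True: different COI class
```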

Page 20

Security Protection Mechanisms

• Protection mechanisms are used to ensure the separation between objects in a system.

• An active protection mechanism prevents access to an object if the access is not authorized.

• A passive protection mechanism prevents or detects unauthorized use of the information associated with an object, even if access to the object itself is not prevented. In most cases, these mechanisms use cryptography to prevent unauthorized disclosure of information or checksums to detect unauthorized alteration of an object.

Page 21

Protection Mechanisms 1

• Trusted Computing Base (TCB)

• Trusted Computer System

• Abstraction, Encapsulation, and Information Hiding

• Security Perimeter

• Trusted Path

• Labeling and Classification

Page 22

Protection Mechanisms 2

• Centralized backup for desktop systems

• Control of software on desktop systems

• Encryption / File encryption

• Appropriate access controls

• Robust access control and biometrics

• Protection of applications and database

• Protection domain, disks, systems, laptops

Page 23

Protection Mechanisms 3

• Separation of privileged processes and others

• Logging of transactions and transmissions

• Email and download/upload policies

• Security awareness and regular training

• Graphical user interface mechanisms

• Formal security methods in software development, change control, configuration management, and environmental change

• Disaster recovery and business continuity planning for all systems, including desktops, file systems and storage, databases and applications, and data and information

Page 24

Factors for Security Product Selection

• Security
• Cost
• Flexibility
• Environmental
• User interface
• System administration
• Future development of a product
• Process
• Functionality
• Effectiveness
• Assurance

Page 25

Security / System Evaluation

• A security evaluation examines the security-relevant parts of a system, including:
  – Trusted computing base (TCB)
  – Access control mechanisms
  – Reference monitor
  – Kernel
  – Protection mechanisms

• There are different methods of evaluating and assigning assurance levels to systems, as various parts of the world look at computer security differently and rate some aspects of security differently.

Page 26

Security Evaluation Standards

• Trusted Computer System Evaluation Criteria (TCSEC):
  – Published in 1985 by the National Computer Security Center (NCSC).
  – Also known as the Orange Book of the Rainbow Series.
  – Addresses confidentiality.

• The Trusted Network Interpretation (TNI):
  – Published in 1987; known as the Red Book.
  – Addresses networks and telecommunications.

• Information Technology Security Evaluation Criteria (ITSEC):
  – Drafted in 1990 and endorsed by the Council of the European Union in 1995.
  – Includes integrity and availability, as well as confidentiality, as security goals.

• The Common Criteria (CC):
  – Based on the U.S. Federal Criteria, which expanded on the ITSEC.
  – An international standard to evaluate trust.

Page 27

Orange Book - TCSEC

• The U.S. Department of Defense (DoD) developed the Trusted Computer System Evaluation Criteria (TCSEC) to evaluate operating systems, applications, and different products.

• These evaluation criteria are published in a book with an orange cover, which is called, appropriately, the Orange Book.

• The Orange Book is used to review the functionality, effectiveness, and assurance of a product during its evaluation. The evaluation determines whether a product contains the security properties the vendor claims it does and whether the product is appropriate for a specific application or function.

Page 28

Orange Book - TCSEC

Topics

• Security policy.

• Labels (marking of objects).

• Identification of subjects.

• Accountability.

• Life-cycle assurance.

• Documentation.

• Continuous protection.

Classification Systems:

A: Verified Protection

B: Mandatory Protection:
  B1: Labeled security
  B2: Structured protection
  B3: Security domains

C: Discretionary Protection:
  C1: Discretionary security protection
  C2: Controlled access protection

D: Minimal Protection

Page 29

Rainbow Series

• Security practitioners have pointed out deficiencies in the Orange Book:
  – It looks specifically at the operating system and not at other issues like networking and databases.
  – It focuses mainly on one attribute of security, confidentiality, and not on integrity and availability.
  – It works with government classifications and not the protection classifications commercial industries use.
  – It has a relatively small number of ratings; many different aspects of security are not evaluated and rated.

• More books were written to extend the coverage of the Orange Book; they are collectively called the Rainbow Series.

Page 30

The Red Book - TNI

• The Trusted Network Interpretation (TNI), also called the Red Book, addresses security evaluation topics for networks and network components, including local area networks and wide area internetwork systems.

• Like the Orange Book, the Red Book only provides a framework for securing different types of networks; it does not supply specific details on how to implement security mechanisms.

• The Red Book rates confidentiality of data and operations that happen within a network and the network products. Data and labels need to be protected from unauthorized modification, and the integrity of information needs to be ensured. The source and destination mechanisms used for messages are evaluated and tested to ensure modification is not allowed.

Page 31

Common Criteria

• The Orange Book and the Rainbow Series provide evaluation schemes that are too rigid for the business world. ITSEC attempted to provide a more flexible approach by separating functionality and assurance. However, this added much complexity and resulted in too many classifications to keep straight.

• The International Organization for Standardization (ISO) identified the need for an international standard for security evaluation criteria, which resulted in the development of the Common Criteria. The Common Criteria was developed through a collaboration among national security standards organizations in the United States, Canada, France, Germany, the United Kingdom, and the Netherlands.

Page 32

Common Criteria Assurance Levels

• Under the Common Criteria model, an evaluation is carried out on a product and it is assigned an Evaluation Assurance Level (EAL).

• The Common Criteria has seven assurance levels:
  – EAL1: Functionally tested
  – EAL2: Structurally tested
  – EAL3: Methodically tested and checked
  – EAL4: Methodically designed, tested, and reviewed
  – EAL5: Semiformally designed and tested
  – EAL6: Semiformally verified design and tested
  – EAL7: Formally verified design and tested

Page 33

Certification

• Certification is the comprehensive evaluation of the technical and non-technical security features of an information system and the other safeguards, created in support of the accreditation process, to establish the extent to which a particular design and implementation meets the set of specified security requirements.

• Certification is the endorsement that the system/application meets its functional and security requirements. It is the comprehensive technical analysis of the security features and safeguards of a system to establish the extent to which the security requirements are satisfied.

Page 34

Certification

• Certification uses a combination of security evaluation techniques:
  – Risk analysis
  – Validation, verification, and testing
  – Security countermeasure evaluation
  – Auditing

• Certification should consider the following issues:
  – Security modes of operation
  – Specific users and their training
  – System and facility configuration and location
  – Intercommunication with other systems

Page 35

Vulnerabilities of Certification

• Organizations and users cannot count on a certified product being free of security flaws. Because new vulnerabilities are always being discovered, no product is ever completely secure.

• Most software products must still be securely configured for their protection mechanisms to be effective.

• Certifications are not the definitive answer to security. IS security depends on more than technical software protection mechanisms; it also requires personnel and physical security measures.

Page 36

Accreditation

• Accreditation is a formal declaration by a Designated Approving Authority (DAA) that an information system is approved to operate in a particular security mode using a prescribed set of safeguards at an acceptable level of risk.

• Accreditation is the official management decision to operate a system. It is management's formal approval of the adequacy of a system's overall security.

Page 37

Certification and Accreditation

• Accreditation looks at the following items:
  – Particular security mode
  – Prescribed set of countermeasures
  – Defined threats; stated vulnerabilities
  – Given operational concept and environment
  – Stated interconnections to other systems
  – Risk formally accepted
  – Stated period of time

• Certification and accreditation should be an ongoing process. A formal recertification and reaccreditation is required whenever a major change occurs, a major application is added, the security environment changes, or significant technology is upgraded.

Page 38

Examples of Guidelines for Certification

• Defense Information Technology Security Certification and Accreditation Process (DITSCAP). DoD Instruction 5200.40 (December 1997).

• National Information Assurance Certification and Accreditation Process (NIACAP). NSTISSI No. 1000 (April 2000).

• Guide for the Security Certification and Accreditation of Federal Information Systems, NIST SP 800-37. (October 2002)