Transcript of LVISSA CISSP Course Fall 2016 – Domain 6
lvissa.org/mentor_slides/LVISSA CISSP Course Winter 2017...
2/23/2017
2
Design & Validate Assessment & Test Strategies.
Analyze & Report Test Output.
Conduct Security Control Testing.
Collect Security Process Data.
Conduct or Facilitate Internal & 3rd Party Audits.
Assessment & Test Strategy:
Definition: Security Assessments and Tests are exercises, conducted according to a plan, that are designed to assess the operational security of a system.
Goals of Test & Evaluation (T&E):
Why: Provide knowledge to assist in managing risks involved in developing, producing, operating, and sustaining systems & capabilities.
A properly planned & executed test and evaluation strategy can provide:
A. Information about risk & risk mitigation
B. Empirical data to validate models & simulations
C. Evaluate technical performance & system maturity
D. Determine whether systems are operationally effective, suitable & survivable.
Sample CISSP T&E Tasks:
A. Create, evaluate, or recommend T&E Strategy for development or acquisition.
B. Monitor T&E activities and suggest changes, especially during development & ops testing.
C. Determine where T&E strategies fit in the lifecycle to minimize risk most effectively.
D. Communicate any concerns during analysis.
Who? Working Group – T&E Integrated Product Team Consists of:
A. Test & Evaluation SME’s.
B. Customer/User representatives
C. Other stakeholders
Is the plan documented & consistent with acquisition & development objectives as well as partner/vendor/3rd party contracts?
Software Verification: Design & s/w development outputs meet specified input requirements.
During development, s/w & docs are:
Consistent
Complete
Correct
Software Verification is one of several related verification activities: Static & dynamic analyses
Code & document inspections
Walkthroughs
Other
Security Verification: Developers can’t test forever!
Level of acceptable validation, verification & testing varies with safety risk (hazard) posed by the system.
Security Verification: How do we know we have enough?
Confidence intervals: Requirements + user expectations.
Number of defects in specification docs.
Estimate defects remaining.
Testing
Coverage
Other techniques.
S/W validation sub-part of overall System validation.
Testing against documented System user needs & intended uses.
Overall system specifications
s/w specifications traceable to sys specifications.
Hardware Failures
Problems from manufacture, design & development.
Hardware lifespan, MTBF.
Modes of failure…
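The MTBF figure above feeds the standard steady-state availability formula, Availability = MTBF / (MTBF + MTTR). A minimal sketch (the hardware figures are hypothetical, not from the slides):

```python
# Steady-state availability from MTBF/MTTR -- a standard reliability
# formula; the example numbers below are illustrative assumptions.

def availability(mtbf_hours: float, mttr_hours: float) -> float:
    """Fraction of time the component is expected to be operational."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

# A disk with a 50,000-hour MTBF and a 4-hour mean time to repair:
a = availability(50_000, 4)
print(f"{a:.6f}")   # roughly 0.99992
```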
Software Failures:
• Problems from design, development process & meeting specs.
• Software branching = complexity.
• Testing can’t fully verify s/w correct & complete.
• Structured & documented process important!
• No advance warning.
Software Development Process:
• Speed & ease of change ≠ less controlled process.
• Problems not easily detected/found.
• Cost of software patching ≠ zero.
• Documentation is essential:
– Plan, Control & Document to detect and correct unexpected results of software changes.
– Define security & risk requirements.
– Understand re-usable code, libraries & integration!
1. Define Security Control Testing:
A. ____________________________________.
2. What are the goals of Security Control Testing?
A. ____________________________________.
Methods:
Vulnerability Assessment
Penetration Testing (Covered in Domain 1).
Log Reviews
Synthetic Transactions
Code Review & Testing
Negative Testing
Misuse Case Testing
Test Coverage Analysis
Interface Testing
Help identify specific areas of weakness needing to be addressed.
Steps:
1. Discussion with Business Owners/Stakeholders.
2. Examine Existing Controls & match to known threats (automated tools/threat databases).
3. Check for accuracy. (Latest db, patches).
4. Identify Gaps
5. Plan Remediation
6. Discuss findings.
7. Follow-up to verify actions taken to remediate.
Quote: “Don’t spend $1000 to protect $100.”
Vulnerability Matrix:
Score – Definition
Very High – One or more major weaknesses make assets extremely susceptible to a hazard/aggressor.
High – One or more significant weaknesses…
Medium High – An important weakness has been ID’ed.
Medium – A weakness… …fairly susceptible.
Medium Low – …somewhat susceptible.
Low – A minor weakness… …slightly increases.
Very Low – No known weaknesses found.
Definition: SCAP – Security Content Automation Protocol
A standardized format & nomenclature by which s/w flaw & security configuration information is communicated.
The SCAP measurement & scoring systems are Common Vulnerability Scoring System (CVSS), and Common Configuration Scoring System (CCSS).
Definition: Penetration Testing – Exploits existing vulnerabilities to determine the true nature & impact of a given vulnerability.
AKA: Ethical Hacking, White Hat Testing, Red Teaming, Vulnerability Testing.
Simulates an attack to evaluate risk profiles of an environment. Considerations:
Skill Required
Time/Resources Needed
Depth of access & privileges attained, assets accessed.
Rules of Engagement: Blind, Double-Blind, Scope.
Log Reviews: Generating, transmitting, storing, analyzing & disposing of computer security log data.
Key: Capture enough detail, and store it long enough.
Also useful for: Investigations of security incidents, policy violations, fraudulent activity, operational problems; establishing baselines, trends & long-term problems.
Issues: High number of sources, inconsistent logs, formats, timestamps, large volume of data, resources to regularly analyze data.
Policies & Procedures: Log management requirements, goals, policies should be set.
Priorities: Prioritize log data for analysis & storage: High value = mandatory, Low value = time/resource-available basis.
Set Policies & Procedures
Prioritize Log Management
Create and Maintain a Log Management Infrastructure
Provide Proper Support for All Staff with Log Management Responsibilities
Standard Log Mgmt. Operational Processes:
Monitoring logging status of all sources.
Monitoring log rotation & archive process.
Checking for upgrades/patches to logging software – acquire, test & deploy.
Sync time sources.
Reconfigure logging as needed by policy changes, tech changes.
Documenting & reporting anomalies.
Ensure logs are consolidated to SIEM (Security Information and Event Management) systems.
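The prioritization guidance above ("high value = mandatory, low value = time/resource-available basis") amounts to a triage step before analysis. A minimal sketch; the event-type names and record fields are illustrative assumptions, not from any particular SIEM:

```python
# Triage log events into mandatory-analysis vs. best-effort queues.
# The set of "high value" event types is a hypothetical policy choice.

HIGH_VALUE = {"auth_failure", "config_change", "account_created"}

def prioritize(events):
    """Split events into (mandatory, best_effort) per log-management policy."""
    mandatory, best_effort = [], []
    for ev in events:
        (mandatory if ev["type"] in HIGH_VALUE else best_effort).append(ev)
    return mandatory, best_effort

events = [
    {"type": "auth_failure", "src": "10.0.0.5"},   # high value: analyze always
    {"type": "heartbeat",    "src": "10.0.0.9"},   # low value: as time permits
]
m, b = prioritize(events)
print(len(m), len(b))  # 1 1
```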
Network-based & Host-based log sources: Anti-malware & Anti-virus Software
Intrusion Detection & Intrusion Prevention Systems
Remote Access Software
Web Proxies
Vulnerability Management Software
Authentication Servers
Routers
Firewalls
Network Access Control (NAC) / Network Access Protection (NAP) Servers
Network-based & Host-based log sources (continued): System Events: (From host O/S)
Audit Records: (SysAdmin config changes, failed access attempts, new account creation, …
Client Requests / Server Responses
Account Information (Auth. attempts)
Usage Information / Metrics
Operational events (startup/shutdown, failures, app config. changes).
Source: http://raffy.ch/blog (2013)
Real User Monitoring (RUM): Web monitoring aiming to capture & analyze every transaction of every user of a website or app.
AKA: Real-user measurement, real-user metrics, end-user experience monitoring (EUM).
Passive monitoring: relies on web-monitoring services that continuously track availability, performance & functionality.
Generally can drill-down to detail individual user sessions/experience.
Synthetic Transactions: Internal agents run scripted transactions against a web app and measure the results.
Benefits: Predictable (non-burst), non-noisy data, shows site availability and network performance issues more reliably; no users needed; very granular tests.
Website Monitoring: http requests
Database Monitoring: SQL request
TCP Port Monitoring: Network Level
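A synthetic-transaction monitor in miniature, per the description above: run a scripted transaction, time it, and compare against an SLA threshold. The SLA value and the stand-in transaction are hypothetical; a real probe would issue the HTTP, SQL, or TCP request listed above:

```python
# Sketch of a synthetic-transaction probe: time a scripted transaction
# and judge it against an SLA threshold. The threshold is an assumption.
import time

def run_probe(transaction, sla_seconds=2.0):
    """Execute one scripted transaction; report success, timing, SLA status."""
    start = time.monotonic()
    try:
        transaction()
        ok = True
    except Exception:
        ok = False
    elapsed = time.monotonic() - start
    return {"ok": ok, "elapsed": elapsed,
            "within_sla": ok and elapsed <= sla_seconds}

# Stand-in for an HTTP GET, SQL query, or TCP connect:
result = run_probe(lambda: time.sleep(0.01))
print(result["ok"], result["within_sla"])  # True True
```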
Most software security errors are caused by:
Bad programming patterns (missing checks of user inputs).
Misconfiguration of security infrastructures (overly permissive access control or weak crypto).
Functional bugs in security infrastructure (access control that doesn’t restrict access).
Logical flaws in implemented processes (allowing orders without paying).
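The first error class above (missing checks of user input) is usually fixed by validating type, range, and format before use. A minimal sketch with a hypothetical quantity field:

```python
# Explicit input checking -- type/format and range -- before the value
# is used. The field name and bounds are illustrative assumptions.

def parse_quantity(raw: str, lo: int = 1, hi: int = 99) -> int:
    """Reject anything that is not an integer in [lo, hi]."""
    if not raw.isdigit():                # type/format check
        raise ValueError("not a number")
    qty = int(raw)
    if not lo <= qty <= hi:              # range check
        raise ValueError("out of range")
    return qty

print(parse_quantity("5"))               # 5
for bad in ("-1", "1e9", "abc", "0"):
    try:
        parse_quantity(bad)
    except ValueError:
        pass                             # each invalid input is rejected
```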
Synthetic Transaction Advantages:
Monitor app availability 24x7
Remote site reachable?
Performance impact of 3rd party services.
Performance/availability of SaaS, IaaS, PaaS, Cloud infrastructure.
B2B Web Services (SOAP, REST, Web Services)
Monitor critical dbase queries for avail.
Objectively measure SLA’s.
Baseline and analyze performance trends.
Complement RUM during low-usage periods.
Black-Box Testing: No knowledge of system internals.
White-Box Testing: Access to internals / source code.
Dynamic Testing: SUT is executed.
Static Testing: SUT is not executed.
Manual Testing: Guided by a human.
Automated Testing: Executed by script / specialized app.
Unit Testing: Starting/initial testing of one unit.
System-Level Testing: Concluding steps / testing the whole system.
Criteria for selecting a security testing method or tool:
Attack Surface – Finds different vulnerabilities.
Application Type – Different behaviors with different apps.
Quality of Results & Usability – Usability and quality vary.
Supported Technology – Tests can be specific to app or technology types.
Performance & Resource Utilization – Different tools require different computing resources and overhead.
Application Security Testing
Application security testing (AST) products and services are designed to analyze and test applications for security vulnerabilities using static AST (SAST), dynamic AST (DAST) and interactive AST (IAST) technologies.
SAST technology analyzes application source, byte or binary code for security vulnerabilities at the programming and/or testing software life cycle (SLC) phases (see "Hype Cycle for Application Security, 2013").
DAST technology analyzes applications in their running state (in real or "almost" real life) during operation or testing phases. It simulates attacks against a Web application, analyzes application reactions and, thus, determines whether it is vulnerable.
Gartner, 2014
IAST technology combines the strengths of SAST and DAST. It is typically implemented as an agent within the test runtime environment (for example, Java Virtual Machine [JVM] or .NET CLR) that observes possible attacks and is capable of demonstrating a sequence of instructions that leads to an exploit (see "Evolution of Application Security Testing: From Silos to Correlation and Interaction").
AST technology can be delivered as a tool or a cloud service.
AST has been introduced for analysis of Web applications and some legacy applications. AST has also evolved to analyze mobile applications.
Gartner, 2014
During Planning & Design: Architecture Security Reviews
Threat Modeling: Precompiled security threats based on the business model.
During Application Development: Static Source Code Analysis (SAST) & Manual Code Review
Static Binary Code Analysis and Manual Binary Review
Executable in a Test Environment: Manual or Automated Penetration Testing
Automated Vulnerability Scanners
Fuzz Testing Tools
Dynamic Application Security Testing (DAST) [Test + Ops]
System Operation & Maintenance
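Fuzz testing, listed above for the test-environment stage, can be sketched in a few lines: feed many random inputs to the target and flag any exception outside its documented failure mode. The toy parser here is a hypothetical stand-in for real code under test:

```python
# Minimal fuzzer: random inputs against a parser; only ValueError is a
# documented failure mode, anything else counts as a finding.
import random

def target(data: bytes) -> int:
    # Toy length-prefixed parser (illustrative stand-in, not real code).
    if len(data) < 1:
        raise ValueError("empty")
    n = data[0]
    if len(data) - 1 < n:
        raise ValueError("truncated")
    return n

random.seed(0)
crashes = 0
for _ in range(1000):
    blob = bytes(random.randrange(256) for _ in range(random.randrange(8)))
    try:
        target(blob)
    except ValueError:
        pass            # expected, documented failure mode
    except Exception:
        crashes += 1    # unexpected exception = a fuzz finding
print(crashes)  # 0
```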
Software Testing Tenets:
Full testing ≠ security guarantee.
Expected test outcome is predefined.
A good test case = high probability of finding error(s).
A successful test = one that finds an error.
Independence from the coding process.
Both app (user) and s/w (programming) expertise used.
Testers use different tools than coders.
Examining only the usual case is insufficient.
Test documentation permits reuse and independent confirmation of pass/fail status of tests during subsequent review.
Code Review: Structural Coverage Metrics
Statement Coverage
Decision (Branch) Coverage
Condition Coverage
Multi-Condition Coverage
Loop Coverage
Path Coverage
Data Flow Coverage
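The difference between the first two metrics above (statement vs. decision/branch coverage) can be shown with one small function: a single test case may execute every statement while still leaving one branch untaken:

```python
# Statement vs. branch coverage, illustrated on a hypothetical function.
# With x=5 alone, every statement in f runs (100% statement coverage),
# but the False branch of the `if` is never taken (50% branch coverage).

def f(x: int) -> int:
    y = 0
    if x > 0:
        y = x * 2
    return y

assert f(5) == 10    # covers all statements; only the True branch
assert f(-3) == 0    # second case needed to cover the untaken branch
```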
Functional S/W Testing (Easy to Hard)
Normal Case: Testing with usual inputs.
Output Forcing: All software outputs are generated by testing.
Robustness: Demonstrate correct behavior when given unexpected inputs. Equivalence Class Partitioning, Boundary Value Analysis, Special Case Identification (Error Guessing).
Combination of Inputs: Multiple inputs
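Boundary value analysis, one of the robustness techniques above, tests just inside and just outside each limit. A sketch for a hypothetical field accepting 1–100:

```python
# Boundary value analysis for a hypothetical validator accepting 1..100:
# test each boundary, one step inside it, and one step outside it.

def accepts(n: int) -> bool:
    return 1 <= n <= 100

boundary_cases = {0: False, 1: True, 2: True, 99: True, 100: True, 101: False}
for value, expected in boundary_cases.items():
    assert accepts(value) is expected
print("all boundary cases pass")
```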
System-Level Software Testing:
Security & privacy performance
Performance issues
Responses to stress conditions
Operation of internal & external security features
Recovery procedures / disaster recovery
Usability (users, admins, others)
Compatibility w/ other software
Behavior in defined h/w configs
Accuracy of documentation
DOCUMENT ALL TESTING!!!
Maintenance Tasks:
Software Validation Plan Revision
Anomaly Evaluation
Problem Identification & Resolution Tracking
Proposed Change Assessment
Task Iteration
Documentation Updating
Good Practice combines Positive & Negative Testing.
Positive Testing (+): Determines the app works as expected. If an error occurs, the test fails.
Negative Testing (−): App handles invalid input and unexpected user behaviors, in order to avoid crashes, find weak points, and improve quality. Exceptions/error messages are expected.
Negative test targets:
Populating required fields
Correspondence between data & field types
Allowed number of characters
Allowed data bounds and limits
Reasonable data
Web session testing (access w/o login)
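Positive and negative tests for one hypothetical input (a username field covering the required-field, character, and length checks from the list above) might look like:

```python
# Positive test expects success; negative tests expect a rejection.
# The username rules (3-12 alphanumeric characters) are illustrative.

def set_username(name: str) -> str:
    if not (3 <= len(name) <= 12):
        raise ValueError("length")
    if not name.isalnum():
        raise ValueError("characters")
    return name

assert set_username("alice") == "alice"        # positive: works as expected

for bad in ("", "ab", "x" * 13, "bad name!"):  # negative: errors expected
    try:
        set_username(bad)
        raise AssertionError("invalid input accepted")
    except ValueError:
        pass
print("ok")
```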
Integration Testing: Testing different components of a system against requirements.
INTERFACE Testing: Checks components are in sync with each other, pass data & control between each other properly. Usually performed by both testers & developers.
Used to check & verify:
All interactions between apps & servers are executed properly.
Errors are handled properly.
What happens if a user interrupts a process?
What if the connection to the web server is reset?
Server Interfaces:
Web Server ↔ App Server
App Server ↔ Database Server
Database Server ↔ Storage Server…
External Interfaces:
Supported browsers?
Error conditions to external interfaces when server or app is unavailable?
Internal Interfaces: Plug-Ins?
Document types? (e.g. PDF vs. MS Word)
Downloading Errors?
User Copy/Paste Issues?
Submitting un-encrypted form data?
Crash re-start efficient/reliable?
If user leaves site mid-task?
If network connectivity disappears?
Browser crashes?
Intelligent Error Handling?
User Interface: Part of QA
User reaction (tester only observes)
Overall acceptance
User experience
Rate of user errors
User application
User documentation
CWE/SANS Top 25 Most Dangerous Software Errors
Three categories:
A. Insecure interaction between components
B. Risky resource management
C. Porous defenses
Easy to find; Easy to exploit!!!
Rank | CWE ID | Name
1 89 Improper Neutralization of Special Elements Used in an SQL Command (‘SQL Injection’)
2 78 Improper Neutralization of Special Elements Used in an OS Command (‘OS Command Injection’)
4 79 Improper Neutralization of Input during Webpage Generation (‘Cross-site Scripting’).
9 434 Unrestricted Upload of File with Dangerous Type
12 352 Cross-Site Request Forgery (CSRF)
22 601 URL Redirection to Untrusted Site (‘Open Redirect’).
Insecure Interaction Between Components
Rank CWE Name
3 120 Buffer Copy w/o Checking Size of Input (‘Classic Buffer Overflow’)
13 22 Improper Limitation of a Pathname to a Restricted Directory (‘Path Traversal’)
14 494 Download Code w/o Integrity Check
16 829 Inclusion of functionality from Untrusted Control Sphere
18 676 Use of Potentially Dangerous Function
20 131 Incorrect Calculation of Buffer Size
23 134 Uncontrolled Format String
24 190 Integer Overflow or Wraparound
Risky Resource Management
Rank CWE Name
5 306 Missing Authentication for Critical Function
6 862 Missing Authorization
7 798 Use of Hard-coded Credentials
8 311 Missing Encryption of Sensitive Data
10 807 Reliance on Untrusted Inputs in a Security Decision
11 250 Execution with Unnecessary Privileges
15 863 Incorrect Authorization
17 732 Incorrect Permission Assignment for Critical Resource
19 327 Use of Broken or Risky Cryptographic Algorithm
21 307 Improper Restriction of Excessive Authentication Attempts
25 759 Use of a One-Way Hash without a Salt
Porous Defenses
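CWE-759 from the Porous Defenses table (use of a one-way hash without a salt) and its standard fix, a per-user random salt with a slow key-derivation function, can be sketched as follows (the iteration count is illustrative):

```python
# Salted password hashing with PBKDF2 -- the standard fix for CWE-759.
# 100,000 iterations is an illustrative figure, not a recommendation.
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple:
    """Return (salt, digest) with a fresh random per-user salt."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)   # constant-time compare

salt, digest = hash_password("correct horse")
print(verify("correct horse", salt, digest))  # True
print(verify("wrong guess", salt, digest))    # False
```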
CSC ID# | Description | Category
6-1: App still supported by vendor? Upgrade + patch + config. — Quick Win
6-2: Web Application Firewall – polices traffic to server; blocks cross-site scripting, app attacks, SQL injection, etc. Other app firewalls. — Quick Win
6-3: In-house s/w: Explicit error checking for all input (size, data type, range, format…). — Visibility/Attribution
6-4: Test web apps w/ automated scanners for common weaknesses, incl. DoS attacks, resource exhaustion. — Visibility/Attribution
6-5: Output sanitization: no internal error messages to outside users. — Visibility/Attribution
6-6: Separate production & non-production. No unmonitored dev access to production systems. — Visibility/Attribution
6: Application Software Security
SQL Injection can appear in surprising places!
CSC ID# | Description | Category
6-7: In-house s/w: Test for coding errors w/ automated static code analysis + manual testing & inspection (focus: input validation, output encoding). — Configuration/Hygiene
6-8: 3rd-party s/w: Examine the product security process (history of vulnerabilities, customer notices, patching). — Configuration/Hygiene
6-9: Dbase apps: Standard hardening config templates. Test app parts involving mission-critical processes. — Configuration/Hygiene
6-10: Train developers in secure programming for their specific development environment. — Configuration/Hygiene
6-11: In-house: Remove development artifacts (sample data/scripts, unused libraries, components, debug code or tools) not relevant to production. — Configuration/Hygiene
6: Application Software Security (…continued)
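The standard defense for the SQL-injection items above (and a complement to CSC 6-3's input checking) is the parameterized query; a sketch using sqlite3 as a stand-in database:

```python
# Parameterized query vs. string concatenation: the classic injection
# payload is treated as data, not SQL. The schema is illustrative.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "alice' OR '1'='1"   # classic injection payload

# A concatenated query ("... WHERE name = '" + user_input + "'") would
# match every row. The parameterized form binds the payload as a value:
rows = conn.execute(
    "SELECT role FROM users WHERE name = ?", (user_input,)
).fetchall()
print(rows)  # [] -- the payload matches no user
```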
Further References:
https://buildsecurityin.us-cert.gov/
https://www.owasp.org/
Account Management
Management Review
Key Performance & Risk Indicators
Backup Verification Data
Training & Awareness
Disaster Recovery & Business Continuity
ISCM: Information Security Continuous Monitoring – Established to collect information in accordance with pre-established metrics, utilizing info readily available in part through implemented security controls.
maintaining ongoing awareness of: Information security Vulnerabilities Threats…to support organizational risk management decisions.
NIST SP 800-137: Base requirements for continuous monitoring.
NIST SP 800-53: Automated inspection items in connection with a CA-2 (security assessment), CA-4 (security certification) and CA-7 (continuous monitoring and vulnerability detection) continuous monitoring program.
The prescribed frequency of monitoring is daily, but it may sometimes be hourly. An example of an automated inspection item would be automated determination of the integrity of system and application files and directories.
NERC/FERC CIP: NERC/FERC CIP-005-1-R1.6 states that “an electronic Security Perimeter should be established that provides . . . Monitor and Log Access 24X7X365.” In other words, organizations must continuously monitor network and log access.
NIST SP 800-53:
http://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-53r4.pdf
NIST SP 800-137:
http://csrc.nist.gov/publications/nistpubs/800-137/SP800-137-Final.pdf
ISO/IEC 27001: ISO 27001 provides a description of an information security management system that calls for continual process improvement in information security. To accomplish this goal, an organization must continuously monitor its own security-related processes and improve according to feedback from objective measurements.
FISMA/FISMA 2: FISMA and FISMA 2 require continuous monitoring activities that include configuration management and control of information system components, security impact analyses of changes to the system, ongoing assessment of security controls, and status reporting.
NIST Special Publication (SP) 800-137
ISCM (InfoSec Cont. Monitoring) Strategy Begins with senior leadership
Encompasses: Technology
Processes
Procedures
Operating Environments
People.
Understands risk tolerance; helps set priorities & manage risk consistently.
Includes metrics indicating security status at all org tiers.
Ensures continued effectiveness of security controls. Verifies compliance with InfoSec requirements (org missions, bus. functions, legislation, directives, regulations, policies, standards & guidelines).
Covers all IT assets & maintains visibility into security of assets.
Knowledge & control of changes to systems & environments.
Maintains awareness of threats & vulnerabilities.
Process for developing an ISCM strategy & implementing a program:
1. Define strategy based on risk tolerance, maintaining clear visibility into assets, vulnerabilities, current threats & impacts.
2. Establish program: determine metrics, monitoring frequency, control assessment frequency & ISCM technical architecture.
3. Collect info: metrics, assessments & reporting. Automate when possible.
4. Analyze data & report findings; determine responses: accept, transfer, or avoid.
5. Review & update the ISCM program and adjust.
Items to consider:
Management Review
Key Performance & Risk Indicators
Backup Verification Data
Training & Awareness
Disaster Recovery & Business Continuity
Examples of ISCM metrics:
Number and severity of vulnerabilities revealed & remediated.
Number of unauthorized access attempts.
Configuration baseline information.
Contingency plan testing dates & results.
Number of employees current on awareness training requirements.
Risk tolerance thresholds for organizations.
Risk score associated with a given system configuration.
NIST 800-137: “Take the following into consideration”:
1. Security control volatility
2. System categorization / impact levels
3. Security controls or specific assessment objects providing critical functions
4. Security controls with identified weaknesses
5. Organizational risk tolerance
6. Threat information
7. Vulnerability information
8. Risk assessment results
9. Reporting requirements
Conditions that prompt updating the ISCM strategy:
1. Changes to core missions or business processes
2. Significant changes in the enterprise architecture (including +/− of systems)
3. Changes in org. risk tolerance
4. Changes in threat information
5. Changes in vulnerability information
6. Changes within info systems (incl. changes in categorization/impact level)
7. Trend analysis of status reporting output
8. New laws or regulations
9. Changes to reporting requirements
How is analysis conducted?
What is the audience?
What action(s) do report(s) drive?
Follow-Ups?
Types of Test Output:
Automated – Generated via an automated process (example: dashboard).
Manual – Generated via human interaction (example: monthly executive report/briefing).
Some regulations require compliance audits…
Compliance ≠ Security !!!
Task: Ensure proper scoping & tailoring to get appropriate controls at correct level for target system.
Think about: Outsourcing, non-finance issues…
Other Regulatory Standards: PCI: Payment Card Industry; HIPAA: Health Insurance Portability and Accountability Act; Other?
Regulations & Terms:
FISMA: Federal Information Security Management Act –Agencies must self-audit & have an independent auditor review their infosec implementation at least annually.
ICOFR: Internal Control Over Financial Reporting.
SAS: Statement on Auditing Standards (SAS) 70 focused on risks related to financial reporting.
ISO 27001: Published by the International Organization for Standardization (security framework).
Regulations / Terms:
AICPA: American Institute of Certified Public Accountants
CICA: Canadian Institute of Chartered Accountants
TSPC: Trust Services Principles & Criteria– A set of specific AICPA requirements to provide assurance above ICOFR.
SOC: Service Organization Control – Audit reports replacing SAS 70, to address users of outsourced services, system availability, and security.
SOC Reporting Options
SAS 70: assist service org’s users & auditors with financial statement audit. (retired 2011).
Now 3 types of SOC reports address larger user needs: security, privacy, availability & assurance over control environments.
SOC Report Types
Type 1: Point-in-time “snapshot” (e.g. 10:03 a.m. PDT, May 6, 2015), covering design.
Type 2: Period-of-time “continuous” (e.g. Jan–Dec) report covering design & operating effectiveness.
Default time period = 12 months.
Service Organization Control Reports:
SOC 1 (aka SSAE 16, AT801, ISAE 3402 report) – Summary: Detailed report for users & their auditors. ICOFR perspective only; no disaster recovery, no privacy. Applicability: Focused on financial reporting risks & controls specified by the service provider, especially when the SP performs financial transaction processing or supports transaction-processing systems.
SOC 2 – Summary: Detailed report for users, their auditors, & specified parties.
SOC 3 (aka SysTrust, WebTrust, Trust Services report) – Summary: Short report that can be more generally distributed, with the option of using a website seal.
SOC 2 & SOC 3 – Applicability: Modular reports focused on: Security, Availability, Confidentiality, Processing Integrity, Privacy.
1. Period of time report covering Security & Availability for a system?
A. SOC ______ Type ______ Report.
2. Point in time report covering ICOFR financial controls for a particular system?
A. SOC ______ Type ______ Report.
Answers: 1. SOC 2, Type 2. 2. SOC 1, Type 1.
Security:
IT security policy
Security awareness & communication
Risk assessment
Logical access
Physical access
Security monitoring
User authentication
Incident management
Asset classification & management
Systems development & maintenance
Personnel security
Configuration management
Change management
Monitoring & compliance
Availability:
Availability policy
Backup and restoration
Environmental controls
Disaster recovery
Business continuity management
Confidentiality:
Confidentiality policy
Confidentiality of inputs
Confidentiality of data processing
Confidentiality of outputs
Information disclosures (including 3rd parties)
Confidentiality of information in systems development
Processing Integrity:
System processing integrity policies
Completeness, accuracy, timeliness & authorization of: inputs, system processing, outputs
Information tracing from source to disposition
Privacy:
Management
Notice
Choice and consent
Collection
Use and retention
Access
Disclosure to 3rd parties
Quality
Monitoring & enforcement
Examples:
Cloud-based ERP:
SAS 70 Report (200x – 2011)
SOC 1 (2012 onward)
SOC 2 or SOC 3 for Security & Availability.
Data Center:
SAS 70 Report (200x – 2011) – was limited to physical + environmental security controls.
SOC 2 (2012 onward) covering environmental security & availability criteria.
Audit Preparation Phase:
Define audit scope & overall project timeline.
ID existing/required controls through discussions w/ mgmt. & review of available docs.
Perform readiness review to ID gaps requiring mgmt. attention.
Communicate prioritized recommendations addressing ID’ed gaps.
Hold working sessions to discuss alternatives & remediation plans.
Verify gaps have been closed before beginning formal audit.
Determine most effective audit & reporting approach to address service provider’s external requirements.
Formal Audit Phase:
Provide overall project plan.
Complete advance data collection before on-site work to accelerate the process.
Conduct on-site meetings & testing.
Complete off-site analysis of collected info.
Conduct weekly reporting of project status & issues.
Provide draft report for mgmt. review & electronic + hard copies of final report.
Provide internal report for mgmt. containing overall observations & recommendations to consider.
SAS 70 reports:
Old, Financial Focus
Not covering: security, availability & privacy.
Low knowledge of ISO 27001, WebTrust, SysTrust frameworks.
SOC 1: Financial service providers (payroll, transaction processing, asset management) moved to SOC 1 in 2011.
SOC 2: IT service providers not involved with finance.
SOC 3: Communicating assurance to broad userbase without disclosing details meant for others.
Design & Validate Assessment & Test Strategies.
Analyze & Report Test Output.
Conduct Security Control Testing.
Collect Security Process Data.
Conduct or Facilitate Internal & 3rd Party Audits.
1. Real User Monitoring (RUM) is an approach to Web monitoring that?
A. Aims to capture and analyze select transactions of every user of a website or application.
B. Aims to capture and analyze every transaction of every user of a website or application.
C. Aims to capture and analyze every transaction of select users of a website or application.
D. Aims to capture and analyze select transactions of select users of a website or application.
2. Synthetic performance monitoring, sometimes called proactive monitoring, involves?
A. Having external agents run scripted transactions against a web application.
B. Having internal agents run scripted transactions against a web application.
C. Having external agents run batch jobs against a web application.
D. Having internal agents run batch jobs against a web application.
3. Most security vulnerabilities are caused by which of the following? (Choose ALL that apply)
A. Bad programming patterns
B. Misconfiguration of security infrastructures
C. Functional bugs in security infrastructures
D. Design flaws in the documented processes
4. When selecting a security testing method or tool, the security practitioner needs to consider many different things, such as:
A. Culture of the organization and likelihood of exposure
B. Local annual frequency estimate (LAFE), and standard annual frequency estimate (SAFE)
C. Security roles and responsibilities for staff
D. Attack surface and supported technologies
5. In development stages where an application is not yet mature enough to be placed into a test environment, which of the following techniques are applicable? (Choose ALL that apply)
A. Static Source Code Analysis and Manual Code Review
B. Dynamic Source Code Analysis and Automatic Code Review
C. Static Binary Code Analysis and Manual Binary Review
D. Dynamic Binary Code Analysis and Static Binary Review
6. Software testing tenets include: (Choose Two)
A. Testers and coders use the same tools
B. There is independence from coding
C. The expected test outcome is unknown
D. A successful test is one that finds an error
7. Common structural coverage metrics include: (Choose ALL that apply)
A. Statement Coverage
B. Path Coverage
C. Asset Coverage
D. Dynamic Coverage
8. What are the two main testing strategies in software testing?
A. Positive and Dynamic
B. Static and Negative
C. Known and Recursive
D. Negative and Positive
9. What is the reason that an Information Security Continuous Monitoring (ISCM) program is established?
A. To monitor information in accordance with dynamic metrics, utilizing information readily available in part through implemented security controls
B. To collect information in accordance with pre-established metrics, utilizing information readily available in part through implemented security controls
C. To collect information in accordance with pre-established metrics, utilizing information readily available in part through planned security controls
D. To analyze information in accordance with test metrics, utilizing information readily available in part through implemented security controls.
10. The process for developing an ISCM strategy and implementing an ISCM program is?
A. Define, analyze, implement, establish, respond, review and update
B. Analyze, implement, define, establish, respond, review and update
C. Define, establish, implement, analyze, respond, review and update
D. Implement, define, establish, analyze, respond, review and update
11. The NIST document that discusses the Information Security Continuous Monitoring (ISCM) program is?
A. NIST SP 800-121
B. NIST SP 800-65
C. NIST SP 800-53
D. NIST SP 800-137
12. A Service Organization Control (SOC) Report commonly covers a
A. 6 month period
B. 12 month period
C. 18 month period
D. 9 month period