Posted on 20-May-2015
Copyright McCabe & Associates 1999
Management Overview
9861 Broken Land Parkway, Fourth Floor
Columbia, Maryland 21046
800-638-6316
www.mccabe.com
support@mccabe.com
1-800-634-0150
Agenda
McCabe IQ Overview
Software Measurement Issues
McCabe Concepts
Software Quality Metrics
Software Testing
Questions and Answers
About McCabe & Associates
20 Years of Expertise
Global Presence
Analyzed Over 25 Billion Lines of Code
McCabe IQ process flow
[Diagram: on the target platform, instrumented source code is compiled and run to produce an execution log; McCabe IQ on the analysis platform consumes the source code and the execution log to support effective testing and quality management.]
McCabe IQ and Configuration Management
• Merant PVCS
• Rational ClearCase
• CA Endevor
[Diagram: the test environment feeds an execution log into McCabe IQ, which supports effective testing and quality management.]
• Monitor quality as software changes
• Manage test environment
McCabe IQ and Test Automation
[Diagram: McCabe IQ works with Mercury Interactive TestDirector and WinRunner. Source code and the test executable produce an execution log that McCabe IQ uses for risk management, test management, and GUI and non-GUI test automation.]
• Risk-driven test management
• Effective, automated testing
McCabe IQ Components
McCabe IQ Framework (metrics, data, visualization, testing, API)
TESTING: McCabe Test, McCabe TestCompress, McCabe Slice, McCabe ReTest
QUALITY ASSURANCE: McCabe QA, McCabe Data, McCabe Compare, McCabe Change
Source Code Parsing Technology (C, C++, Java, Visual Basic, COBOL, Fortran, Ada)
McCabe QA
McCabe QA measures software quality with industry-standard metrics
– Manage technical risk factors as software is developed and changed
– Improve software quality using detailed reports and visualization
– Shorten the time between releases
– Develop contingency plans to address unavoidable risks
McCabe Data
McCabe Data pinpoints the impact of data variable modifications
– Identify usage of key data elements and data types
– Relate data variable changes to impacted logic
– Focus testing resources on the usage of selected data
McCabe Compare
McCabe Compare identifies reusable and redundant code
– Simplify maintenance and re-engineering of applications through the consolidation of similar code modules
– Search for software defects in similar code modules, to make sure they’re fixed consistently throughout the software
McCabe Change
McCabe Change identifies new and changed modules
– Manage change with more precision than the file-level information from CM tools
– Work with a complete technical risk profile: Complex? Poorly tested? New or changed?
– Focus review and test efforts
McCabe Test
McCabe Test maximizes testing effectiveness
– Focus testing on high-risk areas
– Objectively measure testing effectiveness
– Increase the failure detection rate during internal testing
– Assess the time and resources needed to ensure a well-tested application
– Know when to stop testing
McCabe Slice
McCabe Slice traces functionality to implementation
– Identifies code that implements specific functional transactions
– Isolates code that is unique to the implementation of specific functional transactions
– Helps extract business rules for application redesign
McCabe IQ Components Summary
McCabe QA: Improve quality with metrics
McCabe Data: Analyze data impact
McCabe Compare: Eliminate duplicate code
McCabe Change: Focus on changed software
McCabe Test: Increase test effectiveness
McCabe TestCompress: Increase test efficiency
McCabe Slice: Trace functionality to code
McCabe ReTest: Automate regression testing
Software Measurement Issues
Risk management
Software metrics
Complexity metrics
Complexity metric evaluation
Benefits of complexity measurement
Software Risk Management
Software risk falls into two major categories
– Non-technical risk: how important is the system? Usually known early.
– Technical risk: how likely is the system to fail? Often known too late.
Complexity analysis quantifies technical risk
– Helps quantify reliability and maintainability, which helps with prioritization, resource allocation, contingency planning, etc.
– Guides testing: focuses effort to mitigate the greatest risks and helps deploy testing resources efficiently
Software Metrics Overview
Metrics are quantitative measures
– Operational: cost, failure rate, change effort, …
– Intrinsic: size, complexity, …
Most operational metrics are known too late
– Cost and failure rate are only known after deployment, so they aren't suitable for risk management
Complexity metrics are available immediately
– Complexity is calculated from source code
Complexity predicts operational metrics
– Complexity correlates with defects, maintenance costs, ...
Complexity Metric Evaluation
Good complexity metrics have three properties
– Descriptive: objectively measure something
– Predictive: correlate with something interesting
– Prescriptive: guide risk reduction
Consider lines of code
– Descriptive: yes, measures software size
– Predictive, prescriptive: no
Consider cyclomatic complexity
– Descriptive: yes, measures decision logic
– Predictive: yes, predicts errors and maintenance
– Prescriptive: yes, guides testing and improvement
Benefits of Complexity Measurement
Complexity metrics are available from code
– They can even be estimated from a design
They provide continuous feedback
– They can identify high-risk software as soon as it is written or changed
They pinpoint areas of potential instability
– They can focus resources for reviews, testing, and code improvement
They help predict eventual operational metrics
– Systems with similar complexity metric profiles tend to have similar test effort, cost, error frequency, ...
McCabe Concepts
Definition: In C and C++, a module is a function or subroutine with a single entry point and a single exit point. A module is represented by a rectangular box on the Battlemap.
[Battlemap example: main calls function a, function c, function d, and printf. Shading distinguishes difficult-to-maintain modules, difficult-to-test modules, well-designed testable modules, and library modules.]
Analyzing a Module
For each module, an annotated source listing and flowgraph are generated.
Flowgraph: an architectural diagram of a software module's logic.

Stmt
Number  Code
1       main()
2       {
3           printf("example");
4           if (y > 10)
5               b();
6           else
7               c();
8           printf("end");
9       }
[Flowgraph of main: nodes 1-3, 4, 5, 7, and 8-9. A node is a statement or block of sequential statements; an edge is the flow of control between nodes. Node 4 is the condition, nodes 5 and 7 are its branches, and node 8-9 is the end of the condition. The Battlemap shows main calling b, c, and printf.]
Flowgraph Notation (C)
[Flowgraphs for common C constructs:]
if (i) ;
if (i) ; else ;
if (i || j) ;
if (i && j) ;
do ; while (i);
while (i) ;
switch(i) { case 0: break; ... }
Flowgraph and Its Annotated Source Listing
Module: marketing

Annotated Source Listing
Program : corp4                                    09/23/99
File    : ..\code\corp4.i
Language: instc_npp

Module   Module                              Start   Num of
Letter   Name        v(G)  ev(G)  iv(G)      Line    Lines
------   ---------   ----  -----  -----      -----   ------
B        marketing      2      1      2         16       10

16  B0           marketing()
17               {
18               int purchase;
19
20  B1* B2       purchase = query("Is this a purchase");
21  B3           if ( purchase == 1 )
22  B4* B5       development();
23               else
24  B6* B7 B8    support();
25  B9           }

[Flowgraph nodes 0 through 9, with nodes 1*, 4*, and 6* marked. Annotations call out the origin information, node correspondence, metric information, and the decision construct.]
Low Complexity Software
Reliable
– Simple logic (low cyclomatic complexity)
– Not error-prone
– Easy to test
Maintainable
– Good structure (low essential complexity)
– Easy to understand
– Easy to modify
Moderately Complex Software
Unreliable
– Complicated logic (high cyclomatic complexity)
– Error-prone
– Hard to test
Maintainable
– Can be understood
– Can be modified
– Can be improved
Highly Complex Software
Unreliable
– Error-prone
– Very hard to test
Unmaintainable
– Poor structure (high essential complexity)
– Hard to understand
– Hard to modify
– Hard to improve
Would you buy a used car from this software?
Problem: There are size and complexity boundaries beyond which software becomes hopeless
– Too error-prone to use
– Too complex to fix
– Too large to redevelop
Solution: Control complexity during development and maintenance
– Stay away from the boundary
Important Complexity Measures
Cyclomatic complexity, v(G)
– Amount of decision logic
Essential complexity, ev(G)
– Amount of poorly-structured logic
Module design complexity, iv(G)
– Amount of logic involved with subroutine calls
Data complexity, sdv
– Amount of logic involved with selected data references
Cyclomatic Complexity
The most famous complexity metric
Measures amount of decision logic
Identifies unreliable, hard-to-test software
A related test thoroughness metric, actual complexity, measures testing progress
Cyclomatic Complexity
Cyclomatic complexity, v: a measure of the decision logic of a software module.
– Applies to decision logic embedded within written code.
– Is derived from predicates in decision logic.
– Is calculated for each module in the Battlemap.
– Grows from 1 to a high, finite number based on the amount of decision logic.
– Is correlated to software quality and testing quantity; units with higher v (v > 10) are less reliable and require high levels of testing.
Cyclomatic Complexity
[Example flowgraph with 15 nodes and 24 edges, showing three equivalent ways to compute v:]
Region method: counting the regions R1 through R11 of the planar flowgraph (beware of crossing lines) gives regions = 11.
Edge and node method: e = 24, n = 15, so v = e - n + 2 = 24 - 15 + 2 = 11.
Predicate method: summing each decision's contribution (2 + 1 + 1 + 2 + 1 + 1 + 1 + 1 = 10) gives v = 10 + 1 = 11.
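The edge/node and predicate methods can be checked against each other mechanically. A minimal sketch in Python, using the node and edge sets of the earlier main() flowgraph (the exact lists are an assumption read off the slide):

```python
# Cyclomatic complexity of the main() flowgraph, computed two ways.
nodes = ["1-3", "4", "5", "7", "8-9"]                  # statement blocks
edges = [("1-3", "4"), ("4", "5"), ("4", "7"),
         ("5", "8-9"), ("7", "8-9")]                   # flow of control

v_edge_node = len(edges) - len(nodes) + 2              # v = e - n + 2
v_predicate = 1 + 1                                    # one if-predicate, plus 1

print(v_edge_node, v_predicate)                        # both give 2
```

Both methods agree: a single if/else decision yields v = 2.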
Vital Signs and High v's
[Chart: risks of increasing v over time, as v climbs past 5, 10, 15, 20:]
• Higher risk of failures
• Difficult to understand
• Unpredictable expected results
• Complicated test environments, including more test drivers
• Knowledge transfer constraints to new staff
Essential Complexity
Measures amount of poorly-structured logic
– Remove all well-structured logic, then take the cyclomatic complexity of what's left
Identifies unmaintainable software
The pathological complexity metric is similar
– Identifies extremely unmaintainable software
Essential Complexity
Essential complexity, ev: a measure of the "structuredness" of the decision logic of a software module.
– Applies to decision logic embedded within written code.
– Is calculated for each module in the Battlemap.
– Grows from 1 to v based on the amount of "unstructured" decision logic.
– Is associated with the ability to modularize complex modules.
– If ev increases, the coder is not using structured programming constructs.
Essential Complexity - Unstructured Logic
[Flowgraphs of the four unstructured constructs:]
Branching out of a loop
Branching into a loop
Branching into a decision
Branching out of a decision
Essential Complexity - Flowgraph Reduction
Essential complexity, ev, is calculated by reducing the module flowgraph. Reduction is completed by removing decisions that conform to single-entry, single-exit constructs.
[Example: a flowgraph with cyclomatic complexity = 4 reduces to essential complexity = 1.]
Essential Complexity
Flowgraph and reduced flowgraph after structured constructs have been removed, revealing decisions that are unstructured.
[Original flowgraph: v = 5. Reduced flowgraph: v = 3. Therefore ev of the original flowgraph = 3. A superimposed essential flowgraph highlights the unstructured logic.]
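The reduction can be mimicked numerically. A hedged sketch (the decision list and its structured/unstructured flags are assumptions chosen to match the slide's v = 5, ev = 3 example; the real tool reduces the flowgraph itself):

```python
# Each decision predicate contributes to v; only decisions that cannot be
# reduced to single-entry, single-exit constructs survive into ev.
decisions = [
    {"contrib": 1, "structured": True},    # well-formed if/else, removed
    {"contrib": 1, "structured": True},    # well-formed loop, removed
    {"contrib": 1, "structured": False},   # e.g. a branch into a decision
    {"contrib": 1, "structured": False},   # e.g. a branch out of a loop
]
v = 1 + sum(d["contrib"] for d in decisions)
ev = 1 + sum(d["contrib"] for d in decisions if not d["structured"])
print(v, ev)   # 5 3
```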
Essential Complexity
Essential complexity helps detect unstructured code. Good designs can quickly deteriorate!
[Example: v = 10 with ev = 1 (well-structured) versus v = 11 with ev = 10 (unstructured).]
Vital Signs and High ev's
[Chart: risks of increasing ev over time, as ev climbs past 1, 3, 6, 10:]
• Intricate logic
• Conflicting decisions
• Unrealizable test paths
• Constraints for architectural improvement
• Difficult knowledge transfer to new staff
How to Manage and Reduce v and ev
[Chart: decreasing and managing v and ev over time:]
• Emphasis on design architecture and methodology
• Development and coding standards
• QA procedures and reviews
• Peer evaluations
• Automated tools
• Application portfolio management
• Modularization
How Much Supervising Is Done?
Module Design Complexity
Module Design Complexity
Measures amount of decision logic involved with subroutine calls
Identifies "managerial" modules
Indicates design reliability and integration testability
A related test thoroughness metric, tested design complexity, measures integration testing progress
Module Design Complexity
Module design complexity, iv: a measure of the decision logic that controls calls to subroutines.
– Applies to decision logic embedded within written code.
– Is derived from predicates in decision logic associated with calls.
– Is calculated for each module in the Battlemap.
– Grows from 1 to v based on the complexity of calling subroutines.
– Is related to the degree of "integratedness" between a calling module and its called modules.
Module Design Complexity
Module design complexity, iv, is calculated by reducing the module flowgraph. Reduction is completed by removing decisions and nodes that do not impact the calling control over a module's immediate subordinates.
Module Design Complexity
Example:

main()
{
    if (a == b)
        progd();
    if (m == n)
        proge();
    switch(expression)
    {
    case value_1:
        statement1;
        break;
    case value_2:
        statement2;
        break;
    case value_3:
        statement3;
    }
}

[Flowgraph of main: v = 5, with calls to progd() and proge(). The switch decisions do not impact calls, so reduction removes them. Reduced flowgraph: v = 3. Therefore iv of the original flowgraph = 3.]
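The same arithmetic can be sketched for iv. This is a simplified model, not the tool's reduction algorithm: each predicate is kept only if it controls a subroutine call, with contributions chosen to match the v = 5, iv = 3 example above:

```python
# Predicates in main(): two ifs guard calls; the three-way switch
# (contribution 2) contains no calls, so reduction removes it.
predicates = [
    {"contrib": 1, "controls_call": True},    # if (a == b) progd();
    {"contrib": 1, "controls_call": True},    # if (m == n) proge();
    {"contrib": 2, "controls_call": False},   # switch with three cases
]
v = 1 + sum(p["contrib"] for p in predicates)
iv = 1 + sum(p["contrib"] for p in predicates if p["controls_call"])
print(v, iv)   # 5 3
```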
Data Complexity
Actually a family of metrics
– Global data complexity (global and parameter), specified data complexity, date complexity
Measures amount of decision logic involved with selected data references
Indicates data impact and data testability
A related test thoroughness metric, tested data complexity, measures data testing progress
Data Complexity Calculation
[Module M, v = 6, with decisions C1 through C5 over nodes 1-12. References to Data A occur at nodes 4* and 9. Reducing M to the logic that affects Data A leaves nodes 1, 2, 3, 4*, 9, and 12, giving data complexity = 3.]
Paths and conditions for the reduced graph:
P1: 1-2-3-4-9-3-4-9-12   C1 = T, C2 = T, C2 = F
P2: 1-2-12               C1 = F
P3: 1-2-3-4-9-12         C1 = T, C2 = F
Module Metrics Report
v(G): number of unit test paths for a module
iv(G): number of integration tests for a module
Total: number of test paths for all modules
Average: average number of test paths for each module

Page 1                                           10/01/99
Module Metrics Report
Program: less

Module Name     Mod #  v(G)  ev(G)  iv(G)  File Name
-------------   -----  ----  -----  -----  ------------------
CH:fch_get        118    12      5      6  ..\code\CH.I
CH:buffered       117     3      3      1  ..\code\CH.I
ch_seek           105     4      4      2  ..\code\CH.I
ch_tell           108     1      1      1  ..\code\CH.I
ch_forw_get       106     4      1      2  ..\code\CH.I
ch_back_get       110     6      5      5  ..\code\CH.I
forw_line         101    11      7      9  ..\code\INPUT.I
back_line          86    12     11     12  ..\code\INPUT.I
prewind           107     1      1      1  ..\code\LINE.I
pappend           109    36     26      3  ..\code\LINE.I
control_char      119     2      1      1  ..\code\OUTPUT.I
carat_char        120     2      1      1  ..\code\OUTPUT.I
flush             130     1      1      1  ..\code\OUTPUT.I
putc              122     2      1      2  ..\code\OUTPUT.I
puts              100     2      1      2  ..\code\OUTPUT.I
error              83     5      1      2  ..\code\OUTPUT.I
position          114     3      1      1  ..\code\POSITION.I
add_forw_pos       99     2      1      1  ..\code\POSITION.I
pos_clear          98     2      1      1  ..\code\POSITION.I
PRIM:eof_bell     104     2      1      2  ..\code\PRIM.I
PRIM:forw          95    15      8     12  ..\code\PRIM.I
PRIM:prepaint      94     1      1      1  ..\code\PRIM.I
repaint            93     1      1      1  ..\code\PRIM.I
home               97     1      1      1  ..\code\SCREEN.I
lower_left        127     1      1      1  ..\code\SCREEN.I
bell              116     2      1      2  ..\code\SCREEN.I
vbell             121     2      1      2  ..\code\SCREEN.I
clear              96     1      1      1  ..\code\SCREEN.I
clear_eol         128     1      1      1  ..\code\SCREEN.I
so_enter           89     1      1      1  ..\code\SCREEN.I
so_exit            90     1      1      1  ..\code\SCREEN.I
getc               91     2      1      2  ..\code\TTYIN.I
-------------   -----  ----  -----  -----  ------------------
Total:                   142     93     82
Average:                4.44   2.91   2.56
Rows in Report: 32
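The Total and Average rows follow directly from the per-module columns. A small sketch using three v(G) values from the report (the aggregation shown is the report's own arithmetic; the row subset is just for illustration):

```python
# (module, v, ev, iv) rows taken from the Module Metrics Report
rows = [
    ("CH:fch_get", 12, 5, 6),
    ("CH:buffered", 3, 3, 1),
    ("ch_seek",     4, 4, 2),
]
total_v = sum(v for _, v, _, _ in rows)    # unit test paths for these modules
avg_v = total_v / len(rows)
print(total_v, round(avg_v, 2))            # 19 6.33
```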
Common Testing Challenges
Deriving Tests
– Creating a "Good" Set of Tests
Verifying Tests
– Verifying that Enough Testing was Performed
– Providing Evidence that Testing was Good Enough
When to Stop Testing
Prioritizing Tests
– Ensuring that Critical or Modified Code is Tested First
Reducing Test Duplication
– Identifying Similar Tests That Add Little Value & Removing Them
An Improved Testing Process
[Diagram: requirements drive black-box test scenarios at the sub-system or system level, while static identification of test paths from the implementation supports white-box analysis.]
What is McCabe Test?
[Diagram: source code is parsed into the McCabe Tools database; instrumented source code is exported and built into an executable; executing the code produces trace info that is imported back into the database, which drives requirements tracing, test coverage, and untested paths reports.]
Coverage Mode
[Battlemap before any trace file is imported: the color scheme represents coverage.]
Coverage Results
With a trace file imported, colors show "testedness": tested, partially tested, or untested, measured by branches, paths, or lines of code. Lines show execution between modules.
[Example Battlemap module: My_Function, v = 3, 67% covered.]
Coverage Results at Unit Level
Module -> Slice
[Flowgraph shows coverage results within the selected module.]
Deriving Functional Tests
Examine Partially Tested Modules
Visualize Untested Modules
Module Names Provide Insight into Additional Tests
[Example: module name 'search']
Deriving Tests at the Unit Level
[Example: a module whose decision structure repeats 18 times has on the order of 10^18 statistical paths.]
Too Many Theoretical Tests!
What is the Minimum Number of Tests?
What is a "Good" Number of Tests?
[Scale from 0 to 10^18 tests: between too few and too many lies minimum yet effective testing.]
Code Coverage
[Two example functions, 'A' and 'B'.]
Which Function Is More Complex?
Using Code Coverage
[Examples 'A' and 'B' each require 2 tests for full code coverage.]
Code Coverage Is Not Proportional to Complexity
McCabe's Cyclomatic Complexity
McCabe's cyclomatic complexity v(G) is the number of linearly independent paths.
[For two sequential decisions, one additional path beyond code coverage is required to determine the independence of the 2 decisions.]
Deriving Tests at the Unit Level
[Example module: complexity = 10.]
A minimum of 10 tests will:
• Ensure Code Coverage
• Test Independence of Decisions
Unit Level Test Paths - Baseline Method
The baseline method is a technique used to locate distinct paths within a flowgraph. The size of the basis set is equal to v(G).
[Example flowgraph, v = 5, with nodes A-G and decisions M=N, O=P, S=T, X=Y:]
Basis set of paths    Path conditions
P1: ABCBDEF           P1: M=N, O=P, S=T, O not = P
P2: AGDEF             P2: M not = N, X=Y
P3: ABDEF             P3: M=N, O not = P
P4: ABCF              P4: M=N, O=P, S not = T
P5: AGEF              P5: M not = N, X not = Y
Structured Testing Coverage
[Example flowgraph with nodes A-P and regions R1-R5.]
1. Generates independent tests
Basis set
P1: ACDEGHIKLMOP
P2: ABD…
P3: ACDEFH…
P4: ACDEGHIJL…
P5: ACDEGHIKLMNP
2. Code coverage - frequency of execution
Node   A B C D E F G H I J K L M N O P
Count  5 1 4 5 5 1 4 5 5 1 4 5 5 1 4 5
Other Baselines - Different Coverage
[Same flowgraph, different baseline:]
1. Generates independent tests
Basis set
P1: ABDEFHIJLMNP
P2: ACD…
P3: ABDEGH…
P4: ABDEGHIKL…
P5: ABDEGHIKLMOP
2. Code coverage - frequency of execution
Node   A B C D E F G H I J K L M N O P
Count  5 4 1 5 5 4 1 5 5 4 1 5 5 4 1 5
Previous code coverage - frequency of execution
Node   A B C D E F G H I J K L M N O P
Count  5 1 4 5 5 1 4 5 5 1 4 5 5 1 4 5
Same number of tests; which coverage is more effective?
Untested Paths at Unit Level
Cyclomatic Test Paths
– Module -> Test Paths
– Complete Test Paths by Default
Configurable Reports
– Preferences -> Testing
– Modify List of Graph/Test Path Flowgraphs
[Report shows the remaining untested test paths.]
Untested Branches at Unit Level
Preferences -> Testing (Add 'Tested Branches' Flowgraph to List)
Module -> Test Paths
[Flowgraph annotates the number of executions for decisions and highlights untested branches.]
Untested Paths at Higher Level
System Level Integration Paths
– Based on S1 (here S1 = 6)
– End-to-End Execution
– Includes All iv(G) Paths
Untested Paths at Higher Level
System Level Integration Paths (S1 = 6)
– Displayed Graphically
– Textual Report
– Theoretical Execution Paths
– Show Only Untested Paths
Untested Paths at Higher Level
Textual Report of End-to-End Decisions
– Decision Values with Line/Node #
– Module Calling List
Verifying Tests
Use Coverage to Verify Tests
Store Coverage Results in Repository
Use Execution Flowgraphs to Verify Tests
Verifying Tests Using Coverage
Four Major Coverage Techniques:
– Code Coverage
– Branch Coverage
– Path Coverage
– Boolean Coverage (MC/DC)
[Battlemap modules annotated with coverage percentages: 0%, 23%, 35%, 67%, 100%.]
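As an illustration of the branch-coverage arithmetic behind those percentages (the branch names and execution counts are hypothetical):

```python
# Branch coverage: fraction of decision outcomes exercised at least once.
branch_counts = {
    ("4", "5"): 3,    # true branch of a decision, hit 3 times
    ("4", "7"): 0,    # false branch, never executed
}
covered = sum(1 for count in branch_counts.values() if count > 0)
branch_coverage = 100 * covered // len(branch_counts)
print(branch_coverage)   # 50
```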
When to Stop Testing
Coverage to Assess Testing Completeness
– Branch Coverage Reports
Coverage Increments
– How Much New Coverage for Each New Set of Tests?
When to Stop Testing
Is All of the System Equally Important?
Is All Code in An Application Used Equally?
– 10% of Code Used 90% of Time
– Remaining 90% Only Used 10% of Time
Where Do We Need to Test Most?
When to Stop Testing / Prioritizing Tests
Locate "Critical" Code
– Important Functions
– Modified Functions
– Problem Functions
Mark Modules
– Create New "Critical" Group
Import Coverage
Assess Coverage for "Critical" Code
– Coverage Report for "Critical" Group
– Examine Untested Branches
[Example modules: Runproc (v = 32, 67% covered), Search (v = 39, 52% covered), My_Function (v = 56).]
Criticality Coverage
Optionally Use Several "Critical" Groups with Increasing Levels
Determine Coverage for Each Group
Focus Testing Effort on Critical Code
[Example group coverage: 25%, 30%, 50%, 70%, 90%; low values signal insufficient testing.]
Useful as a Management Technique
When to Stop Testing
Store Coverage in Repository
– With Name & Author
Load Coverage
– Multiple Selections
– Share Between Users
– Import Between Analyses with Common Code
Testing -> Load/Save Testing Data
Testing the Changes
[Version 1.0 coverage results versus version 1.1, with the previous coverage results imported into the new analysis and changed code highlighted.]
Import Previous Coverage Results Into New Analysis:
• Parser Detects Changed Code
• Coverage Removed for Modified or New Code
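The effect of the import step can be sketched as follows (the module names and coverage values are hypothetical):

```python
# Re-importing version 1.0 coverage into a 1.1 analysis: coverage is kept
# for unchanged modules and reset for modules the parser flags as changed.
previous = {"fch_get": 0.80, "pappend": 0.55, "error": 1.00}
changed = {"pappend"}          # detected as modified in version 1.1
coverage = {m: (0.0 if m in changed else c) for m, c in previous.items()}
print(coverage["pappend"], coverage["error"])   # 0.0 1.0
```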
Testing the Changes
Store Coverage for Versions
– Use Metrics Trending to Show Increments
– Objective is to Increase Coverage between Releases
[Chart: incremental coverage rising from 0% toward 100% across v1.0, v1.1, v1.2, v2.0.]
McCabe Change
Marking Changed Code
– Reports Showing Change Status
– Coverage Reports for Changed Modules
Configurable Change Detection
– Standard Metrics
– "String Comparison"
Manipulating Coverage
Addition/Subtraction of slices
– The technique:
  ~(Test A): complement of Test A
  (Test B) ^ ~(Test A): Test B intersect complement of Test A
[Venn diagrams of Test A and Test B illustrate each operation.]
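With slices modeled as sets of executed lines (the line numbers here are hypothetical), the operations above are plain set algebra:

```python
# Slice addition/subtraction with Python sets.
all_lines = set(range(1, 41))       # every executable line in the module
test_a = {10, 11, 12, 20}           # lines executed by Test A
test_b = {10, 11, 30, 31}           # lines executed by Test B
not_a = all_lines - test_a          # ~(Test A)
b_and_not_a = test_b & not_a        # (Test B) ^ ~(Test A)
print(sorted(b_and_not_a))          # [30, 31]: what only Test B adds
```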
Slice Manipulation
Slice Operations: Manipulate Slices Using Set Theory
Export Slice to File
– List of Executed Lines
– Must Be in Slice Mode
Review
McCabe IQ Products
Metrics
– cyclomatic complexity, v
– essential complexity, ev
– module design complexity, iv
Testing
– Deriving Tests
– Verifying Tests
– Prioritizing Tests
– When is testing complete?
Managing Change