Transcript of "Advancing Testing Using Axioms" by Paul Gerrard.
Advancing Testing Using Axioms
Paul Gerrard
Agenda
Axioms – a brief introduction
Advancing testing using axioms:
First Equation of Testing
Test strategy and approach
Testing improvement
A skills framework for testers
Quantum theory for testing
Close
Some testers disagree with other testers... truly, madly, deeply!
Surely, there must be SOME things that ALL testers can AGREE ON?
Or are we destined to argue FOREVER?
Test axioms – a little history
Started as a ‘thought experiment’ in my blog in February 2008
Some quite vigorous debate on the web: 'great idea', 'axioms don't exist', 'Paul has his own testing school'
Initial 12 ideas evolved into 16 test axioms
Tester's Pocketbook: testers-pocketbook.com
Test Axioms website: test-axioms.com
If Axioms are "common ground" for ALL testers...
• Some very useful by-products: test strategy, improvement, skills framework
• Interesting research areas: First Equation of Testing, Testing Uncertainty Principle, Quantum Theory, Relativity, Exclusion Principle...
• You can tell I like physics
There is no agreed set of testing laws
There are no agreed definitions of test or
testing!
Our definitions MUST be context-neutral to support the testing of ANY SYSTEM.
The words software, IT, program, technology, methodology, v-model, entry/exit criteria, risk do not appear in the definitions.
The selected definition of ‘test’
American Heritage Dictionary:
Test: (noun)
• A procedure for critical evaluation;
• A means of determining the presence, quality, or truth of something;
• A trial.
‘Stakeholder Obsessed’
A testing stakeholder is someone who is interested in the outcome of testing.
You can be your OWN stakeholder (e.g. dev and users).
16 Proposed Axioms
(in three groups)
Let’s look at a few of the test axioms
Stakeholder Axiom
Testing needs stakeholders
Test Model Axiom
Test design is based on models
Test Basis Axiom
Testers need sources of
knowledge to select things to
test
Coverage Axiom
Testing needs a test coverage model or models
Fallibility Axiom
Our sources of knowledge are
fallible and incomplete
Confidence Axiom
The value of testing is
measured by the confidence of stakeholder
decision making
Event Axiom
Testing never goes as planned; evidence arrives in discrete quanta
“Ohhhhh... Look at that, Schuster... Dogs are so cute when they try to comprehend quantum mechanics.”
Never-Finished Axiom
Testing never finishes; it
stops
Consider Axioms as thinking tools
Advancing Testing Using Axioms
First Equation of Testing
Axioms + Context + Values + Thinking = Approach
Why is the equation useful?
Separation of Axioms, context, values and thinking.
Tools, methodologies, certification and maturity models promote approaches without reference to your context or values. No thinking is required!
Without a unifying test theory you have no objective way of assessing these products.
Test Strategy and Approach
Strategy is a thought process not a document
Contexts of Test Strategy
[Mind-map: factors surrounding the Test Strategy]
Risks, Goals, Constraints, Human resources, Environment, Timescales, Process (lack of?), Contract, Culture, Opportunities, User involvement, Automation, De-duplication, Early testing, Skills, Communication, Axioms, Artefacts
Testing needs stakeholders (p64)
Summary: Identify and engage the people or organisations that will use and benefit from the test evidence we are to provide.
Consequence if ignored or violated: There will be no mandate or any authority for testing. Reports of passes, fails or enquiries have no audience.
Questions: Who are they? Whose interests do they represent? What evidence do they want? What do they need it for? When do they want it? In what format? How often?
Test design is based on models (p68)
Summary: Choose test models to derive tests that are meaningful to stakeholders. Recognise the models' limitations and the assumptions that the models make.
Consequence if ignored or violated: Test design will be meaningless and not credible to stakeholders.
Questions:
Are design models available to use as test models? Are they mandatory?
What test models could be used to derive tests from the Test Basis?
Which test models will be used?
Are test models to be documented or are they purely mental models?
What are the benefits of using these models?
What simplifying assumptions do these models make?
How will these models contribute to the delivery of evidence useful to the acceptance decision makers?
How will these models combine to provide sufficient evidence without excessive duplication?
How will the number of tests derived from models be bounded?
IEEE 829 Test Plan Outline
1. Test Plan Identifier
2. Introduction
3. Test Items
4. Features to be Tested
5. Features not to be Tested
6. Approach
7. Item Pass/Fail Criteria
8. Suspension Criteria and Resumption Requirements
9. Test Deliverables
10. Testing Tasks
11. Environmental Needs
12. Responsibilities
13. Staffing and Training Needs
14. Schedule
15. Risks and Contingencies
16. Approvals
Based on IEEE Standard 829-1998
IEEE 829 Plan and Axioms
Items 1, 2 – Administration
Items 3+4+5 – Scope Management, Prioritisation
Item 6 – All the Axioms are relevant
Items 7+8 – Good-Enough, Value
Item 9 – Stakeholder, Value, Confidence
Item 10 – All the Axioms are relevant
Item 11 – Environment
Item 12 – Stakeholder
Item 13 – All the Axioms are relevant
Item 14 – All the Axioms are relevant
Item 15 – Fallibility, Event
Item 16 – Stakeholder
A better Test Strategy and Plan?
1. Stakeholder objectives
Stakeholder management
Goal and risk management
Decisions to be made and how (acceptance)
How testing will provide confidence and be assessed
How scope will be determined
2. Design approach
Sources of knowledge (bases and oracles)
Sources of uncertainty
Models to be used for design and coverage
Prioritisation approach
3. Delivery approach
Test sequencing policy
Repeat test policies
Environment requirements
Information delivery approach
Incident management approach
Execution and end-game approach
4. Plan (high- or low-level)
Scope
Tasks
Responsibilities
Schedule
Approvals
Risks and contingencies
Testing Improvement
Test process improvement is a waste
of time
The delusion of 'best practice'
There are no "practice" Olympics to determine the best.
There is no consensus about which practices are best, unless consensus means "people I respect also say they like it".
There are practices that are more likely to be considered good and useful than others, within a certain community and assuming a certain context.
Good practice is not a matter of popularity. It's a matter of skill and context.
Derived from "No Best Practices", James Bach, www.satisfice.com
Actually it's 11 (most were not software related)
The delusion of process models (e.g. CMM)
Google search: "CMM" – 22,300,000; "CMM Training" – 48,200; "CMM improves quality" – 74 (but really 11 – most of these have NOTHING to do with software)
A Gerrard Consulting client...
CMM level 3 and proud of it (chaotic, hero culture)
Hired us to assess their overall s/w process and make recommendations (quality, time to deliver is slipping)
40+ recommendations, only 7 adopted – they couldn't change
How on earth did they get through the CMM 3 audit?
“Test Process Improvement is a Waste of Time”
Using process change to fix cultural or organisational problems is never going to work
Improving test in isolation is never going to work either
Need to look at changing context rather than values…
Why you are where you are
Context (your context) + Values (your values) + Thinking (your thinking) = Approach (your approach)
Where maturity models come from
Context (someone else's) + Values (someone else's) + Thinking (someone else's) = Approach (someone else's)
Making change happen
Axioms (recognise) + Context (hard to change) + Values (could change?) + Thinking (just do some) = Approach (your approach)
Using the axioms and questions
Axioms represent the critical things to think about.
Associated questions act as checklists to: assess your current approach; identify gaps and inconsistencies in the current approach; QA your new approach in the future.
Axioms represent the WHAT; your approach specifies HOW.
Eight-stage change process (after Kotter): Mission, Coalition, Vision, Communication, Action, Wins, Consolidation, Anchoring
Changes identified here
If you must use one, this is where your
‘test model’ comes into play
A Skills framework for testers
Axioms indicate WHAT to think about...
...so the Axioms point to SKILLS
Test design is based on models (p68)
Summary: Choose test models to derive tests that are meaningful to stakeholders. Recognise the models' limitations and the assumptions that the models make.
Consequence if ignored or violated: Test design will be meaningless and not credible to stakeholders.
Questions:
Are design models available to use as test models? Are they mandatory?
What test models could be used to derive tests from the Test Basis?
Which test models will be used?
Are test models to be documented or are they purely mental models?
What are the benefits of using these models?
What simplifying assumptions do these models make?
How will these models contribute to the delivery of evidence useful to the acceptance decision makers?
How will these models combine to provide sufficient evidence without excessive duplication?
How will the number of tests derived from models be bounded?
Test design and modelling skills
A tester needs to understand:
Test models and how to use them
How to select test models from fallible sources of knowledge
How to design test models from fallible sources of knowledge
Significance, authority and precedence of test models
How to use models to communicate
The limitations of test models
Familiarity with common models
Is this all that current certification provides?
Testing as a commodity; testers must specialise
Functional testers are endangered:
Certification covers process and clerical skills
Functional testing is becoming a commodity and is easy to outsource
To survive, testers need to specialise:
Management; test automation; test strategy, design, goal- and risk-based; stakeholder management; non-functional testing; business domain specialists...
Training and certification must change
Intellectual skills and capabilities are more important than clerical skills.
Need to re-focus on: testing thought processes (Axioms); real-world examples, not theory; testing as information provision; goal- and risk-based testing; testing as a service (to stakeholders).
Practical, hands-on, real-world training, exercises and coaching.
Quantum Theory of Testing
If evidence arrives in discrete quanta...
...can we assign a value to it?
How testing builds confidence
Tests are usually run one by one.
Every individual test has some significance.
Some tests expose failures, but ultimately we want all tests to PASS.
When all tests pass, the stakeholders are happy, aren't they?
Can we measure confidence?
But...
Testing never goes as planned (p78)
Testers cannot usually: prepare all tests they COULD do; run ALL tests in the plan; re-test ALL fixes; regression-test as much or as often as required.
How do we judge the significance of tests? To include them in scope for planning (or not); to execute them in the right order; to ensure the most significant tests are run.
Test ‘progress’ is measured using test cases and incidents
What stakeholders want, ultimately, is every test to pass.
The ideal situation is: we have run all our tests; all our tests pass; acceptance is a formality.
Not all tests pass, though. We track incidents, severity and priority – great. But how do we track the significance or value of tests that pass?
The significance of a single test varies
Significance varies by objective: criticality of the business goal it covers; criticality of the risk it covers.
Significance varies by precedent: the first end-to-end test pass is significant; subsequent e2e passes are less significant.
Significance varies by functional dependence: a test of shared functionality is more important than one of standalone functionality.
Significance varies by stakeholder: customer and sponsor tests are more significant than developer tests.
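The four sources of significance above could be folded into a single per-test score. A minimal Python sketch, assuming an invented scale and weights; the factor names and numbers are illustrative only, not part of the proposal:

```python
# Illustrative micro-significance score for one test.
# The scale and weights are invented for this sketch.
def significance(goal_criticality, risk_criticality,
                 precedent_rank, shares_functionality, stakeholder):
    """goal_criticality, risk_criticality: 1 (low) to 3 (high).
    precedent_rank: 1 for the first test of its kind, higher for repeats.
    shares_functionality: True if the test covers shared functionality.
    stakeholder: 'customer', 'sponsor' or 'developer'.
    """
    score = goal_criticality + risk_criticality      # varies by objective
    score += 2.0 / precedent_rank                    # varies by precedent
    score += 1 if shares_functionality else 0        # functional dependence
    score += 2 if stakeholder in ('customer', 'sponsor') else 0
    return score

# The first end-to-end customer test of a critical goal outranks
# a fifth repeat of a low-criticality developer test.
first_e2e = significance(3, 3, 1, True, 'customer')     # 11.0
repeat_dev = significance(1, 1, 5, False, 'developer')  # 2.4
```

Since significance is relative rather than absolute (per the caveat later in the deck), only the ordering of scores matters here, not their magnitudes.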
Using the significance of tests to manage progress
Stakeholders usually know how to judge the significance of failures when tests FAIL. So why don't we assess the significance of tests BEFORE we run them?
If we did that: we could scope and prioritise more effectively; we would know exactly which tests provide enough information for an acceptance decision; acceptance criteria would be taken seriously.
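Scoping and prioritising by pre-assessed significance could be sketched as follows; the test names, scores and acceptance threshold are hypothetical:

```python
# Hypothetical sketch: order a test plan by pre-assessed significance,
# then find the smallest prefix that delivers enough evidence to accept.
def prioritise(tests):
    """tests: list of (name, significance) pairs; most significant first."""
    return sorted(tests, key=lambda t: t[1], reverse=True)

def acceptance_subset(tests, threshold):
    """Return the most significant tests whose combined significance
    meets the (stakeholder-agreed) acceptance threshold."""
    chosen, total = [], 0
    for name, sig in prioritise(tests):
        if total >= threshold:
            break
        chosen.append(name)
        total += sig
    return chosen

plan = [('login', 5), ('report', 2), ('end-to-end order', 8), ('tooltip', 1)]
# With a threshold of 13, two tests already provide enough evidence:
subset = acceptance_subset(plan, 13)   # ['end-to-end order', 'login']
```

Agreeing such a subset up front is one way acceptance criteria could be "taken seriously": the acceptance decision is tied to named tests rather than to a raw pass count.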
Quantum Testing
Using business goals, risks and coverage to drive testing is 'advanced', but it is still VERY CRUDE.
Quantum Testing proposal:
Assign a micro-significance to every test.
Assess the macro-significance of collections of tests.
As tests are created and executed, evidence increases incrementally.
Manage progress by monitoring EVIDENCE rather than by counting test cases.
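Monitoring evidence rather than counting test cases might look like this sketch; the significance numbers and pass/fail statuses are invented for illustration:

```python
# Progress as the fraction of total planned significance ("evidence")
# delivered by passing tests, not as a raw count of executed cases.
def evidence_progress(tests):
    """tests: list of (significance, status) pairs,
    with status one of 'pass', 'fail', 'not run'."""
    total = sum(sig for sig, _ in tests)
    delivered = sum(sig for sig, status in tests if status == 'pass')
    return delivered / total if total else 0.0

plan = [(8, 'pass'), (5, 'pass'), (1, 'pass'), (1, 'pass'), (8, 'not run')]
# 4 of 5 test cases (80%) have passed, but only 15/23 (about 65%)
# of the planned evidence has been delivered.
ratio = evidence_progress(plan)
```

The gap between the two figures is the point: a test-case count overstates progress whenever the unexecuted tests carry high significance.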
The promise of quantum testing
We and our stakeholders could know the value of tests BEFORE we run them.
Stakeholders would understand WHAT we are doing and WHY.
The problem of 'enough testing' becomes a shared challenge (testers and stakeholders).
Caveats: we assign significance qualitatively rather than numerically; significance is RELATIVE rather than absolute!
Close
Axioms are context-neutral rules for testing.
The Equation of Testing separates axioms, context, values and thinking, so we can have sensible conversations about process.
Axioms and associated questions provide context-neutral checklists for test strategy, assessment/improvement and skills.
Quantum Testing aims to address the question, "How much testing is enough?"
Thank you!
testaxioms.com
testers-pocketbook.com
gerrardconsulting.com