Let's Test Together by Justin Hunter

101: “Let's Test Together”
Track: Hands-On Testing Techniques Lab
Tuesday, Oct. 19th 10:15am - 11:15am
Justin Hunter, CEO of Hexawise

Transcript of Let's Test Together by Justin Hunter


Objectives

1. Introduce a test design method that will help make you a more effective tester

2. Have you actively participate and share your ideas in creating tests as we talk

3. Change the way you think about how you should be testing software

Agenda

1. “Lessig-style” overview of benefits

2. Spools Exercise

3. Darts Exercise

4. Generating Tests

Lessig-style Overview of Benefits

[Launch Lessig-style presentation]

Spools Exercise

How many possible tests do 4 spools represent?

Credit Rating (5)

A+
A
A-
B
<B

Income (6)

01-30K
30,001-50K
50,001-100K
100,001-1 Mil
1 Mil - 50 Mil

Property Type (6)

1 Family
2 Family
3 Family
4 Family
Coop
Condo

States (6)

CA
IL
GA
NY
TX
WI
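The answer can be checked mechanically: the total number of possible tests is simply the product of the value counts listed above. A minimal Python sketch:

```python
import math

# Value counts taken from the four spools above
value_counts = {"Credit Rating": 5, "Income": 6, "Property Type": 6, "States": 6}

total_tests = math.prod(value_counts.values())
print(total_tests)  # 5 * 6 * 6 * 6 = 1080
```

Adding more spools multiplies this number further, which is why the count explodes so quickly in the next part of the exercise.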

Spools Exercise

How long does it take before your head is “ready to explode”?

How many possible tests do 8 spools represent?

How many possible tests do all the spools represent?

Spools Exercise

How do you cope?

http://www.flickr.com/photos/stewf/2579810818

Spools Exercise: ‘The Problem with Blinders’

Combinations of 2 test inputs are responsible for many defects, including these:

1) Use shorten URL feature? = Y and 2) Type to shorten = already shortened URL

1) Take a photo? = Y and 2) In the midst of composing a tweet = Y

http://www.flickr.com/photos/kanaka/1798327442

Darts Exercise

With 3.7 million total possible tests, how many tests are required to test for all possible pairs of values?
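Pairwise (2-way) generation is the kind of work a tool automates. As a rough illustration only — this is a naive greedy sketch over a tiny made-up model, not Hexawise's actual algorithm — each new test is picked to cover as many still-uncovered value pairs as possible:

```python
from itertools import combinations, product

def pairwise_tests(parameters):
    """Naive greedy pairwise generator: each new test is chosen to
    cover as many still-uncovered value pairs as possible."""
    names = list(parameters)
    # Every (parameter, value) pair that must appear together at least once
    uncovered = set()
    for p1, p2 in combinations(names, 2):
        for v1 in parameters[p1]:
            for v2 in parameters[p2]:
                uncovered.add(((p1, v1), (p2, v2)))

    tests = []
    while uncovered:
        best, best_gain = None, -1
        # Exhaustive scan is fine for tiny models; real tools are smarter
        for combo in product(*(parameters[n] for n in names)):
            candidate = dict(zip(names, combo))
            gain = sum(1 for pair in uncovered
                       if all(candidate[p] == v for p, v in pair))
            if gain > best_gain:
                best, best_gain = candidate, gain
        tests.append(best)
        uncovered = {pair for pair in uncovered
                     if not all(best[p] == v for p, v in pair)}
    return tests

# Made-up miniature model (values borrowed from the spools slide)
model = {
    "Credit Rating": ["A+", "A", "A-", "B", "<B"],
    "Property Type": ["1 Family", "2 Family", "Coop", "Condo"],
    "State": ["CA", "NY", "TX"],
}
tests = pairwise_tests(model)
print(len(tests), "tests instead of", 5 * 4 * 3, "exhaustive tests")
```

Even this crude greedy pass needs only a fraction of the 60 exhaustive tests; at the scale of the darts example, millions of possible tests collapse to a few dozen.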

Hands on Test Generation

1. H/W and/or Software Configurations

2. User Types

3. Main User Actions

4. Sub-steps and Choices for User Actions

5. Who, What, When, Where, Why, How

6. Business Rules

“Let’s Test Together”

First Set of Tests - Mortgage Example:
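As a starting point for the exercise, the six categories above might translate into a parameter model like the following. Every parameter name and value here is an illustrative placeholder, not the actual model built in the session:

```python
# Hypothetical mortgage-application test model; all names and
# values are illustrative placeholders, not the session's answers.
mortgage_model = {
    "Browser": ["IE", "Firefox", "Chrome"],               # 1. H/W-S/W configs
    "User Type": ["Broker", "Loan Officer", "Customer"],  # 2. User types
    "Main Action": ["Apply", "Save Draft", "Cancel"],     # 3. Main user actions
    "Rate Type": ["Fixed", "Adjustable"],                 # 4. Sub-steps / choices
    "When": ["Business hours", "After hours"],            # 5. Who/What/When/...
    "Credit Rating": ["A+", "A", "A-", "B", "<B"],        # 6. Business rules
}
print(len(mortgage_model), "parameters")
```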

Hands on Test Generation

1. H/W and/or Software Configurations

2. User Types

3. Main User Actions

4. Sub-steps and/or Choices for User Actions

5. Who, What, When, Where, Why, How

6. Business Rules

“Let’s Test Together”

Second Set of Tests - Participants Choose an Application:

Appendix Slides

601: “5 Kick Ass Test Design Techniques from Combinatorial Testing”

Track: Hands-On Testing Techniques Lab

Wednesday, Oct. 20th 2:30pm - 3:30pm

Justin Hunter, CEO of Hexawise

Want to hear more? Want to practice?

© Hexawise, 2010. All rights reserved.

Overview of techniques discussed in tomorrow’s session

Excellent introductory articles and instructional videos: www.CombinatorialTesting.com

Also, please feel free to contact me if you have any questions. I’d be happy to quickly review a test plan or two, answer your questions, give quick pointers to help you run a pilot, etc. Seriously, I enjoy helping people get started with this approach. Please don’t hesitate to reach out. There’s no charge and no catch.

Additional sources of information

1. Plan Scope
Be clear about single or multiple:

- Features / Functions / Capabilities
- User types
- Business Units
- H/W or S/W Configurations

Level of Detail
Acceptable options:

- High level: “search for something”
- Medium level: “search for a book”
- Detailed: “search for ‘Catcher in the Rye’ by its title”

Passive Field Treatment
Distinguish between important fields (particularly those that will trigger business rules) and unimportant fields in the application.

Quickly document what your approach will be towards passive fields. You might consider: ignore them (e.g., don’t select any Values in your plan) or a 3-Value approach such as “Valid,” “Invalid (then fix),” and “Blank (then fix).”
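The 3-Value treatment for passive fields can be sketched as data; the field name below is a made-up example of a field that triggers no business rules:

```python
# The three-value treatment for passive fields described above
PASSIVE_TREATMENTS = ["Valid", "Invalid (then fix)", "Blank (then fix)"]

def passive_plan(fields):
    """Pair every passive field with each treatment exactly once."""
    return [(field, treatment)
            for field in fields
            for treatment in PASSIVE_TREATMENTS]

# "Middle Name" is an illustrative passive field
print(passive_plan(["Middle Name"]))
```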

2. Create Configurations
First add hardware configurations.

Next add software configurations.

Users
Next, add multiple types of users (e.g., administrator, customer, special customer).

Consider permission / authority levels of admin users as well as business rules that different users might trigger.

Actions
Start with Big Common Actions made by users.

After completing Big Common Actions, circle back and add Small Actions and Exceptions.

Remember some actions may be system-generated.

3. Refine Business Rules
Select Values to trigger business rules.

Identify equivalence classes.

Test for boundary values.

Mark constraints / invalid pairs.
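Boundary values for a numeric equivalence class can be enumerated mechanically. The income band below is taken from the spools exercise; the helper itself is just a sketch:

```python
def boundary_values(lo, hi):
    """Classic boundary set for an inclusive numeric range:
    just below, on, and just above each edge."""
    return [lo - 1, lo, lo + 1, hi - 1, hi, hi + 1]

# The "30,001-50K" income band from the spools exercise
print(boundary_values(30_001, 50_000))
# [30000, 30001, 30002, 49999, 50000, 50001]
```

Note that the two values just outside the band (30000 and 50001) belong to the neighboring equivalence classes and test that the boundary itself is enforced correctly.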

Gap Filling
Identify gaps by analyzing decision-tree outcomes sought vs. delivered by tests, “gap hunting” conversations with SMEs, etc.

Fill gaps by either (i) adding Parameters and/or Values or (ii) creating “one-off” tests.

Iteration
Refine longest lists of Values; reduce their numbers by using equivalence classes, etc.

Create Tests with and without borderline Values; consider cost/benefit tradeoffs of additional test design refinements.

Consider stopping testing after reaching ~80% coverage.

Consider 2-way, 3-way, and Mixed-Strength options.
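The jump from 2-way to 3-way strength can be quantified by counting how many value combinations each must cover; using the value counts from the four spool parameters earlier:

```python
from itertools import combinations
from math import prod

def interactions(sizes, strength):
    """Total value combinations a t-way plan must cover, given the
    number of values per parameter."""
    return sum(prod(group) for group in combinations(sizes, strength))

sizes = [5, 6, 6, 6]           # the four spool parameters
print(interactions(sizes, 2))  # 198 pairs for 2-way coverage
print(interactions(sizes, 3))  # 756 triples for 3-way coverage
print(prod(sizes))             # 1080 tests for exhaustive coverage
```

Higher strength covers more combinations and so needs more tests; a Mixed-Strength plan applies the higher strength only where interactions matter most.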

4. Execute
Auto-Scripting
Add auto-scripting instructions once; apply those instructions to all of the tests in your plan instantly.

Don’t include complex Expected Results in auto-scripts.

Expected Results
Export the tests into Excel when you’re done iterating the plan.

Add complex Expected Results in Excel post-export.

Continuous Improvement
If possible, measure defects found per tester hour “with and without Hexawise” and share the results.

Add inputs based on undetected defects.

Share good, proven plan templates with others.

Practice Tips: 4-Step Process

Four-Step Process to Design Efficient and Effective Tests

“You might be headed for trouble if...”

1. Plan Scope
... You cannot clearly describe both the scope of your test plan and what will be left out of scope.

Level of Detail
... Parameters with the most Values have more Values than they require. 365 values for “days of the year” is bad. Instead, use equivalence class Values like “weekend” & “weekday.” When in doubt, choose more Parameters and fewer Values.

Passive Field Treatment
... You cannot clearly describe your strategy to deal with unimportant details. If Values will impact business rules, focus on them. If Values don’t impact business rules, consider ignoring them.

2. Create Configurations
... You have ignored hardware and software configurations without first confirming this approach with stakeholders.

Users
... You have not included all the different types of users necessary to trigger different business rules. What user types might create different outcomes? Authority level? Age? Location? Income? Customer status?

Actions
... You start entering Small Actions (e.g., “search for a hardback science book by author name”) before you enter Big Actions (e.g., “Put Something in Cart. Buy it.”). First go from beginning to end at a high level. After you’ve done that, feel free to add more details.

3. Refine Business Rules
... You forget to identify invalid pairs. - or - ... You rely only on Functional Requirements and Tech Specs without thinking hard yourself and asking questions of SMEs about business rules and outcomes that are not yet triggered.

Gap Filling
... You assume that the test conditions coming out of Hexawise will be 100% of the tests you should run. There might well be additional “one-off” things that you should test and/or a few negative tests to design by hand.

Iteration
... You forget to look at the Coverage Analysis charts. If you achieve 80% coverage in the first quarter of the tests, you should measure the cost/benefit implications of executing the last 3/4 of the tests.

4. Execute
Auto-Scripting
... You add detailed Expected Results in the tests. - or - ... You forget that this feature exists and find yourself typing out test-by-test instructions one by one.

Expected Results
... You invest a lot of time in calculating and documenting Expected Results before you have determined your “final version” Parameters and Values. Last-minute additions to inputs will jumble up test conditions for most test cases.

Continuous Improvement
... You don’t ask (when defects that the tests missed are found post-testing) “What input could have been added to the test plan to detect this?” or “Should I add that input to the Hexawise test plan now to improve it in advance of the next time it is used?”

Practice Tips: Warning Signs
