Test Pyramid vs ROI

Anton Semenchenko

Creator of the communities www.COMAQA.BY and www.CoreHard.by, founder of the company www.DPI.Solutions, «tricky» manager at EPAM Systems. Almost 15 years of experience in IT; main specializations: automation, C++ and lower-level development, management, sales.

Anton Semenchenko

EPAM Systems, Software Testing Manager

Agenda

1. Test Pyramid: general information

2. ROI: general information

3. Two questions

4. Test Pyramid: detailed information

5. Sources of information

6. The same two questions

7. Solution: high-level information

Test Pyramid: definition

• «An effective test automation strategy calls for automating tests at three different levels, as shown in Figure, which depicts the test automation pyramid»

Test Pyramid with percentages #1

1. UI – 5%

2. Acceptance – 5%

3. Integration – 10%

4. Unit – 80%

Test Pyramid with percentages #2

1. UI – 1%

2. End to End Flow – 4%

3. Workflow API – 6%

4. Integration – 9%

5. Domain Logic Acceptance – 10%

6. Unit – 70%

ROI: general information

Answer three simple questions:

1. How many hours do we spend currently?

2. How many hours will we spend after automation?

3. When will automation bring value?

What information should be gathered?

1. Number of test cases (for estimations)

2. Number of executions per release

3. Effort for manual execution of the test cases

ROI: general information

What should be estimated?

Implementation effort:

1. Effort to create/adopt automation framework

2. Effort to automate test cases

Maintenance effort:

1. Execution of automated scripts

2. Results analysis

3. Bug reporting

4. Fixing “broken” scripts

ROI: calculation

How many hours do we spend currently?

Manual effort per release = Effort for TC execution * Number of runs per release

How many hours will we spend after automation?

Effort after automation per release = Maintenance effort + Effort for manual execution of non-automatable TCs * Number of runs per release

When will automation bring value?

Number of releases to recoup the investment = Automation implementation effort / (Manual effort per release - Effort after automation per release)
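To make the arithmetic concrete, below is a minimal sketch of these three formulas in Python. All names and numbers in the usage example are hypothetical placeholders, not data from the talk.

```python
# Minimal sketch of the ROI formulas above; plug in your own project data.

def manual_effort_per_release(tc_execution_hours: float, runs_per_release: int) -> float:
    """Manual effort per release = effort for TC execution * number of runs per release."""
    return tc_execution_hours * runs_per_release

def effort_after_automation(maintenance_hours: float,
                            non_automatable_exec_hours: float,
                            runs_per_release: int) -> float:
    """Effort after automation per release = maintenance effort +
    manual effort for the non-automatable TCs * number of runs per release."""
    return maintenance_hours + non_automatable_exec_hours * runs_per_release

def releases_to_recoup(implementation_hours: float,
                       manual_per_release: float,
                       automated_per_release: float) -> float:
    """Number of releases until automation pays for itself."""
    savings = manual_per_release - automated_per_release
    if savings <= 0:
        raise ValueError("automation never pays off with these numbers")
    return implementation_hours / savings

# Hypothetical numbers: 40 h of manual execution, 5 runs per release,
# 300 h to implement automation, 20 h maintenance, 8 h stays manual.
manual = manual_effort_per_release(40, 5)          # 200 h per release
automated = effort_after_automation(20, 8, 5)      # 60 h per release
print(releases_to_recoup(300, manual, automated))  # ~2.14 releases
```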

Two questions (or four)

• Percentages of what: time \ money or the number of test cases?

• What determines the pyramid \ triangle angles?

• How to “calculate” the two pyramids for a specific project \ context?

• How to “calculate” the exact percentages for a specific project \ context?

Test Pyramid: definition

• «An effective test automation strategy calls for automating tests at three different levels, as shown in Figure, which depicts the test automation pyramid»

Test Pyramid: Unit Testing

• «Unit testing should be the foundation of a solid test automation strategy and as such represents the largest part of the pyramid. Automated unit tests are wonderful because they give specific data to a programmer—there is a bug and it’s on line 47.»

• «Because unit tests are usually written in the same language as the system, programmers are often most comfortable writing them.»
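As a concrete illustration of that “line 47” precision, here is a minimal unit-test sketch in Python (pytest style); the apply_discount function and its values are invented for the example, not taken from the talk.

```python
# Minimal pytest-style unit test; the function under test is hypothetical.

def apply_discount(price: float, percent: float) -> float:
    """Return the price after subtracting a percentage discount."""
    return price * (1 - percent / 100)

def test_apply_discount():
    # If this fails, the runner reports this exact file and line:
    # the "there is a bug and it's on line 47" kind of feedback.
    assert apply_discount(100.0, 20.0) == 80.0

def test_zero_discount_keeps_price():
    assert apply_discount(100.0, 0.0) == 100.0
```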

Test Pyramid: Service

• «Although I refer to the middle layer of the test automation pyramid as the service layer, I am not restricting us to using only a service-oriented architecture. All applications are made up of various services. In the way I’m using it, a service is something the application does in response to some input or set of inputs.»

• «Service-level testing is about testing the services of an application separately from its user interface.»

Test Pyramid: Service - Calculator example

• «What’s needed next is a simple program that can read the rows of this spreadsheet, pass the data columns to the right service within your application, and verify that the right results occur. Despite this simplistic example where the result is a simple calculation, the result could be anything—data updated in the database, an e-mail sent to a specific recipient, money transferred between bank accounts, and so on.»
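A minimal sketch of that row-driven idea in Python, assuming a CSV export of the spreadsheet with columns a, b, and expected; the add() service is a hypothetical stand-in for “the right service within your application”.

```python
# Data-driven service-level check: read rows, call the service, verify.
import csv

def add(a: float, b: float) -> float:
    """Hypothetical stand-in for the application service under test."""
    return a + b

def run_spreadsheet_tests(path: str) -> None:
    with open(path, newline="") as f:
        for row in csv.DictReader(f):  # expected columns: a, b, expected
            actual = add(float(row["a"]), float(row["b"]))
            expected = float(row["expected"])
            assert actual == expected, f"row {row}: got {actual}"

# Usage: run_spreadsheet_tests("calculator_cases.csv")
```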

Test Pyramid: UI

• «Automated user interface testing is placed at the top of the test automation pyramid because we want to do as little of it as possible.»

• «User interface tests often have the following negative attributes:

• Brittle. A small change in the user interface can break many tests. When this is repeated many times over the course of a project, teams simply give up and stop correcting tests every time the user interface changes.

• Expensive to write. A quick capture-and-playback approach to recording user interface tests can work, but tests recorded this way are usually the most brittle. Writing a good user interface test that will remain useful and valid takes time.

• Time consuming. Tests run through the user interface often take a long time to run. I’ve seen numerous teams with impressive suites of automated user interface tests that take so long to run they cannot be run every night, much less multiple times per day.»

The remaining role of user interface tests

• “But don’t we need to do some user interface testing? Absolutely, but far less of it than any other test type.

• Instead, we run the majority of tests (such as boundary tests) through the service layer, invoking the methods (services) directly to confirm that the functionality is working properly. At the user interface level what’s left is testing to confirm that the services are hooked up to the right buttons and that the values are displaying properly in the result field. To do this we need a much smaller set of tests to run through the user interface layer.

• Where many organizations have gone wrong in their test automation efforts over the years has been in ignoring this whole middle layer of service testing. Although automated unit testing is wonderful, it can cover only so much of an application’s testing needs. Without service-level testing to fill the gap between unit and user interface testing, all other testing ends up being performed through the user interface, resulting in tests that are expensive to run, expensive to write, and brittle”
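To illustrate how small that remaining UI layer can be, here is a sketch of a single smoke test in Python with Selenium WebDriver; the URL and element IDs are hypothetical. It only confirms the wiring and the displayed value, since the logic itself is already covered at the service layer.

```python
# Minimal UI smoke test: confirm the service is hooked up to the button
# and the result is displayed. URL and element IDs are hypothetical.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
try:
    driver.get("https://example.test/calculator")
    driver.find_element(By.ID, "a").send_keys("2")
    driver.find_element(By.ID, "b").send_keys("3")
    driver.find_element(By.ID, "add").click()
    # The arithmetic is tested at the service layer; here we only check
    # that the right value shows up in the result field.
    assert driver.find_element(By.ID, "result").text == "5"
finally:
    driver.quit()
```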

Automate Within the Sprint

Find a balance

Sources of information: 2 poles

• “Succeeding with Agile: Software Development Using Scrum” by Mike Cohn

• Habrahabr: Igor Khrol, “Test Automation: Discarding the Unnecessary and Checking the Essence” (#3)

• Igor Khrol, “Test Automation: Discarding the Unnecessary and Checking the Essence”, SQA Days-15 (#2)

• Habrahabr: Igor Khrol, “Test Automation: Discarding the Unnecessary and Checking the Essence” (#1)

The same two questions

• Percentages of what: time \ money or the number of test cases?

• What determines the pyramid \ triangle angles?

• How to “calculate” the two pyramids for a specific project \ context?

• How to “calculate” the exact percentages for a specific project \ context?

Solution

• Adapt the ROI calculator to your project \ context

• Calculate ROI to “build” the two test pyramids


ROI factors

Name | Definition | Impact
“run” per Release | How many times the TC must be “run” per release | +
“envs” | How many “envs” must be covered | +
“updated” per Release | How many times the TC must be “updated” per release | -
“write” | t\$ to “write” the TC | -
“develop” | t\$ to “develop” the TC | -
“update” | t\$ to “update” the TC | -
“prepare” env | t\$ to “prepare” the env for a TC run | -
“run” | t\$ to “run” the TC | -
“analyze” | t\$ to “analyze” the test result | -
“locate” | t\$ to “locate” a defect | -
“fix” | t\$ to “fix” a defect | -

(t\$ = cost in time or money.)
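One way to put numbers behind this table is to let the “+” factors scale the hours saved and sum the “-” factors as the hours spent; the sketch below does that per test case. The parameter names and the exact cost model are assumptions for illustration, not part of the talk.

```python
# Per-test-case payoff from the factor table: "+" factors multiply the
# manual effort saved, "-" factors accumulate as automation costs.
# All hours and the cost model itself are illustrative assumptions.

def automated_tc_payoff(runs_per_release: int, envs: int,
                        updates_per_release: int, releases: int,
                        manual_run_h: float,   # manual execution, per run
                        write_h: float,        # one-time: "write" the TC
                        develop_h: float,      # one-time: "develop" the TC
                        update_h: float,       # per "update" of the TC
                        prepare_env_h: float,  # per run: "prepare" env
                        run_h: float,          # per run: "run" the script
                        analyze_h: float,      # per run: "analyze" result
                        locate_h: float,       # per release: "locate" defects
                        fix_h: float) -> float:  # per release: "fix" scripts
    """Hours saved minus hours spent over `releases` releases;
    a positive result means automating this TC pays off."""
    runs = runs_per_release * envs * releases            # "+" factors
    saved = manual_run_h * runs
    spent = (write_h + develop_h
             + update_h * updates_per_release * releases
             + (prepare_env_h + run_h + analyze_h) * runs
             + (locate_h + fix_h) * releases)            # "-" factors
    return saved - spent
```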

Haven't we forgotten anything?

+1 more factor: Business Needs

Examples:

• Facebook, Bamboo

• HeadHunter

• Microsoft

• Military-related software (a Data Protection solution for the US Army)

Haven't we forgotten anything?

+1 more factor: Business Risks

Examples:

• A huge set of projects for Siemens

• Data Protection Systems: granular Back-up \ Restore plug-ins

• Back-end “oriented” software

Automation test types #1

1. UI – 5%

2. Acceptance – 5%

3. Integration – 10%

4. Unit – 80%

ROI factors

Name | Definition | Impact
“run” per Release | How many times the TC must be “run” per release | +
“envs” | How many “envs” must be covered | +
“updated” per Release | How many times the TC must be “updated” per release | -
“write” | t\$ to “write” the TC | -
“develop” | t\$ to “develop” the TC | -
“update” | t\$ to “update” the TC | -
“prepare” env | t\$ to “prepare” the env for a TC run | -
“run” | t\$ to “run” the TC | -
“analyze” | t\$ to “analyze” the test result | -
“locate” | t\$ to “locate” a defect | -
“fix” | t\$ to “fix” a defect | -

Testing types “comparative analysis”

Factor | Unit | Integration | Acceptance | UI
“run” per Release | + | +- | - | -
“envs” | - | +- | + | +
“updated” per Release | - | +- | -+ | -
“write” | + | +- | - | -
“develop” | + | +- | - | -
“update” | + | +- | - | -
“prepare” env | + | +- | - | -
“run” | + | +- | +- | -
“analyze” | + | +- | -+ | -
“locate” | + | +- | - | -
“fix” | + | +- | - | -
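One possible way to turn this qualitative table into the concrete percentages the talk keeps asking about: map the marks to numeric scores and allocate the automation budget proportionally. The score mapping below is an assumption for illustration, not part of the talk; with it, the split comes out to roughly 50 \ 31 \ 14 \ 6, close to the pyramid shapes shown in this deck.

```python
# Map the qualitative marks to scores and allocate the budget
# proportionally. The mapping (+ = 2, +- / -+ = 1, - = 0) is an
# illustrative assumption, not part of the original talk.
SCORE = {"+": 2, "+-": 1, "-+": 1, "-": 0}

TABLE = {  # factor -> marks for (Unit, Integration, Acceptance, UI)
    '"run" per Release':     ("+", "+-", "-", "-"),
    '"envs"':                ("-", "+-", "+", "+"),
    '"updated" per Release': ("-", "+-", "-+", "-"),
    '"write"':               ("+", "+-", "-", "-"),
    '"develop"':             ("+", "+-", "-", "-"),
    '"update"':              ("+", "+-", "-", "-"),
    '"prepare" env':         ("+", "+-", "-", "-"),
    '"run"':                 ("+", "+-", "+-", "-"),
    '"analyze"':             ("+", "+-", "-+", "-"),
    '"locate"':              ("+", "+-", "-", "-"),
    '"fix"':                 ("+", "+-", "-", "-"),
}

TYPES = ("Unit", "Integration", "Acceptance", "UI")
totals = [sum(SCORE[marks[i]] for marks in TABLE.values())
          for i in range(len(TYPES))]
for name, total in zip(TYPES, totals):
    print(f"{name}: {100 * total / sum(totals):.0f}%")  # ~50 / 31 / 14 / 6
```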

Automation test types #2

1. UI – 1%

2. End to End Flow – 4%

3. Workflow API – 6%

4. Integration – 9%

5. Domain Logic Acceptance – 10%

6. Unit – 70%

Sources of information: refresh

• “Succeeding with Agile: Software Development Using Scrum” by Mike Cohn

• Habrahabr: Igor Khrol, “Test Automation: Discarding the Unnecessary and Checking the Essence” (#3)

• Igor Khrol, “Test Automation: Discarding the Unnecessary and Checking the Essence”, SQA Days-15 (#2)

• Habrahabr: Igor Khrol, “Test Automation: Discarding the Unnecessary and Checking the Essence” (#1)

ROI Calculator: refresh

• ISKRA: Anton Semenchenko, “Metrics in Testing”

• SQA Days 16: Anton Semenchenko’s talk on selling automated testing

• SQA Days 15: Anton Semenchenko, “How to Organize Automation Effectively When You Don’t Have Enough Time, Resources, and Money”

• Solit 2014: Anton Semenchenko, “How to Effectively Sell Automation Services”

• Solit 2014: Anton Semenchenko, “How to Effectively Organize Automation”

• Anton Semenchenko, “The Evolution of the Test Environment”

• Anton Semenchenko, “Automated Testing Is Going Through Explosive Growth”

• Anton Semenchenko, “How an Automation Engineer Can Avoid Being Left Behind by Evolution”

• TMPA-2015: Anton Semenchenko, “Automated Testing: Yesterday, Today, Tomorrow – the Vectors of Development”

• SECR 2016: Anton Semenchenko, “Metrics in Testing: Practical Use in the CIS”

CONTACT ME

semenchenko@dpi.solutions

dpi.semenchenko

https://www.linkedin.com/in/anton-semenchenko-612a926b

https://www.facebook.com/semenchenko.anton.v

https://twitter.com/comaqa

www.COMAQA.BY

Community’s audience

• Testing specialists (manual and automation)

• Automation tool developers

• Managers and sales specialists in IT

• IT specialists thinking about moving into automation

• Students looking for a promising profession

Community goals

Create a unified space for effective communication for all IT specialists around automated testing.

Your benefits

• Listen to talks by leading IT specialists and share your own experience.

• Take part in free «promo» versions of top IT conferences in the CIS.

• Meet regularly at various forums, community «offices», social networks, and messengers.

www.COMAQA.BY

info@comaqa.by

https://www.facebook.com/comaqa.by/

http://vk.com/comaqaby

+375 33 33 46 120

+375 44 74 00 385

www.CoreHard.by

Community’s audience

• «Harsh» C++ developers & co: IoT, BigData, High Load, Parallel Computing

• Automation tool developers

• Managers and sales specialists in IT

• Students looking for a promising profession

Community goals

Create a unified space for effective communication for all IT specialists around «harsh» development.

Your benefits

• Listen to talks by leading IT specialists and share your own experience.

• Take part in free «promo» versions of top IT conferences in the CIS.

• Meet regularly at various forums, community «offices», social networks, and messengers.

www.CoreHard.by

info@corehard.by

https://www.facebook.com/corehard.by/

+375 33 33 46 120

+375 44 74 00 385