July 2013 | Volume VII | Issue 3
SHOWCASING THOUGHT LEADERSHIP AND ADVANCES IN SOFTWARE TESTING
Technical Debt: A Nightmare for Testers
By Michael Hackett
Agile Testing
LogiGear MAGAZINE
Quantify the Impact of Agile App Development
Larry Maccherone
Principles for Agile Test Automation
Emily Bache
Is Your Cloud Project Ready to be Agile?
David Taber
WWW.LOGIGEARMAGAZINE.COM | JULY 2013 | VOL VII | ISSUE 3
In our continuing effort to be the best source of information for keeping testers and test teams current, we have another issue to explore testing in Agile development. As Agile evolves, systemic problems arise and common rough situations become apparent. We want to provide solutions.
For anyone who has worked on Agile projects, especially if you have worked at more than one company or for a few clients, you know "testing in Agile" can be an adventure. Remember, there are no "rules" or best practices for Agile testing. There are better practices. Every team and Scrum implementation is unique. This is still evolving.
The varieties of Agile implementations, most commonly Scrum, have a nontraditional concept of testing. Yet most organizations still want someone to do the tasks associated with traditional testing, such as validation, regression testing, bug hunting, exploratory testing, scenario testing, data-driven testing, etc. These words have different connotations in Scrum and Agile.
This month we are tackling more Agile topics with a specific focus on how
these practices impact testing. As Agile matures and comes of age we
are learning more, adjusting our practices, modifying our strategies and
hopefully communicating better; we are being Agile.
In this issue, Emily Bache highlights principles to help you plan your testing strategy; I warn teams about the implications of technical debt; David Taber explains that if your team is set up to handle it, Agile can greatly benefit your projects in the cloud; Larry Maccherone looks at why Agile is becoming a vital strategy for small and large businesses; John Turner reviews Agile Testing: A Practical Guide for Testers and Agile Teams by Lisa Crispin and Janet Gregory.
As always, we hope this information is helping you solve problems and release higher quality products. September's issue is on Mobile Testing.
Happy Summer!
Michael Hackett
Senior Vice President, LogiGear Corporation
Editor in Chief
Letter from the Editor
Editor in Chief
Michael Hackett
Managing Editor
Brian Letwin
Deputy Editor
Joe Luthy
Worldwide Offices
United States Headquarters
2015 Pioneer Ct., Suite B
San Mateo, CA 94403
Tel +01 650 572 1400
Fax +01 650 572 2822
Viet Nam Headquarters
1A Phan Xich Long, Ward 2
Phu Nhuan District
Ho Chi Minh City
Tel +84 8 3995 4072
Fax +84 8 3995 4076
Viet Nam, Da Nang
7th Floor, Dana Book building
76-78 Bach Dang
Hai Chau District
Tel +84 511 3655 33
Fax +84 511 3655 336
www.LogiGear.com
www.LogiGear.vn
www.LogiGearmagazine.com
Copyright 2013
LogiGear Corporation
All rights reserved.
Reproduction without permission is prohibited.
Submission guidelines are located at
http://www.LogiGear.com/magazine/issue/news/editorial-calendar-and-submission-guidelines/
In this Issue
4 IN THE NEWS
5 PRINCIPLES FOR AGILE TEST AUTOMATION
Emily Bache
Principles to help you plan your testing strategy and tools for functional automated testing to design more maintainable, useful test cases.
7 TECHNICAL DEBT: A NIGHTMARE FOR TESTERS
Michael Hackett, LogiGear Corporation
Learning more about Scrum processes, or whatever lifecycle processes your team follows, can be a big benefit in preventing and dealing with debt.
14 IS YOUR CLOUD PROJECT READY TO BE AGILE?
David Taber, CIO, SalesLogistix
Agile can greatly benefit your projects in the cloud, provided that your team is set up to handle it.
16 QUANTIFY THE IMPACT OF AGILE APP DEVELOPMENT
Larry Maccherone, Rally Software
As the world demands more software, development teams, from scrappy startups to big corporations, are meeting the challenge with Agile.
18 AGILE TESTING GLOSSARY
Some of the terms used when discussing Agile testing.
19 BOOK REVIEW
John Turner
A review of Agile Testing: A Practical Guide for Testers and Agile Teams by Lisa Crispin and Janet Gregory.
23 VIETNAM'S NATIONAL COSTUME — THE ÁO DÀI
Brian Letwin, LogiGear Corporation
Today's áo dàis have come a long way from their imperial ancestors. But even as the country continues its march towards modernization, the dress is a core element of Vietnam's illustrious history.
In the News
Three New TestArchitect™ Products
LogiGear has expanded its TestArchitect product line with the introduction of three new TestArchitect Editions: Professional, Mobile Plus and Enterprise.
The Professional Edition is an economical automation solution for Windows-based applications. Mobile Plus offers Windows-based application plus mobile testing, and Enterprise includes all Windows, web, cloud and mobile testing capability. Mobile Plus and Enterprise support iOS and Android phones and tablets, with both web and hybrid app testing capability.
The Enterprise Edition includes name refactoring to reduce test case maintenance by making it possible to automatically update test case suites whenever a test entity name is changed.
Most Software Development Heads Fail to Meet Deadlines
Almost three-quarters (71 percent) of UK software development heads say conventional approaches to software development and testing mean new customer-facing applications are delayed.
CA Technologies questioned 301 in-house software development managers in enterprises across the UK, France and Germany.
More than half (56 percent) of UK developers reported that their IT department's reputation had been tarnished because of issues relating to "outdated" application development and testing methods.
While 59 percent of UK respondents cited quality and time-to-market on integration testing as major challenges, the figure was lower (48 percent) across all three countries. In the UK, 41 percent had issues with performance testing, compared to 32 percent overall.
Win a Free Ticket to EuroSTAR 2013
To celebrate the launch of the 21st EuroSTAR Conference Program, the organizers are giving members of the EuroSTAR Software Testing Community the opportunity to win one of three FREE places at this year's conference.
All you have to do to be in with a chance of winning is tell them why you want to attend EuroSTAR Conference 2013 this November.
Read more here: http://www.eurostarconferences.com/content/gothenburg-in-60-seconds
Principles for Agile Test Automation
By Emily Bache
I feel like I've spent most of my career learning how to write good automated tests in an Agile environment. When I downloaded JUnit in the year 2000 it didn't take long before I was hooked – unit tests for everything in sight. That gratifying green bar is near-instant feedback that everything is going as expected, my code does what I intended and I can continue developing from a firm foundation.
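A minimal sketch of the kind of unit test that produces that green bar. JUnit is a Java tool; the same idea is shown here in Python's stdlib unittest, with a hypothetical add() function standing in as the code under test.

```python
import unittest

# Hypothetical code under test -- any small, pure function works for illustration.
def add(a, b):
    return a + b

class AddTest(unittest.TestCase):
    # Each passing assertion keeps the bar green: near-instant feedback
    # that the code does what was intended.
    def test_adds_two_positives(self):
        self.assertEqual(add(2, 3), 5)

    def test_adds_a_negative(self):
        self.assertEqual(add(-1, 1), 0)

# Run with: python -m unittest <this_file>
```

A failing assertion turns the bar red immediately, which is exactly the fast feedback loop described above.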
Later, starting in about 2002, I began writing larger granularity tests, for whole subsystems; functional tests if you like. The feedback that my code does what I intended, and that it has working functionality, has given me confidence time and again to release updated versions to end-users.
I was not the first to discover that developers design automated functional tests for two main purposes. Initially we design them to help clarify our understanding of what to build. In fact, at that point, they're not really tests; we usually call them scenarios, or examples. Later, the main purpose of the tests becomes to detect regression errors, although we continue to use them to document what the system does.
When you're designing a functional test suite, you're trying to support both aims, and sometimes you have to make tradeoffs between them. You're also trying to keep the cost of writing and maintaining the tests as low as possible, and as with most software, it's the maintenance cost that dominates. Over the years, I've begun to think in terms of four principles that help me to design functional test suites that make good tradeoffs and identify when a particular test case is fit for purpose.
Readability
When you look at the test case, you can read it through and understand what the test is for. You can see what the expected behavior is, and what aspects of it are covered by the test. When the test fails, you can quickly see what is broken.
If your test case is not readable, it will not be useful, neither for understanding what the system does, nor for identifying regression errors. When it fails, you will have to dig through other sources outside of the test case to find out what is wrong. You may not understand what is wrong, and you will rewrite the test to check for something else, or simply delete it.
Robustness
When a test fails, it means there is a regression error (functionality is broken), or the system has changed and the tests no longer document the correct behavior. You need to take action to correct the system or update the test, and this is as it should be. If, however, the test has failed for no good reason, you have a problem: a fragile test.
There are many causes of fragile tests: for example, tests that are not isolated from one another, duplication between test cases, and dependencies on random or threaded code. If you run a test by itself and it passes, but it fails in a suite together with other tests, then you have an isolation problem. If you have one broken feature and it causes a large number of test failures, you have duplication between test cases. If you have a test that fails in one test run, then passes in the next when nothing changed, you have a flickering test.
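The isolation problem described above can be sketched like this (hypothetical Counter class, Python's unittest standing in for any xUnit framework): tests that share mutable state pass alone but fail together, while a fresh fixture per test removes the ordering dependency.

```python
import unittest

class Counter:
    """Hypothetical code under test."""
    def __init__(self):
        self.value = 0

    def increment(self):
        self.value += 1

# Fragile: a module-level object shared by every test couples them together.
# If one test increments it, the next test starts from a dirty state and
# fails only when run after its neighbor -- an isolation problem.
shared_counter = Counter()

class IsolatedCounterTest(unittest.TestCase):
    def setUp(self):
        # Robust: build a fresh fixture for each test so order never matters.
        self.counter = Counter()

    def test_one_increment(self):
        self.counter.increment()
        self.assertEqual(self.counter.value, 1)

    def test_two_increments(self):
        self.counter.increment()
        self.counter.increment()
        self.assertEqual(self.counter.value, 2)
```

Had both tests used shared_counter instead, each would pass in isolation and the suite's result would depend on execution order.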
If your tests often fail for no good reason, you will start to ignore them. Quite likely there will be real failures hiding amongst all the false ones, and the danger is you will not see them.
Speed
As an Agile developer you run your test suite frequently: (a) every time you build the system, (b) before you check in changes, and (c) after check-in in an automated Continuous Integration environment. I recommend time limits of 2 minutes for (a), 10 minutes for (b), and 60 minutes for (c). This fast feedback gives you the best chance of actually being willing to run the tests, and to find defects when they're cheapest to fix, soon after insertion.
Blogger of the Month
"Principles to help you plan your testing strategy and tools for functional automated testing, to design more maintainable, useful test cases."
If your test suite is slow, it will not be used. When you're feeling stressed, you'll skip running them, and problem code will enter the system. In the worst case, the test suite will never become green. You'll fix the one or two problems in a given run and kick off a new test run, but in the meantime you'll continue developing and making other changes. The diagnose-and-fix loop gets longer and the tests become less likely to ever all pass at the same time.
Updatability
When the needs of the users change, and the system is updated, your tests also need to be updated in tandem. It should be straightforward to identify which tests are affected by a given change, and quick to update them all.
If your tests are not easy to update, they will likely get left behind as the system moves on. Faced with a small change that causes thousands of failures and hours of work to update them all, you'll likely delete most of the tests.
Maintainability
Following these four principles implies maintainability. Taken all together, I think how well your tests adhere to these principles will determine how maintainable they are, or in other words, how much they will cost. That cost needs to be in proportion to the benefits you get: helping you understand what the system does, and regression protection.
As your test suite grows, it becomes ever more challenging to adhere to all the principles. Readability suffers when there are so many test cases you can't see the forest for the trees. The more details of your system that you cover with tests, the more likely you are to have Robustness problems – tests that fail when these details change. Speed obviously also suffers – the time to run the test suite usually scales linearly with the number of test cases. Updatability doesn't necessarily get worse as the number of test cases increases, but it will if you don't adhere to good design principles in your test code, or lack tools for bulk update of test data, for example.
I think the principles are largely the same whether you're writing skinny little unit tests or fatter functional tests that touch more of the codebase. My experience tells me that it's a lot easier to be successful with unit tests. As the testing thickness increases, the feedback cycle gets slower, and your mistakes are amplified. That's why I concentrate on teaching these principles through unit testing exercises. Once you understand what you're aiming for, you can transfer your skills to functional tests.
How can you use these principles?
I find it useful to remember these principles when designing test cases. I may need to make tradeoffs between them, and it helps just to step back and assess how I'm doing on each principle from time to time as I develop. If I'm reviewing someone else's test cases, I can point to code and say which principles it's not following, and give them concrete advice about how to make improvements. We can have a discussion, for example, about whether to add more test cases in order to improve regression protection, and how to do that without reducing overall readability.
I also find these principles useful when I'm trying to diagnose why a test suite is not being useful to a development team, especially if things have got so bad they have stopped maintaining it. I can often identify which principle(s) the team has missed, and advise how to refactor the test suite to compensate.
For example, if the problem is lack of speed you have some options and tradeoffs to make:
Replace some of the thicker, slower end-to-end tests with lots of skinny fast unit tests (may reduce regression protection).
Invest in hardware and run tests in parallel (costs $).
Use a profiler to optimize the tests for speed the same as you would production code (may affect Readability).
Use more fakes to replace slow parts of the system (may reduce regression protection).
Identify key test cases for essential functionality and remove the other test cases (sacrifices regression protection to gain Speed).
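The "use more fakes" option can be sketched as follows (all names hypothetical): the fake honors the same interface as the slow dependency but answers instantly from canned data, so the suite stays fast at the cost of some regression protection for the real component.

```python
import time

class RealExchangeRateService:
    """Stands in for a slow dependency, e.g. a call over the network."""
    def rate(self, currency):
        time.sleep(2)  # latency the test suite would pay on every run
        return 2.0     # canned value, just for this sketch

class FakeExchangeRateService:
    """Same interface, in-memory data: instant answers for tests."""
    def __init__(self, rates):
        self.rates = rates

    def rate(self, currency):
        return self.rates[currency]

def price_in_usd(amount, currency, service):
    """Code under test; the service is injected so a fake can replace it."""
    return amount * service.rate(currency)

# In the test suite, inject the fake. The real service is no longer exercised
# here, which is the regression-protection tradeoff noted above.
fake = FakeExchangeRateService({"EUR": 2.0})
assert price_in_usd(100, "EUR", fake) == 200.0
```

The design choice that makes this possible is dependency injection: because price_in_usd receives its service as a parameter, the test decides how thick or thin that dependency is.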
Strategic Decisions
These principles also help me when I'm discussing automated testing strategy and choosing testing tools. Some tools have better support for updating test cases and test data. Some allow very Readable test cases. It's worth noting that automated tests in Agile are quite different from those in a traditional process, since they are run continually throughout the process, not just at the end. I've found many traditional automation tools don't lead to enough Speed and Robustness to support Agile development.
I hope you will find these principles help you to reason about your strategy and tools for functional automated testing, and to design more maintainable, useful test cases. ■
About Emily
Emily Bache is an independent consultant specializing in automated testing and Agile methods. With over 15 years of experience working as a software developer in organizations ranging from multinational corporations to small startups, she has learnt to value the technical practices that underpin truly Agile teams. Emily is the author of "The Coding Dojo Handbook: a practical guide to creating a space where good programmers can become great programmers" and speaks regularly at international conferences such as Agile Testing Days and XP2013.
The sprint is almost over; the burndown chart has not budged. The test team sits around waiting. They hear about all kinds of issues, obstacles and impediments at the daily stand-up, but there is no code to test. Closing in on the demo and sprint review... then at Wednesday's standup the heroes arrive and tell everyone, "All the stories are done. Everything is in the new build. Test team - get to work! You have one day to test everything for this sprint; we will have an internal demo of everything tomorrow afternoon and a demo to the PO on Friday morning. Get busy!"
Sound familiar? Your team has just gone over the cliff into
certain technical debt.
As organizations build more experience being Agile, some trends have emerged. Technical debt is one of these trends, and that is not a good thing. Technical debt is a big topic and getting larger by the day. Much is written just about what it is! Some definitions stray far from the original, and some are completely wrong.
Companies and teams struggle with technical debt concerning its governance, management, documentation, communication, sizing and estimating, as well as tracking and prioritizing. Dealing with technical debt is difficult and new for most teams. There are dire predictions and warnings, and sadly, they are real. Some products, projects and teams have imploded from the weight of debt.
Like most concepts in Agile, technical debt can be used as a broad-brush classification, but here I will explore technical debt from just the testing perspective, focusing on testers and their part in technical debt.
What is technical debt?
Technical debt has a large and growing definition. Before going any further, let's look at the original definition of technical debt. First coined by Ward Cunningham, the financial metaphor referred only to refactoring.
Now people talk and write about technical debt using all sorts of financial jargon, like good debt, bad debt, interest, principal, mortgages and futures, while losing track of the real problem. Resist this. Stay basic. It is key for any organization to have a good, agreed-upon working definition of debt.
Technical debt happens when the team decides to "fix it later." Anything we put off or postpone is considered debt, and it will come due with an interest payment. This is not to be confused with bugs that need to be fixed. Bugs are almost always associated with the function of the system, not testing tasks. Bugs are communicated, handled and managed differently. Technical debt is, as Johanna Rothman says, "what you owe the product," such as missing unit tests and out-of-date database schemas - it's not about bugs!
Think of the difference between technical debt and bugs as similar to the old discussion of 'issues vs. bugs.'
Cover Story
Technical Debt: A Nightmare for Testers
By Michael Hackett, LogiGear Corporation
Learning more about Scrum processes, or whichever Agile lifecycle processes your team follows, can be a big benefit in preventing and dealing with debt.
Code Refactoring
Code refactoring is a "disciplined technique for restructuring an existing body of code, altering its internal structure without changing its external behavior," undertaken in order to improve some of the non-functional attributes of the software.
-Wikipedia
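A tiny sketch of what that definition means in practice (hypothetical pricing code): the discount rule is extracted into a named function, so the internal structure improves while the external behavior, the totals returned, stays identical.

```python
# Before: the discount rule is buried inline, so changing it means editing
# the loop body and re-reading the arithmetic.
def total_before(prices):
    total = 0
    for price in prices:
        total += price - price * 0.1
    return total

# After: the rule is extracted and named. Internal structure changes;
# external behavior does not.
DISCOUNT_RATE = 0.1

def discounted(price):
    return price * (1 - DISCOUNT_RATE)

def total_after(prices):
    return sum(discounted(p) for p in prices)

# The observable behavior is unchanged -- the essence of refactoring.
assert abs(total_before([100, 200]) - total_after([100, 200])) < 1e-9
```

A suite of tests pinned to the external behavior is what makes a refactoring like this safe, which is why deferred refactoring and missing tests so often travel together as debt.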
You know you have debt when you start hearing things like:
"Don't we have documentation on the file layouts?"
"I thought we had a test for that!"
"If I change X it is going to break Y... I think."
"Don't touch that code. The last time we did, it took weeks to fix."
"The server is down. Where are the backups?"
"Where is the email about that bug?"
"We can't upgrade. No one understands the code."
Andy Lester, Get out of Technical Debt Now!
Now let's look at the common causes and symptoms of technical debt so you can recognize when you are getting into a debt situation. This list has been gathered from a variety of sources to provide a solid and broad understanding of the causes and symptoms:
Lack of test coverage.
Muddy or overly rigid content type definitions.
Hardcoded values.
Misused APIs.
Redundant code.
Inappropriate or misunderstood design patterns.
Brittle, missing or non-escalating error handling.
Unscalable software architectural design.
Foundational commitment to an abandoned platform.
Missing or inaccurate comments and documentation.
"Black box" components.
Third-party code that's fallen far behind its public stable release.
Overly long classes, functions, control structures (cyclomatic complexity).
Clashing programming or software architectural styles within a single application.
Multiple or obscure configuration file languages.
Hardwired reliance on a specific platform or product (e.g., MySQL, Solaris, Apache httpd).
Matt Holford, Can Technical Debt Be Quantified? The Limits
and Promise of the Metaphor
The problem
From reading the list of technical debt, it's easy to see how products, processes and practices can get unnecessarily complicated and become slow, buggy and difficult to execute and manage. What follows is that the teams working on these systems spend more time dealing with systemic issues than developing new functionality, which slows down the delivery of customer value. By the way, decreasing velocity is often one of the first signs a team is dealing with too much technical debt.
Technical debt happens, and sometimes it is understandable. Software development happens over time. It's not a nice, linear process. Very often things are not clear until the team attempts to actually build something. Problems and solutions unfold along with the project's clarity, and we all know that not everything can be planned for.
Let's look at some reasons why this occurs:
User stories are too big.
The team did not fully understand the user story, or it lacked acceptance criteria to better describe what was to be built.
Low estimating skill or consistently unrealistic estimates.
No use of spikes to better understand what is to be developed.
Team is too pressured to "get it done!"
Weak ScrumMaster or overbearing Product Owner.
Unexpected things happened.
Very short timeframes for sprints make teams rush and focus only on what must be done to get a release - at the exclusion of "good things to do."
JIT (just-in-time) architecture or design.
Special concerns for Testers
1 - Team attitudes about Testing
There are situations where debt builds from how the team handles testing, specifically for testers. Some teams are still under intense pressure to deliver on a fixed date. Regardless of the state of testing, or findings from testing, or test coverage, there is pressure on testers to "say it works."
Some Agile basics from XP (eXtreme Programming) need to be understood here. Working at a sustainable pace and respecting a team's velocity are important. When there is old-style management ("chickens" dictating what has to be done to "pigs"), teams invariably have to cut corners, and testing almost always gets crunched.
Sometimes, teams get into debt trouble with testing because testers were not included in user story estimation. The testing takes longer than expected; the team cuts corners and builds debt. And there are always bugs! That is not the issue. It is the pressure to defer, minimize, or ignore them that builds debt.
Many of the original Scrum teams I worked with struggled with having cross-functional teams. Now that Scrum has been around for a few years, I see fewer companies attempting to have cross-functional teams.
When the Scrum Guide explains cross-functional teams, the description promotes iterative design, refactoring, collaboration, cooperation, and communication, but shuns handoff. All these things reduce gaps and provide early, expanded testing communication and information, providing for fuller understanding - all of which reduces technical debt. Yet the way Scrum has generally evolved promotes handoff and less collaboration and communication, which will increase technical debt.
For integrated teams, this means sitting together, discussing, talking and refactoring. It means asking questions and driving the development by developing tests (TDD); it is absolutely iterative and full of refactoring. Anti-Agile is when developers work in isolation and hand off completed code to testers to validate and call done.
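Driving development by developing tests, as mentioned above, runs in three beats: write a failing test first (red), write the minimal code to pass it (green), then refactor with the tests as a safety net. A sketch with a hypothetical fizzbuzz example, using Python's unittest as a stand-in for any xUnit tool:

```python
import unittest

# Red: these tests are written first and fail until the behavior exists.
class FizzBuzzTest(unittest.TestCase):
    def test_multiple_of_three(self):
        self.assertEqual(fizzbuzz(3), "Fizz")

    def test_multiple_of_five(self):
        self.assertEqual(fizzbuzz(5), "Buzz")

    def test_multiple_of_both(self):
        self.assertEqual(fizzbuzz(15), "FizzBuzz")

    def test_plain_number(self):
        self.assertEqual(fizzbuzz(7), "7")

# Green: the minimal code that makes the tests pass.
# (Refactoring would follow, protected by the tests above.)
def fizzbuzz(n):
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)
```

The point for debt prevention is that the tests exist the moment the code does; there is no "test it later" gap to defer.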
Handoff, otherwise known as AgileFalls, is a dirty word in Agile.
I was asked to help a company and found out, within the first half hour, that they had a programmer sprint, then a tester sprint. I said, "That sounds like waterfall." They had totally misunderstood Scrum teams.
2 - The Cliff: a special ScrumBut situation
Testers still get time crunched. Back in the traditional software development days, test teams very often lost schedule time they had planned for. This continues as a common practice in the Agile world. The following graphs allow you to visualize this situation.
The Crunch
Hans Buwalda has often used these diagrams to describe problematic software development projects. In the planning stage each phase or team gets its allotted time. When it comes to testing reality, requirements are defined late or added late, the design was late or the code was late, and testers get crunched on time so the team won't slip the schedule.
There is no way a test team can do an effective job at this point. Most teams in this situation, under pressure from product owners/customers/whomever, make up quick and dirty rules:
The story is "done but not tested." (ScrumBut)
Test it in the next sprint while they wait for new functionality. (AgileFalls)
Break the story into two stories, the coding and the testing. The coding is done. (ScrumBut and AgileFalls)
Say it's done, and if the PO finds a bug during the demo we can write a new user story on that bug. (ScrumBut)
...And many more creative and flawed ways to "count the story points for velocity," or say it's done and build more technical debt.
There is so much wrong with these solutions, so much ScrumBut and AgileFalls combined, that they need their own article on how to recognize and remediate them. We will discuss solutions to these problems later in the article, but for now, know that these situations are not normal, they are not Scrum, they are not good, and they need to be resolved in sprint retrospectives.
The Cliff
A theoretical burndown chart has the same idea. User stories and "user story points" get moved from "In Development" to "In Testing" at a somewhat steady pace over time and are delivered over time - this is ideal.
The troubling phenomenon common to so many teams these days is the cliff. Testers wait and wait and, as the final days of the sprint approach, the bulk of user stories get dumped on them, with the expectation of full validation and testing as the sprint demo and review come up.
3 - Specific to automation
Many Agile teams take their development practices from XP (eXtreme Programming), such as TDD (test-driven development), CI (continuous integration), pair programming, sustainable pace, small releases and the planning game. There are many foundation practices that allow teams to achieve these goals: specifically, unit test automation, user story acceptance criteria automation, high-volume regression test automation and automated smoke tests (quick build acceptance tests for the continuous integration process).
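An automated smoke test of the kind just mentioned is deliberately shallow and fast: a few checks that the fresh build starts and answers at all, gating the CI pipeline before the deeper suites run. A sketch with a hypothetical App object standing in for the freshly built system:

```python
import unittest

class App:
    """Hypothetical application; in CI this would be the freshly built system."""
    def __init__(self):
        self.running = False

    def start(self):
        self.running = True
        return self

    def health(self):
        return "OK" if self.running else "DOWN"

class SmokeTest(unittest.TestCase):
    """Quick build acceptance checks: is this build worth testing further?"""
    def test_app_starts(self):
        self.assertTrue(App().start().running)

    def test_health_check_answers(self):
        self.assertEqual(App().start().health(), "OK")
```

In a CI setup the pipeline would run this suite on every check-in and refuse to promote a build whose smoke tests fail, keeping the feedback loop minutes long rather than days.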
Many test teams struggle with the need for speed in automating tests in Agile development. To create and automate tests quickly, some teams use unstructured record-and-playback methods, resulting in "throw-away automation." Throw-away automation is "quick and dirty," typically suitable only for the current sprint and created with no intention of maintenance. Struggling teams will be resigned to throw-away automation, or do 100% manual testing during a sprint and automate what and if they can, one, two, three or more sprints after the production code is written.
Automation suites that lose relevance with new functional releases in Agile, without enough time for maintenance, upgrade, infrastructure, or intelligent automation design, are a drain on resources. In my experience, test automation is rarely accounted for when product teams quantify technical debt. This is changing, and needs to change more.
To remedy these problems, many teams are conducting test automation framework development in sprints independent of the production code. Since automation is software, its development can be treated as a separate development project supporting the production code. Using this approach, automation code should have code review, coding standards and its own testing; otherwise technical debt will accrue in the form of high maintenance costs.
I've always been amazed at the lack of coding standards, design and bug-finding work applied to automation code that is intended to verify production code created with rigorous processes. I hope I'm not the only one who sees the shortsightedness in this.
4 - "Done but not done done"
It could be said here that any team using the phrase "done but not done done" is building debt just by saying it! There is a mess building up around the Definition of Done, the DoD. The Scrum Guide stresses that a team needs to have a clear Definition of Done, but it's becoming obvious over time that teams don't always have one.
The Definition of Done, for some teams, has morphed into the old waterfall style of "milestone criteria" and "entrance/exit criteria." It's nice to have, but it isn't really enforced due to schedule constraints and product owner pressure to get some functionality out to the customer. This is a problem.
With small, often stressed sprint iterations, the pressure to call something done and move on to the next function to develop very often surpasses the pressure to get things done right. Corners get cut and things are skipped (this is the essence of technical debt!). A strong, enforced Definition of Done will help prevent technical debt. I have seen this in many smooth-running Agile development environments: a great DoD is the foundation, and it is strictly enforced. The phrase "done done" is never used. It is only ever done or not!
The Definition of Done must be agreed upon by the team. It is enforced by the ScrumMaster, not testers. This, by the way, is a great change in Agile. Many traditional-style development teams were dictated to by the milestone police, sometimes creating ill will as testers held back milestones or felt trampled when milestones were passed without having actually been achieved. In Scrum, it is the ScrumMaster's job to determine done. But since the tester provides much of the information, this is a great place for them to provide full communication about what is tested, what is automated, what works, what doesn't, and any ancillary tasks that may not have been done. This is the core of risk recognition and risk communication.
W W W . L O G I G E A R M A G A Z I N E . C O M J U L Y 2 0 1 3 ǀ V O L V I I ǀ I S S U E 3
Having a solid, universally known, adhered-to Definition of Done (explicit, agreed upon, no exceptions, workable, meaningful, measurable) is essential.
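A strict, checklist-style Definition of Done can be sketched in a few lines of code. This is only an illustration; the criteria names are assumptions, not a prescribed list — each team agrees on its own.

```python
# Minimal sketch: a story counts as "done" only if every DoD criterion
# passes. Criteria names below are illustrative assumptions.
DEFINITION_OF_DONE = [
    "code reviewed",
    "unit tests pass",
    "regression tests automated",
    "documentation updated",
]

def is_done(story_status):
    """No 'done done': a story is done, or it is not."""
    return all(story_status.get(criterion, False) for criterion in DEFINITION_OF_DONE)

story = {"code reviewed": True, "unit tests pass": True,
         "regression tests automated": False, "documentation updated": True}
print(is_done(story))   # False: a corner was cut, so debt accrued
```

The point of making the checklist explicit is that a skipped criterion is visible rather than quietly waved through.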
5 - Test Documentation
When testing is not documented, you create debt.
Agile practices have borrowed much from Lean development. Lean means lean documentation, not no documentation! Document what is essential. Eliminate redundancy. Do not document merely to document, or because there is a tool to document; resist this. Document for knowledge transfer, repeatability, automation, and correctness: good reasons.
6 - A special reminder on Risk
Testers have to be especially skilled in all aspects of risk management. Technical debt sometimes sneaks up on teams because individuals do not recognize it or don't effectively communicate it. Some of the most important skill sets testers can build are risk recognition, risk assessment, and risk communication. Recognizing and communicating risk should be as lucid and pragmatic as possible, so as to prevent debt or to enable a better risk analysis before accruing debt.
Preventions and Solutions
For the whole team, prevent debt by implementing these great recommendations from Ted Theodoropoulos:
Document system architectures.
Provide adequate code coverage in QA processes.
Implement standards and conventions.
Properly understand the technologies leveraged.
Properly understand the processes the technology is
designed to support.
Refactor code base or architecture to meet changing
needs.
Technical Debt, by Ted Theodoropoulos
Prevent technical debt by investing heavily in test automation. The more you invest in the design and code of your test automation, the bigger the long-term payback you will have from it. Shortcuts and sloppy test automation will wind up costing your team more in the immediate term and significantly more over time.
What is becoming clearer to many teams is the similarity of technical debt to financial debt, and that ignoring it will result in the worst possible outcome: the debt gets bigger! Recognize it. State it. Write it down. Communicate it. Make the debt visible. Keep an explicit debt list. Developer Alex Pukinskis suggests teams use fluorescent pink index cards on the team/kanban board for debt.
Service the debt. If you accumulate too much debt, you spend all your income/resources paying down the interest and never the principal. Get a great definition of technical debt, and have it agreed upon.
Use spikes to better analyze user stories that are not well understood or that teams are hesitant to estimate.
Pay attention to velocity! When velocity drops, the team is bogged down or not getting stories "done."
Prevent debt by improving estimating skills and learning from mistakes.
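The explicit, visible debt list and the "interest" it charges can be sketched as data. The fields and numbers below are illustrative assumptions, not a prescribed format.

```python
# Hedged sketch of an explicit technical-debt list: record each item and
# its estimated ongoing cost ("interest") so the drag on velocity is visible.
from dataclasses import dataclass, field

@dataclass
class DebtItem:
    description: str
    interest_per_sprint: float   # estimated recurring cost, in story points

@dataclass
class DebtRegister:
    items: list = field(default_factory=list)

    def record(self, description, interest_per_sprint):
        self.items.append(DebtItem(description, interest_per_sprint))

    def interest(self):
        """Total velocity lost to servicing debt each sprint."""
        return sum(item.interest_per_sprint for item in self.items)

register = DebtRegister()
register.record("skipped regression automation for checkout", 2.0)
register.record("no docs for deploy scripts", 0.5)
print(register.interest())   # 2.5 story points of drag every sprint
```

Even a list this simple makes the "pay interest or pay down principal" conversation concrete at sprint planning.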
A few Scrum basics and reminders:
The PO accepts a story as done. If development, process, or testing shortcuts were taken, point them out.
The Scrum Master must be an expert on the Scrum process:
Be the Scrum police.
Remove obstacles and stick to the rules. Keep chickens out of the pigs' way.
If Scrum rules are being routinely broken or compromised, the Scrum Master has the responsibility to fix it.
Accurately measure burndown and velocity.
Testers and the whole team need to use sprint retrospectives.
Where processes are not followed or break down, recognize it, report it, and communicate it effectively.
Chickens and pigs: chickens need to be made more aware of velocity and the realities of what can't get done.
A story is not done if there are bugs or issues to resolve.
Don't split stories.
Have a great Definition of Done, enforced by the Scrum Master, not testers. Remove the politics of testers being the "no" people.
Use hardening or regression sprints for integration.
Summary
Technical debt is inevitable, but it is not all bad. It has to be communicated, managed, and serviced! When teams are chronically in debt, the issues that caused it need to be recognized, communicated, and hopefully resolved.
Test teams play a special role in certain types of debt.
Test teams should be especially careful not to create more debt through compromised test automation.
Test teams can greatly help the whole team by recognizing and communicating debt issues as they arise. They need to document intelligently, not document everything.
Learning more about Scrum processes, or whatever lifecycle processes your team follows, can be a big benefit in preventing and dealing with debt. ■
About Michael
Michael Hackett co-founded LogiGear in 1994 and leads the company's LogiGear University training operations division, setting the standard in software testing education programs for many of the world's leading software development organizations.
Michael is co-author of the popular Testing Applications on the Web (Wiley, 2nd ed. 2003), and has helped many clients produce, test, and deploy applications ranging from business productivity to educational multimedia across multiple platforms and multiple language editions. His clients have included Palm Computing, Oracle, CNET, Electronics for Imaging, The Learning Company, and PC World.
Prior to co-founding LogiGear, Michael managed QA teams at The Well, Adobe Systems, and PowerUp Software. He holds a Bachelor of Science in Engineering from Carnegie Mellon University.
Agile 2012 State of the Union—Serena Software
I n the decade since the Agile Manifesto, the movement
has encouraged a number of best practices like test-
driven development, user-centered design, iterative
development, clean code, refactoring, continuous integra-
tion, and—arguably—cloud computing. I'm a card-carrying
Agile zealot, and to me its benefits are unarguable.
"There's a catch, though: not every IT organization can really implement Agile, let alone profit from it. There are organizational, project, and personnel characteristics that can make Agile downright dangerous. The awesome price of freedom is that you have to live up to its obligations."
Is your IT organization ready to be Agile, seriously? Score
yourself on these questions:
Is this the right project?
Be discriminating about where you apply Agile, as certain projects just fit better. For example, if the project has a lot of code that was developed using waterfall methodologies, or is inextricably bound to tools and infrastructure that can't be modernized, you should do your learning elsewhere. If the project by its nature must be released as a "big bang" or slash-cut deployment, you'll need a very experienced Agile team.
Are the expectations for UI functionality and schedule reasonable?
You can't design a good UI with a gun to your head.
Is this the right project, revisited?
Is the project already in trouble? Does it have achievable goals? Is the project late, already over budget, and mired in politics? Take a pass on Agile until things are in balance.
Are these the right users?
Agile projects demand a lot from team members, and that goes double for the internal users who are on the team. Are the user representatives flexible yet consistent, well informed yet willing to learn new habits, energetic without the need to throw their weight around? Do the user representatives actually know the business process, without having to guess "how would this work in the business?" Do they have an inherent sense of what is technologically within reach given your budget and schedule? Are they personally investing their time in the team's success, or are they likely to point fingers? Think Disneyland: users must be at least this mature to ride the Agile roller coaster.
Can the focus stay on business value?
Agile delivery means doing something small, meaningful, and valuable, and doing it fast, over and over again, to deliver the most valuable things first and to build trust. If your company uses terms like requirements document, cost per function point, or defects per KLOC, Agile is going to be a rough transition.
Does upper management really get it?
Some execs can't stop "helping": scrutinizing weekly progress reports, asking who's assigned to specific tasks, demanding schedule updates mid-iteration, celebrating hoped-for successes before they're delivered.
They may also think Agile just means they can ask for more features at any point in the project without consequences. Dilbert has a great series on this topic: if you recognize your boss in it, you might not want to push Agile quite yet.
Feature
Is Your Cloud Project Ready to be Agile?
Provided your team is set up to handle it, Agile can greatly benefit your projects in the cloud.
By David Taber, CIO, SalesLogistix
Does finance have the right headgear?
If the finance folks are talking about value and investing to maximize business results, you're on the right track. If all they can talk about is fixed budgets, defined deliverables, and the words "compliance" or "variance," you haven't got a chance.
Does your IT team have the horsepower?
Agile is hard, and, let's face it, requires considerable brainpower. Companies that really are Agile at scale, firms like ThoughtWorks or Salesforce.com, have staff who are smarter than the average bear. And it isn't just smarts: it's an attitude of "deliver or die," including the occasional all-nighter for the sake of solid, clean code. Developers need to have a good visual sense, be flexible, and focus on delivering the smallest amount of code that satisfies the need. Agile developers don't just listen passively to the users; they add value to the conversation. Excellent Agile firms tend to have a knack for hiring well... and when they don't, firing fast. Be honest with yourself when scoring this one.
Do you have the right process maturity?
Because cloud computing must span several vendors' domains, developers will have to leverage several languages, libraries, and layers of infrastructure. That means the available infrastructure is fragmentary, at best. Whether it's debugging, configuration management, deployment, or error logging, your team will have to roll a lot of its own operational infrastructure. So when it comes to internal processes for parallel development, continuous integration, deployment validation, and real-time troubleshooting, your team is going to need a lot of discipline and good reflexes. Without these, Agile only means chaos. Again, be honest with yourself when scoring this one, as it's not an area where you want to do a lot of on-the-job training.
Does the team have the right attitude about what's being developed?
It's OK to make a UI that's "good enough," particularly if you do it under budget. Agile teams assume that UI code is disposable and may have to be completely replaced in 18 months, even if everything is right today. For most UI projects, perfectionism simply doesn't pay. Moving to Agile takes a mix of energy, skills building, internal trust, investment, and patience. Agile provides a huge strategic advantage over the longer term, but only if you give it time to succeed in the shorter term.
Pick your project carefully, put some of your best folks on it,
give them the room to learn...and to make a few mistakes.
Try to convince the execs to help less. If this doesn't sound
like your organization and project, reconsider. ■
I'd like to thank Bo Laurent and Rich Mironov for their excel-
lent contributions to this article.
About David
David Taber is the author of the Prentice Hall book "Salesforce.com Secrets of Success," a book on best practices for both system implementers and company executives.
David is also the CEO of SalesLogistix, a certified Salesforce.com integrator with 100+ clients in the US, Canada, Europe, Israel, and India. SalesLogistix specializes in improving business processes, policies, and people issues in concert with extending Salesforce.com systems. He also serves as an expert witness for court cases involving SFA, CRM, and forecasting issues, and is a member of the Forensic Expert Witness Association.
David earned his BA and an MBA from the University of California. He has been a guest lecturer at the graduate business schools of the University of California and Carnegie Mellon University, and has taught at the University of California, Berkeley extension.
Agile is here to stay. Once the radical alternative to Waterfall development methods, Agile practices are now disrupting and replacing those legacy methodologies, improving time-to-market, reducing development costs, and producing higher-quality software that better meets customer expectations. As the world demands more software, development teams, from scrappy startups to big corporations, are meeting the challenge with Agile.
But while Agile software development projects scale across
the enterprise, management is still searching for the best
way to gain deeper visibility into these projects. Large or-
ganizations cannot rely on the subjective anecdotes of
those closest to the work; they require quantitative insight
upon which to base business decisions.
Here are some quick tips to quantify the impact of the choices you
make during and after your Agile transformation.
1. Start with desired outcomes, not what's easy to measure
Better measurement leads to better insights, which in turn lead to better decisions and eventually better outcomes. Most people start by measuring what's easy. But measuring what's easy can drive the wrong behavior. Let's take a look at a story about two NBA players.
In 2010, Monta Ellis, with the Golden State Warriors, was the 9th-highest scorer in the NBA. Carmelo Anthony, with the Denver Nuggets, was the 8th-highest. Measuring individual scoring totals is easy. You would assume that because they were prolific scorers, their teams would win. However, when they were in the game, their teams won less. Scoring is a function of two other measures: 1) the number of shots taken and 2) the percentage of those shots that go in the basket. It turns out these two "stars" have low values for #2, their shooting percentage. The only reason they are high scorers is that they take more shots. They are literally stealing shots from teammates who might have a better chance of scoring.
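The decomposition can be sketched numerically. The shot counts and percentages below are hypothetical, invented for illustration, not the players' actual statistics.

```python
# Total points decompose into shot volume x efficiency.
def points(shots, fg_pct, pts_per_make=2):
    """Points scored as a function of attempts and shooting percentage."""
    return shots * fg_pct * pts_per_make

# A high-volume, low-efficiency "star" vs. a more efficient teammate.
star = points(shots=25, fg_pct=0.40)      # about 20 points on 25 attempts
teammate = points(shots=12, fg_pct=0.55)  # about 13 points on only 12 attempts

# Per-shot value shows who the team should actually feed the ball to.
assert star / 25 < teammate / 12
```

The easy measure (total points) ranks the star first; the per-shot measure reveals the opposite, which is exactly why easy measures can drive the wrong behavior.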
So, while the flow of learning goes from measures to outcomes, the way we think about it should start with outcomes. That's why we call this ODIM:
better OUTCOMES ← better DECISIONS ← better INSIGHTS ← better MEASURES
The NBA players should focus on the outcome of winning more games rather than on being high scorers. If they used the overall likelihood of the team scoring under various conditions as feedback, it would help them make better game-time decisions to achieve the ultimate outcome of winning. This brings us to our second tip.
2. Think of measurement as feedback, not levers
Frequent feedback is the primary difference between Waterfall and Agile development. Successful Agile projects incorporate short iterations with fast feedback from customers. The key to effective Agile measurement is to think of measurement in terms of feedback, not as the traditional lever to motivate behavior. The lever approach often devolves into keeping score, which is where the dark side of measurement starts; avoid it.
There is a subtle but important distinction between "feedback" and "lever." Feedback is something you seek to improve your own performance. Levers are used to influence others. The difference lies more in how you use the measure than in the measure itself.
For example, healthy use of a burndown chart tells the team whether they are on track with their commitment so they can make adjustments in time. The counterexample is a manager using burndown charts to red-flag projects in trouble.
Feature
Quantify the Impact of Agile App Development
As the world demands more software, development teams - from scrappy startups to big cor-
porations - are meeting the challenge with Agile.
By Larry Maccherone, Director of Analytics, Rally Software
While it may drive improvement, nobody wants the red flag thrown at them, so the tendency is to keep the metric in the green regardless of the reality of the situation.
You can't make better-informed decisions if the metrics you are using to gain insight don't accurately represent reality (see tip #1). Rather, the manager could provide coaching on tools the team can use to improve its own performance: a subtle but critical difference.
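The healthy, feedback-style use of a burndown chart amounts to a simple calculation the team can run for itself. The sprint numbers and the 10% tolerance below are illustrative assumptions.

```python
# Minimal burndown sketch: compare remaining story points against the
# ideal straight-line trend so the team gets feedback in time to adjust.
def ideal_remaining(total_points, sprint_days, day):
    """Straight-line burndown: what should remain after `day` days."""
    return total_points * (1 - day / sprint_days)

def on_track(remaining, total_points, sprint_days, day, tolerance=0.1):
    """Feedback, not a lever: is the team within tolerance of the trend?"""
    return remaining <= ideal_remaining(total_points, sprint_days, day) + tolerance * total_points

# Day 5 of a 10-day sprint that started with 40 points of committed work.
print(ideal_remaining(40, 10, 5))   # 20.0
print(on_track(24, 40, 10, 5))      # True: within 10% of the ideal line
print(on_track(30, 40, 10, 5))      # False: time for the team to adjust
```

Used this way, the number belongs to the team; the same calculation becomes a lever only when someone else uses it to keep score.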
3. A balanced measurement regime or none at all
Balance in Agile measurement (Figure A) includes four cornerstones:
1. Do it fast.
2. Do it right.
3. Do it on time.
4. Keep doing it.
Figure A
Without balance among these four elements, it's easy to focus on just one. For example, if we focus only on increasing productivity, we will likely drive down quality and customer satisfaction.
4. Measure specific outcomes for software
1. Productivity
2. Responsiveness
3. Quality
4. Customer Satisfaction
5. Predictability
6. Employee Engagement
These six outcomes are the elements of the Software Development Performance Index (SDPI), used to quantify insights about development work and provide feedback on how process and technology decisions impact the development team's performance. Know what to measure, and focus on each individual element.
5. Listen to Experts
Agile has caught the attention of leading industry analysts, asserting itself as a key part of application lifecycle management (ALM) evaluations. These evaluations encompass more than just functionality for developers; they assess commitment to the ALM market, ALM product strategy, corporate strategy, and market presence.
In fact, independent research firm Forrester Research, Inc. recently evaluated the most significant ALM software providers, treating Agile and Lean as critical tests of an ALM vendor's offering. The report also found that businesses "can no longer accept historical gulfs between business and application development and delivery teams as, increasingly, firms now expect to manage application development and delivery as a business and treat it as a competency."
The Agile perspective for software development metrics
In the move to Agile, the overall goals are largely the same as before: to delight users with a quality product delivered in a predictable and efficient manner. Even after your Agile transformation, you will largely do the same "types" of things: analyze, design, code, test, release, maintain, and, yes, measure.
It's the perspective you take when doing these things that is different with Agile. ■
About Larry
Larry started working at Rally Software in 2009 and is currently Rally's Director of Analytics. He is working on his PhD in Software Engineering at Carnegie Mellon.
Prior to starting work full-time on his PhD, Larry served as the Manager of Software Assurance Initiatives for the CyLab at Carnegie Mellon, where he promoted the development and widespread adoption of best practices, tools, and methods.
His interests include measurement/analytics, Agile methodologies, software engineering, software craftsmanship, and software assurance. He also has a strong interest in information visualization and a passion for coding in general, including programming language technology and design patterns.
Lean
Defining Lean Software Development is challenging because there is no specific Lean Software Development method or process. Lean is not an equivalent of Personal Software Process, V-Model, Spiral Model, EVO, Feature-Driven Development, Extreme Programming, Scrum, or Test-Driven Development. A software development lifecycle process, or a project management process, could be said to be "lean" if it is observed to be aligned with the values and principles of the Lean Software Development movement. So those anticipating a simple recipe that can be followed and named Lean Software Development will be disappointed. You must fashion or tailor your own software development process by understanding Lean principles and adopting the core values of Lean.
There are several schools of thought within Lean Software Development. The largest, and arguably leading, school is the Lean Systems Society, which includes Donald Reinertsen, Jim Sutton, Alan Shalloway, Bob Charette, Mary Poppendieck, and David J. Anderson. Mary and Tom Poppendieck's work developed prior to the formation of the Society, and its credo stands separately, as does the work of Craig Larman, Bas Vodde, and, most recently, Jim Coplien. This article seeks to be broadly representative of the Lean Systems Society viewpoint as expressed in its credo and to provide a synthesis and summary of its ideas.
Lean Software Development is more strategically focused than other Agile methodologies. The goals are to develop software in one-third the time, with one-third the budget, and with one-third the defect rate.
JIT (Just in Time), another important element of Lean, is a production strategy that strives to improve a business's return on investment by reducing in-process inventory and associated carrying costs. This improves quality at every step and empowers the team.
The 7 key principles of Lean software development are:
1. Eliminate Waste.
2. Quality at every step/Build Quality In.
3. Create Knowledge.
4. Decide JIT (just in time)/Defer Commitment.
5. Deliver Fast.
6. Respect People.
7. Optimize The Whole.
Sources: Microsoft, allaboutagile.com, and codebetter.com.
Kanban
The Kanban method derives from the kanban system formulated at Toyota by Taiichi Ohno to improve and maintain a high level of production. It is a way to organize the chaos that surrounds so many delivery teams by making the need for prioritization and focus clear.
It is also a way to find workflow and process problems to solve in order to deliver more consistently to your client or customer. Both of these are made possible by introducing constraints into the system to optimize the flow of value. Flow of value is king: if you can't get your business value flowing out the door consistently, your business is not performing optimally.
Finally, Kanban resets your brain to value finishing over starting. It sounds like common sense, right? Well, if you're like most developers, you have been conditioned into associating your value with what you have started. Kanban reminds you to stop starting and start finishing!
Kanban has 8 things you need to know. They are broken
down into three basic principles (how you need to think)
and five properties (what you need to do).
How you need to think
1. Start with what you do now.
2. Agree to pursue incremental, evolutionary change.
3. Respect the current process, roles, responsibilities &
titles.
What you need to do
4. Visualize the workflow. A team using Scrum may call its board a Scrum board or "whiteboard," but the board comes from Kanban and is called a Kanban board.
5. Limit Work in Progress (WIP).
6. Manage flow.
7. Make process policies explicit.
8. Improve collaboratively (using models and the scientific method).
Sources: Wikipedia, kanbanblog.com, and agileproductdesign.com.
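Property 5, limiting work in progress, can be sketched in a few lines of code. The column name and the limit below are illustrative assumptions.

```python
# Minimal sketch of a Kanban column that enforces a WIP limit: a new card
# can only be pulled in when the column has capacity, which forces the
# team to finish work before starting more.
class KanbanColumn:
    def __init__(self, name, wip_limit):
        self.name = name
        self.wip_limit = wip_limit
        self.cards = []

    def pull(self, card):
        """Pull a card into the column only if the WIP limit allows it."""
        if len(self.cards) >= self.wip_limit:
            raise RuntimeError(
                f"WIP limit {self.wip_limit} reached in '{self.name}': finish something first")
        self.cards.append(card)

doing = KanbanColumn("Doing", wip_limit=2)
doing.pull("story-101")
doing.pull("story-102")
try:
    doing.pull("story-103")   # the third card is refused
except RuntimeError as e:
    print(e)                  # stop starting, start finishing
```

The refusal is the point: the constraint surfaces the bottleneck instead of letting work pile up invisibly.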
Glossary: Agile Testing
I have worked with testers on an Agile team before, and it has worked very well for both the team and the customer. In my previous role at Bank of Ireland, testers who had come from a traditional testing background worked within our teams to help ensure we had quality deliverables at the end of each iteration. It was different from traditional test approaches in that they sat with the team, collaborated constantly, and were integral to the process of developing the solution. I never found them critical of poor quality or guilty of ring-fencing roles and responsibilities. This was refreshing, and without doubt a better way of ensuring quality than those I had experienced before.
Recently, here at Paddy Power, we have been interviewing for a number of open Agile tester positions. I'm pretty sure I know what a good Agile tester looks like, but I have often struggled to fully articulate what that entails. I have had this book on my shelf for a couple of months, and now I'm looking to it to help me fully understand what an Agile tester is.
What is Agile testing anyway?
The book starts by introducing Agile testing and comparing Agile to traditional testing. I liked the way the activities of programmers and testers on an Agile team were explained using the terms "technology-facing tests" and "business-facing tests." It is sometimes confusing for people who come to understand that programmers do testing why there is a need for a tester on an Agile team, and these terms help clarify that need. To quote the book directly, testers on an Agile team "...are working together with a team of people who all feel responsible for delivering the best possible quality..."
I have recently had a number of conversations about having a tester in an Agile team versus having a separate test team. Again, quoting from the book: "Testers are also on the developer team, because testing is a central component of Agile software development. Testers advocate for quality on behalf of the customer and assist the development team in delivering the maximum business value." To steal a term from Lean, I think this is the best way to ensure you "have quality built in." This whole-team approach also ensures that the "...team thinks constantly about designing code for testability."
Ten principles for Agile testers
One of the behaviors that used to frustrate me as a developer was that testers were seen as successful when they found bugs in the software. Agile testing sees a successful tester as one who helps the team ensure the software does not contain any bugs. This book reinforces this by saying, "An Agile tester doesn't see herself as a quality police officer, protecting her customers from inadequate code."
Lisa and Janet go on to list the ten principles as:
Provide continuous feedback.
Deliver value to the customer.
Enable face-to-face communication.
Have courage.
Keep it simple.
Practice continuous improvement.
Respond to change.
Self-organize.
Focus on people.
Enjoy.
People familiar with Agile in general will find these quite
familiar.
Book Review
A Practical Guide for Testers and Agile Teams
By John Turner
Cultural challenges
Many of the cultural challenges discussed are similar to those faced by any team member adopting Agile for the first time. As such, I had covered many of these while reading Mike Cohn's Succeeding with Agile. There are, however, a number that are either unique to, or more acute for, the testing organization.
In my experience, testing is often the poor cousin of programming. Test teams are often underfunded, have limited influence, and are not given a voice. Those who are given these things can fall into the trap of assuming the role of "quality police," which fosters antagonistic behavior.
To succeed in an Agile team, testers must become fully signed-up members. For this reason, I really liked the "Tester Bill of Rights" outlined by Lisa and Janet.
Tester Bill of Rights
You have the right to bring up issues related to testing, quality, and process at any time.
You have the right to ask questions of customers, programmers, and other team members, and to receive timely answers.
You have the right to ask for and receive help from anyone on the project teams, including programmers, managers, and customers.
You have the right to estimate testing tasks and have these included in story estimates.
You have the right to the tools you need to perform testing tasks in a timely manner.
You have the right to expect your entire team, not just yourself, to be responsible for quality and testing.
Team logistics
As I mentioned previously, I have recently been involved in a discussion on independent test teams versus a tester as part of an Agile team. This was covered very well, with Lisa and Janet putting forward very compelling arguments for disbanding independent test teams and absorbing testers into the Agile team.
Similar to Mike Cohn's "Community of Practice," it is still necessary to bring together like-minded people (testers in this case) to share experiences and learn from one another. The other points made around inclusive discussion, performance, rewards, and so on are a little repetitive if you have read Cohn's books on Agile.
Transitioning typical processes
Quite a bit of this chapter is dedicated to metrics and defect tracking. I found this very useful, and it reinforces some of my own ideas on the subject. For example, a defect found and fixed within an iteration should not really be tracked. The value in tracking defects is only relevant once bugs are released to the customer, which typically means from user acceptance testing onward.
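That tracking policy can be sketched as a simple filter. The phase names and defect fields below are illustrative assumptions, not from the book.

```python
# Hypothetical sketch: only defects that escape the iteration (found at
# UAT or later) are worth logging in the tracker; in-iteration defects
# are fixed on the spot and never recorded.
from dataclasses import dataclass

TRACKED_PHASES = {"uat", "production"}   # assumed phase names, for illustration

@dataclass
class Defect:
    id: str
    found_in_phase: str   # e.g. "iteration", "uat", "production"

def should_track(defect):
    return defect.found_in_phase in TRACKED_PHASES

defects = [Defect("D-1", "iteration"), Defect("D-2", "uat"), Defect("D-3", "production")]
tracked = [d.id for d in defects if should_track(d)]
print(tracked)   # ['D-2', 'D-3']
```

The design choice is that the tracker records escapes, not churn, so its contents stay meaningful as a quality signal.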
The purpose of testing
Next we are introduced to the "Agile Testing Quadrants." Tests are grouped into four categories that, in varying degrees, support the team, critique the product, are business facing, or are technology facing. This is very useful for identifying why a team would or should perform different types of tests. I also found it useful in justifying why certain types of tests should not test certain aspects of the product.
Technology-facing tests that support the team
Technology-facing tests include unit and component tests and focus on guiding design and development. Test-Driven Development is discussed briefly, and some time is given to highlighting that TDD is more about design than testing. The sidebar on layered architecture is particularly relevant and adds context to the discussion. A further sidebar on testing legacy systems was also very interesting and in line with my own experiences of working with legacy code and systems.
What a test should not do is just as important as what a test should do. Where do unit tests stop and component tests start? What types of tests are appropriate? These questions are answered in the context of technology-facing tests.
Toolkits have evolved as people have adopted Agile and TDD. Some of these toolkits are mentioned briefly.
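A technology-facing, TDD-style test might look like the following sketch. The discount function and its rules are hypothetical, invented purely for illustration.

```python
# Illustrative technology-facing unit tests: written first, TDD-style,
# they pin down the design of a (hypothetical) discount function.
def apply_discount(price, pct):
    """Clamp the percentage to 0-100, then apply it to the price."""
    pct = max(0, min(100, pct))
    return round(price * (1 - pct / 100), 2)

def test_normal_discount():
    assert apply_discount(80.0, 25) == 60.0

def test_percentage_is_clamped():
    assert apply_discount(80.0, 150) == 0.0
    assert apply_discount(80.0, -10) == 80.0

# A runner such as pytest would discover these; call them directly here.
test_normal_discount()
test_percentage_is_clamped()
print("unit tests passed")
```

Note how the clamping behavior exists because a test demanded it: the tests are guiding the design, which is the point being made about TDD above.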
Business-facing tests that support the team From my understanding, business facing tests assert that
the software does what the business expects (these may be
considered conditions of satisfaction or acceptance tests).
In an Agile environment, these tests are added to and
amended on an ongoing basis (although we limit this to
clarifications during the sprint or iteration).
Business-facing tests are automated as much as possible (intangible qualities such as aesthetics cannot be automatically tested).
W W W . L O G I G E A R M A G A Z I N E . C O M J U L Y 2 0 1 3 ǀ V O L V I I ǀ I S S U E 3
An important point was made that using business facing tests
to drive development maintains the focus on testable code.
Toolkit for business-facing tests that support the team

The toolkit outlined by Lisa and Janet is pretty much aligned
to the tools I am familiar with. Mind-mapping tools are a great way to organize high-level requirements, while test tools such as Fit, FitNesse and Selenium provide the ability to automate business-facing tests. As always, it's important to select the right tool for the specific circumstances.
The book also touches on Behavior Driven Development
(BDD) and I find that this is a subject that is receiving more
and more focus. John Smart of Wakaleo, who recently
worked with our development teams at Paddy Power, is a
strong advocate of BDD and introduced some excellent
tools and techniques to our development process.
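To illustrate the given/when/then structure that BDD encourages, here is a minimal sketch in plain Python. The Account class is hypothetical, and a real BDD toolkit would express the same scenario in business-readable text rather than code comments:

```python
# Hypothetical domain object, invented only to illustrate the scenario.
class Account:
    def __init__(self, balance):
        self.balance = balance

    def withdraw(self, amount):
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount

def test_withdrawal_reduces_balance():
    # Given an account with a balance of 100
    account = Account(balance=100)
    # When the customer withdraws 30
    account.withdraw(30)
    # Then the balance is 70
    assert account.balance == 70

test_withdrawal_reduces_balance()
```

The point of the structure is that the customer can read the scenario and confirm it really is a condition of satisfaction before a line of production code exists.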
Business-facing tests that critique the product

This testing includes Exploratory Testing, Usability Testing
and User Acceptance Testing (UAT). Often, development
teams are not involved in these test activities even in com-
panies that have adopted and adapted Agile. They are
often seen as an 'over the fence' activity where the test
team takes a release and provides feedback.
That said, Exploratory testing can occur within an iteration
but often only works well in a team with dedicated testers.
Without dedicated testers, a development team does not
have the same focus on testing in my experience. I have
also been involved in projects where Usability Testing oc-
curs in parallel using UI prototyping tools developed with
the user stories.
Critiquing the product using technology-facing tests

Performance, load and security testing are often forgotten
until the end but they have the ability to make or break a
project. More and more you see performance testing
(typically relative rather than absolute performance test-
ing) occurring within a continuous integration environment.
There are a wide range of commercial and open source tool
sets available that support performance and load testing.
I particularly liked the 'Performance Testing from the Start'
story where Ken De Souza described how he included per-
formance test scripts within his definition of done. This
way, when he got to the point where he wanted to execute load (or stress) tests, he had scripts available that had
already been executing in his CI environment.
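The kind of relative performance check that can live in a CI build might look like the sketch below. The workload, baseline and tolerance are illustrative assumptions of mine, not Ken De Souza's actual scripts:

```python
import time

# A timing recorded on a known-good build, plus generous slack so
# normal CI noise does not fail the build; both values are invented.
BASELINE_SECONDS = 0.05
TOLERANCE = 3.0

def operation_under_test():
    # Stand-in for the real workload being timed.
    return sum(i * i for i in range(10_000))

def check_relative_performance():
    start = time.perf_counter()
    operation_under_test()
    elapsed = time.perf_counter() - start
    # Fail the build only if we are dramatically slower than baseline.
    assert elapsed < BASELINE_SECONDS * TOLERANCE
    return elapsed

check_relative_performance()
```

Because the check is relative rather than absolute, it survives being run on differently sized build agents; the same script can later be pointed at a realistic load for true stress testing.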
Why we want to automate tests and what holds us back

I'm not sure that there is anyone left who does not understand the need to automate tests. However, the return on
investment is often overlooked. We should consider the
cost/benefit of automating specific tests by thinking about
the risk that is mitigated by a test, the level of volatility of
the code base, the historic occurrence of bugs and the
effort required to provide automated test coverage. Some-
times it is more appropriate to continue to provide manual
regression testing.
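The cost/benefit weighing described above can be sketched as a rough scoring heuristic. The factors and weighting here are my own illustrative assumptions, not a formula from the book:

```python
def automation_score(risk, volatility, bug_history, effort):
    """Rough priority for automating a test; higher means automate sooner.

    Each argument is a 1-5 rating agreed by the team:
      risk        - damage if this area regresses unnoticed
      volatility  - how often the covered code changes
      bug_history - how many regressions this area has produced
      effort      - cost to build and maintain the automated test
    """
    benefit = risk + volatility + bug_history
    return benefit / effort

# A risky, frequently changing, bug-prone area that is cheap to
# automate outranks a stable area that is expensive to automate --
# the latter may be better served by occasional manual regression.
hot_spot = automation_score(risk=5, volatility=4, bug_history=4, effort=1)
stable_area = automation_score(risk=2, volatility=1, bug_history=1, effort=4)
assert hot_spot > stable_area
```

The exact numbers matter less than forcing the conversation: a low score is a legitimate reason to leave a test manual.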
Testers should also consider the level(s) at which testing
should occur. Component, integration and acceptance
tests interact with the system in different ways and will
test different aspects of the system. Do we need to auto-
mate each type of test? Is it possible or practical to test at
each level? These are questions an Agile tester must con-
sider when deciding to automate tests.
An Agile test automation strategy

Just like delivery of features, a test automation strategy
can be incremental. Consider automating the areas that
cause the most pain and build out your automation itera-
tion after iteration. There are lots of considerations that
are covered well by the book but if I were to highlight one
thing it is this: A regression test suite has the same charac-
teristics as the product code base. You must build it in a
way that is easy to extend and maintain. A regression test
suite enables the team to continue to deliver features at a
fast pace. Ensure it does not become a millstone for the
team by becoming difficult to maintain, extend or manage.
Bring the same rigor to building your test suite that you
would to any product or feature.
Tester activities in release or theme planning

The life of a tester on an Agile team is a busy one. During
planning they must decide what types of testing to under-
take, what test environments are required, what test data
is required and what tests should be automated. They
must also contribute to the estimates for features and user stories, considering how each should be tested and how large the test effort will be. In the same way
that developers consider the priority and sequence of de-
veloping features, testers must also consider the priority
and sequence so that the product is tested as efficiently
as possible. A tester also needs to consider what is an
appropriate level of testing. Usually this is based on some
type of risk (life, reputation, financial etc.).
Most teams (if not all) consider acceptance testing (automated or manual) as the last step in their 'Definition of Done'. For this reason, test reporting is also a very important communication tool that provides a detailed view of
the progress made during an iteration and release. I have
seen some excellent examples of using test reporting to
drive the development process that include things like re-
quirements traceability.
Iteration kickoff

One of the problems I often see testers on Agile teams
have is that they will maintain that they have nothing to
test until features start to be delivered. The reality is that
on an Agile team a tester is involved in a more diverse set
of activities than they are on a traditional team. During
iteration kickoff, testers should be reviewing stories for
testability and helping the customer define conditions of
satisfaction. They should help the customer convey stories
to the development team and work with the development
team to build a shared understanding. They should be writ-
ing task cards and pro-actively seeking opportunities to
help the team with their tasks by preparing test data, envi-
ronments etc. There is no end of things that testers can
(and should) become involved in.
Coding and testing

During coding and testing, an Agile tester continues to collaborate with the customer and the team. They do this
through reporting, raising defects, triaging defects raised by
the customer, pairing with programmers (that's right, I said pairing with programmers!!) etc. They prioritize test execution and automation based on risk and return. Testers
should adapt their approach during an iteration based on
what they are learning about the features and their implementation. They should manage their own time while remaining aware that they are taking a 'whole team approach'.
Wrap up the iteration

Wrapping up an iteration is no different for an Agile tester
than it is for any other member of the team. They should be
involved in the 'show and tell' and the retrospective. They
should contribute as a first class member of the team and
not fall into the pattern of allowing others to drive these
activities. The book suggests that testers are ideally placed
to demonstrate new features to the customer. I much pre-
fer to rotate this privilege as it is often the only chance for
the team to get direct feedback and hopefully praise from
the customer.
Successful delivery

The aim of an Agile software team is to deliver a potentially shippable product every iteration (normally 2-4 weeks). But when a potentially shippable product becomes a product that is going to be shipped, there are normally a number of additional activities that need to occur. The term 'End Game' refers to the time during which a team is making the finishing touches to a product just before it is shipped to a customer. This might include testing on different environments, Alpha and Beta testing, or packaging, among other things.
The activities that occur during the 'End Game' tend to
differ significantly depending on the characteristics of the
product but for me one thing remains constant. A team,
product manager etc. should strive to reduce the duration
of this 'End Game' as much as possible so that value is
delivered as close as possible to when the cost is incurred.
Key success factors

The book concludes by identifying seven key success factors for implementing Agile testing. I'll enumerate them here:
Use the whole team approach.
Adopt an Agile testing mindset.
Automate regression testing.
Provide and obtain feedback.
Build a foundation of core practices.
Collaborate with customers.
Look at the big picture.
I don't think I can add much here except to say that if you
are an Agile team you should be striving to automate
(almost) everything!
Thoughts

I had to agree with the statement that "Some people saw testers as failed programmers or second-class citizens in the world of software development". I still see this from time to time, but the flip side of the coin is that "Testers who don't bother to learn new skills and grow professionally contribute to the perception that testing is low-skilled work."
Having testers as an integral part of the team helps de-
bunk this idea and develop mutual respect and apprecia-
tion between testers, programmers and business analysts.
A whole section of the book is dedicated to 'An Iteration in the Life of a Tester' and this section is really effective in
communicating what activities a tester should become
involved in during an iteration. This is a must read for any
aspiring or practicing Agile tester. ■
About John
John has been developing software in
the financial and betting industry since
1999. Passionate about technology,
John developed an interest in the prac-
tices and principles fundamental to
Agile software development particularly
those of Scrum and Lean. John also
spends his time identifying how best to leverage the emergence of cloud computing (IaaS, PaaS, SaaS) to create more Agile businesses. This has extended to investigating how best to employ NoSQL and Big Data to meet increasingly demanding data requirements.
Vi e tnam Vi ew
Vietnam's National Costume - The Áo Dài
When one thinks of Vietnam, the first picture in their mind is probably the conical hat, gracing the heads of rice farmers and street vendors. But these hats are purely utilitarian, meant to protect people from the rain and sun. On the opposite end of the Vietnamese fashion spectrum is the traditional dress, known as the áo dài, one of Vietnam's most iconic cultural garments.
Áo dàis were created in 1744 by the command of
Lord Nguyen Phuc Khoat of Hue. At that time, fash-
ion was universal - there was little difference in
style between the peasants and the aristocrats.
Lord Nguyen, influenced by the fashions of the Chinese imperial court, decreed that both men
and women in his court wear trousers and a gown
with buttons down the front.
Over the next 150 years, the áo dài underwent
some minor transformations, with extra panels be-
ing added and subtracted. But it was the French
who would ultimately shape the current style.
Imagine Paris in its 1920s golden age - jazz, beat-
niks, writers, lavish parties, artistic exploration and
opulence. It was this atmosphere that would take
the áo dài from a baggy traditional garment to a form-fitting dress that made its Paris debut in 1921.
By 1930, the style had gone full-circle and returned
to Vietnam where local fashionistas further devel-
oped on the Parisian influences.
The dress all but disappeared during World War II,
but the áo dài reappeared en masse during the
1950s. The infamous Madame Nhu, first lady of
South Vietnam, popularized a collarless version in 1958 that remained immensely popular in the south until 1975. A brightly colored 'hippy' áo dài was even introduced in 1968.
With the economy sputtering after the war with
America, the dress was relegated to traditional celebrations such as weddings. It wasn't until the late 1980s that the style was popularized again, used for school uniforms and worn by professional women in banks and travel agencies, as well as by flight attendants (the áo dài is the official uniform of the state-run Vietnam Airlines).
With Vietnam's recent economic revival, consumer spending has skyrocketed, and it's not uncommon
for women to have numerous áo dàis of varying
color for any number of different occasions.
Today's áo dàis have come a long way from their traditional beginnings. But even as the country continues its march towards modernization, the traditional dress is still a core element of Vietnam's contemporary style. ■
By Brian Letwin, LogiGear Corporation
United States
2015 Pioneer Ct., Suite B
San Mateo, CA 94403
Tel +1 650 572 1400
Fax +1 650 572 2822
Viet Nam, Da Nang
7th floor, Dana Book Building
76-78 Bach Dang
Hai Chau District
Tel: +84 511 3655 333
Fax: +84 511 3655 336
Viet Nam, Ho Chi Minh City
1A Phan Xich Long, Ward 2
Phu Nhuan District
Tel +84 8 3995 4072
Fax +84 8 3995 4076
LOGIGEAR MAGAZINE