Effective unit testing

Roberto Casadei

2013-05-30

Notes taken from: Effective Unit Testing: A Guide for Java Developers (Lasse Koskela)

Effective Unit Testing

Testing

Expressing and validating assumptions and intended behavior of the code

Checking what code does against what it should do

Tests help us catch mistakes

Tests help us shape our design to actual use

Tests help us avoid gold-plating by being explicit about what the required behavior is

The biggest value of writing a test lies not in the resulting test but in what we learn from writing it

The value of having tests

First step: (automated) unit tests as a quality tool

Helps to catch mistakes

Safety net against regression: failing the build process when a regression is found

Second step: unit tests as a design tool

Informs and guides the design of the code towards its actual purpose and use

From design-code-test to test-code-refactor (i.e. TDD), a.k.a. red-green-refactor

The quality of test code itself affects productivity

Test-Driven Development (TDD)

Direct results:

Usable code

Lean code, as production code only implements what's required by the scenario it's used for

Sketching a scenario into executable code is a design activity

A failing test gives you a clear goal

Test code becomes a client for production code, expressing your needs in the form of a concrete example

By writing only enough code to make the test pass, you keep your design simple and fit-for-purpose

Behaviour-Driven Development (BDD)

Born as a correction of TDD vocabulary: the word "test" as a source of misunderstandings

Now commonly integrated with business analysis and specification activities at the requirements level: acceptance tests as examples that anyone can read

Not just tests but good tests (1)

Readability

Maintainability

Test-code organization and structure

Not just structure but a useful structure: good mapping with your domain and your abstractions

What matters is whether the structure of your code helps you locate the implementation of higher-level concepts quickly and reliably

So, pay attention to:

Relevant test classes for the task at hand

Appropriate test methods for those classes

Lifecycle of objects in those methods

Not just tests but good tests (2)

It should be clear what your tests are actually testing: do not blindly trust the names of the tests

The goal is not 100% coverage but testing the right things. A test that has never failed is of little value: it's probably not testing anything.

A test should have only one reason to fail, because we want to know why it failed

Not just tests but good tests (3)

Test isolation is important. Be extra careful when your tests depend on things such as: time, randomness, concurrency, infrastructure, pre-existing data, persistence, networking

Examples of measures:

Test doubles

Keep test code and the resources it uses together

Making tests set up the context they need

Use an in-memory database for integration tests that require persistence

In order to rely on your tests, they need to be repeatable

Test Doubles

Test doubles

Objects to be substituted for the real implementation for testing purposes

Replacing the code around what you want to test, to gain full control of its context/environment

Essential for good test automation

Allowing isolation of the code under test from the code it interacts with, its collaborators, and dependencies in general

Speeding up test execution

Making random behavior deterministic

Simulating particular conditions that would be difficult to create

Observing state & interaction otherwise invisible

Kinds of test doubles

Stubs: unusually short things

Fake objects: do the real thing's job without side effects

Test spies: reveal information that otherwise would be hidden

Mocks: test spies configured to behave in a certain way under certain circumstances

Stubs

(noun) def. a truncated or unusually short thing

A stub is a simple implementation that stands in for a real implementation, e.g. an object with methods that do nothing or return a default value

Best suited for cutting off irrelevant collaborators
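
A minimal sketch of a stub, assuming a hypothetical Logger collaborator (not an interface from the book):

// Hypothetical collaborator interface, assumed for illustration
interface Logger {
    void log(String message);
}

// A stub that cuts off the logging collaborator
class LoggerStub implements Logger {
    @Override
    public void log(String message) {
        // intentionally empty: logging is irrelevant to the behavior under test
    }
}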

Fake objects

Replicating the behavior of the real thing without the side effects and other consequences of using the real thing

Fast alternative for situations where the real thing is difficult or cumbersome to use
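
A minimal sketch of a fake, assuming a hypothetical UserRepository interface and a User class with a getId() accessor; the fake does real lookups and saves, but in memory, without touching a database:

import java.util.HashMap;
import java.util.Map;

// Hypothetical repository interface, assumed for illustration
interface UserRepository {
    User findById(int id);
    void save(User user);
}

// A fake: the real thing's behavior, minus the persistence side effects
class FakeUserRepository implements UserRepository {
    private final Map<Integer, User> users = new HashMap<>();

    public User findById(int id) {
        return users.get(id);
    }

    public void save(User user) {
        users.put(user.getId(), user);
    }
}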

Test spies

Built to record what happens when using them

E.g. they are useful when none of the objects passed in as arguments to certain operations can reveal through their API what you want to know
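
A minimal sketch of a test spy, reusing the hypothetical Logger interface from the stub sketch above; a test passes the spy to the code under test and afterwards asserts on the recorded messages:

import java.util.ArrayList;
import java.util.List;

// A spy: records every interaction so the test can inspect it later
class SpyLogger implements Logger {
    final List<String> messages = new ArrayList<>();

    @Override
    public void log(String message) {
        messages.add(message); // record the interaction for later assertions
    }
}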

Mock objects

Mocks are test spies that specify the expected interactions together with the behavior that results from them

E.g. a mock for a UserRepository interface might be told to return null when findById() is invoked with param 123, and to return a given User instance when called with 124
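
That behavior could be set up, for example, with the Mockito library (one common option, not prescribed by these notes; the UserRepository interface and a User constructor are assumed):

import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;
import org.junit.Test;

public class UserRepositoryMockTest {
    @Test
    public void mockAnswersAsConfigured() {
        UserRepository repository = mock(UserRepository.class);
        when(repository.findById(123)).thenReturn(null);
        when(repository.findById(124)).thenReturn(new User(124, "roby"));
        // the code under test would now receive exactly these answers
    }
}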

Choosing the right test double

As usual, it depends

Rule of thumb: stub queries; mock actions

Structuring unit tests

Arrange-act-assert:

Arrange your objects and collaborators

Make them work (trigger an action)

Make assertions on the outcome

BDD evolves it into given-when-then:

Given a context

When something happens

Then we expect certain outcome
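
A minimal arrange-act-assert sketch, assuming hypothetical ShoppingCart and Item classes:

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class ShoppingCartTest {
    @Test
    public void totalIsTheSumOfItemPrices() {
        // Arrange: set up the object under test and its collaborators
        ShoppingCart cart = new ShoppingCart();
        cart.add(new Item("book", 20));
        cart.add(new Item("pen", 5));

        // Act: trigger the behavior
        int total = cart.total();

        // Assert: check the outcome
        assertEquals(25, total);
    }
}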

Check behavior, not implementation

A test should test just one thing, and test it well, while communicating its intent clearly

What's the desired behavior you want to verify?

What's just an implementation detail?

Test Smells

Readability

Why: accidental complexity adds cognitive load

Goal: reading test code shouldn't be hard work

How: the intent and purpose of test code should be explicit or easily deducible

Consider:

Level of abstraction

Single Responsibility Principle (also applies to tests)

Readability smells (1)

Primitive assertions: assertions that use a level of abstraction that is too low, e.g. testing structural details of results

Twin of the primitive obsession code smell (which refers to the use of primitive types to represent higher-level concepts)

Also the abstraction level of the testing API matters

General advice: keep a single level of abstraction in test methods
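
A small contrast sketch, with a literal string standing in for a real result; containsString comes from Hamcrest's CoreMatchers:

import static org.hamcrest.CoreMatchers.containsString;
import static org.junit.Assert.assertThat;
import static org.junit.Assert.assertTrue;
import org.junit.Test;

public class GreetingTest {
    @Test
    public void greetingMentionsTheUser() {
        String body = "hello roby"; // stand-in for a real result

        // Primitive assertion: structural detail, low level of abstraction
        assertTrue(body.indexOf("roby") != -1);

        // Same check expressed at the level of intent, with a better failure message
        assertThat(body, containsString("roby"));
    }
}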

Hyperassertions: assertions that are too broad

They make it difficult to identify the intent and essence of the test

They may fail if small details change, thus making it difficult to find out why

Approach: remove irrelevant details + divide-et-impera

Readability smells (2)

Incidental details: the test intent is mixed up with nonessential information

Approach: extract nonessential information into private helpers and setup methods

Give things appropriate, descriptive names

Strive for a single level of abstraction in a test method

Setup sermon: similar to incidental details, but focused on the setup of a test's fixture (= the context in which a given test executes), i.e. on the @Before and @BeforeClass (setup) methods

Magic numbers: generally, literal values do not communicate their purpose well

Approach: replace literals with constants whose informative names make their purpose explicit
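
A minimal sketch of the refactoring, assuming a hypothetical Account class:

import static org.junit.Assert.assertTrue;
import org.junit.Test;

public class AccountLockingTest {
    // The name tells the reader what the literal 3 means
    private static final int MAX_LOGIN_ATTEMPTS = 3;

    private final Account account = new Account();

    @Test
    public void locksTheAccountAfterTooManyFailedLogins() {
        for (int i = 0; i < MAX_LOGIN_ATTEMPTS; i++) {
            account.failLogin();
        }
        assertTrue(account.isLocked());
    }
}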

Readability smells (3)

Split personality: when a test embodies multiple tests in itself

A test should only check one thing, and check it well, so that what's wrong can be easily located

Approach: divide-et-impera

Split logic: test code (logic or data) is scattered across multiple places

Approach: inline the data/logic into the test that uses it

Maintainability

Test code requires quality (just like production code)

Maintainability of tests:

is related to test readability

is related to structure

Look for:

test smells that add cognitive load

test smells that make for a maintenance nightmare

test smells that cause erratic failures

Maintainability smells (1)

Duplication: needless repetition of concepts or their representations

all copies need to be synchronized

Examples:

Literal duplication: extract variables

Structural duplication (same logic operating on different data instances): extract methods

Sometimes, it may be better to leave some duplication in favor of better readability

Conditional logic: can be hard to understand and error-prone

Control structures can be essential in test helpers but, in test methods, these structures tend to be a major distraction

Thread.sleep(): it slows down your tests; use synchronization mechanisms such as count-down latches or barriers instead
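
A minimal sketch of the latch-based alternative; doBackgroundWork() is a placeholder for a real asynchronous operation:

import static org.junit.Assert.assertTrue;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import org.junit.Test;

public class BackgroundWorkTest {
    @Test
    public void backgroundWorkCompletes() throws InterruptedException {
        CountDownLatch done = new CountDownLatch(1);
        new Thread(() -> {
            doBackgroundWork(); // placeholder operation under test
            done.countDown();   // signal completion
        }).start();

        // Block only as long as needed; the timeout is a safety net, not a fixed delay
        assertTrue("timed out waiting for background work",
                   done.await(2, TimeUnit.SECONDS));
    }

    private void doBackgroundWork() {
        // placeholder for the real asynchronous work
    }
}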

Maintainability smells (2)

Flaky tests: tests that fail intermittently. Does the behavior depend on time/concurrency/network/...?

When you have a source of trouble, you can: 1) avoid it, 2) control it, 3) isolate it

Unportable file paths: possibly, use relative paths (e.g. evaluated against the project's root dir)

You could also put resources on Java's classpath and look them up via getClass().getResource(filename).getFile()

Persistent temp files: even though you should try to avoid using physical files altogether if not essential, remember to delete temp files during teardown
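
One option in JUnit 4 is the TemporaryFolder rule, which deletes its files and directories in teardown; a minimal sketch:

import java.io.File;
import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.TemporaryFolder;

public class ReportFileTest {
    // The rule creates files under a fresh directory and deletes them after each test
    @Rule
    public TemporaryFolder tmp = new TemporaryFolder();

    @Test
    public void writesTheReportFile() throws Exception {
        File report = tmp.newFile("report.txt"); // cleaned up automatically in teardown
        // ... exercise the code that writes to the file ...
    }
}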

Maintainability smells (3)

Pixel perfection: refers to tests that assert against (hardcoded) low-level details even though the test is semantically at a higher level. You may want a fuzzy match instead of a perfect match.

From the Parameterized-Test pattern to a parameterized mess: some frameworks might not allow you to trace a test failure back to the specific data set causing it, or to express data sets in a readable and concise way

Lack of cohesion in test methods: each test in a test case should use the same test fixture

Trustworthiness

We need to trust our tests so that we can feel confident in evolving/modifying/refactoring code

Look for test code that delivers a false sense of security, misleading you into thinking everything is fine when it's not

Trustworthiness smells (1)

Commented-out tests: try to understand and validate their purpose, or delete them

Misleading comments: may deliver false assumptions

Do not use comments to describe what the test does, as the test code should show that clearly and promptly. Instead, comments explaining the rationale may be useful.

Never-failing tests: they have no value

E.g. forgetting fail() in a try/catch block
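
A minimal sketch of the smell and its fix, assuming a hypothetical StrictParser whose parse() throws ParseException on malformed input:

import static org.junit.Assert.fail;
import java.text.ParseException;
import org.junit.Test;

public class StrictParserTest {
    private final StrictParser parser = new StrictParser(); // hypothetical class under test

    @Test
    public void rejectsMalformedInput() {
        try {
            parser.parse("not-a-date");
            fail("expected a ParseException"); // without this line the test can never fail
        } catch (ParseException expected) {
            // expected: malformed input must be rejected
        }
    }
}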

Shallow promises: tests that do much less than what they say they do

Trustworthiness smells (2)

Lowered expectations: tests asserting loose conditions (vague assertions, ...) give a false sense of security. Raise the bar by making the assertions more specific/precise.

Platform prejudice: a failure to treat all platforms equally

Measures: different tests for different platforms

Conditional test: a test that hides a secret conditional within a test method, making the test logic different from what its name suggests. Platform prejudice is an example (the specific test depends on the platform).

As a rule of thumb, all branches in a test method should have a chance to fail

Some advanced stuff

Testable design

Design decisions can foster or hinder testability

Principles supporting testable design:

Modularity

SOLID:

Single responsibility principle: a class should have only a single responsibility

Open/closed principle: software entities should be open for extension but closed for modification; you can change what a class does without changing its source code

Liskov substitution principle: objects in a program should be replaceable with instances of their subtypes without altering the correctness of the program

Interface segregation principle: many client-specific interfaces are better than one general-purpose interface

Dependency inversion principle: depend on abstractions rather than on concretions; great for testability!

Testability issues

Testability issues:

restricted access: private/protected methods

inability to observe the outcome (e.g. side effects, void methods)

inability to substitute parts of an implementation:

inability to substitute a collaborator

inability to replace some functionality

Guidelines for testable design

Avoid complex private methods

Avoid final methods

Avoid static methods (if you foresee a chance you'll need to stub them)

Use new with care: it hardcodes the implementation class; use IoC if possible (see the sketch after this list)

Avoid logic in constructors

Avoid the Singleton pattern

Favor composition over inheritance

Wrap external libraries

Avoid service lookups (factory classes): as the collaborator is obtained internally to the method (service = MyFactory.lookupService()), it may be difficult to replace the service
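
A minimal sketch of the alternative suggested above for new and service lookups: inject the collaborator through the constructor (OrderService is hypothetical; UserRepository and FakeUserRepository come from the test-doubles sketches):

// The collaborator is passed in from the outside instead of being created internally
class OrderService {
    private final UserRepository repository;

    OrderService(UserRepository repository) {
        this.repository = repository;
    }

    // ... methods that use repository ...
}

// In production code: new OrderService(databaseUserRepository);
// In a test:          new OrderService(new FakeUserRepository());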

JUnit 4

JUnit4 basics

Package: org.junit

Test classes are POJO classes

Annotations

@Test (org.junit.Test)

@Before: marked methods are exec before each test method run

@After: marked methods are exec after each test method run

Using assertions & matchers:

import static org.junit.Assert.*;

import static org.hamcrest.CoreMatchers.*;

So that in your test methods you can write something like: assertThat( true, is( not(false) ) );
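
Putting these pieces together, a minimal complete test class, assuming a hypothetical Calculator class under test:

import static org.hamcrest.CoreMatchers.is;
import static org.junit.Assert.assertThat;
import org.junit.Before;
import org.junit.Test;

public class CalculatorTest {
    private Calculator calculator;

    @Before
    public void createCalculator() {
        calculator = new Calculator(); // a fresh fixture before each test method
    }

    @Test
    public void addsTwoNumbers() {
        assertThat(calculator.add(2, 3), is(5));
    }
}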

Parameterized-Test pattern in JUnit

Mark the test class with @RunWith(org.junit.runners.Parameterized.class)

Define private fields and a constructor that accepts, in order, your parameters: public MyParamTestCase(int k, String name) { this.k = k; this.name = name; }

Define a method, annotated with @Parameters, that returns all your parameter values: @Parameters public static Collection<Object[]> data() { return Arrays.asList( new Object[][] { { 10, "roby" } } ); }

Define a @Test method that works against the private fields that are defined to contain the parameters.
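
The four steps assembled into one runnable sketch; the data values and the second data set are placeholders:

import static org.junit.Assert.assertTrue;
import java.util.Arrays;
import java.util.Collection;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;

@RunWith(Parameterized.class)
public class MyParamTestCase {
    private final int k;
    private final String name;

    // Receives one data set per instantiation, in the order declared in data()
    public MyParamTestCase(int k, String name) {
        this.k = k;
        this.name = name;
    }

    @Parameters
    public static Collection<Object[]> data() {
        return Arrays.asList(new Object[][] { { 10, "roby" }, { 20, "jane" } });
    }

    @Test
    public void kIsPositive() {
        assertTrue(name + " should have a positive k", k > 0);
    }
}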
