Unit Testing


Description: basics of unit testing, from "The Art of Unit Testing"

Transcript of Unit Testing

  • Roberto Casadei 2013-05-30

    Roberto Casadei

    Notes taken from: Effective Unit Testing: A Guide for Java Developers; The Art of Unit Testing: With Examples in C#, 2nd Edition

    Unit Testing

  • Effective Unit Testing 2

    Testing

    Expressing and validating assumptions and intended behavior of the code
    Checking what code does against what it should do
    Tests help us:
      catch mistakes
      shape our design to actual use
      avoid gold-plating by being explicit about what the required behavior is

    The biggest value of writing a test lies not in the resulting test but in what we learn from writing it

  • Effective Unit Testing 3

    The value of having tests

    First step: (automated) unit tests as a quality tool
      Helps to catch mistakes
      Safety net against regression (= units of work that once worked and now don't)
      Failing the build process when a regression is found

    Second step: unit tests as a design tool
      Informs and guides the design of the code towards its actual purpose and use
      From design-code-test to test-code-refactor (i.e. TDD), a.k.a. red-green-refactor

    The quality of test code itself affects productivity

  • Effective Unit Testing 4

    Test-Driven Development (TDD)

    Direct results:
      Usable code
      Lean code, as production code only implements what's required by the scenario it's used for

    Sketching a scenario into executable code is a design activity
    A failing test gives you a clear goal
    Test code becomes a client for production code, expressing your needs in the form of a concrete example
    By writing only enough code to make the test pass, you keep your design simple and fit-for-purpose

  • Effective Unit Testing 5

    Behaviour-Driven Development (BDD)

    Born as a correction of the TDD vocabulary: the word "test" is a source of misunderstandings

    Now, commonly integrated with business analysis and specification activities at requirements level

    Acceptance tests as examples that anyone can read

  • Effective Unit Testing 6

    Not just tests but good tests (1)

    Readability
    Maintainability
    Test-code organization and structure
      Not just structure but a useful structure
      Good mapping with your domain and your abstractions

    What matters is whether the structure of your code helps you locate the implementation of higher-level concepts quickly and reliably. So, pay attention to:
      Relevant test classes for the task at hand
      Appropriate test methods for those classes
      Lifecycle of objects in those methods

  • Effective Unit Testing 7

    Not just tests but good tests (2)

    It should be clear what your tests are actually testing
      Do not blindly trust the names of the tests

    The goal is not 100% coverage but testing the right things

    A test that has never failed is of little value: it's probably not testing anything
    A test should have only one reason to fail
      because we want to know why it failed

  • Effective Unit Testing 8

    Not just tests but good tests (3)

    Test isolation is important. Be extra careful when your tests depend on things such as:
      time, randomness, concurrency, infrastructure, pre-existing data, persistence, networking

    Examples of measures:
      Test doubles
      Keep test code and the resources it uses together
      Make tests set up the context they need
      Use an in-memory database for integration tests that require persistence

    In order to rely on your tests, they need to be repeatable

  • Effective Unit Testing 9

    Test Doubles

  • Effective Unit Testing 10

    Test doubles

    Def.: objects to be substituted for the real implementation for testing purposes

    Replacing the code around what you want to test, to gain full control of its context/environment

    Essential for good test automation
    Allow isolation of the code under test:
      from the code it interacts with, its collaborators, and dependencies in general
      speeding up test execution
      making random behavior deterministic
      simulating particular conditions that would be difficult to create
      observing state & interactions otherwise invisible

  • Effective Unit Testing 11

    Kinds of test doubles

    Stubs: unusually short things
    Fake objects: do it without side effects
    Test spies: reveal information that otherwise would be hidden
    Mocks: test spies configured to behave in a certain way under certain circumstances

  • Effective Unit Testing 12

    Stubs

    (noun) def.: a truncated or unusually short thing
    A stub is a simple implementation that stands in for a real implementation
      e.g. an object with methods that do nothing or return a default value

    Best suited for cutting off irrelevant collaborators, as in the sketch below
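
    A minimal C# sketch of the idea (ILogger and StubLogger are hypothetical names, not from the books):

      // A collaborator of the code under test (hypothetical interface).
      public interface ILogger
      {
          void Write(string message);
      }

      // A stub: a trivial stand-in whose methods deliberately do nothing,
      // so tests of the caller can ignore logging entirely.
      public class StubLogger : ILogger
      {
          public void Write(string message) { /* intentionally empty */ }
      }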

  • Effective Unit Testing 13

    Fake objects

    Replicating the behavior of the real thing without the side effects and other consequences of using the real thing
    Fast alternative for situations where the real thing is difficult or cumbersome to use

  • Effective Unit Testing 14

    Test spies

    Built to record what happens when using them
    E.g. they are useful when none of the objects passed in as arguments to certain operations can reveal through their API what you want to know

  • Effective Unit Testing 15

    Mock objects

    Mocks are test spies that specify the expected interaction together with the behavior that results from it
    E.g. a mock for a UserRepository interface might be told to:
      return null when findById() is invoked with param 123, and
      return a given User instance when called with 124
    (see the sketch below)
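
    A C# sketch of that example using NSubstitute (introduced later in these notes); IUserRepository and User are hypothetical C# counterparts of the book's Java types:

      using NSubstitute;
      using NUnit.Framework;

      public class User { }

      public interface IUserRepository
      {
          User FindById(int id);
      }

      [TestFixture]
      public class UserRepositoryMockExample
      {
          [Test]
          public void FindById_ConfiguredMock_ReturnsCannedValues()
          {
              var known = new User();
              var repository = Substitute.For<IUserRepository>();

              repository.FindById(123).Returns((User)null);  // unknown id -> null
              repository.FindById(124).Returns(known);       // known id -> a given User

              Assert.IsNull(repository.FindById(123));
              Assert.AreSame(known, repository.FindById(124));
          }
      }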

  • Effective Unit Testing 16

    Choosing the right test double

    As usual, it depends
    Rule of thumb: stub queries; mock actions

  • Effective Unit Testing 17

    Structuring unit tests

    Arrange-act-assert:
      Arrange your objects and collaborators
      Make them work (trigger an action)
      Make assertions on the outcome

    BDD evolves it into given-when-then:
      Given a context
      When something happens
      Then we expect a certain outcome
    (an example follows)
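
    For instance, a minimal NUnit test in C# laid out in the three phases (a made-up example, not from the books):

      using System.Collections.Generic;
      using NUnit.Framework;

      [TestFixture]
      public class StackTests
      {
          [Test]
          public void Push_OnEmptyStack_CountBecomesOne()
          {
              // Arrange (given): an empty stack
              var stack = new Stack<int>();

              // Act (when): the action under test
              stack.Push(42);

              // Assert (then): the expected outcome
              Assert.AreEqual(1, stack.Count);
          }
      }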

  • Effective Unit Testing 18

    Check behavior, not implementation

    A test should test just one thing, and test it well, while communicating its intent clearly

    What's the desired behavior you want to verify?
    What's just an implementation detail?

  • Effective Unit Testing 19

    Test Smells

  • Effective Unit Testing 20

    Readability

    Why: accidental complexity adds cognitive load

    Goal: reading test code shouldn't be hard work

    How: the intent and purpose of test code should be explicit or easily deducible

    Consider:
      Level of abstraction
      Single Responsibility Principle (it also applies to tests)

  • Effective Unit Testing 21

    Readability smells (1)

    Primitive assertions: assertions that use a level of abstraction that is too low
      E.g. testing structural details of results
      Twin of the primitive obsession code smell (which refers to using primitive types to represent higher-level concepts)
      The abstraction level of the testing API also matters
      General advice: keep a single level of abstraction in test methods (see the sketch below)

    Hyperassertions: assertions that are too broad
      they make it difficult to identify the intent and essence of the test
      they may fail if small details change, thus making it difficult to find out why
      Approach: remove irrelevant details + divide et impera
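
    As an illustration of the primitive-assertion smell (the receipt string and the AssertTotalLine helper are made up for this sketch):

      using NUnit.Framework;

      [TestFixture]
      public class ReceiptTests
      {
          [Test]
          public void Receipt_SingleItem_ShowsTotalLine()
          {
              string receipt = "ITEM book 9.99\nTOTAL 9.99";

              // Primitive assertion: tied to the low-level structure of the result.
              string[] lines = receipt.Split('\n');
              Assert.AreEqual("TOTAL 9.99", lines[lines.Length - 1]);

              // Same intent expressed at the level of the domain concept.
              AssertTotalLine(receipt, "9.99");
          }

          private static void AssertTotalLine(string receipt, string amount)
          {
              StringAssert.Contains("TOTAL " + amount, receipt);
          }
      }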

  • Effective Unit Testing 22

    Readability smells (2)

    Incidental details: the test intent is mixed up with nonessential information
      Approach:
        Extract nonessential information into private helpers and setup methods
        Give things appropriate, descriptive names
        Strive for a single level of abstraction in a test method

    Setup sermon: similar to incidental details, but focused on the setup of a test's fixture (= the context in which a given test executes), i.e. on the @Before and @BeforeClass (setup) methods

    Magic numbers: literal values generally do not communicate their purpose well
      Approach: replace literals with constants whose informative names make their purpose explicit (see the sketch below)
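
    A small sketch of the magic-number fix (Session and its timeout are hypothetical):

      using NUnit.Framework;

      public class Session                      // hypothetical class under test
      {
          public int TimeoutSeconds = 86400;
      }

      [TestFixture]
      public class SessionTests
      {
          [Test]
          public void NewSession_Always_TimesOutAfterOneDay()
          {
              // Instead of: Assert.AreEqual(86400, session.TimeoutSeconds);
              const int SecondsPerDay = 24 * 60 * 60;   // the intent is now explicit

              var session = new Session();

              Assert.AreEqual(SecondsPerDay, session.TimeoutSeconds);
          }
      }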

  • Effective Unit Testing 23

    Readability smells (3)

    Split personality: a test that embodies multiple tests in itself
      A test should only check one thing and check it well
        so that what's wrong can be easily located
      Approach: divide et impera

    Split logic: test code (logic or data) is scattered in multiple places
      Approach: inline the data/logic into the test that uses it

  • Effective Unit Testing 24

    Maintainability

    Test code requires quality (like production code)
    Maintainability of tests
      is related to test readability
      is related to structure

    Look for:
      test smells that add cognitive load
      test smells that make for a maintenance nightmare
      test smells that cause erratic failures

  • Effective Unit Testing 25

    Maintainability smells (1)

    Duplication: needless repetition of concepts or their representations
      all copies need to be kept synchronized
      Examples:
        Literal duplication: extract variables
        Structural duplication (same logic operating on different data instances): extract methods
      Sometimes it may be better to leave some duplication in favor of better readability

    Conditional logic: can be hard to understand and error-prone
      Control structures can be essential in test helpers but, in test methods, they tend to be a major distraction

    Thread.sleep(): it slows down your tests; use synchronization mechanisms such as count-down latches or barriers instead (see the sketch below)
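
    The book's Java examples use count-down latches; a rough C# equivalent of the idea, waiting on System.Threading.CountdownEvent instead of sleeping, might look like this:

      using System;
      using System.Threading;
      using NUnit.Framework;

      [TestFixture]
      public class BackgroundWorkTests
      {
          [Test]
          public void QueuedWork_WhenCompleted_SignalsTheTest()
          {
              var done = new CountdownEvent(1);

              ThreadPool.QueueUserWorkItem(_ =>
              {
                  // ... the background work the test cares about ...
                  done.Signal();
              });

              // Wait on an explicit signal (with a timeout) rather than Thread.Sleep().
              Assert.IsTrue(done.Wait(TimeSpan.FromSeconds(5)), "work did not complete in time");
          }
      }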

  • Effective Unit Testing 26

    Mantainability smells (2)

    Flaky tests: tests that fail intermittently
      Does the behavior depend on time/concurrency/network/...?
      When you have a source of trouble, you can: 1) avoid it, 2) control it, 3) isolate it

    Unportable file paths
      Prefer relative paths (e.g. evaluated against the project's root dir)
      You could also put resources on Java's classpath and look them up via getClass().getResource(filename).getFile()

    Persistent temp files
      Even though you should avoid using physical files altogether when not essential, remember to delete temp files during teardown

  • Effective Unit Testing 27

    Maintainability smells (3)

    Pixel perfection: tests that assert against (hardcoded) low-level details even though the test is semantically at a higher level
      you may want a fuzzy match instead of a perfect match

    From Parametrized-Test pattern to Parametrized Mess: some frameworks might not allow you
      to trace a test failure back to the specific data set causing it
      to express data sets in a readable and concise way

    Lack of cohesion in test methods: each test in a test case should use the same test fixture

  • Effective Unit Testing 28

    Trustworthiness

    We need to trust our tests so that we can feel confident in evolving/modifying/refactoring code
    Look for test code that delivers a false sense of security,
      misleading you to think everything is fine when it's not

  • Effective Unit Testing 29

    Trustworthiness smells (1)

    Commented-out tests: try to understand and validate their purpose, or delete them

    Misleading comments: may deliver false assumptions
      Do not comment what the test does, as the test code should show that clearly and promptly
      Instead, comments explaining the rationale may be useful

    Never-failing tests: have no value
      E.g. forgetting fail() in a try{}catch{} (see the sketch below)

    Shallow promises: tests that do much less than what they say they do
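
    A C# sketch of the "forgotten fail()" variant and a more trustworthy alternative (Account is a hypothetical class under test):

      using System;
      using NUnit.Framework;

      public class Account                      // hypothetical class under test
      {
          public void Withdraw(int amount)
          {
              throw new InvalidOperationException("insufficient funds");
          }
      }

      [TestFixture]
      public class AccountTests
      {
          [Test]
          public void Withdraw_MoreThanBalance_Throws_NeverFails()
          {
              try
              {
                  new Account().Withdraw(1000);
                  // Assert.Fail("expected an exception") was forgotten here,
                  // so this test passes whether or not the exception is thrown.
              }
              catch (InvalidOperationException) { /* expected */ }
          }

          [Test]
          public void Withdraw_MoreThanBalance_Throws()
          {
              // Let the framework enforce the expectation instead.
              Assert.Throws<InvalidOperationException>(() => new Account().Withdraw(1000));
          }
      }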

  • Effective Unit Testing 30

    Trustworthiness smells (2)

    Lowered expectations: tests asserting loose conditions (vague assertions) give a false sense of security
      raise the bar by making the assertions more specific/precise

    Platform prejudice: a failure to treat all platforms equally
      Measure: different tests for different platforms

    Conditional test: a test that hides a conditional within a test method, making the test logic different from what its name would suggest
      Platform prejudice is an example (the specific check run depends on the platform)
      As a rule of thumb, all branches in a test method should have a chance to fail

  • Effective Unit Testing 31

    Some advanced stuff

  • Effective Unit Testing 32

    Testable design

    Design decisions can foster or hinder testability
    Principles supporting testable design:
      Modularity
      SOLID
        Single responsibility principle: a class should have only a single responsibility
        Open/closed principle: software entities should be open for extension but closed for modification
          you can change what a class does without changing its source code
        Liskov substitution principle: objects in a program should be replaceable with instances of their subtypes without altering the correctness of that program
        Interface segregation principle: many client-specific interfaces are better than one general-purpose interface
        Dependency inversion principle: depend on abstractions rather than on concretions
          great for testability!

  • Effective Unit Testing 33

    Testability issues

    Restricted access
      private/protected methods
      inability to observe the outcome (e.g. side effects), void methods

    Inability to substitute parts of an implementation
      inability to substitute a collaborator
      inability to replace some functionality

  • Effective Unit Testing 34

    Guidelines for testable design

    Avoid complex private methods
    Avoid final methods
    Avoid static methods (if you foresee a chance of stubbing)
    Use new with care
      it hardcodes the implementation class: use IoC if possible
    Avoid logic in constructors
    Avoid the Singleton pattern
    Favor composition over inheritance
    Wrap external libraries
    Avoid service lookups (factory classes)
      as the collaborator is obtained internally to the method (service = MyFactory.lookupService()), it may be difficult to replace the service (see the sketch below)
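
    A sketch of the last point in C# (all names here are hypothetical): the hard-to-test internal lookup is replaced by a constructor-injected dependency.

      // Hard to test:
      //   var service = MyFactory.LookupService();   // obtained internally, cannot be replaced
      //   service.Ship(order);

      public class Order { }

      public interface IShippingService
      {
          void Ship(Order order);
      }

      public class OrderProcessor
      {
          private readonly IShippingService shipping;

          public OrderProcessor(IShippingService shipping)   // constructor injection (IoC)
          {
              this.shipping = shipping;
          }

          public void Process(Order order)
          {
              shipping.Ship(order);   // a test can now pass in a stub or a mock
          }
      }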

  • Effective Unit Testing 35

    Best practices

  • Effective Unit Testing 36

    Why good unit testing is essential

    A failing project:
      doing TDD (red-green-refactor), the first months were great
      as time went by, requirements changed
      we were forced to change code, and when we did, tests broke

  • Effective Unit Testing 37

    Good unit tests

    Good unit tests should:
      be AUTOMATED and REPEATABLE
      be EASY TO IMPLEMENT
      be RELEVANT TOMORROW
      be RUNNABLE BY ANYONE WITH EASE
      RUN QUICKLY
      be CONSISTENT in their results
      have FULL CONTROL OF THE UNIT under test
      be FULLY ISOLATED
      be COMMUNICATIVE WHEN FAILING

  • Effective Unit Testing 38

    Why we need best practices

    Just because you write tests doesn't mean they're maintainable, readable, and trustworthy.
    Just because they are doesn't mean you get the same benefits as when writing them test-first.
    Just because you do that doesn't mean you'll end up with a well-designed system.

  • Effective Unit Testing 39

    Naming conventions

    Test project: [ProjectUnderTest].UnitTests
    For a class in ProjectUnderTest: [ClassName]Tests
    For each unit of work, a test method named [UnitOfWorkName]_[ScenarioUnderTest]_[ExpectedBehavior]
      UnitOfWorkName: name of the method/methods/classes being tested
      Scenario: the conditions under which the unit is tested (e.g. user already exists, or system out of memory)
      ExpectedBehavior: what you expect the tested method to do
        return a value / change the state of the SUT / call a 3rd-party system
    NOTE: SUT = System Under Test
    (see the sketch below)
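
    For example, for a hypothetical LoginManager class in MyProject, the test class LoginManagerTests (in MyProject.UnitTests) might contain:

      using NUnit.Framework;

      public class LoginManager                 // hypothetical class under test
      {
          public bool IsLoginOK(string user, string password) { return false; }
      }

      [TestFixture]
      public class LoginManagerTests
      {
          [Test]
          public void IsLoginOK_UserDoesNotExist_ReturnsFalse()
          {
              var manager = new LoginManager();

              bool result = manager.IsLoginOK("no-such-user", "pwd");

              Assert.IsFalse(result);
          }
      }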

  • Effective Unit Testing 40

    Possible naming conventions of scenarios

    In cases of state-based testing:
      MyMUT_WhenCalled_DoSomething
      MyMUT_Always_DoSomething

  • Effective Unit Testing 41

    Other naming conventions

    Fake objects for interface IMyInterface:
      StubMyInterface, MockMyInterface, FakeMyInterface (used for both stubs and mocks)

  • Effective Unit Testing 42

    Types of testing

    Value-based testing: checks the values returned by a function

    State-based testing (state verification): determines whether the exercised method worked correctly by examining the changed behavior of the SUT and its collaborators

    Interaction testing: tests how an object sends messages to other objects
      use interaction testing when a method call on another object is the end result of a specific unit of work

  • Effective Unit Testing 43

    Using stubs to break dependencies

    Case: your SUT relies on dependencies over which you have no control (or that don't work yet)
      Examples of such dependencies: filesystem, threads, memory, time, ...

    By using a stub (a controllable replacement for a dependency) you can test your SUT without dealing with the dependency

  • Effective Unit Testing 44

    Refactoring for testability (1)

    You may need to refactor your design to make it more testable
      e.g. by introducing seams, i.e. places in your code where you can plug in different functionality

    Refactorings to allow replacement with stubs:
      Abstracting concrete objects (dependencies) into interfaces or delegates
      Allowing injection of fake implementations of those interfaces or delegates
        by making the OUT (Object Under Test) receive an interface at constructor level / property level (C#) for later use
        or by receiving the interface in a method call via:
          a parameter of the method (parameter injection)
          a factory class
          a local factory method (extract and override)
          variations of the preceding techniques

  • Effective Unit Testing 45

    Refactoring for testability (2)

    Which injection? Rules of thumb:
      Constructor injection for non-optional dependencies
      Property injection for optional dependencies

    Indirection levels:
      Depth 1: faking a member in the class under test (constructor/property injection, or faking a method via subclassing)
      Depth 2: faking a member of a factory class
        add a setter to the factory and set it to a fake dependency
      Depth 3: faking the factory class by implementing the factory interface

    Extract and override can help to create fake results:
      you subclass your class under test and override a virtual method to make it return your stub (see the sketch below)
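
    A C# sketch of extract and override (the names are made up): the time source is extracted into a virtual method, and a test-only subclass overrides it with a canned value.

      using System;

      public class TokenValidator
      {
          public bool IsExpired(DateTime expiry)
          {
              return GetCurrentTime() > expiry;
          }

          // The seam: extracted into a virtual method so a subclass can replace it.
          protected virtual DateTime GetCurrentTime()
          {
              return DateTime.Now;
          }
      }

      // Test-only subclass returning a fake result.
      public class TestableTokenValidator : TokenValidator
      {
          public DateTime FakeNow;

          protected override DateTime GetCurrentTime()
          {
              return FakeNow;
          }
      }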

  • Effective Unit Testing 46

    Testable code and encapsulation

    Making a class testable may imply breaking encapsulation

    Solutions:
      Using internal instead of public for methods, and exposing them to your test assembly via [InternalsVisibleTo]
      Using conditional compilation via #if and #endif
    (see the sketch below)
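
    A sketch of both solutions, assuming the test project is called MyProject.UnitTests (PasswordHasher and ResetForTests are hypothetical):

      // In the production assembly (e.g. in AssemblyInfo.cs):
      using System.Runtime.CompilerServices;

      [assembly: InternalsVisibleTo("MyProject.UnitTests")]

      // The type can now stay internal instead of public and still be used by tests.
      internal class PasswordHasher
      {
      #if DEBUG
          // A test-only hook compiled in conditionally (second solution).
          internal void ResetForTests() { }
      #endif
      }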

  • Effective Unit Testing 47

    Mocks help you to assert something in your test

    Mocks vs. stubs:
      stubs replace objects so that you can test other objects without problems; a stub can never fail a test; the emphasis remains on the object under test
      mocks can fail tests; the emphasis is on the interaction between the object under test and another object

    Mocking:
      the class under test communicates with the mock object
      the mock object records all the communication
      the test uses the mock object to verify that the test passes

  • Effective Unit Testing 48

    Mocks

    Rule of thumb: no more than one mock per test
      and more stubs, if necessary
    Avoid overspecification

    Handwriting mocks is cumbersome: it takes time, requires a lot of boilerplate code, and is hard to reuse
      hence isolation / mocking frameworks

  • Effective Unit Testing 49

    Isolation frameworks

    Two categories of mocking frameworks:

    Constrained
      they generate code and compile it at runtime (i.e. they're constrained by the compiler and intermediate code / bytecode abilities)
      e.g. they cannot fake static methods, nonvirtual methods, nonpublic methods, ...
      for C#: RhinoMocks, Moq, NMock, EasyMock, NSubstitute, FakeItEasy
      for Java: jMock, EasyMock

    Unconstrained
      in .NET (Typemock Isolator, JustMock, Moles) they are profiler-based (i.e. they use the profiling APIs, which are unmanaged and allow injecting IL-based code at runtime)
        thus the process that runs the tests must be enabled via specific environment variables
      for Java: PowerMock, JMockit; for C++: Isolator++, Hippo Mocks
      PROS: you can fake 3rd-party systems; they allow testing previously untestable code
      CONS: some tests may become unmaintainable because you're faking APIs that you don't own

  • Effective Unit Testing 50

    Good isolation frameworks have...

    Features promoting test robustness:
      recursive fakes: objects returned by calling methods on a fake object are fake as well
      ignored arguments by default
        no need to always include Arg.IsAny
      faking multiple methods at once
        e.g. the ability to specify a return value of type T for every method which returns T
      nonstrict mocks
        strict mocks fail in two cases: 1) when an unexpected method is called on them, or 2) when an expected method is NOT called on them

  • Effective Unit Testing 51

    Good isolation frameworks have also...

    A good design which promotes clarity and readability. For example:
      API names which distinguish between mocks and stubs
      AAA (Arrange-Act-Assert) style of testing rather than record-and-replay style

  • Effective Unit Testing 52

    Test execution

    Two common scenarios:

    Tests run during the automated build process
      automated build as a collection of build scripts, automated triggers, a build integration server, and a shared team agreement to work this way
      CI servers manage, record and trigger build scripts based on specific events
      typical scripts: CI build (should be quick!), nightly build, and deployment build scripts

    Tests run by developers on their own machines

    Some tools:
      for build scripts: NAnt, MSBuild, FinalBuilder, Rake
      for CI servers: CruiseControl.NET, Jenkins, Travis CI, TeamCity, Hudson, Bamboo

  • Effective Unit Testing 53

    Test code organization

    Separate integration tests from unit tests
      in different projects, or
      in different folders and namespaces
      e.g. MyProject, MyProject.UnitTests, MyProject.IntegrationTests

    Define a mapping from test classes to code under test; approaches (not exclusive):
      one-test-class-per-class-under-test pattern, e.g. MyClassTest
      one-test-class-per-feature, e.g. MyLoginClassTestForPasswordChanges

    Test method names: [MethodUnderTest]_[Scenario]_[ExpectedBehavior]

  • Effective Unit Testing 54

    Building a test API...

    For code testability & readability/maintenance of tests:

    Use inheritance in test classes for code reuse
      (abstract test infrastructure class pattern) base test classes with:
        common utility methods, factory or template methods
        common setup/teardown code
        test methods enforcing a structure for testing in subclasses

    Create test utility classes (e.g. named AssertUtility, FactoryUtility, ConfigurationUtility) and methods:
      factory methods for complex objects, object configuration methods, ...
      system initialization methods
      methods for handling (setup, connection, read, ...) external resources (e.g. DBs)
      special assert methods

  • Effective Unit Testing 55

    JUnit 4

  • Effective Unit Testing 56

    JUnit4 basics

    Package: org.junit
    Test classes are POJO classes
    Annotations:
      @Test (org.junit.Test)
      @Before: marked methods are executed before each test method run
      @After: marked methods are executed after each test method run

    Using assertions & matchers:
      import static org.junit.Assert.*;
      import static org.hamcrest.CoreMatchers.*;

    So that in your test methods you can write something like:
      assertThat(true, is(not(false)));

  • Effective Unit Testing 57

    Parametrized-Test pattern in JUnit

    Mark the test class with @RunWith(org.junit.runners.Parameterized.class)
    Define private fields and a constructor that accepts, in order, your parameters:
      public MyParamTestCase(int k, String name) { this.k = k; this.name = name; }
    Define a method that returns all your parameter data:
      @org.junit.runners.Parameterized.Parameters
      public static Collection<Object[]> data() {
        return Arrays.asList(new Object[][] { { 10, "roby" } });
      }
    Define @Test methods that work against the private fields holding the parameters.

  • Effective Unit Testing 58

    NUnit

  • Effective Unit Testing 59

    NUnit basics (1)

    Library: NUnit.Framework.dll (add a reference to the project)
    Namespace: using NUnit.Framework;
    Test classes annotated with [TestFixture]
    Annotations:
      [Test]; one or more [TestCase(params)] for multiple parametrizations of a test method (see the sketch below)
      [SetUp]: marked methods are executed before each test method run
      [TearDown]: marked methods are executed after each test method run
      [ExpectedException(typeof(ArgumentException), ExpectedMessage=...)]
      [Ignore]: to skip tests that need to be fixed
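
    A small sketch of a parametrized NUnit test (the parser example is made up):

      using NUnit.Framework;

      [TestFixture]
      public class ParsingTests
      {
          [SetUp]
          public void RunsBeforeEachTest() { }

          // One test method, several parametrizations via [TestCase].
          [TestCase("1", 1)]
          [TestCase("42", 42)]
          [TestCase("-7", -7)]
          public void Parse_ValidNumber_ReturnsItsValue(string text, int expected)
          {
              Assert.AreEqual(expected, int.Parse(text));
          }

          [TearDown]
          public void RunsAfterEachTest() { }
      }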

  • Effective Unit Testing 60

    NUnit basics (2)

    Using assertions & matchers:
      Assert.True(cond, msg);
      Assert.False(cond, msg);
      Assert.AreEqual(obj1, obj2);

      var ex = Assert.Catch(() => /* exceptional code */);
      StringAssert.Contains(..., ex.Message);

    Fluent syntax: Assert.That(strObj, Is.StringContaining(...))

  • Effective Unit Testing 61

    A mocking framework: NSubstitute (1)

    It supports the arrange-act-assert model:
      arrange: create and configure your fake objects
      act: run your SUT
      assert: verify that your fake was called

      ISomething fake = Substitute.For<ISomething>();
      /* act */
      fake.Received().SomethingMethod(...);

    Received() returns the fake object itself, so that calling a method of its interface is checked against the expectation set by Received()

      fake.aMethodCall(Arg.Any<string>()).Returns(myFakeReturnVal);
      // the previous line forces calls to aMethodCall() to return myFakeReturnVal
      Assert.IsTrue(fake.aMethodCall(...));
      fake.When(x => x.m(Arg.Any<string>())).Do(context => { throw new Exception(); });
      Assert.Throws<Exception>(() => fake.m("ahah"));

  • Effective Unit Testing 62

    A mocking framework: NSubstitute (2)

    Argument-matching constraints can be specified:
      mock.Received().m(Arg.Is(obj => obj.Prop1 == ...));

  • Effective Unit Testing 63

    NSubstitute: testing event-related activities

    You can test events in two different directions:

    Testing that someone is listening to an event (IEventSource below stands in for whatever interface declares MyEvent):
      var stub = Substitute.For<IEventSource>();
      stub.MyEvent += Raise.Event<Action<string>>(str);
      mock.Received().MyEventMethod(str);
      // NOTE: public delegate void Action<T>(T obj) // defined in mscorlib

    Testing that someone is triggering an event:
      bool evFired = false;
      stubEventProvider.MyEvent += delegate { evFired = true; };
      SUT.doSomethingThatEventuallyFiresTheEvent();
      Assert.IsTrue(evFired);
