Testing and Mocking Objects - The Art of Mocking

Post on 10-May-2015



The Art Of Mocking with EasyMock in Java.


Testing Overview and Mocking Objects

Deepak Singhvi, Feb 04, 2009

Goal of Presentation

Show how "Mocking Objects" can greatly improve unit testing by facilitating tests that are easier to write and less dependent upon objects outside the test domain.

The ultimate goal - Quality

Agenda

– Software Quality
– Testability and the problems in it
– V model
– Terminologies: Error, Fault, Failure
– Test Methodologies: Functional, Behavioral
– Types of testing: UT, ST, IT, Alpha, Beta, etc.
– Mocking Objects: why mocking, using EasyMock, and examples
– Q & A

Software Quality

[Diagram: quality is supported by SQA, which combines formal technical reviews, software engineering methods, standards and procedures, measurements, and testing.]

Testability

IEEE 610.12

Testability.

(1) Degree to which a system or component facilitates the establishment of test criteria and the performance of tests to determine whether those criteria have been met. ...


Testability

Testability has two facets: controllability and observability.

To test a component we must be able to control its input (and internal state) and observe its outputs.

However, there are many obstacles to controllability and observability: a component under test is almost always embedded in another system, and often we want to test that enclosing system as well.

Why to do testing

– Tests keep you out of the (time-hungry) debugger
– Tests reduce bugs in new features
– Tests reduce bugs in existing features
– Tests reduce the cost of change
– Tests improve design
– Tests allow refactoring
– Tests constrain features
– Testing is fun
– Testing forces you to slow down and think
– Testing makes development faster
– Tests reduce fear

Who Tests the Software?

Developer – understands the system, but will test "gently", and is driven by "delivery".

Independent tester – must learn about the system, but will attempt to break it, and is driven by quality.

The V-model of development

Terminology

Error – represents mistakes made by people.

Fault – the result of an error. May be categorized as:
– Fault of commission: we enter something into the representation that is incorrect.
– Fault of omission: the designer leaves something out, so something that should have been present in the representation is missing.

Failure – occurs when a fault executes.

Incident – the behavior of a fault; the symptom(s) associated with a failure that alert the user to the occurrence of the failure.

Test case – associated with program behavior; carries a set of inputs and a list of expected outputs.

Verification – the process of determining whether the output of one phase of development conforms to its previous phase.

Validation – the process of determining whether a fully developed system conforms to its SRS document.

A Testing Life Cycle

[Diagram: errors introduced during requirement specification, design, and coding become faults; testing surfaces incidents; fault classification, fault isolation, and fault resolution lead to a fix.]

Classification of Test

There are two levels of classification:
– One distinguishes by granularity: unit level, integration level, system level.
– The other is based on methodology: black box (functional) testing and white box (structural) testing.

Black-box and white-box are test design methods. Black-box test design treats the system as a "black-box", so it doesn't explicitly use knowledge of the internal structure. Black-box test design is usually described as focusing on testing functional requirements. Synonyms for black-box include: behavioral, functional, opaque-box, and closed-box. White-box test design allows one to peek inside the "box", and it focuses specifically on using internal knowledge of the software to guide the selection of test data. Synonyms for white-box include: structural, glass-box and clear-box.

Relationship – program behaviors

[Diagram: specified (expected) behavior and programmed (observed) behavior overlap. Behavior that is specified but not programmed is a fault of omission; behavior that is programmed but not specified is a fault of commission; the overlap is the correct portion.]

Test methodologies

Functional (black box) testing inspects specified behavior:
– Equivalence partitioning
– Boundary analysis
– Error guessing

Structural (white box) testing inspects programmed behavior:
– Statement coverage
– Decision coverage
– Condition coverage
– Decision/condition coverage
– Multiple condition coverage
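As a worked illustration of how these structural criteria differ (the method and the test values below are invented for the example, not taken from the slides):

```java
// Hypothetical example: structural coverage criteria on a tiny method.
public class Discount {
    // Returns a discount rate given age and membership status.
    public static double rate(int age, boolean member) {
        double r = 0.0;
        if (age >= 65 || member) {   // one decision made of two conditions
            r = 0.1;
        }
        return r;
    }
}
// Statement coverage:  rate(70, false) alone executes every statement.
// Decision coverage:   add rate(30, false) so the `if` also evaluates false.
// Condition coverage:  rate(70, false) and rate(30, true) give each condition
//                      both truth values.
// Multiple condition coverage: all four combinations of (age >= 65, member).
```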

When to use what

Few guidelines are available. A logical approach could be:

– Prepare functional test cases as part of the specification. However, they can be used only after the unit and/or system is available.

– Prepare structural test cases as part of the implementation/code phase.

– Perform unit, integration, and system testing in that order.

More types of Testing

Unit Testing: The most 'micro' scale of testing; to test particular functions or code modules. Typically done by the programmer and not by testers, as it requires detailed knowledge of the internal program design and code. Not always easily done unless the application has a well-designed architecture with tight code; may require developing test driver modules or test harnesses.

Integration Testing - testing of combined parts of an application to determine if they function together correctly. The 'parts' can be code modules, individual applications, client and server applications on a network, etc. This type of testing is especially relevant to client/server and distributed systems.

Functional Testing: Black-box type testing geared to functional requirements of an application; this type of testing should be done by testers.

System Testing: Black-box type testing that is based on overall requirements specifications; covers all combined parts of a system.

Sanity Testing: Typically an initial testing effort to determine if a new software version is performing well enough to accept it for a major testing effort. For example, if the new software is crashing systems every 5 minutes, bogging down systems to a crawl, or destroying databases, the software may not be in a 'sane' enough condition to warrant further testing in its current state.

Still more types of testing

Performance Testing: This term is often used interchangeably with 'stress' and 'load' testing. Ideally 'performance' testing (and any other 'type' of testing) is defined in requirements documentation or QA or Test Plans.

Usability Testing: Testing for 'user-friendliness'. Clearly this is subjective, and will depend on the targeted end-user or customer. User interviews, surveys, video recording of user sessions, and other techniques can be used. Programmers and testers are usually not appropriate as usability testers.

Installation/Uninstallation Testing: Testing of full, partial, or upgrade install/uninstall processes.

Security Testing: Testing how well the system protects against unauthorized internal or external access, willful damage, etc; may require sophisticated testing techniques.

Compatibility Testing: Testing how well software performs in a particular hardware/software/operating system/network/etc. environment.

Ad-hoc Testing: Testing the application in a random manner.

Alpha Testing: Testing of an application when development is nearing completion; minor design changes may still be made as a result of such testing. Typically done by end-users or others, not by programmers or testers.

Beta Testing: Testing when development and testing are essentially completed and final bugs and problems need to be found before final release. Typically done by end-users or others, not by programmers or testers.

Testing

[Diagram: taxonomy of testing types.]

Static testing: walkthroughs, code reviews, inspections.

Dynamic testing:
– White box: unit testing
– Black box: integration testing, system testing, UAT (alpha testing, beta testing)

Functional testing (system testing): smoke testing, sanity testing, regression testing, retesting, ad-hoc testing, gorilla testing, negative testing.

Non-functional testing (system testing): usability testing, performance testing (load testing, volume testing, stress testing).

Testing Steps

Art of Mocking Objects

What are the Problems of Software Testing?

• Time is limited

• Applications are complex

• Requirements are fluid

Few Definitions

Test Fixture/ Test Class – A class with some unit tests in it.

Test (or Test Method) – a test implemented in a Test Fixture

Test Suite – A set of tests grouped together.

Test Harness/ Runner – The tool that actually executes the tests.

Mock Objects

The objective of unit testing is to exercise just one method at a time.

But what happens when the method depends on other hard-to-control elements, such as:

– the network, or

– a database?

Method controllability is threatened by these hard-to-control elements.
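To regain controllability, such a dependency can be hidden behind an interface and replaced by a fake in tests. A minimal self-contained sketch (all names here are invented for illustration):

```java
import java.util.*;

// Hypothetical example: a class whose method depends on a hard-to-control
// element (a database), reached only through an interface.
interface UserStore {                       // abstraction over the database
    String findName(int id);
}

class Greeter {
    private final UserStore store;
    Greeter(UserStore store) { this.store = store; }   // dependency passed in

    String greet(int id) {
        String name = store.findName(id);
        return name == null ? "Hello, stranger" : "Hello, " + name;
    }
}

// In a test, a trivial in-memory fake restores full control over the input:
class FakeUserStore implements UserStore {
    private final Map<Integer, String> rows = new HashMap<>();
    void put(int id, String name) { rows.put(id, name); }
    public String findName(int id) { return rows.get(id); }
}
```

The test now controls exactly what `findName` returns, with no database involved.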

Why engage in Object Mocking?

– The real object has behavior that is hard to cause or is non-deterministic.
– The real object is difficult to set up.
– The real object is slow.
– The real object has (or is) a UI.
– The test needs to query the object, but the queries are not available.
– The real object does not exist.

Mock Objects - Definition

Using mock objects we can get around these problems.

A mock object is a test pattern that provides a fake implementation of objects that are hard to control.

Its purpose is to simulate the real objects, strictly for testing.

Mock objects have the same interface as the real objects.

The importance of IoC

Inversion of Control (aka DIP – the Dependency Inversion Principle)

– A. High-level modules should not depend on low-level modules. Both should depend on abstractions.

– B. Abstractions should not depend on details. Details should depend on abstractions.

OOD principle

Inversion Of Control

Inversion of Control, Dependency Injection, the Hollywood Principle, etc.

Instead of instantiating concrete class references in your class, depend on an abstraction and allow your concrete dependencies to be given to you.


Concrete Class Dependency

Allow dependency to be passed in
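The two labels above captioned code slides; a minimal sketch of the contrast they drew, with invented class names:

```java
// Hypothetical sketch: concrete class dependency vs. passed-in dependency.

// Concrete class dependency: the collaborator is constructed inside,
// so a test cannot substitute anything for it.
class ReportServiceHardwired {
    private final SmtpMailer mailer = new SmtpMailer(); // fixed at compile time
    void send(String report) { mailer.deliver(report); }
}

// Allow the dependency to be passed in: depend on an abstraction.
interface Mailer { void deliver(String body); }

class SmtpMailer implements Mailer {
    public void deliver(String body) { /* real SMTP call would go here */ }
}

class ReportService {
    private final Mailer mailer;
    ReportService(Mailer mailer) { this.mailer = mailer; } // injected
    void send(String report) { mailer.deliver(report); }
}
```

With the injected version, a test can pass any `Mailer` (a recording fake, a mock) instead of the real SMTP client.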

Some mock principles to follow

Mocking interfaces and classes outside your code: in general, do not mock code you do not own. For example, do not mock Active Directory or LDAP (Lightweight Directory Access Protocol); instead, create your own interface to wrap the interaction with the external API classes. Like:
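A sketch of such a wrapper (the interface and class names are invented, and the real LDAP lookup is elided):

```java
// Hypothetical sketch: wrap an external directory API behind your own
// interface, so tests fake the wrapper rather than LDAP itself.

interface DirectoryGateway {
    boolean userExists(String uid);
}

// Production implementation delegates to the real LDAP client (elided here).
class LdapDirectoryGateway implements DirectoryGateway {
    public boolean userExists(String uid) {
        // ... a real javax.naming/LDAP lookup would go here ...
        throw new UnsupportedOperationException("requires a directory server");
    }
}

// Application code depends only on the interface you own.
class AccessChecker {
    private final DirectoryGateway directory;
    AccessChecker(DirectoryGateway directory) { this.directory = directory; }
    boolean mayLogIn(String uid) { return directory.userExists(uid); }
}
```

A test can now supply a lambda or hand-written fake for `DirectoryGateway` and never touch LDAP.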

Rules of thumb

If you cannot test code – change it so you can.

Test first!

Only ONE concrete class per test – MOCK THE REST.

How many mocks?

How Many Mock Objects in any Given Test?

There should never be more than 2-3 mock objects involved in any single unit test.  I wouldn’t make a hard and fast rule on the limit, but anything more than 2 or 3 should probably make you question the design.  The class being tested may have too many responsibilities or the method might be too large.  Look for ways to move some of the responsibilities to a new class or method.  Excessive mock calls can often be a sign of poor encapsulation. 

Only Mock your Nearest Neighbor

Ideally you only want to mock the dependencies of the class being tested, not the dependencies of the dependencies. Painful experience shows that deviating from this practice creates unit test code that is tightly coupled to the internal implementation of a class's dependencies.

The law of Demeter (LoD)

The Law of Demeter (LoD) is a simple style rule for designing object-oriented systems. "Only talk to your friends" is the motto.

Each unit should only use a limited set of other units: only units “closely” related to the current unit.

“Each unit should only talk to its friends.” “Don’t talk to strangers.”

Main Motivation: Control information overload. We can only keep a limited set of items in short-term memory.

Too many mocks, and mocking past your immediate neighbors, are symptoms of violating this principle.

Law of Demeter

FRIENDS: the "closely related" units a class may talk to.

Violations: Dataflow Diagram

[Diagram: method m of class A reaches through its neighbors B and C to call p() and q() directly on distant objects P and Q via foo() and bar().]

OO Following of LoD

[Diagram: method m instead calls new neighbor methods foo2() and bar2(), and each neighbor calls p() or q() on its own friends.]

Testing is easy in isolation: the test class exercises the class under test directly.

Testing is harder with dependencies: the class under test drags its collaborators into the test.

... so remove the dependencies (for developer testing): surround the class under test with mocks.

Examples

To get a mock object, we need to: 1) create a mock object for the interface we would like to simulate, 2) record the expected behavior, and 3) switch the mock object to replay state.

EasyMock uses a record/replay metaphor for setting expectations. For each object you wish to mock you create a mock object. To indicate an expectation you call the method, with the arguments you expect on the mock. Once you've finished setting expectations you call replay - at which point the mock finishes the recording and is ready to respond to the primary object. Once done you call verify.
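The record/replay life cycle can be imitated without the library. The self-contained sketch below hand-rolls the three phases for a single notification method, purely to show the mechanics; it is not EasyMock code (with EasyMock you would call createMock(Collaborator.class), then replay(mock) and verify(mock)):

```java
import java.util.*;

// Hand-rolled imitation of EasyMock's record/replay/verify life cycle for
// one method. The Collaborator interface name is taken from the slides.
interface Collaborator {
    void documentAdded(String title);
}

class ManualMock implements Collaborator {
    private final List<String> expected = new ArrayList<>();
    private final List<String> actual = new ArrayList<>();
    private boolean replaying = false;

    public void documentAdded(String title) {
        if (replaying) actual.add(title);   // replay phase: log actual calls
        else expected.add(title);           // record phase: log expectations
    }
    void replay() { replaying = true; }     // switch to replay state
    void verify() {                         // fail if expectations were not met
        if (!expected.equals(actual))
            throw new AssertionError("expected " + expected + " but got " + actual);
    }
}
```

Usage mirrors the metaphor: call `documentAdded("New Document")` to record, `replay()`, let the class under test fire the call, then `verify()`.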

testRemoveNonExistingDocument: after activation in step 3, mock is a mock object for the Collaborator interface that expects no calls. This means that if we change our ClassUnderTest to call any of the interface's methods, the mock object will throw an AssertionError.

Verifying behavior: there is one error that we have not handled so far. If we specify behavior, we would like to verify that it is actually used; the current test would pass even if no method on the mock object were called. To verify that the specified behavior has been used, we have to call verify(mock).

Expecting an Explicit Number of Calls

Up to now, our test has only considered a single method call. The next test should check whether the addition of an already existing document leads to a call to mock.documentChanged() with the appropriate argument. To be sure, we check this three times

To avoid the repetition of mock.documentChanged("Document"), EasyMock provides a shortcut. We may specify the call count with the method times(int times) on the object returned by expectLastCall(). The code then looks like:

expectLastCall().times(3);

Specifying Return Values

For specifying return values, we wrap the expected call in expect(T value) and specify the return value with the method andReturn(Object returnValue) on the object returned by expect(T value).

As an example, we check the workflow for document removal. If ClassUnderTest gets a call for document removal, it asks all collaborators for their vote for removal with calls to byte voteForRemoval(String title). Positive return values are a vote for removal. If the sum of all return values is positive, the document is removed and documentRemoved(String title) is called on all collaborators.
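A self-contained sketch of that removal workflow, with hand-stubbed collaborators standing in for EasyMock's expect(mock.voteForRemoval("Document")).andReturn(value) (the class names are invented):

```java
import java.util.*;

// Sketch of the document-removal vote described above. Each Voter's return
// value is hand-stubbed here, playing the role of an andReturn(...) setup.
interface Voter {
    byte voteForRemoval(String title);
}

class DocumentManager {
    private final List<Voter> collaborators;
    DocumentManager(List<Voter> collaborators) { this.collaborators = collaborators; }

    // Removes the document only if the summed votes are positive.
    boolean removeDocument(String title) {
        int sum = 0;
        for (Voter v : collaborators) sum += v.voteForRemoval(title);
        return sum > 0;
    }
}
```

Stubbing different byte values lets a test drive both outcomes: a positive sum removes the document, a non-positive sum keeps it.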

Relaxing Call Counts

To relax the expected call counts, there are additional methods that may be used instead of times(int count):

times(int min, int max) – to expect between min and max calls,

atLeastOnce() – to expect at least one call, and

anyTimes() – to expect an unrestricted number of calls.

If no call count is specified, one call is expected. If we would like to state this explicitly, once() or times(1) may be used.

Flexible Expectations with Argument Matchers

If you would like to use matchers in a call, you have to specify matchers for all arguments of the method call. There are a couple of predefined argument matchers available:

eq(X value) – matches if the actual value equals the expected value. Available for all primitive types and for objects.

anyBoolean(), anyByte(), anyChar(), anyDouble(), anyFloat(), anyInt(), anyLong(), anyObject(), anyShort() – matches any value. Available for all primitive types and for objects.

eq(X value, X delta) – matches if the actual value is equal to the given value allowing the given delta. Available for float and double.

aryEq(X value) – matches if the actual value is equal to the given value according to Arrays.equals(). Available for primitive and object arrays.

isNull() – matches if the actual value is null. Available for objects.

notNull() – matches if the actual value is not null. Available for objects.

same(X value) – matches if the actual value is the same as the given value. Available for objects.

Matchers contd.

isA(Class clazz) – matches if the actual value is an instance of the given class, or of a class that extends or implements the given class. Null always returns false. Available for objects.

lt(X value), leq(X value), geq(X value), gt(X value) – matches if the actual value is less / less or equal / greater or equal / greater than the given value. Available for all numeric primitive types and Comparable.

startsWith(String prefix), contains(String substring), endsWith(String suffix) – matches if the actual value starts with/contains/ends with the given value. Available for Strings.

matches(String regex), find(String regex) – matches if the actual value / a substring of the actual value matches the given regular expression. Available for Strings.

and(X first, X second) – matches if the matchers used in first and second both match. Available for all primitive types and for objects.

or(X first, X second) – matches if one of the matchers used in first and second matches. Available for all primitive types and for objects.

not(X value) – matches if the matcher used in value does not match.

cmpEq(X value) – matches if the actual value is equal according to Comparable.compareTo(X o). Available for all numeric primitive types and Comparable.

cmp(X value, Comparator&lt;X&gt; comparator, LogicalOperator operator) – matches if comparator.compare(actual, value) operator 0, where the operator is &lt;, &lt;=, &gt;, &gt;= or ==. Available for objects.

capture(Capture&lt;T&gt; capture) – matches any value but captures it in the Capture parameter for later access. You can do and(someMatcher(...), capture(c)) to capture a parameter from a specific call to the method.

Limitations of Mocks

Can only mock interfaces and virtual members (generally)

Why not JMock

Too stringy – IntelliJ IDEA and Eclipse cannot refactor method names that are hidden inside strings.

Why not NMock

Same thing – stringy!!!

Mocks to use

Rhino.Mocks (.net) – http://www.ayende.com/projects/rhino-mocks.aspx

EasyMock (java) – http://www.easymock.org/

NOT STRINGY!!!

Thank You
deepak.singhvi@gmail.com

Object-oriented design principles

There are five principles of class design:

* (SRP) The Single Responsibility Principle
* (OCP) The Open Closed Principle
* (LSP) The Liskov Substitution Principle
* (DIP) The Dependency Inversion Principle
* (ISP) The Interface Segregation Principle

There are three principles of package cohesion:

* (REP) The Reuse Release Equivalence Principle
* (CCP) The Common Closure Principle
* (CRP) The Common Reuse Principle

There are three principles of package coupling:

* (ADP) The Acyclic Dependencies Principle
* (SDP) The Stable Dependencies Principle
* (SAP) The Stable Abstractions Principle