
Testing of Embedded Software Products

Abstract

This paper is devoted to organizing the testing process for embedded software products and suggests useful techniques in this area. Some details discussed in the paper are specific to offshore development and testing teams.

Keywords: testing, embedded software.

Introduction

This paper is devoted to the questions of organizing the testing process for embedded software products. The embedded software domain has its own specifics that influence all components of the development process, and testing is not an exception. To understand what practices and techniques work best for this domain, we should first understand what the main differences between embedded software and other software types are. This analysis is done from the point of view of an offshore team. For an in-house team some details differ, but the overall approach remains the same.

Embedded software domain specifics

The first eye-catching thing is that embedded software is significantly less visible to the end users. User interfaces are limited. There may be a console-based text menu, a simple command-line interface, a set of digital inputs and outputs, or something similar. Rarely do we get more than that. On the other hand, the inter-component interfaces can be very rich and complex, including APIs to the higher-level software, implementations of various communication, data exchange, and control standards, and so on. Thus the main focus of embedded software testing is not on testing the user interfaces, but on testing the components not visible to the end users.

The second major difference is the level of dependence on hardware specifics. Embedded software is the level of software closest to the hardware. Other software types, such as operating systems and applications, may be built upon the interfaces provided by the embedded software, such as a BIOS or boot loader. The embedded software itself, even if it uses some more or less standard framework underneath, needs to care more about hardware details. Embedded software by definition is designed for a particular hardware unit (or, in the general case, a set of hardware units). Often, those hardware units are developed in parallel with the embedded software, and the created software is the first to run on them. Thus, unlike application development, in the embedded world we can't rely on the fact that the operating system is already tested on that hardware platform, or that the ability of the hardware itself to execute various software is already thoroughly tested.

As a result, the developed software may have solutions and workarounds specific to particular hardware revisions. Operation of the embedded software may depend on things that we usually don't care about for application-level software, like the length of a cable, the type of a mouse, the serial port frequency, or the type of the devices connected to the same bus. That makes successful execution of embedded software much more dependent on the particular hardware unit and on the behavior of the other units on the same bus or network. Compared to the conventional case, race conditions are mostly caused not by the interaction of the internal software components, but rather by the interactions of the software with the environment. So the number of factors and parameters that can influence the operation is bigger than for the average application, and reproducing a defect is more difficult.

Support operations, such as software deployment, upgrades, and getting debug information, also differ from what we are used to seeing for conventional application-level software, with its plug-and-play concept, installation wizards, and the ability to attach a convenient debugger from one of the IDEs, or at least to dump all debug output to a large file on disk. In the embedded world we often need to put the software into a special mode, disable EEPROM write protection, attach to some file-distribution (e.g., FTP) server, reboot a couple of times, and take care of other similar things. That makes the software update process lengthy and inconvenient. Besides, it might happen that the device that stores your software supports only a limited number of rewrite cycles.

Thus even during the active development phases, software versions tend to be updated less frequently than for other forms of software. A new revision would typically be deployed only after a significant number of defects are resolved. The testing process should therefore attempt to find as many defects as possible, and not stop after the first one, even if it makes the product crash.

Speaking of the difficulties with debugging and gathering additional information for defects, the embedded software development process differs in those aspects as well. No or limited non-volatile storage, the inability to run a debugger on the same system, and an often reduced version of the underlying operating system (or no OS at all), which rules out remote debugging methods, all make the debugging task much harder than for conventional application-level software.

As the last but not least characteristic of the embedded software domain, I should mention the high level of robustness typically required from this software. Serving as a base for higher-level application software, and working in an environment that doesn't allow for easy maintenance, embedded software is held to very strict robustness requirements.

Embedded software testing challenges

The specifics of the embedded software domain imply certain requirements for the organization of the testing process. Let's quickly iterate through these specifics and understand what challenges they pose for the testing process.

The focus on non-human interfaces means that, regardless of our attitude to it, we can't use a manual interface testing approach. To test the developed embedded software we first need to develop more software. Special applications, called test agents, need to be created to provide stimulus and capture responses through the non-human interfaces. It is also often required to emulate particular electrical signal patterns on various data lines to test the behavior of the embedded software for such inputs. This can be done using a special hardware/software complex, which also implies having a special test agent to control that complex.
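
As an illustration, here is a minimal sketch of such a test agent in Python, assuming a hypothetical target that accepts newline-terminated commands over TCP; the host, port, and command set are invented for the example:

    # test_agent.py -- minimal stimulus/response agent (illustrative sketch).
    import socket

    class TestAgent:
        def __init__(self, host: str, port: int, timeout: float = 5.0):
            # Connect to the (hypothetical) command interface of the target.
            self.sock = socket.create_connection((host, port), timeout=timeout)

        def send_stimulus(self, command: str) -> str:
            """Send one command and return the raw response line."""
            self.sock.sendall((command + "\n").encode("ascii"))
            return self._read_line()

        def _read_line(self) -> str:
            buf = b""
            while not buf.endswith(b"\n"):
                chunk = self.sock.recv(1)
                if not chunk:          # connection closed by the target
                    break
                buf += chunk
            return buf.decode("ascii", errors="replace").strip()

        def close(self):
            self.sock.close()

    # Usage: drive a digital output and verify the acknowledgement
    # (the "SET DOUT" command is an assumption, not a real protocol):
    # agent = TestAgent("10.0.0.5", 5000)
    # assert agent.send_stimulus("SET DOUT 3 1") == "OK"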

The high level of hardware dependency and the fact that the embedded software is often developed in parallel with the hardware lead to several important consequences. First, there may be only a few samples of the newly developed hardware. Second, the range of hardware unit types to test our software on can be quite wide. Thus, typically the testing team has to share a very limited set of hardware units among its members and/or organize remote access to the hardware. In the latter case, the testing team has no physical access to the hardware at all.

Another aspect of having the software developed for freshly created hardware is the high ratio of hardware defects that can be discovered during the testing process. Any discovered defect may be rooted in the hardware, not only the software. Always keeping that in mind is especially important for embedded software projects. What's worse, the software may work just fine with one revision of the hardware and not work with another. That's definitely a challenge for the testing process.

We have already mentioned that defects are harder to reproduce in the embedded case. That forces the embedded testing process to value each defect occurrence much more highly than in a conventional case and to gather as much information as possible to simplify the search for the root cause of the defect. Combined with the very limited debug capabilities of embedded products, that gives us another challenge.

Limitations related to software updates make the testing process very persistent in discovering as many bugs as possible for a given software revision. They also increase the importance of a well-organized build and deployment procedure.

The high requirements on the robustness/availability front lead to the need for very thorough stress testing. Another consequence of that fact is the need to emulate sequences of rapidly following events to check for race conditions under those circumstances.


Embedded software testing approach

As is clear from the previous paragraphs, the testing process in the embedded software case faces a number of specific challenges. This section suggests an approach that satisfies all the needs listed above and that has proven its viability in the real-life embedded software projects performed by Auriga.

Automated vs. manual testing

First of all, it is obvious that using manual testing as the main method for such projects is very difficult, if not impossible. Routine, time-consuming, repetitive stress testing, work with non-human interfaces, the need to discover race conditions in fast sequences of events, and some other factors all stand against it. Thus automated testing is the first cornerstone of the approach.

Of course, manual testing, as always, has its percentage of tests that are more cost-effective to run manually than to automate. But this percentage is smaller than usual, driven down by the higher relative efficiency of automation in a remote access environment (the alternative to which is organizing a trip to the remote lab) and by the special supporting means described later in this chapter. In any case, automation covers more than 95% of the test cases, and we'll mostly concentrate on it in this section.

Having said that, it must be mentioned that automation and the usage of test agents don't simply change the way of executing the test cases and presenting results; they affect all aspects of the testing process.

Test design and tracing requirements

Two things must be understood. First, a great number of the test cases created for embedded software simply cannot be executed manually. Thus the straightforward test design approach (get the requirements, design test cases, run them manually, optimize, fix, add detail, create scripts based on the manual cases) doesn't work here. Second, unlike in the regular methodology, the software requirements specification does not lead to, and is not traced to, just the set of test cases. Instead, based on the software requirements of the embedded software, two artifacts are created: the set of test cases and the requirements for the test support infrastructure, consisting of the automation framework and test agents. In the formal sense, the embedded software requirements are traced to the test cases, which in turn are traced to the software requirements for the agents and framework. But from the practical perspective, test cases and support software requirements cannot be separated.

Validation of the test support infrastructure

The second influence on the testing process is the fact that the support software must itself be validated. Basically, that means that the test agents and the automation framework must be tested themselves: test design, execution, coverage analysis, and all other activities are performed for them as well. Test agents are typically relatively simple software entities with a limited set of requirements, so testing them is significantly simpler than testing the original software product. Still, they often need to implement complex data exchange protocols (including encryption, authentication, compression, connection establishment, and what not), so testing them is not at all simple.

Complete testing of the test agents is often impossible without a more-or-less working version of the target product. So, passing the tests for a test agent also means passing basic functionality tests in a particular area for the target software. During this testing, previously verified test agents and hardware debugging tools (bus analyzers, network sniffers, JTAG probes, and oscilloscopes) are extensively used. The hardware debugging tools are especially useful at this stage of achieving a basically functional application. That has another natural implication for the embedded software development process: the design of the test support tools is done in parallel with the target embedded software design, and the development plans for the target software and test agents are highly interdependent.

The second component of the test support infrastructure, the automation framework, also obviously requires validation. However, unlike the test agents, which perform functions specific to a particular embedded product, it can and should be designed and implemented as project-independent, at least inside some wide technological or organizational segment. That saves a great amount of testing effort, since the validation can be done only once and doesn't need to be repeated for every subsequent project.

Defect tracking and analysis

Besides the direct verification and validation effort, the need to validate the test support infrastructure also influences the defect lifecycle and the defect tracking repository setup. For embedded software, several possible origins should be considered for each defect: the target software, the underlying hardware, and the test support infrastructure. One practical consequence is the need to specify the target software, hardware, and test support suite IDs in every discovered defect record. Another is including a representative of the test support infrastructure development team in the triage committee for the project.
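
For illustration, a defect record carrying all three component IDs might look like the following sketch; the field names and example values are assumptions, not a prescribed schema:

    # Illustrative defect record keeping all three possible origins in view
    # during triage; every field name below is invented for the example.
    from dataclasses import dataclass

    @dataclass
    class DefectRecord:
        summary: str
        target_sw_build: str    # e.g. "fw-2.4.1-b173" (hypothetical)
        hardware_id: str        # unit ID plus revision, e.g. "board-07/rev-C"
        test_suite_id: str      # version of the test support infrastructure
        suspected_origin: str   # "software" | "hardware" | "test-infra"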

For hardware-caused defects, the testing team must include a person with hardware engineering skills and experience with the various hardware debugging tools mentioned above. This person is also included in the triage committee, examines each defect from the point of view of the probability of its having a hardware origin, provides guidance to the team regarding suspicious signs in hardware behavior, and gathers additional data for analysis if a hardware defect is suspected.

Hardware coverage matrix

A higher probability of hardware defects doesn't just lead to the need to specify a hardware ID in the defect record and to have a hardware engineer on the team. The target software must also be tested on the range of possible target hardware types and revisions. That doesn't mean that each test case must be run on all possible hardware units/types/revisions. A conscious choice between coverage and cost/time must be made. It is often possible to combine hardware units into groups for testing each functionality area, or at least to perform random selection for regression testing purposes. The test strategies defined for different projects may vary in this aspect based on the project constraints and requirements.

In any case, a hardware coverage matrix is created. All combinations of test case and hardware unit that should be verified are marked in this matrix. For obvious reasons, the automation framework should allow specifying and changing the matrix without affecting the bodies of the individual test cases.
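
A minimal sketch of such a matrix, kept outside the test case bodies as required; the test case names and unit IDs are invented for the example:

    # Coverage matrix: test case -> hardware units it must be verified on.
    # Editing this table changes the run plan without touching any test body.
    COVERAGE_MATRIX = {
        "tc_boot_basic":    ["board-01/rev-A", "board-02/rev-B", "board-03/rev-C"],
        "tc_serial_stress": ["board-01/rev-A"],   # one representative per group
        "tc_eeprom_update": ["board-02/rev-B", "board-03/rev-C"],
    }

    def planned_runs():
        """Yield every (test case, hardware unit) pair that must be verified."""
        for case, units in COVERAGE_MATRIX.items():
            for unit in units:
                yield case, unit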

Software builds

Establishing the right build and deployment process is also essential for the success of the embedded software testing task. As mentioned, it is important to correctly identify the target software revision for which a defect is revealed. Several techniques are used to address the issues related to software build identification.

One useful practice is obtaining the build number from the running target software at the beginning of the test suite execution; embedded software that has some user interface often allows getting that information. This practice prevents incorrect identification of the version in defect records if a test suite was run against the wrong version by mistake.
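
A sketch of this double-check, assuming a hypothetical console handle whose run() method executes a command on the target and returns its output; the VERSION command and the build-number format are assumptions:

    # Abort the suite early if the deployed build is not the expected one,
    # so that no defects get filed against the wrong software revision.
    EXPECTED_BUILD = "fw-2.4.1-b173"   # hypothetical build identifier

    def verify_build(console) -> None:
        reported = console.run("VERSION").strip()
        if reported != EXPECTED_BUILD:
            raise RuntimeError(
                f"wrong build on target: {reported!r}, "
                f"expected {EXPECTED_BUILD!r}; aborting the test suite")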

Another practice is used for the smoke tests of regular software releases. According to this practice, the test support infrastructure contains all the necessary tools for making the build, assigning it a unique number, tagging the source tree, archiving the binaries, transferring the binaries to the deployment server (e.g., an FTP server) and from it to the target board, and updating the software on the board. Such operations may be performed at the beginning of the overnight smoke test for a regular build. For projects with no limitations on the number of software updates for the target hardware unit, this operation can be performed completely (build and deploy on the board) or partially (deploy only) for every build to ensure that the right version is used during the testing.
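
The sequence might look roughly like the sketch below. The build command, tag naming, and FTP details are placeholders for project-specific tooling, and the final board-update step is omitted as product-specific:

    # Build, tag, archive, and stage a firmware image for an overnight smoke
    # test. All commands, hosts, and credentials here are illustrative.
    import subprocess
    import ftplib

    def build_and_stage(build_no: int) -> None:
        tag = f"smoke-build-{build_no}"
        subprocess.run(["make", "firmware"], check=True)   # make the build
        subprocess.run(["git", "tag", tag], check=True)    # tag the source tree
        with ftplib.FTP("deploy.example.com") as ftp:      # deployment server
            ftp.login("tester", "secret")
            with open("firmware.bin", "rb") as image:
                ftp.storbinary(f"STOR {tag}.bin", image)   # archive the binary
        # Pulling the image from the FTP server onto the board and rebooting
        # it is product-specific and therefore not shown here.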

Debug support

One of the goals of a good testing process, besides revealing as many defects as possible, should be assisting the developers in resolving the defects. The value of a defect that was seen by the testing team but then could not be reproduced by the development team, and thus could not be fixed due to insufficient information, is low. As we discussed, in the embedded world defects are harder to reproduce, so as much information as possible should be gathered on the first occurrence. Because debugging is also more difficult for embedded software, the development team often uses special debug builds or special debug modes of the target software with increased logging capabilities. There are two implications of this situation for the testing process.

First, the timing and other characteristics of the debug and release versions of the target software may differ, and a defect seen on one version may never be seen on the other. Thus it is important to keep track of the software revision for which the defect was discovered by testing. This topic is discussed separately in this paper.

Second, the test cases should be designed to allow using these extended capabilities of the debug version or mode. When a defect is revealed, the test case should store the debug output of the software in the test log, tied to the test result, so that the developer assigned to resolve the defect can use this data during the analysis. The test case should also be able to detect the type of the target software version (debug or release) or switch between the modes. The details are highly project-specific and are usually implemented either through parameters passed to the test case or by employing a specialized test agent.
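
A sketch of how a test case might use those capabilities, assuming hypothetical BUILDTYPE and DUMPLOG console commands and a log object provided by the framework:

    # Run one test case, detect the build type, and tie the extended debug
    # output to the verdict on failure. Every command name is an assumption.
    def run_case(console, test_case, log) -> bool:
        build_type = console.run("BUILDTYPE").strip()   # "debug" or "release"
        passed = test_case(console)
        if not passed and build_type == "debug":
            # Pull the extended debug log while the state is still fresh and
            # store it next to the verdict for the assigned developer.
            log.attach("debug-output.txt", console.run("DUMPLOG"))
        log.record(test_case.__name__, passed, build_type)
        return passed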

Test runs

Due to the contradictory characteristics of the embedded software product, two types of test runs are employed for it.

As was said, it is often beneficial to reveal as many defects as possible for the deployed version before updating the software revision. An ideal tool for that is a batch run of test cases. All selected test cases are run according to the hardware coverage matrix, and the results are stored in the test log. If an error is detected, the test run doesn't stop, but rather captures all possible information about the system state at the time the defect was discovered (all the debug support techniques are important here) and continues with the next test case. Needless to say, the test support framework should perform a complete clean-up after each test case to avoid influence between test cases in general, and a series of failed cases after the first crash in particular. Such clean-ups often include a system reboot: typically a software reboot after a successfully completed test case, and a hardware reboot after a failure.

Such test runs are lengthy (the required time is further increased by the need to clean up) and are typically scheduled to run in automatic mode overnight. Such batch runs are especially useful as smoke/regression tests for new builds.

In certain cases, the second type of test run is used. The tests are run until the first failure; if a failure occurs, the test run is stopped, the system state is preserved, and a developer is notified and allowed to examine the system state in detail to reveal the root cause of the failure. It is also possible to create an automation framework that breaks the test run only if the failure occurs in a particular test case (or a set of test cases). Such test runs are useful for hunting down the defects for which the information gathered in batch mode is insufficient, so that a developer can get access to the system at the moment of the defect to investigate it.
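
Both run types can be served by one runner whose stop policy is a parameter. The following sketch assumes hypothetical board helpers (soft_reboot, hard_reboot, capture_state, notify_developer) standing in for the project-specific operations described above:

    # One runner, two modes: "batch" never stops and cleans up after every
    # case; "hunt" stops at the first (matching) failure and preserves state.
    def run_suite(test_cases, board, mode="batch", break_on=None):
        all_passed = True
        for case in test_cases:
            if case(board):
                board.soft_reboot()                    # routine clean-up
                continue
            all_passed = False
            board.capture_state()                      # gather everything we can
            if mode == "hunt" and (break_on is None or case.__name__ in break_on):
                board.notify_developer(case.__name__)  # leave the system as-is
                return False                           # stop: preserve the state
            board.hard_reboot()                        # isolate the next cases
        return all_passed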

Virtual laboratory

The methodological approaches described in the previous sections allow forming a testing process suited to the specifics of embedded software testing. However, there is another important part of the approach: a software and hardware solution called the Virtual Laboratory, or VL. This solution provides the means for solving several technically complex problems faced during the testing.

[Figure: Virtual Laboratory architecture. A VL server / test master is connected to the target chassis via digital I/O, serial ports, and local Ethernet, controls a power bar, and serves a local VL client and, over external Ethernet and Secure Shell, a remote VL client.]


First, it contains a database of the existing hardware units. The units are identified by simple string IDs. For each unit, it is possible to specify several properties, such as the hardware revision, communication IDs (IP address, MAC address), login credentials, etc. For a test script that means that from a unique unit ID passed as a parameter, it can restore all the other parameters required for communicating with the board and providing complete defect reports.

Second, the VL supports serial consoles, power bars (devices allowing switching power on and off for the target units), and dry contact controllers (relays). Console, relay, and power bar lines are associated with a particular unit in the hardware units database. As a result, all operations with a particular unit are performed by the test scripts based on the name of that unit.

Third, the VL provides means for ensuring exclusive access to the shared hardware. Before accessing a unit's console, or toggling the power or some other relay of the unit, the test script must first 'lock' that unit using a special command. While the unit is locked, no other entity can 'lock' it. After all testing actions are performed, the test script 'unlocks' the unit, allowing others to control it. This exclusive locking mechanism prevents interference between different test scripts and human operators attempting to run tests on the same board simultaneously.

The VL provides a human-friendly command-line interface over a secured connection and can be used both by test scripts and by human test operators.

The VL serves as the base for executing all automated and manual tests for the target software.
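
The locking discipline maps naturally onto a context manager. In this sketch the VL client API (lock, unlock, unit) is hypothetical, standing in for the commands a real VL exposes over its command-line interface:

    # Guarantee that a unit is always unlocked, even if the test crashes.
    from contextlib import contextmanager

    @contextmanager
    def locked_unit(vl, unit_id: str):
        vl.lock(unit_id)             # fails if another script/operator holds it
        try:
            yield vl.unit(unit_id)   # unit record: console line, power bar, IPs
        finally:
            vl.unlock(unit_id)       # always release the shared hardware

    # Usage: all actions on "board-07" happen under the exclusive lock.
    # with locked_unit(vl, "board-07") as unit:
    #     unit.power_cycle()
    #     unit.console.run("VERSION")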

Automation framework 

The description of the test support infrastructure would be incomplete without a brief description of the automation framework. As already said, besides the test agents, the test support infrastructure also contains the automation framework. This framework contains a library of methods called by the test scripts. All typical, project-independent operations performed by the test scripts should be moved there. Reusing this framework for subsequent projects allows saving a lot of effort on the automation and validation of typical operations.

[Figure: Test environment architecture. Tester workstations connect to an execution server, which dispatches jobs to the execution boxes of the execution facility and to a logging server; a build server, version control server, and deployment server prepare and deliver the software, while the VL server is the single gateway to the test hardware units. A developer host has access to the same facilities.]

The majority of those typical operations were already discussed in one form or another in the previous sections; here we provide a compilation of the highlights. The framework includes methods for performing the following key tasks (a condensed sketch of such a facade follows the list):

• Defining the set of test cases and hardware targets for a particular test run
• Obtaining unit parameters based on the unit ID
• Accessing the serial console of a specific unit, performing login with the correct credentials if applicable
• Controlling the power and relay channels for a particular target
• Invoking test agents with certain parameters to send raw or predefined messages through different protocol channels
• Obtaining parameters from the target unit: software revision, debug state, etc.
• Obtaining debug logs from the board
• Cleanly resetting the target
• Recording the results of test case execution
• Generating the test results report
• Controlling the test run execution: breaking the test run on a test failure or continuing with further test cases, depending on the settings
• Building, tagging, and archiving the software
• Deploying the software on the target board
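
A condensed sketch of what such a framework facade might look like; every method name here is illustrative rather than an actual API:

    # Facade over the key framework tasks listed above (names are invented).
    class AutomationFramework:
        def select_run(self, test_cases, coverage_matrix): ...  # plan a run
        def unit_params(self, unit_id): ...        # DB lookup by string ID
        def open_console(self, unit_id): ...       # serial console + login
        def set_power(self, unit_id, on: bool): ...  # power bar / relays
        def invoke_agent(self, agent, **params): ...  # protocol-level stimulus
        def target_info(self, unit_id): ...        # SW revision, debug state
        def fetch_debug_logs(self, unit_id): ...
        def clean_reset(self, unit_id): ...
        def record_result(self, case, verdict): ...
        def make_report(self, run_id): ...
        def set_break_policy(self, stop_on_failure: bool): ...
        def build_and_archive(self, tag): ...      # build, tag, archive
        def deploy(self, unit_id, image): ...      # put software on the board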

Test environment

The diagram on this page represents the overall architecture of the test environment. It consists of boxes representing the logical functions of the various test environment units and the interfaces between them. It must be noted that logical functions are not always directly mapped to physical components; e.g., a single physical server can play the roles of build server and version control server simultaneously. According to this diagram (we start from the bottom), testers connect from their personal workstations to the execution server.

The execution server is a dispatcher. Its main goal is to distribute testing jobs between the execution boxes. Very often it also serves as the authorization gate to the whole testing facility.

The logging server is the centralized data storage; all test logs are dumped there for analysis.

Execution boxes are the systems that execute the automated tests. The main reasons for separating execution boxes from tester workstations are the following:

• An execution box on which the tests are run may have performance requirements different from the tester's workstation configuration. Also, once started on the execution box, the test scripts are not affected by further activities on the tester's workstation. Thus, recording a DVD with the new build on the tester's workstation, or shutting the workstation down for the night, won't affect the test script in any way.
• In the case of remote access to the hardware, the execution boxes reside in the remote lab along with the VL server and the target hardware units, so their communication with them is not subject to Internet-caused delays, which otherwise could affect the test results.

The build server and the version control server are used for making the product builds and for storing the version control repository, respectively.

The role of the deployment server was already discussed. It allows uploading a new version of the software to the target hardware units, using a product-specific interface (e.g., FTP).

The VL server is used as a gate. It contains the hardware units database and is the only entity that can directly access the hardware units. Other entities use its exclusive locking mechanism to ensure proper usage of the shared resources.

The development host depicted on the diagram is an example of the access provided to other members of the development team involved in ad hoc testing and debugging activities. It communicates with the build, version control, and deployment servers to put a new software version on the target board, and with the VL server to control the board operation.

Summary

The test approach presented in this paper has the following main components:


• 95+% automated testing
• Virtual laboratory solution
• Re-usable automation framework simplifying typical operations
• Practices and techniques:
  - Traceability of the target software requirements to the test cases and the test support software requirements
  - Validation of the test support infrastructure
  - Tight integration of the target software and test support software development plans
  - Usage of hardware debug tools, especially during early stages
  - Analysis of 3 possible origins for every defect: target software, hardware, test support infrastructure
  - Specifying 3 IDs for every defect: target software, hardware, test support infrastructure
  - A hardware engineer on the team
  - Defining a hardware coverage matrix
  - Double-checking the target software version by reading it from the unit
  - Automatic software build and/or deployment as a part of test execution (optional)
  - Support for extensive debug builds/modes in the automation framework
  - Two types of test runs: non-stop batch and defect hunting