An alternative approach to software testing to enable SimShip for the localisation market, using Shadow™
Dr. K Arthur, D Hannan, M Ward
Brandt Technologies
Abstract
Our approach to automated software testing is novel:
– Test several language instances of an application simultaneously.
– Drive the test through direct engineer interaction or through a record/playback script.
– Examine the effect of separating the role of a test engineer into a product specialist and a QA specialist.
Shadow
Developed by Brandt Technologies.
Different from other remote control applications.
Allows control of many machines simultaneously.
Control modes: Mimic mode and Exact match mode.
Shadow setup
SimShip
SimShip is the process of shipping the localised product to customers at the same time as the original language product.
Delta – time difference between shipping original language product and localised products.
Why SimShip?
To increase revenue.
To avoid loss of revenue.
To maintain/increase market share.
SimShip issues
Unrecoverable costs:
– Localised software not taking off in its markets.
– Localised software being substandard.
– Localised software having been created inefficiently.
– A changing build means additional content or a different user interface.
We can address some of these issues with Shadow.
What is quality?
Crosby defines quality as “conformance to requirements”.
“Fitness for purpose”.
ISO 9126 defines characteristics of software quality that can be used in evaluating software:
– Functionality, Reliability, Usability, Efficiency, Maintainability, Portability.
For our purposes, software quality will be defined as software that conforms to customer driven requirements and design specifications.
Software testing
Software “testing involves operation of a system or application under controlled conditions and evaluating the results.”
Testing is performed:
– to find defects.
– to ensure that the code matches the specification.
– to estimate the reliability of the code.
– to improve or maintain consistency in quality.
Software testing
Testing costs – schedule and budget.
Not performing testing also costs.
In 2002, a US government study estimated that software bugs cost $59.5 billion annually.
Software testing
Testing should be an integral part of the software development process.
Fundamental to “eXtreme Programming”: “Never write a line of code without a failing test.”
The quality of the test process and effort will have an impact on the outcome.
Software testing
There are two types of software testing:
– Manual testing.
– Automated testing.
Within these testing types there are many subdivisions.
Either process runs a set of tests designed to assess the quality of the application.
Manual testing
Running manual tests requires a team of engineers to write and execute a test script.
Advantages:
– Infrequent cases might be cheaper to perform manually.
– Ad hoc testing can be very valuable.
– Engineers can perform variations on test cases.
– Tests product usability.
Manual testing
Disadvantages:
– Time consuming.
– Tedious.
– It can be difficult to reproduce some issues manually every time.
Automated testing
Requires a test script to be written.
Requires a team of specialised engineers to code the test script.
Advantages:
– If tests have to be repeated, reuse is cheaper.
– Useful for a large testing matrix.
– Consistency in producing results.
– High productivity – 24 x 7.
Automated testing
Disadvantages:
– Can be expensive to code from scratch.
– Can require specialised skills to code.
– It takes longer to write, test, document and automate tests.
– Test automation is a software development activity, with all of the implications.
– Test cases are fixed.
– Automation cannot perform ad hoc testing.
– Difficulty in keeping in sync with a changing build.
Which to use?
Neither mode of testing is a “silver bullet”.
Successful software development and localisation should use both manual and automated testing methodologies.
The balance between the two modes can be decided using budgetary and schedule constraints.
Shadow
Shadow is a software-testing tool for performing automated and manual tests on original or localised software applications.
It allows the user to simultaneously test localised applications running on different language operating systems, or original language products running in different configurations.
Quick to set up and use.
Shadow
The following slide shows the Shadow setup.
One PC is running 3 VMWare machines.
Each VMWare machine is shown displaying the Start menu.
Shadow remote control
Shadow interface
Shadow is shown in the following slide in control of 4 VMWare machines.
Each VMWare machine is running the Catalyst application with a TTK open.
Shadow interface
Shadow architecture
Shadow consists of 3 pieces of software:
– Dispatcher server
– Client Viewer
– Client Target
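The three components interact along these lines. The following is a hedged, in-memory Python sketch (no networking; all class and method names are illustrative, not Shadow's actual API) of the dispatcher relaying one viewer event to every connected target, as in Mimic mode:

```python
# Illustrative sketch only: a dispatcher fans one viewer input event out
# to every registered client target. Names here are hypothetical, not
# Shadow's real interfaces.
from dataclasses import dataclass, field
from typing import List


@dataclass
class InputEvent:
    kind: str          # e.g. "click" or "key"
    x: int = 0
    y: int = 0
    key: str = ""


@dataclass
class ClientTarget:
    name: str
    received: List[InputEvent] = field(default_factory=list)

    def apply(self, event: InputEvent) -> None:
        # A real target would inject the event into the local OS input queue.
        self.received.append(event)


class DispatcherServer:
    def __init__(self) -> None:
        self.targets: List[ClientTarget] = []

    def register(self, target: ClientTarget) -> None:
        self.targets.append(target)

    def broadcast(self, event: InputEvent) -> None:
        # "Mimic mode": the same event is sent to every connected target.
        for target in self.targets:
            target.apply(event)


dispatcher = DispatcherServer()
machines = [ClientTarget(f"vm-{lang}") for lang in ("de", "fr", "ja")]
for m in machines:
    dispatcher.register(m)

dispatcher.broadcast(InputEvent(kind="click", x=120, y=45))
print([len(m.received) for m in machines])  # each target saw the event once
```

One click from the viewer thus reaches all language instances at once, which is the core of the parallelism described later.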
Shadow architecture
Demonstration
This demonstration shows Shadow connecting to three Windows XP clients.
Shadow can connect to a PC running Windows XP Professional in exactly the same way as it can connect to VMWare running Windows XP Pro clients.
Connection Demo
Screenshot demo
Shadow differentiators
Makes automated testing easier to use with less programming.
Separating the test engineer role into a “Product Specialist” and a “Quality Assurance Specialist”.
Making software testing more like the actions of a human user.
Accelerating the manual testing process through the unique Shadow user interface.
Recording screenshot data by default.
Case study: Client A
Client A produces ERP software. Task list:
– Write test scripts
– Update test scripts
– Set up the hardware and software
– Execute the test script on the machines, using both Shadow and WinRunner
– LQA – performed by linguists using the screenshots
– Localisation functional QA using Shadow and WinRunner
Case study: Client A – results (40 screenshots)

Task                Shadow (days)   WinRunner (days)   Comment
Write LQA script    3 – 4           3 – 4              Tool independent
Update LQA script   1 – 2           1 – 2              Tool independent
Write TSL script    0               1 – 2              WinRunner only
Execute script      2               1                  Shadow and WinRunner
LQA                 1 – 2           1 – 2              Tool independent
Functional QA       1 – 2           1 – 2              Tool independent
Total               8 – 12          8 – 13
Case study: Client A – results (400 screenshots)

Task                Shadow (days)   WinRunner (days)   Comment
Write LQA script    20 – 25         20 – 25            Tool independent
Update LQA script   10 – 15         10 – 15            Tool independent
Write TSL script    0               25 – 30            WinRunner only
Execute script      16              8                  Shadow and WinRunner
LQA                 5 – 6           5 – 6              Tool independent
Functional QA       9 – 10          9 – 10             Tool independent
Total               60 – 72         77 – 94
Case study: Client A – conclusions
Shadow and WinRunner take approximately the same time to set up for a small number of screenshots.
For a larger number, Shadow is faster.
The client setup had 3 machines; this could be improved.
WinRunner requires build preparation.
Wait for feature.
Case study: Brandt Translations
Project – localisation of multimedia tours in several languages.
Recent projects include:
– 6 tours x 5 languages
– 3 tours x 7 languages
This type of project occurs often.
Need to be efficient, as the schedule is tight.
Case study: Brandt
Brandt uses Shadow for the purposes of testing and as an automation tool to perform tasks that need to be repeated frequently.
Tasks in the localisation of Captivate tours:
– Audio integration.
– Text integration.
– Font assignment.
Recorded scripts perform these tasks.
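A recorded script of this kind can be thought of as an ordered list of actions replayed once per slide. The following is a minimal Python sketch, with illustrative action names standing in for the real Shadow/Captivate operations:

```python
# Illustrative record/playback sketch: the "recording" is an ordered list
# of actions; playback repeats the list once per slide. Action names are
# hypothetical, not an actual Shadow or Captivate API.
from typing import Callable, List

log: List[str] = []


def import_audio(slide: int) -> None:
    log.append(f"audio slide {slide}")   # import the slide's WAV file


def integrate_text(slide: int) -> None:
    log.append(f"text slide {slide}")    # insert the localised text


def assign_font(slide: int) -> None:
    log.append(f"font slide {slide}")    # apply the localised font


recorded_script: List[Callable[[int], None]] = [
    import_audio, integrate_text, assign_font,
]


def playback(script: List[Callable[[int], None]], num_slides: int) -> None:
    for slide in range(1, num_slides + 1):
        for action in script:
            action(slide)


playback(recorded_script, 30)
print(len(log))  # 3 actions x 30 slides = 90 recorded operations
```

Once recorded, the same script can be replayed unchanged against every language instance of a tour.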
Case study: Brandt
Text integration – localised text on every slide in the tour.
Audio integration – importing a single WAV file per slide; a voice reads the text.
Font assignment – the localised text font has to be consistent across all slides.
Case study: Brandt – results

Task                Automated (min per 30-slide tour)   Manual (min per 30-slide tour)
Audio integration   10                                  15
Text integration    15                                  20
Font assignment     10                                  15
Efficiency is in the parallelism.
Automating repetitive tasks reduced issues due to human error.
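The savings can be checked against the per-tour minutes in the table above. Assuming the automated run is driven across all language instances in parallel (so its wall-clock time does not grow with the number of languages, while manual time does), a hypothetical 5-language project works out as follows:

```python
# Back-of-the-envelope comparison using the per-tour minutes from the
# results table. The 5-language project size is an illustrative assumption.
AUTOMATED = {"audio": 10, "text": 15, "font": 10}   # minutes per 30-slide tour
MANUAL    = {"audio": 15, "text": 20, "font": 15}

languages = 5
manual_total = sum(MANUAL.values()) * languages      # done once per language
automated_wallclock = sum(AUTOMATED.values())        # one pass, in parallel

print(manual_total, automated_wallclock)  # 250 vs 35 minutes
```

The gap widens with every extra language, which is why Shadow's parallelism matters on tight SimShip schedules.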
Brandt Demo
Case study: Brandt – conclusions
Shadow was used as an automation tool for this project.
Characteristics of this project that made Shadow efficient: repeated tasks, easily done in parallel.
Shadow was essential to the effectiveness of the engineering team.
Conclusions
We looked at the advantages and disadvantages of the different modes of testing.
A mix of manual and automated testing is essential.
Shadow allows separation of QA from specialist product knowledge and hardware setup.
For localisation, Shadow can take screenshots of the application for linguistic review.
Shadow can be used by the engineer with specialist product knowledge to walk through the different language versions of an application simultaneously.
Shadow: Future developments
Addition/integration of an OCR module.
Enhanced AI modules.
Acknowledgements
Gary Winterlich provided the Camtasia demonstrations.
The bibliography is included in a forthcoming paper accompanying this presentation in LRC XII.
Thank you for your time.
Q&A