Usability Study of a Traceability Management Tool
Bachelor of Science Thesis in Software Engineering and Management

Per Skytt
Tobias Nersing

Department of Computer Science and Engineering
UNIVERSITY OF GOTHENBURG
CHALMERS UNIVERSITY OF TECHNOLOGY
Gothenburg, Sweden 2017


The Author grants to University of Gothenburg and Chalmers University of Technology the non-exclusive right to publish the Work electronically and, for a non-commercial purpose, make it accessible on the Internet. The Author warrants that he/she is the author of the Work, and warrants that the Work does not contain text, pictures or other material that violates copyright law.

The Author shall, when transferring the rights of the Work to a third party (for example a publisher or a company), acknowledge the third party about this agreement. If the Author has signed a copyright agreement with a third party regarding the Work, the Author warrants hereby that he/she has obtained any necessary permission from this third party to let University of Gothenburg and Chalmers University of Technology store the Work electronically and make it accessible on the Internet.

Usability Study of a Traceability Management Tool

Per Skytt
Tobias Nersing

© Per Skytt, June 2017.
© Tobias Nersing, June 2017.

Supervisor: Salome Maro
Examiner: Jennifer Horkoff

University of Gothenburg
Chalmers University of Technology
Department of Computer Science and Engineering
SE-412 96 Göteborg
Sweden
Telephone +46 (0)31-772 1000

Department of Computer Science and Engineering
UNIVERSITY OF GOTHENBURG
CHALMERS UNIVERSITY OF TECHNOLOGY
Gothenburg, Sweden 2017


Usability Study of a Traceability Management Tool

Per Skytt
Göteborgs Universitet
Gothenburg, Sweden

[email protected]

Tobias Nersing
Göteborgs Universitet
Gothenburg, Sweden
[email protected]

Abstract

This study aims to add to the body of usability research, especially with regard to software tools. Usability and empirical evaluation are rarely considered for developer-oriented software tools in general, and usability research on traceability tools is close to non-existent. Usability studies have used various methods and measurements over the years, but previous work in the field is rarely built upon, and measurements and questionnaires are seldom fully disclosed, making reproduction and comparison across studies very difficult. This study evaluated the usability of a traceability management tool called Capra using a remote usability testing method in which screen recordings and a post-test questionnaire were the means of gathering data. It builds upon previous work by using validated and proven methods to assess usability and by classifying the usability problems found, in order to evaluate the overall usability of Capra and to suggest improvements to the tool.

Keywords

Capra, Traceability Management, Usability

I. INTRODUCTION

Usability in the software industry is a key issue that concerns the user experience. The interactive software market is constantly growing in the number of software tools, products and solutions being provided. Competition is fierce, and user expectations keep rising: each iteration of a piece of software is expected to be easier to handle, easier to understand and of higher quality than the previous one. The usability of the software affects the user experience, and in order to improve usability, and thus the user experience, developers must understand and predict user behaviour. Toleman and Welsh [1] note that the typical design and development models used for software development tools ignore empirical user testing.

Usability is defined by ISO 9241 as "the extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use". Good usability means that users will both enjoy the work they do more and be more efficient in doing it.

This thesis will be done in collaboration with the Capra Traceability Tool project [2], which is a result of the ITEA-funded project Amalthea4Public [3], and we will conduct a usability study of the Capra traceability management tool.

Current research on the usability of traceability tools in the software engineering domain is quite lacking. The traceability tools and methods discussed in several papers lack empirical validation. Much of the usability research conducted in the HCI domain does not build upon previous work, which makes it increasingly difficult to compare results and reproduce experiments [4].

To improve the result, as well as to add to the body of research on usability testing of traceability tools, we will use standardized measurements of both objective and subjective data that have been proven to work and have been validated in earlier research. By using validated tools and methods in this user study, our contribution becomes reproducible and builds upon previous work within the field. This study aims to show that standardized measurements and questionnaires can be used to evaluate the usability of a traceability management tool in an effective way, and to make the study both reusable and comparable to other similar studies. The data gathered from the study will be used to provide concrete suggestions for usability improvements in the open source software Capra.

II. LIST OF ABBREVIATIONS

CUP - "Classification of Usability Prob-lems"HCI - "Human Computer Interaction"HVAC - "Heating Ventilation Air-Conditioning"IDE - "Integrated Development Environ-ment"SUS - "System Usability Scale"SUT - "System Under Test"UP - "Usability Problem"

III. RESEARCH QUESTIONS

Our thesis will aim to answer the following research questions:

• RQ1: How usable is the traceability management tool?

  – SQ1: How satisfying is the traceability management tool to use?

• RQ2: What changes could be made to improve the usability of the traceability management tool?

  – SQ1: How can the efficiency of the traceability management tool be improved?

  – SQ2: How can the satisfaction of using the traceability management tool be improved?

The main objective of this study is to evaluate the current state of Capra from the point of view of usability and to propose possible improvements.

RQ1 asks how usable the traceability tool is, which we have clarified with a sub-question about satisfaction. RQ2 concerns what kinds of improvements this study will be able to suggest for the traceability tool; its sub-questions separate efficiency and satisfaction.

IV. LITERATURE REVIEW

In a literature review published in 2013, Nair et al. [5] aimed to provide insight into how traceability research evolved at the Requirements Engineering conference during the previous 20 years. The authors gave four suggestions for future research in the field, two of which were that "tool qualification must be studied in more depth" and that "it is necessary to focus on the opinion and experiences of practitioners different to the researchers". This is further substantiated by the systematic review and included case study by Torkar et al., 2012 [6], who also studied requirements traceability and claimed that most papers in their review focused on new features and extensions for tools and lacked validation; the authors write that "most techniques and tools were not validated empirically."

A study of software development tools used for refactoring by Mealy et al., 2007 [7] mentions the frequent lack of consideration for usability, both in production and in research, for software tools in general and especially for refactoring tools, on whose usability little research has been conducted. While the study by Mealy et al. focuses on determining the usability of a software tool used by developers for refactoring, our study concerns a traceability management tool. In their systematic evaluation, Toleman and Welsh, 1998 [1] studied design choices of software development tools and claimed that the general model for design and development used by the tool designers ignored empirical user testing. In an article from 2015, Ðukic et al. [8] discuss the current lack of, or rather non-existent, focus on usability and traceability in information software systems.

In an article from 2003, Laura Faulkner [9] found that using a low number of users in usability testing can give unreliable results in terms of how many problems are found: 5 users found as little as 55% of the problems, 10 users found 80%, and 20 users found 95% of all the problems.

Page 5: UsabilityStudyofaTraceabilityManagementTool · usability of a traceability management tool called Capra using a remote usability testing method where screen recording and post-test

V. METHODOLOGY

A. Background

The methods used to evaluate usability generally fall into two categories: usability inspection methods and usability studies. Usability inspection methods involve evaluators who inspect a system in order to evaluate its usability, which requires the evaluator to have some experience in usability [10]. Examples of inspection methods are heuristic evaluations and cognitive walkthroughs. A usability study, on the other hand, is user-centered: usability tests are used to assess the suitability of a system with regard to its intended purpose and intended users by revealing problems experienced by the user when trying to accomplish a set of tasks.

A remote usability test, also called an unmoderated usability test, involves the user running the actual test on their own. The user is often in their home or office, and their behaviour and interactions with the system are captured using tools such as screen recordings, click counts, verbalization, eye movement, etc. This data can then be analyzed later. The method can be synchronous, where the test is done and observed in real time by the tester, or asynchronous, where the user runs the test in their own time and the testers receive the data for analysis afterwards.

This study will measure subjective usability using the System Usability Scale (SUS) questionnaire created by John Brooke [11]. It is a simple Likert scale consisting of ten items which can be used to obtain a subjective overall view of a system's usability. SUS is generally administered directly after a system has been used, to capture the immediate response of the user. The SUS score is a single number, obtained by summing the score contributions of each item, each ranging from 0 to 4. Items 1, 3, 5, 7 and 9 contribute the scale position minus 1, and items 2, 4, 6, 8 and 10 contribute 5 minus the scale position. The sum is then multiplied by 2.5 to obtain the overall value.
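As a concrete illustration of this scoring rule, the following small Python helper (our own sketch, not part of the study's materials) computes a SUS score from ten 1-5 Likert responses:

    # Apply the SUS scoring rule described above to ten 1-5 Likert responses.
    def sus_score(responses):
        """Return the overall SUS score (0-100) for ten item responses."""
        if len(responses) != 10:
            raise ValueError("SUS requires exactly ten item responses")
        total = 0
        for item, answer in enumerate(responses, start=1):
            if item % 2 == 1:        # items 1, 3, 5, 7, 9
                total += answer - 1
            else:                    # items 2, 4, 6, 8, 10
                total += 5 - answer
        return total * 2.5

    # A neutral response (3 on every item) yields a score of 50.
    print(sus_score([3] * 10))  # 50.0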

The Classification of Usability Problems (CUP) scheme is a framework for describing and detailing usability problems. It is divided into two parts, pre-CUP and post-CUP, where pre-CUP contains nine attributes that describe the UPs found in usability testing. The pre-CUP is presented to the developers, who can then fill in the four attributes of post-CUP [12], [13].

B. Methodology

Before the tests were sent out, twelve users had signed up to participate in the study. Out of these twelve, only six actually completed the test, giving us limited data to work with; one of our identified risks thus became a reality. The participants come from both academia and industry. A sample size of at least 10 users was the goal in order to reliably find most of the usability problems [9]. To find participants from the academic domain, we posted requests via the Facebook community pages of the software engineering and management bachelor's programme, asking students from the second and third year. To find participants from industry, personal contacts were used.

We categorized the data to be collected into two main groups: objective data and subjective data. Subjective data consists of measurements that concern users' perception of, or attitudes towards, the interface, the interaction, or the outcome, whereas objective data is not dependent on the users' perception or attitudes. Studying both subjective and objective measurements can, as Hornbæk [4] points out, show different results regarding the usability of an artifact. While keeping the two types separate, we include both in this study, as we believe a more complete picture of usability can be achieved by doing so.

The study was conducted as a remote usability study, meaning that those conducting the study are not in the same room or location as the test participants. Our remote usability method was of the asynchronous variant, where the evaluators do not interact with the participants or gather data in real time while the participants perform the test. We chose this method because it is location-independent, time-saving and easy to scale to a large sample, while still being considered effective and suitable on a low budget [14]. The asynchronous method cannot record observations of the user or spontaneous verbalizations, but the synchronous method can be perceived as more intrusive [15] and would require more coordination of time and schedules. Another benefit of remote usability testing is that the participants use the software in their own environment.

To record the sessions, Open Broadcaster Software [16], usually referred to as OBS, was used. This software was chosen because it is free to use, open source, and works on Windows, Mac and Linux. This made it possible to create one set of instructions that worked for every user, and the researchers only needed to learn one piece of software in case the users needed help with installing or using the recording tool. To upload the recordings, Google Drive [17] was used. To make sure there was enough space on the account for all the recordings, the researchers created test recordings to estimate how large a file would be on average. The recordings turned out to be small enough that even if every user took twice as long as the expected time, the free space would suffice.
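A rough budget check of this kind could look as follows; every number except the 30-minute session estimate is an illustrative assumption, since the thesis does not report the measured recording sizes or the exact quota available:

    # Back-of-envelope storage check for the screen recordings (assumed values).
    EXPECTED_MINUTES = 30      # estimated average duration of one test session
    SAFETY_FACTOR = 2          # allow every participant to take twice as long
    PARTICIPANTS = 12          # number of users who initially signed up
    MB_PER_MINUTE = 10         # assumed size of an OBS screen recording
    FREE_QUOTA_GB = 15         # assumed free Google Drive quota

    worst_case_gb = EXPECTED_MINUTES * SAFETY_FACTOR * PARTICIPANTS * MB_PER_MINUTE / 1024
    print(f"worst-case upload volume: {worst_case_gb:.1f} GB of {FREE_QUOTA_GB} GB quota")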

The participants of the test received all the material needed to understand the goals of the study in a PDF document, which can be seen in appendix section A. This material consisted of instructions to install the software needed for gathering data, instructions to install and run the Capra tool, an introductory video about Capra, and specific instructions for how to start the test and upload the data. The participants were also provided with 12 tasks, which we estimated would take no more than 30 minutes on average to complete. To complete these tasks, an example project of a Heating, Ventilation and Air Conditioning (HVAC) system was provided, which included, among other files, statecharts, requirements, JUnit tests and a feature model that were used in the tests. A SUS questionnaire [11] was to be filled out post-test.

The 12 tasks that users were asked to completeare these:

1) Create a trace link between Requirement 4 and the ITOS feature.

2) Create a trace link between the ITOS feature and the TemperatureAdapter Statechart.

3) Create a trace link between the TemperatureAdapter Statechart and the ITOSTest Java class.

4) View the trace links of the ITOS feature through a PlantUML diagram.

5) Use the transitivity function to see the whole trace of connections in relation to the ITOS feature.

6) Remove the trace link between the ITOS feature and the TemperatureAdapter Statechart.

7) Delete the ITOSTest Java class.

8) Open the Eclipse Problem view. Find the warning concerning the deleted ITOSTest Java class and use the Eclipse "Quick Fix" function to remove all affected trace links.

9) Create a trace link between Requirement 4 and the HVAC_manager feature.

10) Create a trace link between the HVAC_manager feature and the TemperatureAdapter Statechart.

11) View the trace links of the HVAC_manager feature through a PlantUML diagram.

12) View the Capra traceability matrix of the HVAC_manager feature, the TemperatureAdapter Statechart and the ITOS feature to make sure the ITOS feature isn't linked to the other two.

An expert on Capra, a developer from the project, was asked to run the scenarios to establish a best case for time, error rate and success/failure rate for the intended user scenarios in the user test, which was used as a baseline for our objective measurements.

We use a post-test measure of perceived usability as a subjective measure, here in the form of the validated System Usability Scale questionnaire [11]. To get more information about what the participants think of the different parts of the software, additional questions were added. These questions are linked to the different tasks the participants perform and ask about specific features of the software. The additional questions were not part of the overall SUS score but used the same 5-point Likert scale ranging from strongly disagree to strongly agree. The questionnaire ends with two free-form questions where the participants are asked to write down any improvements or new features they would like to see in the software. The questionnaire that users were asked to fill out can be seen in table ?? "Questionnaire".

As mentioned in Hornbæk's paper [4], there appears to be a lack of studies that measure satisfaction with validated instruments and build upon previous work. We intend not to add further to this disarray, in order to enable future comparison with related studies and higher reproducibility of our study.

C. Data Analysis

The authors examined the answers in the questionnaires, both the answers regarding specific functions and the overall SUS score, to get an initial idea of the usability issues that could be present in the Capra software. Both authors watched each screen recording separately to identify usability problems (UPs). The initial criteria we used for identifying UPs in the video analysis were based on Nielsen's 10 principles [18] and the problem criteria for usability problems described by Jacobsen et al. (1998) [19]:

1) The user articulates a goal and cannot succeed in attaining it within three minutes

2) The user explicitly gives up

3) The user articulates a goal and has to try three or more actions to find a solution

4) The user produces a result different from the task given

5) The user expresses surprise

6) The user expresses some negative affect or says something is a problem

7) The user makes a design suggestion

8) The evaluator generalizes a group of previously detected problems into a new problem

The usability problems (UPs) were then documented in a spreadsheet by each author separately, detailing the following information (a minimal sketch of such a record is given after the list):

• A headline that summarizes the problem

• An explanation that details the problem - as many details as possible, so that the description is understandable without knowledge of the test sessions or the videos

• A description of why the problem is serious to some or all users of the software - for example if users get confused, express that they are insecure, or cannot finish their tasks

• A description of the context - the context where the problem was identified, for example a certain task scenario or part of the user interface

• The task in which the UP was identified

• Participant ID - the test participant's session in which the UP was identified

• Which evaluator found the problem

To see a list of all identified UPs and detailed information about each of them, refer to appendix section A.

The objective data consists of screen recordings where the users work on the scenarios/tasks they are given during the user test. We measure time spent on tasks, number of errors, and success or failure to complete a task. In the event of a crash of Eclipse or another error halting the user test, if the error is not deemed related to the SUT (System Under Test), the time to recover from the error is not counted in the total task time.

The time for task one is counted from when the user starts the test until the goal of task one is completed. When the goal of task one is completed, the clock starts counting the time for task two, and so on for all tasks. If a user fails to complete a task or gives up, the time is stopped when the user appears to start the next task; usually this was indicated by the user moving to a specific part of the interface or looking at the instructions.
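A minimal sketch of this timing rule, with illustrative timestamps (time spent recovering from errors unrelated to the SUT would already have been subtracted, as described above):

    # Task 1 runs from the start of the test to its goal; each later task starts
    # when the previous goal was reached (or the previous task was abandoned).
    def task_durations(test_start, goal_times):
        """Return per-task durations in seconds, given the test start time and
        the time at which each task's goal was completed or abandoned."""
        durations = []
        previous = test_start
        for completed_at in goal_times:
            durations.append(completed_at - previous)
            previous = completed_at
        return durations

    # Test starts at t = 0 s; goals of tasks 1-3 are reached at 95 s, 160 s, 300 s.
    print(task_durations(0, [95, 160, 300]))  # [95, 65, 140]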

The errors we looked for were either an error message appearing or an action not resulting in what was expected, i.e. a bug. The success rate of a task was 100% if the user completed the goal of the task; no consideration was given to how many steps or how much time the user needed. If the user skipped or forgot a task, the rate was 0%. If the user tried but did not succeed, an approximation between 0 and 100% was made, depending on how close they came to completing the task.
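Aggregating these per-participant percentages into a per-task completion rate is straightforward; the sketch below reuses the figures reported for tasks 6 and 12 in the results section:

    # Average per-task completion rate from per-participant completion percentages.
    from statistics import mean

    completion = {
        6: [100, 100, 100, 100, 0, 0],     # two participants completed 0% of task 6
        12: [100, 100, 100, 100, 75, 75],  # two participants reached 75% of task 12
    }

    for task, percentages in completion.items():
        print(f"task {task}: {mean(percentages):.2f}% average completion")
    # task 6: 66.67% average completion
    # task 12: 91.67% average completion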

These UPs were then discussed, compared and organized into a consolidated list that both authors agreed upon, where duplicates were merged into a single UP. This resulted in a list of UPs to be classified using the Classification of Usability Problems (CUP) scheme [12], [13]; the list contains all the agreed-upon UPs as well as references to the relevant tasks and screen recordings.

These UPs were then classified using the CUP scheme. We divided the UPs between the two authors, classifying half of the UPs each, then reviewed each other's classifications and, through discussion, made edits and agreed upon the final CUP classifications. All the classifications can be seen in full in appendix section A.

VI. RESULTS

This study got responses from 6 participants of the user test, each of whom answered the questionnaire and sent in a screen recording made during their run of the tasks. According to the first background question, the participants consisted of 83,3% third-year bachelor students of software engineering and 16,7% industry practitioners. On a Likert scale ranging from "1", strongly disagree, to "5", strongly agree, the participants were asked two more background questions. Regarding their familiarity with the concept of traceability in software engineering, 66,7% answered "1", 16,7% answered "2" and 16,7% answered "3". The last background question, using the same Likert scale, asked how familiar the participants were with the Eclipse IDE: 33,3% answered "2" and 66,7% answered "3". Thus, although a majority of the participants were not familiar with the concept of traceability, most were somewhat familiar with the Eclipse IDE.

A. Video Data

Together the authors identified a total of 27 UPs during their individual evaluations of the screen recordings, 13 and 14 UPs respectively, which were consolidated into 16 unique UPs. As can be seen in table ??, out of these 16, 7 were classified as minor, 6 as moderate and 3 as severe. During the tests no bugs were encountered, and the only other error message that appeared occurred once, to a single participant; it was ruled to have nothing to do with Capra and is therefore not considered in the analysis.

The average time per task compared to the expert's times can be seen in table ??. The expert is faster at every task except task 1, where the expert is slower; note, however, that the expert had the example project's files organized slightly differently than in the example project used for the actual user test.

For tasks 1, 4, 5 and 8 the participants needed more than 1 minute longer than the expert to complete the task, and task 12 stands out with over 2 minutes more elapsed time on average for the participants compared to the expert. Tasks 1, 4, 5, 8 and 12 relate to the features of creating trace links, the PlantUML visualization, transitivity, warnings, and the trace matrix visualization, respectively.

The percentage of each task that the participants completed can be seen in table ??. The only task where any participant failed to complete any part of the task was task 6, where 2 participants had a completion percentage of 0%; all other participants completed that task in full. Another task where the participants struggled was task 12. It took the users a long time to complete this task; 4 out of 6 participants managed to complete it, and the remaining 2 participants completed 75% of it.

B. Questionnaire Data

The questionnaire responses for the first 10 questions following the participant background questions were based on SUS. The following table shows the SUS score from each participant's responses, along with the overall average and median. The average SUS score for Capra in this study is 41,25.

Page 9: UsabilityStudyofaTraceabilityManagementTool · usability of a traceability management tool called Capra using a remote usability testing method where screen recording and post-test

Participant      SUS score
1                40
2                32,5
3                37,5
4                35
5                57,5
6                45

Total average    41,25
Total median     38,75
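For reference, the average and median above can be reproduced directly from the six per-participant scores:

    # Reproduce the average and median in the table from the six SUS scores.
    from statistics import mean, median

    scores = [40, 32.5, 37.5, 35, 57.5, 45]
    print(mean(scores), median(scores))  # 41.25 38.75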

After the ten SUS questions there were six additional questions about certain features in Capra. The users were asked to respond on a Likert scale from 1 to 5, strongly disagree to strongly agree, to these questions:

1) "I thought the removing trace links-feature inthe Capra tool was easy to use"

2) "I found the creating links-feature of theCapra tool very cumbersome to use"

3) "I thought the "Trace Matrix" in the Capratool was easy to use"

4) "I thought the "PlantUML View" in the Capratool was easy to use"

5) "I found that the "Transitivity"-function in theCapra tool was well integrated"

6) "I found the notification/warning functions inthe Capra tool were well integrated"

Table ?? shows the results of the additional questions about certain features of Capra; for each question the table contains the average, the median, the related feature, and the percentage of responses for each answer.

In the last section of the questionnaire the participants could answer two free-form questions: "What kind of improvements would you suggest for the Capra tool?" and "What kind of additional features would you suggest for the Capra tool?". To the first question, only 4 of the 6 participants responded:

• "Too many dependencies on other pluginsmade the getting started phase very long. I’dprefer to have them integrated."

• "Trace matrix should have limits on the lengthof names/info in each cell"

• "Better guide to get started and 1 installationpackage instead of the X amount."

• "To make remove trace function more obvious(as long as I couldn’t fine it). "

To the question "What kind of additional featureswould you suggest for the Capra tool?" only 3 outof the 6 participants responded:

• "I missed the possibility to interact with thetrace through the visualizations, such as Plan-tUML View and the matrix"

• "When showing tooltips (hovering over icons)offer some more information"

• "I’m not that familiar with the topic. I thinkit would probably nice to have some smalltutorial/tips about some functionalities."

VII. DISCUSSION

From the gathered data and the free-text questions, these are some of the more interesting findings. Two users suggested that the installation of Capra requires too many dependencies, making the process take more time than if the dependencies were integrated into Capra. The example project used for the test needed some additional dependencies that were not related to Capra; although this was clearly separated and explained to the test participants, we suspect it could have been misinterpreted by the users as part of the installation of Capra.

Furthermore, the answers to the additional questions about certain features reveal that the functionality for removing trace links could be improved, since on average the users selected strongly disagree when asked whether this feature is easy to use. Looking at our results from analyzing the videos, task 6, which involved removing a trace link, had a completion rate of only 66,67%, whereas all other tasks had a completion rate of over 90%. Although task 6 had an average time per task of 1 minute and 41 seconds, the low completion rate (4 users completed the task successfully while 2 users completed 0% of it) means that this feature's usability issues should be given high priority.

The "Traceability Matrix" feature of Capra wasused in task 12 of the test which had the highest

Page 10: UsabilityStudyofaTraceabilityManagementTool · usability of a traceability management tool called Capra using a remote usability testing method where screen recording and post-test

average task completion time of 3 minutes and37 seconds and only 4 of the users managed tocomplete the task. The question in the questionnaireabout the matrix feature’s ease of use had an aver-age of 2,5 on the likert scale but during our analysiswe found that when the user wants to see a matrixof artifacts of different types the selection of saidartifacts can be problematic and this usability issueis detailed in CUP with identifier "UP16" in theappendix. We suggest this UP be deemed severeto improve the efficiency and performance of theCapra tool.

Research Question 1. We first asked how usable the Capra traceability management tool is and how satisfying Capra is to use. According to the System Usability Scale, Capra received an average score of 41,25 out of 100 in this study, which is regarded as a low score since the average score across systems is around 68 [20].

Sub Research Question 1. On the 5-point scale, only one user gave a rating of 3 to the question of whether they would like to use Capra frequently; all other users gave a rating of 1 or 2. This, together with the fact that no user rated higher than 2 on the question of whether they felt confident using the system, shows that the satisfaction of using Capra is low.

Research Question 2. The second research question concerned which improvements could be made to the Capra tool. We asked the participants in the user test to suggest improvements in a free-form text field in the questionnaire; as seen in the results section, the answers related to three aspects: the installation, the matrix feature, and the function for removing trace links.

Sub Research Question 1. The efficiency of Capra appears to be high for several features. Creating trace links, visualizing features such as transitivity, viewing a diagram, and using the problem view to automatically fix errors were features that the participants had few issues with and completed quickly. The biggest concerns regarding efficiency are removing trace links and using the matrix to visualize links between artifacts. The participants had problems completing these tasks, and those who managed took a very long time to do so.

To remove trace links, the user is required to leave the project they are working in and look in the separate project where Capra stores its files. This is inconsistent with how all other features of Capra work, and it requires many more clicks than, for example, creating a link. We suggest that this feature be made available in fewer steps and in line with how other Capra features work, either by allowing the user to remove links similarly to how they create them, or by allowing the user to interact with and edit links when visualizing them.

When using the matrix view, the users appeared confused about how to add artifacts. To accomplish the task we gave them, they had to find a specific file in which all artifacts could be accessed at the same time. Allowing artifacts from several different places to be added would make this easier for the user.

Sub Research Question 2. We believe the low satisfaction of using Capra comes from a few features dragging down the overall experience, and improving the efficiency would likely help the satisfaction. We also suggest giving the users more information about what the different elements of the UI do, for example by adding tooltips that explain what an icon does when the user hovers over it.

VIII. CONCLUSIONS

The main objective of this study was to evaluate the current state of Capra from the point of view of usability and to propose possible improvements. Our goal going into this study was to build upon previous work within usability engineering by using a validated questionnaire (SUS) and a validated classification of usability problems (CUP), and to add to the usability research field of traceability tools with a study that is reproducible. We showed that we could measure the usability of, and suggest improvements for, the Capra tool.

We designed a remote usability test through which we gathered enough data to identify and classify 16 usability problems. Only 6 participants actually finished the test, which limits the extent to which conclusions can be drawn. The participants were also somewhat of a convenience sample, which could have an unwanted impact on the result.

Page 11: UsabilityStudyofaTraceabilityManagementTool · usability of a traceability management tool called Capra using a remote usability testing method where screen recording and post-test

Even though the limitations of the study might threaten the validity of the result, we believe it reflects reality; to substantiate this, repeating the study with a larger sample and a more mixed participant background would be appropriate.

IX. FUTURE WORK

As future work we suggest, firstly, reproducing this study after the improvements have been implemented, with a larger sample size and perhaps a think-aloud protocol, so that the users' verbalizations during the test are recorded as well. Secondly, software tools for developers in general would benefit from increased consideration and application of usability studies.

X. ACKNOWLEDGEMENTS

Both authors would like to thank Salome Marofor contributing guidance and support.

XI. THREATS TO VALIDITY

A. Internal Validity

The risks identified at the start of this study included not getting a large enough sample, i.e. fewer than 10 participants. According to Nielsen, only 5 participants are required to identify more than 80% of all usability problems, although this number has been disputed and research indicates that around 10 participants are needed to identify 80% or more of the usability problems. Our sample started at 12 participants who confirmed their participation and started the test, but only 6 of them actually finished the test and sent in the resulting data, which can affect the number of usability issues detected in this study.

Another issue affecting the validity of the collected data is that several of the 6 users did not follow the test instructions accurately regarding screen recordings. The instruction was to include all displays used during the test in the video recordings sent to us, but this was not the case: all users recorded only the screen containing the Eclipse IDE and the SUT. This meant that we could not determine whether users were seeking help or doing other work when they interacted with a second, unrecorded display during the test.

We did not implement a think-aloud protocol for the test, i.e. a protocol in which the users' voices would have been recorded and they would have been encouraged to speak out all their thoughts during the test session. The absence of a think-aloud protocol made it impossible for us to accurately determine "instances of frustration" for the "Impact" attribute in the CUP classification; implementing the protocol would have generated data that could further support the identification of user patterns. The data from the voice recordings could reveal the users' thoughts on their various interactions with Capra, back our claims of usability problems, and reveal more usability issues.

REFERENCES

[1] M. A. Toleman and J. Welsh, "Systematic evaluation of design choices for software development tools," Software - Concepts & Tools, vol. 19, no. 3, pp. 109–121, 1998.

[2] S. Swart. Capra. Accessed: 2017-03-23. [Online]. Available: http://projects.eclipse.org/

[3] A. project. Amalthea. Accessed: 2017-03-23. [Online]. Available: http://amalthea-project.org/

[4] K. Hornbæk, "Current practice in measuring usability: Challenges to usability studies and research," International Journal of Human-Computer Studies, vol. 64, no. 2, pp. 79–102, 2006.

[5] S. Nair, J. L. De La Vara, and S. Sen, "A review of traceability research at the requirements engineering conference (RE@21)," in Requirements Engineering Conference (RE), 2013 21st IEEE International. IEEE, 2013, pp. 222–229.

[6] R. Torkar, T. Gorschek, R. Feldt, M. Svahnberg, U. A. Raja, and K. Kamran, "Requirements traceability: a systematic review and industry case study," International Journal of Software Engineering and Knowledge Engineering, vol. 22, no. 03, pp. 385–433, 2012.

[7] E. Mealy, D. Carrington, P. Strooper, and P. Wyeth, "Improving usability of software refactoring tools," in Software Engineering Conference, 2007. ASWEC 2007. 18th Australian. IEEE, 2007, pp. 307–318.

[8] V. Ðukic, I. Lukovic, M. Crepinšek, T. Kosar, and M. Mernik, "Information system software development with support for application traceability," in International Conference on Product-Focused Software Process Improvement. Springer, 2015, pp. 513–527.

[9] L. Faulkner, "Beyond the five-user assumption: Benefits of increased sample sizes in usability testing," Behavior Research Methods, Instruments, & Computers, vol. 35, no. 3, pp. 379–383, 2003.

[10] J. Nielsen, "Usability inspection methods," in Conference Companion on Human Factors in Computing Systems. ACM, 1995, pp. 377–378.

[11] J. Brooke et al., "SUS - a quick and dirty usability scale," Usability Evaluation in Industry, vol. 189, no. 194, pp. 4–7, 1996.

[12] E. T. Hvannberg and E. L.-C. Law, "Classification of usability problems (CUP) scheme," in Interact. Citeseer, 2003.

[13] S. G. Vilbergsdóttir, E. T. Hvannberg, and E. L.-C. Law, "Classification of usability problems (CUP) scheme: augmentation and exploitation," in Proceedings of the 4th Nordic Conference on Human-Computer Interaction: Changing Roles. ACM, 2006, pp. 281–290.

[14] E. McFadden, D. R. Hager, C. J. Elie, and J. M. Blackwell, "Remote usability evaluation: Overview and case studies," International Journal of Human-Computer Interaction, vol. 14, no. 3-4, pp. 489–502, 2002.

[15] N. Ghasemifard, M. Shamsi, A. R. R. Kenari, and V. Ahmadi, "A new view at usability test methods of interfaces for human computer interaction," Global Journal of Computer Science and Technology, vol. 15, no. 1, 2015.

[16] Open Broadcaster Software. Accessed: 2017-07-05. [Online]. Available: https://obsproject.com/

[17] Google Drive. Accessed: 2017-07-05. [Online]. Available: https://www.google.com/drive/

[18] J. Nielsen, "10 heuristics for user interface design," 1995, accessed: 2017-03-23. [Online]. Available: https://www.nngroup.com/articles/ten-usability-heuristics/

[19] N. E. Jacobsen, M. Hertzum, and B. E. John, "The evaluator effect in usability studies: Problem detection and severity judgments," in Proceedings of the Human Factors and Ergonomics Society Annual Meeting, vol. 42, no. 19. SAGE Publications, 1998, pp. 1336–1340.

[20] J. Sauro. MeasuringU: 10 things to know about the System Usability Scale (SUS). Accessed: 2017-05-23. [Online]. Available: https://measuringu.com/10-things-sus/

APPENDIX
