Glossary



    PRACTICE OBJECTIVE

    This glossary of testing terminology has two objectives: first, to define the test terms that will be used throughout this manual; and second, to provide a basis for establishing a glossary of testing terminology for your organization. Information services (I/S) organizations that use a common testing vocabulary are better able to accelerate the maturity of their testing process.

    QAI believes that the testing terminology as defined in this glossary is the most commonly held definition for these terms. Therefore, QAI recommends that this glossary be adopted as a set of core testing term definitions. The glossary can then be supplemented with testing terms that are specific to your organization. These additional terms might include:

    • Names of acquired testing tools

    • Testing tools developed in your organization

    • Names of testing libraries

    • Names of testing reports

    • Terms which cause specific action to occur

    GLOSSARY OF TERMS

    Acceptance Testing: Formal testing conducted to determine whether or not a system satisfies its acceptance criteria; it enables an end user to determine whether or not to accept the system.

    Affinity Diagram: A group process that takes large amounts of language data, such as a list developed by brainstorming, and divides it into categories.

    Alpha Testing: Testing of a software product or system conducted at the developer's site by the end user.

    Audit: An inspection/assessment activity that verifies compliance with plans, policies, and procedures, and ensures that resources are conserved. Audit is a staff function; it serves as the "eyes and ears" of management.

    Automated Testing: That part of software testing that is assisted with software tool(s) that does not require operator input, analysis, or evaluation.

    Beta Testing: Testing conducted at one or more end user sites by the end user of a delivered software product or system.

    Black-box Testing: Functional testing based on requirements with no knowledge of the internal program structure or data. Also known as closed-box testing.

    Bottom-up Testing: An integration testing technique that tests the low-level components first, using test drivers to call the low-level components under test in place of the higher-level components that have not yet been developed.

    Boundary Value Analysis: A test data selection technique in which values are chosen to lie along data extremes. Boundary values include maximum, minimum, just inside/outside boundaries, typical values, and error values.
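
    As a sketch in Python (the input field and its 1-100 range are hypothetical, not from this glossary), boundary value analysis for a numeric input might select the following values:

        # Hypothetical input field that accepts integers 1 through 100.
        def accepts(value):
            return 1 <= value <= 100

        # Values at the extremes, just inside/outside the boundaries,
        # and a typical value.
        for v in [0, 1, 2, 50, 99, 100, 101]:
            print(v, accepts(v))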

    Brainstorming: A group process for generating creative and diverse ideas.

    Branch Coverage Testing: A test method satisfying coverage criteria that requires each decision point at each possible branch to be executed at least once.
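
    A minimal Python sketch (the function is hypothetical): with one decision point, branch coverage demands that the tests exercise each of its two outcomes.

        def classify(n):
            if n < 0:                  # the decision point
                return "negative"
            return "non-negative"

        assert classify(-1) == "negative"     # takes the true branch
        assert classify(7) == "non-negative"  # takes the false branch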

    Bug: A design flaw that will result in symptoms exhibited by some object (the object under test or some other object) when an object is subjected to an appropriate test.

    Cause-and-Effect (Fishbone) Diagram: A tool used to identify possible causes of a problem by representing the relationship between some effect and its possible cause.



    Cause-effect Graphing: A testing technique that aids in selecting, in a systematic way, a high-yield set of test cases that logically relates causes to effects to produce test cases. It has a beneficial side effect in pointing out incompleteness and ambiguities in specifications.

    Checksheet: A form used to record data as it is gathered.

    Clear-box Testing: Another term for white-box testing. Structural testing is sometimes referred to as clear-box testing, since "white boxes" are considered opaque and do not really permit visibility into the code. This is also known as glass-box or open-box testing.

    Client: The end user that pays for the product received, and receives the benefit from the use of the product.

    Control Chart: A statistical method for distinguishing between common and special cause variation exhibited by processes.

    Customer (end user): The individual or organization, internal or external to the producing organization, that receives the product.

    Cyclomatic Complexity: A measure of the number of linearly independent paths through a program module.
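
    As a worked illustration, a common way to compute this measure is McCabe's formula: for a control-flow graph with E edges, N nodes, and P connected components, V(G) = E - N + 2P. For a single module this reduces to the number of decision points plus one, so a module containing one if statement and one while loop has V(G) = 2 + 1 = 3.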

    Data Flow Analysis: Consists of the graphical analysis of collections of (sequential) data definitions and reference patterns to determine constraints that can be placed on data values at various points of executing the source program.

    Debugging: The act of attempting to determine the cause of the symptoms of malfunctions detected by testing or by frenzied user complaints.

    Defect: NOTE: Operationally, it is useful to work with two definitions of a defect: 1) From the producer's viewpoint: a product requirement that has not been met, or a product attribute possessed by a product or a function performed by a product that is not in the statement of requirements that define the product. 2) From the end user's viewpoint: anything that causes end user dissatisfaction, whether in the statement of requirements or not.

    Defect Analysis: Using defects as data for continuous quality improvement. Defect analysis generally seeks to classify defects into categories and identify possible causes in order to direct process improvement efforts.

    Defect Density: Ratio of the number of defects to program length (a relative number).
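
    As a worked example (length is often normalized to thousands of lines of code, a convention this definition leaves open): a program with 12 defects in 3,000 lines of code has a defect density of 12 / 3 = 4 defects per KLOC.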

    Desk Checking: A form of manual static analysis usually performed by the originator. Source code, documentation, etc., is visually checked against requirements and standards.

    Dynamic Analysis: The process of evaluating a program based on execution of that program. Dynamic analysis approaches rely on executing a piece of software with selected test data.

    Dynamic Testing: Verification or validation performed by executing the system's code.

    Error: 1) A discrepancy between a computed, observed, or measured value or condition and the true, specified, or theoretically correct value or condition; and 2) a mental mistake made by a programmer that may result in a program fault.

    Error-based Testing: Testing where information about programming style, error-prone language constructs, and other programming knowledge is applied to select test data capable of detecting faults, either a specified class of faults or all possible faults.

    Evaluation: The process of examining a system or system component to determine the extent to which specified properties are present.

    Execution: The process of a computer carrying out an instruction or instructions of a computer program.

    Exhaustive Testing: Executing the program with all possible combinations of values for program variables.
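
    A short calculation shows why this is rarely practical: a program with just two 32-bit integer variables already has 2^32 × 2^32 = 2^64, roughly 1.8 × 10^19, possible value combinations.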

    Failure: The inability of a system or system component to perform a required function within specified limits. A failure may be produced when a fault is encountered.

    Failure-directed Testing: Testing based on the knowledge of the types of errors made in the past that are likely for the system under test.

    Fault: A manifestation of an error in software. A fault, if encountered, may cause a failure.

    Fault-based Testing: Testing that employs a test data selection strategy designed to generate test data capable of demonstrating the absence of a set of prespecified faults, typically, frequently occurring faults.


    Fault Tree Analysis: A form of safety analysis that assesses hardware safety to provide failure statistics and sensitivity analyses that indicate the possible effect of critical failures.

    Flowchart: A diagram showing the sequential steps of a process or of a workflow around a product or service.

    Formal Review: A technical review conducted with the end user, including the types of reviews called for in the standards.

    Functional Testing: Application of test data derived from the specified functional requirements without regard to the final program structure. Also known as black-box testing.

    Function Points: A consistent measure of software size based on user requirements. Data components include inputs, outputs, etc. Environment characteristics include data communications, performance, reusability, operational ease, etc. Weight scale: 0 = not present; 1 = minor influence; 5 = strong influence.

    Heuristics Testing: Another term for failure-directed testing.

    Histogram: A graphical description of individual measured values in a data set that is organized according to the frequency or relative frequency of occurrence. A histogram illustrates the shape of the distribution of individual values in a data set along with information regarding the average and variation.

    Hybrid Testing: Top-down testing combined with bottom-up testing of prioritized or available components.

    Incremental Analysis: Incremental analysis occurs when (partial) analysis may be performed on an incomplete product to allow early feedback on the development of that product.

    Infeasible Path: A sequence of program statements that can never be executed.
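
    A hypothetical Python illustration: the code below contains a path through both if-bodies, but no value of x satisfies x > 10 and x < 5 at once, so that path is infeasible.

        def f(x):
            y = 0
            if x > 10:
                y += 1
            if x < 5:
                y += 2   # the path taking BOTH if-bodies can never execute
            return y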

    Inputs: Products, services, or information needed from suppliers to make a process work.

    Inspection: 1) A formal evaluation technique in which software requirements, design, or code are examined in detail by a person or group other than the author to detect faults, violations of development standards, and other problems. 2) A quality improvement process for written material that consists of two dominant components: product (document) improvement and process improvement (document production and inspection).

    Instrument: To install or insert devices or instructions into hardware or software to monitor the operation of a system or component.

    Integration: The process of combining software components or hardware components, or both, into an overall system.

    Integration Testing: An orderly progression of testing in which software components or hardware components, or both, are combined and tested until the entire system has been integrated.

    Interface: A shared boundary. An interface might be a hardware component to link two devices, or it might be a portion of storage or registers accessed by two or more computer programs.

    Interface Analysis: Checks the interfaces between program elements for consistency and adherence to predefined rules or axioms.

    Intrusive Testing: Testing that collects timing and processing information during program execution that may change the behavior of the software from its behavior in a real environment. Usually involves additional code embedded in the software being tested, or additional processes running concurrently with the software being tested on the same platform.

    IV&V: Independent verification and validation is the verification and validation of a software product by an organization that is both technically and managerially separate from the organization responsible for developing the product.

    Life Cycle: The period that starts when a software product is conceived and ends when the product is no longer available for use. The software life cycle typically includes a requirements phase, design phase, implementation (code) phase, test phase, installation and checkout phase, operation and maintenance phase, and a retirement phase.

    Manual Testing: That part of software testing that requires operator input, analysis, or evaluation.

    Mean: A value derived by adding several quantities and dividing the sum by the number of these quantities.


    Measure: To ascertain or appraise by comparing to a standard; to apply a metric.

    Measurement: 1) The act or process of measuring. 2) A figure, extent, or amount obtained by measuring.

    Metric: A measure of the extent or degree to which a product possesses and exhibits a certain quality, property, or attribute.

    Mutation Testing: A method to determine test set thoroughness by measuring the extent to which a test set can discriminate the program from slight variants of the program.
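
    A small hypothetical Python example: a mutant differs from the original by one flipped operator, and a thorough test set must be able to tell the two apart.

        def max2(a, b):                # original unit under test
            return a if a > b else b

        def max2_mutant(a, b):         # slight variant: > flipped to <
            return a if a < b else b

        # This test cannot discriminate the program from the mutant,
        # so mutation testing judges the test set inadequate:
        assert max2(2, 2) == 2 and max2_mutant(2, 2) == 2

        # This test "kills" the mutant (the mutant returns 1, not 3):
        assert max2(3, 1) == 3 and max2_mutant(3, 1) != 3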

    Nonintrusive Testing: Testing that is transparent to the software under test; i.e., testing that does not change the timing or processing characteristics of the software under test from its behavior in a real environment. Usually involves additional hardware that collects timing or processing information and processes that information on another platform.

    Operational Requirements: Qualitative and quantitative parameters that specify the desired operational capabilities of a system and serve as a basis for determining the operational effectiveness and suitability of a system prior to deployment.

    Operational Testing: Testing performed by the end user on software in its normal operating environment.

    Outputs: Products, services, or information supplied to meet end user needs.

    Path Analysis: Program analysis performed to identify all possible paths through a program, to detect incomplete paths, or to discover portions of the program that are not on any path.

    Path Coverage Testing: A test method satisfying coverage criteria that requires each logical path through the program to be tested. Paths through the program often are grouped into a finite set of classes; one path from each class is tested.

    Peer Reviews: A methodical examination of software work products by the producer's peers to identify defects and areas where changes are needed.

    Policy: Managerial desires and intents concerning either process (intended objectives) or products (desired attributes).

    Problem: Any deviation from defined standards. Same as defect.

    Procedure: The step-by-step method followed to ensure that standards are met.

    Process: The work effort that produces a product. This includes efforts of people and equipment guided by policies, standards, and procedures.

    Process Improvement: To change a process to make the process produce a given product faster, more economically, or of higher quality. Such changes may require the product to be changed. The defect rate must be maintained or reduced.

    Product: The output of a process; the work product. There are three useful classes of products: manufactured products (standard and custom), administrative/information products (invoices, letters, etc.), and service products (physical, intellectual, physiological, and psychological). Products are defined by a statement of requirements; they are produced by one or more people working in a process.

    Product Improvement: To change the statement of requirements that defines a product to make the product more satisfying and attractive to the end user (more competitive). Such changes may add to or delete from the list of attributes and/or the list of functions defining a product. Such changes frequently require the process to be changed. NOTE: This process could result in a totally new product.

    Productivity: The ratio of the output of a process to the input, usually measured in the same units. It is frequently useful to compare the value added to a product by a process to the value of the input resources required (using fair market values for both input and output).

    Proof Checker: A program that checks formal proofs of program properties for logical correctness.

    Prototyping: Evaluating requirements or designs at the conceptualization phase, the requirements analysis phase, or the design phase by quickly building scaled-down components of the intended system to obtain rapid feedback of analysis and design decisions.

    Qualification Testing: Formal testing, usually conducted by the developer for the end user, to demonstrate that the software meets its specified requirements.


    Quality: A product is a quality product if it is defect free. To the producer, a product is a quality product if it meets or conforms to the statement of requirements that defines the product. This statement is usually shortened to: quality means meets requirements. NOTE: Operationally, the word quality refers to products.

    Quality Assurance (QA): The set of support activities (including facilitation, training, measurement, and analysis) needed to provide adequate confidence that processes are established and continuously improved in order to produce products that meet specifications and are fit for use.

    Quality Control (QC): The process by which product quality is compared with applicable standards, and the action taken when nonconformance is detected. Its focus is defect detection and removal. This is a line function; that is, the performance of these tasks is the responsibility of the people working within the process.

    Quality Improvement: To change a production process so that the rate at which defective products (defects) are produced is reduced. Some process changes may require the product to be changed.

    Random Testing: An essentially black-box testing approach in which a program is tested by randomly choosing a subset of all possible input values. The distribution may be arbitrary or may attempt to accurately reflect the distribution of inputs in the application environment.
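
    As a sketch in Python (using the built-in abs as a stand-in unit under test), random testing draws inputs from the domain and checks a property of each result:

        import random

        for _ in range(1000):
            x = random.randint(-10**6, 10**6)   # arbitrary input distribution
            assert abs(x) >= 0 and abs(x) in (x, -x)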

    Regression Testing: Selective retesting to detect faults introduced during modification of a system or system component, to verify that modifications have not caused unintended adverse effects, or to verify that a modified system or system component still meets its specified requirements.

    Reliability: The probability of failure-free operation for a specified period.

    Requirement: A formal statement of: 1) an attribute to be possessed by the product or a function to be performed by the product; 2) the performance standard for the attribute or function; or 3) the measuring process to be used in verifying that the standard has been met.

    Review: A way to use the diversity and power of a group of people to point out needed improvements in a product or confirm those parts of a product in which improvement is either not desired or not needed. A review is a general work product evaluation technique that includes desk checking, walkthroughs, technical reviews, peer reviews, formal reviews, and inspections.

    Run Chart: A graph of data points in chronological order used to illustrate trends or cycles of the characteristic being measured, for the purpose of suggesting an assignable cause rather than random variation.

    Scatter Plot (correlation diagram): A graph designed to show whether there is a relationship between two changing factors.

    Semantics: 1) The relationship of characters or a group of characters to their meanings, independent of the manner of their interpretation and use. 2) The relationships between symbols and their meanings.

    Software Characteristic: An inherent, possibly accidental, trait, quality, or property of software (for example, functionality, performance, attributes, design constraints, number of states, lines of branches).

    Software Feature: A software characteristic specified or implied by requirements documentation (for example, functionality, performance, attributes, or design constraints).

    Software Tool: A computer program used to help develop, test, analyze, or maintain another computer program or its documentation; e.g., automated design tools, compilers, test tools, and maintenance tools.

    Standards: The measure used to evaluate products and identify nonconformance. The basis upon which adherence to policies is measured.

    Standardize: Procedures are implemented to ensure that the output of a process is maintained at a desired level.

    Statement Coverage Testing: A test method satisfying coverage criteria that requires each statement be executed at least once.
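
    A hypothetical Python contrast with branch coverage: one test executes every statement below, yet the false outcome of the decision is never taken.

        def discount(price, member):
            total = price
            if member:
                total = price * 0.9
            return total

        # Executes every statement (100% statement coverage), but the
        # member == False branch is never exercised.
        assert discount(100, True) == 90.0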

    Statement of Requirements: The exhaustive list of requirements that define a product. NOTE: The statement of requirements should document requirements proposed and rejected (including the reason for the rejection) during the requirements determination process.

    Static Testing: Verification performed without executing the system's code. Also called static analysis.

    Statistical Process Control: The use of statistical techniques and tools to measure an ongoing process for change or stability.

    Structural Coverage: This requires that each pair of module invocations be executed at least once.

    Structural Testing: A testing method where the test data is derived solely from the program structure.

    Stub: A software component that usually minimally simulates the actions of called components that have not yet been integrated during top-down testing.
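
    A minimal hypothetical sketch in Python: during top-down testing, a stub stands in for a lower-level component that has not yet been integrated.

        def fetch_user_stub(user_id):
            # Minimally simulates the not-yet-integrated data layer.
            return {"id": user_id, "name": "placeholder"}

        def greeting(user_id, fetch_user=fetch_user_stub):
            # High-level component under test calls the stub.
            return "Hello, " + fetch_user(user_id)["name"]

        assert greeting(1) == "Hello, placeholder"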

    Supplier: An individual or organization that supplies inputs needed to generate a product, service, or information to an end user.

    Syntax: 1) The relationship among characters or groups of characters independent of their meanings or the manner of their interpretation and use; 2) the structure of expressions in a language; and 3) the rules governing the structure of the language.

    System: A collection of people, machines, and methods organized to accomplish a set of specified functions.

    System Simulation: Another name for prototyping.

    System Testing: The process of testing an integrated hardware and software system to verify that the system meets its specified requirements.

    Technical Review: A review that focuses on the content of the technical material being reviewed.

    Test Bed: 1) An environment that contains the integral hardware, instrumentation, simulators, software tools, and other support elements needed to conduct a test of a logically or physically separate component. 2) A suite of test programs used in conducting the test of a component or system.

    Test Case: The definition of test case differs from company to company, engineer to engineer, and even project to project. A test case usually includes an identified set of information about observable states, conditions, events, and data, including inputs and expected outputs.

    Test Development: The development of anything required to conduct testing. This may include test requirements (objectives), strategies, processes, plans, software, procedures, cases, documentation, etc.

    Test Executive: Another term for test harness.

    Test Harness: A software tool that enables the testing of software components by linking test capabilities to perform specific tests: accepting program inputs, simulating missing components, comparing actual outputs with expected outputs to determine correctness, and reporting discrepancies.
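
    A toy Python harness along these lines (names and structure are illustrative only): it accepts inputs, compares actual with expected outputs, and reports discrepancies.

        def run_tests(component, cases):
            for inputs, expected in cases:
                actual = component(*inputs)
                status = "PASS" if actual == expected else "FAIL"
                print(status, inputs, "expected", expected, "actual", actual)

        # Usage: the second case deliberately reports a discrepancy.
        run_tests(lambda a, b: a + b, [((1, 2), 3), ((2, 2), 5)])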

    Test Objective: An identified set of software features to be measured under specified conditions by comparing actual behavior with the required behavior described in the software documentation.

    Test Plan: A formal or informal plan to be followed to assure the controlled testing of the product under test.

    Test Procedure: The formal or informal procedure that will be followed to execute a test. This is usually a written document that allows others to execute the test with a minimum of training.

    Testing: Any activity aimed at evaluating an attribute or capability of a program or system to determine that it meets its required results. The process of exercising or evaluating a system or system component by manual or automated means to verify that it satisfies specified requirements or to identify differences between expected and actual results.

    Top-down Testing: An integration testing technique that tests the high-level components first, using stubs for lower-level called components that have not yet been integrated and that simulate the required actions of those components.

    Unit Testing: The testing done to show whether a unit (the smallest piece of software that can be independently compiled or assembled, loaded, and tested) satisfies its functional specification or its implemented structure matches the intended design structure.
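
    A brief hypothetical example using Python's standard unittest module, checking a unit against its functional specification:

        import unittest

        def is_leap(year):
            # Unit under test: Gregorian leap-year rule.
            return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

        class TestIsLeap(unittest.TestCase):
            def test_leap_years(self):
                self.assertTrue(is_leap(2000))
                self.assertTrue(is_leap(2024))

            def test_non_leap_years(self):
                self.assertFalse(is_leap(1900))
                self.assertFalse(is_leap(2023))

        if __name__ == "__main__":
            unittest.main()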

    User: The end user that actually uses the product received.


    Validation: The process of evaluating software to determine compliance with specified requirements.

    Verification: The process of evaluating the products of a given software development activity to determine correctness and consistency with respect to the products and standards provided as input to that activity.

    Walkthrough: Usually, a step-by-step simulation of the execution of a procedure, as when walking through code, line by line, with an imagined set of inputs. The term has been extended to the review of material that is not procedural, such as data descriptions, reference manuals, specifications, etc.

    White-box Testing: Testing approaches that examine the program structure and derive test data from the program logic.

    Copyright © 1997 · Quality Assurance Institute® · Orlando, Florida

