What is Testing




Why are there Bugs?

Since humans design and program hardware and software, mistakes are inevitable. That's what computer and software vendors tell us, and it's partly true. What they don't say is that software is buggier than it has to be. Why? Because time is money, especially in the software industry. This is how bugs are born: a software or hardware company sees a business opportunity and starts building a product to take advantage of that. Long before development is finished, the company announces that the product is on the way. Because the public is (the company hopes) now anxiously awaiting this product, the marketing department fights to get the goods out the door before that deadline, all the while pressuring the software engineers to add more and more features. Shareholders and venture capitalists clamor for quick delivery because that's when the company will see the biggest surge in sales. Meanwhile, the quality-assurance division has to battle for sufficient bug-testing time.

"The simple fact is that you get the most revenues at the release of software," says Bruce Brown, the founder of BugNet, a newsletter that has chronicled software bugs and fixes since 1994. "The faster you bring it out, the more money you make. You can always fix it later, when people howl. It's a fine line when to release something, and the industry accepts defects."

It may seem that there are more bugs these days than ever before, but longtime bug watchers like Brown say this is mostly a visual illusion caused by increased media coverage. Not only has the number of bugs not changed, but manufacturers are fixing them more quickly. But while the industry as a whole may not be buggier, one important new category is, arguably, more flawed than other genres: Internet software. The popularity of the Internet is pushing companies to produce software faster than ever before, and the inevitable result is buggier products.

"Those are crazy release schedules," says Brian Bershad, an associate professor of computer science at the University of Washington. His Kimera project helped catch several security bugs in Java. "The whole industry is bonkers. Web standards need to be developed and thoughtfully laid out, but look at all the versions of Java and HTML. It's not that the people aren't smart; it's just that they don't have time to think."

But software and hardware companies persist in arguing that we should put up with bugs. Why? Because the cost of stamping out all bugs would be too high for the consumer. "Software is just getting so incredibly complicated," says Bershad. "It's too expensive to have no bugs in consumer software."

What is the difference between a bug, a defect, and an error?

Question: What is the difference between a bug, a defect, and an error?

Answer: According to the British standard BS 7925-1, bug is a generic term for a fault, failure, error, or human action that produces an incorrect result. Robert Vanderwall offers these formal definitions from IEEE 610.1. The sub-points are his own.

mistake (an error): A human action that produces an incorrect result.
- mistake made in translation or interpretation
- lots of taxonomies exist to describe errors

fault: An incorrect step, process, or data definition.
- manifestation of the error in implementation
- this is really nebulous; it is hard to pin down the 'location'

failure: An incorrect result.

bug: An informal word describing any of the above. (Not IEEE)

Rohan Khale found a web site that gave these definitions: a bug exists because something that is supposed to work does not work as you expected. Defects usually occur when a product no longer works the way it used to.

He also found these easy-to-understand definitions: a defect is something that normally works but has something out-of-spec. A bug, on the other hand, is something that was considered but not implemented, for whatever reason.

I have seen these arbitrary definitions:
Error: a programming mistake leads to an error.
Bug: a deviation from the expected result.
Defect: a problem in an algorithm leads to a failure.
Failure: the result of any of the above.

Compare those to these arbitrary definitions:
Error: when we get the wrong output, e.g. a syntax error or logical error.
Fault: when everything is correct but we are not able to get a result.
Failure: when we are not able to insert any input.

How to Write a Fully Effective Bug Report

To write a fully effective report you must:
- Explain how to reproduce the problem.
- Analyze the error so you can describe it in a minimum number of steps.
- Write a report that is complete and easy to understand.

Write bug reports immediately; the longer you wait between finding the problem and reporting it, the more likely it is the description will be incomplete, the problem not reproducible, or simply forgotten.

Writing a one-line report summary (Bug's report title) is an art. You must master it. Summaries help everyone quickly review outstanding problems and find individual reports. The summary line is the most frequently and carefully read part of the report. When a summary makes a problem sound less severe than it is, managers are more likely to defer it. Alternatively, if your summaries make problems sound more severe than they are, you will gain a reputation for alarmism. Don't use the same summary for two different reports, even if they are similar. The summary line should describe only the problem, not the replication steps. Don't run the summary into the description (Steps to reproduce) as they will usually be printed independently of each other in reports.

Ideally you should be able to write this clearly enough for a developer to reproduce and fix the problem, and for another QA engineer to verify the fix, without either having to come back to you, the author, for more information. It is much better to over-communicate in this field than to say too little. Of course it is ideal if the problem is reproducible and you can write down those steps. But if you can't reproduce a bug, and try and try and still can't reproduce it, admit it and write the report anyway. A good programmer can often track down an irreproducible problem from a careful description.


Bug Life Cycle Model

Bug Report Components

Report number: A unique number given to the bug.

Program / module being tested: The name of the program or module being tested.

Version & release number: The version of the product that you are testing.

Problem Summary: A one-line data entry field stating precisely what the problem is.

Report Type: Describes the type of problem found; for example, it could be a software or hardware bug.

Severity: Normally, how severe you consider the bug to be. Typical levels of severity: Low - Medium - High - Urgent.

Environment: The environment in which the bug is found.

Detailed Description: A detailed description of the bug that was found.

How to reproduce: A detailed description of how to reproduce the bug.

Reported by: The name of the person who wrote the report.

Assigned to developer: The name of the developer assigned to fix the bug.

Status:
- Open: the status of the bug when it is entered.
- Fixed / feedback: the status of the bug when it has been fixed.
- Closed: the status of the bug when the fix has been verified. (A bug can only be closed by a QA person; usually the problem is closed by the QA manager.)
- Deferred: the status of the bug when it is postponed.
- User error: the status of the bug when the user made an error.
- Not a bug: the status of the bug when it is not a bug.

Priority: Assigned by the project manager, who asks the programmers to fix bugs in priority order.

Resolution: Defines the current status of the problem. There are four types of resolution: deferred, not a problem, will not fix, and as designed.
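The components above can be mirrored in a simple record structure. The following is a minimal sketch in Python; the field names follow the list above, but the types and example values are illustrative assumptions, not a prescribed schema.

    from dataclasses import dataclass, field
    from datetime import date

    @dataclass
    class BugReport:
        """Minimal bug report record mirroring the components listed above."""
        report_number: int                    # unique number given to the bug
        program: str                          # program / module being tested
        version: str                          # version & release number
        summary: str                          # one-line problem summary
        report_type: str                      # e.g. "software" or "hardware"
        severity: str                         # "Low", "Medium", "High", "Urgent"
        environment: str                      # environment in which the bug was found
        description: str                      # detailed description of the bug
        steps_to_reproduce: list = field(default_factory=list)
        reported_by: str = ""
        assigned_to: str = ""
        status: str = "Open"                  # Open / Fixed / Closed / Deferred / ...
        priority: int = 3                     # set by the project manager
        resolution: str = ""                  # deferred / not a problem / will not fix / as designed
        reported_on: date = field(default_factory=date.today)

    report = BugReport(101, "Login module", "2.1", "Crash when password field is empty",
                       "software", "High", "Windows client",
                       "Application crashes when the password field is left empty")
    print(report.summary, report.status)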

Bug Impacts

Low impact: This is for minor problems, such as failures at extreme boundary conditions that are unlikely to occur in normal use, or minor errors in layout/formatting. These problems do not impact use of the product in any substantive way.

Medium impact: This is a problem that a) affects a more isolated piece of functionality, b) occurs only at certain boundary conditions, c) has a workaround (where "don't do that" might be an acceptable answer to the user), d) occurs only at one or two customers, or e) is very intermittent.

High impact: This should be used only for serious problems, affecting many sites, with no workaround. Frequent or reproducible crashes/core dumps/GPFs would fall in this category, as would major functionality not working.

Urgent impact: This should be reserved for only the most catastrophic of problems: data corruption, complete inability to use the product at almost any site, etc. For released products, an urgent bug implies that shipping of the product should stop immediately until the problem is resolved.


Defects Severity and Priority

Question: One question on the defects that we raise. We are supposed to give a severity and a priority to each defect. Now, the severity can be Major, Minor, or Trivial, and the priority can be 1, 2, or 3 (with 1 being a high-priority defect). My question is: why do we need two parameters, severity and priority, for a defect? Can't we do with only one?

Answer: It depends entirely on the size of the company. Severity tells us how bad the defect is; priority tells us how soon it is desirable to fix the problem. In some companies, the defect reporter sets the severity and the triage team or product management sets the priority. In a small company or project (or product), particularly where there aren't many defects to track, you may not really need both, since a high-severity defect is usually also a high-priority defect. But in a large company, and particularly where there are many defects, using both is a form of risk management.

Major would be 1 and Trivial would be 3. You can add or multiply the two values together (there is only a small difference in the outcome) and then use the event's risk value to determine how you should address the problem. The lower values must be addressed and the higher values can wait.
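A minimal sketch of that arithmetic, assuming the Major=1 through Trivial=3 mapping described above; the function name and the exact numeric scale are illustrative only.

    SEVERITY = {"Major": 1, "Minor": 2, "Trivial": 3}   # 1 is the worst severity

    def risk_value(severity: str, priority: int, combine: str = "add") -> int:
        """Combine severity and priority into a single risk value.

        Lower values should be addressed first; higher values can wait.
        """
        s = SEVERITY[severity]
        return s + priority if combine == "add" else s * priority

    # A Major, priority-1 defect scores 2 (added) or 1 (multiplied);
    # a Trivial, priority-3 defect scores 6 or 9.
    print(risk_value("Major", 1), risk_value("Trivial", 3, combine="multiply"))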

I discovered a new method for risk assessment. It is based on a military standard, MIL-STD-882. If you want a copy of the current version, search for MIL-STD-882D using Google or Yahoo! The main area of interest is section A.4.4.3 and its children, which describe the assessment of mishap risk. They use a four-point severity rating (rather than three): Catastrophic; Critical; Marginal; Negligible. They then use a five-point (rather than three-point) probability rating: Frequent; Probable; Occasional; Remote; Improbable. Then, rather than using a mathematical calculation to determine a risk level, they use a predefined chart.
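Such a predefined chart can be modelled as a simple lookup table. The sketch below is illustrative only; the risk level in each cell is an assumption for demonstration, not a reproduction of the actual MIL-STD-882D matrix.

    # Rows: severity category; columns: probability rating.
    # Cell values are illustrative risk levels, not the official chart.
    RISK_CHART = {
        "Catastrophic": {"Frequent": "High", "Probable": "High", "Occasional": "High",
                         "Remote": "Serious", "Improbable": "Medium"},
        "Critical":     {"Frequent": "High", "Probable": "High", "Occasional": "Serious",
                         "Remote": "Medium", "Improbable": "Low"},
        "Marginal":     {"Frequent": "Serious", "Probable": "Medium", "Occasional": "Medium",
                         "Remote": "Low", "Improbable": "Low"},
        "Negligible":   {"Frequent": "Medium", "Probable": "Low", "Occasional": "Low",
                         "Remote": "Low", "Improbable": "Low"},
    }

    def assess(severity: str, probability: str) -> str:
        """Look up the mishap risk level from the predefined chart."""
        return RISK_CHART[severity][probability]

    print(assess("Critical", "Occasional"))   # -> "Serious"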

Blocker: This bug prevents developers from testing or developing the software.
Critical: The software crashes, hangs, or causes you to lose data.
Major: A major feature is broken.
Normal: It's a bug that should be fixed.
Minor: Minor loss of function, and there's an easy workaround.
Trivial: A cosmetic problem, such as a misspelled word or misaligned text.
Enhancement: Request for a new feature or enhancement.

Answer: Severity levels can be defined as follows:

S1 - High/Showstopper. For example, a system crash or an error message forcing the window to close. The tester's ability to operate the system is either totally (system down) or almost totally affected. A major area of the user's system is affected by the incident, and it is significant to business processes.

S2 - Medium/Workaround. For example, something required in the specs has a problem, but the tester can go on with testing. The incident affects an area of functionality, but there is a workaround which negates the impact to the business process. This is a problem that: a) affects a more isolated piece of functionality, b) occurs only at certain boundary conditions, c) has a workaround (where "don't do that" might be an acceptable answer to the user), d) occurs only at one or two customers, or is intermittent.

S3 - Low. This is for minor problems, such as failures at extreme boundary conditions that are unlikely to occur in normal use, or minor errors in layout/formatting. These problems do not impact use of the product in any substantive way. They are incidents that are cosmetic in nature and of no or very low impact to business processes.

What is Testing?


“Testing is the process of identifying defects, where a defect is any variance between actual and expected results”

A defect can be caused by a flaw in the application software or by a flaw in the application specification. For example, unexpected (incorrect) results can come from errors made during the construction phase, or from an algorithm incorrectly defined in the specification. Testing is commonly assumed to mean executing software and finding errors. This type of testing is known as dynamic testing, and while valid, it is not the most effective way of testing. Static testing, the review, inspection and validation of development requirements, is the most effective and cost efficient way of testing. A structured approach to testing should use both dynamic and static testing techniques.

Testing and Quality Assurance

What is the relationship between testing and Software Quality Assurance (SQA)? An application that meets its requirements totally can be said to exhibit quality. Quality is not based on a subjective assessment but rather on a clearly demonstrable, and measurable, basis. Quality Assurance and Quality Control are not the same. Quality Control is a process directed at validating that a specific deliverable meets standards, is error free, and is the best deliverable that can be produced. It is a responsibility internal to the team. QA, on the other hand, is a review with a goal of improving the process as well as the deliverable. QA is often an external process.

QA is an effective approach to producing a high quality product. One aspect is the process of objectively reviewing project deliverables and the processes that produce them (including testing), to identify defects, and then making recommendations for improvement based on the reviews. The end result is the assurance that the system and application is of high quality, and that the process is working. The achievement of quality goals is well within reach when organizational strategies are used in the testing process. From the client's perspective, an application's quality is high if it meets their expectations.

Software Testing 10 Rules

1. Test early and test often.

2. Integrate the application development and testing life cycles. You'll get better results and you won't have to mediate between two armed camps in your IT shop.

3. Formalize a testing methodology; you'll test everything the same way and you'll get uniform results.

4. Develop a comprehensive test plan; it forms the basis for the testing methodology.

5. Use both static and dynamic testing.

6. Define your expected results.

7. Understand the business reason behind the application. You'll write a better application and better testing scripts.

8. Use multiple levels and types of testing (regression, systems, integration, stress and load).

9. Review and inspect the work; it will lower costs.

10. Don't let your programmers check their own work; they'll miss their own errors.

Test Methods

Black-Box Testing

In using this strategy, the tester views the program as a black box and does not see the code of the program. Techniques include: equivalence partitioning, boundary-value analysis, and error guessing.
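As a minimal sketch of these black-box techniques, the test values below are chosen purely from the specification of a hypothetical accept_age function (valid range 18 to 65), without looking at its internal code.

    def accept_age(age: int) -> bool:
        """Hypothetical function under test; the spec says ages 18-65 are valid."""
        return 18 <= age <= 65

    # Equivalence partitioning: one representative value from each class.
    partition_cases = {17: False,   # below the valid range
                       40: True,    # inside the valid range
                       70: False}   # above the valid range

    # Boundary-value analysis: values at and on either side of each boundary.
    boundary_cases = {17: False, 18: True, 19: True, 64: True, 65: True, 66: False}

    for value, expected in {**partition_cases, **boundary_cases}.items():
        assert accept_age(value) == expected, f"unexpected result for input {value}"
    print("all black-box cases passed")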

White-Box Testing

In using this strategy, the tester examines the internal structure of the program. Techniques include: statement coverage, decision coverage, condition coverage, decision/condition coverage, and multiple-condition coverage.
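By contrast, a white-box example chooses inputs by reading the code itself. The sketch below uses a hypothetical discount function to show the difference between statement coverage and decision coverage.

    def discount(total: float, is_member: bool) -> float:
        """Hypothetical function under test."""
        rate = 0.0
        if is_member and total > 100:
            rate = 0.25
        return total * (1 - rate)

    # Statement coverage: a single test that executes every line (if-branch taken).
    assert discount(200, True) == 150.0

    # Decision coverage additionally requires the condition to evaluate False.
    assert discount(50, True) == 50.0     # total too small
    assert discount(200, False) == 200.0  # not a member
    print("statement and decision coverage exercised")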


Gray-Box Testing

In this strategy, black-box testing is combined with knowledge of the internals, such as database validation: using SQL to query the database, adding/loading data sets to confirm functions, and querying the database to confirm expected results.

Test Script

A type of test file. It is a set of instructions run automatically by a software or hardware test tool.

Suite

A collection of test cases or scripts.

Types of Testing (Test Reviews)

Acceptance Testing

The process of comparing a program to its requirements.

Ad-Hoc Testing

Appropriate, and very often applied, when the tester wants to become familiar with the product, or in an environment where technical/testing materials are not 100% complete. It is also largely based on a general understanding of the software product's functionality and of testing, and on normal human common sense.

Build Acceptance Test

The build acceptance test is a simplistic check of a product's functionality in order to determine if the product is capable of being tested to a greater extent. Every new build should undergo a build acceptance test to determine if further testing can be executed. Examples of build acceptance criteria: the product can be installed with no crashes or errors that terminate the installation. (Development needs to install the software from the same source accessed by QA, e.g. Drop Zone, CD-ROM, Electronic Software Distribution archives, etc.)

Clients can connect to associated servers, and simple client/server communication can be achieved.

Bottom-Up

Start testing from the bottom of the program. In the bottom-up strategy, the program as a whole does not exist until the last module is added.

CET (Customer Experience Test)

An in-house test performed before the Alpha, Beta, and FCS milestones, used to determine whether the product can be installed and used without any problems, assistance, or support from others.

Client-Server Test

Testing systems that operate in client/server environments.

Compatibility Test

This test is used to test compatibility between different client/server version combinations as well as other supported products.

Confidence Test

The confidence test ensures a product functions as expected by ensuring platform specific bugs are not introduced and functionality has not regressed from drop to drop. A typical confidence test is designed to touch all major areas of a product's functionality. These tests are run regularly once the Functional Freeze milestone is reached throughout the remaining development cycle.

Configuration Tests

These tests are run to test the product across various system configuration combinations. Examples of configurations: cross-platform (e.g. Windows clients against a UNIX server); client/server network configurations; operating system and database combinations (also including version combinations); web servers and web browsers (for web products). The system configurations to test are determined from the product's compatibility matrix. This test is sometimes called a 'Platform test'.

Depth Test

The depth tests are designed to test all of the product's functionality in depth.

Error Test

The error test is designed to test the dialogs, alerts, and other feedback provided to the user when an error situation occurs. The difference between this test and a Negative Test is that an Error Test is simply verifying that the correct dialogs are seen. The Negative Test is primarily looking at the robustness and recovery facets.

Event-Driven

Testing event-driven processes, such as unpredictable sequences of interrupts from input devices or sensors, or sequences of mouse clicks in a GUI environment.

Final Installation Test

Verification that the final media, prior to hand off to Operations for duplication, contains the correct code which was previously tested and is installable on all the supported platforms and databases. The product demo is executed and product Release Notes verified.

Functionality Test

This is designed to test the full functionality, features, and user interfaces of software based upon the functional specifications.

Full Test

A full test is Build Acceptance + Sanity + Confidence + Depth. This is designed to test the full functionality, features, and user interfaces of software based upon the functional specifications.

Graphical User Interface (GUI)

Testing the front-end user interfaces to applications which use GUI support systems and standards such as MS Windows or Motif.

GUI Roadmaps

A step-by-step walkthrough of a tool or application, exercising each screen or window's menus, toolbars, and dialog boxes to verify the execution and correctness of the graphical user interface. Typically, this is handled by automated scripts and is rarely done as a manual test due to the low number of bugs found this way.

Module testing

To test a large program, it is necessary to use module testing. Module testing (or unit testing) is the process of testing individual subprograms (small blocks) rather than testing the program as a whole. Module testing eases the task of debugging: when an error is found, it is known in which particular module it lies.
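For example, a single module such as a hypothetical word_count function can be tested in isolation with Python's standard unittest framework, long before the rest of the program exists (a minimal sketch):

    import unittest

    def word_count(text: str) -> int:
        """Module under test: counts whitespace-separated words."""
        return len(text.split())

    class WordCountTest(unittest.TestCase):
        def test_empty_string(self):
            self.assertEqual(word_count(""), 0)

        def test_multiple_words(self):
            self.assertEqual(word_count("testing one two three"), 4)

    if __name__ == "__main__":
        unittest.main()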

Multi-user Test

Test with the maximum number of concurrent users specified in the design, to simulate the real environment in which users will use the product.

Negative Test

Tests that deliberately introduce an error to check an application's behavior and robustness. For example, erroneous data may be entered, or attempts made to force the application to perform an operation that it should not be able to complete. Generally a message box is generated to inform the user of the problem. If the program terminates, it should exit gracefully.
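A minimal sketch of such a test, assuming a hypothetical parse_age function that is specified to raise ValueError on erroneous input rather than crash (shown here with pytest):

    import pytest

    def parse_age(value: str) -> int:
        """Hypothetical function under test: it should reject bad input cleanly."""
        age = int(value)                  # raises ValueError for non-numeric input
        if not 0 <= age <= 150:
            raise ValueError(f"age out of range: {age}")
        return age

    def test_rejects_non_numeric_input():
        with pytest.raises(ValueError):
            parse_age("abc")

    def test_rejects_out_of_range_value():
        with pytest.raises(ValueError):
            parse_age("-5")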


Object-Oriented

Testing systems designed or coded using an object-oriented approach or development environment, such as C++ or Smalltalk.

Parallel Testing

Testing by processing the same (or at least closely comparable) test workload against both the existing and new versions of a product, then comparing results.

Performance

Measurement and prediction of performance (e.g. response time and/or throughput) for a given benchmark workload.

Phased Approach

A testing strategy where test cases are developed in stages so that a minimally acceptable level of testing can be completed at any time. As new features are coded and frozen, they receive priorities for a given amount of time, so that a concentrated effort is directed toward testing those new features before the effort returns to validating the preexisting functionality. When no new features are available, preexisting features are targeted, with priorities set by Project Leads.
1st level - Build Acceptance Test
2nd level - Sanity Test
3rd level - Confidence Test
4th level - Depth Test
5th level - Error, Negative, and other Tests
6th level - System level tests

Regression Tests

These tests are used for comprehensive re-testing of the software to validate that all functionality and features of previous builds (or releases) have maintained their integrity. This suite of tests includes the Full Functionality Tests and bug regression tests (automated and manual).

Sanity Test

Sanity tests are subsets of the confidence test and are used only to validate high-level functionality.

Security Testing

A test of how easy it is to break the program's security system.

Stress Test

These tests are used to validate software functionality at its limits (e.g. maximum throughput), and then to test at and beyond those limits.

System Level Test

These tests check factors such as cross-tool testing, memory management, and other operating system factors.

Top-Down Strategy

Start testing with the top of the program.

Volume Testing

The process of feeding a program a heavy volume of data.

Usability

The effectiveness, efficiency, and satisfaction with which specified users can achieve specified goals in a particular environment. Synonymous with "ease of use".

Test Specifications

The test case specifications should be developed from the test plan and are the second phase of the test development life cycle. The test specification should explain "how" to implement the test cases described in the test plan.

Test Specification Items

Each test specification should contain the following items:

Case No.: The test case number, a three-digit identifier of the form c.s.t, where c is the chapter number, s is the section number, and t is the test case number.

Title: The title of the test.

ProgName: The program name containing the test.

Author: The person who wrote the test specification.

Date: The date of the last revision to the test case.

Background: (Objectives, Assumptions, References, Success Criteria): Describes in words how to conduct the test.

Expected Error(s): Describes any errors expected

Reference(s): Lists reference documentation used to design the specification.

Data: (Tx Data, Predicted Rx Data): Describes the data flows between the Implementation Under Test (IUT) and the test engine.

Script: (Pseudo Code for Coding Tests): Pseudo code (or real code) used to conduct the test.

Example Test Specification

Test Specification

Case No.: 7.6.3
Title: Invalid Sequence Number (TC)
ProgName: UTEP221
Author: B.C.G.
Date: 07/06/2000
Background: (Objectives, Assumptions, References, Success Criteria)

Validate that the IUT will reject a normal flow PIU with a transmission header that has an invalid sequence number.

Expected Sense Code: $2001, Sequence Number Error

Reference - SNA Format and Protocols Appendix G/p. 380

Data: (Tx Data, Predicted Rx Data)

IUT
<-------- DATA FIS, OIC, DR1 SNF=20
<-------- DATA LIS, SNF=20
--------> -RSP $2001

Script: (Pseudo Code for Coding Tests)

SEND_PIU FIS, OIC, DR1, DRI SNF=20
SEND_PIU LIS, SNF=20
R_RSP $2001

Test Case Design

Test Case ID: A unique number given to the test case so it can be identified.

Test Description: A description of the test case you are going to test.

Revision History: Each test case has to have a revision history in order to know when and by whom it was created or modified.

Function to be Tested: The name of the function to be tested.

Environment: Tells in which environment you are testing.

Test Setup: Anything you need to set up outside of your application, for example printers, network, and so on.

Test Execution: A detailed description of every step of execution.

Expected Results: A description of what you expect the function to do.

Actual Results: Pass / fail. If pass, what actually happened when you ran the test. If fail, a description of what you observed.

Characteristics of a Good Test: A good test is likely to catch bugs, is not redundant, and is neither too simple nor too complex.

A test case becomes complex if it has more than one expected result.
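To make the expected-versus-actual distinction concrete, here is a minimal sketch of a table-driven set of test cases in Python; the function and the test data are hypothetical.

    def is_leap_year(year: int) -> bool:
        """Hypothetical function under test."""
        return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

    # Each row is one test case: ID, description, input, and a single expected result.
    TEST_CASES = [
        ("TC-001", "ordinary leap year",       2024, True),
        ("TC-002", "century, not a leap year", 1900, False),
        ("TC-003", "400-year exception",       2000, True),
    ]

    for case_id, description, year, expected in TEST_CASES:
        actual = is_leap_year(year)
        status = "pass" if actual == expected else "FAIL"
        print(f"{case_id} {description}: expected={expected}, actual={actual} -> {status}")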

Qualities of a Good Tester

What makes a good software tester? Many myths abound, such as being creatively sadistic or able to handle dull, repetitive work. As a one-time test manager and currently as a consultant to software development and testing organizations, I've formed a picture of the ideal software tester-they share many of the qualities we look for in programmers; but there are also some important differences. Here's a quick summary of the sometimes contradictory lessons that I've learned.

1. Know Programming. Might as well start out with the most controversial one. There's a popular myth that testing can be staffed with people who have little or no programming knowledge. It doesn't work, even though it is an unfortunately common approach. There are two main reasons why it doesn't work.

(1) They're testing software. Without knowing programming, they can't have any real insights into the kinds of bugs that come into software and the likeliest place to find them. There's never enough time to test "completely", so all software testing is a compromise between available resources and thoroughness. The tester must optimize scarce resources and that means focusing on where the bugs are likely to be. If you don't know programming, you're unlikely to have useful intuition about where to look.

(2) All but the simplest (and therefore, ineffectual) testing methods are tool- and technology-intensive. The tools, both as testing products and as mental disciplines, all presume programming knowledge. Without programmer training, most test techniques (and the tools based on those techniques) are unavailable. The tester who doesn't know programming will always be restricted to the use of ad-hoc techniques and the most simplistic tools.

Does this mean that testers must have formal programmer training, or have worked as programmers? Formal training and experience is usually the easiest way to meet the "know programming" requirement, but it is not absolutely essential. I met a superb tester whose only training was as a telephone operator. She was testing a telephony application and doing a great job. But, despite the lack of formal training, she had a deep, valid intuition about programming and had even tried a little of it herself. Sure she's good-good, hell! She was great. How much better would she have been and how much earlier would she have achieved her expertise if she had had the benefits of formal training and working experience? She would have been a lot better a lot earlier.

I like to see formal training in programming such as a university degree in Computer Science or Software Engineering, followed by two to three years of working as a programmer in an industrial setting. A stint on the customer-service hot line is also good training.

I don't like the idea of taking entry-level programmers and putting them into a test organization because:

(1) Loser Image.

Few universities offer undergraduate training in testing beyond "Be sure to test thoroughly." Entry-level people expect to get a job as a programmer, and if they're offered a job in a test group, they'll often look upon it as a failure on their part: they believe that they didn't have what it takes to be a programmer in that organization. This unfortunate perception exists even in organizations that value testers highly.

(2) Credibility With Programmers.

Independent testers often have to deal with programmers far more senior than themselves. Unless they've been through a coop program as an undergraduate, all their programming experience is with academic toys: the novice often has no real idea of what programming in a professional, cooperative, programming environment is all about. As such, they have no credibility with their programming counterpart who can sluff off their concerns with "Look, kid. You just don't understand how programming is done here, or anywhere else, for that matter." It is setting up the novice tester for failure.

(3) Just Plain Know-How.

The programmer's right. The kid doesn't know how programming is really done. If the novice is a "real" programmer (as contrasted to a "mere tester") then the senior programmer will often take the time to mentor the junior and set her straight: but for a non-productive "leech" from the test group? Never! It's easiest for the novice tester to learn all that nitty-gritty stuff (such as doing a build, configuration control, procedures, process, etc.) while working as a programmer than to have to learn it, without actually doing it, as an entry-level tester.

2. Know the Application.

That's the other side of the knowledge coin. The ideal tester has deep insights into how the users will exploit the program's features and the kinds of cockpit errors that users are likely to make. In some cases, it is virtually impossible, or at least impractical, for a tester to know both the application and programming. For example, to test an income tax package properly, you must know tax laws and accounting practices. Testing a blood analyzer requires knowledge of blood chemistry; testing an aircraft's flight control system requires control theory and systems engineering, and being a pilot doesn't hurt; testing a geological application demands geology. If the application has a depth of knowledge in it, then it is easier to train the application specialist into programming than to train the programmer into the application. Here again, paralleling the programmer's qualification, I'd like to see a university degree in the relevant discipline followed by a few years of working practice before coming into the test group.

3. Intelligence.

Back in the 60's, there were many studies done to try to predict the ideal qualities for programmers. There was a shortage and we were dipping into other fields for trainees. The most infamous of these was IBM's Programmer's Aptitude Test (PAT). Strangely enough, despite the fact that IBM later repudiated this test, it continues to be (ab)used as a benchmark for predicting programmer aptitude. What IBM learned with follow-on research is that the single most important quality for programmers is raw intelligence-good programmers are really smart people-and so are good testers.

4. Hyper-Sensitivity to Little Things.

Good testers notice little things that others (including programmers) miss or ignore. Testers see symptoms, not bugs. We know that a given bug can have many different symptoms, ranging from innocuous to catastrophic. We know that the symptoms of a bug are arbitrarily related in severity to the cause. Consequently, there is no such thing as a minor symptom-because a symptom isn't a bug. It is only after the symptom is fully explained (i.e., fully debugged) that you have the right to say if the bug that caused that symptom is minor or major. Therefore, anything at all out of the ordinary is worth pursuing. The screen flickered this time, but not last time-a bug. The keyboard is a little sticky-another bug. The account balance is off by 0.01 cents-great bug. Good testers notice such little things and use them as an entree to finding a closely-related set of inputs that will cause a catastrophic failure and therefore get the programmers' attention. Luckily, this attribute can be learned through training.

5. Tolerance for Chaos.

People react to chaos and uncertainty in different ways. Some cave in and give up while others try to create order out of chaos. If the tester waits for all issues to be fully resolved before starting test design or testing, she won't get started until after the software has been shipped. Testers have to be flexible and be able to drop things when blocked and move on to another thing that's not blocked. Testers always have many (unfinished) irons in the fire. In this respect, good testers differ from programmers. A compulsive need to achieve closure is not a bad attribute in a programmer-certainly serves them well in debugging-in testing, it means nothing gets finished. The testers' world is inherently more chaotic than the programmers'.

A good indicator of the kind of skill I'm looking for here is the ability to do crossword puzzles in ink. This skill, research has shown, also correlates well with programmer and tester aptitude. This skill is very similar to the kind of unresolved chaos with which the tester must daily deal. Here's the theory behind the notion. If you do a crossword puzzle in ink, you can't put down a word, or even part of a word, until you have confirmed it by a compatible cross-word. So you keep a dozen tentative entries unmarked, and when by some process or another you realize that there is a compatible cross-word, you enter them both. You keep score by how many corrections you have to make-not by merely finishing the puzzle, because that's a given. I've done many informal polls of this aptitude at my seminars and found a much higher percentage of crossword-puzzles-in-ink aficionados than you'd get in a normal population.

6. People Skills.

Here's another area in which testers and programmers can differ. You can be an effective programmer even if you are hostile and anti-social; that won't work for a tester. Testers can take a lot of abuse from outraged programmers. A sense of humor and a thick skin will help the tester survive. Testers may have to be diplomatic when confronting a senior programmer with a fundamental goof. Diplomacy, tact, a ready smile-all work to the independent tester's advantage. This may explain one of the (good) reasons that there are so many women in testing. Women are generally acknowledged to have more highly developed people skills than comparable men-whether it is something innate on the X chromosome as some people contend, or whether it is that without superior people skills women are unlikely to make it through engineering school and into an engineering career, I don't know and won't attempt to say. But the fact is there, and those sharply-honed people skills are important.

7. Tenacity.

An ability to reach compromises and consensus can be at the expense of tenacity. That's the other side of the people skills. Being socially smart and diplomatic doesn't mean being indecisive or a limp rag that anyone can walk all over. The best testers are both: socially adept and tenacious where it matters. The best testers are so skillful at it that the programmer never realizes that they've been had. Tenacious-my picture is that of an angry pitbull fastened on a burglar's rear-end. Good testers don't give up. You can't intimidate them-even by pulling rank. They'll need high-level backing, of course, if they're to get you the quality your product and market demands.

8. Organized.

I can't imagine a scatter-brained tester. There's just too much to keep track of to trust to memory. Good testers use files, databases, and all the other accouterments of an organized mind. They make up checklists to keep themselves on track. They recognize that they too can make mistakes, so they double-check their findings. They have the facts and figures to support their position. When they claim that there's a bug-believe it, because if the developers don't, the tester will flood them with well-organized, overwhelming evidence.

A consequence of a well-organized mind is a facility for good written and oral communications. As a writer and editor, I've learned that the inability to express oneself clearly in writing is often symptomatic of a disorganized mind. I don't mean that we expect everyone to write deathless prose like a Hemingway or Melville. Good technical writing is well-organized, clear, and straightforward: and it doesn't depend on a 500,000 word vocabulary. True, there are some unfortunate individuals who express themselves superbly in writing but fall apart in an oral presentation- but they are typically a pathological exception. Usually, a well-organized mind results in clear (even if not inspired) writing and clear writing can usually be transformed through training into good oral presentation skills.

9. Skeptical.

That doesn't mean hostile, though. I mean skepticism in the sense that nothing is taken for granted and that all is fit to be questioned. Only tangible evidence in documents, specifications, code, and test results matter. While they may patiently listen to the reassuring, comfortable words from the programmers ("Trust me. I know where the bugs are.")-and do it with a smile-they ignore all such in-substantive assurances.

10. Self-Sufficient and Tough.

If they need love, they don't expect to get it on the job. They can't be looking for the interaction between them and programmers as a source of ego-gratification and/or nurturing. Their ego is gratified by finding bugs, with few misgivings about the pain (in the programmers) that such finding might engender. In this respect, they must practice very tough love.

11. Cunning.

Or as Gruenberger put it, "low cunning." "Street wise" is another good descriptor, as are insidious, devious, diabolical, fiendish, contriving, treacherous, wily, canny, and underhanded. Systematic test techniques such as syntax testing and automatic test generators have reduced the need for such cunning, but the need is still with us and undoubtedly always will be because it will never be possible to systematize all aspects of testing. There will always be room for that offbeat kind of thinking that will lead to a test case that exposes a really bad bug. But this can be taken to extremes and is certainly not a substitute for the use of systematic test techniques. The cunning comes into play after all the automatically generated "sadistic" tests have been executed.

12. Technology Hungry.

They hate dull, repetitive work-they'll do it for a while if they have to, but not for long. The silliest thing for a human to do, in their mind, is to pound on a keyboard when they're surrounded by computers. They have a clear notion of how error-prone manual testing is, and in order to improve the quality of their own work, they'll find ways to eliminate all such error-prone procedures. I've seen excellent testers re-invent the capture/playback tool many times. I've seen dozens of home-brew test data generators. I've seen excellent test design automation done with nothing more than a word processor, or earlier, with a copy machine and lots of bottles of white-out. I've yet to meet a tester who wasn't hungry for applicable technology. When asked why didn't they automate such and such, the answer was never "I like to do it by hand." It was always one of the following: (1) "I didn't know that it could be automated", (2) "I didn't know that such tools existed", or worst of all, (3) "Management wouldn't give me the time to learn how to use the tool."

13. Honest.

Testers are fundamentally honest and incorruptible. They'll compromise if they have to, but they'll righteously agonize over it. This fundamental honesty extends to a brutally realistic understanding of their own limitations as a human being. They accept the idea that they are no better and no worse, and therefore no less error-prone, than their programming counterparts. So they apply the same kind of self-assessment procedures that good programmers will. They'll do test inspections just like programmers do code inspections. The greatest possible crime in a tester's eyes is to fake test results.

How to Turn Your Software Testing Team into a High-Performance Organization

Introduction

Testing is often looked upon by many as an unmanageable, unpredictable, unorganized practice with little structure. It is common to hear questions or complaints from development such as:

- What is testing doing?
- Testing takes too long.
- Testers have negative attitudes.

Testers know that these complaints and questions are often unfair and untrue. Setting aside the development/testing debate, there can always be room for improvement. The first step in improving strategy and turning a test team into a higher performance test team is getting a grasp on where you are now! You want to know:

- What testing is effective?
- Are we testing the right things at the right time?
- Do we need a staffing upgrade?
- What training does our team need?
- How does the product team value the test effort?

In this article we provide a framework for assessing your team, including: how to plan for an assessment, how to execute the assessment and judge your current performance, what to do with the information, and how to chart an improvement plan toward higher performance.

The Test Process Assessment

The goal of doing a test process assessment is to get a clear picture of what is going on in testing: the good things, the problems, and possible paths to improvement. Fundamentally, a test assessment is a data gathering process. To make effective decisions we need data about the current test process. If done properly, the assessment will probably cross many organizational and management boundaries.

It is important to note when embarking upon such an assessment that this effort is much larger than the test team alone. Issues will arise over who owns quality, as well as over what the goal of testing is. It is also important to note that a possible result of the assessment is that work may actually increase. There may be:

- More demands for documentation
- More metrics
- More responsibility for communication and visibility into testing

For such an assessment process to succeed, it requires:

- Executive sponsorship
- A measurement program
- Tools to support change
- An acceptance of some level of risk
- Avoidance of blaming testing for project-wide failures
- Commitment about the goal of testing
- An understanding of testing or quality assurance across the product team
- Responsibility for quality

Components of a Test Strategy - SP3

A test strategy has three components that need to work together to produce an effective test effort. We have developed a model called SP3, based on a framework developed by Mitchell Levy of the Value Framework Institute. The strategy (S) components consist of:

- People (P1) - everyone on your team
- Process (P2) - the software development and test process
- Practice (P3) - the methods and tools your team employs to accomplish the testing task

Phase 1 Pre-Assessment Planning: The goals for this phase are to set expectations, plan the project, set a timeline, and obtain executive sponsorship. The actions that occur in phase 1 include meeting with the management of various groups, laying out expectations for the results of the process, describing the plan, and establishing a timeline. The intended result is to obtain agreement on expectations and buy-in on the assessment process and follow-up commitment for improvement. The phase 1 deliverable is a schedule and a project plan.

In phase 1 it is important to:

- Get executive buy-in
- Make a schedule and stick to it
- Give a presentation of what you are doing, why, and what you hope to get out of it
- Make a statement of goals or outline of work as a commitment
- Make a scope document a pre-approval/budget deliverable

It is important to note up front that assessment is only the beginning of the process.

Phase 2-Information Gathering: The goal of phase 2 is to develop interview questions and surveys which become the backbone of your findings. Actions in phase 2 include gathering documentation, developing interview questions, and developing a test team survey. The result of this phase is that you will be ready to begin your assessment using the documentation, interview questions, and test team survey. The deliverables include complete development process documentation, interview questions, and the tester survey.

Examples of the documentation to be collected include: SDLC documentation, engineering requirements documentation, testing documents (test plan templates and examples, test case templates and examples, status reports, and test summary reports). Interview questions need to cover a wide range of issues, including (but not limited to): the development process, test process, requirements, change control, automation, tool use, developer unit testing, opinions about the test team from other groups, expectation of the test effort, political problems, communication issues, and more.

Phase 3-Assessment: The goal of phase 3 is to conduct the interviews and develop preliminary findings. Actions include gathering and reviewing documentation, conducting interviews, sending out and collecting the surveys. As a result of this phase there will be a significant amount of material and information to review.

Phase 4-Post-Assessment: The goal of phase 4 is to synthesize all of the information into a list of findings. Actions include reviewing, collating, thinking, forming opinions, and making postulations. The result of this phase is that you will develop a list of findings from all of the gathered information, reviewed documentation, interviews, and the survey. The phase 4 deliverable is a list of findings, collated survey answers, collated interview responses, a staff assessment, and a test group maturity ranking.

The findings can be categorized into:

- People: technical skills, interpersonal skills
- Process: documentation, test process, SDLC
- Practice: strategy, automation, environment, tools


More subcategories may also be developed to suit your needs.

Phase 5-Presentation of findings with project sponsor, executive sponsor and team: The goal of phase 5 is to present preliminary findings to executives and the project sponsor, and to obtain agreement on the highest priority improvement areas. It is important in this phase to be prepared for a very different interpretation of the findings than you perceived. The deliverable for phase 5 is an improvement roadmap.

Phase 6 -Implementation of Roadmap: The goal of phase 6 is to establish goals with timelines and milestones and sub tasks to accomplish the tasks agreed upon for improvement. The action of phase 6 is to develop a schedule for implementation of the improvement plan. It is helpful at this point to get some aspect of the project implemented immediately so people can see tangible results right away-even if they are the smallest or easiest improvement tasks. The deliverable for phase 6 is implementation of items in the roadmap for improvement according to the developed schedule.

Conclusion

A test strategy is a holistic plan that starts with a clear understanding of the core objective of testing, from which we derive a structure for testing by selecting from many testing styles and approaches available to help us meet our objectives. Performing an assessment helps to provide the "clear understanding" and "understanding of the core objective of testing". Implementing the resulting roadmap for improvement can help to substantially improve the performance of your software testing organization and help to solidify your test strategy.

Work Flow for Software Testing Projects

Minimizing Software Defects via Inspections


Many of us have experienced projects that drag on much longer than expected and cost more than planned. Most times, this is caused either from inadequate planning (requirement collection and design) or from an inordinate number of defects found during the testing cycle.

A major ingredient to reducing development life cycle time is to eliminate defects before they happen. By reducing the number of defects that are found during your quality assurance testing cycle, your team can greatly reduce the time it takes to implement your software project.

The key to reducing software defects is to hold regular inspections that find problems before they occur. Below is a list of 5 Tips for Reducing Software Defects:

1. Conduct Requirement Walkthroughs - The best time to stop defects is before coding begins. As the project manager or requirements manager begins collecting the requirements for the software, they should hold meetings with two or more developers to ensure that the requirements are not missing information or are not flawed from a technical perspective. These meetings can bring to surface easier ways to accomplish the requirement and can save countless hours in development if done properly. As a rule of thumb, the requirements should be fully reviewed by the developers before the requirements are signed off.

2. Conduct Peer Code Reviews - Once coding begins, each programmer should be encouraged to conduct weekly code reviews with their peers. The meeting is relatively informal, where the programmer distributes source code listings to a couple of his/her peers. The peers should inspect the code for logic errors, reusability and conformance to requirements. This process should take no more than an hour and if done properly, will prevent many defects that could arise later in testing.

3. Conduct Formal Code Reviews - Every few weeks (or before a minor release), the chief architect or technical team leader should do a formal inspection of their team's code. This review is a little more formal, where the leader reviews the source code listings for logic errors, reusability, adherence to requirements, integration with other areas of the system, and documentation. Using a checklist will ensure that all areas of the code are inspected. This process should take no more than a couple of hours for each programmer and should provide specific feedback and ideas for making the code work per the design.

4. Document the Results - As inspections are held, someone (referred to as a scribe) should attend the meetings and make detailed notes about each item that is found. Once the meeting is over, the scribe will send the notes to each team member, ensuring that all items are addressed. The scribe can be one of the other programmers, an administrative assistant, or anyone on the team. The defects found should be logged using your defect tracking system and should note what phase of the life cycle the defect was found.

5. Collect Metrics - Collect statistics that show how many defects (along with severity and priority) are found in the different stages of the life cycle. The statistics will normally show over time that when more defects are resolved earlier in the life cycle, the length of the project decreases and the quality increases.
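A very small sketch of such a metric, counting logged defects per life-cycle phase; the phase names and counts below are invented for illustration.

    from collections import Counter

    # Hypothetical defect log: each entry records the phase in which the defect was found.
    defects = [
        {"id": 1, "severity": "High",   "phase": "requirements walkthrough"},
        {"id": 2, "severity": "Medium", "phase": "peer code review"},
        {"id": 3, "severity": "High",   "phase": "peer code review"},
        {"id": 4, "severity": "Low",    "phase": "QA testing"},
    ]

    by_phase = Counter(d["phase"] for d in defects)
    for phase, count in by_phase.items():
        share = 100 * count / len(defects)
        print(f"{phase}: {count} defect(s) ({share:.0f}% of total)")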

Best Practices for Software Projects - Software Measurements

Most software projects fail to deliver on-time and on-budget. To reduce the risk of failure, project managers should implement measurements, allowing them to more accurately estimate projects and to enhance the quality of releases.

The key to efficient measurement is to first determine what goals you are trying to accomplish and what problems you are attacking. Many organizations waste time and money by measuring more things than are necessary. Before beginning a measurement strategy, determine the goals for your measurement. Here are some common reasons for not delivering on-time and on-budget:

- Software coding efforts are not properly estimated
- Testing efforts are not properly estimated
- Software quality is poor, therefore the testing duration is longer than need be
- Customer changes impact the software project, thereby extending the project dates

Attacking the Common Problems

Software Coding Efforts are Not Properly Estimated

This problem normally arises due to these issues:

- Customer Requirements - To properly estimate coding effort, you must create solid customer requirements. The requirements must contain adequate detail to allow the programmers to create detailed designs. From a measurement perspective, you should track the amount of time it takes to develop each customer requirement. Track both estimated and actual hours so that you can use this information to improve future estimates.

- Detailed Designs - It is impossible to estimate coding effort without creating a detailed design. The detailed design allows the developer to think through all the tasks that must be done to deliver each requirement. From a measurement perspective, you should track the amount of time it takes to develop each detailed design. Track both estimated and actual hours so that you can use this information to improve future estimates.

Testing Efforts are Not Properly Estimated

This problem normally arises due to these issues:

- Test Plans - Once the customer requirements and detailed designs are created and estimated, the test leader should create a detailed test plan that estimates the testing effort. This is done by thinking through the test cases that will be created based on the requirement and design. From a measurement perspective, you should track the amount of time it takes to develop each test plan. Track both estimated and actual hours so that you can use this information to improve future estimates; a minimal sketch of comparing estimated and actual hours follows below.
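As a simple illustration of the estimated-versus-actual tracking recommended for requirements, designs, and test plans, here is a minimal sketch in Python. The work items and hours are hypothetical examples.

# Hypothetical tracked hours: (work item, estimated hours, actual hours)
tracked = [
    ("Requirement: login page",      8, 11),
    ("Detailed design: login page",  6,  5),
    ("Test plan: login page",        4,  6),
]

total_est = sum(est for _, est, _ in tracked)
total_act = sum(act for _, _, act in tracked)

for name, est, act in tracked:
    print(f"{name}: estimated {est}h, actual {act}h ({act / est:.0%} of estimate)")

# A ratio well above 100% suggests raising future estimates for similar work.
print(f"Overall: actual/estimated = {total_act / total_est:.0%}")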

Software Quality is Poor

This problem normally arises due to these issues:

- No Code Reviews - If regular code reviews are not done, there is a much higher chance of delivering software with poor quality. For large projects, these problems are compounded over time, so it is best to do code reviews early and often (at least weekly). From a measurement perspective, you should track the amount of rework time required due to failed code reviews. This can aid you in planning for rework on future projects.

- Failed Smoke Tests - By running weekly smoke tests, you can shorten the testing phase because issues are caught early in the coding and testing cycle. From a measurement perspective, track the number of test cases passed and failed during smoke tests, week by week. The goal is to reduce the number of failed test cases as the project progresses.

- Defect Tracking - As testing commences, track the number of open defects vs. total defects to help predict project release dates. Track the number of defects found during code reviews vs. test case execution; this will help track and improve estimation accuracy. Track the percentage of total defects found before product release, as this will help assess product quality.

Customer Changes Impact the Software Project

This problem normally arises due to these issues:

- Missing Change Control Processes - As a project progresses, clients sometimes ask for features to be changed, added, or removed. Before making any changes to the project, each request should be thoroughly investigated and a risk assessment should be done for each request. If changes are necessary and agreed upon by the client, project timelines are adjusted. From a measurement perspective, track the number of change requests, how many were approved vs. rejected, and the effort required to review and assess each change request. This information can be used in future projects to predict the number of change requests that will be approved, so you can build time into your projects to mitigate that risk.

Top 10 Negative Test Cases

Negative test cases are designed to test the software in ways it was not intended to be used, and should be a part of your testing effort. Below are the top 10 negative test cases you should consider when designing your test effort.

1. Embedded Single Quote - Most SQL-based database systems have issues when users store information that contains a single quote (e.g. John's car). For each screen that accepts alphanumeric data entry, try entering text that contains one or more single quotes (see the sketch after this list).

2. Required Data Entry - Your functional specification should clearly indicate fields that require data entry on screens. Test each field on the screen that has been indicated as being required to ensure it forces you to enter data in the field.


3. Field Type Test - Your functional specification should clearly indicate fields that have specific data entry requirements (date fields, numeric fields, phone numbers, zip codes, etc.). Test each field on the screen that has been indicated as having a special type to ensure it forces you to enter data in the correct format based on the field type (numeric fields should not allow alphabetic or special characters, date fields should require a valid date, etc.).

4. Field Size Test - Your functional specification should clearly indicate the number of characters you can enter into a field (for example, the first name must be 50 characters or fewer). Write test cases to ensure that you can only enter the specified number of characters. Preventing the user from entering more characters than allowed is more elegant than giving an error message after they have already entered too many.

5. Numeric Bounds Test - For numeric fields, it is important to test the lower and upper bounds. For example, if you are calculating interest charged to an account, you would never apply a negative interest amount to an account that earns interest; therefore, you should try testing it with a negative number. Likewise, if your functional specification requires that a field be in a specific range (e.g. from 10 to 50), you should try entering 9 or 51; it should fail with a graceful message (see the boundary-value sketch after this list).

6. Numeric Limits Test - Most database systems and programming languages allow numeric items to be declared as integers or long integers. Typically, a 16-bit integer has a range of -32,768 to 32,767 and a 32-bit long integer ranges from -2,147,483,648 to 2,147,483,647. For numeric data entry that does not have specified bounds, work with these limits to ensure that the application does not produce a numeric overflow error.

7. Date Bounds Test - For date fields, it is important to test for lower and upper bounds. For example, if you are checking a birth date field, it is probably a good bet that the person's birth date is no older than 150 years ago. Likewise, their birth date should not be a date in the future.

8. Date Validity - For date fields, it is important to ensure that invalid dates are not allowed (04/31/2007 is an invalid date). Your test cases should also check for leap years (a year is a leap year if it is divisible by 4, except for century years, which are leap years only if divisible by 400); the sketch after this list includes a date-validity check.

9. Web Session Testing - Many web applications rely on the browser session to keep track of the person logged in, settings for the application, etc. Most screens in a web application are not designed to be launched without first logging in. Create test cases to launch web pages within the application without first logging in. The web application should ensure it has a valid logged in session before rendering pages within the application.

10. Performance Changes - As you release new versions of your product, you should have a set of performance tests that you run that identify the speed of your screens (screens that list information, screens that add/update/delete data, etc). Your test suite should include test cases that compare the prior release performance statistics to the current release. This can aid in identifying potential performance problems that will be manifested with code changes to the current release.
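As referenced in tips 1, 5, 6, and 8 above, here is a minimal sketch in Python of how such negative inputs might be generated and checked. The field range and the validation helper are hypothetical stand-ins for whatever your application actually provides.

from datetime import datetime

# Tip 1: sample inputs containing embedded single quotes.
single_quote_inputs = ["John's car", "O'Brien", "it''s"]

# Tips 5 and 6: boundary values for a field specified as 10..50,
# plus typical 16-bit and 32-bit integer limits.
def boundary_values(low, high):
    """Values just outside, on, and just inside the allowed range."""
    return [low - 1, low, low + 1, high - 1, high, high + 1]

range_cases = boundary_values(10, 50)              # expect 9 and 51 to be rejected
limit_cases = [-32768, 32767, -2147483648, 2147483647]

# Tip 8: date validity, including the leap-year rule
# (divisible by 4, except century years unless divisible by 400).
def is_valid_date(text):
    try:
        datetime.strptime(text, "%m/%d/%Y")
        return True
    except ValueError:
        return False

assert not is_valid_date("04/31/2007")   # April has only 30 days
assert not is_valid_date("02/29/1900")   # 1900 is not a leap year
assert is_valid_date("02/29/2000")       # 2000 is a leap year

print(single_quote_inputs, range_cases, limit_cases)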

Testing GUI

General Guidelines for GUI Testing of any Windows based Application.

To test any Windows-based application, the following points are to be considered:

- It is essential to have GUI consistency within the application.
- It should be alike in look and feel to any other standard Windows software.
- It should have a standard set of keys implemented for the software.
- It should have a clean and neat exit.

While testing any Windows-based application, the testing can be broadly categorized into the following compartments:
- Standardization Testing
- GUI Testing
- Validation Testing
- Functionality Testing

Standardization Testing

This compartment focuses on the standardization of the application. Standardization means that the application being developed should have a standard look and feel, like any other Windows application. The general guidelines are as follows:
1. The application should first display an "About Application" screen.
2. Most screens/dialog boxes (as appropriate to the context) should have Minimize, Restore and Close buttons.
3. A proper icon should be attributed to the application.
4. All screens/dialog boxes should have a proper caption as per the context used.
5. The application should appear in the Windows Task Bar as well as the status bar.

GUI Testing

This compartment focuses on the GUI (Graphical User Interface) aspect of the application. GUI guidelines are not set in stone and cannot be followed blindly; GUI standards may vary from company to company and from application to application. Still, one can set general guidelines to get an overall idea of how to start GUI testing. These guidelines apply to every screen/dialog box of the application:
1. All dialog boxes should have a consistent look throughout the application. For example, if the heading within one dialog box is blue, then the heading of every dialog box should be that color.
2. Every field on the screen should have an associated label.
3. Every screen should have equivalent OK and Cancel buttons.
4. The color combination used should be appealing.
5. Every field in the dialog box should have shortcut-key support (e.g. User Name).
6. Tab order should normally be set horizontally across the fields; in some cases it can be set vertically.
7. Mandatory fields should be marked with a red asterisk (*) to indicate that they are mandatory.
8. The default key <Enter> should be mapped to OK for the dialog box.
9. The default key <Esc> should be mapped to Cancel for the dialog box.

Validation Testing

This compartment focuses on the validation aspect of the application. Validation testing depends mainly on the fields in the dialog box and the functions it has to perform, but certain common rules can still be applied. General guidelines are:
1. For text box fields where the value entered has to be numeric, check the following:
- It should accept numbers only, not alphabets.
- If the field is used for values such as total number of days, telephone number, or zip code, then it should not accept 0 or negative values.
2. For text box fields where the value entered has to be alphanumeric, check the following:
- It should accept alphabets and numbers only.
- If the field is used for values such as first name, middle name, last name, city, or country, then the field value should start with an alphabet.
- Depending on the conditions, these fields may accept special characters like -, _, . etc.
3. If the field is a combo box, check the following:
- Check that the combo box has drop-down values in it and is not empty.
- Drop-down values should be alphabetically sorted. This might change as per requirements, but as standard practice they should be alphabetically sorted (for example, a data type list would read: Date, Integer, String, Text, etc.).
- The selection of any drop-down value should still be displayed after closing and reopening the same dialog box.
- By default, a value like "Select Value" or "_______" should be displayed so the user knows a value must be selected for this field. Avoid displaying the first value in the list by default.
4. If the field is a list box, check the following:
- Check that the list box has values in it and is not empty.
- List box values should be alphabetically sorted and displayed. This might change as per requirements, but as standard practice they should be alphabetically sorted.


- The selection of any list box value should put a check before the value, and the correct value(s) selected should still be displayed after closing and reopening the same dialog box.
- If the list box supports multiple selection, check whether multiple values can be selected.
5. If the field is a list of radio buttons, check the following:
- Check whether all the values required by the specification are listed. For example, to select a date format, the possible values displayed might be: mm/dd/yyyy, dd/mm/yyyy, mm/dd/yy, dd/mm/yy, yy/mm/dd, etc.
- The same selected value should be displayed after closing and reopening the same dialog box.
6. Data controls are to be tested as part of functionality testing.

A minimal sketch of automating a few of these field validations appears below.
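The following is a minimal sketch in Python of the numeric and alphanumeric field rules described above. The validation functions are hypothetical illustrations, not part of any particular GUI framework.

import re

def valid_numeric_field(value):
    """Guideline 1: digits only, and not zero or negative (e.g. days, zip code)."""
    return value.isdigit() and int(value) > 0

def valid_name_field(value):
    """Guideline 2: alphanumeric, must start with a letter (e.g. first name, city)."""
    return re.fullmatch(r"[A-Za-z][A-Za-z0-9 .'_-]*", value) is not None

# A few checks a tester might run against these rules.
assert not valid_numeric_field("0")        # zero should be rejected
assert not valid_numeric_field("-5")       # negative values should be rejected
assert valid_numeric_field("30")

assert not valid_name_field("1stStreet")   # must start with an alphabet
assert valid_name_field("New York")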

Functionality Testing

This compartment focuses on the functionality aspect of the application. The first step in testing the functionality of the application is to check whether all the requirements are covered in the software. The actual functionality testing depends entirely on the software, but one can still frame general guidelines:
1. Check that the functionality is covered as per the requirement specifications or functional specifications developed for the software.
2. Within a dialog box, identify the dependent fields and, depending on the dependency, check the enabling and disabling of fields. For example, consider creating contact addresses in an application. The user should be able to add, delete and modify the information. A contact address will contain information like First Name, Last Name, Address1, Address2, City, State, Country, Zip, Phone, etc.; any other information may also be added. This form will have the required fields and, in addition, Add, Delete and Update buttons. The functionality of the buttons is as follows:
- Initially only the Add button is enabled; the Delete and Update buttons are disabled. This is because there is initially no data available, and unless a record is added one cannot delete or update. In short, unless at least one valid record is available, it is not possible to update or delete.
- Only when a record is selected from the list are the Delete and Update buttons enabled and the Add button disabled. By default no record is selected.
- Delete and Update should always give a confirmation message before actually performing the operation.
- The delete operation should not show the deleted item in the list.

A minimal sketch of this enable/disable logic appears below.
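Purely as an illustration of the enable/disable rules above, here is a minimal sketch in Python. The class and method names are hypothetical and not tied to any GUI toolkit.

class ContactForm:
    """Toy model of the Add/Delete/Update button states described above."""

    def __init__(self):
        self.records = []
        self.selected = None  # index of the selected record, if any

    def button_states(self):
        # Initially (or with nothing selected) only Add is enabled;
        # selecting a record enables Delete/Update and disables Add.
        if self.selected is None:
            return {"Add": True, "Delete": False, "Update": False}
        return {"Add": False, "Delete": True, "Update": True}

    def add(self, record):
        self.records.append(record)

    def select(self, index):
        self.selected = index

form = ContactForm()
print(form.button_states())   # {'Add': True, 'Delete': False, 'Update': False}
form.add({"First Name": "John", "City": "Boston"})
form.select(0)
print(form.button_states())   # {'Add': False, 'Delete': True, 'Update': True}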

Best Practices for Software Projects - Risk Management

To deliver software on-time and on-budget, successful project managers understand that software development is complex and that unexpected things will happen during the project life cycle. There are two types of risks that may affect your project during its duration:

· Risks you know about - There are many risks that you know about and can mitigate. For example, let's assume that you have assembled a team to work on the project, and one of the stellar team members has already scheduled a 3-week vacation just before testing is scheduled, which you agreed to allow. The successful project manager will identify this risk and provide contingency plans to control it.

· Risks you don't know about - There are also risks that you don't know about, so a general risk assessment must be done to build time into your schedule for these types of risks. For example, your development server may crash 2 weeks into development and it may take you 3 days to get it up and running again.

The key to managing risks is to build contingency plans for the risks you know about and to build enough time into your project schedule to mitigate the risks you do not know about. Below is a list of the 5 most common scheduling risks in a software development project:

1. Scope and feature creep - Here is an example: let's say the client agrees to a requirement for a Logon page. The requirement specifies that the client will enter their userid/password, it will be validated, and entry will be allowed upon successful validation. Simple enough. Then, in a meeting just before coding commences, the client says to your project manager, "I was working with another system last week and they send the client a report each day that shows how many people log in each day. Since you have that information already anyway, I'm sure it will only take a couple of minutes to automate a report for me that does this."


Although this sounds simple to the client, it requires many different things to happen. First, the project manager has to amend the requirements document. Then the programmer has to understand the new requirement. The testing team must build test scenarios for this. The documentation team must now include this report in the documentation. The user acceptance team must plan to test this. So, as you can see, a simple request can add days of additional project time, increasing risk.

2. Gold Plating - Similar to scope and feature creep, programmers can also incur risk by making a feature more robust than necessary. For example, the specification for the Logon page contained a screen shot that showed very few graphics; it was just a simple logon process. However, the programmer decides that it would be really cool to add a Flash-based movie on the page that fades in the names of all the programmers and a documentary on security. This new movie (while cool in the programmer's eyes) takes 4 hours of additional work, putting their follow-on tasks in jeopardy because they are now behind schedule.

3. Substandard Quality - The opposite of Gold Plating is substandard quality. In the gold plating example, the programmer got behind schedule and desperately needed to catch up. To catch up, the programmer decided to quickly code the next feature and not spend the time testing the feature as they should have. Once the feature went to the testing team, a lot of bugs were found, causing the testing / fix cycle to extend far beyond what was originally expected.

4. Unrealistic Project Schedules - Many new team members fall into this trap. Project members (project managers, developers, testers, etc), all get pressure from customers and management to complete things in a certain time frame, within a certain budget. When the timeframes are unrealistic based on the feature set dictated, some unseasoned team members will bow to the pressure and create their estimates based on what they think their managers want to hear, knowing that the estimates are not feasible. They would rather delay the pain until later, when the schedule spirals out of control.

5. Poor Designs - Many developers and architects rush the design stage in favor of getting the design behind them so the "real" work can begin. A solid design can save hundreds of programming hours: a design that is reusable allows changes to be made quickly and lessens testing. So the design stage should not be rushed.

Calculating Risk

The key to successful risk management is to identify all the risks you know about and to build in time for the risks you do not know about. The mechanics of doing this are simple: list the risks, estimate the loss (in hours) you expect if each risk occurs, and then estimate the probability of the risk happening. You can then factor that risk into your project schedule by multiplying the loss by the probability. For example, let's assume that you have decided to use a new outsourcing company to do some of the coding for your project. Since you have not worked with them before, you surmise that there is a 50% probability that you could incur 40 hours of additional development because of the new resource. You would build 20 additional hours (40 hours * 50% probability) into your project plan to mitigate the risk. A minimal sketch of this calculation appears below.
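The loss-times-probability calculation above can be expressed as a short sketch in Python. The risks and numbers are hypothetical examples, including the outsourcing risk from the text and a server-crash risk with an assumed probability.

# (risk description, potential loss in hours, probability of occurring)
risks = [
    ("New outsourcing company slows development", 40, 0.50),
    ("Development server crash and rebuild",      24, 0.25),
]

# Expected contingency time = loss * probability, summed over all known risks.
contingency_hours = sum(loss * prob for _, loss, prob in risks)

for name, loss, prob in risks:
    print(f"{name}: {loss}h * {prob:.0%} = {loss * prob:.0f}h")

print(f"Total contingency to build into the schedule: {contingency_hours:.0f} hours")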

Documentation Produced by Quality Assurance

This section describes the documentation produced by Quality Assurance: the testing strategy and approach QA will use to validate the quality of the product.

Test Plan

Description: The testing strategy and approach QA will use to validate the quality of the product prior to release.

A test plan provides the following information:
- General description of the project, its objectives, and the QA test schedule
- Resource requirements, including hardware, software and staff responsibilities
- Features to be tested, as well as features not to be tested
- Details of the test approach
- Lists of test deliverables such as test cases and test scripts
- References to related test plans (which focus on specific topics) and project documentation
- Dependencies and/or risks
- Descriptions of how bugs will be tracked
- Milestone criteria
- Lists of required reviewers who must provide approval of the test plan

Test plan components

1. Introduction

Description of this Document

This document is a Test Plan for the -Project name-, produced by Quality Assurance. It describes the testing strategy and approach QA will use to validate the quality of this product prior to release. It also lists the various resources required for the successful completion of this project.

The focus of the -Project name- is to support those new features that will allow easier development, deployment and maintenance of solutions built upon the -Project name-. Those features include:

[List of the features]

This release of the -Project name- will also include legacy bug fixes and the redesign or inclusion of functionality missing from the previous release:

[List of the features]

The following implementations were made:

[List and description of implementations made]

Related Documents

[List of related documents such as: Functional Specifications, Design Specifications]

Schedule and Milestones

[Schedule information QA testing estimates]

2. Resource Requirements

Hardware

[List of hardware requirements]

Software

[List of software requirements: primary and secondary OS]

Test Tools

Apart from manual tests, the following tools will be used:

Staffing

Responsibilities

[List of QA team members and their responsibilities]

Training

[List of trainings required]


3. Features To Be Tested / Test Approach

[List of the features to be tested]

Media Verification

[The process will include installing all possible products from the media and subjecting them to basic sanity testing.]

4. Features Not To Be Tested

[List of the features not to be tested]

5. Test Deliverables

[List of the test cases/matrices or their location]

[List of the features to be automated ]

6. Dependencies/Risks

Dependencies

Risks

7. Milestone Criteria

Test Case Checklist (Matrices)

Description: An outline or matrices of test cases that test a feature, or set of features.

Test Matrices Sample

Question: I need information about metrics, which are used to find faults; it is something related to measurement. I need to know how metrics are used in Quality Assurance.

Answer: You can measure the arrival and departure times of developers, if you have them clock in, but that won't tell you much, since not all work is done in the office (and it doesn't mean that they're working when they're in the office). This is, however, still a metric.

The same holds for a true "quality metric". The most familiar one is defects per thousand lines of (uncommented) code. But this metric assumes that:
1) you count the lines of code
2) the complexity of the code isn't an issue
3) the programmers aren't playing games (like using continuation characters so that what could have been written in one line isn't done in five lines)
4) all defects are uncovered in the code in a single pass
5) each defect discovered is like all the others
6) defects are uncovered in a linear manner between revisions or builds

The fact is that first you need to know what your goal is. Then you need to discover or create a metric that will help you achieve that goal. Then you need to implement it and be prepared to adjust it.


You can't use measurements (metrics) to find faults, at least not in software, so that's not a reasonable goal. You can use metrics to help determine if most of the defects have been discovered already. You can use them to tell you how much longer it will take to uncover a reasonable number of defects. For either of these metrics you will need to know how previous projects of similar size and complexity (using similar languages, etc.) were done in order to get a reasonable comparison.

Posted by Walter Gorlitz
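Purely as an illustration of the defects-per-KLOC metric mentioned in the answer above, here is a minimal sketch in Python; the line counts and defect counts are hypothetical.

# Hypothetical figures for a module: defects found so far and
# its size in (uncommented) lines of code.
defects_found = 42
lines_of_code = 15_000

# Defects per thousand lines of code (KLOC).
defects_per_kloc = defects_found / (lines_of_code / 1000)

print(f"{defects_per_kloc:.1f} defects per KLOC")   # 2.8 defects per KLOC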

Test Case: File Open

#    | Test Description                                                                      | Test Cases/Samples | Pass/Fail | No. of Bugs | Bug # | Comments
N/A  | Setup for [Product Name]                                                              | setup              | -         | -           | -     |
1.1  | Test that file types supported by the program can be opened                          | 1.1                | P/F       | #           | #     |
1.2  | Verify all the different ways to open a file (mouse, keyboard and accelerator keys)  | 1.2                | P/F       | #           | #     |
1.3  | Verify files can be opened from the local drives as well as the network              | 1.3                | P/F       | #           | #     |

Test Case

Description: A specific set of steps and data, along with expected results, for a particular test objective. A test case should only test one limited subset of a feature or functionality. A test case does the following:
- Details the test setup
- Details the test procedure (steps)
- Describes the testing environment
- Specifies the pass/fail criteria
- References the results of the test

Sample Testcase

Question: I joined the company on my first day, and suppose the project asks me to write test cases. On what basis should I write the test cases, and how should I write them?

Answer: I will bullet the points for you:
1. You need to have a thorough understanding of the application the project is working on; as QC, testers need to have a complete understanding of the project.
2. You need to have signed-off business requirements from the BA (business analysts or business development team).
3. Using these as benchmarks, you will start writing test cases.
4. Some prefer writing each business requirement as a separate test case, and some companies don't do it that way.
5. Try to talk to any other QC in the same project or another project to understand the processes and procedures they have for writing test cases, test plans and all the other stuff.


Answer: The process of writing a test case is easy. The only thing is that you need to understand the concept given in your module's FSD (Functional Specification Document). A module is just a part of the project, and the FSD is a description of your module.

Now, coming to the test case writing, take a simple example: the Windows calculator, where you are going to test only one operation, which is adding 2 numbers. Here you need to assess the situation, and your imagination should be on the right track, as you won't get your product or software beforehand.

In the case of testing the calculation 1 + 1 = 2, you assume that your Windows calculator has 4 buttons and a text box / display for giving you the answer. Again, assume that you have the buttons 1, 1, + and =. So if you press 1 the first time, it should be displayed in the text box. Then you press the operator, i.e. +, press the second input, 1, which should appear on the display, and then press =. The desired output you should get is 2. This is what will be given in the FSD. Moreover, they would also give you some fine details about how the process takes place, which is not testable. The stuff in the previous paragraph will be given in your FSD in a refined language.

So first you check: when we press 1, do we see it on the screen? The test case should be:
"Verification of display of 1 on the screen when the input was given or when the button was clicked"
and similarly for the second input:
"Verification of display of 1 on the screen when the input was given or when the button was clicked"
Now comes the result:
"Verification of display of 2 on the screen when the = button was pressed"

In some FSDs you would be told about a warning shown when you give non-numerical values like a or b instead of 1. If it's given, you can straight away write another test case: "Verification of the warning message stating 'invalid input' (or anything given as per the FSD) on the screen when the input given is a character (specifically: a)". If it's not given, you can note it in your testware (excel sheet) as an observation.

When you are writing a test case, please check that you test whatever is given in your FSD or business requirements. A minimal sketch of the calculator example as automated checks appears below.
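Purely as an illustration (the answer above describes manual test cases, so this is a swap to an automated form), the calculator example might look like this as Python unittest checks. The Calculator class is a hypothetical stand-in for the application under test.

import unittest

class Calculator:
    """Hypothetical stand-in for the application under test."""
    def __init__(self):
        self.display = ""

    def press(self, key):
        if key == "=":
            self.display = str(eval(self.display))  # toy evaluation for the sketch
        else:
            self.display += key

class AdditionTests(unittest.TestCase):
    def test_display_shows_first_input(self):
        calc = Calculator()
        calc.press("1")
        self.assertEqual(calc.display, "1")   # "Verification of display of 1 ..."

    def test_one_plus_one_equals_two(self):
        calc = Calculator()
        for key in ["1", "+", "1", "="]:
            calc.press(key)
        self.assertEqual(calc.display, "2")   # "Verification of display of 2 ..."

if __name__ == "__main__":
    unittest.main()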

Answer: No one would expect you to write a usable test case on the first day. You must spend time learning the application, reading the requirements, etc. This will take several weeks. Then you would determine what test cases have already been written so as not to duplicate effort. You should also find out what is expected of you from your manager. To write the test cases, there are usually test case standards imposed by the organization. The main components are the description, the setup conditions, the objectives of the test case, and the actual steps, which should be as simple as possible, describing how to input something and what is expected as output. Then a Pass/Fail statement and maybe a comments area describing what you think caused the failure if it failed.

File Open - Test case

Steps to reproduce:
1. Launch the application
2. Select the "File" menu - the File menu pulls down
3. Choose "Open" - the "Open" dialog box appears
4. Select a file to open
5. Click OK
Result: The file should open

Test case


Test case ID: B 001
Test Description: Verify B - bold formatting to the text
Revision History: 3/23/00 1.0 - Valerie - Created
Function to be tested: B - bold formatting to the text
Environment: Win 98
Test setup: N/A
Test Execution:
1. Open the program
2. Open a new document
3. Type any text
4. Select the text to make bold
5. Click Bold
Expected Result: Applies bold formatting to the text
Actual Result: Pass

Weekly Status Report

Description: Used to inform management about the latest issues that arise during the software development life cycle, on a weekly basis.

Test Document

Description: Type of test file. It is a sample document that contains the feature being tested.

The Release Package

The Release Package is the final document QA prepares. This is the compilation of all previous documents and a release recommendation. Each release package will vary by team and project, but they should all include the following information.


· Project Overview - This is a synopsis of the project, its scope, any problems encountered during the testing cycle, and QA's recommendation to release or not to release. The overview should be a "response" to the test strategy, noting areas where the strategy was successful, areas where the strategy had to be revised, etc.

The project overview is also the place for QA to call out any suggestions for process improvements in the next project cycle.

Think of the Test Strategy and the Project Overview as "Project bookends".

· Project PRAD - This is the Product Requirements Analysis Document, which defines what functionality was approved for inclusion in the project. If there was no PRAD for the project, it should be clearly noted in the Project Overview. The consequences of an absent PRAD should also be noted.

· Functional Specification - The document that defines how functionality will be implemented. If there were no functional specifications, it should be clearly noted in the Project Overview. The consequences of an absent Functional Specification should also be noted.

· Test Strategy - The document outlining QA's process for testing the application.

· Results Summaries - The results summaries identify the results of each round of testing (see section VI - Results by Build). These should be accompanied in the Release Package by the corresponding reports for Test Coverage by Test Type and Test Coverage by Risk Type/Priority from the corresponding completed Test Matrix for each build. In addition, it is recommended that you include the full Test Matrix results from the test cycle designated as Full Regression.

· Known Issues Document - This document is primarily for Technical Support. This document identifies workarounds, issues development is aware of but has chosen not to correct, and potential problem areas for clients.

· Installation Instructions - If your product must be installed at the client site, it is recommended to include the Installation Guide and any related documentation as part of the release package.

· Open Defects - The list of defects remaining in the defect tracking system with a status of Open. Technical Support has access to the system, so a report noting the defect ID, the problem area, and title should be sufficient.

· Deferred Defects - The list of defects remaining in the defect tracking system with a status of deferred. Deferred means the technical product manager has decided not to address the issue with the current release.

· Pending Defects - The list of defects remaining in the defect tracking system with a status of pending. Pending refers to any defect waiting on a decision from a technical product manager before a developer addresses the problem.

· Fixed Defects - The list of defects waiting for verification by QA.

· Closed Defects - The list of defects verified as fixed by QA during the project cycle.

The Release Package is compiled in anticipation of the Readiness Review meeting. It is reviewed by the QA Process Manager during the QA Process Review Meeting and is provided to the Release Board and Technical Support.

· Readiness Review Meeting: The Readiness Review meeting is a team meeting between the technical product manager, the project developers and QA. This is the meeting in which the team assesses the readiness of the product for release. This meeting should occur prior to the delivery of the Gold Candidate build. The exact timing will vary by team and project, but the discussion must be held far enough in advance of the scheduled release date so that there is sufficient time to warn executive management of a potential delay in the release. The technical product manager or the lead QA may schedule this meeting.

· QA Process Review Meeting:


The QA Process Review Meeting is a meeting between the QA Process Manager (Barbara Thornton) and the QA staff on the given project. The intent of this meeting is to review how well the process was followed during the project cycle. This is the opportunity for QA to discuss any problems encountered during the cycle that impacted their ability to test effectively. This is also the opportunity to review the process as a whole and discuss areas for improvement. After this meeting, the QA Process Manager will give a recommendation as to whether enough of the process was followed to ensure a quality product and thus allow a release. This meeting should take place after the Readiness Review meeting. It should be scheduled by the lead QA on the project.

· Release Board Meeting: This meeting is for the technical product manager and senior executives to discuss the status of the product and the team's release recommendations. If the results of the Readiness Review meeting and the QA Process Review meeting are positive, this meeting may be waived. The technical product manager is responsible for scheduling this meeting. This meeting is the final check before a product is released.