WHY SINGLE USER PERFORMANCE TESTING IS IMPORTANT
-OVERVIEW-
FEBRUARY 2012
GIHAN B
[email protected]
SENIOR SYSTEMS ENGINEER
Going for the low-hanging fruit first…
NONINVASIVE PERFORMANCE TUNING
Why Performance
No matter how well applications are designed,
no matter how well they meet business requirements,
they are virtually useless to end-users if the performance is sluggish or unreliable
Why this is doable and its benefits
The good news is that the vast majority of these performance issues can be found without sophisticated, expensive testing tools.
- You can find the majority of performance issues with baseline testing
- You can find a subset of these performance issues with single user testing (SUT)
- The benefit is twofold: you verify the feature as well as its single user performance (when the duration of the full scenario is measured)
- Run regularly, this can serve as a health check of the product
- Test high-ROI feature scenarios
Have you observed any of these behaviors on your PC or on the test server?
- Computer suddenly freezes and becomes unusable, often when you start working on Monday
- Any operation you try to execute is extremely slow
- Running out of disk space due to huge log files
- Out of Memory exceptions from applications
- Stack Overflow errors from applications
- Rendering of some pages of applications is extremely slow
Why is single user performance testing so important?
- Gives an early indication of system design feasibility
- Can be measured with the existing set of tools available
- The cost of single user testing is low, and it can be done prior to multiple user testing
- Can be done with minimal experience in performance testing
- Can be done repetitively with minimum effort
Single User, Baseline and Benchmark testing
Baseline performance testing (done for each scenario, say with 1, 2, 10, 20 and 50 users, to determine baselines mainly for response times). The performance objective, however, could be to run 500 users.
Suppose you determine that 5 seconds for a webpage load is acceptable performance. If you execute your performance test cases and see the webpage loading in 8 seconds, that 8-second load time is your baseline result, but it is not acceptable performance. From that point, the development team needs to make the webpage load faster. Each subsequent run of the same test becomes a regression test. When the webpage reaches a 5-second load time, your test case passes.
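The baseline-versus-target bookkeeping described above can be sketched in a few lines. The 5-second target and 8-second first measurement are the figures from the example; the later run times are invented for illustration:

```python
# Hypothetical sketch: tracking a baseline measurement against a target.
# The 5 s target and 8 s baseline come from the example above; the later
# run times are made up and do not come from any real tool.

TARGET_SECONDS = 5.0

def evaluate_run(measured_seconds, target=TARGET_SECONDS):
    """Return a pass/fail verdict for one regression run."""
    return "PASS" if measured_seconds <= target else "FAIL"

# The first measurement becomes the baseline; later runs are regression tests.
baseline = 8.0
history = [baseline, 7.2, 6.1, 4.9]  # invented load times per run

for run, seconds in enumerate(history, start=1):
    print(f"run {run}: {seconds:.1f}s -> {evaluate_run(seconds)}")
```

Each run is compared against the same fixed target, so the verdicts show progress toward acceptable performance across regression runs.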
Contd…
Baseline performance testing: 1 user
Baseline performance testing: 5 users
Baseline performance testing: 20 users
Baseline performance testing: 50 users
Benchmark testing (IT)
Tests that use representative sets of programs and data designed to evaluate the performance of computer hardware and software in a given configuration.
A benchmark is the act of running a computer program, a set of programs, or other operations, in order to assess the relative performance of an object, normally by running a number of standard tests and trials against it.
- Compare with the competition for availability, stability, etc.
- Usually done by certified labs: VeriTest, Lionbridge, Microsoft, etc.
SUT Scenarios are…
- SUT can be considered a subset of baseline and benchmark performance testing
- If your performance requirement is 5 seconds, for SUT the result should be in the sub-second range
- Determine the ideal performance of the system
- SUT scenarios should cover the most used scenarios of the application or system
- SUT scenarios can include think time
- SUT can be done manually (stopwatch) or automated (tool)
- Consider measuring the performance of multiple scenarios (probably 10 to 20 scenarios, but no more)
When SUT can be executed
- Each week
- Each iteration
- Each release
- Each customer deployment
- Each product upgrade (software and hardware)
- When the network mode or infrastructure changes
- With data centre relocations
Be organized and methodical…
- Follow a process
- Do benchmarks regularly
- Dedicate an approved test bench for this testing
- Publish results where they are visible to everyone in the development team
- Share results and raise concerns with the appropriate people when needed
- Work closely with key people in the development team
- Form a performance SWAT team if necessary
Benchmarking single user performance manually (numbers are for illustration only)
Example SUT Scenario
1. Create a visitor
2. Create an equipment manufacturer, model and unit
3. Assign the equipment to a mobile device
4. Assign a visitor for the equipment created
5. Delete an equipment item
6. Modify an equipment item
7. Create a prerequisite for the equipment
8. Create a designated area policy
9. Assign the policy to the equipment
10. Create speed limits for roads
11. Simulate movement of the equipment
12. Verify the feature
MEASURE – Total duration from Step 1 to 12
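A scenario like this can be timed automatically instead of with a stopwatch. The sketch below is a generic harness, not a real test script; the two step functions are placeholders standing in for the twelve operations listed above:

```python
import time

# Illustrative harness for automating the "stopwatch" measurement of a
# multi-step SUT scenario. The step functions are placeholders for real
# UI or API actions (create visitor, create equipment, ...).

def create_visitor():
    time.sleep(0.01)  # stand-in for the real operation

def create_equipment():
    time.sleep(0.01)  # stand-in for the real operation

STEPS = [create_visitor, create_equipment]  # steps 3-12 would follow here

def run_scenario(steps):
    """Run every step; return per-step durations and the total, in seconds."""
    durations = {}
    start = time.perf_counter()
    for step in steps:
        t0 = time.perf_counter()
        step()
        durations[step.__name__] = time.perf_counter() - t0
    total = time.perf_counter() - start
    return durations, total

per_step, total = run_scenario(STEPS)
print(f"total scenario duration: {total:.3f}s")
```

Recording per-step durations alongside the total makes it easy to see which step of the scenario is responsible when the total duration regresses.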
Weekly SUT dashboard - Automated
Workload profile in aid of SUT scenarios
SUT Baseline Graph
SUT Response time composition
Why it is important to define performance targets at the beginning of the project
- They drive technology selection
- They impact application architecture and design
- Staged optimizations can be done until the target is met
Response Time Target | Deviation Allowed | Response time with deviations
Small: 1 sec | 0% | 1 sec
Medium: 1.5 sec | 50% | 0.75 – 2.25 sec
Large: 2 sec | 100% | 1 – 4 sec

Report | Duration | Response Time
Report A | Up to 6 months | < 4 seconds
Report A | Up to 2 years | < 8 seconds
Report A | Up to 4 years | < 12 seconds
PERFORMANCE OBJECTIVES
We need to look into the following four areas when meeting performance objectives. Values for these need to be defined as requirements at the beginning of the project.
Response Time: the time it takes for a system to complete a particular operation, such as a user operation.
Throughput: the amount of work a system can support. Throughput can be measured in bytes per second in our application (this could be requests per second or transactions per second in a web application).
Resource Utilization: the percentage of system resources used by particular operations and how long they are used. This is the cost of the server and network resources, including CPU, memory, disk I/O and network I/O.
Workload or Payload: usually derived from marketing data; includes the total number of users, concurrently active users, data volumes, byte rates, etc.
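Turning raw measurements into these objective values is simple arithmetic. A minimal sketch, with all numbers invented for illustration:

```python
# Sketch of computing two of the four objective values discussed above.
# The request count, window, and busy time are invented sample figures.

requests_completed = 1200
window_seconds = 60.0

# Throughput: work completed per unit time (requests per second here).
throughput = requests_completed / window_seconds
print(f"throughput: {throughput:.1f} req/s")

# Resource utilization: busy time as a percentage of the measurement window.
cpu_busy_seconds = 27.0
cpu_utilization = 100.0 * cpu_busy_seconds / window_seconds
print(f"CPU utilization: {cpu_utilization:.0f}%")
```

The same window-based calculation applies to transactions per second or bytes per second; only the unit of "work completed" changes.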
Probable performance bottlenecks
- A bottleneck is the resource that constrains throughput
- Identifying bottlenecks: measure response time, measure throughput, measure resource utilization
- System resource issues (need system tuning): CPU, memory, disk I/O, network I/O
- External resource issues (need .NET Framework tuning):
  - Application server: session and state management, thread contention, interfaces
  - Database server: database design, inefficient indexes, stored procedures and queries
Is the bottleneck related to CPU usage?
Is the bottleneck related to memory?
Is the bottleneck related to disk I/O usage?
Is the bottleneck related to network I/O usage?
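One way to answer these four questions is to compare sampled utilization figures against simple thresholds. The sketch below is hypothetical: the sample values and thresholds are invented, and real thresholds depend on your system and workload:

```python
# Hypothetical sketch: flag resources whose average sampled utilization
# exceeds a threshold. Thresholds and samples are invented examples.

THRESHOLDS = {"cpu": 80.0, "memory": 85.0, "disk_io": 75.0, "network_io": 70.0}

def suspected_bottlenecks(samples, thresholds=THRESHOLDS):
    """Return (resource, avg) pairs whose average exceeds the threshold,
    sorted with the worst offender first."""
    suspects = []
    for resource, values in samples.items():
        avg = sum(values) / len(values)
        if avg > thresholds[resource]:
            suspects.append((resource, avg))
    return sorted(suspects, key=lambda s: s[1], reverse=True)

samples = {
    "cpu": [92.0, 88.0, 95.0],        # sustained high CPU
    "memory": [60.0, 62.0, 61.0],
    "disk_io": [40.0, 45.0, 42.0],
    "network_io": [30.0, 28.0, 33.0],
}
print(suspected_bottlenecks(samples))
```

Averaging several samples rather than reacting to a single spike avoids chasing transient load as if it were a bottleneck.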
SOME SPECIFIC GUIDELINES FOR SINGLE USER PERFORMANCE TESTING
Consider raising issues first with the most frequently used operations. They are the ones that will most affect the user experience for your customers.
Mobile client performance (or response time) is more important than office client performance.
If you want to run performance counters for a different machine on your network from your PC, run \\machinename\C$ first to open a connection to it.
When measuring response time with a stopwatch, take readings from more than one run. .NET applications may show a bad reading the first time you run a certain operation (JIT compilation and cold caches inflate the first run). It is better to record the second reading, or better still, average the 2nd and 3rd readings.
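The discard-the-first-run advice above is easy to automate. A minimal sketch, where the sleep-based operation is a placeholder for the real operation under test:

```python
import time

# Sketch of the warm-up advice above: discard the first (cold) reading
# and average the remaining ones. The sleep-based operation is a
# placeholder for a real user operation.

def timed(operation):
    """Time one invocation of `operation` in seconds."""
    t0 = time.perf_counter()
    operation()
    return time.perf_counter() - t0

def warm_response_time(operation, runs=3):
    """Run `runs` times, discard the first reading, average the rest."""
    readings = [timed(operation) for _ in range(runs)]
    return sum(readings[1:]) / (runs - 1)

avg = warm_response_time(lambda: time.sleep(0.01))
print(f"averaged warm response time: {avg:.3f}s")
```

Using `time.perf_counter()` rather than `time.time()` gives a monotonic, high-resolution clock suitable for short durations.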
Contd…
Do not use too many performance counters the first time you try to identify performance issues. Start with a set of 20 to 25 counters at most; then, when you sense a problem, dig in by increasing the number of counters in the area of the bottleneck.
Don’t run SQL profiling and performance counters at the same time to monitor performance issues. Together they add too much overhead to the system whose performance you are measuring.
If you are using the SQL Profiler to look at how the system performs, any query taking more than 50 ms with a single user may lead to potential performance and scalability issues when running with multiple users.
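Applying that 50 ms threshold to a profiler trace export is a simple filter. The sketch below uses made-up trace rows; a real SQL Profiler export would have many more columns:

```python
# Illustrative sketch: filter a profiler trace export for queries over the
# 50 ms single-user threshold mentioned above. The trace rows are invented.

THRESHOLD_MS = 50

trace = [
    {"query": "SELECT * FROM Visitors", "duration_ms": 12},
    {"query": "SELECT * FROM Equipment, Policies", "duration_ms": 180},
    {"query": "UPDATE Policies SET Active = 1", "duration_ms": 65},
]

# Keep only queries whose duration exceeds the single-user threshold.
slow_queries = [row for row in trace if row["duration_ms"] > THRESHOLD_MS]
for row in slow_queries:
    print(f"{row['duration_ms']:>4} ms  {row['query']}")
```

Queries flagged by this filter are the first candidates for index and query-plan review before any multi-user testing begins.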
APPROACH FOR MEETING PERFORMANCE OBJECTIVES
Plan: understand the need; identify requirements (objectives)
Prepare: provide a plan with an approach; set aside a test bench
Execute: organize and prioritize; execute regularly; measure and analyze
Publish: share and publish results
Retuning and postmortem: optimize/tune as necessary; take appropriate action
SOME TIPS FOR USING WINDOWS PERFORMANCE MONITOR
Use counter logs to monitor the activity of a certain operation or a simulation over a long time. Once you add a set of performance counters to the log, you can start them manually, start them automatically at a certain time, or have them triggered when another counter exceeds a limit.
Work with a small set of counters if you are measuring the response time of an application. Otherwise, the monitoring overhead could affect your readings.
You can set a longer interval between monitoring points if you are running these counters for a long time, say from the default of every 15 seconds to every 1 minute.
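The trade-off behind that tip is generic to any counter log: sample often enough to catch the behavior, but not so often that the log itself becomes overhead. A minimal stand-in sketch, where the probe is a placeholder for reading a real performance counter and the interval is shortened so the example runs quickly:

```python
import time

# Minimal stand-in for a counter log: call a probe at a fixed interval and
# collect timestamped readings. probe() is a placeholder for reading a real
# performance counter; a real log might sample every 15-60 seconds.

def sample(probe, interval_seconds, samples):
    """Collect `samples` readings, `interval_seconds` apart."""
    log = []
    for _ in range(samples):
        log.append((time.time(), probe()))
        time.sleep(interval_seconds)
    return log

readings = sample(lambda: 42.0, interval_seconds=0.01, samples=3)
print(readings)
```

Lengthening `interval_seconds` shrinks both the log file and the measurement overhead, at the cost of missing short spikes between samples.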
Change the scale of the performance counters appropriately so they are properly visible in the System Monitor.
If you are creating a counter or trace log, store it on the local disk, not in a shared location on the network.
You can agree or disagree with these…
A successful test is not a test case that passes execution; it is one that fails and finds a defect.
Software engineers and test engineers need different mindsets to be successful in their positions: SEs mostly build systems, whereas TEs break systems.
An intelligent, short-duration, ad hoc testing exercise can at times be more effective and successful than a well-planned, well-thought-out regression test exercise. Success depends largely on the test engineer's knowledge of the product and his/her testing skills.
References…
- Improving .NET Application Performance and Scalability
- Baseline Testing, http://www.performancetesting.co.za/Baseline%20Testing.htm
- Performance Tuning and Optimizing ASP.NET Applications
- Silk Performer user manuals
- The Art of Software Testing
Q&A