B2 2005
© Blackboard, Inc. All rights reserved.
An Introduction to Load Testing: A Blackboard Primer
Steve Feldman, Director of Performance Engineering and Architecture, Blackboard Inc.
July 18th, 3pm
What is Software Performance Engineering?
[Diagram: SPE spans two views of the system software, the Software Execution Model and the System Execution Model]
The Software Execution Model
» Response time is the absolutely critical performance measurement.
» Emphasis on optimizing the application business logic.
» Design-pattern implementation is the primary concern.

The System Execution Model
» Response time remains the critical performance measurement.
» Emphasis on optimizing the deployment environment.
» System resource utilization is the primary concern.
The System Execution Model and the Performance Maturity Model
» Awareness of system performance peaks and valleys.
» Knowledge of capacity planning needs.
» All of the data is available, but little is done other than basic optimization.
» Looking to extend performance management via environmental optimization.

Maturity levels:
Level 1: Reactive (Fire Fighting)
Level 2: Monitoring and Instrumenting
Level 3: Performance Optimizing
Level 4: Business Optimizing
Level 5: Process Optimizing
How Do We Optimize our Environment Using the System Execution Model?
» Study existing behavior, adoption, growth, and system resource utilization patterns.
» Measure live system response times during periods of variation for watermark analysis.
» Simulate synthetic load mocking the usage patterns of the deployment.
Introduction to Load Testing
» What is load testing?
» Why do we load test?
» What tools do we use?
» Preparing for load testing
» How do we load test?
» What to do with the results of a load test?
What is Load Testing?
» Load testing is a controlled method of exercising artificial workload against a running system.
» A system can be hardware- or software-oriented, or both.
» Load testing can be executed in a manual or automated fashion.
» Automated load testing mitigates inconsistencies without compromising the scientific reliability of the data.
Why do we Load Test?
» Most load tests are executed with false intentions (used only as a performance barometer).
» Understanding the impact of response times for predictable behavioral conditions and scenarios.
» Understanding the impact of response times for patterns of adoption and growth.
» Understanding the resource demands of the deployment environment.
What tools do we use?
» Commercial tools
  » Mercury LoadRunner (currently used at Blackboard)
  » Segue SilkPerformer (formerly used at Blackboard)
  » Rational Performance Studio
» Freeware tools
  » Grinder (occasionally used at Blackboard)
  » Apache JMeter (occasionally used at Blackboard)
  » OpenSTA
» Great resources for picking a load-testing tool
  » Performance Analysis for Java Web Sites by Stacey Joines (ISBN: 0201844540)
  » http://www.testingfaqs.org/t-load.html
Preparing for Load Testing
» Define performance objectives
» Use case definition
» Performance scenarios
» Data modeling
» Scripting and parameterization
Define Performance Objectives
» Every load test should have a purpose of measurement.
» Common objectives:
  » Sessions per hour
  » Transactions per hour
  » Throughput per transaction and per hour
  » Response time calibration
  » Resource saturation calibration
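The hourly objectives above reduce to simple rate normalizations. A minimal sketch in Python (function names and sample numbers are illustrative, not Blackboard figures):

```python
# Hypothetical helpers for two common objectives: sessions per hour
# and throughput per hour, normalized from a measured test interval.

def sessions_per_hour(completed_sessions: int, test_minutes: float) -> float:
    """Scale a run's completed session count to an hourly rate."""
    return completed_sessions * 60.0 / test_minutes

def throughput_per_hour(bytes_transferred: int, test_minutes: float) -> float:
    """Scale bytes transferred during the run to an hourly rate."""
    return bytes_transferred * 60.0 / test_minutes

# A 30-minute run completing 450 sessions sustains 900 sessions/hour.
print(sessions_per_hour(450, 30))  # 900.0
```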
Define Use Cases
» Use cases should be prioritized based on the following:
  » Criticality of execution
  » Observation of execution (behavioral modeling)
  » Expectation of adoption
  » Baseline analysis
Define Performance Scenarios
» A collection of one or more use cases sequenced in a logical manner (a compilation of a user session).
» Scenarios should be realistic in nature and based on recurring patterns identified in session behavior models.
» Avoid simulating extraneous workload.
» Iterate when necessary.
Design an Accurate Data Model
» Uniform in construction
  » Naming conventions (sequential): User000000001, LargeCourse000000001, 25-MC-QBQ-Assessment, XSmallMsg000000001, etc.
  » Data constructions (uniform for testability): XSmallMsg000000001 contains 100 characters of text; XLargeMsg000000001 contains 1,000 characters of text (a factor of 10x).
» Multi-dimensional data model, fragmented in nature
  » Data conditions for testability
  » Scenarios for testability
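The sequential naming and factor-of-10 sizing conventions above can be generated mechanically; a sketch (only the conventions come from the slide, the generator itself is illustrative):

```python
# Generate uniform, sequential test-data names (User000000001 style)
# and uniformly sized message bodies (XSmall = 100 chars, XLarge = 1000).

def sequential_names(prefix: str, count: int, width: int = 9) -> list:
    """Zero-padded sequential names: prefix000000001, prefix000000002, ..."""
    return [f"{prefix}{i:0{width}d}" for i in range(1, count + 1)]

def message_body(size_class: str) -> str:
    """Uniform bodies sized by class, a factor of 10x apart."""
    sizes = {"XSmall": 100, "XLarge": 1000}
    return "x" * sizes[size_class]

print(sequential_names("User", 2))  # ['User000000001', 'User000000002']
print(len(message_body("XLarge")))  # 1000
```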
Scripting and Parameterization
» Script programmatically: focus on reusability, encapsulation, and testability.
» Componentize the action step of the use case.
» Use explicit naming conventions.
» Example: measure the performance of opening a Word document as Authenticate(), NavPortal(), NavCourse(), NavCourseMenu(Docs), ReadDoc().
Scripting and Parameterization: Example

/**
 * Quick Navigation: Click on Course Menu
 * Params Required: course_id
 * Params Saved: none
 */
CourseMenu()
{
    static char *status = "Course Menu: Course Menu";
    bb_status(status);
    lr_start_transaction(status);
    bb_web_url("{bb_target_url}/webapps/blackboard/content/courseMenu.jsp?mini=Y&course_id={bb_course_pk}", 'l');
    lr_end_transaction(status, LR_AUTO);
    lr_think_time(navigational);
}

Annotations: code comments; reusable action name; echo status and transaction; HTTP representation with parameterization and abandonment.
Scripting and Parameterization
» Parameterize dynamically: realistic load simulations test against unique data conditions.
» Avoid hard-coding dynamic or user-defined data elements.
» Work with uniform, well-constructed data sets (sequences).
» Example: parameterize the username for Authenticate(): student000000001, student000000002, instructor000000001, admin000000001, observer000000001, hs_student000000001.
Scripting and Parameterization: Example

// Save various folder pks and go to the course menu folder
web_reg_save_param("course_assessments_pk", "NotFound=Warning",
    "LB=content_id=_",
    "RB=_1&mode=reset\" target=\"main\">Assessments", LAST);
web_reg_save_param("course_documents_pk", "NotFound=Warning",
    "LB=content_id=_",
    "RB=_1&mode=reset\" target=\"main\">Course Documents", LAST);

Parameterization name: course_assessments_pk. If the value is not found, a warning is issued. The value is captured between the left boundary (LB) and right boundary (RB).
These are LoadRunner terminology references; however, other scripting tools use the same constructs.
Scripting and Parameterization: Blackboard Gotchas
» RDBMS authentication
  » One-time token
  » MD5-encrypted password
  » MD5(MD5-encrypted password + one-time token)
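The token gotcha above means the login digest must be recomputed on every iteration. A sketch of the scheme as the slide describes it, MD5(MD5(password) + one-time token); the function names are illustrative and the real Blackboard wire format may differ:

```python
import hashlib

def md5_hex(data: str) -> str:
    return hashlib.md5(data.encode("utf-8")).hexdigest()

def auth_digest(password: str, one_time_token: str) -> str:
    """MD5 of the MD5-encrypted password concatenated with the token."""
    return md5_hex(md5_hex(password) + one_time_token)

# The token changes on every login page fetch, so a load-test script
# must capture it dynamically (e.g. via a saved parameter) rather than
# hard-coding the resulting digest.
print(auth_digest("secret", "token123"))
```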
Scripting and Parameterization: Blackboard Gotchas
» Navigational concerns
  » Dynamic IDs: tab IDs, content IDs, course IDs, tool IDs
  » Modes: Reset, Quick, View
  » Action steps: Manage, CaretManage, Copy, Remove_Proc
  » Family
Scripting and Parameterization: Blackboard Gotchas
» Transactional concerns
  » HTTP POST: multiple ID submissions, action steps, data values, permissions, metadata
Scripting and Parameterization: Blackboard Gotchas (Example)

/**
 * User Modifies Grade and Submits Change
 * Params Required: Random Number from 0 to 100
 * Params Saved: none
 */
ModifyGrade()
{
    static char *status = "Modify Grades";
    bb_status(status);
    start_timer();
    lr_start_transaction(status);
    web_submit_form("itemGrades",
        "Snapshot=t28.inf",
        ITEMDATA,
        "Name=grade[0]", "Value={randGrade}", ENDITEM,
        "Name=grade[1]", "Value={randGrade}", ENDITEM,
        "Name=grade[2]", "Value={randGrade}", ENDITEM,
        "Name=grade[3]", "Value={randGrade}", ENDITEM,
        "Name=submit.x", "Value=41", ENDITEM,
        "Name=submit.y", "Value=5", ENDITEM,
        LAST);
    lr_end_transaction(status, LR_AUTO);
    stop_timer();
    abandon('h');
    lr_think_time(transactional);
}

Annotations: transaction timer; dynamic parameterized values; explicit abandonment policy and parameterized think time.
How do we load test?
» Initial configuration
» Calibration
» Baseline
» Environmental clean-up
» Collecting enough samples
» Optimization
Load Testing: How To Initially Configure
» Optimize the environment from the start; consider it your baseline configuration.
» Knowledge of embedded sub-systems.
» Previous experience with Blackboard and/or the current deployment configuration.
» Think twice about using the out-of-the-box configuration.
Load Testing: Calibration
» Definition: the process of identifying an ideal workload to execute against a system.
» Blackboard Performance Engineering uses two types of calibration, both of which identify the peak of concurrency (the key metric for identifying sessions per hour):
  » Calibrate to response time
  » Calibrate to system saturation
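Response-time calibration, as defined above, can be sketched as a step-up loop: raise the workload until the threshold is crossed, and keep the last workload that stayed under it. The measurement function here is a toy stand-in, not a LoadRunner API:

```python
# Step the virtual-user count upward until measured response time
# exceeds the threshold; the previous step is the optimal workload.

def calibrate(measure_ms, threshold_ms: int,
              start: int = 25, step: int = 25, max_users: int = 1000) -> int:
    optimal = 0
    for users in range(start, max_users + 1, step):
        if measure_ms(users) > threshold_ms:
            break                 # threshold crossed; stop stepping up
        optimal = users
    return optimal

# Toy model: 500 ms base latency plus 5 ms per concurrent user.
toy = lambda users: 500 + 5 * users
print(calibrate(toy, threshold_ms=2000))  # 300
```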
Load Testing: Response Time Calibration
[Chart: response time (Y-axis) plotted against iterations (X-axis); the optimal workload is the point where the response time curve meets the response time threshold line]
Load Testing: Resource Saturation Calibration
[Chart: resource utilization (Y-axis) plotted against iterations (X-axis); the optimal workload is the point where the CPU utilization curve meets the resource saturation threshold]
Load Testing: How To Baseline
» The baseline is the starting point or comparative measurement:
  » Defined use cases
  » Arrival rate, departure rate, and run-time iterations
  » Software/system configuration
» Arrival Rate: the rate at which virtual users are introduced to the system.
» Departure Rate: the rate at which virtual users exit the system.
» Run-Time Iterations: the number of unique, iterative sessions executed during a measured test.
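The three baseline parameters combine into a simple trapezoidal workload profile. A sketch (units and numbers are illustrative):

```python
# Active virtual users at time t (minutes) for a baseline run with a
# linear arrival ramp, a steady plateau, and a linear departure ramp.

def active_users(t, total_users, arrival_rate, steady_minutes, departure_rate):
    ramp_up_end = total_users / arrival_rate        # arrival period ends
    departures_start = ramp_up_end + steady_minutes
    if t < ramp_up_end:
        return arrival_rate * t                     # still arriving
    if t < departures_start:
        return total_users                          # run-time iterations
    return max(0, total_users - departure_rate * (t - departures_start))

# 100 users at 10 users/min: full load from t=10 to t=40, empty by t=60.
print(active_users(5, 100, 10, 30, 5))   # 50
print(active_users(20, 100, 10, 30, 5))  # 100
print(active_users(50, 100, 10, 30, 5))  # 50.0
```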
Load Testing: How To Baseline
[Chart: users (Y-axis) over time (X-axis); the workload ramps up during the arrival period, holds steady through the run-time iterations, and ramps down during the departure period]
Load Testing: How To Clean Up between Tests
» Tests should be identical in every way; restore the environment to its previous state.
  » Remove data added from the test.
  » Truncate logs.
» Keep the environment pristine: shut down and restart sub-systems.
» Remove all guessing and what-if questions.
» Automate these steps, because you will test more than once and hopefully more than twice.
Load Testing: Samples and Measurements
[Chart: users (Y-axis) over time (X-axis); samples = iterations; response time measurement begins after arrivals complete and ends before departures begin. Calibrated data is more reliable.]
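The measurement rule above, count only samples taken after all arrivals finish and before departures begin, can be sketched as a window filter (the sample data is invented for illustration):

```python
# Keep only steady-state samples, then summarize response time.

def steady_state_mean(samples, arrival_end: float, departure_start: float) -> float:
    """samples: (timestamp, response_time) pairs; mean of the steady window."""
    window = [rt for ts, rt in samples if arrival_end <= ts < departure_start]
    return sum(window) / len(window)

samples = [(2, 9.0), (12, 1.0), (25, 3.0), (55, 8.0)]
# Only t=12 and t=25 fall inside the 10..40 measurement window.
print(steady_state_mean(samples, arrival_end=10, departure_start=40))  # 2.0
```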
Load Testing: How To Optimize
» Measure against the baseline.
  » Instrument one data element at a time.
  » Never use results from an instrumentation run.
» Introduce one change at a time.
  » Run a comparative regression against the baseline.
  » Changes should be based on quantifiable measurement and explanation.
» Avoid guessing.
  » Cut down on Googling (not everything you read on the net is true).
» Validate improvements through repeatability.
What to do with the Results of a Load Test
» Advanced Capacity Planning» Operational Efficiencies» Business Process Optimization
Advanced Topic: Behavioral Modeling
» Behavioral modeling is a form of trend analysis.
» Study navigational and transactional patterns of user activity within the application (session lengths and click-path depths).
» Study patterns of resource utilization and peak usage.
» Develop a deep understanding of seasonal usage versus general usage adoption.
» Many tools can analyze the collected data: Sherlog, Webalizer, WebTrends.
Advanced Topic: User Abandonment
» User abandonment is the simulation of a user's psychological patience when transacting with a software application.
» Two types of abandonment:
  » Uniform (all things equal)
  » Utility (element of randomness)
» Load tests that do not simulate abandonment are flawed.
» Two great reads:
  » http://www-106.ibm.com/developerworks/rational/library/4250.html
  » http://www.keynote.com/downloads/articles/tradesecrets.pdf
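The two abandonment types named above can be sketched as predicates over the observed response time; the thresholds and the randomness model are illustrative assumptions:

```python
import random

def abandons_uniform(response_s: float, patience_s: float = 8.0) -> bool:
    """Uniform: every virtual user shares the same fixed patience."""
    return response_s > patience_s

def abandons_utility(response_s: float, rng: random.Random,
                     lo: float = 4.0, hi: float = 12.0) -> bool:
    """Utility: each user draws a random patience threshold."""
    return response_s > rng.uniform(lo, hi)

rng = random.Random(42)
print(abandons_uniform(10.0))       # True: 10s exceeds the 8s patience
print(abandons_utility(13.0, rng))  # True: 13s exceeds any draw in [4, 12]
print(abandons_utility(3.0, rng))   # False: 3s is under the 4s minimum
```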
Resources and Citations
Joines, Stacey. Performance Analysis for Java Web Sites, First Edition, Addison-Wesley, ISBN: 0201844540.
Maddox, Michael. "A Performance Process Maturity Model," 2004 CMG Proceedings.
Barber, Scott. "Beyond Performance Testing Part 4: Accounting for User Abandonment," http://www-128.ibm.com/developerworks/rational/library/4250.html, April 2004.
Savoia, Alberto. http://www.keynote.com/downloads/articles/tradesecrets.pdf, May 2001.