Leveraging Functional Tools and AWS for Performance Testing
Siva A
sqlandsiva.blogspot.in
Journey from developing utilities to scaling them for performance testing
Perf testing experience
Key Focus
Leading retail solutions product for managing Traffic, EAS, and Inventory Solutions for large retailers like Macy's (Link)
Product Under Test
Daily Continuous builds
Test bed management Tasks
Test data management
APIs Testing / Data Services Testing
Functional Testing Challenges
Minimum testable functionality and continuous builds require quick, reusable tools
Smaller components with dedicated code ownership make maintenance and customization easier
A tool developed by one person usually ends up modified by someone else on the team
"Writing small components will give your software a high chance of survival: all individual components are easy to use and understand, and are usable on their own in various use cases" (Source)
Smaller test utilities / tools rather than one large consolidated suite
Horizontally scaling the existing architecture for performance improvements
Performance Validation
Measure Response time of requests for multi user scenarios
Concurrency and Race Conditions
Failover Testing / Upgrade Testing
Metadata Sync Services
Web Services
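Measuring response times for multi-user scenarios, as listed above, can be sketched with a thread pool that simulates concurrent callers. This is a minimal illustration, not the presenter's actual tool; `multi_user_response_times` and the stand-in request are hypothetical names, and the `time.sleep` call stands in for a real HTTP/SOAP request.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def timed_call(fn):
    """Run fn once and return its elapsed wall-clock time in seconds."""
    start = time.perf_counter()
    fn()
    return time.perf_counter() - start

def multi_user_response_times(fn, users=10, calls_per_user=5):
    """Simulate `users` concurrent callers, each issuing calls_per_user
    requests; return a flat list of per-call response times."""
    def one_user(_):
        return [timed_call(fn) for _ in range(calls_per_user)]
    with ThreadPoolExecutor(max_workers=users) as pool:
        per_user = pool.map(one_user, range(users))
        return [t for times in per_user for t in times]

# Stand-in request: a 10 ms sleep instead of a real service call.
times = multi_user_response_times(lambda: time.sleep(0.01), users=4, calls_per_user=3)
print(len(times))  # 4 users * 3 calls = 12 samples
```

From the collected samples you can then derive percentiles or averages per user load level.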
Non Functional Testing Demand
Web Services – SOAP UI
Metadata Sync Services – Tablediff
Measuring Response time – custom code
Concurrency and Race Conditions – custom multithreaded application
Failover /Upgrade testing approach – Reboot cases/ Upgrades
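The kind of race condition a custom multithreaded application can expose is illustrated below with the classic shared-counter example: an unsynchronized read-modify-write can lose updates under contention, while a lock keeps the count exact. This is a generic sketch, not the tool from the talk; `hammer_counter` is a hypothetical name.

```python
import threading

def hammer_counter(use_lock, threads=8, increments=25_000):
    """Increment a shared counter from many threads. Without a lock the
    read-modify-write sequence races and updates can be lost; with the
    lock the final count is always threads * increments."""
    count = 0
    lock = threading.Lock()

    def worker():
        nonlocal count
        for _ in range(increments):
            if use_lock:
                with lock:
                    count += 1
            else:
                count += 1  # racy: load, add, store are not atomic

    ts = [threading.Thread(target=worker) for _ in range(threads)]
    for t in ts:
        t.start()
    for t in ts:
        t.join()
    return count

print(hammer_counter(use_lock=True))  # deterministic: threads * increments
```

Running the racy variant repeatedly against a real shared resource (a DB row, a file, a cache entry) is one way to surface concurrency defects before load testing.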
Challenges
Environment set up
Different customer environments setup required
Upgrade / Changing configurations is hard
AWS was one of the evaluated options that met our needs
Secret behind the Performance Story line
SQL Server performance areas:
IO – minimum reads and writes
CPU – effective CPU utilization
Memory – optimum memory usage
Schema – well configured and maintained (index tuning)
Code – optimized (T-SQL tuning)
Data – relevant
Perf counter reference (Area – Counter, with Definition, Significance, Accepted Values):

CPU Analysis – Total % Processor Time
Definition: Total percentage usage of the processors, i.e. usage of all cores divided by their number
Significance: High total processor usage means the system is short of CPU resources, resulting in longer processing times for queries and requests; it can also indicate that DB design, indexing, or queries are not optimal
Accepted values: Should be less than 85% for OLTP systems; values above 85% for longer periods (more than 10 minutes) indicate CPU contention

CPU Analysis – Processor Queue Length
Definition: The number of threads ready in the processor queue but not currently able to use the processor
Significance: The more such threads, the stronger the indication that the CPU is under pressure and is putting processes on the queue
Accepted values: Generally < 4 per CPU; < 8 is good, < 12 is fair

Memory Analysis – Available MBytes
Definition: Amount of free memory in MBytes on the server that can be used by new processes; if the server is under pressure this value decreases because processes keep acquiring memory
Significance: The more free memory the server has, the lower the chance that processes fight for the memory they need
Accepted values: Should not fall below 300 MBytes; otherwise it suggests a memory (RAM) bottleneck

Databases (TempDB, transaction databases) – Transactions/sec, Write Transactions/sec, Active Transactions
Significance: Baseline values for comparing performance across different configurations

SQL Server Behaviour – Index Searches vs Full Scans
Definition: Index Seeks is the number of seek operations the engine performs, including physical tables, tables created in memory, and tempdb; Full Scans is the number of full table scans, which are very expensive
Accepted values: Ensure table scans are minimal and indexes are used, eliminating the need for table scans

IO and Disk Analysis – Total Disk Queue Length vs Total Current Disk Queue Length
Definition: Outstanding requests waiting for disk resources to become available and be processed
Significance: A high value indicates a disk bottleneck
Accepted values: Usually less than 3
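The accepted values above lend themselves to an automated threshold check over collected counter samples. The sketch below assumes a 4-CPU box and uses the thresholds from the table; `check_counters` and the counter-name keys are illustrative, and you would feed it values parsed from your own PerfMon/logman output.

```python
NUM_CPUS = 4  # assumption for this example

def check_counters(sample):
    """Return the names of counters in `sample` (name -> observed value)
    that fall outside the accepted values from the table above."""
    limits = {
        "% Processor Time": lambda v: v <= 85,                 # OLTP: keep under 85%
        "Processor Queue Length": lambda v: v < 4 * NUM_CPUS,  # < 4 per CPU
        "Available MBytes": lambda v: v >= 300,                # RAM floor
        "Current Disk Queue Length": lambda v: v < 3,          # disk queue
    }
    return [name for name, ok in limits.items()
            if name in sample and not ok(sample[name])]

sample = {"% Processor Time": 92.0, "Processor Queue Length": 6,
          "Available MBytes": 512, "Current Disk Queue Length": 1}
print(check_counters(sample))  # -> ['% Processor Time']
```

A check like this can run at the end of each perf cycle so regressions are flagged without manually eyeballing counter logs.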
360 Degree view of Performance
Take vanilla DB backup; restore DB versions based on the test setting (automated)
Configure perf counters
Clean up previous run logs / counters / DB traces (automated)
Enable deadlock trace
Start perf counters
Start run (start load generators to hit the sites) – tool, automated
After run completion, collect stats from load generators, CPU, RAM, DB usage
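The cycle steps above can be strung together as a scripted dry run. The sketch below only assembles the commands rather than executing them; `sqlcmd`, `logman`, and `DBCC TRACEON (1222, -1)` are real SQL Server/Windows facilities, but the backup file name, counter-set name, and load command here are placeholders for whatever your environment uses.

```python
def build_perf_cycle(db, counter_set, load_cmd):
    """Assemble the shell commands for one perf test cycle as a dry run.
    File names and the load command are illustrative placeholders."""
    return [
        f'sqlcmd -Q "RESTORE DATABASE [{db}] FROM DISK = N\'vanilla_{db}.bak\' WITH REPLACE"',
        "del /Q logs\\*.blg logs\\*.trc",       # clean previous run artifacts
        'sqlcmd -Q "DBCC TRACEON (1222, -1)"',  # enable deadlock trace
        f"logman start {counter_set}",          # start perf counters
        load_cmd,                               # start the load generators
        f"logman stop {counter_set}",           # stop and flush counters
    ]

cycle = build_perf_cycle("RetailDB", "sql_counters", "run_loadgen.cmd")
print(len(cycle))  # 6 steps, mirroring the cycle above
```

Feeding the list to `subprocess.run` (with error checking between steps) would turn the dry run into the automated cycle the slides describe.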
Perf Test Cycle
Perf Test Landscape
Invoking Rest API
Result Comparison (Sample REST API Loadgenerator) – Github Code
From workstation – 4 proc, 8 GB: execution time 17.44 minutes, rate 172.00 calls per minute (2.86 calls per second)
From AWS machine – 8 proc, 30 GB
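The quoted workstation figures are internally consistent, and checking that is simple arithmetic: the per-second rate is the per-minute rate divided by 60 (2.86 is the truncated value), and rate times duration gives the total call count for the run.

```python
# Sanity-check the workstation numbers quoted above.
minutes = 17.44
calls_per_minute = 172.00

calls_per_second = calls_per_minute / 60   # 172 / 60 = 2.866..., quoted as 2.86
total_calls = calls_per_minute * minutes   # roughly 3000 calls in the run

print(round(calls_per_second, 2), round(total_calls))
```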
Learn to Play with AWS in Mins
Leverage the same environments for multiple customers' issue repro / fix verification
Save images when not needed; reuse them for the next cycle
Create and clone multiple load generators on an as-needed basis
Able to benchmark and recommend load-handling capacity for client-specified hardware, or recommend hardware for client performance needs
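One way the benchmark numbers feed capacity recommendations: once a single load generator's sustainable rate is known, the number of clones needed for a target load is a simple ceiling division. This is an illustrative calculation, not a formula from the talk, and it assumes clones scale roughly linearly.

```python
import math

def generators_needed(target_calls_per_sec, per_generator_rate):
    """How many cloned load generators are needed to reach a target rate,
    assuming each clone sustains roughly the benchmarked per-machine rate."""
    return math.ceil(target_calls_per_sec / per_generator_rate)

# E.g. the workstation benchmark of ~2.86 calls/s per generator, target 50 calls/s:
print(generators_needed(50, 2.86))  # -> 18 clones
```

The same arithmetic runs in reverse for hardware sizing: divide the client's expected peak rate by the per-machine benchmark to estimate the fleet size.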
Lessons Learnt
Use same environments to run tests
Develop reusable utilities
Don’t expect commercial tools to do everything
In progress – CI integration for functional testing using Win32 automation: installation, setup, running tests
Key Takeaways
Database and Monitoring Utilities
WSDL comparison across multiple versions
SSMS Tools Pack, Atlantis SQL Server (Data generation / schema analysis)
Tablediff – data comparison across multiple DBs
Perf Counters Interpretation & Analysis