Web App Performance and Scalability Testing
— software development, coding, web development, pluralsight — 4 min read
Performance and scalability are two important attributes to examine whenever you're building a web application.
Performance
Performance consists of responsiveness (from the client's point of view), throughput (from the system's point of view), and resource utilization (also from the system's point of view). All of these vary with load, so when testing performance, the load should be held constant.
Scalability
Scalability is an application's ability to handle increased load by adding resources while holding performance constant. Scalability measurements change when performance changes, so performance should be fixed when testing scalability.
In an ideal world, applications would have linear scale. That is, each additional resource should allow for the same increase in load as did the previous resource added. Essentially, to use a term from economics, your goal is to have constant returns to scale:
X in -> Y out, and nX in -> nY out, no matter how big n gets.
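As a rough illustration of that idea, here is a small Python sketch (the throughput numbers are entirely made up) that compares measured throughput against the ideal n-times multiple to see how close an application is to linear scale:

```python
# Hypothetical throughput measurements: servers -> requests/sec handled at the
# target response-time threshold. These numbers are illustrative only.
measured = {1: 500, 2: 950, 4: 1700, 8: 2800}

baseline = measured[1]
for servers, throughput in measured.items():
    ideal = baseline * servers        # linear scale: nX in -> nY out
    efficiency = throughput / ideal   # 1.0 means constant returns to scale
    print(f"{servers} server(s): {throughput} req/s, "
          f"ideal {ideal} req/s, efficiency {efficiency:.0%}")
```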
Testing can help you move towards linear scale by helping you identify bottlenecks in your application.
Measurement
Measuring performance and scalability is important because it allows developers to know whether they are improving these metrics (or making them worse) and whether the system is production-ready, based on thresholds/goals that have been set for the application.
Whatever type of testing you're doing, it's important to keep track of how your application performs in the tests over time. Is it improving? Did the changes you made make a difference? Keep track of the information you'll need to answer these questions.
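One low-tech way to do that, shown here as a sketch rather than a prescription (the file name and fields are arbitrary), is to append a summary row for each test run to a CSV file kept alongside the application:

```python
import csv
from datetime import datetime, timezone
from pathlib import Path

RESULTS_FILE = Path("perf_history.csv")  # arbitrary location

def record_run(avg_response_ms: float, requests_per_sec: float, error_rate: float) -> None:
    """Append one test run's summary so trends can be compared across runs."""
    is_new = not RESULTS_FILE.exists()
    with RESULTS_FILE.open("a", newline="") as f:
        writer = csv.writer(f)
        if is_new:
            writer.writerow(["timestamp", "avg_response_ms", "requests_per_sec", "error_rate"])
        writer.writerow([datetime.now(timezone.utc).isoformat(),
                         avg_response_ms, requests_per_sec, error_rate])

# Example: record the numbers reported by whatever test tool you used.
record_run(avg_response_ms=120.0, requests_per_sec=850.0, error_rate=0.002)
```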
What to Measure
Don't measure irrelevant things - in an application without a database, for example, don't measure database communication. Pretty self-explanatory.
On the flip side, it's better to measure too much than too little. Just keep in mind that more counters means more data that needs to be stored. (A sketch of sampling a few of these counter categories follows the list below.)
Performance Counter Categories:
- Processor
- Memory
- Disk
- Network
- ASP.NET
- Database
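Here is a small sketch that samples the processor, memory, disk, and network categories from the test machine. It assumes the third-party psutil package is installed; ASP.NET and database counters come from their own instrumentation and aren't shown here:

```python
import time
import psutil  # third-party: pip install psutil

def sample_counters() -> dict:
    """Take one sample of processor, memory, disk, and network counters."""
    disk = psutil.disk_io_counters()
    net = psutil.net_io_counters()
    return {
        "cpu_percent": psutil.cpu_percent(interval=1),      # processor
        "memory_percent": psutil.virtual_memory().percent,  # memory
        "disk_read_bytes": disk.read_bytes,                 # disk
        "disk_write_bytes": disk.write_bytes,
        "net_bytes_sent": net.bytes_sent,                   # network
        "net_bytes_recv": net.bytes_recv,
    }

# Sample every few seconds for the duration of a test run.
for _ in range(3):
    print(sample_counters())
    time.sleep(5)
```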
Performance Requirement Definition
These should be specific. Answer questions like the following (see the example sketch after this list):
- What is a request?
- What is the time threshold?
- At what load?
- What kind of data will the application be working with?
- Resource constraints?
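As a concrete (and entirely made-up) example, the answers can be captured as data so that tests can assert against them; the class and values below are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class PerformanceRequirement:
    """One specific, testable performance requirement (values are hypothetical)."""
    request: str           # what is a request?
    threshold_ms: int      # what is the time threshold?
    percentile: int        # measured at which percentile?
    concurrent_users: int  # at what load?
    data_profile: str      # what kind of data will the app work with?
    max_servers: int       # resource constraints?

search_requirement = PerformanceRequirement(
    request="GET /products/search?q=...",
    threshold_ms=500,
    percentile=95,
    concurrent_users=1000,
    data_profile="catalog of roughly 1M products",
    max_servers=4,
)
```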
Testing
Tools
Applications like Visual Studio, along with some third-party plugins, provide the tools necessary for some very specific performance tests.
Performance Testing
Performance tests... test an application's performance: they measure responsiveness, throughput, and resource utilization while the load is held constant.
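A minimal sketch of that idea, holding the load fixed and measuring response times and throughput. The URL and request count are placeholders, and a real test would also record resource utilization:

```python
import statistics
import time
import urllib.request

URL = "http://localhost:5000/"   # placeholder endpoint
REQUESTS = 100                   # fixed, constant load

durations = []
start = time.perf_counter()
for _ in range(REQUESTS):
    t0 = time.perf_counter()
    with urllib.request.urlopen(URL) as response:
        response.read()
    durations.append(time.perf_counter() - t0)
elapsed = time.perf_counter() - start

print(f"avg response: {statistics.mean(durations) * 1000:.1f} ms")
print(f"p95 response: {statistics.quantiles(durations, n=20)[-1] * 1000:.1f} ms")
print(f"throughput:   {REQUESTS / elapsed:.1f} req/s")
```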
Load Testing
Load tests generally measure performance in terms of user load, requests/sec, and errors/sec. They should also consider resource utilization: how many servers there are, how those servers are being used, and so on. Load tests are a lot like performance tests, but they involve a number of simulated users all exercising the application at the same time. Additionally, a load test is generally built from a collection of different individual performance tests rather than the same test run over and over.
Think time is an estimate of the time a user spends on each page. In performance tests (especially load tests), think time is implemented as a delay between web requests. This is a useful tool, and the most realistic implementation of it is also the default, at least in Visual Studio: a normal distribution of think times centered on the think time captured when the test was recorded.
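Here is a minimal sketch of both ideas, simulated concurrent users plus normally distributed think time, using only the Python standard library. The URL, user count, and think-time parameters are all assumptions, and real tools such as Visual Studio's load tests do far more:

```python
import random
import threading
import time
import urllib.request
from urllib.error import URLError

URL = "http://localhost:5000/"    # placeholder endpoint
VIRTUAL_USERS = 25                # simulated concurrent users
DURATION_SECONDS = 60
THINK_TIME_MEAN = 3.0             # seconds, as if recorded during test creation
THINK_TIME_STDDEV = 1.0           # normal distribution around the recorded value

results = {"requests": 0, "errors": 0}
lock = threading.Lock()

def virtual_user(stop_at: float) -> None:
    """One simulated user: make a request, 'think', then repeat."""
    while time.monotonic() < stop_at:
        try:
            with urllib.request.urlopen(URL) as response:
                response.read()
            ok = True
        except URLError:
            ok = False
        with lock:
            results["requests"] += 1
            if not ok:
                results["errors"] += 1
        # Think time: normally distributed delay between requests.
        time.sleep(max(0.0, random.gauss(THINK_TIME_MEAN, THINK_TIME_STDDEV)))

stop_at = time.monotonic() + DURATION_SECONDS
threads = [threading.Thread(target=virtual_user, args=(stop_at,)) for _ in range(VIRTUAL_USERS)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(f"requests/sec: {results['requests'] / DURATION_SECONDS:.1f}")
print(f"errors/sec:   {results['errors'] / DURATION_SECONDS:.1f}")
```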
Two options for load pattern:
Constant Load - same number of users for the whole test
Step Load - increase the number of users over the course of the test (see the schedule sketch after this list)
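A step load schedule is easy to express as data. The sketch below (all numbers are arbitrary) yields the user count a test driver should be running at each point in time:

```python
# Step load: start with a few users and add more at a fixed interval.
INITIAL_USERS = 10
STEP_USERS = 10        # users added per step
STEP_DURATION = 30     # seconds per step
MAX_USERS = 100

def step_load_schedule():
    """Yield (elapsed_seconds, user_count) pairs for a step load pattern."""
    users = INITIAL_USERS
    elapsed = 0
    while users <= MAX_USERS:
        yield elapsed, users
        users += STEP_USERS
        elapsed += STEP_DURATION

for elapsed, users in step_load_schedule():
    print(f"t={elapsed:>3}s: {users} concurrent users")
```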
You can also configure load tests to run from multiple machines at once. This is something you really should do, especially for larger tests.
Stress Testing
Load testing, but you're trying to determine how the application responds at very high user loads. Essentially, you know the system will fail at that level of load, and you want to see how it fails.
Special Cases
Smoke Test - does the application work under normal circumstances?
Capacity Test - what, exactly, is the application's maximum load?
Endurance Test - how does the system function over an extended period of time (as it would have to in production)?
Analysis
When examining the results of your tests, one of the most useful tools at your disposal is a graph. Look at the graphs for your counters in order to identify bottlenecks. Constant values are not interesting; horizontal lines probably won't tell you much. However, values that increase (or decrease) as users are added and values that spike at different times are counters you should pay attention to, because they will help you optimize the system.
One key counter that can be really useful in bottleneck identification is the requests per second counter. When using a step load pattern in a load test, the requests per second should increase as the number of users increases. At some point, though, requests per second may plateau, and this is an indication of the application's highest possible requests per second, given its current configuration.
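Given the per-step results of a step load test, here is a rough sketch (the data and threshold are made up) of spotting that plateau programmatically:

```python
# Hypothetical step-load results: (concurrent users, observed requests/sec).
step_results = [(10, 120), (20, 235), (30, 340), (40, 430), (50, 455), (60, 460)]

PLATEAU_THRESHOLD = 0.05  # less than 5% RPS gain for a step counts as a plateau

max_rps = step_results[0][1]
for (prev_users, prev_rps), (users, rps) in zip(step_results, step_results[1:]):
    gain = (rps - prev_rps) / prev_rps
    max_rps = max(max_rps, rps)
    if gain < PLATEAU_THRESHOLD:
        print(f"RPS plateaus around {users} users at roughly {max_rps} req/s")
        break
else:
    print("No plateau observed; try stepping to higher user counts")
```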
Final Notes
- Premature optimization is bad
- Optimizing (when it's determined necessary) should be done one variable at a time, so you can test and see what changes actually achieved optimization
Resources
This post has notes from the following sources:
- Web Application Performance and Scalability Testing Pluralsight course by Steve Smith
- Load Testing and the Requests per Second Curve by Steve Smith
Thanks for reading! I hope you find this and other articles here at ilyanaDev helpful! Be sure to follow me on Twitter @ilyanaDev.