
How To: Conduct Performance Testing Using Automated Load Test Tools

Mark Tomlinson

Applies To

  • Performance Testing


This document describes an approach to conducting performance testing for your applications using an automated load testing tool. Performance testing is valuable for assessing the quality of an application, identifying serious bottlenecks, and improving the scalability of the underlying systems that support it. You will also find that the results from performance testing can help you estimate the hardware configuration required to support the application before you go live to production.


Contents

  • Objectives
  • Overview
  • Summary of Steps
  • Step 1. Performance Test Planning
  • Step 2. Developing Performance Tests
  • Step 3. Build Testing Environment
  • Step 4. Execute Tests
  • Step 5. Analyze Test Results
  • Step 6. Create Test Reports
  • Resources


Objectives

  • Become familiar with performance testing fundamentals.
  • Learn a basic approach to performance testing Web Applications.


Overview

Performance testing can be described as identifying how an application uses system resources when a component or the application as a whole is placed under some type of load. To accomplish this, you will typically use a performance testing tool that emulates increasing quantities of end-user activity against your application. These test tools require you to develop multiple individual performance test scripts to cover the most important parts of the application. By examining your application's behavior under emulated load conditions, you can typically identify whether performance is good or bad, and whether it will meet the specified performance requirements.

The basic approach to conducting performance testing starts with test planning and preparation, followed by test execution and performance analysis. Pay close attention during the planning and preparation steps to ensure you have everything ready when it comes time to execute the tests against your application. These steps can be used for projects of any size, from the smallest unit-level test to a large-scale production simulation and capacity-planning initiative; the same overall approach and activities apply equally across project scopes.
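To make the idea of emulated load concrete, here is a minimal Python sketch of what a load testing tool does internally: it runs many virtual users concurrently and records a response time for each request. All names here are illustrative, and send_request is a stand-in for a real call to your application (for example, an HTTP request).

```python
import time
from concurrent.futures import ThreadPoolExecutor

def send_request():
    """Stand-in for a real application call; returns the observed response time."""
    start = time.perf_counter()
    time.sleep(0.01)            # simulated server processing time
    return time.perf_counter() - start

def run_load(virtual_users, requests_per_user):
    """Emulate `virtual_users` concurrent users, each issuing several requests."""
    with ThreadPoolExecutor(max_workers=virtual_users) as pool:
        futures = [pool.submit(send_request)
                   for _ in range(virtual_users * requests_per_user)]
        return [f.result() for f in futures]

timings = run_load(virtual_users=5, requests_per_user=4)
print(f"{len(timings)} requests, avg {sum(timings)/len(timings):.4f}s")
```

A real tool adds ramp-up control, think times and result storage on top of this same basic loop.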

The most common reasons for conducting performance testing can be summarized as follows:
  • To ensure fast response times for end-users
  • To ensure the system can handle production traffic without crashing
  • To ensure the application does not over-utilize system resources such as CPU or RAM
  • To ensure the system can handle increasing numbers of users over time
  • To ensure the application is free of bottlenecks and artificial limits
  • To determine how much hardware, and what kind of hardware, to use
  • To ensure different system configurations or architectures will work in production
<Insert reference to Types of Performance Testing>


Here’s a list of the documented information commonly required to start performance testing:
  • Performance test plans, techniques, tools and strategies
  • Performance Requirements from Application Specification
  • Application Scenarios and Use Case Descriptions
  • System Architecture or Diagram
  • Workload Characteristics (estimated or from production measures)
  • Key Performance Metrics


Here’s a list of the common output items that result from conducting performance testing:
  • Measurements of performance of the application at various load levels
  • Pass/fail results for application performance or system capacity
  • Updated test plans that can be reused
  • Bottleneck suspects deserving additional analysis
  • Current operating capacity and future estimates for maximum load


Summary of Steps

  • Step 1. Performance Test Planning
  • Step 2. Developing Performance Tests
  • Step 3. Build Testing Environment
  • Step 4. Execute Tests
  • Step 5. Analyze Test Results
  • Step 6. Create Test Reports

Step 1. Performance Test Planning

Just as with any project, your Performance Test project should begin with a Performance Test Plan. The test plan is a document that captures all the information required to conduct a performance test. Creating a test plan enables you to make key decisions and is critical to the success of the whole testing effort. Take extra time to think critically about each part of the test plan, and be as specific as possible.

Here are the items you should consider including in a performance test plan:
  • Introduction – write a brief overview of what the application is, what it does, why you are testing the application’s performance.
  • Features to be tested – list each component to be tested, which could be individual web pages, database stored procedures, web services or method calls.
  • Item pass/fail criteria – detail the critical performance objectives for each item in the list of features to be tested and be as specific as possible about response time, throughput or system resources.
  • Testing tasks – create a list of the test scripts that must be written, and also the other tasks required to conduct the test like installing hardware, configuring a test network and loading test data.
  • Workload Profile – create a table or matrix that consists of an aggregate mix of the activity or operations to be tested simultaneously. Identify the workload associated with each of the identified key user scenarios.
  • Environmental Needs – list the hardware test systems required to conduct the testing and, if you can, create a diagram to illustrate the system architecture. Be sure to include all components needed, including test tool hardware, network emulators or SAN storage.
  • Schedule – put together a sequence and schedule for the testing tasks, and detail the dependencies between each task on the schedule.
  • Risks and contingencies – list the critical items that are required to conduct your test, such as test data being imported and a test network being configured. Also document the risks if any tasks are incomplete, such as not having enough time to write all the scripts, or a potential electricity outage in the test lab.
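As an illustration of the Workload Profile item above, a profile can be expressed as a simple table of scenarios and their share of the total load. The following sketch uses hypothetical scenario names, percentages and a 500-user target, and shows how the mix translates into virtual-user counts.

```python
# Hypothetical workload profile for a web application: each scenario's
# share of the concurrent activity at the target load level.
workload_profile = {
    "browse_catalog":  0.50,   # half the users are browsing
    "search_products": 0.30,
    "checkout_order":  0.15,
    "admin_reports":   0.05,
}
TARGET_USERS = 500

def users_per_scenario(profile, total_users):
    """Translate the percentage mix into virtual-user counts per scenario."""
    return {name: round(total_users * share) for name, share in profile.items()}

assert abs(sum(workload_profile.values()) - 1.0) < 1e-9  # mix must total 100%
print(users_per_scenario(workload_profile, TARGET_USERS))
```

Keeping the profile as data like this makes it easy to update the plan and feed the same numbers into the test tool later.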

It is important to capture your performance test plan in a way that can be updated and shared throughout the entire testing effort, which may not necessarily be a single document. Collaboration tools are a great way to share information and decisions as you work on the plan. Make sure the test plan is available as you work on the other steps in this approach.

<Insert reference to Identify Performance Testing Objectives>
< How To: Quantify End-User Response Time Goals>
<Insert reference to Performance Test Planning Explained>
<Insert reference to Model the Workload for Web Applications>

Step 2. Developing Performance Tests

Using the information from the test plan, you should start developing the performance tests using a test tool or utility. For each component to be tested, write a test that makes calls to or generates load for that component. Also include the test criteria for each component where you plan to measure response time or data throughput.

Here are some of the steps used to develop a test:
  • Record or program the requests to your application or component
  • Add necessary settings to measure response time and/or throughput
  • Randomize the data used as input to the application
  • Configure error-handling and logging for the test script calls
  • Write test measurements to a log or in the test tool
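The steps above can be sketched in Python as a single test iteration with randomized input data, error handling and response-time logging. This is a simplified, self-contained illustration; call_application and the test account pool are hypothetical stand-ins for your real application calls and test data.

```python
import random
import time

TEST_ACCOUNTS = ["alice", "bob", "carol"]    # hypothetical test data pool

def call_application(user):
    """Stand-in for the real request (HTTP call, stored procedure, etc.)."""
    time.sleep(0.005)
    if user == "carol":                      # simulate an application error
        raise RuntimeError("login failed")
    return "OK"

def run_test_iteration(log):
    user = random.choice(TEST_ACCOUNTS)      # randomize the input data
    start = time.perf_counter()
    try:
        call_application(user)
        status = "pass"
    except RuntimeError as exc:              # error handling: record, don't abort
        status = f"fail: {exc}"
    elapsed = time.perf_counter() - start    # response-time measurement
    log.append({"user": user, "status": status, "elapsed_s": elapsed})

log = []
for _ in range(10):
    run_test_iteration(log)
print(log[0])
```

In a real tool the recording, parameterization and logging steps are built in; the structure of each iteration is still essentially this.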

Many of the test tools you may be using will have built-in features to capture response time and throughput measures from the tests you write. Be sure to familiarize yourself with the testing tool you are using, especially for end-user time measurements, data parameter settings and end-user simulation configuration.

In addition to developing test scripts, you will also configure the testing tool with the workload profile you created in the test plan. Most test tools enable you to combine several scripts together and execute them simultaneously, so that you can generate a comprehensive load on your application. Here are the steps for creating the load test project or scenario:
  • Add individual tests to the test project, according to the workload profile
  • Configure a load pattern to control how the load is gradually increased
  • Specify scenario settings like network and browser simulation, or mix of tests to run
  • Add load generators to the project to increase the tool’s ability to add load
  • Configure performance counters to capture measurements from your application
  • Specify where the tool will save the results from the load test
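To illustrate these scenario settings, here is a hypothetical configuration expressed as data, together with a small function showing how a step load pattern gradually increases the number of virtual users. The setting names and values are illustrative, not taken from any specific tool.

```python
# Hypothetical load-test scenario definition, mirroring the settings a
# load testing tool typically exposes: the test mix, a ramp-up pattern,
# simulation options and a results location.
scenario = {
    "test_mix": {"browse": 0.70, "checkout": 0.30},
    "load_pattern": {"initial_users": 10, "step_users": 10,
                     "step_interval_s": 60, "max_users": 100},
    "simulation": {"network": "LAN", "browser": "IE7", "think_time_s": 2},
    "results_store": "results/run_001.db",
}

def users_at(elapsed_s, pattern):
    """Number of virtual users running at a point in the step-load ramp-up."""
    steps = elapsed_s // pattern["step_interval_s"]
    return min(pattern["initial_users"] + steps * pattern["step_users"],
               pattern["max_users"])

print(users_at(0, scenario["load_pattern"]))     # 10 users at the start
print(users_at(300, scenario["load_pattern"]))   # 60 users after 5 minutes
print(users_at(3600, scenario["load_pattern"]))  # capped at 100 users
```

A gradual step pattern like this lets you observe how each resource responds as load increases, rather than hitting the system with peak load immediately.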

There are four basic metrics that should be included in performance monitoring:
  • Processor Resource Utilization (%, kernel, user)
  • Memory Resource Utilization (% of total, bytes in use)
  • Network Resource Utilization (% bandwidth, bytes)
  • Disk Resource Utilization (bytes written, read)
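A monitoring tool samples counters like these at a fixed interval throughout the test run. The following sketch shows that sampling loop in Python; the readers here are dummy stand-ins, where a real implementation would wrap OS counters (for example Windows perfmon counters, /proc on Linux, or a library such as psutil).

```python
import time

def sample_metrics(readers, samples, interval_s=0.01):
    """Poll each metric reader on a fixed interval, as a monitor would.
    `readers` maps a counter name to a zero-argument callable returning
    the current value of that counter."""
    log = []
    for _ in range(samples):
        log.append({name: read() for name, read in readers.items()})
        time.sleep(interval_s)
    return log

# Dummy readers standing in for the four basic resource counters.
readers = {
    "cpu_percent":        lambda: 42.0,
    "mem_bytes_used":     lambda: 1_200_000_000,
    "net_bytes_sent":     lambda: 3_500_000,
    "disk_bytes_written": lambda: 9_000_000,
}
log = sample_metrics(readers, samples=3)
print(len(log), sorted(log[0]))
```

Storing each sample with its counter names keeps the data ready for the correlation work in Step 5.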

<Insert reference to Step Through Creating a Load Test In VS 2005>
<Insert reference to How To - Create a Load Test Plug-in to Control the Number of Test Iterations During a Load Test in VS.NET 2005>

Step 3. Build Testing Environment

From the test plan you have documented the existing or planned architecture for your application. This should include the hardware, software and network architecture required to support the application. To ensure accurate test results, the test environment should be configured to match the production environment as closely as possible.

The following tasks may be included when building the test environment:
  • Install and configure test hardware, including servers and storage
  • Install and configure test network with the proper routing and simulators
  • Install the testing tool components, including load agents and controllers
  • Enable and configure performance monitoring software, and diagnostic tools
  • Install the supporting software required for the application
  • Install and configure the application you are testing
  • Import application test data used for the test execution and scripts

Step 4. Execute Tests

Test execution is the most exciting part of conducting a performance test. Coordination and change control are key to doing test execution well. It is important to coordinate the beginning and ending of each test run and to keep a comprehensive test log. Change control should be used to manage all changes between test runs, and communicating each change to everyone involved is also very important.

Here are some general tips to help you get started:
  • If you will be repeating the test, you should have a test data restore point before you begin testing
  • The first run should be a smoke test run, where you ensure that the load testing tools and test environment systems are working properly.
  • Don’t use your smoke test results as an official or formal set of results from your testing.
  • Make sure that the test results and metrics are being captured correctly
  • Capture a performance baseline before you continue testing and tuning
  • Designate someone as the test coordinator to initiate the beginning and ending of each test cycle
  • Create an ongoing test execution log capturing notes and observations for each run
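The smoke-test tip above can be expressed as a simple gate: run a handful of requests and only proceed to formal test runs if all of them succeed. This is an illustrative sketch; run_single_request stands in for one scripted request from your test tool.

```python
def smoke_test(run_single_request, attempts=5):
    """Run a few requests before formal testing to verify the tool and
    environment work; any failure means fix the setup, not the app."""
    failures = 0
    for _ in range(attempts):
        try:
            run_single_request()
        except Exception:
            failures += 1
    return failures == 0

# Stand-in request functions for illustration.
healthy = lambda: None                      # request that always succeeds
def broken():
    raise ConnectionError("load agent cannot reach the server")

print(smoke_test(healthy))                  # environment ready: True
print(smoke_test(broken))                   # environment broken: False
```

Remember that even a passing smoke run is not a formal result; its purpose is only to qualify the environment before the baseline run.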

Test execution will continue and repeat according to the performance testing objectives listed in the test plan. Test execution phases may include the following types of test runs:
  • Capturing a performance baseline
  • Fixing application errors or debugging
  • Stressing the application to maximum capacity or user load
  • Measuring application response-time at a specified load
  • Validating application tuning or optimizations
  • Evaluating effect of application failover and recovery
  • Measuring effects of different system configurations

At the end of each test run you should gather a quick summary of what happened during the test and add these comments to the test log. This can include machine failures, application exceptions and errors, network problems or exhausting disk space or logs. When you complete your final test run, be sure to save all the test results and performance logs before you tear down the test environment.
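As a sketch of how a run's results might be summarized against a pass/fail objective from the test plan, the following computes a nearest-rank 95th-percentile response time and compares it to a response-time goal. The numbers are simulated for illustration.

```python
def percentile(values, pct):
    """Nearest-rank percentile, commonly used for response-time objectives."""
    ordered = sorted(values)
    index = max(0, int(round(pct / 100 * len(ordered))) - 1)
    return ordered[index]

def summarize_run(response_times_s, objective_s, pct=95):
    """Compare a run's percentile response time to the planned objective."""
    p = percentile(response_times_s, pct)
    return {"p95_s": p, "objective_s": objective_s,
            "result": "pass" if p <= objective_s else "fail"}

# 100 simulated response times: mostly fast, with a slow tail.
times = [0.2] * 90 + [1.5] * 10
print(summarize_run(times, objective_s=1.0))
```

Percentiles are usually a better basis for pass/fail criteria than averages, because a small number of very slow requests can hide behind a healthy-looking mean.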

<Insert reference to Load Test Web Applications>
<Insert reference to Transactional Stress Test in Web Applications>
<Insert reference to Tune Performance of Web Apps>

Step 5. Analyze Test Results

Analyzing the results to find performance bottlenecks can be performed between each test run, or after all the test execution has been completed. This requires training and experience with graphing the performance measurements for system resource utilization and the ability to correlate the graph data with end-user measurements. The testing tool will typically have capabilities for displaying all these results in an organized way.

Using the results from the tool, you should produce the following graphs:
  • Test tool # of users running (total)
  • Test tool # of tests or requests (tests per second)
  • Test tool response times (seconds, or milliseconds)
  • Test tool throughput (kb/sec, total and for each vuser)
  • Application Processor Resource Utilization (%, kernel, user)
  • Application Memory Resource Utilization (% of total, bytes in use)
  • Application Network Resource Utilization (% bandwidth, bytes)
  • Application Disk Resource Utilization (bytes written, read)
  • Application Metrics (requests/sec, # connections, queuing)

You must correlate the data between the graphs, looking for points during the test where the results are similar or dissimilar. For example, you may observe slower end-user response times at the same time you see application requests queuing up on the server. The end goal is to determine the root cause of the queuing, which could be a slow or blocking transaction in the back-end database, or a system that has maxed out its CPU or disk throughput.
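One simple way to quantify this kind of relationship is the Pearson correlation coefficient between two metric series sampled at the same points in time. The sketch below uses simulated samples in which response time climbs as the request queue grows; a coefficient near 1.0 suggests the two metrics move together and the pair deserves deeper root-cause analysis.

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two metric series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Simulated samples: response time climbs as the request queue grows.
response_time_s = [0.2, 0.3, 0.5, 0.9, 1.6]
queued_requests = [1, 2, 5, 11, 20]
r = pearson(response_time_s, queued_requests)
print(f"correlation: {r:.2f}")
```

Correlation only points at suspects; confirming the root cause still requires drilling into the database, CPU or disk behavior at those same moments in the test.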

For tuning and optimization projects, you will commonly do this analysis quickly between each test run (as in Step 4). As you find more opportunities for code optimization or system tuning, you will analyze the results, implement the optimizations and retest to determine the new results.

Step 6. Create Test Reports

Once you have completed all the test execution and analysis of the performance results, you should prepare a report that summarizes and details the entire performance testing effort. This report should be comprehensive and communicate a complete understanding of the test results without requiring additional explanation from the testing team. Save all the graphs and reports used in your analysis and include them in the report as evidence for your recommendations and findings.

Good performance test reports include the following components:
  • An overview of how the testing project was conducted
  • Recommendations and lessons learned from the testing
  • A listing of each test objective from the test plan, with whether the application passed or failed each one, and why
  • In-depth analysis of each bottleneck found and how it was resolved
  • A diagram of the test environment, as updated after test execution
  • An appendix of all the graphs used in performance analysis

Additional Resources


Last edited Feb 7, 2007 at 4:06 AM by mycodeplexuser, version 4


DouglasBrown Mar 6, 2007 at 3:13 PM 
Hi, I think that there should be a mention here of 'soak' testing an application. For example, running the average load through the application for a 25-hour-plus period and then reviewing the statistics to see if performance decreases over time. This can reveal memory leaks or blocking of resources over time.