
How To: Tune Performance of Web Applications

J.D. Meier, Prashant Bansode, Carlos Farre, Scott Barber, Mark Tomlinson

Applies To

  • Performance Testing
  • Web Applications


This How To provides an iterative process to systematically identify, tune, and eliminate bottlenecks until your application meets its performance objectives.


  • Objectives
  • Overview
  • Summary of Steps
  • Step 1 – Identify Objectives
  • Step 2 – Establish a Baseline
  • Step 3 – Collect Data
  • Step 4 – Analyze Results
  • Step 5 – Configure
  • Step 6 – Retest and Measure
  • Resources


  • Learn the process of performance tuning for web applications.


Performance tuning is an iterative process that you use to identify and eliminate bottlenecks until your application meets its performance objectives. You start by establishing a baseline. Then you collect data, analyze the results, and make configuration changes based on the analysis. After each set of changes, you retest and measure the resulting data to verify that your application has moved closer to its performance objectives. The purpose is not to load-, stress-, or capacity-test your application, but to understand how the various tuning options and configuration settings affect your application.

The tuning process is an iterative process that consists of the following set of activities:
  • Establish a baseline. Ensure that you have a well-defined set of performance objectives, test plans, and baseline metrics.
  • Collect data. Simulate load and capture metrics.
  • Analyze results. Identify performance issues and bottlenecks.
  • Configure. Tune your application setup by applying new system, platform, or application configuration settings.
  • Retest and measure. Retest and measure the resulting data to verify that your configuration changes have been beneficial.
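The cycle above can be sketched in code. The following Python sketch is illustrative only; `measure`, `apply_next_change`, and the millisecond objective are hypothetical stand-ins for a real test harness and real performance goals:

```python
# A minimal sketch of the iterative tuning loop: baseline, then
# collect -> analyze -> configure -> retest until the objective is met.
# measure() and apply_next_change() are hypothetical stand-ins for a
# real load-test harness and a real configuration change.

def tune(measure, apply_next_change, objective_ms, max_iterations=10):
    """Repeat the tuning cycle until the objective is met or we give up."""
    history = [measure()]                 # establish a baseline first
    for _ in range(max_iterations):
        if history[-1] <= objective_ms:   # objective met: stop tuning
            return history
        apply_next_change()               # apply one change set at a time
        history.append(measure())         # retest and measure
    return history

# Example with a fake system whose response time improves per change:
readings = iter([900, 700, 450, 280])
current = {"value": next(readings)}
result = tune(lambda: current["value"],
              lambda: current.update(value=next(readings)),
              objective_ms=300)
print(result)  # [900, 700, 450, 280]
```

The history list preserves one measurement per iteration, which mirrors the baseline-versus-current comparisons described in the steps below.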

Summary of Steps

The following figure shows the basic tuning process.
Figure: The performance tuning process
The steps involved in the tuning process are as follows:
  • Step 1. Identify Objectives
  • Step 2. Establish a Baseline
  • Step 3. Collect Data
  • Step 4. Analyze Results
  • Step 5. Configure
  • Step 6. Retest and Measure

Step 1. Identify Objectives

There are two ways to develop the objectives for a tuning and optimization test. First, when you are responding to a known problem or bottleneck, the testing objective—to solve the existing problem—is self-evident; your job is to identify the root cause of the problem, fix it, and then retest your remedy to ensure that the bottleneck is truly repaired. The second way to formulate tuning objectives is to start with a “blank slate” and test the system to see what performance bottlenecks may (or may not) be present in its components. This approach is more common with applications that have never been promoted into production, often where very little performance testing was included in the project’s normal testing methodology.

Either way you get started, the end point is the same as it is for load testing: establish performance goals and objectives based on the key transactions that are most important or most critical to the system architecture and its transactional pathways.

Step 2. (Optional) Establish a Baseline

Before you start to tune your application, it can be beneficial to first establish a baseline, although this is not required.

To establish a baseline for your tuning, you need to ensure that the following are well defined:
  • Performance objectives
Your application’s performance objectives are usually measured in terms of response times, throughput (requests per second), and resource utilization levels. These measurements should include budgeted times for specific scenarios, together with resource utilization budgets such as the CPU cycles, memory, disk input/output (I/O), and network I/O allocated for your application. For more information about setting performance goals, see "Performance Best Practices at a Glance."
  • Test plans and test scripts
You need a test plan and a set of test scripts that you can use to apply load to your application. For more information about how to approach testing, see "Chapter 16 - Testing .NET Application Performance."
  • Baseline metrics
Make sure that you capture a baseline set of metrics for your system, platform, and application. Baseline metrics help you evaluate the impact of any changes made to your configuration during the performance-tuning process. For more information about metrics, see "Metrics" in "Chapter 16 - Testing .NET Application Performance."
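As an illustration of what a baseline might record, the following Python sketch summarizes a set of measured response times into a few metrics. The sample values and the simple nearest-rank percentile are invented for the example:

```python
# Hypothetical baseline capture: reduce a set of measured response
# times (in milliseconds) to the summary metrics you would record as
# the baseline for later comparisons.
import statistics

def summarize(samples_ms):
    samples = sorted(samples_ms)
    return {
        "count": len(samples),
        "avg_ms": round(statistics.mean(samples), 1),
        # crude nearest-rank 90th percentile, fine for a sketch
        "p90_ms": samples[int(0.9 * len(samples)) - 1],
        "max_ms": samples[-1],
    }

baseline = summarize([120, 135, 150, 110, 300, 140, 125, 160, 145, 130])
print(baseline)
# {'count': 10, 'avg_ms': 151.5, 'p90_ms': 160, 'max_ms': 300}
```

Recording percentiles alongside the average matters because a single outlier (the 300 ms sample here) inflates the mean while the p90 stays representative.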

Step 3. Collect Data

Use test tools such as Visual Studio .NET Test Edition to simulate load. You can use tools like System Monitor or Microsoft Operations Manager to capture performance counters.

When you run tests for the first time, make sure that you use the same version of the application that you used to establish your baseline metrics. For subsequent iterations of the tuning process, you test performance by using the same workload and test scripts but with the modified configuration settings applied.

Use a Constant Workload

For all iterations of the tuning process, make sure that you use the same test scripts and a constant workload. Doing so enables you to accurately measure the impact of any configuration changes that you have applied.

If you run short duration tests, make sure you include an appropriate warm-up time in your test scripts to ensure that your results are not skewed due to initial slow response times caused by just-in-time (JIT) compilation, cache population, and so on. Also make sure to run your tests for an adequate and realistic period of time.
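One simple way to handle warm-up is to discard an initial window of samples before computing metrics. A small illustrative sketch (the timings and warm-up count are invented):

```python
# Illustrative only: drop an initial warm-up window from captured
# timings so JIT compilation and cache population do not skew the
# steady-state results.
import statistics

def steady_state_avg(samples_ms, warmup_count):
    """Discard the first warmup_count samples, then compute the mean."""
    steady = samples_ms[warmup_count:]
    return round(statistics.mean(steady), 1)

# First two requests are slow due to JIT compilation / cold caches:
timings = [950, 600, 210, 200, 205, 198, 202]
print(steady_state_avg(timings, warmup_count=2))  # 203.0
```

Without the warm-up discard, the same data averages to roughly 366 ms, nearly double the steady-state figure, which is exactly the kind of skew the guidance above warns about.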

Format the Results

A typical test generates a vast amount of data in different locations, from different sources, and in different formats. For example, captured data might include system performance counters from all servers, Internet Information Services (IIS) log files from Web and/or application servers, Microsoft SQL Server™ metrics on the database server, and so on. You must collect the data and format it in preparation for the next step: analyzing the results.

You can format the data in such a way that you are able to map the cascading effect of changes in one part of your configuration across your application. Organizing the data into the categories described earlier (system, platform, and application) helps you analyze the application as a whole, rather than analyzing it in parts.

As an example of how configuration changes can have a cascading effect, changing the thread pool settings on the Web server might cause requests to be processed faster, which may in turn cause increased resource utilization (CPU, memory, and disk I/O) on your database server.
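A common way to make such cascading effects visible is to normalize counters from every tier into one timeline keyed by timestamp. The following Python sketch uses invented counter names and values:

```python
# Sketch: merge captured counters from several sources (web server,
# database server) into one timeline keyed by timestamp, so a change
# on one tier can be correlated with its effect on another.
from collections import defaultdict

# Hypothetical per-minute CPU samples from two tiers:
web_cpu = {"10:00": 45, "10:01": 62, "10:02": 80}
db_cpu  = {"10:00": 30, "10:01": 38, "10:02": 55}

timeline = defaultdict(dict)
for ts, value in web_cpu.items():
    timeline[ts]["web_cpu_pct"] = value
for ts, value in db_cpu.items():
    timeline[ts]["db_cpu_pct"] = value

for ts in sorted(timeline):
    print(ts, timeline[ts])
```

Once the data shares one time axis, the rise in web-tier throughput and the corresponding rise in database CPU show up side by side, rather than in two disconnected log files.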

Step 4. Analyze Results

In this step, you analyze the captured data to identify performance issues and bottlenecks. To identify the root cause of a problem, start tracing from where you first noticed the symptom, keeping in mind that the most obvious observation often is not the cause of the problem. When analyzing your data, consider the following points:
  • The data you collect is usually only an indicator of a problem, not its source. Indicators such as performance counters can give you directions to help isolate your debugging or troubleshooting process to target specific areas of functionality.
  • Intermittent spikes in your data, as shown by performance counters, may not be a serious concern. Where it makes sense, ignore such anomalies.
  • Make sure that your test results are not skewed by warm-up time. Also ensure that your test scripts run for a period of time before you start capturing metrics.
  • If the data you collect is not complete, your analysis is likely to be inaccurate. You sometimes need to retest and collect the missing information or use further analysis tools. For example, if your analysis of Common Language Runtime (CLR) performance counters indicates that a large number of generation 2 garbage collections are occurring, you should use an allocation profiler to profile the overall memory usage pattern for the application.
  • You should be able to identify and isolate the areas that need further tuning. This assumes that you have already optimized your code and design, and that only the configuration settings need tuning.
  • If you are currently in the process of performance tuning, you need to compare your current set of results with previous results or with your baseline performance metrics.
  • If, during your analysis, you identify several bottlenecks or performance issues, prioritize them and address those that are likely to have the biggest impact first. You can also prioritize this list on the basis of which bottleneck you encounter first when running a test.
  • Document your analysis. Write down your recommendations, including what you observed, where you observed it, and how you applied configuration changes to resolve the issue.
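Comparing a run against the baseline can be mechanized. The following Python sketch flags any metric that regressed beyond a tolerance; the metric names, values, and 5% tolerance are all invented for the example:

```python
# Hypothetical comparison of a tuning run against the baseline: flag
# any metric whose current value exceeds the baseline by more than a
# tolerance (here 5%). Lower is assumed better for every metric.
def find_regressions(baseline, current, tolerance=0.05):
    """Return {metric: (baseline, current)} for metrics that got worse."""
    regressions = {}
    for name, base in baseline.items():
        cur = current[name]
        if cur > base * (1 + tolerance):
            regressions[name] = (base, cur)
    return regressions

baseline = {"avg_ms": 150, "p90_ms": 220, "db_cpu_pct": 40}
current  = {"avg_ms": 130, "p90_ms": 240, "db_cpu_pct": 55}
print(find_regressions(baseline, current))
# {'p90_ms': (220, 240), 'db_cpu_pct': (40, 55)}
```

Note how the average improved while the p90 and database CPU regressed; this is the cascading-effect pattern described earlier, and a reason to compare more than one metric per run.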

Step 5. Configure

You tune your application setup by applying new system, platform, or application configuration settings. The analysis documentation from the previous step can contain several recommendations, so use the following guidelines when you apply configuration changes:
  • Apply one set of changes at a time. Address changes individually. Making multiple configuration changes can distort the results and can make it difficult to identify potential new performance issues. A single change may actually include a set of multiple configuration changes that need to be applied and evaluated as a single unit.
  • Fix issues in a prioritized order. Address the issues that are likely to provide maximum payoff. For example, instead of fine-tuning ASP.NET, you might achieve better initial results by creating an index on a database table that you identified as missing.
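Both guidelines can be combined into one loop: evaluate candidate change sets one at a time, in priority order, and keep a change only if retesting shows an improvement. A Python sketch with an entirely invented system model:

```python
# Sketch: apply candidate change sets one at a time, highest expected
# payoff first, keeping each change only if the retest improves the
# measured metric (lower is better). All names/numbers are invented.
def apply_in_priority_order(measure, change_sets):
    """change_sets: list of (name, apply_fn, revert_fn) tuples."""
    best = measure()                 # baseline before any change
    kept = []
    for name, apply_fn, revert_fn in change_sets:
        apply_fn()
        result = measure()
        if result < best:            # beneficial: keep this change
            best = result
            kept.append(name)
        else:                        # not beneficial: roll it back
            revert_fn()
    return kept, best

# Fake system: a missing index is the big win; the cache change hurts.
state = {"latency": 500, "index": False, "cache": False}

def measure():
    ms = state["latency"]
    if state["index"]:
        ms -= 200
    if state["cache"]:
        ms += 50
    return ms

changes = [
    ("add_db_index", lambda: state.update(index=True),
                     lambda: state.update(index=False)),
    ("enable_cache", lambda: state.update(cache=True),
                     lambda: state.update(cache=False)),
]
kept, best = apply_in_priority_order(measure, changes)
print(kept, best)  # ['add_db_index'] 300
```

Because only one change set is live at a time, the harmful cache change is detected and rolled back instead of masking, or being masked by, the index improvement.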

Step 6. Retest and Measure

Performance tuning is an iterative process. Having applied one set of changes, you retest and measure to see whether the configuration changes have been beneficial. Continue the process until your application meets its performance objectives or until you decide on an alternate course of action, such as code optimization or design changes.



Last edited Mar 16, 2007 at 3:47 AM by prashantbansode, version 7

