
How To: Programmatically Set Think Time Between Test Iterations in Visual Studio Team System

J. D. Meier, Prashant Bansode, Carlos Farre, Mark Tomlinson, Scott Barber

Applies to

  • Performance Testing
  • Microsoft® Visual Studio® Team System

Summary

This How To explains how to use a Load Test plug-in in Visual Studio Team System to set the pacing, or “think time,” between iterations of a Web test. Setting this value programmatically ensures a steady rate of transactions. For a load test, you can set pacing by adjusting the Think Time Between Test Iterations setting, which configures the scenario globally. To maintain a steady transaction rate, however, you need to vary your pacing conditionally for each iteration cycle.

Contents

  • Objectives
  • Overview
  • Summary of Steps
  • Step 1 – Create a Load Test Containing One or More Web Tests in a Test Mix
  • Step 2 – Define the Test Iteration Delay Time
  • Step 3 – Create a Load Test Plug-In Class
  • Step 4 – Add the New Plug-In Class to the Test Project and the Load Test
  • Step 5 – Add Context Parameters
  • Step 6 – Execute a Smoke Test to Validate the Pacing

Objectives

  • Learn how to use a Load Test plug-in to set the pacing.
  • Learn how to delay the iterations of a particular Web test in a load test that is designed to simulate real end users based on business requirements.

Overview

In the context of Visual Studio Team System, think time is the time delay that occurs between HTTP requests in the same test iteration (session). Iteration think time, also known as pacing, is the time delay between different test iterations: when a Web test completes, the start of its next iteration is delayed by this amount. You can set think time as a global setting on the load test, based on the requirements of the scenario being tested. You can also design a test that introduces iteration pacing based on an execution-time threshold: at run time, the iteration think time is the difference between the threshold and the actual test execution time, applied whenever the execution time is less than the threshold. This allows time for the system to respond while the number of users on the system increases, thereby controlling the total load during runtime execution and keeping transaction throughput consistent. Consider the following key points when using adaptive pacing during the execution of a load test:
  • Do not use a variable iteration pacing technique in cases where users would be likely to abandon the application if it slows down. This technique is only appropriate for situations where abandonment is not a viable option for the users. Examples might be a corporate sales order application, where employees are assigned production goals based on the number of orders processed per day, or a Helpdesk system that needs to process a certain number of calls per hour.
  • Iteration pacing does not replace think time between requests; both should be used in load testing.
  • The idea behind adaptively calculated iteration pacing is to enable the test script to maintain an accurate emulation of the business process, while adapting to the system’s dynamic response times.
  • System response time is only relevant when the system fails to support the business process demand. Until then, system response time is not the primary factor governing the business process. End users are tolerant of slower response times, up to the point where the system is so slow that it prevents them from doing their work.
  • When a controlled transaction rate that simulates business goals is desired, define the pacing delay as an interval greater than the test iteration execution time. For example, a Help desk system might be required to handle one call in at most 10 minutes; for test iterations that take less than 10 minutes, the test waits for the difference between the time limit (10 minutes) and the actual test iteration execution time.

The following table presents example pacing criteria for a load test with a test mix of three Web tests (Browse, Search, and Place Order).

User Scenario | Test Execution Time (Seconds) | Maximum Test Time Meeting Business Requirements (Seconds) | Iteration Pacing Delay (Iteration Think Time) (Seconds)
Browse | 300 | 600 | 300
Search | 200 | 400 | 200
Place Order | 400 | 800 | 400


Browse iteration delay time = 600 (max time) – 300 (test execution time) = 300 seconds
Search iteration delay time = 400 (max time) – 200 (test execution time) = 200 seconds
Place Order iteration delay time = 800 (max time) – 400 (test execution time) = 400 seconds
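
The same rule expressed in code (a minimal sketch only; the method and variable names are illustrative and are not part of the Visual Studio API, and the example values come from the table above):

// Returns the pacing delay in milliseconds: the business-requirement
// threshold minus the measured execution time, clamped at zero so the
// delay is never negative. (Assumes using System; for Math.Max.)
static int PacingDelayMs(int maxDurationMs, int executionTimeMs)
{
    return Math.Max(0, maxDurationMs - executionTimeMs);
}

// PacingDelayMs(600000, 300000) == 300000   // Browse
// PacingDelayMs(400000, 200000) == 200000   // Search
// PacingDelayMs(800000, 400000) == 400000   // Place Order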

Summary of Steps

  • Step 1. Create a Load Test Containing One or More Web Tests in a Test Mix
  • Step 2. Define the Test Iteration Delay Time
  • Step 3. Create a Load Test Plug-In Class
  • Step 4. Add the New Load Test Plug-In Class to the Test Project and the Load Test
  • Step 5. Add Context Parameters
  • Step 6. Execute a Smoke Test to Validate the Pacing

Step 1 – Create a Load Test Containing One or More Web Tests in a Test Mix

First, create a test project that will contain your load test and Web tests. Next, create individual Web tests that reflect key business user scenarios. Finally, create a load test to which you will add your Web tests. While creating the load test, you can set many run-time properties to generate the desired load simulation; for example, you can specify the load pattern, browser, and network types and add the performance counters to be monitored.
For more information on creating load tests, refer to “How To: Create a Load Test Using VS.NET 2005” at << to link to how to >>

Step 2 – Define the Test Iteration Delay Time

Start by describing the type and duration of end user activity, qualifying that time as a business requirement.

For example, if you were managing a Help desk call center where each employee is expected to answer a minimum of six phone calls per hour, the call center computer system should have response times that are fast enough to support the processing of a single Help desk call within 10 minutes. If the average call involves 10 steps, the system and the end user together must complete each step in no more than 1 minute. In this scenario, the time budget must include the amount of time it takes an end user to complete activities such as talking with a customer on the phone or filling in forms. If the combined end-user activity time and system response time exceeds one minute between steps, the Help desk employees will fall behind their goal of processing six calls per hour.
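
To restate the arithmetic behind this requirement:

    6 calls per hour   =>  3600 seconds / 6  = 600 seconds per call (10 minutes)
    10 steps per call  =>  600 seconds / 10  = 60 seconds per step
                           (end-user activity time + system response time)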

The matrix below illustrates the relationship between call frequency and the system performance required to support the call center employees.

End User Quantity | End-User Activity Time (per step) | System Response Time (per step) | Total Call Time (for 10 steps)
5 | 20 seconds | 5 seconds | 250 seconds
10 | 20 seconds | 10 seconds | 300 seconds
20 | 20 seconds | 20 seconds | 400 seconds
25 | 20 seconds | 45 seconds | 650 seconds


As the table shows, when the end user activity duration is constant at 20 seconds per step, system response time degrades as more and more users are added to the system. When the number of users on the system reached a maximum of 25, the system took 45 seconds to save data and move on to the next step in the call process. The resultant total call time was 650 seconds. Typically, this is the point at which employees start to complain that the system is too slow for them to meet their target goal of six calls per hour, per employee.

If you chose to set a constant think time between iterations, the test throughput would be skewed at the beginning of test execution and would continue to slow down as the number of end users increases. The following table shows that the test simulation starts out simulating more than twice the expected call volume normally observed in the real world. As the system gets slower, the test also slows down to the point where 25 users are processing less than the required call volume observed in the real world.

End User Quantity | Total Call Time (for 10 steps) | Test Iteration Time (with 30-second constant pacing) | Test Throughput: Total Calls per Hour
5 | 250 seconds | 280 seconds | 12.8
10 | 300 seconds | 330 seconds | 10.9
20 | 400 seconds | 430 seconds | 8.3
25 | 650 seconds | 680 seconds | 5.2


The difficulty here is that the initial measurements generate too many transactions while the later measurements generate too few, which makes it increasingly difficult to interpret the actual results. One solution is to apply a programmatic think time between iterations in order to keep the test throughput constant during ramp-up and maintain a steady transaction rate for the test. In this situation you calculate the duration of end user activity, including the system response time, and then subtract that time from the time that corresponds to the desired throughput goal. In the previous example of the Help desk call center, the throughput should be six calls per hour, or one call completed every 10 minutes (600 seconds). The table below illustrates how the calculation is performed.

End User Quantity | Total Call Time (for 10 steps) | Iteration Delay Calculation | Test Iteration Delay Time | Test Throughput: Total Calls per Hour
5 | 250 seconds | (600 – 250) | 350 seconds | 6
10 | 300 seconds | (600 – 300) | 300 seconds | 6
20 | 400 seconds | (600 – 400) | 200 seconds | 6
23 | 600 seconds | (600 – 600) | 0 seconds | 6
25 | 650 seconds | (600 – 650) | -50 seconds | 5.2


The first row shows five concurrent users executing the test and a total call time of 250 seconds, which includes both end-user activity and the system response time. The third column displays the calculation for the iteration delay time, which subtracts the total call time of 250 seconds from the desired per-call time of 600 seconds. The test must therefore wait exactly 350 seconds at the end of that test iteration before beginning the next iteration.
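
A short sketch of this calculation in code (illustrative only; the method name and parameters are not part of the plug-in API, they simply convert a throughput goal into the delay shown in the table):

using System;

// Converts a throughput goal (calls per hour) and a measured total call time
// into the iteration delay shown in the table above.
static TimeSpan IterationDelay(int callsPerHour, TimeSpan totalCallTime)
{
    TimeSpan target = TimeSpan.FromSeconds(3600.0 / callsPerHour); // 6 calls/hour -> 600 s
    TimeSpan delay = target - totalCallTime;
    return delay > TimeSpan.Zero ? delay : TimeSpan.Zero;          // never a negative delay
}

// IterationDelay(6, TimeSpan.FromSeconds(250)) == 350 seconds
// IterationDelay(6, TimeSpan.FromSeconds(650)) == TimeSpan.Zero (requirement already missed)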

Step 3 – Create a Load Test Plug-In Class

In this step, you create a plug-in class named PaceLoadTestPlugin for your load test project. The Load Test plug-in class is a point of extensibility that allows code to be executed at various stages of a test iteration. The plug-in introduces the delay at the end of each test iteration, based on the maximum test execution time that you have determined will meet business requirements. The iteration delay is configured separately for each Web test in the test mix of the load test. The process of writing a Load Test plug-in can be summarized as follows:
  • Create a plug-in class.
  • Implement the Initialize method.
  • Implement the callback function.

Creating a Load Test Plug-in Class

  • For information on how to create a Load Test plug-in class, see Step 2 in “How To - Create a Load Test Plug-In Using Visual Studio .NET 2005” at <<Add url>>.
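
If that How To is not at hand, the basic shape of the class is shown below (a minimal skeleton only, assuming a reference to the Microsoft.VisualStudio.TestTools.LoadTesting assembly; the class is filled in over the rest of this step):

using Microsoft.VisualStudio.TestTools.LoadTesting;

namespace LoadPacePlugin
{
    // ILoadTestPlugin defines a single method, Initialize, which the load
    // test engine calls once before the load test starts.
    public class PaceLoadTestPlugin : ILoadTestPlugin
    {
        public void Initialize(LoadTest loadTest)
        {
            // Event subscriptions are added here (see below).
        }
    }
}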

Implementing the Initialize Method

Implement the Initialize method in your class. To do so, add the following code to your class:

public void Initialize(LoadTest loadTest)
{
    // mLoadTest is a private LoadTest field declared on the plug-in class.
    mLoadTest = loadTest;
    mLoadTest.TestFinished +=
        new EventHandler<TestFinishedEventArgs>(mLoadTestTestFinished);
}


More Information
Implement the Initialize method of the ILoadTestPlugin interface, which takes the LoadTest object as a parameter. Initialize is called once, before the load test starts. It stores the LoadTest passed in as mLoadTest and subscribes the callback function mLoadTestTestFinished to the TestFinished event, so the callback executes each time a test iteration completes.

Implementing the Callback Function

Implement the callback function as follows:

void mLoadTestTestFinished(object sender, TestFinishedEventArgs e)
{
    // The test name is used as the key of a context parameter;
    // if no matching parameter exists, no pacing is applied.
    // For each Web test to be paced, create a context parameter with
    // Name = <Web test name> and Value = <threshold in milliseconds>.
    if (mLoadTest.Context.ContainsKey(e.TestName))
    {
        // The duration of this particular test iteration
        TimeSpan duration = e.Result.Duration;
        int totalTime = (int)duration.TotalMilliseconds;

        // The threshold (maximum allowed duration) of the test
        int testMaxDuration =
            int.Parse((string)mLoadTest.Context[e.TestName]);

        if (testMaxDuration > totalTime)
        {
            // Conditional wait: the iteration finished faster than the
            // maximum duration dictated by the performance requirements,
            // so sleep for the difference.
            Thread.Sleep(testMaxDuration - totalTime);
        }
        else if (testMaxDuration < totalTime)
        {
            // The iteration exceeded the requirement: report an error
            // (shown in the Visual Studio UI) and start the next
            // iteration right away.
            throw new ApplicationException(
                "Web test " + e.TestName +
                " exceeded performance requirements of (milliseconds) " +
                testMaxDuration.ToString());
        }
    }
}

More Information
Every time a test iteration completes, the TestFinished event is raised and handled by this callback function. The callback checks whether a context parameter exists with the name of the test whose pacing is to be controlled: mLoadTest.Context.ContainsKey(e.TestName). If the context parameter does not exist, the test continues without pacing. Because the name of the test is passed to the callback in the TestFinishedEventArgs parameter, whenever any of the Web tests finishes an iteration, the callback can check whether think time between iterations applies to that Web test.

The test duration is read from the TestFinishedEventArgs parameter as e.Result.Duration and stored as an integer number of milliseconds; this represents the total execution time of the test, including think times between requests. The maximum duration of the test is read from the value of the context parameter: int testMaxDuration = int.Parse((string)mLoadTest.Context[e.TestName]). This is the maximum allowable duration of the test as specified in the associated business requirement. If the maximum duration is greater than the total execution time, the test waits for the difference between the two. If the total execution time is greater than the maximum duration, the plug-in throws an exception that is displayed in the Visual Studio Team System UI.

Example Load Testing Plug-In Class

The following is the complete code sample for the Load Test plug-in class. You can simply copy and paste this code for convenience.

using System;
using System.Collections.Generic;
using System.Text;
using System.Threading;
using Microsoft.VisualStudio.TestTools.LoadTesting;

namespace LoadPacePlugin
{
    public class PaceLoadTestPlugin : ILoadTestPlugin
    {
        private LoadTest mLoadTest;

        // Called once before the load test starts: store the LoadTest
        // and subscribe the TestFinished handler.
        public void Initialize(LoadTest loadTest)
        {
            mLoadTest = loadTest;
            mLoadTest.TestFinished +=
                new EventHandler<TestFinishedEventArgs>(mLoadTestTestFinished);
        }

        void mLoadTestTestFinished(object sender, TestFinishedEventArgs e)
        {
            // The test name is used as the key of a context parameter;
            // if no matching parameter exists, no pacing is applied.
            // For each Web test to be paced, create a context parameter
            // with Name = <Web test name> and Value = <threshold in ms>.
            if (mLoadTest.Context.ContainsKey(e.TestName))
            {
                // The duration of this particular test iteration
                TimeSpan duration = e.Result.Duration;
                int totalTime = (int)duration.TotalMilliseconds;

                // The threshold (maximum allowed duration) of the test
                int testMaxDuration =
                    int.Parse((string)mLoadTest.Context[e.TestName]);

                if (testMaxDuration > totalTime)
                {
                    // Conditional wait: the iteration finished faster than
                    // the maximum duration dictated by the performance
                    // requirements, so sleep for the difference.
                    Thread.Sleep(testMaxDuration - totalTime);
                }
                else if (testMaxDuration < totalTime)
                {
                    // The iteration exceeded the requirement: report an
                    // error (shown in the UI) and start the next iteration
                    // right away.
                    throw new ApplicationException(
                        "Web test " + e.TestName +
                        " exceeded performance requirements of (milliseconds) " +
                        testMaxDuration.ToString());
                }
            }
        }
    }
}

Step 4 – Add the New Plug-In Class to the Test Project and Load Test

After the Load Test plug-in class has been developed, add the class project containing the plug-in into the test project, and then add the Load Test plug-in to the load test so that it can be executed. For more information, see Steps 3 and 4 in “How To - Create a Load Test Plug-In Using Visual Studio .NET 2005” at << Add Url>>

Step 5 – Add Context Parameters

The context parameters are data object variables that you pass to the Load Test plug-in. These parameters can be assigned to local variables during the initialize phase before the load test executes, or during execution of the callback function. Context parameters are used to allow the passing of variables during test execution to make it possible to change load test settings such as current load, based on conditions such as the Web test name and number of test executions.
  • In the load test, right-click Run Settings.
  • Click the Add Context Parameter option.

Parameter Context.JPG
  • In the Properties pane, you will see a Name field containing Parameter1 and an empty Value field. Set the parameter name to the same name as the Web test (the match is case-sensitive), and set the Value to the maximum test duration in milliseconds, as determined in Step 2. The following example shows a parameter name of HelpDeskCall and a value of 15000000; the plug-in’s lookup of this value is sketched below.
Parameter Context1.GIF
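
For reference, this is how the plug-in from Step 3 reads the value entered above (a sketch only; HelpDeskCall is the example Web test name used in this step, whereas the actual plug-in uses e.TestName as the key):

// Inside the plug-in, the context parameter created above is read back
// using the Web test name as the key.
if (mLoadTest.Context.ContainsKey("HelpDeskCall"))
{
    int maxDurationMs = int.Parse((string)mLoadTest.Context["HelpDeskCall"]);
    // maxDurationMs now holds the value entered in the Run Settings UI.
}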

Step 6 – Execute a Smoke Test to Validate the Pacing

In this step, you gradually increase the user load and run a short-duration test (5-7 minutes) to validate your pacing delay and to count the tests that failed to meet performance requirements. The table below shows the results of running a load test with a Web test involving four requests, with one second of think time between requests and a maximum test time of 15 seconds.

User scenario: Help Desk Call

# Users | Tests per Second | Total Tests | Test Execution Time (Seconds) | Maximum Test Time Meeting Business Requirements (Seconds) | Iteration Pacing Delay (Iteration Think Time) (Seconds) | # Tests Not Meeting Requirements
1 | 0.063 | 19 | 4.9 | 15 | 5.1 | 0
5 | 0.32 | 85 | 7.8 | 15 | 8.1 | 0
10 | 0.65 | 190 | 14.9 | 15 | 0.1 | 72
25 | 1.6 | 434 | 15.2 | 15 | 0 | 420

  • During the test run or at the end of the test, expand the test node in the UI pane and locate Avg. Test Time. Drag the counter onto the graph pane to see the response times for the tests. The pane at the lower left shows the total number of tests and the tests per second (throughput).
Iteration1.GIF
  • Click the errors link at the upper left to obtain a count of the tests that failed to meet the performance requirements. In the following figure, 279 out of 473 total tests failed to meet the 15-second test execution time.
Iteration2.GIF

Resources
