
How To: Managing an Agile Performance Test Cycle

J.D. Meier, Carlos Farre, Prashant Bansode, Scott Barber

Applies To

  • Performance Testing in an Agile Development Environment


This How To describes an agile approach to managing application performance testing. Performance testing in an agile project environment can be managed in a highly flexible way. In particular, this approach allows you to revisit the project vision and reprioritize tasks based on the value they add to the performance-testing effort at a given point in time.


Contents

  • Objectives
  • Overview
  • Introduction to the Approach
  • Summary of Steps
  • Step 1. Understand the Project Vision and Context
  • Step 2. Identify Reasons for Testing Performance
  • Step 3. Identify the Value Performance Testing Adds to the Project
  • Step 4. Configure the Test Environment
  • Step 5. Identify and Coordinate Immediately Valuable Tactical Tasks
  • Step 6. Execute Task(s)
  • Step 7. Analyze Results and Report
  • Step 8. Revisit Steps 1-3 and Consider Performance Acceptance Criteria
  • Step 9. Reprioritize Tasks
  • Additional Considerations
  • Resources


Objectives

  • Learn an approach to agile performance test management.
  • Learn how to maximize flexibility without sacrificing control.
  • Learn how to provide managers and stakeholders with progress and value indicators.
  • Learn how to provide a structure for capturing information that will not noticeably impact the release schedule.
  • Learn how to apply an approach designed to embrace change, not simply tolerate it.


Overview

The key to an agile approach is flexibility. Flexibility, however, does not mean sloppiness or inefficiency. To remain efficient and thorough in an agile environment, you may need to change the way you are used to managing your performance testing.

Implementing an agile philosophy means different things to different teams, ranging from perfectly implemented eXtreme Programming (XP) to projects with many short iterations and documentation designed for efficiency. The approach outlined in this How To has been successfully applied by teams across this spectrum with minimal modification.

This How To assumes that the performance specialist is new to the team in question and focuses on the tasks that the performance specialist frequently drives or champions. This is neither an attempt to minimize the concept of team responsibility nor an attempt to segregate roles. The team is best served if the performance specialist is an integral team member who participates in team practices, such as pairing. Any sense of segregation is unintentional and a result of trying to simplify explanations.

This approach to managing performance testing may seem complicated at first because it:
  • Embraces change during a project’s life cycle.
  • Iterates (not always in a predictable pattern).
  • Encourages planning just far enough in advance for team coordination but not so far that the plan is likely to need significant revision in order to execute.

However, when viewed at the task or work item level, this approach is actually an intuitive process based on the principle of continually asking and answering the question, “What task can I do right now that will add the most value to the project?”

Introduction to the Approach

When viewed from a linear perspective, the approach starts by examining the software development project as a whole, the reasons why stakeholders have chosen to include performance testing in the project, and the value that performance testing is expected to add to the project. The results of this examination include the team’s view of the success criteria for the performance testing effort.

Once the success criteria are understood at a high level, an overall strategy is envisioned to guide the general approach to achieving those criteria by summarizing what performance testing steps are anticipated to add the most value at various points during the development life cycle. Those points may include key project deliveries, checkpoints, sprints, iterations, or weekly builds. For the purposes of this How To, these events are collectively referred to as “performance builds”. Frequently, while the strategy is evolving, the performance specialist and/or the team will begin setting up a performance-test environment and a load-generation environment.

With a strategy in mind and the necessary environments in place, the test team draws up plans for major tests or tasks identified for imminent performance builds. When a performance build is delivered, the plan’s tasks should be executed in priority sequence (based on all currently available information), appropriately reporting, recording, revising, reprioritizing, adding, and removing tasks and improving the application and the overall plan as the work progresses.

Summary of Steps

This approach can be represented by using the following nine steps:
  • Step 1. Understand the Project Vision and Context
  • Step 2. Identify Reasons for Testing Performance
  • Step 3. Identify the Value Performance Testing Adds to the Project
  • Step 4. Configure the Test Environment
  • Step 5. Identify and Coordinate Immediately Valuable Tactical Tasks
  • Step 6. Execute Task(s)
  • Step 7. Analyze Results and Report
  • Step 8. Revisit Steps 1-3 and Consider Performance Acceptance Criteria
  • Step 9. Reprioritize Tasks

Step 1. Understand the Project Vision and Context

The project vision and context are the foundation for determining what performance testing is necessary and valuable. An initial understanding of the system under test, the project environment, the motivation behind the project, and the performance build schedule can often be established during a single work session involving the performance specialist, the lead developer, and the project manager (assuming a tentative project schedule exists). Decisions made during that session can be revisited during later iterations and work sessions as the team becomes more familiar with the system.

Project Vision

Before initiating performance testing, ensure that you understand the current project vision. Because the features, implementation, architecture, timeline, and environments are likely to change over time, you should revisit the vision regularly, as it has the potential to change as well. Although every team member should be thinking about performance, it is the performance specialist’s responsibility to be proactive in understanding and keeping up to date with the relevant details across the entire team. The following are examples of high-level project visions:
  • Evaluate a new architecture for an existing system.
  • Develop a new custom system to solve a specific business problem.
  • Evaluate new software-development tools.
  • As a team, become proficient in a new language or technology.
  • Re-engineer an inadequate application before a period of peak usage to avoid user dissatisfaction due to application failure.

Project Context

The project context is nothing more than those factors that are, or may become, relevant to achieving the project vision. Some examples of items that may be relevant to your project context include:
  • Client expectations
  • Budget
  • Timeline
  • Staffing
  • Project environment
  • Management approach

These items will often be discussed during a project kickoff meeting but, again, should be revisited regularly throughout the project as more details become available and as the team learns more about the system they are developing.

Understand the System

Understanding the system you are testing involves becoming familiar with the system’s intent, what is currently known or assumed about its hardware and software architecture, and the available information about the completed system’s customer or user.

With many agile projects, the system’s architecture and functionality change over the course of the project. Expect this. In fact, the performance testing you do is frequently the driver behind at least some of those changes. Keeping this in mind will help you ensure that performance-testing tasks are neither over-planned nor under-planned before testing begins.

Understand the Project Environment

In terms of the project environment, it is most important to understand the team’s organization, operation, and communications techniques. Agile teams tend not to use long-lasting documents and briefings as their management and communications methods; instead, they opt for daily stand-ups, story cards, and interactive discussions. Failing to understand these methods at the beginning of a project can put performance testing behind before it even begins. Asking the following or similar questions may be helpful:
  • Does the team have any scheduled meetings, stand-ups, or scrums?
  • How are issues raised or results reported?
  • If I need to collaborate with someone, should I send an e-mail message? Schedule a meeting? Use Instant Messenger? Walk over to his or her office?
  • Does this team employ a “do not disturb” protocol when an individual or sub-team desires “quiet time” to complete a particularly challenging task?
  • Who is authorized to update the project plan or project board?
  • How are tasks assigned and tracked? Software system? Story cards? Sign-ups?
  • How do I determine which builds I should focus on for performance testing? Daily builds? Friday builds? Builds with a special tag?
  • How do performance testing builds get promoted to the performance test environment?
  • Will the developers be writing performance unit tests? Can I pair with them periodically so that we can share information?
  • How do you envision coordination of performance-testing tasks?

Understand the Performance Build Schedule

At this stage, the project schedule makes its entrance, and it does not matter whether the project schedule takes the form of a desktop calendar, story cards, whiteboards, a document, someone’s memory, or a software-based project management system. However, someone or something must communicate the anticipated sequence of deliveries, features, and/or hardware implementations that relate to the performance success criteria.

Because you are not creating a performance test plan at this time, remember that it is not important to concern yourself with dates or resources. Instead, attend to the anticipated sequencing of performance builds, a rough estimate of their contents, and an estimate of how much time to expect between performance builds. The specific performance builds that will most likely interest you relate to hardware components, supporting software, and application functionality becoming available for investigation.

Typically, you will find during this step that you add performance build–specific items to your success criteria, and that you start aligning tasks related to achieving success criteria with anticipated performance builds.

Step 2. Identify Reasons for Testing Performance

The underlying reasons for testing performance on a particular project are not always obvious based on just the vision and context. Explicitly identifying the reasons for performance testing is critical to being able to determine what performance-testing steps will add the most value to the project.

The reasons for conducting performance testing often go beyond a list of performance acceptance criteria. Every project has different reasons for deciding to include, or not include, performance testing as part of its process. Not identifying and understanding these reasons is one way to virtually guarantee that the performance-testing aspect of the project will not be as successful as it could have been. Examples of possible reasons to make integrated performance testing a part of the project include the following:
  • Improve performance unit testing by pairing performance testers with developers.
  • Assess and configure new hardware by pairing performance testers with administrators.
  • Evaluate algorithm efficiency.
  • Monitor resource usage trends.
  • Measure response times.
  • Collect data for scalability and capacity planning.

It is generally useful to identify the reasons for conducting performance testing very early in the project. Because these reasons are bound to change and/or shift priority as the project progresses, you should revisit them regularly as you and your team learn more about the application, its performance, and the customer or user.

Success Criteria

It is also useful to start identifying the desired success criteria associated with the reasons for conducting performance testing, early in the project. As with the reasons for testing, the success criteria are bound to change, so you should revisit them regularly as you and your team learn more about the application, its performance, and the customer or user. Success criteria not only include the performance requirements, goals, and targets for the application, but also the objectives behind conducting performance testing in the first place, including those objectives that are financial or educational in nature. For example, success criteria may include:
  • Validate that the application can handle X users per hour.
  • Validate that the users will experience response times of Y seconds or less 95 percent of the time.
  • Validate that performance tests predict production performance within a +/- 10-percent range of variance.
  • Investigate hardware and software as it becomes available, to detect significant performance issues early.
  • The performance team, developers, and administrators work together with minimal supervision to tune and determine the capacity of the architecture.
  • Conduct performance testing within the existing project duration and budget.
  • Determine the most likely failure modes for the application under higher-than-expected load conditions.
  • Determine appropriate system configuration settings for desired performance characteristics.
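
The first two criteria above lend themselves to simple automated checks. The following sketch illustrates validating a 95th-percentile response-time goal against collected timings; the sample data, the 2.0-second goal, and the helper names are assumptions for illustration, not part of this guidance:

```python
# Hypothetical check for the criterion "users experience response times
# of Y seconds or less 95 percent of the time". All values are assumed.
import math

def percentile(samples, pct):
    """Nearest-rank percentile: the smallest value with at least
    pct percent of the samples at or below it."""
    ordered = sorted(samples)
    rank = max(0, math.ceil(pct / 100.0 * len(ordered)) - 1)
    return ordered[rank]

def meets_response_time_goal(response_times_s, goal_s, pct=95):
    """True if at least pct percent of responses finish within goal_s."""
    return percentile(response_times_s, pct) <= goal_s

times = [0.8, 1.1, 0.9, 2.4, 1.0, 1.2, 0.7, 1.3, 0.95, 1.05]
print(meets_response_time_goal(times, goal_s=2.0))  # prints False
```

A check like this can run after every performance build, turning the success criterion into a repeatable pass/fail signal rather than a one-time judgment call.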

It is important to record the performance success criteria in a manner appropriate to your project’s standards and expectations, in a location where they are readily accessible to the entire team. Whether the criteria appear in the form of a document, on story cards, on a team wiki, in a task-management system, or on a whiteboard is important only to the degree that it works for your team.

The initial determination of performance-testing success criteria can often be accomplished in a single work session involving the performance specialist, the lead developer, and the project manager. Because you are articulating and recording success criteria for the performance-testing effort, and not creating a performance-test plan, it is not important to concern yourself with dates and resources.

In general, consider the following information when determining performance-testing success criteria:
  • Application performance requirements and goals
  • Performance-related targets and thresholds
  • Exit criteria (how to know when you are done)
  • Key areas of investigation
  • Key data to be collected

Step 3. Identify the Value Performance Testing Adds to the Project

Using the information gained in steps 1 and 2, clarify the value added through performance testing and convert that value into a conceptual performance-testing strategy.

Now that you have an up-to-date understanding of the system, the project, and the performance testing success criteria, you can begin conceptualizing an overall strategy for performance-testing imminent performance builds. It is important to communicate the strategy to the entire team, using a method that encourages feedback and discussion.

Strategies should not contain excessive detail or narrative text. These strategies are intended to help focus decisions, be readily available to the entire team, include a method for anyone to make notes or comments, and be easy to modify as the project progresses.

While there is a wide range of information that could be included in the strategy, the critical components are the desired outcomes of performance testing the performance build and the anticipated tasks for achieving those outcomes. If a task requires significant resource coordination, which is seldom the case, it might make sense to complete strategies a few performance builds in advance. More often, strategies are completed roughly concurrently with the release of a performance build.

Discussion Points

In general, the types of information that may be valuable to discuss with the team when preparing a performance-testing strategy for a performance build include:
  • The reason for performance testing this delivery
  • Prerequisites for strategy execution
  • Tools and scripts required
  • External resources required
  • Risks to accomplishing the strategy
  • Data of special interest
  • Areas of concern
  • Pass/fail criteria
  • Completion criteria
  • Planned variants on tests
  • Load range
  • Tasks to accomplish the strategy

Step 4. Configure the Test Environment

With a conceptual strategy in place, prepare the necessary tools and resources to execute the strategy as features and components become available for test.

Load-generation and application-monitoring tools are almost never as easy to implement as one expects. Whether the issues stem from setting up isolated network environments, procuring hardware, coordinating a dedicated bank of IP addresses for IP spoofing, or version incompatibility between monitoring software and server operating systems, something always seems to come up.

Also, load-generation tools inevitably lag behind evolving technologies and practices. Tool creators can only build in support for the most prominent technologies, and even then, a technology has to become prominent before support for it can be built.

This often means that the biggest challenge in a performance-testing project is implementing your first relatively realistic test, in which users are simulated in such a way that the application under test cannot legitimately tell the difference between simulated and real users. Plan for this, and do not be surprised when it takes significantly longer than expected to get everything working smoothly.

Additionally, plan to periodically reconfigure, update, add to, or otherwise enhance your load-generation environment and associated tools throughout the project. Even if the application under test stays the same and the load-generation tool is working properly, it is likely that the metrics you want to collect will change. This frequently implies some degree of change to or addition of monitoring tools.
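
As a rough illustration of what a load-generation environment ultimately automates, the sketch below runs a handful of concurrent simulated users and collects per-request timings. The stub transaction and its latencies are assumptions standing in for the real requests a load-generation tool would issue:

```python
# Minimal concurrent load-generation sketch. transaction() is a stand-in
# for a real HTTP request (an assumption); a real test environment would
# use a dedicated load-generation tool or an HTTP client library.
import random
import threading
import time

def transaction():
    """Simulate one user request; return its response time in seconds."""
    latency = random.uniform(0.01, 0.05)
    time.sleep(latency)
    return latency

def simulated_user(iterations, results, lock):
    """One virtual user issuing a fixed number of requests."""
    for _ in range(iterations):
        elapsed = transaction()
        with lock:
            results.append(elapsed)

def run_load(users=5, iterations=10):
    """Run `users` virtual users concurrently; return all response times."""
    results, lock = [], threading.Lock()
    threads = [
        threading.Thread(target=simulated_user, args=(iterations, results, lock))
        for _ in range(users)
    ]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results

times = run_load()
print(len(times), max(times))
```

Even a toy harness like this surfaces the issues the paragraph above warns about: coordinating concurrency, collecting timings safely, and deciding what "realistic" simulation means for your application.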

Step 5. Identify and Coordinate Immediately Valuable Tactical Tasks

Performance-testing tasks do not happen in isolation. The performance specialist needs to work with the team to prioritize and coordinate support, resources, and schedules to make the tasks efficient and successful.

As a delivery approaches, you will want to create performance test execution plans for each one to two days’ worth of performance-testing tasks. This means you are likely to have several performance test execution plans per performance build. Each of these execution plans should communicate the remaining details needed to complete or repeat a work item or group of work items.

Performance test execution plans should be communicated to the team and stakeholders by using the same method(s) the strategy uses. Depending on the pace and schedule of your project, there may be one execution plan per strategy, or several. It is important to limit the execution plans to one or two days of anticipated tasks for several reasons, including the following:
  • Even though each task or group of tasks is planned to take one or two days, it is not uncommon for the actual execution to stretch to three or even four days on occasion. If your plans are for tasks longer than about two days and you get delayed, you are likely to have the next performance build before completing any valuable testing on the previous build.
  • Especially on agile projects, timely feedback about performance is critical. Even with two-day tasks and a one-week performance build cycle, you could end up with approximately eight days between detecting an issue and getting a performance build that addresses that issue. With longer tasks and/or a longer period between performance builds, those eight days can quickly become sixteen.

Performance test execution plans should be communicated far enough in advance for the team to recommend improvements and to coordinate any necessary resources, but no further. Because of their specificity, preparing execution plans well in advance almost always leads to significant rework. In most cases, the team as a whole will prioritize the sequence in which tasks are executed.

Discussion Points

In general, the types of information that a team finds valuable when discussing a performance test execution plan for a work item or group of work items include:
  • Work item execution method
  • Specifically what data will be collected
  • Specifically how that data will be collected
  • Who will assist, how, and when
  • Sequence of work items by priority

Step 6. Execute Task(s)

Conduct tasks in one-to-two day segments. See them through to completion, but be willing to take important detours along the way if an opportunity to add additional value presents itself.

When each performance build is delivered, the performance testing begins with the highest-priority task related to that build. Early in the development life cycle, those tasks are likely to contain work items such as “collaborate with administrators to tune application servers,” while later in the development cycle, a work item might be “validate that the application is achieving response time goals at 50 percent of peak load.”

The most important part of task execution is to remember to modify the task and subsequent strategies as results analysis leads to new priorities. After a task is executed, share your findings with the team, and then reprioritize the remaining tasks, add new tasks, and/or remove planned tasks from execution plans and strategies based on the new questions and concerns raised by the team. When reprioritizing is complete, move on to the next-highest-priority task.

Keys to Conducting a Performance-Testing Task

In general, the keys to conducting a performance-testing task include:
  • Analyze results immediately and revise the plan accordingly.
  • Work closely with the team or sub-team that is most relevant to the task.
  • Communicate frequently and openly across the team.
  • Record results and significant findings.
  • Record other data needed to repeat the test later.
  • Revisit performance-testing priorities after no more than two days.
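
One lightweight way to record the data needed to repeat a test later is to capture each run as a structured record. The field names and values below are illustrative assumptions, not a prescribed schema:

```python
# Sketch of a repeatable test-run record: results plus the configuration
# needed to re-execute the run. All fields are illustrative assumptions.
import json

def make_run_record(build, task, config, response_times_s, findings):
    """Bundle one test run's results and its repeat-the-test metadata."""
    return {
        "build": build,
        "task": task,
        "config": config,  # settings needed to repeat the run
        "samples": len(response_times_s),
        "max_s": max(response_times_s),
        "avg_s": sum(response_times_s) / len(response_times_s),
        "findings": findings,
    }

record = make_run_record(
    build="perf-build-7",
    task="validate response times at 50 percent of peak load",
    config={"users": 250, "ramp_up_s": 120, "duration_s": 900},
    response_times_s=[1.2, 0.9, 1.4, 1.1],
    findings="no errors; response times within goal",
)
print(json.dumps(record, indent=2))
```

Appending records like this to a shared log gives the team both the raw material for the trend analysis in Step 7 and enough configuration detail to rerun any suspicious test.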

Step 7. Analyze Results and Report

To keep up with an iterative process, results need to be analyzed and shared quickly. If the analysis is inconclusive, retest at the earliest possible opportunity. This gives the team the most time to react to performance issues.

Even though you are sharing data and preliminary results at the completion of each task, it is important to pause periodically to consolidate results, conduct trend analysis, create stakeholder reports, and pair with developers, architects, and administrators to analyze results. Periodically may mean a half-day per week, one day between performance builds, or some other interval that fits smoothly into your project workflow.

These short pauses are often where the “big breaks” occur. Although continual reporting keeps the team informed, these reports are generally summaries delivered as an e-mailed paragraph with a spreadsheet attached, or a link to the most interesting graph on the project Web site.

On their own, these reports rarely tell the whole story. One of the jobs of the performance specialist is to find trends and patterns in the data, which takes time. This task also tends to lead to the desire to re-execute one or more tests to determine if a pattern really exists, or if a particular test was flawed in some way. Teams are often tempted to skip this step, but do not yield to that temptation. You might end up with more data more quickly, but if you do not stop to look at the data collectively on a regular basis, you are unlikely to extract all of the useful findings from that data until it is too late.
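
A minimal example of the kind of trend analysis described above is comparing median response times across performance builds and flagging degradations beyond a tolerance. The build names, sample timings, and the 10 percent tolerance are all assumptions for illustration:

```python
# Hypothetical cross-build trend check: flag builds whose median response
# time rose more than `tolerance` (fractional) over the previous build.

def median(samples):
    """Median of a non-empty list of numbers."""
    ordered = sorted(samples)
    n = len(ordered)
    mid = n // 2
    return ordered[mid] if n % 2 else (ordered[mid - 1] + ordered[mid]) / 2.0

def degraded_builds(build_results, tolerance=0.10):
    """Return names of builds that degraded beyond the tolerance."""
    flagged = []
    previous = None
    for build, samples in build_results:
        current = median(samples)
        if previous is not None and current > previous * (1 + tolerance):
            flagged.append(build)
        previous = current
    return flagged

history = [
    ("build-12", [1.0, 1.1, 0.9]),
    ("build-13", [1.0, 1.0, 1.1]),
    ("build-14", [1.4, 1.5, 1.3]),  # regression introduced here
]
print(degraded_builds(history))  # prints ['build-14']
```

A check like this does not replace the specialist's pattern-finding, but it makes the obvious regressions visible between the periodic deep-analysis pauses.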

Step 8. Revisit Steps 1-3 and Consider Performance Acceptance Criteria

Between iterations, ensure that the foundational information has not changed. Integrate new information such as customer feedback and update the strategy as necessary.

After the success criteria, strategies, and/or tasks have been updated and prioritized, it is time to resume the performance-testing process where you left off. However, this is easier said than done. Sometimes, no matter how hard you try to avoid it, there are simply no valuable performance-testing tasks to conduct at this point. This could be due to environment upgrades, mass re-architecting/refactoring, significant detected performance issues that someone else needs time to fix, and so on.

On the positive side, the performance specialist possibly has the broadest skill set on the entire team. This means that when a situation arises in which continued performance testing, or paired performance investigation with developers or administrators, is not going to add value, the performance specialist can temporarily take on another task such as automating smoke tests, optimizing HTML for better performance, or pairing with a developer to build more comprehensive unit tests. The key is to never forget that the performance specialist’s first priority is performance testing; these other tasks are additional responsibilities.

Step 9. Reprioritize Tasks

Based on the test results, new information, and the availability of features and components, reprioritize, add to, or delete tasks from the strategy, and then return to Step 5.

Some agile teams conduct periodic “performance-only” scrums or stand-ups when performance testing–related coordination, reporting, or analysis is too time-consuming to be handled in the existing update structure. Whether during a special “performance-only” scrum or stand-up or during existing sessions, the team collectively makes most of the major adjustments to priorities, strategies, tasks, and success criteria. Ensure that enough time is allocated frequently enough for the team to make good performance-related decisions, while changes are still easy to make.

The key to successfully implementing an agile performance-testing approach is continual communication among team members. As described in the previous steps, it is a good idea not only to communicate tasks and strategies with all team members, checking back with one another frequently, but also to plan time into testing schedules to review and update tasks and priorities.

The methods you use to communicate plans, strategies, priorities, and changes are completely irrelevant as long as you are able to adapt to changes without requiring significant rework, and as long as the team continues to progress toward achieving the current performance-testing success criteria.

Additional Considerations

Keep in mind the following additional considerations for managing an agile performance test cycle:
  • Above all, remember to communicate all significant information and findings to the team.
  • No matter how long or short the time between performance builds, performance testing will always lag behind. Too many performance-testing tasks take too long to develop and execute to keep up with development in real time. Keep this in mind when setting priorities for what to performance-test next. Choose wisely.
  • Remember that for the vast majority of the development life cycle, performance testing is about collecting useful information to enhance performance through design, architecture, and development as it happens. Comparisons against the end user–focused requirements and goals only have meaning for customer review releases or production release candidates. The rest of the time, you are looking for trends and obvious problems, not pass/fail validation.
  • Make use of existing unit-test code for performance testing at the component level. Doing so is quick and easy, helps developers detect trends in performance, and can make a powerful smoke test.
  • Do not force a performance build just because it is on the schedule. If the current build is not appropriate for performance testing, continue with what you have until it is appropriate, or give the performance tester another task until something reasonable is ready.
  • Performance testing is one of the single biggest catalysts to significant changes in architecture, code, hardware, and environments. Use this to your advantage by making observed performance issues highly visible across the entire team. Simply reporting on performance every day or two is not enough; the team needs to read, understand, and react to the reports, or else the performance testing loses much of its value.
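
The suggestion above to reuse unit-test code for component-level performance testing can be sketched as a timed smoke test. The placeholder test function and the 0.5-second budget are assumptions:

```python
# Sketch: wrap an existing unit test with timing so it doubles as a
# component-level performance smoke test. The test body and budget are
# placeholders standing in for a real project's code.
import time

def existing_unit_test():
    """Placeholder for an existing functional unit test."""
    assert sum(range(1000)) == 499500

def timed_smoke_test(test_fn, budget_s, runs=5):
    """Run test_fn several times; return (within_budget, worst_seconds)."""
    worst = 0.0
    for _ in range(runs):
        start = time.perf_counter()
        test_fn()
        worst = max(worst, time.perf_counter() - start)
    return worst <= budget_s, worst

passed, worst = timed_smoke_test(existing_unit_test, budget_s=0.5)
print(passed)
```

Recording the worst-case timing per build, rather than just pass/fail, is what lets developers see the performance trends the bullet above describes.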


Last edited Sep 14, 2007 at 11:23 PM by carlpf2, version 7

