Performance Testing Guidance for Web Applications
By: J.D. Meier, Carlos Farre, Prashant Bansode, Scott Barber, Dennis Rea. Welcome to the patterns & practices Performance Testing Guidance for Web Applications! This guide shows you an end-to-end approach for implementing performance testing.
Keep in mind that these definitions are intended to aid communication and are not an attempt to create a universal standard. You perform capacity testing in conjunction with capacity planning, which you use to plan for future growth, such as an increased user base or increased volume of data.
For example, to accommodate future loads, you need to know how many additional resources such as processor capacity, memory usage, disk capacity, or network bandwidth are necessary to support future usage levels. Capacity testing helps you to identify a scaling strategy in order to determine whether you should scale up or scale out. Component test A component test is any performance test that targets an architectural component of the application.
Commonly tested components include servers, databases, networks, firewalls, and storage devices. Endurance test An endurance test is a type of performance test focused on determining or validating performance characteristics of the product under test when subjected to workload models and load volumes anticipated during production operations over an extended period of time.
Endurance testing is a subset of load testing. Investigation Investigation is frequently employed to prove or disprove hypotheses regarding the root cause of one or more observed performance issues.
Latency Latency is a measure of responsiveness that represents the time it takes to complete the execution of a request. Latency may also represent the sum of several latencies or subtasks. Metrics Metrics are measurements obtained by running performance tests as expressed on a commonly understood scale.
Some metrics commonly obtained through performance tests include processor utilization over time and memory usage by load. Performance test Performance testing is the superset containing all other subcategories of performance testing described in this chapter. Performance budgets or allocations Performance budgets or allocations are constraints placed on developers regarding allowable resource consumption for their component.
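As an illustration of reducing raw measurements to metrics, the following minimal Python sketch (not from the guide; the nearest-rank percentile method, function names, and sample values are assumptions) summarizes a set of response-time samples:

```python
# Hypothetical helpers for summarizing response-time samples collected
# during a test run; names are illustrative, not from the guide.

def percentile(samples, pct):
    """Return the value at the given percentile (nearest-rank method)."""
    ordered = sorted(samples)
    # nearest-rank: ceil(pct/100 * n), converted to a 1-based rank
    rank = max(1, -(-pct * len(ordered) // 100))  # ceiling division
    return ordered[rank - 1]

def summarize(samples):
    """Reduce raw latency samples (seconds) to commonly reported metrics."""
    return {
        "min": min(samples),
        "max": max(samples),
        "avg": sum(samples) / len(samples),
        "p90": percentile(samples, 90),
        "p95": percentile(samples, 95),
    }

latencies = [0.8, 1.1, 0.9, 2.4, 1.0, 3.2, 1.2, 0.7, 1.5, 1.1]
print(summarize(latencies))
```

Percentile metrics such as the 90th percentile are often more informative than an average, because a few slow outliers can hide behind a healthy mean.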
Performance goals Performance goals are the criteria that your team wants to meet before product release, although these criteria may be negotiable under certain circumstances. For example, if a response time goal of three seconds is set for a particular transaction but the actual response time is marginally higher, the goal may still be considered to have been met. Performance objectives Performance objectives are usually specified in terms of response times, throughput (transactions per second), and resource-utilization levels, and typically focus on metrics that can be directly related to user satisfaction.
Performance requirements Performance requirements are those criteria that are absolutely non-negotiable due to contractual obligations, service level agreements (SLAs), or fixed business needs. Performance targets Performance targets are the desired values for the metrics identified for your project under a particular set of conditions, usually specified in terms of response time, throughput, and resource-utilization levels.
Performance targets typically equate to project goals. Performance testing objectives Performance testing objectives refer to data collected through the performance-testing process that is anticipated to have value in determining or improving product quality.
However, these objectives are not necessarily quantitative or directly related to a performance requirement, goal, or stated quality of service (QoS) specification. Performance thresholds Performance thresholds are the maximum acceptable values for the metrics identified for your project, usually specified in terms of response time, throughput (transactions per second), and resource-utilization levels.
Performance thresholds typically equate to requirements. Resource utilization Resource utilization is the cost of the project in terms of system resources. Response time Response time is a measure of how responsive an application or subsystem is to a client request. Saturation Saturation refers to the point at which a resource has reached full utilization. Scenarios In the context of performance testing, a scenario is a sequence of steps in your application.
A scenario can represent a use case or a business function such as searching a product catalog, adding an item to a shopping cart, or placing an order. Smoke test A smoke test is the initial run of a performance test to see if your application can perform its operations under a normal load. Spike test A spike test is a type of performance test focused on determining or validating performance characteristics of the product under test when subjected to workload models and load volumes that repeatedly increase beyond anticipated production operations for short periods of time.
Spike testing is a subset of stress testing. Stress test The goal of stress testing is to reveal application bugs that surface only under high-load conditions. These bugs can include such things as synchronization issues, race conditions, and memory leaks. Throughput Throughput is the number of units of work that can be handled per unit of time; for instance, requests per second, calls per day, hits per second, reports per year, etc.
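The throughput definition above is simple arithmetic; as a small illustration (the request count and window length are invented), a five-minute measurement window might be summarized as:

```python
completed_requests = 4500        # units of work observed during the test
test_duration_seconds = 300      # a 5-minute measurement window
throughput_rps = completed_requests / test_duration_seconds
print(f"{throughput_rps:.1f} requests/second")  # 15.0 requests/second
```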
Unit test In the context of performance testing, a unit test is any test that targets a module of code where that module is any logical subset of the entire existing code base of the application, with a focus on performance characteristics. Commonly tested modules include functions, procedures, routines, objects, methods, and classes. Performance unit tests are frequently created and conducted by the developer who wrote the module of code being tested.
Utilization In the context of performance testing, utilization is the percentage of time that a resource is busy servicing user requests.
The remaining percentage of time is considered idle time. The workload includes the total number of users, concurrent active users, data volumes, and transaction volumes, along with the transaction mix. For performance modeling, you associate a workload with an individual scenario. Summary Performance testing helps to identify bottlenecks in a system, establish a baseline for future testing, support a performance tuning effort, and determine compliance with performance goals and requirements.
Including performance testing very early in your development life cycle tends to add significant value to the project. For a performance testing project to be successful, the testing must be relevant to the context of the project, which helps you to focus on the items that are truly important. If the performance characteristics are unacceptable, you will typically want to shift the focus from performance testing to performance tuning in order to make the application perform acceptably.
Performance, load, and stress tests are subcategories of performance testing, each intended for a different purpose. Creating a baseline against which to evaluate the effectiveness of subsequent performance-improving changes to the system or application will generally increase project efficiency.
Understand the values and benefits associated with each type of performance testing. Understand the potential disadvantages of each type of performance testing. Overview Performance testing is a generic term that can refer to many different types of performance-related testing, each of which addresses a specific problem area and provides its own benefits, risks, and challenges. This chapter defines, describes, and outlines the benefits and project risks associated with several common types or categories of performance-related testing.
Using this chapter, you will be able to overcome the frequent misuse and misunderstanding of many of these terms even within established teams.
How to Use This Chapter Use this chapter to understand various types of performance-related testing. This will help your team decide which types of performance-related testing are most likely to add value to a given project based on current risks, concerns, or testing results. Performance-related activities, such as testing and tuning, are concerned with achieving response times, throughput, and resource-utilization levels that meet the performance objectives for the application under test.
Because performance testing is a general term that covers all of its various subsets, every value and benefit listed under other performance test types in this chapter can also be considered a potential benefit of performance testing in general.
Key Types of Performance Testing The following are the most common types of performance testing for Web applications. Load testing is used to verify application behavior under normal and peak load conditions, and to confirm that your application can meet your desired performance objectives; these performance objectives are often specified in a service level agreement (SLA).
An endurance test is a type of performance test focused on determining or validating the performance characteristics of the product under test when subjected to workload models and load volumes anticipated during production operations over an extended period of time.
A spike test is a type of performance test focused on determining or validating the performance characteristics of the product under test when subjected to workload models and load volumes that repeatedly increase beyond anticipated production operations for short periods of time. Collectively, these risks form the basis for the four key types of performance tests for Web applications. Identifies mismatches between performance-related expectations and reality.
Supports tuning, capacity planning, and optimization efforts. Unless tests are conducted on the production hardware, from the same machines the users will be using, there will always be a degree of uncertainty in the results.
Determines the adequacy of a hardware environment. Evaluates the adequacy of a load balancer. Detects concurrency issues. Detects functionality errors under load.
Collects data for scalability and capacity-planning purposes. Helps to determine how many users the application can handle before performance is compromised. Helps to determine how much load the hardware can handle before resource utilization limits are exceeded. Is not designed to primarily focus on speed of response.
Results should only be used for comparison with other related load tests. Provides an estimate of how far beyond the target load an application can go before causing failures and errors in addition to slowness. Allows you to establish application-monitoring triggers to warn of impending failures. Ensures that security vulnerabilities are not opened up by stressful conditions.
Determines the side effects of common hardware or supporting application failures. Helps to determine what kinds of failures are most valuable to plan for. Provides information about how workload can be handled to meet business requirements. Determines the current usage and capacity of the existing system to aid in capacity planning. It is often difficult to know how much stress is worth applying.
Capacity model validation tests are complex to create. Not all aspects of a capacity-planning model can be validated through testing at a time when those aspects would provide the most value. In practice, however, the likelihood of catastrophic performance failures occurring in a system that has been through reasonable (not even rigorous) performance testing is dramatically reduced, particularly if the performance tests are used to help determine what to monitor in production so that the team will get early warning signs if the application starts drifting toward a significant performance-related failure.
Some of these terms may be common in your organization, industry, or peer network, while others may not. These terms and concepts have been included because they are used frequently enough, and cause enough confusion, to make them worth knowing. Component test A component test is any performance test that targets an architectural component of the application.
Commonly tested components include servers, databases, networks, firewalls, clients, and storage devices. Summary Performance testing is a broad and complex activity that can take many forms, address many risks, and provide a wide range of value to an organization. It is important to understand the different performance test types in order to reduce risks, minimize cost, and know when to apply the appropriate test over the course of a given performance-testing project.
To apply different test types over the course of a performance test, you need to evaluate several key points. Learn how performance testing can be used to mitigate risks related to speed, scalability, and stability.
Learn about the aspects of these risks that performance testing does not adequately address. Overview Performance testing is indispensable for managing certain significant business risks. For example, if your Web site cannot handle the volume of traffic it receives, your customers will shop somewhere else. Beyond identifying the obvious risks, performance testing can be a useful way of detecting many other potential problems. While performance testing does not replace other types of testing, it can reveal information relevant to usability, functionality, security, and corporate image that is difficult to obtain in other ways.
Many businesses and performance testers find it valuable to think of the risks that performance testing can address in terms of three categories: speed, scalability, and stability. How to Use This Chapter Use this chapter to learn about typical performance risks, the performance test types related to those risks, and proven strategies to mitigate those risks.
Is this component reasonably well optimized? Is the observed performance issue caused by this component?
Are there slowly growing problems that have not yet been detected? Is there external interference that was not accounted for? To what should I compare future tests? Are the network components adequate? What type of performance testing should I conduct next? Does this build exhibit better or worse performance than the last one? What happens if the production load exceeds the anticipated peak load? What kinds of failures should we plan for?
What indicators should we look for? What indicators should we look for in order to intervene prior to failure? Did I stay within my performance budgets? Is this code performing as anticipated under load? Is this version faster or slower than the last one?
Speed is also a factor in certain business- and data-related risks. Some of the most common speed-related risks that performance testing can address include: Is the application capable of presenting the most current information to its users? Is a Web Service responding within the maximum expected response time before an error is thrown? Speed-Related Risk-Mitigation Strategies The following strategies are valuable in mitigating speed-related risks: You can allow data to accumulate in databases and file servers, or additionally create the data volume, before load test execution.
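As a minimal sketch of the Web Service risk mentioned above, the check below compares observed call times against a maximum expected response time; the five-second ceiling and the timing values are invented for illustration, and real timings would come from your load-test results rather than a hard-coded list:

```python
MAX_RESPONSE_TIME_S = 5.0  # an assumed ceiling; real limits come from your SLA

observed_times = [1.2, 3.8, 5.6, 2.1]  # simulated per-call timings in seconds

# Flag every call that exceeded the maximum expected response time.
violations = [t for t in observed_times if t > MAX_RESPONSE_TIME_S]
print(f"{len(violations)} of {len(observed_times)} calls exceeded the limit")
```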
Users value consistent speed. For example, a user updates information, but the confirmation screen still displays the old information because the transaction has not completed writing to the database. Scalability-Related Risks Scalability risks concern not only the number of users an application can support, but also the volume of data the application can contain and process, as well as the ability to identify when an application is approaching capacity.
Common scalability risks that can be addressed via performance testing include: Will functionality be compromised under heavy usage?
Can the application withstand unanticipated peak loads? Scalability-Related Risk-Mitigation Strategies The following strategies are valuable in mitigating scalability-related risks: Stability-Related Risks Stability is a blanket term that encompasses such areas as reliability, uptime, and recoverability. Although stability risks are commonly addressed with high-load, endurance, and stress tests, stability issues are sometimes detected during the most basic performance tests.
Some common stability risks addressed by means of performance testing include: In particular, does it not attempt to resume cancelled transactions? Are there any transactions that cause system-wide side effects? Can one leg of the load-balanced environment be taken down and still provide uninterrupted service to users? Can the system be patched or updated without taking it down? Stability-Related Risk-Mitigation Strategies The following strategies are valuable in mitigating stability-related risks: Work with key performance indicators (network, disk, processor, memory) and business indicators such as the number of orders lost, user login failures, and so on.
You can allow data to accumulate in databases and file servers, or additionally create the data volume, before stress test execution. This will allow you to replicate critical errors such as database or application deadlocks and other stress failure patterns. Compare the results. You can use an identical approach for recycling services or processes. Generally, the risks that performance testing addresses are categorized in terms of speed, scalability, and stability.
Speed is typically an end-user concern, scalability is a business concern, and stability is a technical or maintenance concern. Identifying project-related risks and the associated mitigation strategies where performance testing can be employed is almost universally viewed as a valuable and time-saving practice.
Understand the seven core activities in sufficient detail to identify how your tasks and processes map to these activities. Understand various performance-testing approaches that can be built around the core activities. Overview This chapter provides a high-level introduction to the most common core activities involved in performance-testing your applications and the systems that support those applications.
Projects, environments, business drivers, acceptance criteria, technologies, timelines, legal implications, and available skills and tools simply make any notion of a common, universal approach unrealistic.
That said, there are some activities that are part of nearly all project-level performance-testing efforts. These activities may occur at different times, be called different things, have different degrees of focus, and be conducted either implicitly or explicitly, but when all is said and done, it is quite rare for a performance-testing project not to involve at least a decision around the seven core activities identified and referenced throughout this guide. These seven core activities do not in themselves constitute an approach to performance testing; rather, they represent the foundation upon which an approach can be built that is appropriate for your project.
How to Use This Chapter Use this chapter to understand the core activities of performance testing and what these activities accomplish.
Overview of Activities The following sections discuss the seven activities that most commonly occur across successful performance-testing projects. The key to effectively implementing these activities is not when you conduct them, what you call them, whether or not they overlap, or the iteration pattern among them, but rather that you understand and carefully consider the concepts, applying them in the manner that is most valuable to your own project context.
Starting with at least a cursory knowledge of the project context, most teams begin identifying the test environment and the performance acceptance criteria more or less in parallel, because all of the remaining activities are affected by the information gathered in activities 1 and 2.
Generally, you will revisit these activities periodically as you and your team learn more about the application, its users, its features, and any performance-related risks it might have. Once you have a good enough understanding of the project context, the test environment, and the performance acceptance criteria, you will begin planning and designing performance tests and configuring the test environment with the tools needed to conduct the kinds of performance tests and collect the kinds of data that you currently anticipate needing, as described in activities 3 and 4.
Once again, in most cases you will revisit these activities periodically as more information becomes available. With at least the relevant aspects of activities 1 through 4 accomplished, most teams move into an iterative test cycle (activities 5 through 7) in which designed tests are implemented by using some type of load-generation tool, the implemented tests are executed, and the results of those tests are analyzed and reported in terms of their relation to the components and features available to test at that time.
To the degree that performance testing begins before the system or application to be tested has been completed, there is a naturally iterative cycle that results from testing features and components as they become available and continually gaining more information about the application, its users, its features, and any performance-related risks that present themselves via testing. Summary Table of Core Performance-Testing Activities The following table summarizes the seven core performance-testing activities along with the most common input and output for each activity.
Note that project context is not listed, although it is a critical input item for each activity. Figure 4 depicts these core activities. Activity 1. Identify the Test Environment The environment in which your performance tests will be executed, along with the tools and associated hardware necessary to execute the performance tests, constitutes the test environment. Under ideal conditions, if the goal is to determine the performance characteristics of the application in production, the test environment is an exact replica of the production environment but with the addition of load-generation and resource-monitoring tools.
Exact replicas of production environments are uncommon. The degree of similarity between the hardware, software, and network configuration of the application under test conditions and under actual production conditions is often a significant consideration when deciding what performance tests to conduct and what size loads to test.
It is important to remember that it is not only the physical and software environments that impact performance testing, but also the objectives of the test itself. Often, performance tests are applied against a proposed new hardware infrastructure to validate the supposition that the new hardware will address existing performance concerns.
The key factor in identifying your test environment is to completely understand the similarities and differences between the test and production environments. Some critical factors to consider are: Do any of the system components have known performance concerns? Are there any integration points that are beyond your control for testing? You will likely need their support to perform tasks such as monitoring overall network traffic and configuring your load-generation tool to simulate a realistic number of Internet Protocol (IP) addresses.
This may account for significant latency when opening database connections. Identify Performance Acceptance Criteria It generally makes sense to start identifying, or at least estimating, the desired performance characteristics of the application early in the development life cycle.
This can be accomplished most simply by noting the performance characteristics that your users and stakeholders equate with good performance. The notes can be quantified at a later time. Response time. For example, the product catalog must be displayed in less than three seconds. Throughput. For example, the system must support 25 book orders per second. Resource utilization.
For example, processor utilization is not more than 75 percent. Considerations Consider the following key points when identifying performance criteria: Plan and Design Tests Planning and designing performance tests involves identifying key usage scenarios, determining appropriate variability across users, identifying and generating test data, and specifying the metrics to be collected.
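One way to make acceptance criteria like those above actionable is to encode them as data and evaluate every test run against them automatically. The sketch below reuses the example thresholds from the text (three seconds, 25 orders per second, 75 percent processor utilization); the data structure, field names, and measured values are assumptions:

```python
# Acceptance criteria expressed as data: each metric gets a bound.
criteria = {
    "catalog_response_time_s": {"max": 3.0},
    "orders_per_second": {"min": 25.0},
    "processor_utilization_pct": {"max": 75.0},
}

def evaluate(results, criteria):
    """Return the list of criteria that the measured results violate."""
    failures = []
    for name, bounds in criteria.items():
        value = results[name]
        if "max" in bounds and value > bounds["max"]:
            failures.append(name)
        if "min" in bounds and value < bounds["min"]:
            failures.append(name)
    return failures

# Hypothetical results from one test run.
run = {"catalog_response_time_s": 2.4,
       "orders_per_second": 31.0,
       "processor_utilization_pct": 82.0}
print(evaluate(run, criteria))  # ['processor_utilization_pct']
```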
Ultimately, these items will provide the foundation for workloads and workload profiles. When designing and planning tests with the intention of characterizing production performance, your goal should be to create real-world simulations in order to provide reliable data that will enable your organization to make informed business decisions.
Real-world test designs will significantly increase the relevancy and usefulness of results data. Key usage scenarios for the application typically surface during the process of identifying the desired performance characteristics of the application. If this is not the case for your test project, you will need to explicitly determine the usage scenarios that are the most valuable to script.
Consider the following when identifying key usage scenarios: In addition, metrics can help you identify problem areas and bottlenecks within your application. It is useful to identify the metrics related to the performance acceptance criteria during test design so that the method of collecting those metrics can be integrated into the tests when implementing the test design. When identifying metrics, use either specific desired characteristics or indicators that are directly or indirectly related to those characteristics.
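A workload model built from the key usage scenarios can be sketched as a transaction mix whose weights sum to 1, from which concurrent users are allocated per scenario. The scenario names, user counts, and weights below are purely illustrative:

```python
# A hypothetical workload model: user volumes plus a transaction mix.
workload = {
    "total_users": 5000,        # total user base
    "concurrent_users": 400,    # peak concurrent active users
    "transaction_mix": {        # illustrative scenario weights
        "search_catalog": 0.50,
        "add_to_cart": 0.30,
        "place_order": 0.20,
    },
}

mix = workload["transaction_mix"]
assert abs(sum(mix.values()) - 1.0) < 1e-9, "mix weights must sum to 1"

# Allocate the concurrent users to scenarios in proportion to the mix.
allocation = {name: round(workload["concurrent_users"] * weight)
              for name, weight in mix.items()}
print(allocation)  # {'search_catalog': 200, 'add_to_cart': 120, 'place_order': 80}
```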
Considerations Consider the following key points when planning and designing tests: Better tests almost always result from designing tests on the assumption that they can be executed, and then adapting the test or the tool when that assumption proves false, rather than from declining to design particular tests on the assumption that you do not have access to a tool that can execute them.
Realistic test designs include: Configure the Test Environment Preparing the test environment, tools, and resources for test design implementation and test execution prior to features and components becoming available for test can significantly increase the amount of testing that can be accomplished during the time those features and components are available.
Load-generation and application-monitoring tools are almost never as easy to get up and running as one expects. Whether issues arise from setting up isolated network environments, procuring hardware, coordinating a dedicated bank of IP addresses for IP spoofing, or version compatibility between monitoring software and server operating systems, issues always seem to arise from somewhere.
Start early, to ensure that issues are resolved before you begin testing. Additionally, plan to periodically reconfigure, update, add to, or otherwise enhance your load-generation environment and associated tools throughout the project. Even if the application under test stays the same and the load-generation tool is working properly, it is likely that the metrics you want to collect will change.
This frequently implies some degree of change to, or addition of, monitoring tools. Considerations Consider the following key points when configuring the test environment: Typically, load generators encounter bottlenecks first in memory and then in the processor.
Doing so can save you significant time and prevent you from having to dispose of the data entirely and repeat the tests after synchronizing the system clocks. For example, ensure the correct full-duplex mode operation and correct emulation of user latency and bandwidth. Consider using load-testing techniques to avoid affinity of clients to servers due to their using the same IP address.
Most load-generation tools offer the ability to simulate usage of different IP addresses across load-test generators. Implement the Test Design The details of creating an executable performance test are extremely tool-specific.
Regardless of the tool that you are using, creating a performance test typically involves scripting a single usage scenario and then enhancing that scenario and combining it with other scenarios to ultimately represent a complete workload model. Load-generation tools inevitably lag behind evolving technologies and practices. Tool creators can only build in support for the most prominent technologies and, even then, these have to become prominent before the support can be built.
This often means that the biggest challenge involved in a performance-testing project is getting your first relatively realistic test implemented with users generally being simulated in such a way that the application under test cannot legitimately tell the difference between the simulated users and real users. Plan for this and do not be surprised when it takes significantly longer than expected to get it all working smoothly. Considerations Consider the following key points when implementing the test design: Test data feeds are data repositories in the form of databases, text files, in-memory variables, or spreadsheets that are used to simulate parameter replacement during a load test.
For example, even if the application database test repository contains the full production set, your load test might only need to simulate a subset of products being bought by users due to a scenario involving, for example, a new product or marketing campaign. Test data feeds may be a subset of production data repositories.
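A test data feed can be sketched as a shared data source that each simulated user draws from in turn, so that successive requests vary realistically instead of repeating one hard-coded value. The CSV layout, field names, and URL shape here are assumptions:

```python
import csv
import io
import itertools

# A tiny in-memory stand-in for a CSV test data feed.
feed_csv = "username,product_id\nalice,1001\nbob,1002\ncarol,1003\n"
rows = list(csv.DictReader(io.StringIO(feed_csv)))
feed = itertools.cycle(rows)  # wrap around when the feed is exhausted

def next_request():
    """Build the next parameterized request from the data feed."""
    row = next(feed)
    return f"/buy?user={row['username']}&product={row['product_id']}"

print(next_request())  # /buy?user=alice&product=1001
print(next_request())  # /buy?user=bob&product=1002
```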
Application data feeds are data repositories, such as product or order databases, that are consumed by the application being tested. The key user scenarios run by the load-test scripts may consume a subset of this data. Many transactions are reported as successful by the Web server but fail to complete correctly. Examples of validation include checking that database entries are inserted with the correct number of rows, that product information is returned, and that the correct content is returned in the HTML sent to clients.
Correlation refers to data returned by the Web server that must be resubmitted in a subsequent request, such as a session ID, or a product ID that must be incremented before being passed to the next request. Execute the Test Executing tests is what most people envision when they think about performance testing.
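The dynamic-data handling described earlier, commonly called correlation in load-testing tools, amounts to extracting a server-returned value and injecting it into the next request. In the sketch below, the response body, parameter names, and URL are invented for illustration:

```python
import re

# A made-up fragment of a server response containing a dynamic value.
response_body = '<input type="hidden" name="session_id" value="ab12cd34">'

# Extract the dynamic value from the first response...
match = re.search(r'name="session_id" value="([^"]+)"', response_body)
session_id = match.group(1) if match else None

# ...and inject it into the next request in the scripted scenario.
next_url = f"/cart/add?item=42&session_id={session_id}"
print(next_url)  # /cart/add?item=42&session_id=ab12cd34
```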
It makes sense that the process, flow, and technical details of test execution are extremely dependent on your tools, environment, and project context. Even so, there are some fairly universal tasks and considerations that need to be kept in mind when executing tests. Much of the training available today that is related to performance testing treats test execution as little more than starting a test and monitoring it to ensure that the test appears to be running as expected.
In reality, this activity is significantly more complex than just clicking a button and monitoring machines. Test execution can be viewed as a combination of the following sub-tasks:
Coordinate test execution and monitoring with the team.
Validate tests, configurations, and the state of the environments and data.
Begin test execution.
While the test is running, monitor and validate scripts, systems, and data.
Upon test completion, quickly review the results for obvious indications that the test was flawed.
Archive the tests, test data, results, and other information necessary to repeat the test later if needed. Log start and end times, the name of the result data, and so on. This will allow you to identify your data sequentially after your test is done. As you prepare to begin test execution, it is worth taking the time to double-check a few items. In the context of performance testing, a smoke test is designed to determine whether your application can successfully perform all of its operations under a normal load condition for a short time.
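The logging guidance above (start and end times, the name of the result data, and so on) can be sketched as a small run manifest; the field and file names are illustrative assumptions:

```python
import json
import time

# A minimal sketch of archiving run metadata so that results can be
# identified sequentially after the test is done.
def make_run_manifest(test_name, result_file, start, end):
    return {
        "test_name": test_name,
        "result_file": result_file,
        "start_time": time.strftime("%Y-%m-%dT%H:%M:%S", time.gmtime(start)),
        "end_time": time.strftime("%Y-%m-%dT%H:%M:%S", time.gmtime(end)),
        "duration_seconds": round(end - start, 1),
    }

manifest = make_run_manifest("checkout_load_v1", "results_0042.csv",
                             start=1_700_000_000.0, end=1_700_003_600.0)
print(json.dumps(manifest, indent=2))
```

Writing one such manifest per run, alongside the raw result files, makes it straightforward to repeat or compare tests later.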
Considerations
Consider the following key points when executing the test:
Review and reprioritize after each cycle.
Note that the results of first-time tests can be affected by loading Dynamic-Link Libraries (DLLs), populating server-side caches, or initializing scripts and other resources required by the code under test. If the results of the second and third iterations are not highly similar, execute the test again.
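The similarity check described above can be sketched as follows; comparing mean response times with a 10% tolerance is an arbitrary assumption for illustration, not a threshold from this guide:

```python
# Flag a run for re-execution when mean response times from two
# iterations diverge by more than a chosen tolerance.
def iterations_similar(times_a, times_b, tolerance=0.10):
    mean_a = sum(times_a) / len(times_a)
    mean_b = sum(times_b) / len(times_b)
    return abs(mean_a - mean_b) / max(mean_a, mean_b) <= tolerance

# Response times (seconds) from the second and third iterations.
second = [0.81, 0.84, 0.79, 0.82]
third = [0.80, 0.85, 0.78, 0.83]
similar = iterations_similar(second, third)
```

In practice you would compare several statistics (percentiles, error rates, throughput), not the mean alone, before trusting a run.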
Try to determine what factors account for the difference. Observe your test during execution and pay close attention to any behavior you feel is unusual. Your instincts are usually right, or at least valuable indicators. Additionally, inform the team whenever you are not going to be executing for more than one hour in succession so that you do not impede the completion of their tasks.
Do not process data, write reports, or draw diagrams on your load-generating machine while generating a load, because this can skew the results of your test. Turn off any active virus-scanning on load-generating machines during testing to minimize the likelihood of unintentionally skewing the results of your test.
While load is being generated, access the system manually from a machine outside of the load-generation environment during test execution so that you can compare your observations with the results data at a later time. Remember to simulate ramp-up and cool-down periods appropriately.
Do not throw away the first iteration because of application script compilation, Web server cache building, or other similar reasons. Instead, measure this iteration separately so that you will know what the first user after a system-wide reboot can expect. Test execution is never really finished, but eventually you will reach a point of diminishing returns on a particular test. When you stop obtaining valuable information, move on to other tests.
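Measuring the first (cold) iteration separately, as suggested above, might look like the following sketch; `work()` is a stand-in for the operation under test:

```python
import time

# Time each iteration, but report the first ("cold") iteration
# separately, since it includes one-time costs such as DLL loading
# and server-side cache population.
def measure(work, iterations=5):
    timings = []
    for _ in range(iterations):
        start = time.perf_counter()
        work()
        timings.append(time.perf_counter() - start)
    cold, warm = timings[0], timings[1:]
    return cold, warm

cold, warm = measure(lambda: sum(range(1000)))
```

Reporting `cold` on its own tells you what the first user after a system-wide reboot can expect, while the `warm` timings describe steady-state behavior.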
If you feel you are not making progress in understanding an observed issue, it may be more efficient to eliminate one or more variables or potential causes and then run the test again.
Analyze Results, Report, and Retest Managers and stakeholders need more than just the results from various tests — they need conclusions, as well as consolidated data that supports those conclusions.
Technical team members also need more than just results — they need analysis, comparisons, and details behind how the results were obtained. Team members of all types get value from performance results being shared more frequently.
Before results can be reported, the data must be analyzed. Consider the following important points when analyzing the data returned by your performance test: If you fix any bottlenecks, repeat the test to validate the fix.
With proper test design and usage analysis, performance-testing results will often enable the team to analyze components at a deep level and correlate the information back to the real world. Performance test results should enable informed architecture and business decisions. Frequently, the analysis will reveal that, in order to completely understand the results of a particular test, additional metrics will need to be captured during subsequent test-execution cycles.
Immediately share test results and make raw data available to your entire team. Talk to the consumers of the data to validate that the test achieved the desired results and that the data means what you think it means.
Modify the test to get new, better, or different information if the results do not represent what the test was defined to determine. Use current results to set priorities for the next test. Collecting metrics frequently produces very large volumes of data.
Although it is tempting to reduce the amount of data, always exercise caution when using data-reduction techniques, because valuable data can be lost. Most reports fall into one of two broad categories, but regardless of the type, the key to effective reporting is to present information of interest to the intended audience in a manner that is quick, simple, and intuitive.
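The caution about data reduction above can be illustrated with a short sketch: summarizing response times by their mean hides a spike that a high percentile preserves. The timings below are fabricated for illustration:

```python
# Why data reduction can lose valuable data: the mean masks a spike
# that the 95th percentile still reveals.
def percentile(data, pct):
    ordered = sorted(data)
    index = min(len(ordered) - 1, int(round(pct / 100.0 * (len(ordered) - 1))))
    return ordered[index]

times = [0.2] * 90 + [5.0] * 10   # 10% of requests spike to 5 seconds
mean = sum(times) / len(times)    # looks acceptable on its own
p95 = percentile(times, 95)       # reveals the spike
```

If the raw data were reduced to the mean alone, the 5-second experience suffered by one in ten users would disappear from the report entirely.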
The following are some underlying principles for achieving effective reports: Filter out any unnecessary data. If reporting intermediate results, include the priorities, concerns, and blocks for the next several test-execution cycles.
Summary
Performance testing involves a set of common core activities that occur at different stages of projects. Each activity has specific characteristics and tasks to be accomplished. These activities have been found to be present (or at least to have been part of an active, risk-based decision to omit one of them) in every deliberate and successful performance-testing project that the authors and reviewers have experienced. It is important to understand each activity in detail and then apply the activities in a way that best fits the project context.
Learn how to detect and solve major issues early in the project. Learn how to maximize flexibility without sacrificing control.
Learn how to provide managers and stakeholders with progress and value indicators. Learn how to provide a structure for capturing information that will not noticeably impact the release schedule.
Learn how to apply an approach that is designed to embrace change, not simply to tolerate it. The chapter describes the concepts underlying the activities necessary to make performance testing successful within an iterative process, as well as specific, actionable items that you can immediately apply to your project in order to gain a significant return on this investment. Performance testing is a critical aspect of many software projects because it tests the architectural aspects of the customer experience and provides an indication of overall software quality.
The potential side effect to this approach is that when major issues are found near the end of the development life cycle, it becomes much more expensive to resolve them. The key to working within an iteration-based work cycle is team coordination.
For this reason, the performance tester must be able to adapt what he or she measures and analyzes per iteration cycle as circumstances change.
How to Use This Chapter
Use this chapter to understand the activities involved in performance testing in iterative development environments, and their relationship with the core performance-testing activities.
Also use this chapter to understand what is accomplished during these activities. This will help you to apply the concepts behind those activities to a particular approach to performance testing. Introduction to the Approach When viewed from a linear perspective, the approach starts by examining the software development project as a whole, the reasons why stakeholders have chosen to include performance testing in the project, and the value that performance testing is expected to bring to the project.
Once the success criteria are understood at a high level, an overall strategy is envisioned to guide the general approach to achieving those criteria by summarizing what performance testing activities are anticipated to add the most value at various points during the development life cycle. Those points may include key project deliveries, checkpoints, sprints, iterations, or weekly builds. With a strategy in mind and the necessary environments in place, the test team draws up plans for major tests or tasks identified for imminent performance builds.
Iterative Performance Testing Activities
This approach can be represented by using the following nine activities (see Figure 5):
Activity 1. Understand the Project Vision and Context. The outcome of this activity is a shared understanding of the project vision and context.
Activity 2. Identify Reasons for Testing Performance. Explicitly identify the reasons for performance testing.
Activity 3. Identify the Value Performance Testing Adds to the Project. Translate the project- and business-level objectives into specific, identifiable, and manageable performance-testing activities.
Activity 4. Configure the Test Environment. Set up the load-generation tools and the system under test, collectively known as the performance test environment.
Activity 5. Identify and Coordinate Tasks. Prioritize and coordinate support, resources, and schedules to make the tasks efficient and successful.
Activity 6. Execute Tasks. Execute the activities for the current iteration.
Activity 7. Analyze Results and Report. Analyze and share results with the team.
Activity 8. Revisit Activities and Consider Performance Acceptance Criteria. Between iterations, ensure that the foundational information has not changed. Integrate new information, such as customer feedback, and update the strategy as necessary.
Activity 9. Reprioritize Tasks. Based on the test results, new information, and the availability of features and components, reprioritize, add to, or delete tasks from the strategy, and then return to activity 5.
Relationship to Core Performance Testing Activities
The following graphic displays how the seven core activities described in Chapter 4 map to these nine activities.
Understand the Project Vision and Context
The project vision and context are the foundation for determining what performance testing activities are necessary and valuable.
Because the performance tester is not driving these items, the coordination aspect refers more to team education about the performance implications of the project vision and context, and to identifying areas where future coordination will likely be needed for success. A critical part of working with an iteration-based process is asking the correct questions, providing the correct value, and performing the correct task related to each step. Although situations can shift or add more questions, values, or tasks, a sample checklist is provided as a starting point for each step.
A sample checklist for this step covers the questions to ask, the value provided, the tasks accomplished, and whom to coordinate with.
Identify Reasons for Testing Performance
The underlying reasons for testing performance on a particular project are not always obvious based on the vision and context alone.
Project teams generally do not include performance testing as part of the project unless there is some performance-related risk or concern they feel needs to be mitigated. Explicitly identifying these risks and areas of concern is the next fundamental step in determining what specific performance testing activities will add the most value to the project.
Having a full-time performance tester on the team from the start of the project is frequently a good idea, but it does not happen often. Generally, when a performance tester is present at project inception, it means there is a specific, significant risk that the tester is there to address. The following checklist should help you to accomplish this step.
Identify the Value Performance Testing Adds to the Project
Using information gained from activities 1 and 2, you can now clarify the value added through performance testing, and convert that value into a conceptual performance-testing strategy.
The point is to translate the project- and business-level objectives into specific, identifiable, and manageable performance-testing activities. The coordination aspect of this step involves teamwide discussion and agreement on which performance-testing activities are likely to add value or provide valuable information, and if these activities are worth planning for at this time.
Configure the Test Environment
With a conceptual strategy in place, prepare the tools and resources in order to execute the strategy as features and components become available for test. Take this step as soon as possible, so that the team has this resource from the beginning. This step is fairly straightforward. Set up the load-generation tools and the system under test, collectively known as the performance test environment, and ensure that this environment will meet engineering needs.
Identify and Coordinate Tasks
Performance testing tasks do not happen in isolation. The performance specialist needs to work with the team to prioritize and coordinate support, resources, and schedules to make the tasks efficient and successful.
During the pre-iteration planning meeting, look at where the project is now and where you want to be to determine what should and can be done next. When planning for the iteration cycle, the performance tester is driven by the goals that have been determined for this cycle. This step also includes signing up for the activities that will be accomplished during this cycle.
Execute Tasks
Conduct tasks in one- to two-day segments.
See them through to completion, but be willing to take important detours along the way if an opportunity to add additional value presents itself. Step 5 defines what work the team members will sign up for in this iteration. Now it is time to execute the activities for this iteration, asking questions such as: Are there other important tasks that can be conducted in parallel with this one? Do the preliminary results make sense?
Is the test providing the data we expected?
Analyze Results and Report
To keep up with an iterative process, results need to be analyzed and shared quickly. If the analysis is inconclusive, retest at the earliest possible opportunity to give the team maximum time to react to performance issues.
As the project is wrapped for final shipping, it is usually worth having a meeting afterward to collect and pass along lessons learned. In most cases it is valuable to have a daily or every-other-day update to share information and coordinate next tasks.
Is tuning required? If so, do we know what to tune? Do the results indicate that there are additional tests that we need to execute that have not been planned for? Do the results indicate that any of the tests we are planning to conduct are no longer necessary?
Have any performance objectives been met? Have any performance objectives been rendered obsolete?
Revisit Activities and Consider Performance Acceptance Criteria
Between iterations, ensure that the foundational information has not changed. Integrate new information, such as customer feedback, and update the strategy as necessary.
Reprioritize Tasks
Based on the test results, new information, and the availability of features and components, reprioritize, add to, or delete tasks from the strategy, and then return to activity 5.
To be effective, performance testing should be managed correctly in the context of iteration planning and processes. Learn how to apply an approach designed to embrace change, not simply tolerate it.
Overview
This chapter describes an agile approach to managing application performance testing.
ebook Performance Testing Guidance for Web Applications
As the term implies, the key to an agile approach is flexibility. Flexibility, however, does not mean sloppiness or inefficiency. To remain efficient and thorough in an agile environment, you may need to change the way you are used to managing your performance testing. Implementing an agile philosophy means different things to different teams, ranging from perfectly implemented eXtreme Programming (XP) to projects with many short iterations and documentation designed for efficiency.
The approach outlined in this chapter has been successfully applied by teams across this spectrum with minimal modification. This chapter assumes that the performance specialist is new to the team in question and focuses on the tasks that the performance specialist frequently drives or champions. This is neither an attempt to minimize the concept of team responsibility nor an attempt to segregate roles. The team is best served if the performance specialist is an integral team member who participates in team practices, such as pairing.
Any sense of segregation is unintentional and a result of trying to simplify explanations. This approach to managing performance testing may seem complicated at first. Use this chapter to understand what is accomplished during these activities.
Use the various activity sections to understand the details of the most critical performancetesting tasks. Additionally, use Chapter 4 — Core Activities in this guide to understand the common core activities involved in successful performance testing projects. This will help you to apply the concepts underlying those activities to a particular approach to performance testing.
Introduction to the Approach
When viewed from a linear perspective, the approach starts by examining the software development project as a whole, the reasons why stakeholders have chosen to include performance testing in the project, and the value that performance testing is expected to add to the project.
Agile Performance-Testing Activities
This approach can be represented by using the following nine activities (see Figure 6):
The project vision and context are the foundation for determining what performance-testing activities are necessary and valuable. These are not always clear from the vision and context. Explicitly identifying the reasons for performance testing is critical to being able to determine what performance testing activities will add the most value to the project.
With the information gained from steps 1 and 2, clarify the value added through performance testing and convert that value into a conceptual performance-testing strategy. With a conceptual strategy in place, prepare the necessary tools and resources to execute the strategy as features and components become available for test. Performance-testing tasks do not happen in isolation.
For this reason, the performance specialist needs to work with the team to prioritize and coordinate support, resources, and schedules in order to make the tasks efficient and successful.
Conduct tasks in one- to two-day segments. To keep up with an iterative process, results need to be analyzed and shared quickly. If the analysis is inconclusive, retest at the earliest possible opportunity. This gives the team the most time to react to performance issues. Between iterations, integrate new information, such as customer feedback, and update the strategy as necessary (activity 8). Finally, in activity 9, based on the test results, new information, and the availability of features and components, reprioritize, add to, or delete tasks from the strategy, then return to activity 5.
Relationship to Core Performance-Testing Activities
The following graphic shows how the seven core activities described in Chapter 4 map to the nine agile performance-testing activities.
Understand the Project Vision and Context
The project vision and context are the foundation for determining what performance testing is necessary and valuable. Decisions made during the work session can be refactored during other iteration and work sessions as the system becomes more familiar.
Project Vision
Before initiating performance testing, ensure that you understand the current project vision.
Because the features, implementation, architecture, timeline, and environments are likely to change over time, you should revisit the vision regularly, as it has the potential to change as well.
Project Context
The project context is nothing more than those factors that are, or may become, relevant to achieving the project vision. Whatever those factors are for your project, expect them to change over the course of the project. In fact, the performance testing you do is frequently the driver behind at least some of those changes.
It might also be the case that the test environment itself is changed to reflect new expectations as to the minimal required production environment. This is unusual, but a potential outcome of the tuning effort.
Performance is concerned with achieving response times, throughput, and resource-utilization levels that meet the performance objectives for the project or product. In this guide, performance testing represents the superset of all of the other subcategories of performance-related testing. Load testing, one such subcategory, is focused on determining or validating performance characteristics of the system or application under test when subjected to workloads and load volumes anticipated during production operations.
Stress testing, another subcategory, is focused on determining or validating performance characteristics of the system or application under test when subjected to conditions beyond those anticipated during production operations.
Stress tests may also include tests focused on determining or validating performance characteristics of the system or application under test when subjected to other stressful conditions, such as limited memory, insufficient disk space, or server failure. These tests are designed to determine under what conditions an application will fail, how it will fail, and what indicators can be monitored to warn of an impending failure.
Baselines
Creating a baseline is the process of running a set of tests to capture performance metric data for the purpose of evaluating the effectiveness of subsequent performance-improving changes to the system or application. A critical aspect of a baseline is that all characteristics and configuration options, except those specifically being varied for comparison, must remain invariant. Once a part of the system that is not intentionally being varied for comparison to the baseline is changed, the baseline measurement is no longer a valid basis for comparison.
With respect to Web applications, you can use a baseline to determine whether performance is improving or declining and to find deviations across different builds and versions. For example, you could measure load time, the number of transactions processed per unit of time, the number of Web pages served per unit of time, and resource utilization such as memory usage and processor usage. A baseline can also be created for different layers of the application, including a database, Web services, and so on.
It is important to validate that the baseline results are repeatable, because considerable fluctuations may occur across test results due to environment and workload characteristics. Baselines can help product teams identify changes in performance that reflect degradation or optimization over the course of the development life cycle.
Identifying these changes in comparison to a well-known state or configuration often makes resolving performance issues simpler. Baselines are most valuable if they are created by using a set of reusable test assets. It is important that such tests accurately simulate repeatable and actionable workload characteristics. Baseline results can be articulated by using a broad set of key performance indicators, including response time, processor capacity, memory usage, disk capacity, and network bandwidth.
Sharing baseline results allows your team to build a common store of acquired knowledge about the performance characteristics of an application or component. If your project entails a major reengineering of the application, you need to reestablish the baseline for testing that application. A baseline is application-specific and is most useful for comparing performance across different versions. Sometimes, subsequent versions of an application are so different that previous baselines are no longer valid for comparisons.
It is a good idea to ensure that you completely understand the behavior of the application at the time a baseline is created. Failure to do so before making changes to the system with a focus on optimization objectives is frequently counterproductive. At times you will have to redefine your baseline because of changes that have been made to the system since the time the baseline was initially captured.
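A baseline comparison of the kind described in this section can be sketched as follows; the metric names and the 15% regression threshold are illustrative assumptions, not values from the guide:

```python
# Compare a new test run against a stored baseline and flag metrics
# that degraded beyond a tolerance. For response time and CPU usage,
# higher is worse; for throughput, lower is worse.
BASELINE = {"avg_response_s": 1.20, "pages_per_s": 48.0, "cpu_pct": 62.0}

def find_regressions(current, baseline, threshold=0.15):
    """Return the metrics that degraded by more than the threshold."""
    higher_is_worse = {"avg_response_s", "cpu_pct"}
    regressions = {}
    for name, base in baseline.items():
        delta = (current[name] - base) / base
        if name not in higher_is_worse:
            delta = -delta  # for throughput, a drop is the degradation
        if delta > threshold:
            regressions[name] = current[name]
    return regressions

current_run = {"avg_response_s": 1.55, "pages_per_s": 47.0, "cpu_pct": 64.0}
bad = find_regressions(current_run, BASELINE)
```

Running such a comparison after every build surfaces degradations while the responsible change is still fresh, which is exactly when baselines are most useful.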
Benchmarking
A benchmark provides a standard measure for comparing your application against other systems or applications that also calculated their score for the same benchmark. You may choose to tune your application performance to achieve or surpass a certain benchmark score. A benchmark is achieved by working with industry specifications or by porting an existing implementation to meet such standards.
Benchmarking entails identifying all of the necessary components that will run together, the market where the product exists, and the specific metrics to be measured. Benchmarking results can be published to the outside world. Since comparisons may be produced by your competitors, you will want to employ a strict set of standard approaches for testing and data to ensure reliable results. Performance metrics may involve load time, number of transactions processed per unit of time, Web pages accessed per unit of time, processor usage, memory usage, search times, and so on.
Terminology
The following definitions are used throughout this guide. Every effort has been made to ensure that these terms and definitions are consistent with formal use and industry standards; however, some of these terms are known to have certain valid alternate definitions and implications in specific industries and organizations. Keep in mind that these definitions are intended to aid communication and are not an attempt to create a universal standard.
Capacity
The capacity of a system is the total workload it can handle without violating predetermined key performance acceptance criteria. You perform capacity testing in conjunction with capacity planning, which you use to plan for future growth, such as an increased user base or increased volume of data.
For example, to accommodate future loads, you need to know how many additional resources such as processor capacity, memory usage, disk capacity, or network bandwidth are necessary to support future usage levels. Capacity testing helps you to identify a scaling strategy in order to determine whether you should scale up or scale out.
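The kind of capacity projection described above can be sketched with simple arithmetic; the linear-scaling assumption and the 70% target CPU are illustrative simplifications, since real systems rarely scale perfectly linearly:

```python
import math

# A back-of-the-envelope projection: given today's measured peak CPU
# at a known user count, estimate the servers needed for future load.
def servers_needed(current_users, current_servers, peak_cpu_pct,
                   future_users, target_cpu_pct=70.0):
    """Estimate how many servers a future user base would require."""
    cpu_per_user = (peak_cpu_pct * current_servers) / current_users
    total_cpu = cpu_per_user * future_users
    return math.ceil(total_cpu / target_cpu_pct)

# Two servers peak at 60% CPU with 1,000 users; plan for 3,000 users.
estimate = servers_needed(1000, 2, 60.0, 3000)
```

A projection like this only frames the scale-up versus scale-out question; actual capacity tests are still needed to confirm that throughput scales as assumed.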
Component test
A component test is any performance test that targets an architectural component of the application. Commonly tested components include servers, databases, networks, firewalls, and storage devices.
Endurance test
An endurance test is a type of performance test focused on determining or validating performance characteristics of the product under test when subjected to workload models and load volumes anticipated during production operations over an extended period of time.
Endurance testing is a subset of load testing.
Investigation
Investigation is frequently employed to prove or disprove hypotheses regarding the root cause of one or more observed performance issues.
Latency
Latency is a measure of responsiveness that represents the time it takes to complete the execution of a request. Latency may also represent the sum of several latencies or subtasks.
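The idea of latency as a sum of sub-task latencies can be illustrated with a small sketch; the component names and timings below are illustrative assumptions:

```python
# End-to-end latency decomposed into per-component latencies (seconds).
component_latency_s = {
    "network": 0.040,
    "web_server": 0.015,
    "application": 0.120,
    "database": 0.075,
}

end_to_end = sum(component_latency_s.values())
slowest = max(component_latency_s, key=component_latency_s.get)
```

Breaking latency down this way points tuning effort at the component contributing the most to the total rather than at the system as a whole.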
Metrics
Metrics are measurements obtained by running performance tests, expressed on a commonly understood scale. Some metrics commonly obtained through performance tests include processor utilization over time and memory usage by load. Performance testing is the superset containing all other subcategories of performance testing described in this chapter.
Whether you are new to performance testing or looking for ways to improve your current performance-testing approach, you will gain insights that you can tailor to your specific scenarios, such as determining whether the application can withstand unanticipated peak loads.
How to Use This Chapter
Use this chapter to learn about typical performance risks, the performance test types related to those risks, and proven strategies to mitigate those risks.