The performance testing life cycle has distinct phases, and each phase must be completed successfully before moving into the next. The diagram below shows these phases:
3.1 Performance Requirements Gathering (NFRs):
In this phase, a performance tester's job is to get answers to all the questions he/she has in mind before moving further. Below are some of the questions usually asked in this phase:
- What is the type of application and its architecture?
- What are the performance goals?
- Which application scenarios are to be tested?
- What will be the workload model? (A sample mix is sketched after this list.)
- Which different performance tests are required?
- Will a dedicated performance test environment be provided for testing?
- Is the given performance test environment a replica of production?
- Who is responsible for creating performance test data?
- Which tool will be used for testing?
- What is the time frame to finish performance testing?
- What are the known current and previous performance bottlenecks (if any)?
- How many resources are planned to be onboarded to complete performance testing?
- What are the normal usage hours of the application?
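To make the workload model question concrete, here is a minimal sketch in Python; the transaction names, mix percentages, and target user count are purely hypothetical placeholders, not figures from any real project:

```python
# A hypothetical workload model for a 500-user load test.
# Transaction names and percentages are illustrative only.
target_users = 500

workload_mix = {
    "login":       0.20,  # 20% of all iterations
    "search":      0.40,
    "add_to_cart": 0.25,
    "checkout":    0.15,
}

# The mix must account for 100% of the simulated traffic.
assert abs(sum(workload_mix.values()) - 1.0) < 1e-9

for txn, share in workload_mix.items():
    print(f"{txn}: ~{int(target_users * share)} concurrent users")
```

In practice these figures come from production usage statistics or from the client, which is why this question belongs in the requirements gathering phase.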
3.2 Performance test plan creation:
Once the performance requirements have been gathered and signed off, we move into the test plan creation phase. In this phase we develop the performance test plan for the application, which typically covers:
- Objective of document
- Performance testing Scope
- Architectural overview
- Performance test approach
- Test data availability
- Planning of performance tests
- Assumptions
- Risks and mitigations
- Deliverables
Let us discuss each of them at a high level.
3.2.1 Objective of document:
The purpose of this document is to describe the process and methodologies that shall be followed by the company's performance testing team.
This document also records the reasoning behind its development for future reference and makes sure that the testing stays in line with the client's requirements.
3.2.2 Performance testing scope:
This section states what is being considered for performance testing of the application (in scope) and which changes are excluded from the current performance testing cycle (out of scope).
3.2.3 Architectural Overview:
This section describes the components that are relevant from a performance point of view, whether they are part of the current cycle's changes or of a newly built application.
A clear architectural overview explains how the performance of the various components can be identified in the final performance report.
3.2.4 Performance test approach:
This section gives an idea of the approach to be followed to reach the performance goals set by the client.
Last-minute changes to the application can also be accommodated in the performance testing schedule, to make sure the production environment is not affected by any performance issues.
3.2.5 Planning of performance tests:
Various types of performance tests (for example load, stress, endurance, and spike tests) need to be planned to make sure application performance is not impacted under user load.
The planning of the tests depends largely on how well we have studied the architecture of the application, so that future changes to the application do not impact its performance in any way.
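As an illustration, here is a minimal sketch of a staged load profile using Locust (the choice of tool, as well as the user counts and durations, are assumptions for the example). A stress test would raise the user target past the expected peak, and a spike test would jump to it in a single step:

```python
from locust import LoadTestShape

class StagedLoadShape(LoadTestShape):
    """Ramp to 100 users in 5 minutes, hold for 30 minutes, then ramp down.
    Used together with an HttpUser class in the same locustfile."""

    # (end_time_in_seconds, target_user_count, spawn_rate_per_second)
    stages = [
        (300, 100, 1),    # ramp-up
        (2100, 100, 1),   # steady state
        (2400, 0, 5),     # ramp-down
    ]

    def tick(self):
        run_time = self.get_run_time()
        for end_time, users, spawn_rate in self.stages:
            if run_time < end_time:
                return (users, spawn_rate)
        return None  # returning None stops the test
```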
3.2.6 Test data availability:
Test data must be created prior to executing the performance scenarios, so that the input data required by the scripts is readily available. Test data must be 100% accurate and should largely avoid redundancy between values, to make the data as realistic as the input values a real user would supply.
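As a minimal sketch of test data preparation, the snippet below generates unique, non-redundant user records into a CSV file that the test scripts can read; the file name, column names, and record count are assumptions for illustration:

```python
import csv
import uuid

# Generate 1,000 unique test users so that no two virtual users submit
# the same credentials during a run. File and column names are hypothetical.
with open("test_users.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["username", "email"])
    for _ in range(1000):
        uid = uuid.uuid4().hex[:8]  # short unique suffix avoids duplicates
        writer.writerow([f"perfuser_{uid}", f"perfuser_{uid}@example.com"])
```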
3.2.7 Assumptions:
- The application is performance tested for the commonly used transactions and transaction mix as obtained from the project development team/client. So, problems in any of the rarely used transactions may not be ascertained. The client/responsible teams will provide all the data and support requested.
- Schedules are based on the assumption that all pre-test criteria will be met on time. Any lapse in fulfilling the criteria will have an impact in the test schedules.
- In-depth monitoring and analysis of all the servers will be done by the respective teams.
- A dedicated and isolated Performance Test Environment must be provided, which should be a replica of the Production environment, for the results to be accurate.
- The Application when provided for Performance Testing is functionally stable.
3.2.8 Risks and mitigations:
- Any unplanned or planned environment outages may affect the test schedule depending on the impact. Planned outage information will have to be shared to reschedule the planned activities.
- Any rework of scripts may affect the test schedule depending on the efforts.
- If the required data is not provided to the Performance Testing team, then the schedule would vary and would include the test data generation time also.
3.2.9 Reporting the performance results:
- Test results with high level information on client and server statistics of all the multiple user tests shall be shared.
- If any performance bottlenecks are observed in any of the tests, the behavior will be documented and produced as a part of interim report.
- Also, discussions will be initiated with the development team to discuss the findings and implement any tuning effort.
3.3 Environment readiness check:
In this phase, a performance tester needs to make sure a stable build of the application is deployed in the performance test environment. He/she should run a smoke test to verify that the application is ready for testing, as in the sketch below.
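A smoke test can be as simple as hitting a few key pages with a single user and asserting success; in this minimal sketch, the base URL and paths are placeholders for the application under test:

```python
import requests

BASE_URL = "https://perf-test-env.example.com"  # hypothetical environment URL

def smoke_test():
    # Paths are placeholders; use the application's own key pages.
    for path in ["/", "/login", "/health"]:
        resp = requests.get(BASE_URL + path, timeout=10)
        assert resp.status_code == 200, f"{path} returned {resp.status_code}"
        print(f"OK {path} ({resp.elapsed.total_seconds():.2f}s)")

if __name__ == "__main__":
    smoke_test()
```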
3.4 Scripting and Enhancements:
- We need to develop the traversal flows for the application and share them with the client for review, finalizing them once the required changes are incorporated.
- After the flows are finalized, the scripts are developed strictly according to the traversal flows, without any deviation, since the client has approved them as the exact application navigation flows. A minimal scripting sketch follows this list.
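Here is a minimal scripting sketch using Locust (an assumed tool choice; the endpoints, payload, and think times are hypothetical) that walks one approved traversal flow step by step:

```python
from locust import HttpUser, task, between

class BrowseAndBuyUser(HttpUser):
    """One virtual user following a hypothetical approved traversal flow:
    home -> search -> product page -> add to cart."""
    wait_time = between(1, 5)  # think time between iterations, in seconds

    @task
    def traversal_flow(self):
        self.client.get("/")                                # step 1: home page
        self.client.get("/search", params={"q": "laptop"})  # step 2: search
        self.client.get("/products/42")                     # step 3: product page
        self.client.post("/cart", json={"product_id": 42})  # step 4: add to cart
```

Such a script would typically be run with `locust -f traversal_flow.py --host https://perf-test-env.example.com`, scaling the user count as per the workload model.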
3.5 Execution and Monitoring:
Once the scripting is done, execution can be started as per the plan. Different monitoring tools are available in the market with which server statistics can be monitored during the runs.
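Where no dedicated monitoring tool is in place, a simple sampler can capture basic server statistics during a run. The sketch below uses the third-party psutil library as a stand-in (an assumption; dedicated tools such as nmon, PerfMon, or Grafana dashboards are common alternatives):

```python
import csv
import time

import psutil  # third-party library: pip install psutil

# Sample CPU and memory utilisation every 5 seconds for one hour and
# write the readings to a CSV file for later correlation with test results.
with open("server_stats.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["timestamp", "cpu_percent", "mem_percent"])
    for _ in range(720):  # 720 samples * 5 s = 1 hour
        writer.writerow([
            time.time(),
            psutil.cpu_percent(interval=None),
            psutil.virtual_memory().percent,
        ])
        f.flush()  # keep the file usable even if the run is cut short
        time.sleep(5)
```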
3.6 Analyzing the Result:
- The analysis phase is mainly used to analyze the results obtained from executing the scenarios.
- Consistency of results must always be maintained: perform the required number of runs of the same scenario to make sure the environment is stable across all executions. A small analysis sketch follows this list.
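As a small analysis sketch, the snippet below computes the average and 90th-percentile response time per transaction from a hypothetical results file (the file and column names are assumptions; most load testing tools can export raw results in a similar form):

```python
import csv
import statistics

# Aggregate response times per transaction from a hypothetical results file
# with columns: transaction, response_time_ms.
samples = {}
with open("results.csv") as f:
    for row in csv.DictReader(f):
        samples.setdefault(row["transaction"], []).append(
            float(row["response_time_ms"]))

for txn, values in sorted(samples.items()):
    p90 = statistics.quantiles(values, n=10)[-1]  # 90th percentile cut point
    print(f"{txn}: avg={statistics.mean(values):.0f} ms, p90={p90:.0f} ms")
```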
3.7 Report Creation:
The report creation phase is one of the most important phases in the performance testing life cycle, as the report is what the client examines to understand how the application performs for the different transactions. Reporting issues early always makes performance problems easier to tune, reduces the cost of fixing them to a large extent, and limits the impact on the business.
Generally, there will be two types of reports:
3.7.1 Preliminary report:
This report gives high-level details to the client whenever the project involves numerous executions and tests, as an interim status of how the performance scenario executions are progressing against the different user loads.
The preliminary report is useful for managing the client's expectations with respect to the agreed service levels.
3.7.2 Summary report:
Once all performance executions are completed, a detailed summary report is provided for sign-off, with the following information:
- Environment and hardware capacity used for performance testing.
- Different types of Performance tests conducted with results.
- Client metrics and server metrics.
- Detailed graphs for better visualization of application performance.
- Recommendations for optimizing the application performance.