What’s the deal with performance?
During performance tests, a program is assessed on the basis of three subcharacteristics:
Time behavior – that is, how the system reacts to user input over time and under certain conditions.
Resource utilization – that is, how many and what resources the system consumes during a specific load.
Capacity – that is, whether the system architecture can operate within the assumed performance limits.
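A minimal sketch of what measuring the first subcharacteristic, time behavior, can look like in practice. The `handle_request` function below is a hypothetical stand-in for the operation under test; a real test would exercise the actual system:

```python
import time
import statistics

def handle_request() -> None:
    """Hypothetical stand-in for the operation under test."""
    time.sleep(0.001)  # simulate about 1 ms of work

# Time behavior: measure latency over repeated calls.
latencies = []
for _ in range(100):
    start = time.perf_counter()
    handle_request()
    latencies.append(time.perf_counter() - start)

p95 = statistics.quantiles(latencies, n=20)[-1]  # 95th-percentile cut point
print(f"median={statistics.median(latencies) * 1000:.1f} ms, "
      f"p95={p95 * 1000:.1f} ms")
```

Reporting percentiles rather than averages matters here: a handful of slow outliers is exactly the kind of time-behavior defect an average would hide.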
Performance tests can cover web and mobile applications as well as various client-server architectures, distributed systems, and embedded systems. The purpose of the tests is to reveal system defects whose elimination has a significant impact on ensuring a positive user experience. Performance accompanies us in everyday life, from operating devices in smart home systems to the responsiveness of the braking system in a passenger car. Even a seemingly well-functioning online store may show the symptoms of a performance problem during a large sales promotion, for example through long response times to user requests. Such risks can be minimized by conducting appropriate performance tests.
When and why do we test application performance?
Performance tests should be planned, designed and executed wherever performance risks have been identified. Unfortunately, reality shows that even in very large projects, non-functional criteria are not taken into account, or risks related to non-functional characteristics are not foreseen. The consequences include data leaks on large platforms such as Facebook or LinkedIn, and performance problems on many government websites and portals, such as those supporting, for example, the sale of fossil fuels. The consequences of such failures may also be lack of access to services, damage to the company’s image, or financial loss.
To protect the enterprise from such consequences, performance testing should be implemented at various levels of the software development process. The principle of early testing is especially important here, because performance tests may reveal defects related to a faulty system architecture, an inadequate database design, or insufficient resources. Correcting these defects in the late stages of a project can prove very costly, or even make the software unacceptable. Following the principle that prevention is better than cure, it is worth assessing project risks in terms of system performance, taking into account not only the current but also the future non-functional requirements of the application.
How do we test?
Performance tests are carried out in both static and dynamic form. Static testing plays a much greater role here than it does in functional testing. It takes the form of a review of the system by the architect and the tester, looking for possible gaps, bottlenecks, and issues with the technologies used. The review makes it possible to predict failures and risks before any load test scripts are run. Some performance-related defects can also be identified during functional tests, for example by repeatedly calling a resource through the user interface. Such failures are reported and corrected during functional system testing.
The performance tests themselves should be carried out from the earliest levels: from module testing and algorithm tuning, through module integration testing, up to system testing and system integration testing. Thanks to this approach, defects from lower levels will not propagate to higher ones.
Once the expected level of functional test coverage has been achieved, you can proceed to performance tests of the entire system. Currently, the dominant approach to load generation is to use tools that record network traffic over selected communication protocols. A script created this way imitates real user behavior in the application by arranging sequences of API requests. Selected system functions are transformed into test cases, which in turn are grouped into test scenarios mapped in the script.
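The structure of such a script, with test cases grouped into a scenario and executed by concurrent virtual users, can be sketched in plain Python. Dedicated tools do the recording and replay for real; here `api_call` is a stub standing in for a recorded HTTP request, and the endpoint names are invented for illustration:

```python
import random
import threading
import time

def api_call(endpoint: str) -> float:
    """Stub for a recorded HTTP request; returns a simulated latency in seconds."""
    latency = random.uniform(0.001, 0.005)
    time.sleep(latency)
    return latency

# A test case is a named sequence of API requests; a scenario groups test cases.
SCENARIO = {
    "browse_catalog": ["/products", "/products/42"],
    "checkout": ["/cart", "/cart/checkout", "/orders"],
}

results = []
results_lock = threading.Lock()

def virtual_user(user_id: int) -> None:
    """One simulated user runs every test case in the scenario, in order."""
    for case, endpoints in SCENARIO.items():
        for endpoint in endpoints:
            latency = api_call(endpoint)
            with results_lock:
                results.append((user_id, case, endpoint, latency))

threads = [threading.Thread(target=virtual_user, args=(i,)) for i in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(f"{len(results)} requests issued by {len(threads)} virtual users")
```

Load generation tools scale this same idea to thousands of users, but the mapping from functions to test cases to a scripted scenario is the part the tester designs.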
To create a good script, you need knowledge about how the application is really used. This knowledge allows you to build an operational profile, and based on that profile and the purpose of the tests, a load profile. Load generation tools can simulate the activity of many – even several hundred thousand – virtual users. The last step before running the test is to ensure that the test environment reflects the existing or future production environment as closely as possible; only tests prepared this way are reliable and authoritative. While the tests run, metrics and data are collected from resource usage monitoring tools. This is essential for identifying potential defects in the utilization of resources such as RAM, CPU, and non-volatile storage. The test result is presented in the form of a report tailored to the expectations of stakeholders, based on the requirements and the established metrics. This report is the basis for analyzing whether the assumed requirements have been met.
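The final comparison of collected metrics against the established requirements can be sketched as follows. The latency samples are synthetic stand-ins for data a monitoring tool would collect, and the thresholds are assumed acceptance values, not ones from the article:

```python
import random
import statistics

random.seed(7)
# Synthetic latency samples (seconds), standing in for measurements
# collected by monitoring tools during a load test run.
samples = [random.uniform(0.05, 0.40) for _ in range(500)]

# Assumed acceptance thresholds derived from the non-functional requirements.
REQUIREMENTS = {"median_s": 0.30, "p95_s": 0.45}

median = statistics.median(samples)
p95 = statistics.quantiles(samples, n=20)[-1]  # 95th-percentile cut point

report = {
    "median_s": round(median, 3),
    "p95_s": round(p95, 3),
    "median_ok": median <= REQUIREMENTS["median_s"],
    "p95_ok": p95 <= REQUIREMENTS["p95_s"],
}
print(report)
```

In a real project the report would also cover CPU, RAM, and storage utilization, but the principle is the same: each established metric is checked against its agreed threshold.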
The initial cost
The main problem with performance testing is its initial cost. For this reason, some customers decide not to test early in the software life cycle, postponing performance tests until shortly before delivery. Yet the cost of removing defects revealed by performance tests carried out late in the life of a project is often greater than the cost of testing in a shift-left approach. The answer to this problem is for the software producer to build the customer’s awareness and to carry out a product risk assessment in terms of expected current and future loads.
Another challenge is establishing non-functional requirements and then introducing appropriate acceptance criteria at the user story level. Without them, inefficient functionality may be implemented, or the program’s behavior under load may never be tested at all.
Software is a complex product that should meet its users’ needs, and the spectrum of those needs is wide. Some of them should also be viewed from the perspective of software performance, which translates both into the user experience while using the system and into access to the offered services under various load conditions. Omitting this perspective may lead to negative consequences that are sometimes difficult to remove, and occasionally unprofitable to repair and therefore effectively irreversible. It is worth adopting this perspective by including non-functional requirements in the project documentation and approaching their implementation iteratively, testing performance at various test levels. In addition to revealing defects, such tests gather knowledge about the efficiency of the solution itself, which allows you to learn its limits and find solutions that will ensure scalability should interest in the product grow among end customers.