Unit tests
It’s often said that a unit test is code that checks other code, and it’s hard to disagree with that. That is also why these tests are written mainly by programmers. While working on projects, you can see very different approaches to unit tests, and despite the growing awareness of their advantages, I still sometimes encounter the attitude: “Unit tests … why even bother?”.
Unit tests concern so-called “units”: small parts of the program code, such as classes, methods or objects. After the tests are executed, the actual results are compared with the expected results, which were defined before the run. These results can be states, exceptions, returned data, calls to an external system, etc. The essence of these tests is to detect errors in the basic logic of the system as early as possible, which makes the defects found faster and cheaper to fix than errors discovered by functional tests performed after the release of a given version of the software.
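As a minimal sketch of what such a test can look like, here is a unit test written in Python with the built-in unittest framework. The add function is a hypothetical unit under test, defined inline for illustration; in a real project it would live in its own module:

```python
import unittest

# Hypothetical unit under test; in a real project this would live in
# its own module, e.g. calculator.py.
def add(a, b):
    return a + b

class TestAdd(unittest.TestCase):
    def test_adds_positive_numbers(self):
        # The expected result is defined up front and compared
        # with the actual one.
        self.assertEqual(add(2, 3), 5)

    def test_adds_negative_numbers(self):
        self.assertEqual(add(-2, -3), -5)

    def test_rejects_invalid_input(self):
        # The unit should raise an exception for invalid input types.
        with self.assertRaises(TypeError):
            add("a", None)

if __name__ == "__main__":
    unittest.main()
```

Each test defines its expected result before the call and fails immediately if the actual result differs, which is exactly the early feedback described above.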
Integration tests
Integration tests check whether communication between systems, components, modules or external services works properly. Most often these tests verify interfaces, the transfer of information between various components, and the saving of data to the database.
Communication between systems might look something like this:
System1: Hi, I’m sending you some data.
System2: I don’t know you, 401 unauthorized.
System1 after the fix: Hi again… Look, I have a token and I’m sending you some data.
System2: Ooooh hi System1. What are these weird characters? I don’t know what this is or what to do with it.
System1 after the fix: Hi, long time no see, but look, it’s me System1, I have a token and nice data in XML format.
System2: Hi System1, OK, I can take that too… that’s so cool that you’re sending it, but what is this addressForSystem3 object and what am I supposed to do with it?
And this exchange could go on for a long time, but I think you get the idea behind these tests.
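A sketch of how part of this exchange might be automated, assuming a hypothetical System2 endpoint that requires a bearer token and accepts XML; the URL, token and path below are made up for illustration:

```python
import requests

BASE_URL = "https://system2.example.com/api"  # hypothetical System2 endpoint
TOKEN = "test-token"  # in a real setup, obtained from the auth service

def test_request_without_token_is_rejected():
    # Mirrors the first exchange above: no credentials, expect 401.
    response = requests.post(f"{BASE_URL}/data", data="<data/>", timeout=5)
    assert response.status_code == 401

def test_xml_payload_with_token_is_accepted():
    # Mirrors the last exchange: a token plus nice data in XML format.
    response = requests.post(
        f"{BASE_URL}/data",
        data="<data><value>42</value></data>",
        headers={
            "Authorization": f"Bearer {TOKEN}",
            "Content-Type": "application/xml",
        },
        timeout=5,
    )
    assert response.status_code == 200
```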
Functional tests
Functional tests verify specific behavior of a system against the developed requirements or expectations. They are also often called black-box tests, because the person testing the functional requirements does not need access to the source code, programming knowledge or the software architecture. Testing and evaluation cover both positive scenarios for a given business requirement and negative (sometimes unspecified) scenarios that verify the system’s behavior when, for example, incorrect data is entered or the user behaves in other, sometimes unexpected, ways.
Let’s imagine that we create an abstract calculator as a joke for a friend who is learning to program. The calculator is supposed to support simple mathematical operations such as addition, subtraction, multiplication and division. It has 3 places on the display to show the result and should be able to send the result to an email address. For impossible operations it should return the error: “Back to school 😉“, and for calculations too large to display, the message: “I don’t understand, these are not simple things”.
Examples of functional tests for this case include scenarios such as:
positive:
adding positive numbers whose result fits in 3 places
entering an email address
sending an email
negative:
trying to add two letters together
trying to send an email without performing a mathematical operation
an operation whose result does not fit in 3 places
division by zero, which should fail; here we check whether we get the message “Back to school” or some incomprehensible error instead.
The above scenarios are a non-exhaustive sample of the pool of possible test cases; in fact, they are just a taste of what can be tested. The number of scenarios is limited only by the tester’s knowledge and imagination.
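A few of these scenarios could be automated as black-box checks, as in the pytest sketch below. The calculator module, the Calculator class and its compute method are all assumptions made for illustration; as in any black-box test, only inputs and expected outputs matter:

```python
import pytest

# Hypothetical black-box interface to the calculator; the tester only
# needs its inputs and outputs, not its source code.
from calculator import Calculator

@pytest.fixture
def calc():
    return Calculator()

# Positive scenario: the result fits in 3 places.
def test_addition_fits_on_display(calc):
    assert calc.compute("2+3") == "5"

# Negative scenario: the result does not fit in 3 places.
def test_result_too_large_for_display(calc):
    assert calc.compute("999+1") == "I don’t understand, these are not simple things"

# Negative scenario: division by zero.
def test_division_by_zero(calc):
    assert calc.compute("5/0") == "Back to school 😉"

# Negative scenario: adding two letters together.
def test_adding_letters(calc):
    assert calc.compute("a+b") == "Back to school 😉"
```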
End to End tests
End to End tests are designed to check whether a complete process can be carried out on a given system. These tests should reflect the typical paths through the application as faithfully as possible. This type of testing also checks, to some extent, validation and integration between the systems and components under test. This matters particularly in large, complex systems, where the failure of one component may render the entire system unusable. Because End to End tests are expensive to maintain and time-consuming to run, they often cover only key paths, while specific functionalities are tested in shorter functional tests, which are cheaper to maintain.
Do you remember our calculator and the verification of its functionality? There we verified various operations separately: addition, then, after restarting the calculator, division by zero, in the next test entering an email address, and in the last test sending the result. When creating End to End tests, we want to go through the whole process: we perform an addition, then enter an email address, send the result/message to that address and check whether it has been delivered. In the next End to End test case, we can try to chain several calculations within one test, for example adding and then dividing, either zeroing the result/message before the division or reusing the result (reusing a message for further calculations is rather out of the question 😉), and then adding again, sending and checking whether the email has been delivered.
Of course, we can create more cases, and much depends on the requirements, the tester and the specifics of the project. An additional End to End test (if there were an email database) could be, for example, performing a calculation, then selecting a previously saved email address, sending the result/message and verifying that it was delivered. But the purpose of this article is not to write test cases; it is to outline the test levels.
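For illustration, the first End to End scenario above might be scripted as below. The Calculator class, its set_email and send_result methods and the wait_for_email helper are all hypothetical names invented for this sketch:

```python
from calculator import Calculator        # hypothetical system under test
from test_helpers import wait_for_email  # hypothetical inbox-polling helper

def test_add_then_email_result():
    calc = Calculator()

    # Step 1: perform a calculation.
    assert calc.compute("2+3") == "5"

    # Step 2: enter the email address.
    calc.set_email("friend@example.com")

    # Step 3: send the result to that address.
    calc.send_result()

    # Step 4: verify that the message actually arrived.
    message = wait_for_email("friend@example.com", timeout=30)
    assert "5" in message.body
```

Note that one failing step makes every later step meaningless, which is exactly why a broken component can make the whole path, and with it the whole system, unusable.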
Acceptance testing (UAT)
Acceptance testing is most often conducted by the client, who performs verification based on the previously defined use cases and business requirements. In addition to functional requirements, it can also cover system performance. Often its goal, apart from the acceptance of a given product, is to confirm confidence in the quality built into the system during the earlier stages of the project. Acceptance tests may also include system-related activities, such as backups, installation, uninstallation of the software, or data migrations.
Performance tests
Performance tests allow you to check the behavior of a system under certain simulated conditions. The most frequently measured qualities are the reliability, speed and scalability of the system under a given number of active users or concurrent operations, or the performance of the hardware on which the application runs. These tests usually report metrics such as: response times, average load time, load-time percentiles, bandwidth, error rate, requests per second, and the number/percentage of transactions failed/passed.
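Dedicated tools such as JMeter, Gatling or Locust are normally used for this, but as a bare-bones sketch of the idea, the script below fires sequential requests at a hypothetical endpoint and computes a few of the metrics mentioned above:

```python
import statistics
import time
import requests

URL = "https://app.example.com/api/health"  # hypothetical endpoint under test

def measure(n=100):
    """Fire n sequential requests, recording response times and failures."""
    times, failed = [], 0
    for _ in range(n):
        start = time.perf_counter()
        try:
            if requests.get(URL, timeout=5).status_code != 200:
                failed += 1
        except requests.RequestException:
            failed += 1
        times.append(time.perf_counter() - start)
    return times, failed

times, failed = measure()
q = statistics.quantiles(times, n=100)  # 99 cut points; q[49] is the median
print(f"p50={q[49]:.3f}s  p90={q[89]:.3f}s  p95={q[94]:.3f}s  p99={q[98]:.3f}s")
print(f"failed: {failed}/{len(times)}")
# Approximate requests per second for this sequential run; real tools
# generate concurrent load instead.
print(f"rps: {len(times) / sum(times):.1f}")
```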
Smoke tests/Sanity tests
The smoke coming out of a device is usually not a good sign… unless it is a smoke machine at a party, a steam locomotive, or someone using a blanket to send smoke signals 😉. The name “smoke test” was created in more or less the same spirit. No, not from testing the blanket’s flammability; rather from the kind of verification where, after connecting a device to the power supply, we briefly check that it does not start to burn (and in the case of software, that there are no serious failures) and that it works in its basic scope with acceptable faults. So much for a small anecdote to bring the idea of smoke tests a bit closer. In summary, for such tests to make sense, they should be short, quick to perform and focused on the operation of the main parts of the system.
Smoke tests are performed both in the initial and later stages of software development, not necessarily on stable versions (e.g. after major changes in functionality, a refactor, etc.). They can quickly tell us whether the system works in its basic scope, or whether it will be necessary to inform the developer, in a rather diplomatic way, that the code needs to be thoroughly rewritten.
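A minimal sketch of such a check, assuming a hypothetical web deployment with a home page, a login page and a health endpoint (all URLs below are made up):

```python
import requests

BASE = "https://app.example.com"  # hypothetical deployment under test

# Only the main parts of the system: if any of these fail,
# there is no point in starting deeper test levels.
SMOKE_PATHS = [
    "/",            # the application starts and serves the home page
    "/login",       # the authentication page is reachable
    "/api/health",  # the backend reports itself as healthy
]

def test_smoke():
    for path in SMOKE_PATHS:
        response = requests.get(BASE + path, timeout=5)
        assert response.status_code == 200, f"{path} returned {response.status_code}"
```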
Sometimes we can also come across the term “sanity test”. These tests are similar to smoke tests, but their main purpose is to verify that the software works after a fix and that the fix has not introduced new bugs. Sounds like regression testing? If so, good catch, because the descriptions are indeed very similar. The difference, however, lies in the level of detail and the time spent on these tests. Sanity tests are by definition short and quick. Regression testing (in systems more complex than our calculator), on the other hand, is most often lengthy and time-consuming.
Summing up
This concludes the entry on test levels. I hope I managed to present them in an accessible, clear form and that it will help you at the beginning of your software-testing journey. Fingers crossed and good luck!