What does the OZiTAG software testing process look like?
How often have you encountered last-minute problems arising from someone's mistakes, staff incompetence, or unaccounted-for factors and risks? Delays in a project launch? The need to "rev up" and ask employees to work extra hours?
The same goes for software development. If a feature wasn't fully tested, a bug was missed, or the application wasn't checked on all devices, trouble is brewing.
How can you make sure that nothing fails in production, that the app or site is released successfully, that the client reaches their goals, and that users are happy? Find the answers in this article and get an inside look at the software testing process at OZiTAG.
Tasks of a QA engineer
QA engineers play no less important a role than developers. Even if programmers have written quality code, run the product on a device, and seen that everything works, a fully functioning system is still a long way off.
Why do some applications (even ones that seem good and well made) get floods of negative reviews after launch? Often one reason is that the app looks or works differently on different devices.
The situation is made worse by the sheer variety of devices on the market. Let's say someone launched the product on an old Huawei phone and saw a broken layout.
Or it was installed by a user from Saudi Arabia who couldn't make sense of it, because text in this country is read from right to left and the interface wasn't adapted to this cultural peculiarity. (You've probably heard of the terrible failure of a Coca-Cola advertising campaign for this very reason – but note that the story isn't true. It spread across the Internet a few years ago, and many authors included it among the most famous advertising campaign failures, yet it isn't confirmed by any trusted source.)
So, who will check how the system functions in all conditions? Find bugs and vulnerabilities? Consider a variety of untypical cases? And, what's more, ensure that the product complies with the customer's requirements? The tester is the person responsible for these tasks.
The goals of a QA engineer are to achieve the correct system functioning, prevent defects, improve software quality, and throughout the project provide the end customer with the information about its quality.
The tester must also ensure that a software solution works as intended. For this purpose, he/she compares the actual system behavior with the expected one using various types of testing.
The tasks of a QA engineer:
- Identify bugs and errors, describe them, and send them for fixing. Though the project manager generally prioritizes debugging tasks, in our company the tester often does it. When time is strictly limited, the customer can also take part in the process, for example, by determining what needs to be fixed first.
- Check the implemented improvements.
- Test the product again – and repeat the process until the desired result is achieved.
- Test the software solution on all necessary devices and screen resolutions. A QA engineer prepares a list of devices and environment parameters, making it possible to build test coverage for the required operating system versions, screen resolutions, etc.
- Check the product performance in various conditions, including the use of the key system functionality and testing it for different untypical use cases.
- Test the product compliance with the requirements.
- Work through all uncharacteristic cases and ensure the product functions as intended. The software testing team checks non-standard system use, data overflow limits, illogical button presses, and random character input (e.g., the touchscreen was activated while the smartphone was in a briefcase, the app opened, and the result was a chaotic string of characters).
You may think that the software testing process takes a lot of time, but that's not quite so. If the development and communication processes are well established and the specialists know their stuff, there are generally no delays.
Before diving into our software QA process, let's walk through the product development cycle at OZiTAG.
Software development stages in OZiTAG
- Project start – receiving and processing of the client request, collection and investigation of the requirements.
- Requirements analysis and approval – a business analyst, project manager, lead developer, and lead QA engineer get involved. The aim of this step is to understand the customer's tasks, analyze the requirements for the product, identify errors and shortcomings, and suggest alternative solutions.
To this end, a QA specialist tests the requirements: processes them, compiles a list of questions for the client, and makes corrections and recommendations. The project manager then agrees on the final version with the customer.
This stage also includes the development of mockups and wireframes to pin down the requirements and visualize the future solution. They serve as a helping hand in design creation.
- Work planning – we estimate the time and scope of work, divide the project into iterations, and prepare the list of tasks for the team.
The planning stage shapes the further course and success of the project: at this step we look for the best ways to achieve the client's objectives, think through what to do and how, and fix it all in the task list.
Afterward, we use this list to track goal accomplishment, task completion, and adherence to deadlines.
- Design – creation, testing, and approval with the customer. Throughout the project we investigate the usability of the software solution's interface and check the design for compliance with the requirements.
- Development – the product implementation according to the functional and non-functional requirements.
- Testing and stabilization – testing of the developed system, debugging, and preparation for release.
- Project launch – mobile app publication in stores/website launch/product integration with the systems in the client’s company.

* Project development doesn't always strictly follow the described sequence. For instance, the customer may come with already tested requirements and drawn wireframes – or sometimes with a ready-made design.
In our company, we bring the tester into the project at the requirements processing stage. This approach has many benefits: QA engineers can investigate the business requirements and the client's field of operation, plan the work and prioritize tasks, and identify and estimate possible risks together with the team.
Once the requirements are approved, the QA team gets down to preparing a test plan, checklists, and test cases, which helps optimize the app/system development and testing process.
The test plan is formed at the project estimation stage. It’s a document that describes the list of testing activities, relevant techniques and approaches, testing strategy, responsibilities, resources, schedule, key dates, and much more.
There are several approaches to writing it, and in each case the tester ends up with a mile-long document. We're trying to improve on this and create a hybrid model, so that the document stays simple and clear while still providing all the necessary information.
While developers are writing code, testers are preparing a testing strategy and writing checklists and test cases – so-called testing artifacts – and defining which functionality "zones" need test cases, which need checklists, and where exploratory testing will be enough.
Exploratory testing is the simultaneous design and execution of tests. This type of software testing doesn't require writing test cases and is used for checking critical system parts in small and middle-sized projects, or when there is little time to run tests.
A checklist is a document containing the list of what should be tested, status and result of each test. Checklists enable QA specialists to remember all the necessary tests and keep a record of test results.
Most importantly, it's much easier to check the system's functioning with such a list at hand. At OZiTAG, we have several checklists prepared in advance; one of them, for mobile app testing, consists of 200+ points.
A test case describes a set of steps, specific conditions, and parameters required to test a piece of functionality, as well as the expected result of passing the test: Action > Expected result > Test result.
The test case also includes the data required to pass it. For example, if we need to check login on the website, the login and password are indicated in the document. By helping not to miss anything and by streamlining the work, checklists and test cases are an indispensable tool for every tester.
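The Action > Expected result > Test result structure can be sketched as a small data object. This is a hypothetical, minimal representation for illustration only – the field names are made up, not a standard – using the login example above:

```python
from dataclasses import dataclass
from typing import Dict, List, Optional

@dataclass
class TestCase:
    title: str
    steps: List[str]          # actions to perform, in order
    data: Dict[str, str]      # data required to pass the test
    expected: str             # expected result
    actual: str = ""          # filled in during the test run
    passed: Optional[bool] = None

    def record(self, actual: str) -> bool:
        """Compare the actual result with the expected one."""
        self.actual = actual
        self.passed = (actual == self.expected)
        return self.passed

# The login check from the text: credentials are part of the test case.
login_case = TestCase(
    title="Log in with valid credentials",
    steps=["Open the login page", "Enter login and password", "Press 'Sign in'"],
    data={"login": "demo_user", "password": "demo_pass"},
    expected="User lands on the dashboard",
)
login_case.record("User lands on the dashboard")  # passed becomes True
```

Keeping test cases as structured data like this (rather than free-form notes) is what makes it easy to track statuses and results across a run.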
Software testing process in our company
Here at OZiTAG, we use several environments in software development, depending on the task being solved at each specific stage.
An environment is the setting the product operates in. Generally, it includes the app's codebase, a database, third-party services, and other elements required for the system to work. Different environments are either loosely coupled or completely independent.
The independence of the environments from each other allows making any changes to one of them with no fear of disrupting the system's operation in the other environments.
When building software solutions, at least 3 environments are used:
- Dev – serves for developing and testing new functionality and is used only by software developers.
- Test (stage) – is used for testing by QA specialists and other people (including customers).
- Release – serves the production mobile app/system – this is what the client works with after the product is released.
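As a purely hypothetical illustration (the file names, hosts, and variables below are invented for this example, not OZiTAG's actual setup), this independence is often achieved by giving each environment its own configuration:

```shell
# .env.dev — local development; only programmers touch this environment
API_URL=https://dev.example.com/api
DB_NAME=app_dev

# .env.test — shared stage server for QA specialists and client demos
API_URL=https://stage.example.com/api
DB_NAME=app_test

# .env.release — production; what end users actually work with
API_URL=https://api.example.com
DB_NAME=app
```

Because each environment points at its own database and services, a change made on dev or test can't disrupt production.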

Dev environment – here the programmers implement and test the app/system functionality, and they are responsible for the correctness of the changes made to the product.
The developer must find and fix at least the most obvious problems, and often think through various usage scenarios. Thus, the specialist gets rid of potential errors before the new functionality is integrated into other environments.
We adhere to the approach that the tester, ideally, shouldn't hunt for bugs but should confirm the quality of the programmers' work. Of course, it's impossible to write a complex system 100% correctly at once (that's what QA testing is for), but the developer can send the work for checking after ensuring that the system works correctly in the main cases.
Programmers often write unit tests to check a specific feature or module. When the feature is introduced in the dev environment, the developer catches most bugs and fixes them immediately.
This enables the QA team to effectively test software for the presence of hidden and difficult to reproduce defects, and not to spend too much time describing a large number of errors once they receive the update from the dev environment.
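As a hedged illustration of such a developer-written unit test (the function and its checks are invented for this example; in real projects these would be written with frameworks like JUnit or NUnit), here is a Python sketch:

```python
import unittest

# Hypothetical function under test — a stand-in for a real feature/module.
def apply_discount(price: float, percent: float) -> float:
    """Return the price after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (100 - percent) / 100, 2)

class ApplyDiscountTest(unittest.TestCase):
    def test_typical_discount(self):
        self.assertEqual(apply_discount(200.0, 25), 150.0)

    def test_no_discount(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)

    def test_invalid_percent_rejected(self):
        # the untypical case: invalid input must fail loudly, not silently
        with self.assertRaises(ValueError):
            apply_discount(100.0, 120)

# Run the suite programmatically; exit=False keeps the interpreter alive.
unittest.main(argv=["example"], exit=False)
```

Tests like these catch the obvious regressions on the developer's machine, long before the build reaches the QA team.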
Test (stage) – the QA specialists take over. A test build is assembled and a test environment is configured. To prevent conflicts between the developers' and the testers' work, QA activity takes place in the test environment, which runs on a separate test server.
Next, a general build test is performed – so-called installation testing. At this stage, QA engineers check that the build installs and uninstalls correctly.
Then smoke testing begins – testing of the key functionality whose absence or failure makes using the system meaningless. It's followed by critical path testing, in which the relevant product elements and features are checked for proper operation under standard use, and integration testing, in which QA engineers check the interaction of different parts of the functionality.
Now let's look at how each individual feature is tested!
The feature testing life cycle looks as follows:
- The developer delivers the implemented feature for testing.
- QA engineer tests the developed feature.
- The tester creates a bug report for each detected bug. He/she can also write a recommendation for the feature, which is approved by the client and the project manager.
- Priorities of bugs and features are set. The team determines what goes to the next development iteration and what needs correction in the current sprint.
- Product debugging – programmers fix bugs and prepare the build.
- Feature testing. If bugs are detected, the testing and debugging process is repeated. When everything works well, the feature is marked as implemented.
- When all features are ready, the release build passes regression testing.
During regression testing, a QA engineer tests the implemented changes (code merges, defect fixes, migration to another server or platform, etc.) to make sure that the previously built functionality works as before.
The success formula of every project is solving the users' problems. At the testing stage, it's important not only to identify and fix bugs but also to ensure that the program fulfills the client's tasks.
The purpose of functional software testing is to verify compliance with functional requirements, that is, the system ability to solve user problems.
Functional tests are conducted in two main aspects – requirements and business processes. They take into account the specifics of the product application and peculiarities of the target audience.
This is exactly where an understanding of the customer's field of operation and business needs is a must. To test how the system solves problems, the QA team focuses on "running" the scenarios of its daily use.
Functional tests can be presented at all levels of the software testing process:
- Modular (component) – individual small parts of the application are checked (this includes unit tests, written by developers);
- Integration – the interaction of the product functionality is tested;
- System – the software solution is checked as a whole.
Sometimes test automation comes into play: the main functions and steps of a test – launch, initialization, execution, analysis, and obtaining the result – are performed automatically by a program written by the tester.
We apply automation less frequently, as we mainly work with small and middle-sized projects in which it doesn’t pay off.
Besides verifying that the product's real functions comply with the functional requirements, it's necessary to check properties such as reliability, performance, scalability, usability, and security.
In other words, properties that don't relate to the system's functionality and are determined by non-functional requirements.
Non-functional testing is used to check them – a QA engineer assesses the product quality as a whole. Since any of these factors – a security vulnerability, inconvenient use, inability to withstand high loads – may become critical and destroy all the effort, we pay special attention to this type of testing.
Our QA team tests the system's performance – how ready it is to work at peak loads – and checks its security by analyzing the system for vulnerabilities to malware and hacker attacks.
Another type of testing that can't go unmentioned is alpha testing. Its main goal is to identify the most critical errors in the code. It's applied in the early development stages, when the software solution is still far from the state in which it should reach the end user.
Generally, alpha testing is held internally – testers and programmers are connected but users and the customer don’t get involved in the process. In some cases, the system functioning is verified by potential users or the client.
Then, the application/site/system goes into the release environment. At this step, beta testing is held – QA specialists check a beta version, i.e. the product which is almost ready for release.
The purpose is to find the maximum number of errors that weren’t identified at the development and testing stages and eliminate them before the project launch.
The client and employees of his/her company, as well as specially invited users, can participate in beta testing. This is called closed beta testing, and it's the approach we rely on most often.
But when you need to get the maximum amount of user feedback, the beta is opened to the public. In this case, volunteers are invited to check the product.
They can be driven by curiosity, a desire to take part in the software development process, or some perks (for instance, I was one of the beta testers of a new online community, which let me attract followers much more easily).
Interestingly, beta testing is often part of a product's promotion strategy: beta testers often become regular users, and the product gets a lot of early reviews, which builds an understanding of the target audience's needs and expectations. What's more, it helps clarify how to develop the product in the future.
Finally, when testing a mobile application, it’s important to ensure that it meets the store requirements. The App Store has always been strict with products – verification usually takes several days and, if something is wrong, the application is sent back for revision.
Even Google Play, previously more lenient, is constantly tightening its publication rules. Therefore, our specialists monitor the store requirements and their updates, check mobile apps for compliance, test UI/UX, and also look at the requirements in different countries.
Thus, we mainly apply the following types of software testing:
- Requirements testing
- Unit testing
- Smoke testing
- Installation testing
- Critical path testing
- Exploratory testing
- Functional testing
- Non-functional testing
- Regression testing
- Alpha testing
- Beta testing
The main tools we use for software testing
Postman – a set of tools for testing APIs (application programming interfaces). It helps test functionality before the API is integrated into the client application.
It also makes it possible to create API documentation, write and run tests, and replace real data from the server with test values, simplifying the work of QA engineers.
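The idea behind such API checks – send a request, then assert on the status code and response body – can be sketched in plain Python (Postman itself uses JavaScript test scripts). The endpoint below is a local stand-in spun up just for the example, not a real service:

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# A local stand-in for the API under test, so the example is self-contained.
class FakeApi(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps({"status": "ok", "users": 3}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the example's output quiet
        pass

def check_endpoint(url):
    """GET the URL and return (status code, parsed JSON body)."""
    with urllib.request.urlopen(url) as resp:
        return resp.status, json.loads(resp.read())

# Spin up the stand-in server on a free port and run the checks.
server = HTTPServer(("127.0.0.1", 0), FakeApi)
threading.Thread(target=server.serve_forever, daemon=True).start()

status, payload = check_endpoint(f"http://127.0.0.1:{server.server_port}/health")
assert status == 200                 # the endpoint responds
assert payload["status"] == "ok"     # and returns the expected body
server.shutdown()
```

Replacing the real server with a controllable stand-in like this is exactly the "test values instead of real data" trick mentioned above: the checks stay fast and repeatable.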
Postman + Newman + Jenkins – this combination is used to automate API testing. We’re going to introduce it in our software QA process in the near future.
Cucumber + Appium – applied for automating mobile app testing.
Charles – allows engineers to monitor HTTP/HTTPS traffic. The program works as a proxy server between a mobile application (in our case) and its server. Charles records and saves all requests that pass through the phone connected to it and lets engineers edit them.
PICT – a very convenient tool for testing the values (and combinations of values) of the checked parameters. By parameters we mean the input data used to obtain some result: e.g., the calculation of a retirement pension (the value) depends on a person's gender, age, and time in employment (the parameters), and PICT is used to check combinations of their values.
In other cases there may be 50 or more parameters, and the number of their combinations can reach into the thousands. Testing everything exhaustively would require 1,000 checks or more, while with PICT you can generate a specific test sequence and cover all the parameters with around 20 tests instead of the 1,000 possible. Imagine how that accelerates and improves the software testing process!
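The trick behind this reduction is pairwise (all-pairs) coverage: instead of every full combination, you only guarantee that every pair of parameter values appears together in some test. As a rough sketch of the idea – this greedy toy implementation is illustrative only, not PICT's actual algorithm – applied to the pension example:

```python
from itertools import combinations, product

def pairwise_suite(parameters):
    """Greedy all-pairs reduction: repeatedly pick the full combination
    that covers the most not-yet-covered value pairs."""
    values = list(parameters.values())
    # every (param i, param j, value of i, value of j) pair must appear
    # together in at least one generated test
    uncovered = {(i, j, v1, v2)
                 for i, j in combinations(range(len(values)), 2)
                 for v1 in values[i] for v2 in values[j]}
    suite = []
    while uncovered:
        best = max(product(*values),
                   key=lambda combo: sum(combo[i] == v1 and combo[j] == v2
                                         for i, j, v1, v2 in uncovered))
        suite.append(best)
        uncovered -= {(i, j, v1, v2) for i, j, v1, v2 in uncovered
                      if best[i] == v1 and best[j] == v2}
    return suite

# The pension example: 2 * 3 * 3 = 18 exhaustive combinations, but far
# fewer tests suffice to cover every pair of parameter values.
params = {"gender": ["M", "F"],
          "age": [30, 45, 60],
          "employment_years": [10, 20, 30]}
tests = pairwise_suite(params)
```

Even on this tiny example the suite shrinks to roughly half the exhaustive 18 combinations; with 50 parameters the savings are dramatic, which is what makes tools like PICT so valuable.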
Apache JMeter, YandexTank – we use these tools to test application performance.
JUnit, NUnit – serve for writing unit tests.
Fiddler – used to test requests and services.
Pixel Perfect – a browser extension that helps test layout.
Burp Suite – a great tool for testing system security.
Checklist – QA specialists can either compile checklists specifically for the project needs or use already prepared checklists, customizing them for specific tasks. For example, our testers have a common checklist for mobile app testing, which consists of 200+ points.
Fabric Crashlytics – a popular tool for sharing builds within the team and collecting user statistics.
Final words
As you see, proper and thoughtful software testing is a difficult process that plays a crucial role in obtaining a quality solution.
When creating a product, remember that even a powerful marketing campaign won't achieve success on its own. Yes, at first you will attract many users, but they will run away as soon as bugs surface, the stores will quickly fill up with bad reviews, and the lifetime value will drop to zero.
If you have questions about the software testing process, or you need a team of experienced QA engineers, you're welcome to contact us! The consultation is free!