What happens if the testing phase is skipped
Does it happen that code is shipped without being tested at all? Paradoxically, more often than you would think! Let's consider one intriguing example I came across on the web a few days ago.
Image source: reddit.com
In the screenshot, we can clearly see that the airport software allowed users not just to minimize the tip but to cancel the charge entirely by entering a negative value. Can you imagine the scale of the damage if such software went live in one of the world's busiest airports, such as Hartsfield-Jackson Atlanta Airport or London Heathrow?
Being a Quality Assurance expert myself, I came to the conclusion that the app hadn't been tested at all. Why? Read on to find out.
Types of testing that shouldn't be skipped
First and foremost, a QA professional checks whether the system logic is implemented in compliance with the given requirements; this is positive functional testing. In this particular case, it means we check whether the system makes all the calculations correctly — be it 18%, 20%, 22%, or a custom tip amount added to the obligatory charge. Once we make sure the system functions as expected, we move on to check how it behaves in unforeseen scenarios initiated by the user. This is called negative testing, and this is when the real fun begins.
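To make the positive case concrete, here is a minimal sketch of what such checks might look like. The `calculate_total` function is a hypothetical stand-in for the kiosk's tip logic, not code from the actual app:

```python
# A sketch of positive functional testing for tip calculation.
# calculate_total() is an illustrative helper, assumed for this example.

def calculate_total(base_charge: float, tip_percent: float) -> float:
    """Return the base charge plus the selected tip percentage."""
    return round(base_charge + base_charge * tip_percent / 100, 2)

# Positive tests: each documented tip option, plus a custom amount,
# must increase the final payment by exactly the expected sum.
assert calculate_total(50.00, 18) == 59.00
assert calculate_total(50.00, 20) == 60.00
assert calculate_total(50.00, 22) == 61.00
assert calculate_total(100.00, 15) == 115.00  # custom tip
```

If all of these pass, the happy path works as specified — and only then does the tester move on to the scenarios the requirements never mention.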
The tester must think broadly and have the creativity to anticipate unexpected user behavior. One of the methods applied in negative testing is equivalence partitioning — dividing inputs into equivalence classes. Using this approach, the tester checks which characters the system accepts in the field. First, can we enter symbols other than digits — letters or special characters? Second, can we input a negative value, 0, or an amount exceeding 1000? Applying this approach, we expect the following results:
- A positive number up to 1000 will increase the final payment by the amount we've entered.
- If we select 0, the final amount will stay the same.
- In the case of a negative value or special characters, the system is expected to display an error and prevent the user from proceeding with the payment.
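The equivalence classes above can be sketched as a validation routine plus negative tests. The `validate_tip` helper and its limits are assumptions for illustration, not the kiosk's real code:

```python
# A sketch of input validation derived from the equivalence classes:
# valid amounts from 0 to 1000 are accepted; negative values,
# non-numeric input, and amounts over 1000 are rejected.
# validate_tip() is a hypothetical helper for this example.

def validate_tip(raw_input: str) -> float:
    """Parse a tip amount, rejecting anything outside 0..1000."""
    try:
        value = float(raw_input)
    except ValueError:
        raise ValueError(f"Not a number: {raw_input!r}")
    if value < 0:
        raise ValueError("Tip cannot be negative")
    if value > 1000:
        raise ValueError("Tip exceeds the 1000 limit")
    return value

# Positive classes: accepted as-is.
assert validate_tip("25") == 25.0
assert validate_tip("0") == 0.0

# Negative classes: each must raise an error instead of
# silently reducing the final charge, as in the screenshot.
for bad in ("-5", "abc", "$#!", "1001"):
    try:
        validate_tip(bad)
    except ValueError:
        pass  # expected
    else:
        raise AssertionError(f"{bad!r} was wrongly accepted")
```

A check like the `"-5"` case is precisely what would have caught the negative-tip bug before the software reached the airport.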
As you may have noticed, the whole point of negative testing is to determine how the software responds to non-standard input. Essentially, the tester thinks through scenarios that provoke system errors. This generally goes beyond the written requirements, which complicates testing even further, especially when a developer is the one writing the test scenarios. And that is one more compelling argument for entrusting quality assurance to professional testers rather than burdening developers with it. In most cases, developers, even when they test their own code, do not pay enough attention to negative testing. Their main task is to implement the specified requirements, and if these exceptions are not clearly described in the SRS (Software Requirements Specification), from their standpoint the application works just fine.
Another eye-catching issue in this example is the inconsistency of the app's visual layout. In the screenshot we can clearly see that the digits on the buttons are left-aligned and the dollar sign is slightly cut off. Had the application been tested, these flaws would hardly have slipped through. It may seem a minor thing compared with the functional issues, and a user may not even notice it immediately, but user-friendliness is still an essential part of any application, especially one with a large number of active users. One must therefore conclude that this application was not tested at all — neither its functional nor its visual component. This is quite alarming, given that the app performs financial calculations and transactions.
Best time to involve a tester in the project
Spoiler alert: the earlier, the better. Let's draw a parallel with a serious disease that is asymptomatic in its early stages. Everybody knows it is much cheaper and safer to prevent a disease, or at least start treating it before it develops into a severe illness. The same goes for software testing: statistics show that skipping investment in testing at the project design stage carries higher risks than you might think.
According to IBM research, it is much more reasonable to engage a professional tester even before the development phase starts. One may ask: what is there to test if there is no basic functionality, no design, not even a single line of code? Thinking this way, customers consider paying a tester at the design stage an extravagance and involve one in the middle or even late stages of development (in the worst case, after the application goes live).
In reality, not only the software itself but also the requirements should be tested if you don't want nasty surprises. Say, the business analyst elicits functional requirements and stipulates them in the SRS. (Customers often skip the BA's help as well, but you'd better read a whole new blog post on that from our BA expert, coming soon on the Symfa blog.) If the BA does this in cooperation with a quality assurance professional, the tester examines the requirements through the prism of their working specifics and can detect potential weak spots in the future system. The tester then provides recommendations to the business analyst, so the BA can make adjustments and document the requirements in more detail. The developer, in turn, is guided by a transparent set of requirements, which improves the chances of creating a high-quality product without unexpected expenditures and within the agreed time frame.
OK, let's imagine you've skipped requirements testing and gone into development fully armed, with a complete team. Even if there were flaws in the requirements, at the early development stage it would still be possible to fix the arising issues at minimal cost. The reason is that before the application is assembled into a single system with multiple dependencies and interconnected modules, it is easier for a QA engineer to cover each separate unit with tests and ensure each of them functions properly.
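That unit-level coverage might look like the sketch below: two independent modules, each verified on its own before anything is wired together. Both functions are hypothetical stand-ins, named only for illustration:

```python
# A sketch of unit-level checks applied while modules are still separate,
# before they are assembled into one interconnected system.
# Both helpers are illustrative, not code from a real project.

def format_amount(cents: int) -> str:
    """Display module: render an amount in cents as a dollar string."""
    return f"${cents // 100}.{cents % 100:02d}"

def charge_with_tip(charge_cents: int, tip_cents: int) -> int:
    """Payment module: combine the charge and an already-validated tip."""
    return charge_cents + tip_cents

# Each unit is covered independently — no need to stand up the whole app
# to know that each piece behaves correctly on its own.
assert format_amount(5900) == "$59.00"
assert charge_with_tip(5000, 900) == 5900
```

Once these pieces are bolted into a larger system, a bug in either one would surface far from its source — which is exactly why testing them while they are still isolated is so much cheaper.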
It is a different story with the complete system, where multiple functions are already linked and depend on each other. Such a scenario is a nightmare for any development team: you fix one bug only to find ten more popping up somewhere else. It turns into an avalanche that cannot easily be stopped. In the most severe cases, further work on the existing application becomes utterly pointless, and it is more reasonable, in terms of both time and money, to restart the project from scratch.
Of course, the situation is rarely hopeless. In most cases the problem can be settled, but the question is how much time and how many resources you will have to spend to fix it. Here are several examples of world-known giants that suffered substantial losses fixing serious bugs at the maintenance stage:
- Salesforce underwent a Multi-Instance Core and Communities Service Disruption that made the entire ecosystem unusable for days. Judging by the release, it appears no QA professional was involved in the testing.
- Amazon Web Services (AWS) encountered a 4-hour service disruption due to late debugging, which cost approximately $1 million per hour.
I have first-hand experience fixing bugs at late development stages, and here is a real-life case from my practice:
Some time ago, I joined a project related to financial calculations at the late development stage. The key problem was the vague requirements the developers had to follow. Because of this, I came across multiple contradictions that had to be fixed. For example, a single formula was applied to different calculations. In one case the value was calculated correctly, while in another the calculations were wrong. The developers amended the formula, which fixed the faulty part but broke the one that had worked fine. The formula was then amended several times by different developers, each trying to fix the separate feature they were responsible for. In the end, the issue was resolved by creating a separate formula for each feature and parallelizing the process. As a result, an issue that could have been prevented took several months to fix. This wasn't the only issue on the project, and the client would have paid much less for bug fixing had QA been involved at the requirements preparation stage.
On average, feature delivery takes half the time without testing. I totally understand that it is tempting to try to create a working application in six months instead of a year. Of course, it is up to you to decide at which stage a QA should be engaged, and whether to engage one at all. But my duty as a practicing QA engineer is to warn you about the possible risks and do my best to help you mitigate them. So before you make a final decision, I urge you to assess several aspects:
- Application domain
If your application belongs to the Healthcare, Insurance, or Financial domain, has multiple interconnected modules, involves financial calculations, and has a large number of users, my recommendation is unequivocal: such an app can't do without a tester, or even a dedicated QA team. In other cases, QA is optional, although you might want to discuss it with an expert and make an informed decision.
Read more from this series of articles coming soon on our blog, all based on expert opinions of our non-development talents:
- Clients skimping on the BA part: this is what a BA has to say on the matter.
- PM too expensive for you? After reading the comment from an ex-developer (now a Head of Presales), you may reconsider your priorities.