While testing may seem easy, there are quite a few ways to get it wrong. Planning. Scheduling. Communication. If you’re not careful, testing and fixing can quickly become an afterthought. Once that happens, issues are going to slip through the cracks, and nobody wants that. So, let’s review some of the key ways to make sure that your testing cycles go off without a hitch.
1. Leave Time for Fixing
Remember that knowing is only half the battle. Setting aside time for testing is pointless if you haven’t also set aside time for fixing. Testing isn’t about testing so much as it is about enabling fixing. Gerald Weinberg says it best…
Testing gathers information about a product; it does not fix things it finds that are wrong. Testing does not improve a product; the improving is done by people fixing the bugs that testing has uncovered. Often when managers say, “Testing takes too long,” what they should be saying is, “Fixing the bugs in the product takes too long”—a different cost category. 1
This may seem obvious, but it’s easy to forget during crunch time. Once issues are discovered, developers need time to fix them, and the organization needs time to retest those fixes. Without a plan and time for both, testing isn’t very useful.
2. Encourage Clarity
The back-and-forth of reporting bugs and requesting more information can create unnecessary overhead. A good bug report saves time by avoiding miscommunication or the need for follow-up questions. Similarly, a bad bug report can lead to a quick dismissal by a developer. Both of these can create problems.
Anyone reporting bugs should always strive to create informative bug reports, but it’s just as important that developers go out of their way to communicate effectively as well. For instance, if a developer needs more information, it’s best if they take the time to write a detailed request. Teach people to write good reports, but hold your developers to high standards as well. If everyone is going above and beyond to communicate effectively, everyone’s productivity benefits.
Additionally, when developers are resolving issues, few things are more effective at improving communication than writing detailed resolutions. Just as developers expect detailed and well-written bug reports, so too should testers expect detailed and well-written issue resolutions. Good communication goes both ways, and it’s everyone’s job to ensure that it doesn’t break down. Retesting is important, and clear resolutions facilitate better retesting.
3. Discourage Passing the Buck
Just as testers can fall short in their reports, developers can easily fall short in their effort to understand the reports. Ideally, developers should be slow to refuse or push back on a bug report due to lack of information. Instead, they should make at least a brief attempt to do as much as they can and then, if they still can’t reproduce the problem, should use the poor bug report as an educational opportunity to help the tester understand how the report could have been better.
Testing and fixing is about collaboration. There’s a gray area of debugging between those two steps that involves reproducing the bug and identifying the parameters that cause it to be reproduced. When developers are buried under a mountain of bugs, it can be tempting to push back on bug reports that aren’t perfect, but frequently, they don’t need to be perfect. While it’s important that developers don’t waste their time on wild goose chases, it’s also important that bugs aren’t just shuffled back and forth between testers and developers.
4. Manual Testing Should Be Exploratory
Many teams prefer to script manual testing so that testers follow a prescribed set of steps and work their way through a set of predefined tasks to test the software. This misses the point of manual testing. If something can be scripted or written down in precise terms, it can be automated and belongs in the automated test suite. Real-world usage of software won’t be scripted. So testers should be free to probe and break things without following a script.
This is one of the primary strengths of manual testing. People can think for themselves. They should be able to explore and wander without being expected to follow a certain route. 2 It’s important that testers have a clear understanding of what the software is supposed to do, but they shouldn’t be restricted in how they go about verifying that. The more they are allowed to apply their intuition, experience, and skills, the more likely they are to uncover precisely the type of problem that automated testing would never catch.
5. Test Frequently
I’ve been referring to the manual testing process as a phase, but don’t misinterpret that as something that only happens at the end of a project. Like all other forms of testing, manual testing works best when it happens frequently throughout the project, generally weekly or bi-weekly. This helps prevent large backlogs of issues from building up and crushing morale.
Software testing expert and author James Bach uses the analogy of driving a car, asking when the “look out the windshield” phase of driving occurs. Of course, looking out the windshield is virtually non-stop while driving. Driving while only occasionally glancing out the windshield to check that you’re on course is risky to the point of being crazy. You’d quickly go off the road or hit another car. Testing is the same way: frequent testing is the best approach.
None of this is to say you can’t or shouldn’t have a really intense final testing phase before launching, but it should be intense by choice, not because you’ve let bugs pile up for a lack of testing.
6. Fix Bugs Once
While manual testing is cheaper than having customers encounter bugs, it’s still the most resource-intensive way to find and fix issues before launch. Any time you can save on the testing and fixing cycles will pay for itself time and again. Once an issue has been found by a human tester, you want to minimize the chance of it being found by another human tester after being fixed. There are two components to this.
- Take the time to ensure that it’s fixed correctly. This generally involves communicating with testers or domain experts to eliminate grey areas or confusion. Better to spend a little extra time fixing it right the first time than going through the process multiple times and wasting cycles.
- Ensure that it doesn’t quietly break again in the future. Once you know how something broke, you should always add automated tests to make sure that it doesn’t break again.
Remember, bug fixes are more likely to introduce new bugs than the original code was. Additionally, developers occasionally fix problems incorrectly due to miscommunication or misunderstanding. For the best long-term results, developers should not only fix the bug but also add automated tests to ensure that it won’t break again.
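As a sketch of that second point, imagine a hypothetical `parse_price` helper that once crashed on inputs containing currency symbols. A minimal regression test encodes the exact scenario from the original bug report, plus the original happy path, so neither can quietly break again (the function, the bug, and the names here are all made up for illustration):

```python
# Hypothetical example: a price parser that once crashed on inputs
# with currency symbols and thousands separators.

def parse_price(text: str) -> float:
    """Parse a user-entered price like '$1,299.99' into a float."""
    cleaned = text.strip().lstrip("$").replace(",", "")
    return float(cleaned)

def test_parse_price_handles_currency_symbol():
    # Regression test for the (hypothetical) bug report:
    # "Entering '$1,299.99' crashed the checkout page."
    assert parse_price("$1,299.99") == 1299.99

def test_parse_price_plain_number_still_works():
    # Guard against the fix breaking the original happy path.
    assert parse_price("42.50") == 42.50

test_parse_price_handles_currency_symbol()
test_parse_price_plain_number_still_works()
```

The test names reference the original report, so if one ever fails, the developer immediately knows which historical bug has resurfaced.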
7. Dedicated Testers Work Best
If at all possible, it’s best to have dedicated testers rather than have developers test their own work. 3 It’s not only difficult for developers to switch from a building to a breaking mindset, but some issues may be due to a fundamental misunderstanding of the requirements. This isn’t to say that developers can’t test their own work, but that it should be avoided if possible.
If your developers have to do the testing, the best approach is to have them test each other’s work rather than their own. On the surface, this might seem counter-intuitive. Who would know better how to break software than the individual who wrote it, right? But it turns out that it’s rather difficult for people to test their own code. Not impossible. But difficult. Having a second set of eyes involved will work wonders for uncovering issues.
Finally, if you only have one developer, and that developer must do the testing, it’s best to at least have time dedicated to testing. Writing software and breaking software require different mindsets. If you dedicate time to get into the mindset of looking for holes and trying to break your software, you’ll invariably uncover more problems than if developers only tested during development.
8. You Can’t Find Everything
Testing can reveal the presence of bugs, not their absence. (Edsger Dijkstra)
Testing is harder to estimate than most tasks because the whole process revolves around finding an unknown number of problems. Trying to find every bug isn’t practical. 4 Instead, it’s best to understand the rate of bug discovery. All else being equal, the rate at which bugs are discovered will eventually decrease. Then, at some point, the cost of pursuing further bugs will make additional testing impractical. Finding this point will help you gauge when testing should begin winding down.
The key here is to ensure that your discovery rate is declining because of the number of bugs remaining, not because of decreases in testing effort. For instance, if you have three people testing one week and only one person testing the next, your discovery rate will likely decline due to fewer testers rather than higher quality. Similarly, three days of testing will generate fewer bugs than five days of testing, so long weekends and vacations can often skew the results. Ultimately, you’ll have to make a judgment call based on your comfort with a given level of confidence in the quality of the software.
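To make that concrete, here is a minimal sketch, with entirely made-up numbers, of normalizing weekly bug counts by tester-days so that a drop in raw discoveries isn’t mistaken for rising quality:

```python
# Hypothetical weekly data: (label, bugs found, testers, days tested).
weeks = [
    ("week 1", 30, 3, 5),  # full team, full week
    ("week 2", 12, 1, 5),  # only one tester available
    ("week 3", 10, 3, 3),  # long weekend cut testing short
]

for label, bugs, testers, days in weeks:
    tester_days = testers * days
    rate = bugs / tester_days  # bugs found per tester-day
    print(f"{label}: {bugs} bugs raw, {rate:.1f} per tester-day")
```

Here the raw count falls from 30 to 12 between weeks one and two, but the per-tester-day rate actually rises from 2.0 to 2.4; only week three shows a genuine decline. Raw counts alone would have told the opposite story.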
9. Bugs Come in Clusters
It may sound unintuitive, but the more bugs a given area of the application has, the more likely it is to continue producing bugs. 5 The takeaway here is that if you find a particularly error-prone area of your software, it’s not necessarily due to exhaustive testing of that area. Instead, it’s likely a sign that the area will continue to have a disproportionate number of bugs and needs heavier testing, not lighter testing.
As I mentioned earlier, these clusters should also serve as red flags that there may be a problem in the underlying process that’s leading to the increase in bugs. Alternatively, there could be a fundamental misunderstanding by the developer or developers responsible for that area. In any case, a large cluster of problems usually warrants deeper investigation to see long-term improvement.
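One cheap way to spot these clusters is to tally resolved issues by component, as you might from a bug tracker’s export. The component names and counts below are hypothetical, just to show the shape of the analysis:

```python
from collections import Counter

# Hypothetical data: one entry per resolved bug, tagged by the
# component it was filed against.
bug_components = [
    "checkout", "checkout", "reports", "checkout", "auth",
    "checkout", "reports", "checkout", "search", "checkout",
]

clusters = Counter(bug_components)
for component, count in clusters.most_common():
    share = count / len(bug_components)
    print(f"{component}: {count} bugs ({share:.0%})")
```

If a single component owns a majority of the bugs, as "checkout" does here with six of ten, that’s the area that deserves heavier testing and a closer look at the process behind it.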
10. Avoid Politics
Despite its seemingly objective nature, testing can be a highly political issue. Project managers may feel like high bug counts reflect negatively on their work and may try to make bug counts look lower than they actually are. Or, developers and testers can turn situations into “us vs. them” over a variety of communication issues. In other situations, we’ve seen teams that want to separate issues reported by clients from those reported internally so that clients don’t see any “behind-the-scenes” issues. These are just a few of the possible problems that can and do arise during testing, and it’s just as important to keep these problems in check as it is to perform the testing and fixing.
If quality takes a back seat to politics and bureaucracy, then the effort will likely be in vain. It’s important for everyone to remember that they’re on the same team working towards the same goal. Everyone wants to ship the highest quality software in the shortest time. There will always be tradeoffs, and not everyone will agree with every tradeoff, but staying focused on the big picture can help everyone put the tradeoffs and decisions in context.
Testing and fixing software can be subtle, tricky, and even political, but as long as you're able to anticipate and recognize common problems, you can keep things running smoothly. The key is to remember that it takes planning and preparation to run a good testing phase.
- Gerald Weinberg, Perfect Software And Other Illusions About Testing (Dorset House Pub., 2011), 13. ↩
- Cem Kaner, James Bach, and Bret Pettichord, Lessons Learned in Software Testing: A Context-Driven Approach (Wiley Computer Publishing, 2002), 18. ↩
- Glenford J. Myers, The Art of Software Testing, Second Edition (John Wiley & Sons, 2004), 17. ↩
- Myers, The Art of Software Testing, Second Edition, 12. ↩
- Myers, The Art of Software Testing, Second Edition, 19-20. ↩