When we talk about quality assurance, we’re not only talking about testing. Quality should never be relegated to just one phase of software development. It’s both an ongoing effort to uncover problems and a commitment to fixing them. It’s not the responsibility of one person or one team, but an effort that involves everyone in some form or fashion.
By itself, a testing strategy doesn’t create quality. It only creates an opportunity to improve the quality. That improvement happens when your team becomes good at addressing all of the bugs and issues that are brought to its attention through the various tools and processes below.
Tools & Processes
Quality is multi-faceted. Is the code well-written? Does it work? Is it bug-free? Fast? Secure? With that diversity of quality metrics, there’s no silver bullet for improving quality. Each tool or process brings unique strengths and weaknesses, and each is complemented by the others.
Using a variety of tools will create a better and stronger net to prevent problems from slipping through the cracks. You may not put all of these tools and processes in place from day one, but if you want to uncover as many potential problems as possible, you’ll want to use a combination of tools and techniques. 1
Deciding which tools and processes are right for your team depends on quite a few variables. We’ll review some guidelines for each, but I could never prescribe a single approach and set of tools that would be perfect for your team. You’ll need to consider everything we discuss and put together a plan that fits your team. Don’t worry, though; the final section of this course is focused exclusively on helping you make those decisions. You’re likely already familiar with at least some of these, and that’s great. If not, I suggest setting aside some time to learn a little about each of them before creating a plan of attack.
Source Control

Source control, also known as version or revision control, is your vault for all of the code that you create. It ties into almost all of your quality-related tools and provides a layer of quality control in its own right. If there’s one tool that you can’t create quality software without, it’s source control.
Source control can help you revert changes if you find out that they break your application. It can help you compare different versions and find out where you may have introduced a bug. And it can show you who wrote a given line of code when you have questions. You could also use it to associate specific changes in the code to bugs in your bug tracker. In some ways, you can think of source control as the foundation for all of your quality efforts, but we’ll discuss that more in-depth shortly.
Release Management

In conjunction with source control, release management is another process that isn’t directly related to improving the quality of your software but plays a key role in enabling higher quality. If you think of your development process as a machine, release management is the oil that keeps it running smoothly. It makes it easy for you both to update your software and to recover from updates that go bad. Along with source control, we’ll discuss this in much more depth later.
Smoke Tests

Smoke tests are used to ensure that all of the key parts of your application are working after a new release. This can be done by manually inspecting key functionality, but it’s best to automate it as part of your release process.
Whenever you release new code, you’re introducing risk. Even if your staging or QA environment matches your production environment almost exactly, there’s always a chance of one little thing throwing a wrench into the process. Smoke tests ensure that when this happens, you know right away and don’t have to wait for customers to alert you to the problem. That way, you can roll back to your previously deployed state and fix things without the time pressure of having your application offline for your customers.
For instance, if your application has background processing and search in addition to the main application, you might want your release process to automatically verify, immediately after a release, that the site is live and that both the search and background processes are still running.
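A process like this can be sketched as a small script that runs at the end of a deploy. Everything here is a stub for illustration; in practice, each check would hit a real health endpoint, run a known search query, or look for a recent worker heartbeat.

```python
# A minimal sketch of an automated smoke-test runner, meant to execute at
# the end of a deploy. The check names and stubbed checks below are
# hypothetical; each would normally exercise a real part of the system.

def run_smoke_tests(checks):
    """Run each (name, check) pair; return the names of failed checks."""
    failures = []
    for name, check in checks:
        try:
            ok = check()
        except Exception:
            ok = False  # a crashing check counts as a failure
        if not ok:
            failures.append(name)
    return failures

if __name__ == "__main__":
    checks = [
        ("site is live", lambda: True),            # e.g. GET / returns 200
        ("search responds", lambda: True),          # e.g. a known query succeeds
        ("background worker alive", lambda: True),  # e.g. recent heartbeat
    ]
    failed = run_smoke_tests(checks)
    if failed:
        print("SMOKE TESTS FAILED:", ", ".join(failed))  # time to roll back
    else:
        print("All smoke tests passed.")
```

If any check fails, the release script can trigger the rollback described below instead of leaving customers to find the problem.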
Even with all of the quality assurance in the world, you’ll still occasionally have post-release bugs that could cause significant problems. In these cases, it’s best if you’re able to quickly roll back to a previous working release so that customers aren’t affected while you work on finding a solution to the problem. This is a common feature with most release management tools, but it will likely require some testing and tweaking to get it running smoothly for your environment.
Peer Code Reviews
Code reviews, also known as inspections or walkthroughs, involve team members explicitly reviewing each other’s code to both look for problems and provide subjective feedback and constructive criticism. These not only help uncover current problems but prevent future problems through sharing and education. When used reliably, peer code reviews can uncover 60% of the defects in a given application by themselves. 2
You may not hear companies advertise their use of code reviews, but make no mistake, it’s a very common practice among companies that take quality seriously. 3 Most studies have found that code reviews are generally cheaper than testing because they require less time to find errors. 4 Because they find defects earlier in the process and lead to more maintainable code, reviews are the most effective way to both increase quality and decrease costs. Moreover, the added benefits of educating junior team members and increasing domain knowledge across the team pay off in the long term as well.
Since peer code reviews are a manual process, they’re relatively easy to roll into your existing processes. The only challenge is making time for them. Most important, though, code reviews fall into the class of human processes that are generally better than automated, computer-based testing at uncovering certain classes of mistakes. Of course, automated testing is good at catching and preventing different errors that manual processes tend to miss. Using manual and automated approaches to testing in complementary ways will achieve better results than either individually. 5 That brings us to our next topics.
Static Code Analysis
Humans aren’t perfect, so peer code reviews won’t be perfect either. Static code analysis can help improve review coverage by adding an automated review that finds common mistakes. It can warn you about code smells, potential security problems, cyclomatic complexity, and even modules that change too frequently. All of these metrics help you keep an eye on problem areas and identify which ones could use refactoring.
One huge benefit of static code analysis in conjunction with peer code reviews is that static code analysis can catch a lot of the sometimes nit-picky smaller issues before a code review. This not only saves time because developers can focus on more important facets of the code, but it can also help prevent developers from feeling picked apart over minor issues during code reviews. As always, lean on automation where you can, but back it up with human processes.
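To make the idea concrete, here’s a toy static-analysis pass, a sketch that estimates each function’s cyclomatic complexity by counting branch points with Python’s `ast` module. Real analyzers check far more, but the principle of mechanically flagging complex code is the same.

```python
# A toy static-analysis pass: estimate each function's cyclomatic
# complexity by counting branch points in its syntax tree. This is a
# deliberately simplified metric for illustration only.
import ast

BRANCH_NODES = (ast.If, ast.For, ast.While, ast.IfExp,
                ast.BoolOp, ast.ExceptHandler)

def complexity_by_function(source):
    """Return {function_name: 1 + number of branch points}."""
    results = {}
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.FunctionDef):
            branches = sum(isinstance(child, BRANCH_NODES)
                           for child in ast.walk(node))
            results[node.name] = 1 + branches
    return results

sample = """
def simple(x):
    return x + 1

def branchy(x):
    if x > 0:
        for i in range(x):
            if i % 2:
                x += i
    return x
"""
print(complexity_by_function(sample))  # → {'simple': 1, 'branchy': 4}
```

A real tool would run a pass like this on every commit and warn when a function crosses a complexity threshold, flagging it for refactoring or extra review.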
Automated Testing

I’m going to use “automated testing” as an umbrella term for any scenario where a machine is running your tests for you. There are quite a few tools and approaches to automated testing, but regardless of the methodology, they help. This topic can be intimidating due to the diversity of opinions, but your best bet is to do just enough research to get started. You’re better off with a good test suite today than a perfect test suite tomorrow.
Automated testing is great for helping you detect or even prevent bugs, but it works best when it’s run regularly and acts as an angel on your shoulder. Having an automated test suite requires a fair amount of upfront investment in a testing framework and methodology, but the time it saves you in the long-term is enormous.
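As a minimal illustration, here’s what an automated test can look like with Python’s built-in unittest module. The `apply_discount` function is a hypothetical example, not something from this course; the point is that these checks run without a human.

```python
# A tiny automated test suite using Python's built-in unittest module.
# The function under test (hypothetical) lives alongside its tests, and
# a test runner or CI server can execute them on every change.
import unittest

def apply_discount(price, percent):
    """Return price reduced by percent, rounded to cents."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTests(unittest.TestCase):
    def test_typical_discount(self):
        self.assertEqual(apply_discount(80.00, 25), 60.00)

    def test_no_discount(self):
        self.assertEqual(apply_discount(19.99, 0), 19.99)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(10.00, 150)

# Run with: python -m unittest <module name>
```

Once tests like these exist, they cost almost nothing to re-run, which is what makes the continuous integration setup described below possible.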
Automated testing isn’t foolproof, though, and it’s important not to place blind faith in your automated tests. Since humans can’t create bug-free software, we also aren’t capable of writing bug-free tests. 6 It’s just as common to write buggy tests as it is to write buggy code, because tests are often treated as an afterthought. 7 Similarly, if the requirements are wrong to begin with, the tests will be wrong as well. So while automated testing plays a key role, it definitely has weaknesses that you’ll want to address with other forms of planning and testing.
Continuous Integration

Continuous integration is an extension of automated testing. Your test suite is only helpful if it’s run regularly, and CI is one way to ensure that happens. If you have an automated test suite in place, you can set up a CI server to automatically run the test suite whenever someone updates your codebase. If the tests fail, then the “build” is considered broken, and your team can be notified about the problem right away rather than discovering it at an inconvenient time. Some teams even have penalties associated with breaking the build, like a small fine or a good-natured ribbing from teammates.
Formal Manual Testing
The number one testing tool is not the computer, but the human brain—the brain in conjunction with eyes, ears, and other sense organs. No amount of computing power can compensate for brainless testing, but lots of brainpower can compensate for a missing or unavailable computer. Gerald Weinberg 8
Formal manual testing, commonly referred to as QA, helps fill in the gaps left by all of the other tools and processes. You can think of it as your last line of defense before shipping your software. The idea is to have a formal process for manually testing your software. Depending on the project, your developers may do the testing, or you may have a dedicated team of testers.
As software developers, it’s easy to get caught up in the benefits of automated tools, but automation isn’t enough. Computers are good at detecting certain kinds of problems, but there’s no substitute for the logic, reasoning, and subjective skills of a human. We’re going to stop there for now, but we have an entire section dedicated to helping you design a good QA process, so we’ll revisit this topic shortly.
Platform & Device Testing
While it should be a safe assumption that browser, operating system, and device testing are part of your QA process, it’s worth calling out on its own. With the increasing diversity of browsers and operating systems and the proliferation of devices, screen sizes, and resolutions, testing your products against different platforms and devices is more important than ever. Some teams even use device labs for incredibly thorough testing across multiple mobile devices.
Acceptance Testing

Acceptance testing is a specialized instance of more traditional QA testing. While QA strives to uncover a variety of objective defects by trying to break the software, acceptance testing asks the people who will actually use the software to take it for a spin and verify that it does what they need it to do. For that reason, acceptance testing should always be performed by the people who will use the software.
Usability Testing

Of all the tools and processes we’ve discussed, none addresses problems with usability. While usability testing is more involved, costly, and time-consuming, it’s the only way to find subjective problems in execution. Bugs are objective; they’re reproducible. Usability problems are neither. Without usability testing, usability falls somewhere between personal preference and best guess. That leaves a lot of room for error.
For most teams, this is the most challenging level of quality assurance to embrace. Due to its subjective nature and costs, it’s often left behind in the rush to get software out the door. Unfortunately, there’s a whole class of issues that will only be caught with usability testing. Some of these might be uncovered indirectly through business analytics, but at that point, the problem is already costing money. Usability problems can be just as costly as normal bugs, if not more so. As far as defects go, usability problems are just as severe as technical bugs. 9
Exception Monitoring

Exception monitoring is handled by installing a plugin that automatically notifies you about any unhandled exceptions your customers encounter. It’s your last automated line of defense because it doesn’t rely on customers reporting the errors. (Although it sure helps if they do.)
If a bug makes it through to production, and something breaks, good exception monitoring will let you know. Even if you’ve managed to release your application without any major problems, you’re going to run into unanticipated situations in production. For example, if you release an update and then receive a flood of errors related to file uploading, you can immediately get to work addressing the problem and hopefully minimize the impact to your customers.
Exception monitoring provides several significant benefits. First and foremost, it’s automated, so if a user encounters a bug but doesn’t report it, you’ll be notified anyway. Second, with recurring bugs, it can provide insight into the frequency of the problem. Finally, it provides technical details that may be difficult to ascertain by asking non-technical customers. Because you’ll be recording multiple instances of the problem, you can use the technical data from multiple exceptions to identify patterns that may help you troubleshoot particularly nasty bugs.
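Under the hood, the mechanism is simple. Here’s a bare-bones sketch in Python: install a global hook that records the technical details of every unhandled exception. A real monitoring plugin would also ship each report to a hosted service and group recurring errors; the local `error_log` list is a stand-in for that.

```python
# A bare-bones sketch of how an exception-monitoring hook works: record
# every unhandled exception with its technical details. The error_log
# list stands in for a remote error-reporting service.
import sys
import traceback

error_log = []

def report_exception(exc_type, exc_value, exc_tb):
    error_log.append({
        "type": exc_type.__name__,
        "message": str(exc_value),
        "traceback": "".join(
            traceback.format_exception(exc_type, exc_value, exc_tb)),
    })
    # Defer to the default handler so the error is still printed.
    sys.__excepthook__(exc_type, exc_value, exc_tb)

# From here on, any uncaught exception is recorded before it surfaces.
sys.excepthook = report_exception
```

Because the hook captures the exception type, message, and full traceback, you get exactly the technical detail a non-technical customer couldn’t provide.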
Performance Monitoring

Like exception monitoring, performance monitoring is generally handled by installing a component on your servers or in your application to automatically record and store performance information. This helps ensure you have the data you need to track down and fix performance problems.
Even if you’ve worked out all of the bugs, it’s always possible that you’ve slowed down your application. People still hate slow sites and keeping your application snappy is just as important as fixing bugs. Of course, you can do some performance testing in isolation prior to release, but putting code into production under real-world scenarios can change everything. Keeping an eye on page load and response times can help you stay ahead of the curve.
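In application code, the core of this is just instrumentation. Here’s a minimal sketch: a decorator that records how long each call takes, so slow endpoints show up in your numbers instead of in customer complaints. Hosted monitoring tools work on the same principle with far more detail; `handle_request` is a hypothetical stand-in.

```python
# A minimal performance-monitoring sketch: a decorator that records the
# duration of every call to a function, keyed by function name.
import time
from collections import defaultdict

timings = defaultdict(list)  # function name -> list of durations (seconds)

def monitored(func):
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return func(*args, **kwargs)
        finally:
            timings[func.__name__].append(time.perf_counter() - start)
    return wrapper

@monitored
def handle_request():
    time.sleep(0.01)  # stand-in for real request handling
    return "ok"

handle_request()
average = sum(timings["handle_request"]) / len(timings["handle_request"])
print(f"handle_request average: {average * 1000:.1f} ms")
```

With data like this recorded in production, a regression in response times after a release is visible immediately rather than anecdotally.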
Uptime Monitoring

All of the quality assurance in the world won’t matter if your site or application is offline. Uptime monitoring makes sure that the most serious quality failure of all doesn’t happen quietly. With uptime or availability monitoring, you use a hosted service to frequently check your site’s availability from different locations around the world.
You simply give the service a domain and protocol to monitor, set a check frequency, and, if necessary, provide login credentials, and it will do the rest. If your site goes offline, it will automatically text, email, or sometimes even call your phone so that you know as soon as something goes wrong.
Security Monitoring

Nobody wants a security problem to make it to production, but it happens. And frequently, it’s not something that a code review can catch, because it may be a problem with a library or piece of software that your application uses. In these cases, a good automated security monitoring tool can help you stay current on vulnerabilities in your application or its components.
Publishing a responsible disclosure policy, and a bug bounty if you can, with relevant information about reporting security problems gives security researchers a safe way to report vulnerabilities. Just as automated testing and manual testing complement each other, so do security monitoring and responsible disclosure.
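To show the dependency-monitoring side of this, here’s a toy sketch: compare the libraries your application uses against a feed of known advisories. The package names and advisory data are fabricated for illustration; real tools pull from curated vulnerability databases.

```python
# A toy version of dependency vulnerability monitoring: flag installed
# packages whose versions appear in an advisory feed. All names and
# versions below are hypothetical.

ADVISORIES = {
    "examplelib": {"1.0.0", "1.0.1"},  # versions with known vulnerabilities
    "otherlib": {"2.3.0"},
}

def vulnerable_dependencies(installed):
    """installed: {package: version}. Return packages needing upgrades."""
    return sorted(
        pkg for pkg, version in installed.items()
        if version in ADVISORIES.get(pkg, set())
    )

installed = {"examplelib": "1.0.1", "otherlib": "2.4.0", "safelib": "0.1"}
print(vulnerable_dependencies(installed))  # → ['examplelib']
```

Run on a schedule, a check like this catches the vulnerable-library case that a code review of your own code would never see.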
Business Analytics

While it’s relatively straightforward to quantify code quality or count bugs, there’s an entire class of problems that you’re likely to discover only through business analytics. Whether you’re using Google Analytics or A/B testing, adding a layer of business analytics to your process can save your bacon in the case of a big conversion problem. For example, if your registration page conversion is down 25% since the last release, that’s just as important as any technical issue.
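A check like the registration-conversion example can be sketched in a few lines: compare the conversion rate before and after a release and flag large drops. The numbers and the 25% threshold here are illustrative, not prescriptive.

```python
# A small sketch of the kind of check business analytics enables:
# alert when a conversion rate falls sharply after a release.

def conversion_rate(signups, visitors):
    return signups / visitors if visitors else 0.0

def conversion_drop_alert(before, after, threshold=0.25):
    """before/after: (signups, visitors). True if the rate fell by at
    least `threshold` as a fraction of the old rate."""
    old = conversion_rate(*before)
    new = conversion_rate(*after)
    if old == 0:
        return False
    return (old - new) / old >= threshold

# Registration conversion fell from 20% to 14%, a 30% relative drop:
print(conversion_drop_alert((200, 1000), (140, 1000)))  # → True
```

Wired into a dashboard or alerting system, this turns a silent revenue problem into something as visible as a broken build.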
Help Desks and Support
Your absolute last line of defense for quality is your help desk and support process. At this point, the goal is to react as efficiently as possible to minimize the impact to other customers and to help the customers that report the issue recover as easily as possible.
If someone has to contact support about a bug or other issue, that means that all of your other tools and processes have failed. Once someone has to reach out to you, it’s unlikely that they’re happy about the problem. However, with good support in place, it’s often easy to turn the problem around with great customer service. In our experience, if you look at support as your customer’s nuclear option and react accordingly by going overboard to help them, you have an opportunity to really earn that customer’s trust.
Each of these tools plays an important role in helping to ensure higher quality software, but they each do so in their own ways. Some are proactive while some are reactive. 2 Similarly, some tools and processes are manual while others are automated. Often, they involve bits of both automation and manual effort.
As we discussed at the beginning, the best way to find and fix the most defects is to use a variety of tools to ensure the highest levels of coverage. It’s not trivial to implement everything all at once, so often it’s helpful to focus on the tools and processes that give you the widest range of benefit while you fill in the gaps with new tools as time permits. To do this, it’s important to understand how these tools work and relate to each other.
Proactive vs. Reactive
The simplest way to classify tools is to think in terms of whether they help you prevent issues or help you recover from them. Some, like automated testing, are proactive to the point of being preventative, with the goal of helping you avoid issues entirely. Others, like bug or issue tracking, are proactive but not necessarily preventative. That is, they focus on helping you uncover issues before you ship the software. Tools around release management are reactive in that they help you respond more quickly when there is a problem. Finally, many of the tools, such as help desks or monitoring, are reactive because they’re primarily ways to discover problems only after the software has been released.
The key here is to remember that the earlier you find and fix issues, the more time and money you save. So the more proactive a tool is, the higher its value. However, some tools require significant effort to set up and maintain. For a tiny project, they may be overkill.
Automated vs. Manual
Another good facet for classifying tools is whether they’re automated or manual. For instance, static code analysis is almost entirely automatic once set up, while usability testing is entirely manual due to its subjective nature. It’s impossible to automate every kind of test, and you wouldn’t want to even if you could. Things like QA testing and usability testing provide a structured way for humans to uncover issues that the machines won’t catch on their own.
Similarly, you can think of your help desk software as the last line of defense for handling problems that slipped through far enough to be discovered by a customer. It’s very manual, and in most cases, it’s the most expensive way to fix problems because they aren’t caught until they’ve inconvenienced customers. While there are a lot of tools that can minimize how often that happens, it’s a critical piece of the puzzle. Also, many reports from customers won’t even be bugs. Instead, they’ll be about confusing interfaces, personal preference, or other feedback that may require input from the designers or developers.
Software development is about making rapid progress in a safe, repeatable way, and these days, that involves entire suites of tools that need to work together to prevent, identify, and fix problems. The challenge is managing it all. How do you organize, prioritize, and manage all of the different tasks that result from these processes and tools?
The tools above surface problems, but you’ll still need something to ensure that those problems are addressed. Not only do they need to be addressed, but they need to be prioritized, assigned, and reviewed. That’s where an issue tracker can help. We’ve found that issue tracking can be the glue that centralizes bugs, issues, questions, and even ideas and suggestions. Issue tracking helps you juggle and prioritize the issues generated by your other tools so nothing slips through the cracks.
Let's not get too far ahead, though. Your team and situation may not even need issue tracking. So let's explore some of the reasons to set up a formal issue tracking process to see if it's right for your team. On to the next lesson. Why use issue tracking?
1. Steve McConnell, Code Complete, Second Edition (Microsoft Press, 2009), 477. ↩
2. Forrest Shull, et al., "What We Have Learned About Fighting Defects" (2002), 6. ↩
3. Jason Cohen, Steven Teleki, and Eric Brown, Best Kept Secrets of Peer Code Review (SmartBear Software, 2013), 14. ↩
4. McConnell, Code Complete, Second Edition, 477. ↩
5. Glenford J. Myers, The Art of Software Testing, Second Edition (John Wiley & Sons, 2004), 22. ↩
6. Andrew Hunt and David Thomas, The Pragmatic Programmer (Addison-Wesley, 2000), 244. ↩
7. McConnell, Code Complete, Second Edition, 522. ↩
8. Gerald Weinberg, Perfect Software and Other Illusions About Testing (Dorset House Pub., 2011), 149. ↩
9. Hunt and Thomas, The Pragmatic Programmer, 241. ↩