What is Quality Assurance (QA) Testing? Types & Methods

Quality assurance is the process of testing and monitoring a product to ensure that the finished product is created without defects. In physical manufacturing, QA testing might include failure testing, statistical control, and other industry-specific practices.

But what happens when your product isn’t a physical product at all? What if it’s a piece of software that’s being continuously revised and released in new iterations?

In this article, we’ll look at some of the principles of quality assurance testing, and how they can be updated to meet the needs of modern businesses.

QA testing basics

According to the American Society for Quality, the PDCA method is one of the earliest approaches to quality control, developed by W. Edwards Deming, an American engineer and statistician. His methods are credited with contributing to Japan's post-war economic growth, and are still the basis for many QA practices at major brands like Toyota.

The PDCA approach, also known as the Deming cycle, involves four steps:

  • Plan: Decide on a change or course of action
  • Do: Perform a test or study on the change
  • Check: Review the results of the study
  • Act: Incorporate the results into your next plan

Essentially, it’s an iterative cycle that can be repeated again and again, with each new change to the product or software.

Because it’s so simple, it can be applied to nearly any industry, from testing products on an assembly line to looking for bugs in a piece of software.

But new trends in the way that software is developed have led to some divergence from old-school testing approaches.

Traditional development models, such as the waterfall method, included room for QA testing at the end of each stage or cycle.

Newer agile methodologies call for faster iterations, with less time to pause and take stock of how changes might affect the software's quality.

In some cases, instead of comprehensive testing, “testing is only done on the outputs of each sprint team or piece of work,” writes Rob Mason in Forbes. “This approach doesn’t test products in the context in which they will be used and doesn’t even test the product as a whole, just a portion…. QA is indeed being killed by agile.”

Why QA testing is so important

While QA testing is primarily focused on finding defects, its importance goes far beyond that. Ultimately, it helps you provide the best possible user experience.

QA testing can help you determine whether or not a product is broken, but also whether or not it’s working as intended. It can prevent you from releasing a product or an update before it’s ready for market, where any bugs can have real-world consequences.

It also ensures that the same actions lead to the same result each time, and that users don't have to jump through hoops to get your software to do what they want it to do.

These days, many tech-savvy consumers are aware of the inevitability of bugs in new releases, and may wait to try a new app until others have tried it out first. Other users are comfortable submitting bug reports from their smartphones.

But end-users can only really test the user experience of your software; without access to the underlying code, they have no way of testing for major security flaws.

Either way, depending on your users to report bugs instead of doing your own testing will only work for so long. Eventually, customers will switch to a piece of software that they know they can trust to work properly from the start.

How is Quality Assurance different from Quality Control?

At this point, it’s important to make a distinction between quality assurance and quality control. These two practices are often spoken of as equivalents, but in fact they differ slightly, and refer to different stages of the testing process.

It can be helpful to think of quality assurance (QA) as a proactive process, and quality control (QC) as a reactive process.

QA testing is intended to prevent defects before the product is made, while QC is used to check for defects in the end product after it’s produced but before it’s released.

In other words, QA takes place throughout the development process, so you don't start up the assembly line only to end up with a defective product, or deploy a buggy piece of software to the cloud. QC is used to test or spot-check the finished product.

For example, when it’s too destructive or impractical to test every single product, quality control practices might include techniques like acceptance sampling. In this approach, a statistically significant sample of the product is tested to see if the number of defective items in each batch is above or below acceptable limits.
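The acceptance sampling logic described above can be sketched in a few lines. This is a minimal illustration, not a production inspection plan; the batch data, sample size, and acceptance number are all hypothetical, and real plans are usually drawn from published sampling tables.

```python
import random

def acceptance_sample(batch, sample_size, acceptance_number):
    """Inspect a random sample from the batch and accept the batch
    only if the number of defective items found in the sample does
    not exceed the acceptance number."""
    sample = random.sample(batch, sample_size)
    defects_found = sum(1 for item in sample if item["defective"])
    return defects_found <= acceptance_number

# A hypothetical batch of 1,000 items, 2% of which are defective
batch = [{"defective": i < 20} for i in range(1000)]

# Accept the batch only if at most 2 defects appear in an 80-item sample
print(acceptance_sample(batch, sample_size=80, acceptance_number=2))
```

Because the sample is random, the same batch may pass one inspection and fail the next; choosing the sample size and acceptance number is a statistical trade-off between inspection cost and the risk of accepting a bad batch.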

Because software bugs don't turn up randomly for only certain users, this kind of testing is less useful in software development. Instead, quality control practices usually involve verification and validation to ensure that a product meets its design requirements.

This might include making sure that the software works properly on various devices and browsers, on different operating systems, and on different types of networks.

Types of QA tests

So, what kinds of QA tests can you incorporate into your development process? We're focusing primarily on software tests, but some of these tests apply to physical products as well, which may be necessary if your product includes both hardware and software components. Several of the most common types of QA tests include:

Failure testing is used to uncover problems that arise when a program is executed. Not all software defects will show up in failure tests, particularly if the bug is only triggered by a little-used feature. Some bugs can lie dormant in code for long periods.

At the same time, not all failures are the result of defects. Some programs may not be “defective”, but they’re designed in such a way that they prompt a user to perform the wrong action. Poor UX design can lead to increased failures due to human error.

Failure testing may involve stress testing, which tests how well a system can deal with extreme heat, humidity, or other environmental factors. This can be used to test critical infrastructure, such as cloud servers, to see how they cope in extreme situations.

Automated testing refers to tests that are performed by automated tools, which are able to test components of the software that human testers cannot.

Automated QA testing is often used in agile development, because the same QA tests can be run repeatedly, checking for new bugs with each change to the code.

Automated QA tests range from unit tests, which focus on the smallest testable units, to system integration tests, which test the entire system as a whole.
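A unit test checks one small piece of logic in isolation. The sketch below uses Python's built-in unittest framework; the `apply_discount` function is a hypothetical example, not part of any real product.

```python
import unittest

def apply_discount(price, percent):
    """Return the price after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class TestApplyDiscount(unittest.TestCase):
    def test_typical_discount(self):
        self.assertEqual(apply_discount(50.00, 10), 45.00)

    def test_no_discount(self):
        self.assertEqual(apply_discount(50.00, 0), 50.00)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(50.00, 150)
```

Tests like these are typically run with `python -m unittest` on every code change, which is what makes them so well suited to the rapid iterations of agile development.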

These tests are especially important for companies that rely on continuous deployment, since they reduce the time and money it takes to update cloud-based software.

Manual testing plays a smaller role now that automated QA testing is readily available, but there are still some tests that are best run by human testers. User acceptance tests and usability tests depend on the interaction between human and code, and bugs may not turn up until they’re discovered by a human in a real-world environment.

In some types of tests, referred to as black box testing, the human tester is not aware of how the internal system works, and in fact, may not even be tech-savvy. This kind of testing can eliminate the biases developers have when looking at their own code.

In white box testing, the tester is someone who can view and understand the source code. In some cases, such as penetration testing, their aim may be to attack or break the software in order to uncover unknown security vulnerabilities.

Some automated tests can be used to simulate human testers, but they must still be programmed in order to achieve the desired result. For example, a load test can be used to test the impact of thousands of users accessing the system at once.
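A load test of the kind described above can be sketched by spawning many concurrent simulated users. This is an illustrative toy, assuming a stand-in `simulated_request` function; a real load test would call the actual system under test, such as an HTTP endpoint, instead of sleeping.

```python
import concurrent.futures
import time

def simulated_request(user_id):
    """Stand-in for one user's request; a real load test would hit
    the system under test here and measure its response time."""
    start = time.perf_counter()
    time.sleep(0.01)  # pretend the request takes about 10 ms
    return time.perf_counter() - start

def run_load_test(num_users=100):
    """Fire requests from many simulated users at once and report
    the slowest observed response time."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=num_users) as pool:
        latencies = list(pool.map(simulated_request, range(num_users)))
    return max(latencies)

print(f"worst latency with 100 concurrent users: {run_load_test():.3f}s")
```

In practice, dedicated load-testing tools add ramp-up schedules, latency percentiles, and failure-rate tracking on top of this basic pattern.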

Finally, more and more companies are turning to real-world testing and beta testing, which involves everyday users in the quality assurance process.

In beta testing, the software is released to some, but not all, real-world users, giving the developers a larger data sample to draw from but without risking widespread confusion if a buggy product is released onto the market.

Beta testers typically opt in to the process, so they are usually more tech-oriented than other users. They may be asked to give specific feedback on their user experience, and may be asked to avoid publicly reviewing the product before it's released.

Real-world testing, or crowdsourced testing, is similar to beta testing, but is designed to reach a more representative sample of users.

For example, testers can be selected across specific regions, devices, or demographics, and are asked to go about using the software as they normally would.

They may or may not have previous testing experience, and are usually paid for every bug they uncover that is shown to be valid.

Get QA testing done professionally

As you can see, there are many ways to undertake QA testing, from beta testing and crowdsourcing, to automated QA testing. If you’re unsure which tests to apply to your product or software, why not hire a team of QA professionals to do it right?

The team at Zibtek has experience developing and testing a range of different types of software, and can help you find the best QA testing strategy for your needs. We'll help you decide whether to incorporate automated testing, manual testing, or both. Contact us to have a quick chat about your quality assurance needs today!