6 steps: How to measure software quality

Contents

  1. Define the goal of your software
  2. Determine how to measure the success of your software
  3. Identify what software quality metrics are important
  4. Choose a test metric that will be easy to implement and analyze
  5. Set up a system for collecting data on this test metric over time
  6. Analyze the results to identify trends in quality
  7. Over to you

1. Define the goal of your software

What are the goals of your software company: customer satisfaction, sales growth, shareholder value? This should determine how you measure software quality. A key principle in any business is to focus on the areas that drive profits and growth, so make sure you define why measuring software quality matters before spending a lot of time on it! Otherwise it will be difficult to justify further investment if there isn’t a clear link between these measures and your business goal.

2. Determine how to measure the success of your software

You have defined your software goals, so now it’s time to think about how you will measure software quality. There are many things you can measure, such as:

  • Defect rate – the number of bugs or issues found per unit of software created, typically per day or per hour (see the calculation sketch after this list)
  • Test Coverage – How much testing has been done for each feature and area in your code base?
  • User satisfaction surveys – do customers enjoy using your software? Which features do they find most useful? Would they still choose it if alternatives were available on the market right now? If not, that could be an indicator that what you’re offering isn’t very good!
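
To make the first of these concrete, here is a minimal Python sketch of a defect-rate calculation. The release names, bug counts, and code sizes are invented sample data, and defects per thousand lines of code (KLOC) is just one common way to normalize; per feature or per day works the same way.

    # Minimal sketch: defect density per release, using invented sample data.
    releases = [
        {"name": "1.0", "defects_found": 42, "lines_of_code": 58_000},
        {"name": "1.1", "defects_found": 17, "lines_of_code": 61_500},
        {"name": "1.2", "defects_found": 9, "lines_of_code": 63_200},
    ]

    for release in releases:
        kloc = release["lines_of_code"] / 1000  # thousands of lines of code
        density = release["defects_found"] / kloc
        print(f"Release {release['name']}: {density:.2f} defects per KLOC")

A falling number across releases suggests quality is improving; what counts as a “good” absolute value depends entirely on your domain.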

These metrics should provide insight into whether your software is fit for purpose. However, software quality is not always about features – it can be equally important to make sure software works when used in the real world.

  • How frequently does your software crash? What are end users reporting? Look at both quantitative data (from monitoring or logging) and qualitative data (from support requests and user surveys). If people aren’t using your software because it’s too buggy, they’re probably going to switch to something else pretty quickly!
  • Can you monitor how long your software takes to load on different devices and network speeds? This will help you see where optimization opportunities exist, so that customers get an optimal experience no matter what device they use (see the timing sketch after this list). There may also be times when some feature bloat actually benefits the software, so you shouldn’t automatically assume software is better after being trimmed. This is why it’s important to have a broad set of measures in place: your software can then be well rounded rather than just full of bells and whistles for the sake of it!
  • Is your software secure? Does it meet industry standards such as OWASP or PCI DSS requirements? Not having up-to-date security features could leave customers vulnerable to attack, which will inevitably lead them to look elsewhere!
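
On the load-time question, a basic measurement can be as small as a timed HTTP request. Here is a Python sketch, assuming the third-party requests library is installed; the URL is a placeholder, and a real setup would repeat this from several locations and devices and store the results somewhere.

    # Minimal sketch: timing how long an endpoint takes to respond.
    import time
    import requests

    URL = "https://example.com"  # placeholder - use your own endpoint

    def measure_load_time(url: str, samples: int = 5) -> float:
        """Average response time in seconds over several requests."""
        timings = []
        for _ in range(samples):
            start = time.perf_counter()
            requests.get(url, timeout=10)  # includes downloading the body
            timings.append(time.perf_counter() - start)
        return sum(timings) / len(timings)

    print(f"Average load time: {measure_load_time(URL):.3f}s")

Note this captures server response plus transfer time as seen from one machine; browser-side rendering time needs a tool like GA or real-user monitoring.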

There are many more questions here, but hopefully this gives an idea of where we’re heading with measuring software quality. It’s clear there isn’t one solution that fits all cases – each business has different goals and ambitions.

3. Identify what software quality metrics are important

Once you’ve defined your software goals and determined how to measure success, it’s time to think about which metrics are important for your business. What do you need them to show? How will they contribute towards your software goals? Some examples of things your metrics should cover:

  • Number of bugs found per unit (known as defects) – the fewer there are in a given period of time or per feature, the better! Ideally this number stays at zero, but if it doesn’t, make sure you know where the issues came from so that improvement efforts can be targeted at high-priority areas first.
  • Test coverage – the percentage of code covered by tests is a common indicator of software quality. In theory, more test coverage means fewer bugs. However, testing is not perfect and testers often miss things, so an overall coverage percentage isn’t always reliable; it’s more important to know which parts of your codebase are covered by tests (e.g. all main features) than the exact number itself!
  • Load speed – how long does your software take to load? If customers have a slow experience, they may be tempted to go elsewhere for something that works better or faster, or find another solution altogether. Something as simple as stylesheets being served from the wrong server can have a big impact on performance, so this should definitely be monitored closely!
  • Time spent in support – the time people spend reporting issues through channels such as support tickets and email can be an indicator of software quality, if it’s measured correctly (see the resolution-time sketch after this list). For example, you wouldn’t want to see the software constantly reported as crashing, with lots of support time spent fixing those problems! Recurring mistakes point to the opposite of high-quality software. However, this obviously depends on the type of software and its purpose, so there are no hard rules about what will happen when software is buggy.
  • Security – how secure does your software need to be? Different industries have different compliance requirements, which should inform where security features are placed within a solution. Implementing OWASP best practices would probably serve most businesses well, since they’re widely recognized, but again, every business has different needs, so security metrics for one customer may not apply directly across all use cases.
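
As an illustration of the support-time metric, here is a minimal Python sketch that computes average time-to-resolution. The ticket timestamps are invented; in practice you would export them from your help desk tool.

    # Minimal sketch: average time-to-resolution from support tickets.
    from datetime import datetime, timedelta

    FMT = "%Y-%m-%d %H:%M"
    tickets = [  # invented sample data
        {"opened": "2023-03-01 09:15", "closed": "2023-03-01 11:40"},
        {"opened": "2023-03-02 14:00", "closed": "2023-03-03 10:30"},
        {"opened": "2023-03-04 08:05", "closed": "2023-03-04 08:50"},
    ]

    durations = [
        datetime.strptime(t["closed"], FMT) - datetime.strptime(t["opened"], FMT)
        for t in tickets
    ]
    average = sum(durations, timedelta()) / len(durations)
    print(f"Average resolution time: {average}")

Tracking this average per week or per release makes it easy to spot whether support load is trending the wrong way.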

4. Choose a test metric that will be easy to implement and analyze

This step is really important. By now you should have an idea of your software quality goals, everything they entail, and which metrics will matter for your software to perform well. Now it’s time to choose one metric in particular that can act as a starting point for measuring software quality and keeping it under control. What should you measure, and in what way?

  • A good way to start is by choosing the simplest option, whether that means using something like Google Analytics (GA) or Mixpanel to track website load speed, or collecting data through support tickets and email. You don’t want measurement efforts adding too much overhead, so keeping things simple at first is usually best! Even with just these examples there are lots of ways software quality could be measured, depending on business needs, but GA is generally the one that comes up time and again in software development discussions.
  • Once you’ve got a measure of software quality, it’s not just about collecting data; you also need to make sure the information can be analyzed properly, so trends and changes over time can be identified and acted upon quickly (the trend sketch after this list shows the idea). This might sound like an obvious step, but many metrics aren’t collected accurately or reported in a way that allows meaningful analysis. Someone tracking load speed via GA, for example, may find they’re getting slow responses from certain parts of their site but have no way to compare those areas with other key pages, which makes identifying performance improvements really difficult!
  • Remember: software metrics don’t have to be complex! It all depends on software goals and software quality requirements.
  • Run analyzers on code repositories when writing new software components or modules; this will give you a good idea of overall test coverage before they’re released into production environments. You can also run these checks regularly against live software as part of monitoring for issues.
  • Keep software test metrics simple at first and collect just enough data for meaningful analysis.
  • Remember quality assurance: make sure your measurements are accurate, so that trends and changes over time can be identified and acted upon quickly.
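
For the analysis point above, even a simple moving average is enough to tell whether a metric is drifting. A minimal Python sketch, using invented daily load-time figures:

    # Minimal sketch: a moving average to spot a trend in a collected metric.
    daily_load_times = [1.21, 1.19, 1.25, 1.31, 1.38, 1.42, 1.47]  # seconds, invented

    WINDOW = 3
    for i in range(WINDOW - 1, len(daily_load_times)):
        window = daily_load_times[i - WINDOW + 1 : i + 1]
        print(f"Day {i + 1}: {WINDOW}-day average = {sum(window) / WINDOW:.2f}s")

A steadily rising average like the one this sample produces is exactly the kind of early warning you want a metric to give you.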

5. Set up a system for collecting data on this test metric over time

Once your software is out in the wild, set up a system for collecting data on your chosen metric over time. This might require writing a script that uploads information directly into your analytics tool every day, but it could also be as simple as sending email reports or forwarding information from support tickets, depending on what tools are being used and how much work needs doing upfront. If you’re using something like Google Analytics (GA), there may already be plugins available that do all of this automatically without any extra code changes!

  • Write a script that uploads information directly into your chosen analytics tool daily (a file-based sketch follows this list)
  • Send email reports or use other automated mechanisms where necessary so measurements aren’t missed. For example, if someone only ever uses your software on a laptop while at work, they may not get any information uploaded automatically; that user may still be important, so think about ways of collecting data for these people too.
  • Collect software measurements regularly over time
  • Automate software measurement collection where possible – it’s more reliable! When writing new code, run analyzers such as SonarQube to get an idea of overall test coverage before it’s released into production environments. You can also run these checks against current live software, for example every time there is an update. However, remember that software measurements don’t have to be perfect!
  • Run software analysis during the development process
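
Here is a minimal Python sketch of the simplest possible collection system: a script run once a day (via cron or a scheduled task) that appends one row to a CSV file. The file name is a placeholder, and the hard-coded value stands in for whatever metric you actually measure, such as the load-time function sketched earlier.

    # Minimal sketch: append one daily measurement to a CSV file.
    import csv
    from datetime import date
    from pathlib import Path

    CSV_PATH = Path("quality_metrics.csv")  # placeholder location

    def record_measurement(value: float) -> None:
        write_header = not CSV_PATH.exists()
        with CSV_PATH.open("a", newline="") as f:
            writer = csv.writer(f)
            if write_header:
                writer.writerow(["date", "load_time_seconds"])
            writer.writerow([date.today().isoformat(), value])

    record_measurement(1.27)  # replace with a real measurement

A plain file like this is often enough to start with; you can always graduate to a proper analytics tool once the metric has proven its worth.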

6. Analyze the results to identify trends in quality

Analysis might involve looking at load speed figures via GA and checking whether issues appear as users access your site or application from different locations around the world. Or maybe you’ll discover that mobile clients aren’t loading components fast enough and decide to look for ways to improve this. For code analysis, tools such as SonarQube can produce reports on an ongoing basis (e.g. every time new changes or code modules are added), but make sure not everything is automatically considered ‘bad’ just because a check failed: we’re interested in measuring how well the existing solution works, not in hunting for reasons to improve further. Software doesn’t need to be perfect!

Also, a great way to measure software quality is through customer satisfaction. This can be done by interviewing customers and asking them about their experience with the product and whether they were satisfied or dissatisfied. You can then ask what problems occurred along the way, in order to improve future releases.

Another option is looking at bugs after a new release of the product, specifically how many reported against previous versions have still not been fixed (see the sketch below). And if you want to avoid the negative consequences of releasing a buggy version in the first place, you can measure code coverage while testing each feature before it goes into production, so that fewer defects slip past developers’ eyes unnoticed and cause more serious issues down the line.
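
A minimal Python sketch of that carry-over count, using an invented bug list; a real version would pull this from your issue tracker’s export or API.

    # Minimal sketch: count bugs reported against earlier releases that are
    # still open once the current release has shipped. Invented sample data.
    bugs = [
        {"id": 101, "reported_in": "1.0", "status": "open"},
        {"id": 102, "reported_in": "1.0", "status": "closed"},
        {"id": 103, "reported_in": "1.1", "status": "open"},
        {"id": 104, "reported_in": "1.1", "status": "open"},
    ]

    CURRENT_RELEASE = "1.1"

    # String comparison is fine for these single-digit sample versions;
    # real code should parse version numbers properly.
    carried_over = [
        b for b in bugs
        if b["status"] == "open" and b["reported_in"] < CURRENT_RELEASE
    ]
    print(f"{len(carried_over)} bug(s) carried over into release {CURRENT_RELEASE}")

If that number grows release after release, quality debt is accumulating faster than it is being paid down.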

Over to you

Bear in mind how essential high-quality software is, and pay close attention to collecting quality metrics, allowing you to keep emerging programming errors under constant control. In today’s market, the quality of a software product counts for more than quantity. And don’t forget to follow up on new user feedback about the software and on their requests.

Reach out to us if you need help with software development.