Quality

What is Quality?

A few months ago I asked people to complete a short survey on quality. Thanks to all who took part.

Quality is often defined using Jerry Weinberg’s definition of “value to some person who matters”. Although testers tend to use this definition frequently, and comfortably, I’ve found it to have some limitations. The word “value” is almost as fuzzy as “quality”, and who exactly are those people who matter? It seems that even with a widely accepted definition, it’s still hard to translate quality into a tangible concept for developers or customers.

I decided to try to turn a definition of quality into a feeling that people could actually relate to, and hopefully use to improve their own work. To do this I asked people to rate the quality of LinkedIn. I know many of you will be laughing at the idea that LinkedIn might represent quality, but it turns out to be a very divisive topic. People who want to manage a professional network generally rate LinkedIn’s quality highly. Testing professionals usually don’t.

Although the responses lean heavily towards “no”, a decent number of people do think LinkedIn is a quality service.

[Chart: survey responses to “Is LinkedIn a quality service?”]

If a service offers value then many people are able to ignore, or perhaps don’t even notice, its shortcomings. I asked people to rate the quality of LinkedIn using just a yes or no option. As many respondents pointed out, quality is definitely on a scale. What’s interesting to me is that the people who wanted to give a “yes, but no” response tended to say that LinkedIn offers value in managing a professional network (so it is a quality service), but that it is unintuitive to use (so it isn’t a quality service).

The frequency of “intuitiveness” as a measure of quality is of particular interest to the tester side of me. How often do we get caught up testing something against the requirements and ignore the unspecified side of the system? We spend hours hunting down weird and wonderful edge cases, maybe too often choosing to spend our time logging bugs rather than helping to improve the intuitiveness of a system.

We frequently discuss whether testers should learn to code or not. Maybe now is the time to start talking about whether testers should be learning about UX design. At Songkick we see huge benefit in including all team members in testing and quality activities. UI designers are experts at creating usable designs. UX designers understand how users interact with the system. Developers understand how code complexities can cause unexpected behaviour. Ops Engineers know how the system actually runs in production. Product Owners know exactly what they want the system to do. Including all of them in testing discussions brings a wealth of knowledge and skills that, when combined with a testing expert, can have a huge impact on quality.

Quality isn’t the (sole) responsibility of the test team. Everyone involved in developing software should have a good understanding of who the intended users are, their goals, and at least a working understanding of the methods we can use to meet their expectations. Facilitate discussions to identify the “people who matter” for your particular system. Help connect team members to users. Even a single session on the customer support desk can have a huge impact on helping people see the real-life effects of a poor design choice. And making sure you have a solid approach to managing code maintainability will affect quality at least as much as prioritising features correctly.

Treat quality as a team responsibility. Work together to assist each other in building systems that people actually want to use.


What’s the cost of shipping bugs?

A tiny question to reveal huge insight.

At Songkick we use Continuous Deployment for many of our releases. This demands a risk-based assessment of every change. Are we tweaking the copy, or re-designing the sign-up system? Are we changing a core system, or an add-on feature? The answers determine how much risk we’re willing to take, and in turn whether the change can be deployed automatically by our Jenkins pipeline. The most useful question we ask ourselves before writing any code is “What’s the cost of shipping bugs?”.

If the answer is “low”, perhaps because the change is an experiment, easily fixed, or invisible to users, then we know we can move faster. Developers can be more autonomous. Maybe they don’t need a code review. Maybe the testers don’t need to spend so long testing before we release. Maybe we don’t need to update our test suites.

If, however, the answer is “high”, perhaps because we’ve broken something like this in the past, or it will be hard to fix, or the damage would be severe, or we’re all about to take a week off to visit New York, then we know we need to be more cautious. Testers need to be more involved. We need to consider releasing behind feature flippers, or using a canary release. We’ll make sure the release happens at a time when people are available to monitor it, and to step in if needed.
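For anyone unfamiliar with feature flippers (also known as feature flags): they are runtime conditionals that let you ship risky code dark and then enable it for a growing slice of users. Below is a minimal sketch of the idea in Python; it is purely illustrative, not Songkick’s implementation, and every name in it is hypothetical.

    import hashlib

    class FeatureFlippers:
        # Tiny in-memory flag store; a real one would read from config or a database.
        def __init__(self, rollout_percentages):
            # Map of flag name -> percentage of users who should see the feature (0-100).
            self.rollout_percentages = rollout_percentages

        def is_enabled(self, flag, user_id):
            # Hash flag+user so each user gets a stable answer, letting us roll
            # out to 1%, then 10%, then 100% of users without flickering.
            digest = hashlib.sha1(f"{flag}:{user_id}".encode()).hexdigest()
            return int(digest, 16) % 100 < self.rollout_percentages.get(flag, 0)

    def render_old_signup():
        return "old signup page"

    def render_new_signup():
        return "new signup page"

    flags = FeatureFlippers({"new_signup_flow": 10})  # expose the risky change to ~10% of users

    def signup_page(user_id):
        if flags.is_enabled("new_signup_flow", user_id):
            return render_new_signup()  # new, higher-risk code path
        return render_old_signup()      # known-good fallback

Because the old path stays in place, “rolling back” is a configuration change rather than a redeploy, which is exactly what you want when the cost of shipping bugs is high.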

It’s a tiny question that takes just a minute to ask, yet it can shape our entire development and release approach.
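To see the question written down as a pipeline gate, here is a hedged sketch of how the low/high answer might drive the auto-deploy decision. Again, this is purely illustrative; every name is hypothetical, and it is not our actual Jenkins configuration.

    from dataclasses import dataclass

    @dataclass
    class Change:
        touches_core_system: bool  # e.g. sign-up or payments, rather than an add-on feature
        easy_to_fix: bool          # copy tweak vs. something like a schema migration
        team_available: bool       # is anyone around to monitor and respond after release?

    def cost_of_shipping_bugs(change):
        # The tiny question; a real team would weigh many more signals than these three.
        if change.touches_core_system or not change.easy_to_fix or not change.team_available:
            return "high"
        return "low"

    def can_auto_deploy(change):
        # Low cost: let the pipeline ship the change automatically.
        # High cost: require review, more testing, feature flippers, or a canary first.
        return cost_of_shipping_bugs(change) == "low"

    copy_tweak = Change(touches_core_system=False, easy_to_fix=True, team_available=True)
    assert can_auto_deploy(copy_tweak)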

How do you estimate the cost of shipping bugs?