Should developers own acceptance tests?

A couple of weeks ago I watched a talk by Dave Farley arguing that developers should own acceptance tests. It’s a great talk; you should watch it if you haven’t already. Afterwards I realised that I was going to have to write this blog post to explain why I thought the talk was brilliant, but misleading.

First, let’s define acceptance tests. Dave explains them as “trying to assert that the code does what the users want the code to do”. For me they are the tests that we perform (with, or without tools), to decide whether to accept a feature/system or not.

Dave’s talk addresses an important topic, namely that testers shouldn’t be the owners of these tests. I totally agree with this statement. In all the projects I’ve worked on, I have never seen a test suite solely owned by testers being a useful thing. Of course the testers usually design brilliant tests, and these tests often uncover serious issues before any code gets released, but they are still a terrible idea for two reasons.

Firstly, we have an entire suite of tests that exists and runs without any developer involvement. Unless you work in a team where developers really don’t think about testing before code gets handed over to testers, this is going to cause test duplication. If your developers really don’t think about testing then you probably have some serious silo issues that need addressing. Go and sort those out before you read the rest of this post. But seriously, developers think about testing all the time. Allowing acceptance testing to be separated from development is going to guarantee duplication in tests. It’s also likely to guarantee some gaps in testing, as the approaches of developers and testers are not joined up.

Secondly, testers are brilliant at designing tests. We should expect them to be able to turn up some issues, many of them fundamental and serious. Why wouldn’t we want to do that much earlier in the process? Acceptance tests are usually run against a release candidate. Often a release candidate contains one, or even several weeks of work. Waiting that long to run these tests and then turning up a problem is going to make for some expensive re-work.

In Dave’s talk he argued that developers should be the owners of acceptance tests. To me this sounds like a bad idea. Acceptance tests should be a measure of how well the feature meets the requirements, and developers are probably the least qualified, or at least too biased, to make that judgement. That’s not to say that testers should be the owner either. We still want to avoid creating any kind of “wall” for code to be thrown over.

So who should own the tests? Maybe the team would be a better owner. If acceptance tests are truly going to be a measure of how well the feature meets requirements then it seems to me that a business/product owner needs to decide what gets tested. So we ask the business owner to define the journeys that need testing. Testers know a lot about designing good tests, so they should be helping to turn those journeys into good test scenarios with suitable test data. Developers know how to write robust code, so they should be writing and maintaining the tests.

The end result is a well-designed and implemented test suite that actually tests something the product owner wants to have tested.

This collaborative test design brings together the strengths, and input, of the entire team. We give testers the opportunity to uncover issues, but the collaborative nature of the test design means it’s likely to happen far earlier than it would if testers owned the tests. Maybe even at the test design stage instead of a week later during release testing. Developers can write the tests and avoid many of the “tester written test” issues. The test scripts get treated as any other code, written and maintained by developers. Finally, the real strength of this approach is being able to involve the business owner in the discussions. Hearing about the things they’re worried about right at the beginning can have a massive impact on the design and implementation of the feature.

In fairness I think Dave was actually arguing for exactly this approach. Developers should absolutely be at the heart of acceptance tests but I don’t think we should use the word ‘own’. Teams own things, not individuals. The right group of people collaborate to achieve the best result.


What’s the cost of shipping bugs?

A tiny question to reveal huge insight.

At Songkick we use Continuous Deployment for many of our releases. This approach requires a risk-based assessment of every change. Are we tweaking the copy, or re-designing the sign-up system? Are we changing a core system, or an add-on feature? The answers determine how much risk we’re willing to take, and subsequently whether this feature can be automatically deployed by our Jenkins pipeline or not. The most useful question we ask ourselves before writing any code is “What’s the cost of shipping bugs?”.

If the answer is “low”, perhaps because this is an experiment, easily fixed, or invisible to users, then we know we can move faster. Developers can be more autonomous. Maybe they don’t need a code review. Maybe the testers don’t need to spend so long testing things before we release. Maybe we don’t need to update our test suites.

If however the answer is “high”, perhaps because we’ve broken something like this in the past, or it’s going to be hard to fix, or highly damaging, or we’re all about to take a week off to visit New York, then we know that we need to be more cautious. Testers need to be more involved. We need to consider releasing this behind feature flippers, or using a canary release. We’ll make sure the release takes place at a time when there are people available to monitor the release, and get involved if needed.
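To make the idea concrete, here’s a minimal sketch in Python of how the answer to that tiny question might steer a release. The step names are made up for illustration; this isn’t Songkick’s actual pipeline, just the shape of the decision:

```python
def release_plan(cost_of_shipping_bugs: str) -> list[str]:
    """Pick release steps based on "What's the cost of shipping bugs?".

    The step names here are invented for illustration; a real pipeline
    would encode its own team's checks.
    """
    if cost_of_shipping_bugs == "low":
        # Cheap-to-fix or invisible changes can flow straight through.
        return ["run automated suite", "auto-deploy via Jenkins"]
    # Expensive mistakes get more eyes and a safety net.
    return [
        "code review",
        "exploratory testing",
        "release behind feature flipper",
        "monitored release",
    ]

print(release_plan("low"))
print(release_plan("high"))
```

The value isn’t in the code, of course; it’s that asking the question forces the team to pick a path deliberately rather than applying the same heavyweight process to every change.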

It’s a tiny question that takes just a minute to ask but this tiny question can shape our entire development and release approach.

How do you estimate the cost of shipping bugs? 

Get outside your comfort zone

The testing community is awesome. There are so many friendly faces. So many people reading things, discussing things, watching things, and developing their ideas. As the communities grow stronger the people you spend most of your time with are likely to be similar to yourself. Maybe you all belong to a similar school of thought. Maybe you’ve worked on projects together before, or attended training courses or conferences together.

It’s great.

Or is it?

The problem is these people are likely to be very similar to you. They share your ideals, and your ideas. You start to think that you’re in the majority whereas in many cases you’re not.

When was the last time you read something you disagreed with? Or attended a conference that wasn’t solely about your craft? Now I’m not saying that you have to go out there and engage everyone in debate. You’re not looking to convert these people, or even to change your own opinions. Broadening your view might simply give you something to measure your ideas against.

If you’re an agile advocate do you really understand why not everyone is into it? Do you know why testers are often excluded from projects? Have you asked a developer why they haven’t attended a test conference? Have you asked yourself why you haven’t attended a design conference?

We all work in development and yet we all hold these independent, and often incompatible views. Look up and see the world that your work fits in to. It might just make you a better tester.


Are you an Independent Tester?

March, the time of TestBash Brighton. As always it was a pleasure to return to Brighton and catch up with some of the smartest, friendliest, and most inspirational testers.

The final talk of the day was from Nicola Owen. Her story was about moving from being a tester in a large organisation to being the only tester in a company. She talked of the challenges and benefits that came from being a sole tester.

Nicola’s talk made me ponder the role of a tester. Often we struggle to gain recognition in teams and companies. Testers are frequently the ones who get forgotten about when teams are thanked, or team lunches are organised. Is that really because we’re so forgettable? Does the sheer number of developers make testers invisible?

I believe that testers can, and should, be seen as a beacon of expertise throughout the company. Testers have so much knowledge about the product, the users, and the project risks. Every tester should know exactly when the project deadline is, who the customer is, and what the project goals are. Hopefully they also know the technology stack being used, the experience-level of the developers, and have a deep understanding of every step of the development and release process.

Knowledge is power. Testers don’t have to be technical. The ability to write code doesn’t have to be a measure of how good a tester you are. If you work with good developers then it probably doesn’t matter if you know how to configure a web server, or submit a binary to Google. What does matter is being able to initiate, and contribute to important conversations.

Talking about a problem out loud triggers your brain to think about things in a different way. For this reason many developers use the “Rubber duck debugging” technique to find issues in their code. Talking things through, even with just a rubber duck, can make you realise that you’re missing something, or spot an obvious problem in the design. If a rubber duck can bring this much value to a developer just imagine what a creative, and knowledgeable tester can bring to the conversation.

Whether you work in a small team or a large team you should take responsibility for your own role. A bigger team doesn’t mean you get to take less responsibility.

Behave as you would if you were the only tester around. Ask questions, and make notes, connect the information people give you and turn it into knowledge. If you come across a casual group conversation then get involved. Kitchens are a great place to spend time. Tell people your ideas and ask for their input. Not only will you learn some interesting, and possibly useful, things, but you’ll also meet some new people. Don’t underestimate the power of being well-connected within a company.

Your own experience is worth more than a thousand books. Reflect, search, and understand how your actions impact the team. At the same time read widely, watch talks, and engage with people outside of your organisation. Compare your experiences with others. What can you learn?

Always have an opinion about everything – even if you don’t always share it. Learning to question things, to spot the areas that could have been improved, will help you become better. Do this with your own work and with others’. When you read an article, or a book, question what it is telling you. How does your own experience differ?

Independent testers are resilient and self-supporting. They have the knowledge and skills to be able to excel as a sole tester in a company but they also have the knowledge and skills to make a larger test team a powerhouse. So don’t look around you and use your team size as a measure of how good a tester you need to be. Look around you and see the opportunities that are open to you. Now grab them.

The TestBash videos are now available over at The Dojo.

Should that link open a new window?

If you test software that allows users to navigate using hyperlinks you need to think about new windows. It sounds simple enough: links send the user off to a different place, but context should determine whether the link opens in a new window, or tab, or re-uses the existing one.

During an e-commerce checkout you expect the “Buy now” button, or link, to open in the window or tab you’re using. The same goes for “Sign Up” flows: they are part of the main journey and so should re-use the main window. It would be pretty strange to end up with multiple tabs each displaying part of a “Sign Up” journey.

What about “FAQ” or “Terms of Service” pages? Well, it depends. If you click one of these links in the main site footer then it makes sense for the window to be re-used. If, however, I click a “Terms of Service” link from a checkout or payment page then I really hope it opens in a new window or tab to avoid interrupting my purchase.

Resuming journeys
Have you ever tried to purchase something from a website only to be sent off into a “Sign Up” flow? If you were lucky you completed the “Sign Up” and were gracefully returned to the page you were originally on. Sadly this isn’t always the case. As a user it is hugely frustrating when websites make you repeat actions just because they wanted you to do something else first. Make this easy. Return users to the page they were originally on.

Indicating behaviour
Links to external websites or services should open new windows as the default behaviour. In addition, external links, and mailtos, should be labelled with the standard icon to indicate that they will take you away from the current site. Have a look around a service such as Spotify to see these icons in action.
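As a quick illustration, here’s a rough Python sketch, using only the standard library, of how you might scan a page for external links that don’t open in a new window. The class name and the example host are my own invention, and it deliberately ignores mailto links, which have no host and would need separate handling:

```python
from html.parser import HTMLParser
from urllib.parse import urlparse


class ExternalLinkChecker(HTMLParser):
    """Flag external links that won't open in a new window/tab.

    `site_host` is the host we treat as "internal"; any link whose
    host differs counts as leaving the site.
    """

    def __init__(self, site_host: str):
        super().__init__()
        self.site_host = site_host
        self.problems: list[str] = []

    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        attr_map = dict(attrs)
        href = attr_map.get("href", "")
        host = urlparse(href).netloc
        # Relative links and mailtos have no netloc, so they pass.
        external = bool(host) and host != self.site_host
        if external and attr_map.get("target") != "_blank":
            self.problems.append(href)


checker = ExternalLinkChecker("example.com")
checker.feed('<a href="https://example.com/faq">FAQ</a>')
checker.feed('<a href="https://other.site/terms">Terms</a>')
print(checker.problems)  # the Terms link is external but re-uses the window
```

A real check would also want to look for the external-link icon and, for security, a `rel="noopener"` on any `target="_blank"` link, but the sketch shows the basic idea.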

Interaction between tabs
More complex tab thinking leads to considering the interaction between tabs. I often search for items on Amazon and open several tabs to compare different items. After I’ve compared the items I might add one or two to my basket. I expect my session to exist across all the tabs. That means that I am logged in on all the tabs, and my basket is gathering items from all the tabs. Checking out on any one of the tabs should show me a basket containing all of my selected items.

Following these guidelines will help make your website intuitive to use. However the question of context must still be answered. What happens on your website when you open a new tab and log in? Are both tabs now showing the correct state? Is that correct for your service? How about if you log out? Or purchase something? Think about the standard behaviour but always do what is right in your context.

Thoughts from TestBash 2015

I’ve just returned from the fourth TestBash conference. Each year it grows, getting better and better every time. A packed out conference (10 speakers!) and many social events make it easy to catch up with old friends, and make new ones too.

I kicked things off with the social event on Thursday evening. Sadly I hadn’t been able to attend the workshops during the day but there were plenty of people singing their praises over drinks. I had a whirlwind of catching up, beers, and a little bit of 99-second talk prep with my fellow Weekend Testing facilitators Neil Studd and Dan Billing.

After just a few hours’ sleep, something of a tradition at testing conferences, it was run time! Once again there was a fantastic pre-conference run along the seafront. I love starting the day with a run; this year was an especially beautiful morning and we had a good turnout despite the early start. One of my favourite parts of the run is having 10 minutes or so to just chat with fellow testers completely uninterrupted. We run and we chat. Then we all dash off to try and get ready in time for Lean Coffee and bacon.

The conference itself had a fantastic line-up. Michael Bolton gave a predictably solid talk on language. A great reminder to actually say what you mean. Ian McCowatt was up next with a great talk on bug detection. He gave me the push I needed to pick up Harry Collins’s “Tacit and Explicit Knowledge”. I was also reminded of the importance of re-reading books. It’s so easy to get caught up in the endless book list that sometimes I forget how much you can get from a book on the second, or even third, reading.

There were great talks from Martin Hynie, Matt Heusser, and Stephen Janaway. There is still plenty of digesting of the ideas to do, but it was fascinating to hear Martin’s experiences of the job title ‘Tester’ actually limiting testers’ ability to get involved in projects. Stephen Janaway had some really interesting ideas in his talk “Why I Lost My Job As a Test Manager and What I Learnt As a Result”; the coaching menu was particularly interesting to me. I can see something like that being very useful on my team.

Vern Richards and Richard Bradshaw both gave thought-provoking talks. Richard’s story of moving into automation only to find that he had “automated too many things” was really good. So many teams have the goal to automate everything. It was interesting to hear what happens if you actually succeed in doing so.

Sally Goble and Jon Hare-Winton demonstrated that it is possible to do a good double act. Maaret’s talk “Quality doesn’t belong with the tester!” was a really resonating experience report. Being the only tester on a team is challenging and Maaret shared lots of ways that she tackled it. I really liked that she had talked to her team of developers about how they wanted to define testing. So often it seems testers want to name everything and tell developers how it should be done, but developers do testing too, it’s just different.

Karen Johnson wrapped up the day (well, apart from the 99-second talks!) with a really engaging talk on asking questions. There were so many great ideas in this talk, and a number of interesting book references too.

All in all I have watched so many brilliant talks from engaging, interesting people. I have a list of new books to read and lots of thinking to do. I’ve come away from TestBash having seen so many friends. I’ve got a list of names of my new friends in my pocket and I feel inspired to get stuck in to some testing!

Thanks, Rosie and all of the TestBash speakers and organisers. It was an absolute blast. See you next year!

Did you scroll to the bottom of the page?

It’s easy to get caught up in the one thing you’re testing right now. This narrow view of the world (or system) makes it easy to miss obvious problems.

One technique I use is to always scroll to the bottom of the page. It doesn’t matter if my test calls for me to click a link in the page’s top nav: first I scroll down, then I click my link.

It takes just seconds to scroll to the bottom of the page. By taking just a little time to broaden your view of the world you might find you see some surprising things.

Testing In The Pub – Part one of my interview about Continuous Delivery

Part one of my interview with Testing in the Pub is now live. You can download it from http://testinginthepub.co.uk/testinginthepub/

Here are a few links for further reading (can also be found in my comment on the Testing in the Pub website):
– The place to start – http://continuousdelivery.com/
– Etsy’s developer blog and in this post in particular – http://codeascraft.com/2011/02/04/how-does-etsy-manage-development-and-operations/
– There are some interesting webinars available on http://www.thoughtworks.com/continuous-delivery
– Videos from the Pipeline conference – http://web.pipelineconf.info/2014/05/29/videos-from-pipeline-2014-are-online/

In addition I personally enjoy the following blogs:

And soon there will be a great book of experience reports all about CD. OK, so I might have a report included, but it’s still going to be a great book 🙂

Please comment with links to other blogs or resources that you find interesting or useful.

Testing in a Continuous Delivery World

The video of my recent talk at the LondonCD Meetup group is now live. The talk was kindly filmed by BBC Future Media.

May 2014 – Amy Phillips – Testing in a Continuous Delivery World from Software Engineering Practice on Vimeo.

Slides are also available at

It’s All in the Mindset

I recently started reading ‘Proust was a Neuroscientist’ by Jonah Lehrer. So far it has been an extremely interesting and thought-provoking book. I’ll probably write a proper review once I finish it, but in the meantime I wanted to explore one particular thought.

In the chapter where he writes about how Auguste Escoffier invented veal stock, he comes across an interesting phenomenon: your mindset determines what you taste. Serve identical wines in a cheap bottle and in an expensive bottle and nearly all tasters will think that the wine in the expensive bottle tastes better. The tasters are not lying. The brain expects the wine to taste better and so when the tastes are interpreted by the subjective brain they are judged to be better.

I started thinking about how mindset affects testing. We all know that developers tend not to make good testers because they expect the system to work. They either subconsciously don’t stress the system or in some cases become blind to the errors. It seems that testers can be caught out in the same way. Everything from past experiences to your current happiness will affect what you see and how you judge something.

It’s normal to expect that experienced testers who have a wealth of previous bug discoveries will carry out the best testing. In fact I often find that totally new testers, with their entirely fresh mindset, can uncover some incredible bugs.

Perhaps the only way to deal with this is to embrace it. Structure your testing sessions so that you deliberately set your mindset. In the first session go in expecting everything to work. Embrace your user and confirm the main user actions can be performed. Later adopt a negative mindset and expect everything to be broken. Try to see things from the point of view of a blind person, or a colour blind person. How about if you’re in a rush and need to complete a task quickly? Each time you set your mindset to something different your brain will start seeing, and interpreting, things differently.