Over recent weeks I’ve been reminded of some of the things that undermine all your testing efforts. Collectively I think of them as filters that you test through. How much they limit your ability to see the real system depends on how opaque they are.
- Test environments must be realistic. I should really list this as number 0 because it should be a total given. If the test environment has a different configuration from production then what exactly are you testing? Systems do not work independently of their environment.
- Not knowing who your user is. If you think like a developer then you’ll test like a developer. Make it your business to know who your user is and think like them. If your user is an internal business user then go and visit them. Do they use a keyboard or a mouse to navigate? How much technical knowledge do they have? Are they likely to be using wildcards in their search queries or not? Use this knowledge to shape your testing.
- Not knowing the numbers. If you work on a website then it is pretty standard to be collecting data on usage. Find out which browsers your users use. Which mobile devices are they accessing your website on? Which email client is most popular? There is no point conducting all of your testing on Internet Explorer 8 if the majority of your users have upgraded to Internet Explorer 9.
- Testing on developer spec computers. Maybe you’re one of the lucky ones who has a seriously cruddy PC but many testers are using developer spec computers – I’m talking about the ones with masses of RAM, maybe even quad-core. Basically something which is powerful enough to run those Selenium tests. It is pretty unlikely that your users have the same sort of power so they’re far more likely to see how slow your system really is.
- Throw in a huge monitor and things get worse. It’s pretty easy to forget that the average user is probably sitting at a single 17-inch monitor. Many of your users will be on laptops or maybe even tablets. Again those usage stats will be useful here. Keep your set-up realistic otherwise you’re likely to miss some pretty obvious usability issues.
- Network latency. Test environments are likely to be running inside your office, which means requests will be much faster than they are for people accessing the system from outside your network. Again this will depend on who your users are and how realistic your test environments are. If possible, run your test environment from the same location as the production one.
- Office LAN speeds. Related to the above, but I find this one is easy to forget. Anything you test inside the office should feel super quick. If it doesn’t, then the poor person sat at home with a 4 Mbps wireless connection is going to be extremely frustrated. Make sure you consider both the LAN speed and the network latency when you’re planning performance tests.
- Test Data. This one deserves a whole post to itself, but in short: if the test environment doesn’t contain realistic volumes of data, and if you don’t enter realistic test data yourself, then you’re likely to miss bugs.
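To make the “know the numbers” point concrete, here is a minimal sketch of working out browser share straight from raw User-Agent strings. The sample strings and the crude `classify` function are illustrative assumptions; in practice you’d pull this from your analytics tool, or feed real log lines through a proper user-agent parsing library.

```python
from collections import Counter

# Hypothetical sample of User-Agent strings; in practice you'd read
# these from your web server's access logs.
user_agents = [
    "Mozilla/5.0 (Windows NT 6.1; Trident/7.0; rv:11.0) like Gecko",    # IE 11
    "Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.1)",               # IE 8
    "Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Trident/5.0)",  # IE 9
    "Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.0; Trident/5.0)",  # IE 9
]

def classify(ua: str) -> str:
    """Very naive browser classifier -- real log analysis should use a
    dedicated user-agent parsing library instead of substring checks."""
    if "MSIE 8.0" in ua:
        return "IE8"
    if "MSIE 9.0" in ua:
        return "IE9"
    if "Trident/7.0" in ua:
        return "IE11"
    return "Other"

# Count how often each browser appears, most popular first.
counts = Counter(classify(ua) for ua in user_agents)
for browser, n in counts.most_common():
    print(f"{browser}: {n}")
```

Even a rough breakdown like this tells you where to spend your cross-browser testing time.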
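The latency and LAN-speed points can be sketched with some back-of-the-envelope arithmetic: total load time is roughly transfer time plus a latency cost per round trip. The figures below (a 2 MB page, 30 round trips, a hypothetical office LAN versus home broadband) are assumptions for illustration, not measurements:

```python
def estimated_load_seconds(payload_mb: float, bandwidth_mbps: float,
                           rtt_ms: float, round_trips: int) -> float:
    """Back-of-the-envelope page load estimate: time on the wire plus
    the latency cost of each request/response round trip. Illustrative
    only -- real performance tests should measure, not model."""
    transfer = (payload_mb * 8) / bandwidth_mbps  # seconds transferring data
    latency = (rtt_ms / 1000.0) * round_trips     # seconds waiting on round trips
    return transfer + latency

# The same 2 MB page fetched over 30 round trips:
office = estimated_load_seconds(2, bandwidth_mbps=100, rtt_ms=1, round_trips=30)
home = estimated_load_seconds(2, bandwidth_mbps=4, rtt_ms=80, round_trips=30)
print(f"office LAN: {office:.2f}s, home broadband: {home:.2f}s")
```

The gap between the two numbers is exactly the gap between what you feel at your desk and what your users feel at home.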
So, in summary: find the numbers that tell you who your users really are. Fight for a test environment and test machines which actually reflect what the system will be like in production. Test against realistic data and, above all, make sure you do at least some testing on a slow PC with a small monitor.