I frequently talk about best practices when writing test cases in the Mobile Application Testing Course that I teach. I recently ran into an issue on a project and figured it was worth throwing this information out for all to share. Test cases are great; obviously, they’re important to have, whether they’re automated or manual, as they demonstrate that your application has the desired capability and that the capability actually behaves properly. Building up these sets of tests into a larger regression suite is also important so that as developers make changes to the application, we can ensure nothing that was previously working broke. I ran into an issue at my current client where we finally had a decent set of regression tests, and we had automated them (woot), but these tests started failing, which meant we couldn’t release our software. Of course, the question became: were the tests finding bugs, were the tests outdated, or were the tests just wrong/broken?

So I went in and started reviewing the tests, trying to answer the age-old question: whose fault was it? Of course, I jest. We’re not looking for blame, but still, the cause of the failures needed to be determined. What I found was a little disturbing. Many of the tests weren’t following some relatively simple testing best practices. Why did this matter? Because it made troubleshooting the failing tests extremely difficult. Some were tests I hadn’t written, some I had written but had since been edited by multiple people, and some I simply hadn’t looked at in over six months.

There are MANY ‘best practices’ for writing test cases, and what follows is by no means an exhaustive list, but these are a few that are especially important to keep in mind when diagnosing why tests fail, particularly automated ones.

Consistency

This should be fairly obvious, but ensure your tests always run consistently. For manually executed tests, this is relatively easy to accomplish, but for automated tests it can be a bit trickier, especially with the technologies used in today’s applications. More and more applications are moving ‘into the cloud’, particularly utilizing Web 2.0 capabilities, and with that comes a new set of challenges. Content is often dynamically generated and loaded, many calls to retrieve data happen asynchronously, and you may depend on third-party libraries or services that you don’t own or even control. Why does this matter? Because it makes it much harder to ensure that the data you are looking for exists, or will exist at the time you expect it.

Some tips for keeping your tests running smoothly? Ensure you have good test data that is refreshed or reloaded before each test. Make sure all of your waits are dynamic: content loading from a third-party site or from an asynchronous call is difficult to time, so don’t just hard-code a wait. Instead, build logic into your tests that waits for elements before manipulating or verifying them. Don’t rely on third-party services for your testing; see if you can mock them. And of course, ensure you test both with and without those services. What happens if your CDN goes down? What backup does your application have? Remember when S3 went down recently? Did your application completely break? Proper tests could have prepared you for that.
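As a concrete illustration, here’s a minimal sketch of a dynamic wait using Selenium’s WebDriverWait (assuming Selenium 4’s Duration-based constructor); the “results” locator and ten-second timeout are placeholders, not values from any real project.

```java
import java.time.Duration;

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

public class DynamicWaitExample {

    // Wait for an asynchronously loaded element instead of sleeping for a fixed time.
    // The "results" id and the ten-second timeout are illustrative placeholders.
    public WebElement waitForResults(WebDriver driver) {
        WebDriverWait wait = new WebDriverWait(driver, Duration.ofSeconds(10));
        return wait.until(ExpectedConditions.visibilityOfElementLocated(By.id("results")));
    }
}
```

The wait returns as soon as the element appears, so fast environments don’t pay the full timeout and slow ones don’t fail spuriously.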

Brevity

This is something I most often see people struggle with when converting their manual tests to automated ones. For manual tests, I frequently see pages and pages of instructions for just one test. Step a informs step b, which in turn eventually informs step z. If step f fails, a manual tester can usually continue the test, smartly figuring out how to proceed, because, in reality, MANY things are actually being tested. For an automated test, this can spell disaster. If step f fails, your automated test probably doesn’t know how to continue, which may leave many steps untested. Why does this matter? Well, maybe there are two or three actual issues, but because they’re all being tested at once, they’ll be reported one at a time, delaying the final application fix needed for a ‘clean run.’ Additionally, it makes troubleshooting painful. If I can run through 3 steps to reproduce an issue instead of 20, I want to do that, especially if the issue occurs in multiple places.

When looking at your tests, see if you can break them up as much as possible. Can you seed some data so that you can skip some setup but still test what you need to? Are you testing two merely related things? Break them into two different tests. If you can’t determine what went wrong with your application based on the test title, you have too many steps and checks in your test. If you have more than 10 steps in your test case, you probably need to decompose it further. More than one ‘assert’? You’re probably doing it wrong.
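To make the ‘one assert per test’ idea concrete, here’s a hypothetical TestNG sketch: rather than one long test that logs in, updates a profile, and checks everything at once, the data is seeded per test and each behavior gets its own small test. TestUser, TestData, LoginPage, and ProfilePage are made-up helpers standing in for whatever your project already has.

```java
import static org.testng.Assert.assertEquals;
import static org.testng.Assert.assertTrue;

import org.testng.annotations.BeforeMethod;
import org.testng.annotations.Test;

public class ProfileTests {

    private TestUser user;

    // Seed a known user before each test so neither test depends on the other's state.
    // TestUser, TestData, LoginPage, and ProfilePage are hypothetical helpers.
    @BeforeMethod
    public void seedUser() {
        user = TestData.createUser("qa-user");
    }

    // One behavior, one assert: the test title alone tells you what broke.
    @Test
    public void loginSucceedsWithValidCredentials() {
        boolean loggedIn = LoginPage.login(user.name(), user.password());
        assertTrue(loggedIn, "expected valid credentials to log in");
    }

    @Test
    public void updatedDisplayNameIsShownOnProfile() {
        ProfilePage.updateDisplayName(user, "New Name");
        assertEquals(ProfilePage.displayNameFor(user), "New Name");
    }
}
```

If the second test fails, you know the profile update broke without having to wade through login steps first.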

Traceability

When I talk about testing frameworks, one thing I harp on is selecting a framework that makes your life easier. This means many different things for many different projects, but one thing all projects have in common is the need for documentation. We all need our tests to indicate exactly what was run, what passed, what failed, and why, and ideally we can find a test framework that does this for us. Not only do we want the test steps documented, we also want some verification that the tests are achieving their validation goal. We might also want to capture screenshots as further evidence of correct execution and results. Ensure you can prove that your tests are finding real issues, so you can simply send that evidence over to your development team, obviating the need to walk through the same issues time and time again with different team members.

There are lots of frameworks out there that automatically include this information, but sometimes they don’t have all of the tools or capabilities that you need. Either way, find the framework that works best for you, or supplement it with additional details. Maybe you can add logging statements to your automated tests; maybe you can use your framework’s reporters. Most tools have something built in for exactly these purposes. If you’re rolling your own testing framework, be sure to include this capability from the beginning, and maybe even design it around the need for this verification.
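For example, if you happen to be on TestNG, the built-in Reporter class lets you attach step-level notes that appear alongside the test’s result in the generated report. The checkout flow and discount code in this sketch are purely illustrative.

```java
import org.testng.Reporter;
import org.testng.annotations.Test;

public class TraceableTest {

    @Test
    public void checkoutAppliesDiscountCode() {
        // Each log line is attached to this test's result in the TestNG report,
        // so a reviewer can see exactly which step was reached before a failure.
        Reporter.log("Seeding cart with one discounted item", true);
        // ... test setup would go here ...
        Reporter.log("Applying discount code SAVE10", true);
        // ... action under test would go here ...
        Reporter.log("Verifying order total reflects the discount", true);
        // ... assertion would go here ...
    }
}
```

The second argument also echoes each message to standard out, which helps when you’re watching a CI console rather than the HTML report.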

Understandability

This is important for both manual and automated tests. You need to ensure you can actually understand what the heck your tests are doing, along with what they are trying to accomplish. Just as your code needs to be commented, your tests may need the same sort of treatment. When you look back at something you wrote six months ago, it can be hard to determine what feature is being tested, or why. For automated tests, this can be very painful. As I mentioned above, you want some self-documentation in your test cases, but ensure it’s something useful. The steps should be easy and obvious for anyone to read through, instantly understand, and reproduce if needed. If that’s the case, congrats, you truly do have some good self-documenting tests.

If you can, split long, complicated steps into multiple parts; it makes them easier to read and reproduce. Don’t split up short actions (e.g. typing a different letter of a word in each test step), as that becomes difficult and painful to follow. Think about the spacing and layout of your report. Perhaps annotate your tests with a description to add some information on their goal or intent. Maybe add the author or the test creation/update date, so if there are questions, you know who to turn to. Ensure you have commented your code as well, as trying to read through the code to understand the steps doesn’t always work out well. Remember, your automated tests are code, and need to be treated as such.
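One lightweight way to do this in TestNG is the description attribute on @Test, paired with an ordinary Javadoc block for the author and date; everything in the sketch below, including the MailCapture helper and the names and dates, is hypothetical.

```java
import static org.testng.Assert.assertTrue;

import org.testng.annotations.Test;

public class DescriptiveTest {

    /**
     * Verifies that password reset emails are sent for registered accounts.
     * Author: Jane Tester, last updated 2018-03-01 (illustrative values only).
     */
    @Test(description = "Password reset email is sent to a registered address")
    public void passwordResetEmailIsSent() {
        // MailCapture is a hypothetical helper standing in for whatever
        // mail-capture utility your project already uses.
        boolean emailReceived = MailCapture.waitForResetEmail("user@example.com");
        assertTrue(emailReceived, "expected a password reset email");
    }
}
```

The description shows up in the report next to the method name, so even a non-coder reviewing results can tell what the test was meant to prove.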

Conclusion

As I mentioned above, these are just some of the best practices you should look at following. There are plenty of others. Just because you follow all of these doesn’t mean you’ll have perfect, easy, always-passing automated tests, but it’s a great place to start. And if you don’t, well, you probably won’t. Have questions or some thoughts to add? Leave them below.

And of course, it wouldn’t be a good post without an offer of a piece of software to help you out with all of this. If you’re doing testing (especially API or front-end testing), be sure to check out Selenified. While it won’t guarantee your tests are written perfectly, it does add automatic checks and waits for asynchronous loads, is self-documenting (and takes screenshots), and does so in a nice readable way. It even allows you to add test descriptions. Happy testing!
