I recently had the pleasure of doing a webinar with Jeff Payne on test automation. In it, I rehashed some of the points from my keynote at STARWEST, but mainly we got to have a wonderful discussion on some key talking points. Despite what some people thought, none of these questions were pre-planned or discussed ahead of time; Jeff was kind enough to blindside me with some great ones.
One of the main points Jeff brought up was the use of the word ‘checking’ vs. ‘testing’ when discussing automated testing. Should we call automated testing ‘automated checking’ from now on? Is all testing just checking? He didn’t seem to like me channeling my inner James Bach.
The main point I wanted to get across was that, regardless of what you call your automation, automation can only verify what it knows—it can’t look at the system as a whole, only the whole that you describe. Testers, on the other hand, by virtue of running a manual test, see much more than the few buttons they interact with, and they can and SHOULD respond to that. I don’t care if you’re not testing the header: if the logo is missing, open a bug. Conversely, an automated test doesn’t think to look at the header or logo if it’s verifying content on the page. The goal isn’t to belittle automation, but to ensure testers realize that automation is pointed, specific, and only looks at what you tell it to look at.
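To make that concrete, here is a minimal, hypothetical sketch (not from the webinar—the page markup and the `automated_check` function are invented for illustration). The check encodes exactly one assertion about page content, so a page with its logo missing passes just as happily as a correct one:

```python
# Two versions of the same "page": one rendered correctly,
# one with the logo silently dropped from the header.
PAGE_WITH_LOGO = """
<header><img src="logo.png" alt="Company logo"></header>
<main><h1>Order Confirmation</h1><p>Thank you for your order.</p></main>
"""

PAGE_MISSING_LOGO = """
<header></header>
<main><h1>Order Confirmation</h1><p>Thank you for your order.</p></main>
"""

def automated_check(page_html: str) -> bool:
    # The automation only knows about the single assertion it encodes.
    # It was never told to look at the header, so it never will.
    return "Thank you for your order." in page_html

# Both pages pass the check: the missing logo is invisible to it.
assert automated_check(PAGE_WITH_LOGO)
assert automated_check(PAGE_MISSING_LOGO)
```

A human running the same confirmation scenario would glance at the broken header and file a bug; the script, by design, cannot.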
Unfortunately, in the industry, this often isn’t properly communicated from the tester’s side. Testers often take a manual test case, convert it to an automated one, and assume the same coverage is obtained. It isn’t. As a result, automation often gets blamed for not catching bugs it was never designed to catch. Properly scoping your test and identifying what is and isn’t covered by it are vital. Ensure your tests have a clear, concise purpose, and that each one’s actual intent is properly communicated to those relying on it.