I’m going to take a little turn this month from what I usually post and talk about what many (including myself) consider the ‘more boring’ aspect of testing: test planning.

About a month ago, I participated in a “Hackathon” as part of a proposal effort. To prepare for this, the Coveros team spent some time building out a basic architecture and framework for our application, along with a nice robust DevOps pipeline. I was the testing lead, and as such, was essentially “responsible” for the quality of the application. Of course you can’t “test in” quality, but I made sure we had a good testing framework set up to run our automated tests, both front end and services.

A major focus of our effort was setting up a build pipeline that could successfully build, deploy, and test our application. My ‘testing team’ spent a good amount of time working with the DevOps team to ensure that the framework we chose (my own open-source testing framework) ran successfully within the pipeline, and that the test results could be easily viewed. In my mind, we were doing great work: our functional (and some non-functional, such as API) tests got kicked off automatically, and the results could be viewed right in the pipeline. The problem came a little later, when we really started talking quality.

We decided we should formalize our Test Strategy into a document. That was fine by me: I knew what our strategy was, and I was designing all of our tests directly from it. Because of this, I procrastinated on actually writing the document; I hate documentation, and in my mind, I had more important things to do.

I eventually got around to writing up my Test Strategy, and did a pretty great job (if I do say so myself). It wasn’t too verbose, but it hit the main testing points I had in mind, each with measurable outcomes (think SMART). Below are a few excerpts from my Testing Strategy:

Unit Tests will be written for each method by developers, ensuring statement coverage of every applicable section of code. JUnit will be used as the Java code testing framework, with JaCoCo being used to calculate the coverage. Jasmine will be used as the JavaScript code testing framework. The branch coverage of both the back-end and front-end application code needs to be at least 70% before code can be released to the testers.
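To make that branch-coverage criterion concrete: a method with a single if has two branches, and JaCoCo only counts them as covered if tests drive both paths. Here’s a minimal JUnit sketch of what that looks like (the class and method are hypothetical, not from our application):

```java
import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class DiscountCalculatorTest {

    // Hypothetical production method with a single branch: the 70% branch
    // coverage goal means tests must exercise both the true and false paths.
    static double discountedPrice(double price, boolean isMember) {
        if (isMember) {
            return price * 0.9;   // members get 10% off
        }
        return price;
    }

    @Test
    public void memberReceivesDiscount() {
        assertEquals(90.0, discountedPrice(100.0, true), 0.001);
    }

    @Test
    public void nonMemberPaysFullPrice() {
        assertEquals(100.0, discountedPrice(100.0, false), 0.001);
    }
}
```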

Acceptance criteria associated with each story will be turned into Selenium tests, executed using the Coveros automated testing framework. These tests will comprise the Acceptance Test Suite. 100% of these tests need to pass in order for the Acceptance Test Suite execution to be considered successful.
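Our actual acceptance tests ran through the Coveros framework, but to show the general shape, here’s a rough sketch of one such test written straight against Selenium WebDriver with JUnit. The URL, locators, and the acceptance criterion itself are made up for illustration:

```java
import static org.junit.Assert.assertTrue;

import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class LoginAcceptanceIT {

    private WebDriver driver;

    @Before
    public void setUp() {
        // Assumes chromedriver is available on the test agent's PATH
        driver = new ChromeDriver();
    }

    // Hypothetical acceptance criterion: "A registered user can log in
    // and is greeted by name on the dashboard."
    @Test
    public void registeredUserCanLogIn() {
        driver.get("http://localhost:8080/login");            // hypothetical URL
        driver.findElement(By.id("username")).sendKeys("testuser");
        driver.findElement(By.id("password")).sendKeys("Password1!");
        driver.findElement(By.id("submit")).click();
        assertTrue(driver.findElement(By.id("welcome-banner"))
                .getText().contains("testuser"));
    }

    @After
    public void tearDown() {
        driver.quit();
    }
}
```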

ZAP will also be run in active scan mode over the deployed application, spidering and attacking the application. Any Critical or Major issues identified will be considered a failure.
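In practice, “Critical or Major” has to be mapped onto ZAP’s own risk ratings (High, Medium, Low, Informational), and something has to read the scan results and make the pass/fail call. A check for this criterion could look roughly like the sketch below, which parses ZAP’s traditional XML report (assuming the <alertitem>/<riskcode> layout; verify the element names against what your ZAP version writes) and exits non-zero if anything Medium-or-higher shows up:

```java
import java.io.File;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

public class ZapGate {

    public static void main(String[] args) throws Exception {
        // args[0]: path to the XML report saved from the ZAP active scan
        Document report = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder().parse(new File(args[0]));

        // Assumed report layout: one <alertitem> per finding, with a
        // <riskcode> of 3=High, 2=Medium, 1=Low, 0=Informational.
        // "Critical or Major" is mapped here to Medium-or-higher.
        NodeList alerts = report.getElementsByTagName("alertitem");
        int failures = 0;
        for (int i = 0; i < alerts.getLength(); i++) {
            Element alert = (Element) alerts.item(i);
            int risk = Integer.parseInt(alert.getElementsByTagName("riskcode")
                    .item(0).getTextContent().trim());
            if (risk >= 2) {
                failures++;
                System.out.println("Failing alert: " + alert
                        .getElementsByTagName("alert").item(0).getTextContent().trim());
            }
        }

        if (failures > 0) {
            System.err.println(failures + " Medium-or-higher alerts found; failing the scan.");
            System.exit(1);   // non-zero exit fails the pipeline stage
        }
    }
}
```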

All of these criteria came from the Testing Criteria section of the document. So, did you notice the same thing that I did? I had identified metrics to capture to determine whether our application should ‘pass’ or ‘fail’ our testing, but I hadn’t determined how these fit into our pipeline. 100% success of automated tests is simple enough, since that is what most frameworks check for by default. Specific coverage percentages (including line and branch coverage), as well as security vulnerabilities of different levels, are a different story. Our DevOps pipeline needed to understand how to identify these metrics, and what to do with them.
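The coverage threshold is a good example of what that means. Somewhere, a pipeline step has to read JaCoCo’s output and turn “70% branch coverage” into a pass or fail decision. Here’s a sketch of such a gate as a small Java program run against the jacoco.xml report the build writes (the report path is an assumption; the format is JaCoCo’s XML report, whose report-level counters hold the aggregate totals):

```java
import java.io.File;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Element;
import org.w3c.dom.Node;
import org.w3c.dom.NodeList;

public class CoverageGate {

    private static final double REQUIRED_BRANCH_COVERAGE = 0.70;

    public static void main(String[] args) throws Exception {
        // args[0]: path to the jacoco.xml report produced by the build
        DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
        // Skip fetching JaCoCo's report.dtd so the file parses offline
        factory.setFeature(
                "http://apache.org/xml/features/nonvalidating/load-external-dtd", false);
        Element report = factory.newDocumentBuilder()
                .parse(new File(args[0])).getDocumentElement();

        // The report-level <counter type="BRANCH" missed=".." covered=".."/>
        // child holds the aggregate branch totals for the whole module.
        long missed = 0, covered = 0;
        NodeList children = report.getChildNodes();
        for (int i = 0; i < children.getLength(); i++) {
            Node node = children.item(i);
            if (node instanceof Element && "counter".equals(node.getNodeName())
                    && "BRANCH".equals(((Element) node).getAttribute("type"))) {
                missed = Long.parseLong(((Element) node).getAttribute("missed"));
                covered = Long.parseLong(((Element) node).getAttribute("covered"));
            }
        }

        double coverage = covered + missed == 0 ? 0 : (double) covered / (covered + missed);
        System.out.printf("Branch coverage: %.1f%%%n", coverage * 100);
        if (coverage < REQUIRED_BRANCH_COVERAGE) {
            System.err.println("Below the 70% threshold; failing the build.");
            System.exit(1);   // non-zero exit fails the pipeline stage
        }
    }
}
```

The same rule can also be enforced inside the build with the jacoco-maven-plugin’s check goal; either way, the threshold has to live somewhere the pipeline evaluates automatically, not just in a document.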

Because I waited so late in the game, our pipeline wasn’t initially designed to look for or handle these quality thresholds. While we had a great DevOps team, by the time I finally realized what checks we really needed in the pipeline, we were just days away from our “Hackathon.” The pipeline was on a code freeze, like any other application would be, to reduce the risk of a breakage that could keep us from successfully completing the “Hackathon.”

So I was kind of stuck. As a team, we had several discussions and agreed that these checks would be done manually. Come the day of the “Hackathon,” and what do you know, the first code we pushed through didn’t meet these metrics. Making this a manual check instead of an automated one allowed some things to slip through the cracks, and we had to initiate a hotfix for the lower-quality code. Future releases worked, but they took more time due to the manual nature of verifying our metrics.

What is the point of my story? Test planning really is important. Had I generated this document sooner, we would have had plenty of time to bake the required automated checks into the pipeline. It’s possible the pipeline wouldn’t even have been capable of performing these checks, and determining them early enough could have helped shape the pipeline itself. While agile promotes “working software over comprehensive documentation,” it doesn’t say to ignore documentation (as I’d often like to think).

When determining your desired DevOps pipeline capabilities, think about what you need to verify in your application, and how, before it’s too late.
