I was reading the other day and happened to come to a section of my book discussing the scientific method. While I grew up with a heavy background in math and science, and am familiar with the scientific method, it had been a while since I’d thought about it at all. It was interesting to see it all laid out fresh, as I found an impressive number of similarities between it and software testing. So many, in fact, that I believe all testers should (and maybe even unwittingly do) follow it when testing.

Form a Question

The first step of the scientific method is to form the question to be asked. What is it you are curious about? What do you want to understand?

For testers, this question is often given to us: “Does this feature work?” However, we can’t just stop there; we need to do a little digging. We don’t want to start off with something this vague. We want to make our ‘question’ SMART: specific, measurable, achievable, relevant, and time-bound. What does ‘work’ mean? What is the scope of the feature? So, let’s gather some information so that we can form a better question.

Do Some Research

I’ve often seen this step lumped in with the one above; it really depends on who is describing the scientific method. I like to pull it out because, for me, it is a totally separate step. The goal is to engage in some actual thinking before acting.

For testers, this step is important. It’s about refining the question above and pinning down the specifics we’ll need in order to form our hypothesis. Oftentimes, this involves talking to developers, business analysts, and whomever else can give us more information about the requirements and implementation.

If you’re running Scrum, these questions, and this research, should all be handled in sprint grooming. Find out as much as you can about the new feature: how it is supposed to function, and what you are supposed to be testing (e.g. what is in vs. out of scope). As a result of this research, you should have enough information to arrive at a hypothesis.

Construct a Hypothesis

A hypothesis is simply an answer to the question posed above. In a good sprint grooming session, all participants should arrive at the same hypothesis, i.e., agree on how the new feature should work.

This hypothesis will then be broken down, and the feature will be ‘experimented’ on. This is where the true art of being a tester and critical thinker comes into play.

During sprint planning, it’s good to come up with an outline of what you are going to be testing for the feature: the happy paths, negative paths, malicious paths, etc. These don’t need to be broken down any further, but letting the team know what you will be looking at, if not necessarily how, will help inform how the feature gets built.

Form Predictions

This is a critical step, as it encompasses formulating your prediction about how the feature should work. Everyone should be on the same page before the feature is built, and before the feature is tested. We need a known goal that we are working toward; otherwise, we’re just confirming whatever we saw instead of thinking critically about the outcome. The goal is to be able to compare what we expect (our hypothesis) against what we observe (our results).

In science, this helps eliminate observation bias. In testing, this helps resolve the complaint of ‘works as designed’. Ensuring everyone is on the same page before testing even begins builds confidence that the end product is actually what is desired.

This is a personal pain point of mine, as I often see people run tests and then just confirm the outcome. Without critical thinking before the experiment begins, you are subjecting yourself to confirmation bias. This holds true for both automated and manual testing, and it can hide a lot of bugs, or worse, tout non-working software as working software.
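
As a rough sketch of the difference (the calculate_discount function and its values are hypothetical, invented purely for illustration), compare a test whose expectation was agreed on beforehand against one that merely echoes whatever the system produced:

    # Hypothetical example: calculate_discount() and the expected value
    # stand in for whatever behavior the team agreed on during grooming.

    def calculate_discount(price, percent):
        return price * (1 - percent / 100)

    def test_discount_with_prediction():
        # The prediction was made BEFORE running the test:
        # 20% off of $100.00 should be $80.00.
        expected = 80.0
        assert calculate_discount(100.0, 20) == expected

    def test_discount_confirmation_bias():
        # Anti-pattern: asserting against the system's own output.
        # This can never fail, so it only confirms what we saw.
        actual = calculate_discount(100.0, 20)
        assert calculate_discount(100.0, 20) == actual

The second test will pass no matter what the discount logic does, which is exactly the trap described above.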

Do Some Experiments

In my mind, this is the most difficult part of both the scientific method and testing. It’s not that testing, or doing experiments, is hard; it’s that constructing a rigorous test or experiment is difficult. When we perform an experiment, the whole goal is to isolate all the variables, manipulate a single element, and determine the impact that one element has on the system.

Testing should do the same thing. While I do care about the system as a whole, if something doesn’t work, how can I tell what the problem is? Without starting from a known state, controlling my inputs, and being minimalistic in the changes implemented, how can I be sure my end result is due to the change I made in the software?

When we’re performing our ‘experiments’ on an application, keep a few things in mind (a sketch follows the list):

  • What data do we need?
  • How can we isolate the components involved?
  • What is the smallest change we can make to get the expected result?
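
As a minimal sketch of those three points (the Cart class below is invented purely for illustration), a test can seed a known state, isolate the component under test, and change exactly one thing:

    # Hypothetical Cart class, invented to illustrate the checklist above.
    class Cart:
        def __init__(self):
            self.items = []

        def add(self, name, price):
            self.items.append((name, price))

        def total(self):
            return sum(price for _, price in self.items)

    def test_adding_one_item_changes_total_by_its_price():
        # Known state: a fresh, empty cart with no shared data.
        cart = Cart()
        assert cart.total() == 0

        # Smallest possible change: add exactly one item.
        cart.add("widget", 9.99)

        # The single variable we manipulated explains the entire result.
        assert cart.total() == 9.99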

I find this even more important for my automated testing. Because automation can only confirm what you tell it to, I make my automated tests as atomic as possible. This means controlling the environment and putting the system into a known state before the test even starts.

For example, if I’m testing that something is present on the homepage, I’d like to avoid exercising the login screen. Maybe I don’t log in through the UI, but instead log in through the API, grab the access token, shove it into my browser, and then go directly to the homepage. In this fashion, if something I expected isn’t present, I know it’s due to a homepage issue, not a problem with logging in.
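
Here’s a sketch of that idea in Python, assuming a token-based app; the base URL, login endpoint, JSON field names, and localStorage key are all assumptions made up for illustration:

    # A sketch of logging in via the API instead of the UI. The endpoint,
    # JSON fields, and storage key below are assumptions, not a real API.
    import requests
    from selenium import webdriver

    BASE_URL = "https://example.com"  # hypothetical app under test

    def login_via_api(username, password):
        # Skip the login screen entirely; get a token straight from the API.
        resp = requests.post(
            f"{BASE_URL}/api/login",
            json={"username": username, "password": password},
            timeout=10,
        )
        resp.raise_for_status()
        return resp.json()["access_token"]

    def test_homepage_shows_welcome_banner():
        token = login_via_api("test-user", "test-pass")
        driver = webdriver.Chrome()
        try:
            # Load any page on the domain so we can write to its storage.
            driver.get(BASE_URL)
            driver.execute_script(
                "window.localStorage.setItem('access_token', arguments[0]);",
                token,
            )
            # Now go straight to the page under test.
            driver.get(f"{BASE_URL}/home")
            assert "Welcome" in driver.page_source
        finally:
            driver.quit()

If the assertion fails here, the login flow was never exercised, so the failure points squarely at the homepage.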

Analyze the Data and Draw a Conclusion

This step is all about results. What happened during the ‘experiment’? This is where it is vital to refer back to your hypothesis and predictions. Remember: we don’t want our observation bias to impact our conclusions (e.g. assuming that whatever you see is correct)! After all, developers do make mistakes; otherwise, we wouldn’t be testing!

Be sure these conclusions are based on data and results. This again ties back to having a good hypothesis, something that is measurable, and having an experiment that is deterministic.
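
Determinism is worth spelling out. As a small sketch (shuffle_deck is a stand-in for any code with nondeterministic behavior), pinning down sources of randomness turns a flaky observation into a measurable one:

    import random

    def shuffle_deck(deck, rng):
        # Stand-in for any code whose output depends on randomness.
        deck = list(deck)
        rng.shuffle(deck)
        return deck

    def test_shuffle_is_repeatable_with_a_fixed_seed():
        # With a fixed seed, the same inputs always produce the same
        # output, so the result can be predicted before the test runs.
        first = shuffle_deck(range(10), random.Random(42))
        second = shuffle_deck(range(10), random.Random(42))
        assert first == second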

Share Your Results

Once we’ve determined our results, based on our predictions, be sure to gather the evidence (screenshots, reports, etc.) and prepare it for review. Most scientists publish their results when complete, and even submit them for peer review. No good test results should live in a bubble; otherwise, why did you run them? Share them with the team, management, and anyone else, so that the effort you put into verification goes to good use.
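
One way to make evidence-gathering automatic is a sketch like the following, which uses pytest’s real reporting hook; it assumes your tests expose the browser through a hypothetical ‘driver’ fixture:

    # conftest.py -- a sketch of capturing evidence on failure with pytest.
    # The hook below is a real pytest API; the 'driver' fixture name is
    # an assumption about how your tests are wired up.
    import pytest

    @pytest.hookimpl(hookwrapper=True)
    def pytest_runtest_makereport(item, call):
        outcome = yield
        report = outcome.get_result()
        if report.when == "call" and report.failed:
            driver = item.funcargs.get("driver")
            if driver is not None:
                # Name the screenshot after the failing test for easy review.
                driver.save_screenshot(f"{item.name}.png")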

Like any good scientist, ensure your results are repeatable and that enough data is shared for someone else to reproduce them. You want to be able to prove your software works the way you claim it does. Think of your peers as an IV&V (independent verification and validation) team, able to look at your work and draw the same conclusions.

Final Thoughts

Remember, testing should be fun, but it needs to be rigorous, with clear results. When someone asks about the tests you ran, don’t worry about whether they passed or not; just make sure you can back up your answers. After all, there is no shame in disproving your hypothesis, so why be ashamed of a failed test? You’ve discovered something either way; just be sure to share it with your organization.

As always, happy testing, and leave your thoughts below!
