What are Feature Toggles

There are many ways to develop code, and many different strategies for releasing capabilities. While many development organizations prefer feature and release branches, some utilize feature toggles (sometimes even combining these with branching). A feature toggle gives developers the ability to turn a particular feature on or off, either with environment variables or application settings. While this is a great technique for releasing features at the right time, it can cause chaos for testers. Is the feature on or off? What tests should I run? What do I do with my old tests when the feature they test isn’t enabled? What happens if/when the feature is turned back off?
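As a minimal sketch of the idea (the variable name FEATURE_NEW_CHECKOUT is purely illustrative, not from any particular framework), an environment-variable toggle might look like this:

```java
// Minimal illustration of an environment-variable-driven feature toggle.
public class FeatureToggleExample {

    // Read the toggle from the environment; anything other than "true" means off
    static boolean isNewCheckoutEnabled() {
        return Boolean.parseBoolean(System.getenv().getOrDefault("FEATURE_NEW_CHECKOUT", "false"));
    }

    public static void main(String[] args) {
        if (isNewCheckoutEnabled()) {
            System.out.println("Using new checkout flow");
        } else {
            System.out.println("Using legacy checkout flow");
        }
    }
}
```

The application branches on that value at runtime, which is exactly what makes life hard for the tester: the same build behaves differently depending on configuration.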

Why it’s Critical for CI

When tests run within a CI pipeline, this becomes a critical question: manual triage of test failures can delay a release and cost testers and developers time and money. It’s not acceptable to simply wave off a test failure as permissible because the feature isn’t enabled.

Standard Automation Implementation

The code I typically see for handling feature toggles isn’t pretty. Many if statements are added to work around the fact that the feature may or may not be turned on.

private final Element submitButton;

public SubmitClass(App app) {
    submitButton = app.newElement(LOCATOR.ID, "submit");
}

public void clickSubmitButton() {
    if (submitButton.is().present()) {
        submitButton.click();
    }
}

Not only does this make for non-deterministic results, it poses a real problem for actually verifying the application. For the above code, what if there was a bug and the submit button was left out? What if the locator changed? What if the button was simply slow to load (think asynchronous calls to load elements)? What would your test do? It would probably fail, but maybe not, and either way you couldn’t quickly determine the root cause. Even worse, if this is the last step of your test, it might just be skipped altogether. A test that passes while hiding a real bug is about the worst-case scenario anywhere, and it’s definitely possible here.

Is it a dev issue or a test issue? A lot of work might be involved to determine the cause of the failure.

With many feature toggles, code can quickly become spaghetti code, and will cause robustness and maintainability issues.

But don’t fret, there is a better way!

Testing Feature Toggles

What I like to do is write my own feature toggle checks. I’ve found they can solve all of the issues above. Typically, there is some way to determine which features are turned on or off. Most often, I’ve found it exposed through the application’s APIs. It might not always be straightforward, but talk to your developers and see where they point you. For example, my current project has a nice API you can call that returns a list of features and their statuses. The result of the API call looks something like this:

{
  "featureOneEnabled": false,
  "featureTwoEnabled": false,
  "featureThreeEnabled": false,
  "featureModules": {
    "Hello": true,
    "World": true
  }
}

What you can then do is make a call to that API to determine if the feature is enabled or not, and make your tests respond accordingly. I like to write a custom class to handle this, something like:

public class Feature {

    public enum Features {
        Hello, World
    }

    /**
     * Retrieves the application details from the app config
     *
     * @return the application configuration as a JsonObject
     */
    public static JsonObject getAppConfig() {
        HTTP service = new HTTP(Setup.getURL().toString());
        Response response = service.get("/get/configuration");
        return response.getObjectData();
    }

    /**
     * Retrieves the application feature details from the app config
     *
     * @return a map of feature module names to their enabled status
     */
    public static Map<String, Boolean> getFeatureModuleDetails() {
        JsonObject featureModules = getAppConfig().get("featureModules").getAsJsonObject();
        Map<String, Boolean> features = new HashMap<>();
        for (Map.Entry<String, JsonElement> feature : featureModules.entrySet()) {
            features.put(feature.getKey(), feature.getValue().getAsBoolean());
        }
        return features;
    }

    /**
     * Determines if the passed in feature is enabled or not, by querying the app config. This value can be
     * overridden by supplying the enabled/disabled values from the command line
     *
     * @param feature which feature to check for being enabled or disabled
     * @return true if the feature is enabled, false otherwise
     */
    public static Boolean isFeatureModuleEnabled(Features feature) {
        String featureProperty = "feature." + feature.name().toLowerCase() + ".enabled";
        if (System.getProperty(featureProperty) != null) {
            return Boolean.valueOf(System.getProperty(featureProperty));
        }
        return getFeatureModuleDetails().get(feature.name());
    }

    public static Boolean isFeatureOneEnabled() {
        if (System.getProperty("feature.one.enabled") != null) {
            return Boolean.valueOf(System.getProperty("feature.one.enabled"));
        }
        return getAppConfig().get("featureOneEnabled").getAsBoolean();
    }
}

I can then perform a simple call to check whether the feature is enabled, so my SubmitClass from above now looks like this:

private final Element submitButton;

public SubmitClass(App app) {
    submitButton = app.newElement(LOCATOR.ID, "submit");
}

public void clickSubmitButton() {
    if (Feature.isFeatureOneEnabled()) {
        submitButton.click();
    }
}

I now have a deterministic method that isn’t subject to the errors described previously. You may have noticed that there is some additional code above that checks System properties for values. What I’ve done is allow the API checks for enabled features to be overridden by passed-in command-line variables. This way, if I want to ensure a feature is turned on, I can tell my tests to run a certain way. Without this, Ops might forget to enable a feature and my tests would still pass. With this, if Ops forgets to enable a feature, my tests fail, as the feature should be turned on even though it is not.
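The override precedence can be sketched in isolation like so (a minimal, standalone illustration; featureEnabledPerApi stands in for the value the application’s config API would return):

```java
// Sketch of the override precedence: a system property, when present,
// wins over whatever the application's config API reports.
public class OverrideExample {

    // Stand-in for the value the application's config API would return
    static boolean featureEnabledPerApi = false;

    static boolean isFeatureOneEnabled() {
        String override = System.getProperty("feature.one.enabled");
        if (override != null) {
            return Boolean.parseBoolean(override);
        }
        return featureEnabledPerApi;
    }

    public static void main(String[] args) {
        System.clearProperty("feature.one.enabled");
        System.out.println(isFeatureOneEnabled()); // prints false: follows the API value

        // Simulate passing -Dfeature.one.enabled=true on the command line
        System.setProperty("feature.one.enabled", "true");
        System.out.println(isFeatureOneEnabled()); // prints true: the override wins
    }
}
```

In practice the override would be supplied at launch, e.g. as a -D flag to the JVM, rather than set programmatically.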

Custom Listeners

I like to take this approach one step further. I’ve mentioned before in my posts that I like to follow unit-test-style practices when writing my functional tests, mainly making them atomic and ensuring they test one specific feature. Well, if a feature is toggled off and I have tests to verify that feature, many of those tests will effectively become empty based on the above implementation.

A great workaround for this is to simply skip the test if the feature is not turned on. Using TestNG, I’ve written a custom Listener which, before each test runs, checks whether the test is tagged for a feature that is currently disabled, and if so, skips the test.
For example:

/**
 * Before each test runs, check to determine if the corresponding module, based on the tags, is turned
 * on or off. If the module is turned off, skip those tests
 *
 * @param result the TestNG test being executed
 */
@Override
public void onTestStart(ITestResult result) {
    for (Feature.Features feature : Feature.Features.values()) {
        if (TestCase.containsTag(result, "@" + feature.name().toLowerCase()) &&
                !Feature.isFeatureModuleEnabled(feature)) {
            log.warn("Skipping test case '" + result.getInstanceName() + "', as feature " + feature + " is disabled");
            result.setStatus(ITestResult.SKIP);
            throw new SkipException("Skipping the test case");
        }
    }
    if (!Feature.isFeatureOneEnabled() && TestCase.containsTag(result, "@feature-one")) {
        log.warn("Skipping test case '" + result.getInstanceName() + "', as FeatureOne is disabled");
        result.setStatus(ITestResult.SKIP);
        throw new SkipException("Skipping the test case");
    }
}
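The decision the listener makes can be distilled to one boolean rule: skip when a test carries a feature tag and that feature is disabled. Here is a standalone sketch of just that rule, with the TestNG pieces (ITestResult, SkipException, the tag helper) replaced by plain values:

```java
import java.util.List;

// Standalone sketch of the skip decision made in the listener above.
public class SkipDecisionExample {

    // A test is skipped only when it is tagged for a feature that is disabled
    static boolean shouldSkip(List<String> testTags, String featureTag, boolean featureEnabled) {
        return testTags.contains(featureTag) && !featureEnabled;
    }

    public static void main(String[] args) {
        List<String> tags = List.of("@smoke", "@feature-one");
        System.out.println(shouldSkip(tags, "@feature-one", false)); // prints true: tagged and disabled
        System.out.println(shouldSkip(tags, "@feature-one", true));  // prints false: feature is on
        System.out.println(shouldSkip(tags, "@feature-two", false)); // prints false: test not tagged for it
    }
}
```

Untagged tests, and tests for enabled features, run normally; only the intersection is skipped.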

This works seamlessly with my test execution, and even shows which tests passed, failed, and were skipped as a result of the feature toggles.

Happy Testing!
