One of the discussions I see come up most frequently is how to deal with flaky tests.

Just go back to Google and search for "How to deal with flaky tests". You'll get over 10 million hits with a front page littered with blog posts about how to handle flaky tests. There will even be a hit from the Google Testing Blog, where they offer up a definition of a "flaky" test.

We define a "flaky" test result as a test that exhibits both a passing and a failing result with the same code.

The problem I have with this blog post, and the numerous others that cover this topic, is that they treat an Error and a Failure as the same test result.

In a previous post of mine, Top 5 pitfalls writing integration tests, I went over the difference between an Error and a Failure.

Error: An unexpected error has occurred during the execution of the test.

Failure: An assertion in a test has failed.

If there is a Failure during a test execution, it should mean that there is a defect in the production code of your unit under test. If the test failed but there isn't a defect, then you need to rethink your test design.

If there is an Error during a test execution, it should mean that there is a defect in the APIs that you are using to set up your preconditions.

If you are running into the same issues I am, then the flakiness you are seeing in your test results isn't coming from your tests at all: it's the libraries your tests depend on that cause intermittent errors.

I want to amend the flaky test definition:

A flaky test result exhibits both a passing and an error result with the same code.
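To make the amended definition concrete, here is a sketch of a test that is "flaky" only in the Error sense. The `setup_preconditions` helper is hypothetical, standing in for a library call that intermittently fails (say, a network client); the test's own assertion is sound, so the same code yields a pass on one run and an Error on the next:

```python
import unittest

calls = {"n": 0}

def setup_preconditions():
    # Hypothetical stand-in for a precondition-setup library that
    # fails intermittently: every second call raises.
    calls["n"] += 1
    if calls["n"] % 2 == 0:
        raise ConnectionError("intermittent failure in setup library")

class FlakyByError(unittest.TestCase):
    def setUp(self):
        setup_preconditions()

    def test_unit(self):
        # The assertion itself never fails; the unit under test is fine.
        self.assertEqual(2 + 2, 4)

# Run the same test twice with the same code.
results = []
for _ in range(2):
    result = unittest.TestResult()
    unittest.defaultTestLoader.loadTestsFromTestCase(FlakyByError).run(result)
    results.append(result)

# First run passes; second run records an Error, not a Failure.
print(results[0].wasSuccessful())
print(len(results[1].errors), len(results[1].failures))
```

Under the amended definition this is a flaky result, but the blame lies with the setup library, not with the test or the production code.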

I think that we can all be more diligent in the language we use when we describe flaky failures and errors in test code.

I'd love to hear your thoughts on Flaky tests. Should we treat Errors and Failures the same?

© 2018. All Rights Reserved.
