The Pragmatic Programmer states that Integration Testing shows that the major subsystems that make up the project work and play well with each other.

Unit tests isolate individual components and mock out the behavior of external system dependencies, while Integration tests flow through your code with the actual systems in place and with real data.
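To make that distinction concrete, here is a minimal sketch in C# with NUnit and Moq. The OrderService, IPaymentGateway, and RealPaymentGateway names are hypothetical and only for illustration: the unit test fakes the gateway, while the integration test lets the real one talk to the outside world.

```csharp
using Moq;            // mocking library used only by the unit-level test
using NUnit.Framework;

public interface IPaymentGateway
{
    bool Charge(decimal amount);
}

public class OrderService
{
    private readonly IPaymentGateway _gateway;
    public OrderService(IPaymentGateway gateway) => _gateway = gateway;
    public bool PlaceOrder(decimal total) => total > 0 && _gateway.Charge(total);
}

[TestFixture]
public class OrderServiceTests
{
    [Test]
    public void UnitTest_MocksTheExternalDependency()
    {
        // The gateway is faked, so only OrderService logic is exercised.
        var gateway = new Mock<IPaymentGateway>();
        gateway.Setup(g => g.Charge(It.IsAny<decimal>())).Returns(true);

        var service = new OrderService(gateway.Object);

        Assert.That(service.PlaceOrder(10m), Is.True);
    }

    [Test]
    public void IntegrationTest_UsesTheRealDependency()
    {
        // RealPaymentGateway is a hypothetical concrete implementation (not shown)
        // that talks to the actual external system, so this test exercises the
        // full path with real data.
        var service = new OrderService(new RealPaymentGateway());

        Assert.That(service.PlaceOrder(10m), Is.True);
    }
}
```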

Integration Testing is often one of the toughest parts of the software development life cycle. Depending on the complexity of your application, your integration tests can be quite a burden to get up and running. Many software engineers write Integration tests, but they tend to struggle along the way, myself included.

Here is my list of the top things that I have struggled with when writing my Integration tests. It certainly isn't a complete list, but it's a place to start.


1. Writing too many tests.

The number one thing that I see happen is that people are writing too many Integration tests. When you find yourself with hundreds or even thousands of tests, you are likely writing tests that could be pulled down to the unit level.

When you are developing software, faster feedback on the quality of changes is a big deal. If you are stuck waiting for hundreds or thousands of integration tests to run, you are going to kill developer productivity.


One of the benefits of moving them to Unit tests is the faster execution time. Comparatively, a thousand Unit tests are going to run significantly faster than a thousand Integration tests.

Another thing to consider is the cost of maintaining Integration tests. Not only are they costly to create, they are arguably even more costly to maintain. Keeping them up-to-date and in a runnable state takes a lot of time, and that is time you aren't devoting to writing new features.

This is also a good time to mention the testing pyramid. If you aren't familiar with the concept, I came across an article that defines it fairly well. Basically, you want to have a strong foundation of Unit tests, with fewer Integration tests, and even fewer Acceptance tests.

2. Lack of error and exception handling.

It took me far too long to figure out the difference between an Error and a Failure in a test execution.

Error: An unexpected error has occurred during the execution of the test.
Failure: An assertion in a test has failed.

Perfect tests will never error and will only ever fail when there is a defect in the software. Since you never write perfect tests, you need to make sure that you have proper error handling. Your future self will thank you when trying to figure out why your tests have errored on the CI server!

Errors in Integration tests typically happen when configuration values haven't been passed in properly. If you don't have any error handling, at least put some around any place you make connections to external services.

An exception saying that a SqlTimeout has occurred is going to be harder to track down than an exception saying that the connection string was not passed into the test.
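As a rough sketch of that kind of guard, the NUnit setup fixture below checks the configuration up front and fails with a descriptive message before any test tries to open a connection. The INTEGRATION_DB_CONNECTION variable name is an assumption for illustration, not a convention from this post.

```csharp
using System;
using Microsoft.Data.SqlClient; // or System.Data.SqlClient on older stacks
using NUnit.Framework;

[SetUpFixture]
public class IntegrationTestConfig
{
    public static string ConnectionString { get; private set; }

    [OneTimeSetUp]
    public void ValidateConfiguration()
    {
        // Fail fast with a clear error instead of a SqlTimeout deep inside a test.
        ConnectionString = Environment.GetEnvironmentVariable("INTEGRATION_DB_CONNECTION");

        if (string.IsNullOrWhiteSpace(ConnectionString))
        {
            throw new InvalidOperationException(
                "INTEGRATION_DB_CONNECTION was not set. " +
                "Set it to the test database connection string before running the suite.");
        }

        // Optionally prove the connection works before any test runs.
        try
        {
            using var connection = new SqlConnection(ConnectionString);
            connection.Open();
        }
        catch (SqlException ex)
        {
            throw new InvalidOperationException(
                $"Could not connect to the test database during setup: {ex.Message}", ex);
        }
    }
}
```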

3. Not automating the setup and preconditions.

Have you ever gotten all your tests passing locally, only to have them fail immediately when your teammates try to run them?

"But they worked on my machine!"


This is something I hear all too often (and sometimes catch myself saying).

Make sure that you document all the preconditions: any packages you downloaded, environment variables you altered, manual steps you performed, or anything else you did to get the preconditions in place.

Once it is documented, go all out and automate those steps! This way you can run your tests on any machine, especially if you want to get these running on a CI server.
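One way to do that, sketched below under the assumption that your external dependencies can be described in a docker-compose.yml in the repo, is a one-time setup fixture that starts those dependencies before the run and stops them afterwards, so every machine (including the CI server) goes through the same steps.

```csharp
using System.Diagnostics;
using NUnit.Framework;

[SetUpFixture]
public class TestEnvironment
{
    [OneTimeSetUp]
    public void StartDependencies()
    {
        // Assumes a docker-compose.yml that defines the external services
        // (database, message broker, etc.) the tests depend on.
        RunOrFail("docker", "compose up -d");
    }

    [OneTimeTearDown]
    public void StopDependencies()
    {
        RunOrFail("docker", "compose down");
    }

    private static void RunOrFail(string fileName, string arguments)
    {
        using var process = Process.Start(new ProcessStartInfo
        {
            FileName = fileName,
            Arguments = arguments,
            RedirectStandardError = true,
            UseShellExecute = false,
        }) ?? throw new System.InvalidOperationException($"Could not start '{fileName}'.");

        // Read stderr before waiting so the output buffer cannot fill up and block.
        string stderr = process.StandardError.ReadToEnd();
        process.WaitForExit();

        if (process.ExitCode != 0)
        {
            throw new System.InvalidOperationException(
                $"'{fileName} {arguments}' failed: {stderr}");
        }
    }
}
```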

4. Leaving Artifacts lying around.

Have you heard of the Boy Scout rule, "Always leave the campground cleaner than you found it."? This is kind of like that, except it's an application instead of a campground! So instead: "Leave the application cleaner than when you started testing it."

Let’s look at Facebook for this pitfall.
(In this example, I am classifying their Groups, Pages, Posts, Users, etc as physical artifacts.)

Say you are writing tests to verify that you can create a Page artifact, the tests run against a dev server, and each Page creation creates a new database under the covers (I know it probably doesn't, but let's pretend!). You could take down the dev server just by filling up the disk if you never delete those Pages after they've been created.

Most frameworks have Teardown events that you can hook into. I suggest that in your test fixtures, you write a service that tracks all of the artifacts you create during the test execution. That service can then handle deleting those artifacts in the Teardown.
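Here is a minimal sketch of that idea with NUnit. The PagesClient is a hypothetical stand-in for whatever API you are testing against; the tracker simply records a delete action for every artifact a test creates and replays them in TearDown.

```csharp
using System;
using System.Collections.Generic;
using NUnit.Framework;

// Records a cleanup action per created artifact and runs them in reverse order.
public class ArtifactTracker
{
    private readonly Stack<Action> _cleanupActions = new();

    public T Track<T>(T artifact, Action<T> delete)
    {
        _cleanupActions.Push(() => delete(artifact));
        return artifact;
    }

    public void CleanUp()
    {
        while (_cleanupActions.Count > 0)
        {
            _cleanupActions.Pop().Invoke();
        }
    }
}

[TestFixture]
public class PageTests
{
    private ArtifactTracker _tracker;
    private PagesClient _pages; // hypothetical client for the system under test

    [SetUp]
    public void SetUp()
    {
        _tracker = new ArtifactTracker();
        _pages = new PagesClient();
    }

    [TearDown]
    public void TearDown() => _tracker.CleanUp();

    [Test]
    public void CanCreateAPage()
    {
        // Track the Page as soon as it exists so TearDown always removes it,
        // even if a later assertion fails.
        var page = _tracker.Track(_pages.Create("My Test Page"), p => _pages.Delete(p));

        Assert.That(page, Is.Not.Null);
    }
}
```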

5. Using physical artifacts inside of a TestCaseSource

If you read my other post about the TestCaseSource attribute, you know that I use it as often as I can. This has gotten me in trouble when using real data though.

If the TestCaseData that you are building up performs these Read and Write operations on your artifacts, you are going to affect every suite in your test project. This is due to how testing frameworks execute tests: they build up a map of all the tests in the project before any of them are executed, even tests that haven't been selected to run.

Let’s stick to the Facebook example. Pretend that you have a fixture that is testing Pages. One of the inputs you are testing is the different types of Posts that you can make on a Page. If you are building up those Posts inside of the TestCaseSource by calling out to the Posts service, the testing framework is going to actually call out to that service and create them while it's building up that map.

So if it takes 1 minute to create those posts, you are adding 1 minute of overhead for every other fixture in the project. Even if you have a test in a separate fixture that has no preconditions and runs in under a second, it’s still going to take that test over a minute to execute.
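Here is a sketch of both the pitfall and one way around it, using a hypothetical PostsClient: the "slow" source creates real Posts while the framework is still discovering tests, whereas the "fast" source only yields cheap descriptions and defers the service call to the test body, where it runs at execution time.

```csharp
using System.Collections.Generic;
using NUnit.Framework;

[TestFixture]
public class PagePostTests
{
    // Pitfall: this method runs during test *discovery*, so every call to the
    // (hypothetical) PostsClient happens before a single test executes and
    // slows down loading for every fixture in the project.
    public static IEnumerable<TestCaseData> SlowSource()
    {
        var posts = new PostsClient();
        yield return new TestCaseData(posts.Create("text"));
        yield return new TestCaseData(posts.Create("photo"));
    }

    // Better: the source yields only lightweight descriptions.
    public static IEnumerable<TestCaseData> FastSource()
    {
        yield return new TestCaseData("text");
        yield return new TestCaseData("photo");
    }

    [TestCaseSource(nameof(FastSource))]
    public void CanPostToAPage(string postType)
    {
        var posts = new PostsClient();      // hypothetical service client
        var post = posts.Create(postType);  // artifact created during the test itself

        Assert.That(post, Is.Not.Null);
    }
}
```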

This issue plagued me for the longest time. When I finally found out why all my tests were loading so slowly, I thought it was a really interesting problem. I will likely create another post devoted entirely to this pitfall because I want to go more in-depth on the technical aspects of how tests are loaded into memory and executed.


At the end of the day, Integration tests should be there to help you. For a long time, my Integration tests were hurting more than helping. Hopefully now when you are writing your Integration tests, you can avoid these pitfalls.

This was just my list of top 5 pitfalls that I have run into. I am curious to hear what pitfalls you have run into when writing Integration tests. Leave a note so we can discuss!

If you want to hear more about this topic, take a listen to the TestTalks podcast episode 205 where I talk to Joe Colantonio about this topic.

https://joecolantonio.com/testtalks/205-net-integration-testing-james-penning/
