
Testing Microservices as The Average Joe

by Shachar Landshut

The software world has shifted to microservices architecture, the antithesis of the monolith. Instead of bundling all components of a product into one deployable artifact, the product is broken into small, decoupled pieces.


Why microservices?

Microservices architecture was introduced as an answer to the challenges imposed by monolithic architecture: it helps deliver better products that are resilient to failure and can scale efficiently. But with it came other challenges: managing many teams, each responsible for a single service; coordinating the deployment of services; and, perhaps the most important and challenging of all, testing the architecture.


New and better but still challenging

So what are the challenges in testing microservices? The first thing to keep in mind is that a microservices architecture is fragmented: it is made up of many different pieces, and although the services are decoupled, they still depend on one another. Another factor to consider is how services communicate. They talk over the network via API calls, requesting and receiving data, so testing has to cover scenarios such as network latency, malformed payloads, and HTTP errors.

The challenge of properly testing microservices

To better understand how to deal with the challenges in testing microservices, we need to break down and analyze testing methodologies and practices.

The building blocks of testing

Unit testing

Unit testing goes down to the individual class or method in our code. By running unit tests, we make sure that each part of a service does what it is supposed to do. That's the first level and the first thing we need to test.
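As a minimal sketch, here is what that first level can look like in Python with pytest. The function under test, calculate_discount, and its behavior are made up for illustration; the point is that a unit test exercises one small piece of a service in isolation.

```python
# test_pricing.py -- unit tests for a single function inside a hypothetical orders service.
# The names (calculate_discount, the 10% member discount) are illustrative, not from a real codebase.
import pytest

def calculate_discount(total: float, is_member: bool) -> float:
    """Apply a 10% discount for members; reject negative totals."""
    if total < 0:
        raise ValueError("total must be non-negative")
    return total * 0.9 if is_member else total

def test_member_gets_discount():
    assert calculate_discount(100.0, is_member=True) == pytest.approx(90.0)

def test_non_member_pays_full_price():
    assert calculate_discount(100.0, is_member=False) == pytest.approx(100.0)

def test_negative_total_is_rejected():
    with pytest.raises(ValueError):
        calculate_discount(-1.0, is_member=True)
```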

Component testing

Once unit tests have verified that all parts of a service behave, we need to test the service as a whole. Here, we check that the service responds with A when we give it B, or does C when we ask it to.
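A component test drives the service through its public interface while stubbing out whatever it depends on. The sketch below assumes a small Flask service with a hypothetical /orders endpoint and an inventory dependency; the names are illustrative, not a prescribed design.

```python
# test_orders_component.py -- exercise one service end to end through its HTTP interface,
# with its outbound dependency replaced by a stub. App and route names are illustrative.
from flask import Flask, jsonify, request

def create_app(inventory_client):
    app = Flask(__name__)

    @app.post("/orders")
    def create_order():
        item = request.get_json()["item"]
        if not inventory_client.in_stock(item):
            return jsonify(error="out of stock"), 409
        return jsonify(status="created", item=item), 201

    return app

class StubInventory:
    """Stand-in for the real inventory service: only 'widget' is in stock."""
    def in_stock(self, item):
        return item == "widget"

def test_order_accepted_when_item_in_stock():
    client = create_app(StubInventory()).test_client()
    resp = client.post("/orders", json={"item": "widget"})
    assert resp.status_code == 201

def test_order_rejected_when_item_out_of_stock():
    client = create_app(StubInventory()).test_client()
    resp = client.post("/orders", json={"item": "gadget"})
    assert resp.status_code == 409
```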

Integration testing

After checking that all parts of a service and the service itself function as expected, it’s time to introduce it into the bigger picture. This is integration testing. Integration testing puts together all services, or most of them, and tests that they cooperate properly. The idea behind integration testing is to check that service A sends a request to service B, which then processes the request and passes it on to service C. Service C might then communicate back to B or A, or even continue to communicate with a third-party API. We need to outline the architecture, understand service dependencies, map request-response routes, and test for all of it.
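In practice, an integration test drives a request through the edge of the system and asserts on a result that can only appear if the downstream services cooperated. The sketch below assumes hypothetical orders and shipping services reachable at test URLs; adapt the endpoints and assertions to your own architecture.

```python
# test_checkout_integration.py -- drive a request through the public edge of the system
# and assert on an end result that only emerges if the downstream services cooperated.
# Base URLs and routes are illustrative; in practice they point at a disposable test environment.
import os
import requests

ORDERS_URL = os.environ.get("ORDERS_URL", "http://localhost:8001")
SHIPPING_URL = os.environ.get("SHIPPING_URL", "http://localhost:8003")

def test_order_flows_through_to_shipping():
    # Service A (orders) is expected to call service B (payments), which calls service C (shipping).
    created = requests.post(f"{ORDERS_URL}/orders", json={"item": "widget", "qty": 1}, timeout=5)
    assert created.status_code == 201
    order_id = created.json()["id"]

    # If the whole chain worked, the shipping service knows about the order.
    shipment = requests.get(f"{SHIPPING_URL}/shipments/{order_id}", timeout=5)
    assert shipment.status_code == 200
    assert shipment.json()["status"] == "pending"
```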

These three forms of testing are just the building blocks. We use them to build an infrastructure that supports proper testing: one that facilitates faster, more efficient test runs, provides real coverage, and tests for failure and bad outcomes as much as for the expected.

Best practices for testing microservices

Now that we know the building blocks of testing microservices, let’s look into some best practices.

Testing against the real world

Testing microservices against real-world scenarios and data is the best way to go. After all, the architecture will eventually be exposed to real users, so verifying that it performs as expected under real-world conditions is the logical thing to do. Apart from feeding tests with real-world data, we also need to keep in mind that in the real world, failure is inevitable. So we need to check how services, and the system as a whole, deal with network errors and malformed requests and responses. More on that below.

Setting up testing environments

The fast-paced nature of software development applies to testing practices as well: we want to be able to set up and tear down testing environments quickly and with minimum effort. Today, infrastructure-as-a-service (IaaS) offerings such as AWS, Azure, and Google Cloud let us provision environments quickly and easily. On top of that, there are containers. With a few lines of configuration, we can describe the services we want to spin up without thinking about the underlying hardware or operating system. This is why setting up testing environments using Docker containers is the recommended approach: we spin it up, test it, and tear it down.
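One lightweight way to wire this into a test suite is a session-scoped pytest fixture that shells out to Docker Compose. This is only a sketch: it assumes Docker is installed and that a docker-compose.test.yml describing the services exists, and a real suite would poll health endpoints instead of sleeping.

```python
# conftest.py -- spin the test environment up before the integration suite and tear it down after.
# Assumes Docker and a docker-compose.test.yml describing the services; adjust to your own setup.
import subprocess
import time

import pytest

@pytest.fixture(scope="session", autouse=True)
def test_environment():
    # Bring the whole environment up in the background.
    subprocess.run(["docker", "compose", "-f", "docker-compose.test.yml", "up", "-d"], check=True)
    time.sleep(5)  # crude wait; polling the services' health endpoints is better in real suites
    yield
    # Tear everything down, including volumes, so each run starts clean.
    subprocess.run(["docker", "compose", "-f", "docker-compose.test.yml", "down", "-v"], check=True)
```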

Test coverage and testing for failure

In the section about testing against the real world, we mentioned testing for failure. By testing for failure, we anticipate the failure of services, service components, and network infrastructure. One of the reasons why microservices are preferable to monoliths is that the failure of one or more services doesn’t shut down the whole system. It might degrade its performance, but it won’t bring it to a halt. Moreover, with the use of containers and IaaS, we can quickly replace services that have failed. But we still need to test for it.

Test coverage means that we don’t just test for the good things and the expected outcome; we also test for the unexpected and for misbehaving services. When formulating test coverage, we want to simulate network latency and errors and see how services handle them. This might reveal that some services don’t tolerate network failures and therefore need to be refactored. Since microservices rely on one another to deliver well-formed data, we need to see what happens when service A sends malformed data to service B, and how service B handles it. Lastly, we need to look into integrations with third-party services. We can fix issues in our own services, but third-party services are out of our control, which merits even more testing for failure: since we can’t fix them, we need to make sure that our system remains resilient when facing failures that originate outside of it.
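As a concrete sketch, the tests below exercise a hypothetical client wrapper around a downstream inventory call and check that connection errors, HTTP 500s, and malformed payloads degrade gracefully rather than crash the caller. The function and the fake session/response classes are illustrative stand-ins; a mocking library such as responses or requests-mock could play the same role.

```python
# test_failure_modes.py -- check how a service-side client copes with a flaky or misbehaving
# dependency: connection errors, HTTP 500s, and malformed payloads. Names are illustrative.
import requests

def fetch_stock_count(session, item):
    """Call the (hypothetical) inventory service and fall back to a safe default on failure."""
    try:
        resp = session.get(f"http://inventory/stock/{item}", timeout=2)
        resp.raise_for_status()
        return int(resp.json()["count"])
    except (requests.RequestException, KeyError, ValueError):
        return 0  # degrade gracefully instead of crashing the calling service

class FakeResponse:
    def __init__(self, status, payload):
        self.status_code, self._payload = status, payload
    def raise_for_status(self):
        if self.status_code >= 400:
            raise requests.HTTPError(f"status {self.status_code}")
    def json(self):
        return self._payload

class FakeSession:
    """Stands in for requests.Session: returns a canned response or raises a canned error."""
    def __init__(self, response=None, exc=None):
        self._response, self._exc = response, exc
    def get(self, url, timeout=None):
        if self._exc:
            raise self._exc
        return self._response

def test_network_error_degrades_to_default():
    assert fetch_stock_count(FakeSession(exc=requests.ConnectionError()), "widget") == 0

def test_http_500_degrades_to_default():
    assert fetch_stock_count(FakeSession(FakeResponse(500, {})), "widget") == 0

def test_malformed_payload_degrades_to_default():
    assert fetch_stock_count(FakeSession(FakeResponse(200, {"quantity": "three"})), "widget") == 0

def test_well_formed_payload_is_parsed():
    assert fetch_stock_count(FakeSession(FakeResponse(200, {"count": 3})), "widget") == 3
```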

Extreme chaos testing

So far, we have discussed testing for failure with known failure modes in mind: we acknowledge that failure is inevitable, so we incorporate it into our testing. But there’s another form of testing that sits outside the develop-test-deploy chain, called chaos testing or chaos experiments. In a chaos experiment, we shut down or degrade services at random to check that the system can withstand unplanned failures and recover quickly and efficiently. It’s essentially testing for high availability and full redundancy, making sure that a peak in traffic, computational overload, or the occasional failure of a service never brings the system down or severely degrades its performance.

Caution must be exercised with chaos experiments. Advocates of the practice argue that they should run in production, or close to it, so the experiments should be introduced gradually and only after mastering the conventional forms of testing discussed here.
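For a sense of how small a first chaos experiment can be, here is a sketch that stops one randomly chosen container for a minute and then restarts it, leaving it to your monitoring to show whether the rest of the system coped. The container names are illustrative, and this should only ever be pointed at an environment you are allowed to break.

```python
# chaos_kill_random.py -- a deliberately tiny chaos experiment: stop one randomly chosen
# container for a while, then bring it back. Container names are illustrative assumptions.
import random
import subprocess
import time

CANDIDATES = ["orders", "payments", "shipping"]  # illustrative service/container names

def kill_one(duration_seconds: int = 60) -> str:
    victim = random.choice(CANDIDATES)
    subprocess.run(["docker", "stop", victim], check=True)
    time.sleep(duration_seconds)  # let the system run degraded for a while
    subprocess.run(["docker", "start", victim], check=True)
    return victim

if __name__ == "__main__":
    print(f"Stopped and restarted: {kill_one()}")
```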

[Image: The Tower of Babel by Pieter Bruegel the Elder (1563)]

Are we ready to test?

Yes and no. We now have a clear understanding of what it means to test microservices and how to go about it, but practice beats theory. Use the points discussed here to guide you as you build your own testing practices and methodologies. To wrap it up nicely, know that practicing testing makes for better developers: testing reveals our failings as developers and forces us to improve. Hopefully, this has convinced you why testing is so important, and why it is so important to do it right.