Problems I’ve Learned After Using Microservices Architecture for 10 Years

Don’t use a microservices architecture to create a new project.

Don’t add complexity until needed.

I’ve worked on three projects with a microservices architecture that shouldn’t have used it.

Do the benefits of this architecture outweigh its complexity for the project?

In this post, I will describe some of the drawbacks I’ve encountered while working with a microservices architecture.

Distributed Transactions

When storing an entity in the database, sometimes that entity creates child entities. Those children may live in other microservices.

I create a row in the Product table, which in turn creates a row in the Availability table.

Which one must be created first?

Let’s say it’s the parent, the row in the Product table. What happens if I’m then unable to add the row in the Availability table?

There is no magic solution for that.

In some cases, I first create the Product row in the product microservice, then request the availability microservice to create the corresponding Availability row.

If the second request returns an error, the Product row is removed and an error is returned to the user.

This solution works well when the response time is very low, that is, when both microservices can create their entities and return a status very quickly.
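
As a rough sketch, the flow looks like this. The productRepository and availabilityClient names are hypothetical collaborators used only for illustration, not code from my projects.

// Hedged sketch of the create-then-compensate flow.
// productRepository and availabilityClient are hypothetical collaborators.
public Product createProduct(Product product) {
    Product saved = productRepository.save(product);           // 1. create the parent row locally
    try {
        availabilityClient.createAvailability(saved.getId());  // 2. synchronous call to the availability microservice
    } catch (Exception e) {
        productRepository.delete(saved);                        // 3. compensate: remove the parent row
        throw new IllegalStateException("Could not create the Availability row", e);
    }
    return saved;
}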

But if there is more processing involved, or I want an asynchronous workflow, I have to use an alternative.

Another solution I’ve used is the following.

The most important entity in the previous case is the Product item. So, I will first create the Product item and return OK to the user.

Then, in an asynchronous workflow, I will create the Availability rows.

What if this goes wrong?

Try again after 5 minutes, then after 30 minutes, then after 2 hours. This approach works fine when the availability microservice depends on another external service: since that service may be unavailable for external reasons, a retry may solve the problem.
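
A rough sketch of that escalating retry could be a scheduled job like the one below. It assumes Spring’s @Scheduled and a hypothetical PendingAvailability entity (with its repository, the availabilityClient and an alertSupport method), none of which come from the original projects.

// Hedged sketch: retry the Availability creation with growing delays.
// When the first creation fails, a row is stored with attempt = 0 and nextRetryAt = now + DELAYS[0].
private static final Duration[] DELAYS = {
        Duration.ofMinutes(5), Duration.ofMinutes(30), Duration.ofHours(2)};

@Scheduled(fixedDelay = 60_000) // poll every minute for rows whose retry time is due
public void retryPendingAvailabilities() {
    for (PendingAvailability pending : pendingRepository.findDue(Instant.now())) {
        try {
            availabilityClient.createAvailability(pending.getProductId());
            pendingRepository.delete(pending);                  // succeeded, stop retrying
        } catch (Exception e) {
            int nextAttempt = pending.getAttempt() + 1;
            if (nextAttempt >= DELAYS.length) {
                alertSupport(pending);                          // give up and let a human investigate
            } else {
                pending.setAttempt(nextAttempt);
                pending.setNextRetryAt(Instant.now().plus(DELAYS[nextAttempt]));
                pendingRepository.save(pending);
            }
        }
    }
}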

Or make the application tolerant of missing Availability rows. This means creating the missing rows at read time, or throwing an error that alerts the support team.
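
A sketch of that read-time fallback could look like this; the availabilityRepository, the log field and the Availability.defaultFor factory method are hypothetical names used only for illustration.

// Hedged sketch: tolerate a missing Availability row by creating a default one on read.
// log is an assumed SLF4J logger field.
public Availability getAvailability(long productId) {
    return availabilityRepository.findByProductId(productId)
            .orElseGet(() -> {
                // Alert support through the logs and repair the data on the fly
                log.warn("Missing Availability for product {}; creating it at read time", productId);
                return availabilityRepository.save(Availability.defaultFor(productId));
            });
}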

Distributed Logs

The microservices architecture is mostly used in big applications, with a high load of requests.

What happens when I need to trace a single request from microservice to microservice to understand its behavior?

I can’t find my request by timestamp, as there may be hundreds of requests at the same time. And how do I link the logs of the second microservice to my initial request?

There are some solutions for that. I’ve written another post about using Sleuth and Zipkin.

Another alternative I use a lot is the following.

I create a UUID to prefix all the logs. The same UUID will be shared across all the microservices for a single request.

Let’s see how to implement it.

I create an HTTP filter in each microservice. For each request received, I add an X-Trace-Id header with a new UUID, but only if the header doesn’t already have a value.

import jakarta.servlet.*; // use javax.servlet.* on Spring Boot 2 and older stacks
import jakarta.servlet.http.HttpServletRequest;
import jakarta.servlet.http.HttpServletResponse;
import java.io.IOException;
import java.util.UUID;
import org.slf4j.MDC;

public class TracingFilter implements Filter {
    // When true, the trace id is also returned to the caller in the response
    private final boolean responseHeader = true;

    @Override
    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
            throws IOException, ServletException {
        // Reuse the trace id if an upstream service already set it
        String tracingId = ((HttpServletRequest) request).getHeader("X-Trace-Id");
        if (tracingId == null) {
            tracingId = generateId();
        }
        if (responseHeader) {
            ((HttpServletResponse) response).addHeader("X-Trace-Id", tracingId);
        }
        // Make the trace id available to every log line written during this request
        MDC.put("X-Trace-Id", tracingId);
        chain.doFilter(request, response);
    }

    private String generateId() {
        return UUID.randomUUID().toString();
    }
}

Then, when calling the downstream microservices with OkHttpClient, I add the UUID to the same header through an interceptor.

import java.io.IOException;
import okhttp3.Interceptor;
import okhttp3.Request;
import okhttp3.Response;
import org.slf4j.MDC;

public class TracingInterceptor implements Interceptor {
    @Override
    public Response intercept(Chain chain) throws IOException {
        Request request = chain.request();
        // Propagate the trace id stored in the MDC by the HTTP filter, if present
        String tracingId = MDC.get("X-Trace-Id");
        if (tracingId != null) {
            request = request.newBuilder()
                    .addHeader("X-Trace-Id", tracingId)
                    .build();
        }
        return chain.proceed(request);
    }
}

Finally, I can use the previous Interceptor in the OkHttpClient configuration.

@Bean
public OkHttpClient okHttpClient() {
    // Every outgoing request through this client carries the X-Trace-Id header.
    // TracingInterceptor could also be injected as a bean instead of instantiated here.
    return new OkHttpClient.Builder()
            .addInterceptor(new TracingInterceptor())
            .build();
}

The OkHttpClient can then be used with Retrofit or OpenFeign when calling other microservices.
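
For example, the traced client can be plugged into Retrofit like this; the base URL and the AvailabilityApi interface are placeholders, not names from a real project.

// Hedged example: wiring the traced OkHttpClient into Retrofit.
Retrofit retrofit = new Retrofit.Builder()
        .baseUrl("http://availability-service/")           // hypothetical service URL
        .client(okHttpClient)                               // the client with the TracingInterceptor
        .addConverterFactory(JacksonConverterFactory.create())
        .build();
AvailabilityApi availabilityApi = retrofit.create(AvailabilityApi.class);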

And to print the UUID in the logs (for example with %X{X-Trace-Id} in the Logback pattern), you can check the details in the following article.

But wait, isn’t adding the interceptor configuration and the HTTP filter to each microservice repeating code? Let’s address that point.

Duplicated Code, DTOs or Entities

When implementing several microservices, I start having duplicated code everywhere. The same Availability DTOs are present in the availability microservice, in the product microservice, in the prices microservice and more.

The same occurs with the configuration of Logback, OkHttpClient, HTTP filters…

Sometimes I even need two microservices to read from the same table, so I have to duplicate the database connection and the entity configuration.

When this occurs, I group all the common code into libraries. I don’t create a single library with everything. I group the DTOs together, the configurations together…

Then I import each needed library in the pom.xml file of the affected microservices.

Still, when I upgrade one of those libraries, I need to update the version number in each affected microservice and deploy it again.

Dependent Microservices

The microservices architecture is so cool because, if a single microservice is down, my application can continue accepting requests.

This is the theory.

But you know that if the configuration service is down, no other service can run.

Something similar happens when the API Gateway is down.

What about the product service? If it’s down, I can’t get the prices or the availabilities.

This leaves few microservices that are truly independent. But how do we deal with the dependent ones?

I’ve used two solutions. Neither is perfect, but both are acceptable and realistic.

The first one is to have a good maintenance page that can be deployed quickly when needed.

The second is to prepare your frontend for the backend being unavailable, showing empty states or maintenance pages.

Both solutions require good metrics to detect an unstable microservice.

If you want to learn more about good quality code, make sure to follow me on YouTube.

My new ebook, How to Master Git With 20 Commands, is available now.

One response to “Problems I’ve Learned After Using Microservices Architecture for 10 Years”

  1. Sleuth is no longer compatible with Spring Boot 3+. I had to use Micrometer, which works with Zipkin.
