Most developers are wrong about microservices. We treat them like a trophy—a badge of honor that proves we are a “real” tech company. In reality, microservices are a tax. You are trading code complexity for operational complexity, and often, it is a bad deal.
In a monolith, your enemy is messy code. You can fix messy code with a refactor. In microservices, your enemy is the network. The network is a terrible antagonist. It is slow, it is unreliable, and it breaks in ways you cannot easily test. We were told microservices “decouple” teams, but if Service A fails because Service B is down, they are not decoupled. They are just far apart—tied together by a wire and a significant amount of latency.
Transactional Integrity and the False Promise of Distributed Systems
Let’s look at a standard task: processing an order. In a monolith, you use a database transaction. It is simple, it follows ACID rules, and it is predictable. If the update fails, everything rolls back. You do not get partial data or “ghost” orders.
def place_order(order_data):
    try:
        with db.transaction():
            order = orders.create(order_data)
            inventory.decrement(order_data['items'])
            user_balance.deduct(order_data['total'])
            return order
    except Exception as e:
        log.error(f"Order failed: {e}")
        raise
This code is easy to read and easy to test. Now look at the “modern” microservices approach. Every service has its own database, so you cannot use a simple transaction. You have to implement the Saga Pattern, managing state manually across multiple network calls.
import httpx

def place_order(order_data):
    order = order_service.create(order_data)

    # Step 1: reserve inventory; on failure, cancel the order we just created
    try:
        inventory_response = httpx.post(f"{INVENTORY_URL}/reserve", json=order_data['items'])
        inventory_response.raise_for_status()
    except httpx.HTTPError:
        order_service.update_status(order['id'], 'CANCELLED')
        raise Exception("Inventory failed")

    # Step 2: take payment; on failure, undo step 1 and cancel the order
    try:
        payment_response = httpx.post(f"{PAYMENT_URL}/process", json=order_data['payment_info'])
        payment_response.raise_for_status()
    except httpx.HTTPError:
        # Compensating action: this call can itself fail, stranding the reservation
        httpx.post(f"{INVENTORY_URL}/release", json=order_data['items'])
        order_service.update_status(order['id'], 'CANCELLED')
        raise Exception("Payment failed")

    order_service.update_status(order['id'], 'COMPLETED')
    return order
We have replaced five lines of logic with a manual state machine. You now have to write “undo” logic for every single step. What happens if the release call fails during a rollback? You get data corruption that you will likely end up fixing with manual SQL scripts on Monday morning. This isn’t resilience; it is just moving the problem from the compiler to the network.
“Eventual Consistency” is often just a fancy way of saying “the data will be wrong for a while, and we hope the system eventually heals itself.”
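If you are stuck with a saga, the least-bad answer to "what if the release call fails during rollback?" is usually to retry the compensation with backoff and, if it still fails, park the saga for manual review instead of losing it silently. Here is a minimal sketch of that idea; the `compensate_with_retry` helper and its parameters are hypothetical, not part of the code above:

```python
import time

def compensate_with_retry(action, max_attempts=3, base_delay=0.5):
    """Run a compensation step, retrying on failure with exponential backoff.

    Returns None on success. If every attempt fails, returns the last error
    so the caller can park the saga in a 'needs manual review' queue rather
    than leaving state silently corrupted.
    """
    last_error = None
    for attempt in range(max_attempts):
        try:
            action()
            return None  # compensation succeeded
        except Exception as e:
            last_error = e
            time.sleep(base_delay * (2 ** attempt))  # back off before retrying
    return last_error
```

Note what this does not do: it does not make the saga correct, it only makes the failure visible. That is the tax in miniature.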
Identifying Resume-Driven Development and Cargo Culting
Why do we do this? Rarely is it about technical necessity. It is often Resume-Driven Development (RDD).
You don’t get a senior role at a Big Tech firm by saying you maintained a clean, boring monolith. You get it by saying you migrated to a multi-region, event-driven architecture using Kafka, Kubernetes, and Istio. These keywords make you sound expensive, so we build complex systems to justify our salaries.
We also fall for the Sunk Cost Fallacy. Your team spent six months setting up CI/CD for 20 services. It is slow and brittle, but you can’t admit it was a mistake. Instead, you add more “observability tools” to hide the mess, spending more time fixing pipelines than shipping features.
Finally, there is the “Cargo Cult” behavior. You read how Netflix handles scale and assume you need the same setup. Netflix has thousands of engineers; you probably have ten. They have problems you will never encounter. Copying their architecture is like buying a semi-truck to go to the grocery store.
Common Rebuttals: Is Your Monolith Really the Problem?
I know the common complaints.
“But Sergio, our monolith is a nightmare! The build takes 45 minutes!”
If you cannot write clean code in one repository, you cannot write it in twenty. If your monolith is a “Big Ball of Mud,” microservices will simply become a Distributed Big Ball of Mud. Your problems are likely discipline and boundaries, not architecture.
“But we need to scale the Image Processing service independently!”
Are you actually scaling? Or are you just running three pods instead of two? Most scaling needs are solved by a better database or more RAM. Vertical scaling is significantly cheaper than developer time. If one part of your app is a genuine bottleneck, extract only that part. You do not need to explode the entire application into a hundred pieces.
Technical Roadmap: How to Simplify Your Architecture
If you find yourself drowning in microservice overhead, here is a direct plan for Monday morning:
- Audit Your Bounded Contexts: If you have to change three services to ship one feature, your boundaries are wrong. This is tight coupling in disguise. Merge those services back together.
- The “Modulith” First Rule: Use strict internal modules within a single repository. Use tools to prevent Module A from calling Module B’s private functions. If you can’t keep modules clean in one repo, you aren’t ready for microservices.
- The Rule of Three: Do not extract a service unless you have three solid reasons. “It sounds cool” or “I want to use Go instead of Python” are not valid reasons. Extract only for deployment speed, extreme resource needs, or differing security requirements.
- Fix Your Build: Often the monolith isn’t the problem—the poor build process is. Invest in parallelizing tests and cleaning up dependencies before blaming the architecture.
- Measure the Plumbing Overhead: If your team spends more than 30% of their time on IAM roles, service discovery, and deployment YAMLs, you are over-engineered. Stop and simplify.
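The "Modulith" rule above can be enforced mechanically. Dedicated tools like import-linter do this properly, but the core idea fits in a few lines of AST walking you can run in CI; the `billing._internal` module name below is a hypothetical example, not from any code in this post:

```python
import ast

def find_forbidden_imports(source, forbidden_prefix):
    """Return the names of modules imported by `source` that fall under
    `forbidden_prefix` — e.g. another module's private internals."""
    tree = ast.parse(source)
    hits = []
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            for alias in node.names:
                if alias.name.startswith(forbidden_prefix):
                    hits.append(alias.name)
        elif isinstance(node, ast.ImportFrom) and node.module:
            if node.module.startswith(forbidden_prefix):
                hits.append(node.module)
    return hits
```

Run it over every file in the orders module with `forbidden_prefix="billing._internal"` and fail the build on any hit. If your team cannot keep that check green inside one repository, a network boundary will not save you.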