Architecture · March 1, 2025 · 3 min

Why the big bang rewrite almost always fails


Daniel Peralta

Founder, Madariaga SAS

There's a moment in the life of every aging system when someone in the company says, with conviction and accumulated exhaustion: "We need to rewrite everything."

It's understandable. Legacy code hurts. Deploys take hours. Nobody understands why that function does what it does. Tests don't exist or lie. Every new feature is surgery under local anesthesia.

The problem is that the big bang rewrite, the idea of throwing everything away and starting from scratch, has an extremely high failure rate. And not for lack of technical talent.

Why it fails

1. The old system knows things nobody documented

Code that looks senseless usually isn't. There's business logic buried in strange conditionals, edge cases nobody remembers but that some customer somewhere in the world triggers every month, and integrations with external systems whose behaviors were never documented.

When you rewrite from scratch, you lose that knowledge. You don't recover it from the code, because it was unreadable. You don't recover it from documentation, because it doesn't exist. You discover it when the new system fails in production in mysterious ways.

2. The target moves while you build

A large rewrite takes months. During that time, the business doesn't stop. Features get added to the old system. Rules change. New customers come with new requirements.

When the rewrite finally ships, the new system is already behind the old one, which kept growing the whole time without anyone paying attention to it.

3. Technical debt regenerates

Without the processes, culture and practices that prevent technical debt, the new system will end up in the same state as the old one. Just faster, because now everyone is rushing to recover lost time.

What to do instead

The alternative is incremental migration. Instead of rewriting everything, you identify the most critical or most painful parts and modernize them one at a time, without taking the system down.

The most effective strategies:

Strangler Fig Pattern: build the new system around the old one. New functionality goes to the new system; existing features are migrated one by one. Over time the old system has nothing left to do, and you shut it down.
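In practice, the strangler fig usually lives in a routing facade in front of both systems. A minimal sketch in Python, with invented route names and handlers purely for illustration:

```python
# Illustrative only: handlers and paths are made up for this sketch.
def legacy_handler(path: str) -> str:
    return f"legacy:{path}"

def new_handler(path: str) -> str:
    return f"new:{path}"

# Paths already migrated to the new system; this set grows over time,
# until nothing routes to the legacy handler anymore.
MIGRATED = {"/invoices", "/customers"}

def route(path: str) -> str:
    """Facade in front of both systems: callers never know which one answers."""
    if path in MIGRATED:
        return new_handler(path)
    return legacy_handler(path)
```

The key property is that callers only ever talk to the facade, so each migration is invisible from the outside: you move one path into `MIGRATED` and nothing else changes.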

Feature flags: activate the new code for only a percentage of traffic. Validate in real production before the full cutover.
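A common way to implement the percentage rollout is to bucket each user deterministically, so the same user always sees the same version between requests. A sketch, with hypothetical function names:

```python
import hashlib

def in_rollout(user_id: str, percent: int) -> bool:
    """Deterministically place a user in a bucket from 0 to 99.

    Hashing (rather than random.random) keeps the decision stable:
    the same user_id always lands in the same bucket.
    """
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < percent

def handle(user_id: str) -> str:
    # Send 10% of users through the new code path; everyone else stays on the old one.
    if in_rollout(user_id, 10):
        return "new"
    return "old"
```

If the new path misbehaves, you dial `percent` back to 0 and every user is instantly on the old code again, with no deploy.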

Anti-corruption layer: a layer that translates between the old and new system's models, allowing them to coexist without one contaminating the other.
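The anti-corruption layer is typically a pair of translation functions at the boundary. A minimal sketch, where the legacy record shape and field names are invented for the example:

```python
from dataclasses import dataclass

@dataclass
class Customer:
    """The new system's clean domain model."""
    name: str
    active: bool

def to_new(record: dict) -> Customer:
    """ACL inbound: translate the legacy representation (cryptic keys,
    status codes) into the new domain model."""
    return Customer(name=record["cust_nm"], active=record["st"] == "A")

def to_legacy(customer: Customer) -> dict:
    """ACL outbound: translate back, so the new system can write to the
    old one without the old model leaking into new code."""
    return {"cust_nm": customer.name, "st": "A" if customer.active else "I"}
```

All knowledge of the legacy format lives in these two functions; the rest of the new system only ever sees `Customer`.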

When a rewrite does make sense

There are cases where the rewrite is legitimate:

  • The system has so little traffic you can do the cutover in a weekend
  • The technology is so obsolete it can't run on modern infrastructure
  • The rewrite scope is very limited (one module, not the whole system)

But these are exceptions, not the rule.


The next time someone proposes the big bang rewrite, the right question isn't "What technology do we rewrite it with?" It's "What specific part is hurting us most, and how do we modernize it without throwing everything away?"

Less epic. More effective.
