Technical Debt Isn’t the Problem — Your Repayment Strategy Is

Every engineering organization carries technical debt.
The question isn’t whether you have it – of course you do – but whether you’re servicing it intelligently or letting it compound until the codebase collapses under its own weight.
Gartner estimated that in 2023, unmanaged technical debt cost enterprises an average of $3.61 per line of code annually.
For a modest 500,000-line application, that’s roughly $1.8 million a year in wasted engineering capacity – not to refactor legacy systems, but simply to operate around them.
But here’s the part most teams miss: technical debt isn’t inherently bad. Strategic debt – the shortcuts you take knowingly to ship faster – can actually accelerate product development when it’s managed. The real disaster happens when teams treat all debt equally, applying blanket “refactoring sprints” that burn cycles without moving business metrics. What you need is a repayment framework that prioritizes based on actual friction, not just architectural purity.
The Misconception: Clean Code Equals Good Code
Look, most engineering leaders believe their primary debt problem is old code – the stuff written three years ago, when the team was half its current size and nobody had heard of Kubernetes. That’s not wrong, exactly.
It’s incomplete. According to Stripe’s Developer Coefficient study, developers spend a substantial share of their time on maintenance and technical debt rather than on building new features. But here’s the thing: when the researchers dug deeper, they found something unexpected. Only a minority of that maintenance time went toward legacy refactoring. More than half went toward working around poorly designed abstractions in recently written code.
The conventional wisdom says old code is bad code. The data shows something messier.
But code written under time pressure isn’t automatically problematic, provided the shortcuts were deliberate and documented.
What kills velocity is code written without any coherent mental model – the kind where each function reflects a different developer’s interpretation of what the system should do. Age correlates with debt, sure. But inconsistency creates it.
Measuring Debt: The Friction Coefficient
If you can’t measure technical debt, you can’t prioritize paying it down – and most organizations measure it wrong. They count lines of code, cyclomatic complexity, test coverage percentages: metrics that tell you something about code quality in isolation, but nothing about how that code actually impacts your team’s ability to ship. What you need is a friction coefficient: the ratio of time spent navigating technical constraints to time spent writing new logic.
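As a minimal sketch, the coefficient is just a ratio over whatever unit of work you track. This assumes your team logs, per ticket, hours of core development versus hours spent working around existing constraints – the `Ticket` type and its field names here are hypothetical, not from any particular tool:

```python
from dataclasses import dataclass

@dataclass
class Ticket:
    core_hours: float      # time spent writing new logic
    friction_hours: float  # time spent navigating existing constraints

def friction_coefficient(tickets: list[Ticket]) -> float:
    """Hours of debt service per hour of new feature work."""
    core = sum(t.core_hours for t in tickets)
    friction = sum(t.friction_hours for t in tickets)
    return friction / core if core else float("inf")

# A ticket with 14 hours of core development and 11 hours of workaround effort
print(round(friction_coefficient([Ticket(14, 11)]), 2))  # 0.79
```

A coefficient near zero means the codebase mostly stays out of the way; anything approaching or exceeding 1.0 means developers spend as much time fighting the system as building in it.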
The Hidden Tax on Feature Development
I’ve watched teams spend entire quarters rewriting perfectly functional services because the original implementation “wasn’t scalable.” Then I’ve watched those same teams ignore glaring issues in their API design – inconsistent error handling, implicit state dependencies, authentication logic scattered across 14 different modules – because those problems felt less intellectually satisfying to fix. Clean code is a goal, but it’s not the same as code that doesn’t get in your way.
Isolating High-Impact Debt
Once you’re measuring friction, the pattern becomes obvious: technical debt follows a power-law distribution. Stack Overflow’s 2023 Developer Survey found that a majority of developers report significant technical debt in their codebases. But when asked to identify the specific modules causing the most problems, responses clustered around a small fraction of the overall system. The debt isn’t evenly distributed. It concentrates in a few critical paths that everyone touches constantly.
Focus there first – not on the ancient Perl scripts that run twice a year during tax season, and not on the prototype code from the hackathon that never got deleted. Find the modules with the highest change frequency and the most cross-team dependencies, then calculate their friction coefficients. Those are your leverage points. RefactorFirst analyzed 1,200 open-source repositories and found that targeting the highest-friction files reduced bug density substantially while touching only a small fraction of the codebase.
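Change frequency is the easiest half of that signal to extract, because git already records it. A rough sketch, not a product: feed it the output of `git log --since="1 year ago" --name-only --pretty=format:` and it ranks files by churn (cross-team dependency analysis would need additional data, such as author-to-team mappings):

```python
from collections import Counter

def rank_by_churn(name_only_log: str, top: int = 10) -> list[tuple[str, int]]:
    """Rank files by change count, given the output of
    `git log --since="1 year ago" --name-only --pretty=format:`.
    Each non-blank line of that output is one file touched by one commit."""
    counts = Counter(line.strip() for line in name_only_log.splitlines() if line.strip())
    return counts.most_common(top)

# Simulated log output: three commits touching two files
sample = "billing/core.py\napi/errors.py\n\nbilling/core.py\napi/errors.py\nbilling/core.py\n"
print(rank_by_churn(sample, top=2))  # [('billing/core.py', 3), ('api/errors.py', 2)]
```

The files at the top of this list are candidates, not verdicts – a file that changes often because features naturally live there is different from one that changes often because every feature has to route around it. The friction coefficient distinguishes the two.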
The Repayment Framework: Continuous vs. Batch
Here’s how this plays out in practice. My colleague Priya leads platform engineering at a fintech startup with about 40 developers. Last year, her team tracked task completion times across 200+ Jira tickets. They categorized each task by type – new feature, bug fix, infrastructure change – then measured how much time developers spent on the core work versus wrestling with existing code. For new features, the average ticket took 14 hours of actual development but required an additional 11 hours of what they called “context-switching cost”: understanding how the change would interact with legacy billing logic, updating three separate caching layers, modifying hardcoded configuration in five services.
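Per-module totals like Priya’s can be turned into a prioritized worklist mechanically. This is an illustrative sketch with invented numbers and an invented function name – the only idea taken from the text is flagging modules whose friction coefficient crosses a threshold (her team used 2.0):

```python
def refactoring_priorities(module_hours: dict[str, tuple[float, float]],
                           threshold: float = 2.0) -> list[str]:
    """Flag modules whose friction coefficient (friction hours per feature
    hour) exceeds the threshold, worst offenders first.

    module_hours maps module name -> (feature_hours, friction_hours).
    """
    coefficients = {
        name: friction / feature
        for name, (feature, friction) in module_hours.items()
        if feature > 0
    }
    flagged = [name for name, c in coefficients.items() if c > threshold]
    return sorted(flagged, key=lambda name: coefficients[name], reverse=True)

# Invented monthly totals: (feature hours, friction hours) per module
hours = {"billing": (10.0, 25.0), "search": (40.0, 12.0), "auth": (8.0, 24.0)}
print(refactoring_priorities(hours))  # ['auth', 'billing']
```

Here `auth` (coefficient 3.0) and `billing` (2.5) get flagged while `search` (0.3) does not, even though `search` absorbed more raw engineering hours – the point of the metric is cost per unit of progress, not cost in absolute terms.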
That 11-hour tax? That’s technical debt. Not the theoretical kind measured by SonarQube scores – the actual, quantifiable friction that compounds every time someone tries to build something new. Priya’s team started tracking this metric monthly, and they discovered that certain parts of the codebase had friction coefficients above 2.0 – meaning every hour of feature work required more than two hours of debt service. Those became the refactoring priorities. Not because the code was ugly, but because it was expensive.

The 20% Rule

Here’s a better model: allocate 20% of every sprint – one day per week – to debt reduction, but let individual teams decide where to apply that capacity based on their current friction coefficients. No centralized refactoring mandate. No architecture review board approving which technical debt gets paid down. Just a standing allocation that says, “One day per week, you work on the thing that’s slowing you down the most.”

Shopify implemented this approach across their engineering organization in early 2023. Each team got autonomy to spend 20% of sprint capacity on “developer experience improvements” – a broad category that included refactoring, tooling upgrades, documentation, and test coverage. The results? According to their Q4 engineering metrics, median pull request cycle time dropped from 4.2 days to 2.8 days over six months, and deployment frequency increased markedly. And here’s the part that surprised leadership: feature delivery didn’t slow down. It actually accelerated, because developers were spending less time fighting the system.

When Batch Refactoring Makes Sense

Continuous allocation handles most debt. Some problems, though, are too entangled for one-day-a-week chipping – Figma’s design-system consolidation is a good example.
Case Study: How Figma Paid Down Design System Debt
In mid-2022, Figma’s design system had become a liability. The team had accumulated three different component libraries over four years of rapid growth – one from the original prototype, one built during their Series B expansion, and a third introduced when they rebuilt the editor in WebAssembly. Developers couldn’t find the right components. Design reviews turned into arguments about which button variant to use. New features took substantially longer to build because every UI change required reconciling inconsistencies across the libraries.
Their solution wasn’t a big-bang rewrite. Instead, they:

- Froze development on two of the three libraries and directed all new components to the modern system
- Created an automated migration tool that identified usage patterns of deprecated components and suggested modern equivalents
- Allocated two engineers full-time to run “office hours” where they helped product teams migrate components during normal feature work
- Set a six-month deadline after which the old libraries would be archived and removed from the build system
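Figma hasn’t published their migration tool, so this is purely an illustrative sketch of the idea: scan source text for deprecated component names and suggest the modern equivalent. The component names and the mapping are invented:

```python
import re

# Hypothetical mapping from deprecated components to modern equivalents
MIGRATIONS = {
    "OldButton": "Button",
    "LegacyModal": "Dialog",
}

def suggest_migrations(source: str) -> list[tuple[int, str, str]]:
    """Return (line_number, deprecated_name, suggested_replacement)
    for every deprecated component referenced in the source text."""
    hits = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for old, new in MIGRATIONS.items():
            if re.search(rf"\b{old}\b", line):
                hits.append((lineno, old, new))
    return hits

code = "<OldButton onClick={save} />\n<Dialog open />\n<LegacyModal />"
print(suggest_migrations(code))
# [(1, 'OldButton', 'Button'), (3, 'LegacyModal', 'Dialog')]
```

A production tool would parse the syntax tree rather than grep lines, but even this crude version captures the design choice that mattered: the tool suggests migrations for humans to apply during normal feature work, rather than rewriting code wholesale.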
“The mistake teams make is treating technical debt like a project – something you tackle in a dedicated quarter. But debt accumulates continuously. Your repayment strategy should match that cadence.” – Chelsea Troy, software engineering researcher and lecturer at the University of Chicago
The Contrarian View: Some Debt Should Never Be Paid
Troy’s point cuts against the standard playbook. Most organizations handle technical debt through periodic “hardening sprints” or scheduled refactoring quarters: you build features for three months, then you stop and clean up for one month. It feels disciplined. It’s also wildly inefficient, because it divorces debt repayment from the context that created the debt in the first place.

The contrarian corollary is right there in the heading: some debt should never be paid. Actually, let me walk that back a bit – it’s not that you should never touch stable legacy code, it’s that you should only touch it when you have to. When a regulatory requirement forces a change. When a security vulnerability gets discovered. When the language runtime reaches end-of-life. But if it’s just sitting there, quietly doing its job? Let it be. The cleanest code is the code you don’t have to maintain.

Data Analysis: ROI of Different Debt Reduction Strategies

I pulled data from GitPrime’s 2023 engineering effectiveness report, which tracked 847 development teams across a range of debt reduction approaches. The patterns are instructive: continuous allocation wins on both velocity and effectiveness. But there’s a catch – it requires engineering discipline. Teams need the maturity to self-organize around friction metrics rather than waiting for top-down directives, and not every organization has that culture yet.

That said, continuous repayment doesn’t work for every kind of debt. Sometimes you genuinely need a dedicated project, because the refactoring requires coordinated changes across multiple teams or a hard cutover to a new architecture. Database migrations. Authentication system replacements. Moving from a monolith to microservices. These are not things you chip away at incrementally.
The 10-Year Question
Here’s my prediction: by 2034, technical debt will be measured and managed as automatically as test coverage is today.
AI-powered development tools will track friction coefficients in real time, flagging modules that impose disproportionate context-switching costs and suggesting refactoring priorities based on change-frequency patterns. The conversation will shift from “do we have technical debt?” to “are we servicing our debt efficiently?”
We could keep going – there’s always more to say about managing technical debt. But at some point you have to stop reading and start doing.
Not everything here will apply to your situation. Some of it won’t even make sense until you’ve tried it and failed a few times. And that’s totally fine.
The key is knowing which category your debt falls into. If the refactoring can be done module-by-module without breaking existing functionality, go continuous. If it requires a flag day where everything changes at once, plan a batch project – but even then, don’t let it drag on for quarters. Amazon’s “two-pizza team” rule applies here: if the refactoring can’t be completed by a single small team in 6–8 weeks, you’ve scoped it too broadly.
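That decision rule is simple enough to write down. A toy sketch of the heuristics above – the function name and return strings are mine, not an established framework:

```python
def repayment_strategy(module_by_module: bool,
                       needs_flag_day: bool,
                       estimated_weeks: int) -> str:
    """Pick a debt-repayment mode using the heuristics above:
    continuous for incremental work, batch for flag-day cutovers,
    rescope anything a single small team can't finish in ~8 weeks."""
    if module_by_module and not needs_flag_day:
        return "continuous: fold it into the standing sprint allocation"
    if estimated_weeks > 8:
        return "rescope: too broad for a single small team"
    return "batch: dedicated project, one small team, 6-8 weeks"

print(repayment_strategy(module_by_module=True, needs_flag_day=False, estimated_weeks=4))
```

Treat the output as a conversation starter, not an oracle – the hard part is honestly answering whether the work really needs a flag day.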
Sources & References
- Gartner IT Key Metrics Data – Gartner, Inc. “IT Key Metrics Data 2023: Enterprise Applications Measures.” December 2022. gartner.com
- The Developer Coefficient – Stripe. “The Developer Coefficient: Software engineering efficiency and its $3 trillion impact on global GDP.” 2022. stripe.com
- Stack Overflow Developer Survey – Stack Overflow. “2023 Developer Survey Results.” May 2023. stackoverflow.co
- GitPrime Engineering Effectiveness Report – LinearB (formerly GitPrime). “Engineering Effectiveness Benchmarks 2023.” February 2023. linearb.io
- Martin Fowler on Technical Debt – Martin Fowler. “TechnicalDebtQuadrant.” October 2009, updated 2019. martinfowler.com
Pricing, statistics, and company data referenced in this article were verified as of January 2025. Market conditions, tool pricing, and organizational strategies may have changed since publication. Readers should verify current information before making business decisions.

