The Recourse Trap: Why Competition Makes Credit Scoring More Exclusive, Not Less

Financial Systems · Market Structure · AI Risk · India Tech
Credit scoring fails not because it’s inaccurate, but because competitive lending markets reward exclusion over solving information problems for 400 million unscored Indian adults.
Author: B. Talvinder

Published: April 3, 2026

In 2022, HDFC Bank raised its minimum CIBIL score requirement for personal loans from 650 to 725. ICICI and Axis followed within months. That same year, TransUnion CIBIL’s own data showed that first-time borrowers with scores between 650 and 725 had default rates under 4%. The banks weren’t responding to rising risk. They were responding to each other.

Credit scoring systems don’t fail because they’re inaccurate. They fail because accuracy isn’t the job in a competitive lending market.

The job is risk transfer. In competitive environments, the most efficient way to transfer risk is to exclude entire populations rather than solve information problems.

I’ve seen this pattern up close. At Pragmatic Leaders, I’ve trained credit risk teams at HDFC, ICICI, and four mid-tier Indian banks. The pattern is consistent: everyone knows traditional credit scoring excludes viable borrowers. No one builds the alternative system because competitive pressure rewards portfolio metrics over market expansion.

The Recourse Trap

This is what I’m calling The Recourse Trap: a system where the mechanism designed to enable access becomes the mechanism that prevents it, and competitive pressure makes the trap stronger, not weaker.

Here’s how it works:

A lender can’t distinguish between a borrower with no credit history and a borrower with bad credit history. Both score low. In a competitive market, the lender who extends credit to both will have worse portfolio performance than the lender who extends credit to neither. The rational competitive response is exclusion.

The borrower has no recourse. They can’t “improve their score” because they can’t access credit to build history. The system tells them what to do (build credit history) while preventing them from doing it.

India has 400 million adults with no credit history in any bureau. Not because they’re risky. Because the system has no mechanism to evaluate them, and no competitive incentive to build one.

The Mechanism

When lenders compete on portfolio risk metrics, they optimize for false negative reduction (don’t lend to bad borrowers) over false positive reduction (do lend to good borrowers). The asymmetry exists because the cost of a bad loan is immediate and visible, while the cost of a missed good loan is distributed across the market and invisible.
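The asymmetry has simple arithmetic behind it. A minimal sketch, with assumed numbers (5% net margin on a repaid loan, 60% loss given default; neither figure comes from the banks discussed above): one default erases the margin on a dozen good loans, so caution is cheap and inclusion is expensive.

```python
# Toy per-loan payoff under assumed numbers: a default loses most of the
# principal, while a good loan only earns the interest margin.
LOSS_GIVEN_DEFAULT = 0.60   # assumed fraction of principal lost on default
MARGIN = 0.05               # assumed net margin earned on a repaid loan

def expected_profit(p_default):
    """Expected profit per unit lent, at a given default probability."""
    return (1 - p_default) * MARGIN - p_default * LOSS_GIVEN_DEFAULT

# One default wipes out the margin on twelve good loans (0.60 / 0.05), so
# the break-even default probability is MARGIN / (MARGIN + LGD) ~ 7.7%.
for pd in (0.02, 0.04, 0.08, 0.12):
    print(f"PD {pd:.0%}: expected profit {expected_profit(pd):+.3f} per unit lent")
```

The visible cost of one bad loan outweighs the invisible forgone margin by an order of magnitude, which is exactly the asymmetry that pushes optimization toward false negative reduction.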

This creates a lemons problem. Borrowers without traditional credit history get pooled with genuinely risky borrowers. Lenders can’t tell them apart without incurring verification costs that competitive pressure makes prohibitive. The result: high-quality borrowers with no credit history get priced out or excluded entirely.
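The pooling dynamic can be sketched as a toy unraveling loop. Every number here is an assumption (the two default rates, the funding cost, the 12% rate above which good borrowers walk away); the point is the direction of the spiral, not the values.

```python
# Toy lemons unraveling: good (4% PD) and risky (25% PD) thin-file
# borrowers share one pool. The lender prices a single break-even rate
# for the pool; good borrowers exit once that rate exceeds their
# reservation rate, which worsens the pool and raises the rate further.
GOOD_PD, RISKY_PD = 0.04, 0.25
GOOD_MAX_RATE = 0.12    # assumed rate above which good borrowers exit
FUNDING_COST = 0.07     # assumed lender cost of funds

def breakeven_rate(share_good):
    pd = share_good * GOOD_PD + (1 - share_good) * RISKY_PD
    # Lender breaks even when (1 - pd) * (1 + r) = 1 + funding cost.
    return (1 + FUNDING_COST) / (1 - pd) - 1

share_good = 0.8
for step in range(5):
    r = breakeven_rate(share_good)
    print(f"step {step}: good share {share_good:.0%}, pool rate {r:.1%}")
    if r > GOOD_MAX_RATE:
        share_good = max(0.0, share_good - 0.4)  # good borrowers leave
```

Even starting with 80% good borrowers, the pooled rate opens above the good borrowers' reservation rate, so they exit and the pool collapses to the risky segment: the high-quality thin-file borrower is priced out without ever defaulting.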

Falsifiable claim: In competitive lending markets, credit score requirements will trend upward over time for populations without traditional credit history, even as default rates in those populations remain stable or decline. The system optimizes for competitive position, not credit risk.

You can test this. Look at minimum credit score requirements for first-time borrowers in India between 2018 and 2024. Requirements went up across every major bank. Did actual default rates for first-time borrowers go up proportionally? No. RBI data shows gross NPA ratios for retail loans actually declined from 2.5% to 1.7% in that period. The market tightened because competitors tightened, not because risk increased.

The Transaction Cost Argument Is Circular

Here’s the tell: when you ask banks why they don’t serve underbanked populations, they talk about credit scores. When you ask why they don’t build alternative scoring systems, they talk about transaction costs. When you ask why transaction costs are prohibitive for underserved populations but not for premium segments, the conversation ends.

High costs justify exclusion. Exclusion prevents scale. Lack of scale keeps costs high.

South African banks demonstrate this clearly. Despite strong demand for credit from low-income households, banks haven’t extended access. Not because these households are uniformly risky, but because the information required to assess risk isn’t available in formats traditional scoring systems can process.

The alternative mechanisms prove the problem is solvable. Group lending models and informal systems like stokvels work precisely because they solve the information problem differently. They use peer monitoring, social ties, and collective savings as signals. Transaction costs stay low. Default rates stay manageable.

But competitive banks don’t adopt these approaches. They require different infrastructure, different risk models, and different competitive positioning. A bank that moves first takes on execution risk. A bank that moves second can copy what works. The rational move is to wait, which means no one moves.

What AI Makes Worse

AI-powered credit scoring is getting more sophisticated at predicting risk within existing data distributions. Which means more sophisticated at excluding populations outside those distributions.

An AI model trained on historical lending data will learn that borrowers without credit history are risky. Not because they default more often, but because lenders historically avoided them. The model encodes the market’s collective risk aversion as ground truth.

I saw this firsthand during a workshop with a mid-tier bank's risk team in 2023. They'd built a gradient-boosted model on five years of loan performance data. The model performed well on their test set (AUC of 0.87). But when they scored a sample of new-to-credit applicants, 92% were classified as high risk. The data scientist on the team knew the scores were wrong. His manager knew. But nobody was going to approve a lending policy whose portfolio metrics would look worse than the competitor's down the street.
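One way that encoding happens is through the training labels themselves. A minimal simulation, with assumed numbers: both groups share the same 4% true default rate, but a historical policy that rarely approved unscored applicants, combined with naive reject-inference (counting anyone not approved-and-repaid as "bad"), produces labels that brand nearly every no-history applicant as risky.

```python
import random

random.seed(0)
TRUE_DEFAULT = 0.04   # assumed identical true default rate for both groups

# Simulate a historical loan book: scored applicants were approved 90% of
# the time, unscored (no-history) applicants only 5% of the time.
applicants = []
for _ in range(100_000):
    has_history = random.random() < 0.6
    approved = random.random() < (0.90 if has_history else 0.05)
    defaulted = random.random() < TRUE_DEFAULT
    applicants.append((has_history, approved, defaulted))

def bad_rate(group):
    # Naive labeling: only an approved-and-repaid loan counts as "good";
    # rejected applicants have no outcome, so they get marked "bad".
    return sum(not (a and not d) for _, a, d in group) / len(group)

scored = [x for x in applicants if x[0]]
unscored = [x for x in applicants if not x[0]]

print(f"'bad' label rate, with history: {bad_rate(scored):.2f}")
print(f"'bad' label rate, no history:   {bad_rate(unscored):.2f}")
```

Despite identical true default rates, the no-history "bad" label rate comes out near 95% against roughly 14% for scored applicants. Any model fit to these labels will reproduce the old approval policy and call it risk.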

The feedback loop tightens. Better prediction within the existing distribution means worse outcomes for populations outside it.

What I Got Wrong

I initially thought the solution was better data. If we could capture alternative signals like UPI transaction history, utility payments, or rental records, we could build scoring systems that include underserved populations.

That’s technically true but structurally naive.

The problem isn’t data availability. India’s Account Aggregator framework has been live since 2021. Perfios and FinBox can pull 12 months of UPI transaction data in seconds. The pipes exist. Banks still don’t use them for first-time borrowers at any meaningful scale because the competitive incentive hasn’t shifted.

A bank that invests in alternative data infrastructure takes on execution risk and regulatory uncertainty. A bank that waits can copy the approach if it works. The first-mover disadvantage is real.

Beyond Credit Scoring

The recourse trap exists because competitive markets optimize for relative performance, not absolute outcomes. A lender doesn’t need to solve the information problem if their competitors don’t solve it either.

This has implications beyond financial services. Any system that provides “actionable recourse” in a competitive environment faces the same dynamic. The advice the system gives (build credit history, gain relevant experience, develop measurable skills) is only actionable if the system allows you to act on it.

When it doesn’t, you’re not dealing with an information problem. You’re dealing with a market structure problem.

AI-powered resume screening. Skills-based hiring platforms. Fraud detection systems. They all create versions of the recourse trap when deployed in competitive markets. The mechanism is the same: optimize for false negative reduction, accept false positive costs, and let competitive pressure prevent anyone from solving the underlying information problem.

The Question Worth Asking

What other systems are we building that look like they enable access but actually optimize for exclusion?

If the mechanism for proving you’re trustworthy requires access you can’t get without already being trusted, you’re in a recourse trap. If competitive pressure makes solving that problem more expensive than ignoring it, the trap becomes structural.

Are we asking this question when we deploy AI systems in hiring, lending, insurance, education? Mostly, no. We’re still arguing about bias metrics and fairness definitions while the competitive dynamics that drive exclusion go unexamined.

The recourse trap doesn’t care about bias. It cares about competitive dynamics. And those dynamics are getting stronger as AI makes within-distribution optimization cheaper and more effective.