Point-of-Need Learning: Why Application Beats Credentials

Edtech
India Tech
Build Logs
We built Pragmatic Leaders in 2018 around a contrarian bet: that corporate training would abandon certificates for verified competence, and COVID proved us right.
Author: B. Talvinder

Published: April 2, 2026

We started building Pragmatic Leaders in 2018 because traditional education was selling credentials, not competence. You could finish a Udemy course on React and still not be able to build a working app for the deli downstairs. The gap was not knowledge. It was application.

COVID did not create the problem. It just made everyone else notice.

The Bet We Made Early

I call this Point-of-Need Learning: the shift from abstract, front-loaded education to skills delivered exactly when you need to use them. Not “learn React,” but “learn React while building this specific feature for this specific user problem.”

The traditional model assumes you learn first, then apply later. That is backwards. Application comes first. Learning happens in service of getting something real done.

Pre-COVID, this was a contrarian bet. Post-COVID, it is the only model that works at scale.

The specific bet: by 2025, the majority of corporate training budgets will shift from certification-based programs to embedded, outcome-verified skill development. Not courses completed. Not certificates issued. Measurable ability to execute specific tasks.

We saw this coming because we were watching the wrong metric. Everyone tracked course completion rates. We tracked the gap between “certified” and “capable.”

The Indian Education Arithmetic

In India, the salary difference between Tier 1 college graduates and everyone else was 300%+. That gap was not about intelligence. It was about practical application. Tier 1 students got hands-on projects, mentorship, and real feedback loops. Everyone else got pre-recorded lectures and multiple-choice tests.

The numbers told a brutal story. Graduates from non-Tier-1 colleges averaged 3-6 lakhs annually, so courses priced at 1-2 lakhs were out of reach. There was no ramp between free YouTube tutorials and expensive postgraduate programs. The people who needed upskilling most could afford it least, and the affordable options were the worst at delivering actual competence.

The edtech boom of 2015-2020 scaled the wrong thing. It scaled content delivery. More videos. More courses. More certificates. But content was never the constraint. Application was. Poor completion rates, poor success rates, poor certificate value. The entire industry optimized for enrollment numbers and certificate issuance, not for whether graduates could actually do the job.

McKinsey projected 375 million workers globally would need to completely change their skill sets by 2030. In India alone, 400 million needed reskilling, with 100 million in managerial and professional domains. The gap between the problem and the existing solutions was not a crack. It was a canyon.

Three Technical Bets That Created the Moat

We started with 21 paying students across 3 countries. Bootstrapped. The validation was not “do people want to learn?” It was “will people pay to learn skills they can immediately use?”

The answer was yes, but only if we changed the architecture.

The traditional edtech model sells course access, measures completion rates, issues a certificate, and hopes graduates apply it somewhere. Our model identifies skill gaps in real work context, delivers learning at the moment of need, verifies application rather than recall, and credentials based on demonstrated competence.

We built three things that did not exist in most platforms.

Automated skill gap identification. Not “what course do you want?” but “what can you not do right now that is blocking you?” We mapped learning paths to actual job requirements, not arbitrary curriculum structures. Each course was divided into learning objectives, competencies, and complexity levels across sub-domains. The platform created individualized learning routes for each student based on their past competency and their target destination.
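As a rough sketch of how gap-driven routing like this can work: model competencies as a prerequisite graph, diff the learner's current competencies against the target role, and order only the missing pieces. The competency names, role requirements, and graph below are invented for illustration, not the platform's actual data model.

```python
# Hypothetical gap-driven learning-path generator. Competency names,
# prerequisites, and role requirements are invented for illustration.
from graphlib import TopologicalSorter

# Each competency maps to its prerequisite competencies.
PREREQS = {
    "react_basics": set(),
    "state_management": {"react_basics"},
    "api_integration": {"react_basics"},
    "feature_delivery": {"state_management", "api_integration"},
}

ROLE_REQUIREMENTS = {
    "frontend_engineer": {
        "react_basics", "state_management",
        "api_integration", "feature_delivery",
    },
}

def learning_path(known: set, target_role: str) -> list:
    """Return only the missing competencies, in prerequisite order."""
    required = ROLE_REQUIREMENTS[target_role]
    gap = required - known
    # Restrict the prerequisite graph to the gap, then order it so
    # nothing is taught before its prerequisites.
    subgraph = {c: PREREQS[c] & gap for c in gap}
    return list(TopologicalSorter(subgraph).static_order())

# A learner who already knows React basics skips straight to the gap.
print(learning_path({"react_basics"}, "frontend_engineer"))
```

The point of the structure is the subtraction step: the path starts where competence ends, rather than at lesson one of a fixed curriculum.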

Recognition of Prior Learning. We borrowed from traditional university models but applied them to working professionals. Past experience and knowledge were quantified, requisite credits awarded, and skill gaps identified automatically. This meant a developer with five years of backend experience did not sit through the same curriculum as a fresh graduate. Their learning path started where their competence ended.
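In pseudocode terms, RPL reduces to assessing each required competency and splitting the results into credited and gap sets. This is a minimal sketch under assumed inputs; the pass mark, competency names, and scores are invented, and a real rubric would be far richer than a single threshold.

```python
# Illustrative Recognition-of-Prior-Learning pass. The 0.7 pass mark
# and the competency scores below are assumptions, not the real rubric.

def recognize_prior_learning(assessment_scores: dict, pass_mark: float = 0.7):
    """Split assessed competencies into credited vs. remaining gaps."""
    credited = {c for c, s in assessment_scores.items() if s >= pass_mark}
    gaps = {c for c, s in assessment_scores.items() if s < pass_mark}
    return credited, gaps

# A backend developer with five years of experience assesses well on
# most competencies and only has to study the genuine gaps.
scores = {"sql": 0.9, "api_design": 0.8, "system_design": 0.55}
credited, gaps = recognize_prior_learning(scores)
# credited covers sql and api_design; only system_design remains a gap
```

Credited competencies feed directly into the gap calculation, which is what lets two learners with the same target end up on very different paths.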

Tokenized credit and applied gamification. Not badges for watching videos. Credits for solving real problems, helping peers, shipping working code. Complete a module, earn credits. Contribute to community, earn credits. Mentor someone, earn credits. The currency was not time spent but value created. Every positive interaction fed the learning algorithm.
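A minimal ledger sketch of the credit mechanic described above, assuming invented event types and credit values: credits are earned per value-creating event, and the append-only event log is what a learning algorithm could consume.

```python
# Minimal credit-ledger sketch. Event types and credit values are
# invented to illustrate "credits for value created, not time spent."
from collections import defaultdict

CREDIT_VALUES = {
    "module_completed": 10,
    "peer_answer_accepted": 5,
    "project_shipped": 25,
    "mentoring_session": 15,
}

class CreditLedger:
    def __init__(self):
        self.balances = defaultdict(int)
        self.events = []  # append-only log that could feed the algorithm

    def record(self, user: str, event: str) -> int:
        """Credit a value-creating event and return the new balance."""
        credits = CREDIT_VALUES[event]
        self.balances[user] += credits
        self.events.append((user, event, credits))
        return self.balances[user]

ledger = CreditLedger()
ledger.record("asha", "module_completed")
ledger.record("asha", "project_shipped")
# asha's balance is 35: shipping a project outweighs finishing a module
```

Note what is absent: there is no event for "video watched." That omission is the design choice.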

The pedagogy was case-based, modeled on how business, medicine, and law have trained professionals for decades. Harvard’s case method, digitized and made accessible. Students did not watch someone talk for 20 hours. They solved cases, applied theory to practical scenarios, built demonstrable projects. We paired students with in-house development teams to build and ship real products.

The Platform Decision That Cost Us Three Months

The hardest technical decision: moving from a customized LMS to building proprietary platform infrastructure. We lost 3 months of velocity. But a customized LMS could not use the data we were collecting. Patterns of where people got stuck. Which learning paths actually led to competence. Which credentials correlated with job performance.

We had a working LMS, a Stack Overflow-inspired forum, and a job board. That was enough to validate the pedagogy. But it was not enough to scale individualized learning to millions of learners across domains. The data from those first 35 students across 5 countries (paying an average of $1,200 per person) proved the model worked. 70% came from Ivy League-equivalent colleges. Graduates from Product School and UpGrad joined because of our pedagogy, not our brand.

That data became the moat. Not the content. Not the platform features. The data about how people actually learn and where they actually fail.

Who Actually Pays for Competence

By early 2020, we had validation across three verticals: corporate training, university partnerships, and individual upskilling. Corporate training drove the most revenue. Companies paid for verified competence in ways individuals would not.

We initially assumed individuals would be the primary buyers. They were not. Corporations were. Individuals optimize for credentials because that is what the hiring market rewards. They need the certificate they can show on LinkedIn. Corporations optimize for outcomes because they see the gap between the certificate and the work that gets done.

This distinction shaped everything. Our live classes converted to bootcamp courses at roughly 20%: of 500 registrations, 150 attended, 88% of attendees stayed through the sessions, and 60% paid for extended access. The numbers validated that the case-based approach held attention in ways pre-recorded content never could.

The 80% success rate for job outcomes happened irrespective of pedigree. Students who would never have been considered for product management roles at top companies got in, and 100% attributed it to the case-based learning and the network they built while doing it.

Then COVID Hit

Suddenly, everyone needed what we had been building. Remote work made skills gaps impossible to hide. Managers could not rely on proximity as a proxy for productivity. “Can this person actually do the job?” became the only question that mattered.

The market went from “interesting idea” to “urgent need” in 8 weeks.

What I Got Wrong

We underestimated how much infrastructure we needed to verify competence at scale. It is easy to check if someone watched a video. It is hard to verify if they can apply what they learned in a novel context. We built rule-based scoring (this was 2018, before LLMs). The scoring was brittle. We spent 6 months refining it.
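To make the brittleness concrete, here is a toy rule-based scorer of the kind that era allowed, with invented rules: pattern checks against a code submission. A correct solution phrased differently from what the rules anticipate loses points, which is exactly the failure mode that took months of refinement.

```python
# Toy rule-based competence scorer, illustrating why 2018-era rule
# systems were brittle. The rules below are invented for illustration.
import re

RULES = [
    ("uses_list_comprehension", re.compile(r"\[.*for\s+\w+\s+in\s+.*\]")),
    ("defines_function", re.compile(r"\bdef\s+\w+\(")),
    ("has_return", re.compile(r"\breturn\b")),
]

def score_submission(code: str) -> float:
    """Fraction of rules the submission satisfies."""
    hits = sum(1 for _, pattern in RULES if pattern.search(code))
    return hits / len(RULES)

good = "def squares(xs):\n    return [x * x for x in xs]"
# An equally correct map-based solution misses the comprehension rule.
alt = "def squares(xs):\n    return list(map(lambda x: x * x, xs))"
# good scores 1.0; alt scores only 2/3 despite being correct
print(score_submission(good), score_submission(alt))
```

The rules check surface form, not behavior, so the scorer penalizes valid alternatives and rewards anything that pattern-matches, even if it is wrong.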

The biggest miss: we did not move fast enough on international expansion. We had validation in India, the US, and Southeast Asia by 2019. We should have scaled globally before COVID. By the time we were ready, the market was crowded.

The Competence Measurement Problem

Pragmatic Leaders now trains 10,000+ professionals annually. The model works. But the model only works if you measure the right thing: not what people know, but what they can do.

The question I am still working through: how do you scale verified competence without turning it into another credential game? The moment you standardize assessment, people optimize for the assessment instead of the skill. The moment you issue a certificate, employers use it as a filter rather than a signal. The system that was built to prove competence becomes another gatekeeping mechanism.

We are not there yet. But the direction is clear. The companies that win in education are not the ones with the most content. They are the ones with the tightest feedback loop between learning and application.