Evidence-Based Skills and Decision Intelligence
Elevating Skills — and Linking Them to Performance

The 3rd episode in the series on "Reinventing L&D in the Age of AI"
Upskilling and reskilling are top of mind right now — and for good reason. AI requires us to build new capabilities within our current roles, and it is simultaneously reshaping which roles exist at all.
I believe skills are, next to AI, the second key trend shaping the future of L&D. Not just because of the reskilling required, but because AI opens up extraordinary opportunities to do upskilling and reskilling at scale — in ways that were never possible before.
But we have to do things differently.
This newsletter explores how.
— Peter

In this issue:
- Challenges of the current approach to Skills
- What business leaders really want from Skills
- Skills and Performance
- The right level of detail
- Context is King
- Bringing it all Together
Challenges of the current approach to Skills
Three Assumptions That May No Longer Hold
One of my older blog posts explains how the use and overuse of assumptions can be dangerous. Assumptions fill gaps where evidence is missing. They are not inherently bad — especially when grounded in experience and deep expertise. But in practice, assumptions are often used for convenience, to support a narrative without the evidence to back it up. The key discipline is to make them explicit and revisit them regularly.
That is exactly what I believe is missing in many skills discussions today. Most skills strategies, frameworks, and approaches rely too heavily on unverified assumptions. Three stand out as particularly problematic.
Assumption 1: Having the right skills ensures good performance
The popularity of "upskilling" rests on this belief: give people the right skills, and performance will follow. In reality, it is not that straightforward. Skills help, certainly. But performance depends on far more than skills alone. Whether people have the right tools, whether processes are efficient, whether they have sufficient time, whether incentives are aligned — all of these shape outcomes. Highly skilled people underperform in broken systems. Average skill levels produce strong results in well-designed ones. Skills matter — a lot. But they are one ingredient in a much larger performance equation. Treating them as a direct proxy for performance creates unrealistic expectations and leads to disappointment when upskilling programs fail to deliver business results.
Assumption 2: Skills are mainly developed through conventional training
When organizations talk about skill development, the conversation almost immediately turns to training — something people attend, consume, or complete. But as I have argued before, real skill development rarely works that way. Skills are built through repeated application, feedback in context, trial and error, and reflection on real work. Formal training can play a role — sometimes a critical one — but it is usually not sufficient. In fact, the largest part of most L&D budgets goes to programs that build knowledge, not skills. Treating skills as something that can be "delivered" through courses ignores how skills actually develop in the flow of work. This assumption also explains why so many skills initiatives feel heavy, slow, and disconnected from reality.

Assumption 3: Job descriptions and assessments give us reliable skills intelligence
The most common sources for skills data are job descriptions, role profiles, self-assessments, and manager or peer ratings. These feel structured and scalable. Unfortunately, they are also highly subjective. Job descriptions are often inherited, copy-pasted, or aspirational — they describe what should matter, not what actually drives performance. Self-assessments reflect confidence, not competence: a true expert may rate themselves modestly because they understand the full depth of the domain, while a beginner may rate themselves higher because they overestimate their ability. Manager assessments reflect perception, and managers may not even possess the skills they are evaluating. None of this makes these sources useless — sometimes they are all we have. But they give us opinions about skills, not proof that those skills are applied, effective, or relevant. And no matter how many opinions you collect, opinions do not become facts.
We Need to Elevate Skills Intelligence
These three assumptions are not wrong so much as incomplete and outdated. In an age where skills are becoming central to workforce strategy, AI, and decision-making, "about right" is no longer good enough. The rest of this newsletter explores what happens when we move beyond these assumptions — and what a more evidence-based, performance-oriented approach to skills could look like.
Starting with ‘What Business Leaders Actually Want from Skills’
What Business Leaders Actually Want
Once you step outside the world of skills frameworks, taxonomies, and learning catalogs, the conversation about skills sounds very different. Business leaders are not interested in skills for skills' sake. They care about outcomes, speed, and risk. Skills only matter if they help the organization perform — faster, better, and with more confidence.
When you listen carefully, their questions cluster around three themes.
Speed: How fast do people become effective?
The most urgent concern is time. Not how many people completed training or how many skills were added to a profile, but: How long until someone is productive? How long until they can work independently? How long until they truly master what is required?
This is where concepts like time-to-proficiency and time-to-autonomy come in. From a business perspective, these are not learning metrics — they are performance metrics. Every additional week someone needs to become effective represents lost opportunity, delayed output, and reliance on others. Whether it is onboarding, reskilling, or internal mobility, leaders want predictability: "If we invest in upskilling, when will it start paying off?"
Clarity: What skills actually matter — now and next?
Most organizations maintain long lists of skills. Too long. From a leadership perspective, this creates noise rather than insight. Business leaders want clear answers: Which skills are truly critical for which roles today? Where are the real gaps? Which gaps should we close first?
Without that clarity, everything becomes important — and nothing really is. Leaders increasingly expect this transparency to be data-informed, not based on gut feeling or generic trend reports.
Foresight: Where should we invest?
Leaders are not only asking what skills exist today. They are asking which gaps are getting worse, which skills need developing now to be ready for the near-term future, and which will become essential in five years. Most importantly: where should we focus investment for the biggest return?
This is where predictive analytics enters the conversation. Leaders want to anticipate problems — skill shortages that slow growth, dependencies on a few critical experts, emerging technologies that outpace internal capability. They are not looking for perfect forecasts. They are looking for directional guidance — enough insight to decide where to act, where to wait, and where not to invest.
The bottom line
Business leaders are not asking L&D to manage skills. They are asking the organization to reduce time-to-productivity, create transparency for investment decisions, and anticipate future needs. Skills only matter if they help answer those questions faster and more reliably. That is where the skills conversation must shift — from cataloguing and assessing to building skills intelligence that supports real business decisions.
Skills vs. Performance: Turning the Logic Around
This is the point where many skills initiatives quietly break down. The typical sequence runs: define skills, assess people against them, invest in developing them — and only then ask whether performance actually improved. When it does not, we conclude we picked the wrong skills, the training was not good enough, or people did not apply what they learned.
But what if the problem is not execution? What if it is the starting point?
Start with Performance, Not Skills
Most skills strategies are top-down. They begin with assumptions about what roles should require. Even when done thoughtfully with the best subject matter experts, the approach remains inferential: it relies on opinions and frameworks, not on evidence of what actually drives performance.
The alternative is both simpler and more uncomfortable: start with performance.
Instead of asking "What skills should people have?", turn the question around: "Who is performing exceptionally well — and what are they actually doing?"
I call this bottom-up skill inference.
From Performance Data to Skills
Almost everything we do while working generates data: output metrics, quality indicators, speed, throughput, rework, customer outcomes, error rates, escalations, behavioral patterns in systems and tools, communications across email, meetings, Teams, and Slack.
If we define what high performance looks like, we can use that data to identify top performers, consistent performers, people who improve fastest, and people who struggle despite having "the right skills on paper."
Only then does it make sense to ask the skills question. The logic flips: we observe what is done, not what is claimed. We look at how work is executed, not how roles are described. We identify patterns, not self-ratings. From those patterns, we derive the skills that are actually being demonstrated in real work.
This is what makes skills evidence-based — not because they appear in a framework, not because people say they have them, but because they are consistently present when high performance occurs.
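To make the inversion tangible, here is a minimal sketch in Python of what bottom-up skill inference could start from. The column names (cycle_time_h, error_rate, rework_rate), the people, and the scoring rule are hypothetical; any real implementation would begin from whatever work signals your systems already record.

```python
import pandas as pd

# Hypothetical work signals per person. In reality these would come from
# operational systems (ticketing, CRM, quality dashboards), not from a survey.
signals = pd.DataFrame({
    "person":       ["ana", "ben", "cho", "dee", "eli", "fay"],
    "cycle_time_h": [4.1, 9.0, 3.8, 8.5, 4.5, 7.9],          # speed
    "error_rate":   [0.02, 0.07, 0.03, 0.08, 0.02, 0.06],     # quality
    "rework_rate":  [0.05, 0.15, 0.04, 0.18, 0.06, 0.12],     # stability
})

# 1. Define high performance explicitly: fast AND accurate AND low rework.
#    Here each metric is z-scored and combined (lower raw values are better).
z = (signals[["cycle_time_h", "error_rate", "rework_rate"]]
     .apply(lambda col: (col - col.mean()) / col.std()))
signals["performance_score"] = -z.mean(axis=1)   # higher = better

# 2. Identify top performers from the data, not from opinions.
top = signals.nlargest(2, "performance_score")["person"]

# 3. Only now ask the skills question: which observable behaviours
#    (tool usage, job executions, collaboration patterns) distinguish
#    the top group from the rest? Those recurring patterns are the
#    evidence from which demonstrated skills are inferred.
print("Top performers (by evidence):", list(top))
```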
Defining What High Performance Actually Means
This only works if we are explicit about something that is often left vague: what does good performance look like — in concrete data points? Resolution times, defect rates, satisfaction scores, first-time-right rates. In most organizations, this data is already being recorded. The problem is not a lack of performance data. It is that this data is rarely connected to skills conversations.
High performance is almost never a single number. It shows up as consistent patterns over time: faster cycle times without quality loss, fewer errors without increased effort elsewhere, stable output under increasing complexity, quicker recovery when things go wrong, better outcomes with less supervision. High performers are not just fast or accurate — they are reliably effective under real-world constraints. These patterns are visible in the data, if you know where to look.
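One way to make "consistent patterns over time" concrete is sketched below. The metric, the monthly cadence, and the "above the role median in at least four of the last six months" rule are illustrative assumptions, not a recommended standard.

```python
import pandas as pd

# Hypothetical monthly performance snapshots for one role.
monthly = pd.DataFrame({
    "person": ["ana"] * 6 + ["ben"] * 6,
    "month":  list(range(1, 7)) * 2,
    "first_time_right": [0.92, 0.94, 0.91, 0.95, 0.93, 0.96,
                         0.81, 0.97, 0.78, 0.88, 0.74, 0.90],
})

# High performance defined as consistency, not a single peak:
# above the role median in at least 4 of the last 6 months.
median_by_month = monthly.groupby("month")["first_time_right"].transform("median")
monthly["above_median"] = monthly["first_time_right"] > median_by_month

consistency = monthly.groupby("person")["above_median"].sum()
high_performers = consistency[consistency >= 4].index.tolist()
print(high_performers)   # -> ['ana']: reliably effective, not just occasionally good
```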

Bottom-Up Skill Inference
From Skills Frameworks to Skills Evidence
When we reverse the logic — define high performance, identify top performers, analyze their behavior, derive demonstrated skills — something powerful happens. Skills are no longer hypothetical. They become predictive signals of performance. And skill development shifts from "educating" to replicating what works.
This is where skills stop being an HR artifact and start becoming a business instrument. And this is also where data and analytics are no longer optional — they become the only reliable way to make this work at scale.
The Right Level of Detail
Making Skills Actionable through Jobs to Be Done
There is one critical step in making this data-driven, evidence-based approach work: granularity.
We talk about skills as if they are precise and measurable, but many of them are not. Labels like "problem solving" or "data analysis" may look granular in a taxonomy, but in practice they are enormously broad. Each covers dozens — sometimes hundreds — of different activities.
Take data analysis. Two people who both claim this skill may behave very differently in practice. One might be excellent at cleaning messy data and spotting patterns but struggle to explain those patterns to a business leader. Another might tell a compelling story but rely heavily on others to prepare the underlying data. At the level of skills, these differences disappear. At the level of work, they matter enormously.
This is where the concept of Jobs to Be Done becomes essential.
Jobs to Be Done originated as an approach to innovation, developed around 30 years ago by Tony Ulwick and popularized by Clayton Christensen. The core idea is that customers do not buy products — they hire them to get a job done. Applied to skills, it shifts the focus from abstract capability to execution: what does someone actually do in a real situation to move work forward?
A Job to Be Done is not "data analysis." It is, for example, interpreting a trend in operational data and turning it into a concrete recommendation under time pressure. That distinction is subtle but powerful.
Why this matters for measurement. Jobs to Be Done are tied to observable action. You can see whether someone completes a job independently, how long it takes, how often they need support, and whether the outcome meets expectations. Performance systems do not record that someone "has analytical skills." They record cycle time, error rates, rework, decision quality, and downstream impact. These signals align naturally with Jobs to Be Done, not with abstract skill labels.
Why this matters for development. When skills are defined too broadly, development conversations stay vague and investments are spread thin. When Jobs to Be Done are clearly articulated, gaps become specific. The question shifts from "How do we improve analytical skills?" to "How do we help people translate insights into decisions faster and with more confidence?" That is a question the business can engage with.
Why this matters for proficiency. Proficiency is no longer an abstract level on a scale. It becomes the ability to perform a job reliably and autonomously in a given context. Time-to-proficiency becomes measurable: the moment when someone no longer needs support to execute a critical job to the required standard. That is exactly the signal business leaders care about.
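A small sketch of how time-to-proficiency could be derived once job executions are logged, assuming you record for each critical Job to Be Done whether it was done independently and to standard. The log fields, the start date, and the "three consecutive independent, to-standard executions" rule are assumptions for illustration.

```python
from datetime import date

# Hypothetical execution log for one person and one critical Job to Be Done:
# (date, executed independently?, met the required standard?)
log = [
    (date(2024, 1, 10), False, True),
    (date(2024, 1, 24), True,  False),
    (date(2024, 2,  7), True,  True),
    (date(2024, 2, 21), True,  True),
    (date(2024, 3,  6), True,  True),
    (date(2024, 3, 20), True,  True),
]

START = date(2024, 1, 2)   # e.g. first day in the role
REQUIRED_STREAK = 3        # proficiency = 3 consecutive independent, to-standard runs

def time_to_proficiency(log, start, streak_needed):
    """Days until the person no longer needs support to execute this job, or None."""
    streak = 0
    for day, independent, to_standard in sorted(log):
        streak = streak + 1 if (independent and to_standard) else 0
        if streak >= streak_needed:
            return (day - start).days
    return None

print(time_to_proficiency(log, START, REQUIRED_STREAK))  # -> 64 days
```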

A Practical Example
Consider "stakeholder management." At the skill level, it is a single label. At the Jobs to Be Done level, it breaks down into very different activities: handling objections in a customer conversation, aligning priorities across functions under time pressure, translating technical constraints into business decisions. Each of these can be observed, measured, and compared across performers. High performers do not just "have" stakeholder management — they execute these jobs consistently and effectively.
Try this: Take one broad skill from your frameworks. Break it down into three to five Jobs to Be Done specific to a particular role. For each, ask: Can I observe this? Can I measure it? Would a business leader recognize this as something that matters? That exercise alone will change how you think about skills.
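If you want to capture the result of that exercise in a structured, reusable form, a sketch like the one below can help. The skill, the role, the jobs, and the signal names are invented examples; the point is the three checks, not the specific content.

```python
from dataclasses import dataclass, field

@dataclass
class JobToBeDone:
    description: str           # what someone actually does in a real situation
    observable: bool           # can I see this happen in real work?
    performance_signal: str    # which existing data point would reflect it?
    matters_to_business: bool  # would a business leader recognise this?

@dataclass
class SkillBreakdown:
    skill: str
    role: str
    jobs: list = field(default_factory=list)

# Illustrative breakdown of one broad skill for one role.
stakeholder_mgmt = SkillBreakdown(
    skill="Stakeholder management",
    role="Product manager",
    jobs=[
        JobToBeDone("Handle objections in a customer conversation",
                    True, "escalation rate after customer calls", True),
        JobToBeDone("Align priorities across functions under time pressure",
                    True, "decision lead time on cross-functional items", True),
        JobToBeDone("Translate technical constraints into business decisions",
                    True, "rework caused by late-discovered constraints", True),
    ],
)
```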
Skills in Context
Skills Only Exist in Context
By now it should be clear that skills are not static assets people simply "have" or "don't have." They emerge from work, from evidence of performance, from Jobs to Be Done executed well. But there is one more layer that determines whether skills actually translate into value: context.
A skill without context is an abstraction. It sounds precise but collapses the moment you try to use it for real decisions.
Take data analytics. On paper it looks transferable — analytics is analytics, right? But in practice, analytics in sales, finance, operations, HR, or learning are very different beasts. The tools may overlap. Some techniques may be shared. But the meaning of the data, the questions being asked, the risks involved, and the definition of good performance change dramatically from one domain to another.
Why Context Matters for Mobility and Time-to-Productivity
This explains why organizations struggle so much with internal mobility. When someone moves roles, the delay is often not caused by learning an entirely new skill from scratch. It is caused by learning a new domain context: new data, new processes, new constraints, new success criteria, new unwritten rules.
Traditional skills models cannot explain this delay because they assume skills are portable. A skills-in-context model can. It allows you to say: "This person has the skill, but not yet in this context."
That one distinction makes time-to-autonomy far more predictable. A person with the right skill in the right context is the ideal candidate — low ramp-up time. A person with the skill but in a different context has the foundations but needs time to adapt. A person without the skill will take the longest to operate independently. Without context, all three look the same on paper.

A Concrete Example
Imagine your organization needs data analysts in Finance. You search your skills data and find me — I work in L&D and I am proficient in data analytics. On paper, it looks like a straightforward move. But my proficiency exists in the context of L&D: I understand learning metrics, participation patterns, engagement data, and performance indicators specific to this domain.
Put me in Finance, and my effective proficiency drops immediately. Not because I have lost analytical capability, but because financial data has different structures, metrics have different meanings, and decision cycles work differently. Before I can truly perform, I need to learn the finance domain — additional upskilling that has nothing to do with analytics itself.
Now imagine your skills data said not just "Peter is proficient in data analytics" but "Peter is proficient in data analytics in the context of Learning & Development." That single additional layer changes every decision you make: how you assess suitability, how you estimate ramp-up time, whether I am the right candidate or merely a candidate.
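A minimal sketch of how that additional context layer could be encoded and used for matching decisions. The people, contexts, and ramp-up estimates in weeks are illustrative placeholders, not benchmarks.

```python
# Hypothetical skills-in-context records: (person, skill, context)
skills_in_context = [
    ("peter", "data analytics", "L&D"),
    ("maria", "data analytics", "Finance"),
    ("jonas", "reporting", "Finance"),
]

def ramp_up_weeks(person, needed_skill, needed_context):
    """Rough ramp-up estimate: skill in context < skill only < neither.
    The week numbers are illustrative assumptions."""
    entries = [(s, c) for p, s, c in skills_in_context if p == person]
    if (needed_skill, needed_context) in entries:
        return 2      # right skill in the right context: low ramp-up
    if any(s == needed_skill for s, _ in entries):
        return 10     # right skill, different context: needs domain adaptation
    return 24         # neither: longest time to autonomy

for candidate in ["maria", "peter", "jonas"]:
    print(candidate, ramp_up_weeks(candidate, "data analytics", "Finance"))
# maria 2, peter 10, jonas 24: three candidates that look identical
# in a context-free skills database now rank very differently.
```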
Why Skills Intelligence Without Context Leads to Poor Decisions
When skills data lacks context, organizations overestimate internal mobility, underestimate ramp-up time, misjudge workforce readiness, and invest in generic learning interventions that miss the mark. Most importantly, they start believing that skills gaps are the problem, when in reality context gaps are.
Adding context does not complicate skills intelligence — it makes it usable. Skills become actionable only when they are grounded in real work, inferred from evidence, expressed through Jobs to Be Done, and anchored in context. Without context, skills remain labels. With context, they become decision-ready.
Bringing It All Together
Throughout this newsletter, we have challenged several deeply ingrained assumptions. Skills do not automatically translate into performance. They are not primarily developed through traditional training. Common sources like job descriptions and self-assessments tell us very little about what actually drives results.
Instead, we have seen how high performance in the digital world leaves evidence in the form of data. When we start with performance, break it down into Jobs to Be Done, and understand those jobs in their specific context, skills stop being abstract labels and start becoming actionable.
Skills are no longer something we define upfront and hope for. They become something we infer from reality.
This shift matters because it aligns skills thinking with how the business actually works. Business leaders care about speed, clarity, and foresight. Evidence-based, contextual skills intelligence is the only way to answer their questions credibly.
What This Means for Analytics and AI
Once you look at skills this way, analytics and AI stop being enhancements and become foundational. If skills are inferred from performance, Jobs to Be Done, and context, the data model you need looks very different from a traditional skills database. You start with performance data, work signals, role context, and evidence of execution. Skills emerge from patterns across that data.
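A sketch of how such an evidence-first data model might differ from a traditional skills database, using hypothetical record and field names. In a conventional model the skill entry is the primary record; here, skills only exist as a derived view over evidence.

```python
from dataclasses import dataclass
from datetime import date

# Evidence-first data model (illustrative field names, not a standard schema):
# the primary records are work signals tied to a Job to Be Done and a context.

@dataclass
class WorkSignal:            # something observable from the systems of work
    person_id: str
    job_to_be_done: str      # e.g. "align priorities across functions"
    context: str             # e.g. "Finance", "L&D"
    metric: str              # e.g. "cycle_time_h", "first_time_right"
    value: float
    observed_on: date

@dataclass
class InferredSkill:         # derived, never entered by hand
    person_id: str
    skill: str
    context: str
    evidence_count: int      # how many signals support this inference
    last_observed: date

# A traditional skills database starts from the second record type and fills
# it with self-ratings; here it only ever exists as an aggregation of the first.
```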
AI becomes powerful here — but only if used correctly. Its real value lies in processing scale and complexity: identifying patterns across millions of records, clustering similar jobs, spotting early signals of emerging gaps, and updating skills intelligence continuously as work changes.
But there is a hard truth: AI is only as good as the data and logic behind it. If your skills model is abstract, context-free, and opinion-based, AI will simply automate those weaknesses. If your model is grounded in performance, jobs, and context, AI can turn it into a living system. That is the difference between skills hype and skills intelligence.
Two Ways to Start This Week
You do not need a new platform, a massive taxonomy, or an AI initiative to begin.
First, pick one critical role and define what high performance actually looks like — in concrete, measurable terms. Not competencies. Data points. Then look at what data already exists. Ask yourself: if I had to identify the top three performers in this role using only data, what would I look at? You will be surprised how much is already there.
Second, take one broad skill from your frameworks and break it down into Jobs to Be Done in context. Not dozens — just three to five. For each one, ask: Can I observe this in practice? Can I connect it to a performance outcome? Would the business recognize this as something that matters?
If you do just these two things, you will already be operating at a fundamentally different level from most skills initiatives today.
Skills do not fail because they are unimportant. They fail because we treat them as abstractions. Once you ground them in performance, granularity, and context — and support them with the right analytics and AI — skills stop being a promise and start becoming a lever.