The 4th episode in the series on "Reinventing L&D in the Age of AI"
In the previous episodes, I introduced the AI-driven 70-20-10 model and explored how evidence-based skills & performance intelligence can transform how we think about the role of L&D. Both episodes shared a common thread: AI is everywhere, and it is reshaping everything L&D does.
But there is one dimension of AI that is surprisingly absent from most L&D conversations — and it may be the most important one of all.
Not how AI supports learning. Not how AI automates content creation. But how AI shapes the quality of decisions people make at work — and what happens when it gets those decisions wrong.
This is the episode where I ask L&D to step into genuinely unfamiliar territory. I know this will feel uncomfortable for some. It should. Because the opportunity I see for L&D goes far beyond learning programs — and it starts with a deceptively simple question:
If AI increasingly influences how people work and decide, who is responsible for making sure AI learns the right things?
— Peter
What Is Decision Intelligence? | Decisions Are the Core of Performance |
What If AI Gets It Wrong? | Training AI Is Still Training |
Decision Intelligence: Moving Beyond L&D | Getting Started
We talk a lot about learning programs. But most of our work is actually decision-making. So I’m curious — what decisions do you make most often?
What Is Decision Intelligence?
Before we go further, let me explain what I mean by decision intelligence — because it is a concept that is well established in other fields but almost entirely absent from L&D conversations.
Decision intelligence is the discipline of systematically using data, analytics, AI, and contextual insight to design, support, and improve decision-making — so that better, faster, and more consistent decisions are made in complex environments.

The term was introduced to me by Cassie Kozyrkov, who served as Google's first Chief Decision Scientist. Her core argument is compelling: organizations invest heavily in data and AI, but far too little in the quality of the decisions those tools are meant to support. The technology is only as valuable as the decisions it informs. Gartner identified decision intelligence as a top strategic technology trend, describing it as a practical discipline that improves organizational decision-making by combining data science, social science, and managerial science.
Why Decision Intelligence Is Not a Buzzword
Decision intelligence matters because decisions are, quite literally, what organizations are made of. Every process, every customer interaction, every project, every strategy execution consists of thousands of decisions — most of them made by individuals, often under time pressure, with incomplete information.
The quality of those decisions determines everything: whether a customer stays or leaves, whether a project delivers on time, whether a safety protocol is followed, whether a new hire succeeds. And yet, most organizations invest heavily in what happens before decisions (training, planning, data collection) and after decisions (reporting, evaluation, correction) — but remarkably little in improving the decision itself.
Making Decisions Is Hard
The reason we underinvest in decision quality is partly that we underestimate how hard decisions actually are. We tend to think of decisions as moments — a choice is made, and we move on. But in practice, decisions are rarely that clean.
Most workplace decisions involve uncertainty. You almost never have all the information you need. You have to act with what you have, knowing it is incomplete — and knowing that waiting for perfect information is itself a decision with consequences.
Most decisions involve competing priorities. The right answer for the customer may conflict with the right answer for the budget. The short-term fix may create a long-term problem. People do not just need information — they need the ability to weigh trade-offs, and that is cognitively demanding.
Most decisions are influenced by bias — often invisibly. We anchor on the first piece of information we see. We favor evidence that confirms what we already believe. We are more influenced by recent events than by patterns over time. These are not character flaws. They are how human cognition works. And they affect every decision, every day, at every level of the organization.
And most decisions happen under time pressure. People do not have the luxury of careful analysis for every choice they make. They rely on heuristics, rules of thumb, past experience, and whatever information is most readily available. The quality of what is "most readily available" therefore has an outsized influence on the quality of the decision.
This is the reality that decision intelligence addresses. Not by trying to make people into perfect decision-makers — that is unrealistic. But by systematically improving the inputs, the context, and the support that people have at the moment they decide.
Decisions Are the Core of Performance
Once you see decisions this way, something else becomes clear. Performance — the thing every organization ultimately cares about — is not some abstract outcome that happens independently. Performance is the accumulated result of decisions made well, or made poorly, across every role, every day.
Think about what high performance actually looks like in practice. A sales professional who consistently closes deals is not just "skilled" — they are making better decisions: which lead to prioritize, when to push, when to listen, how to frame the offer. A project manager who delivers on time is making hundreds of small decisions well: where to allocate effort, when to escalate, what risk to accept. A nurse who catches a complication early is making a judgment call — a decision — based on pattern recognition, experience, and available information.
Performance, at its core, is the ability to make the best possible decision given the circumstances.
This reframes everything we have discussed in this series. In Episode 3, we argued that skills must be grounded in performance. Now we can go one step further: skills drive performance because they improve the quality of decisions people make. A skilled professional is not someone who knows more — it is someone who decides better, more consistently, under real-world constraints.
And that is what makes the connection between L&D, skills, performance, and decision intelligence so natural. L&D has always been in the business of helping people perform better. Performance has always been about making better decisions. The only thing that has changed is that AI is now part of the equation — and AI's contribution to decision quality needs the same care and attention that human learning has always received.
And this is also why AI's role in decision-making is so consequential. Because AI is increasingly becoming the thing that is "most readily available" — the first source of information, the suggested next step, the summary that frames the situation. If that input is right, it compensates for many of the limitations we just described. If it is wrong, it amplifies them.
That leads us to three forces that have made all of this dramatically harder.
First, the volume and speed of decisions have increased. As organizations become more digital, more distributed, and more data-rich, people are expected to make more decisions, faster, with less time for reflection. The luxury of escalating to a manager or waiting for a committee is disappearing.
Second, the complexity of decisions has grown. Many decisions now involve multiple systems, cross-functional dependencies, regulatory considerations, and rapidly changing contexts. The information needed to decide well is scattered across tools, people, and data sources.
Third — and most relevant for this series — AI is now actively participating in those decisions. Through recommendations, summaries, suggested actions, automated workflows, and generated content, AI shapes what people see, what options they consider, and what actions they take. In many cases, people do not even realize that AI has already filtered or framed the information they are basing their decisions on.
This means the quality of AI output is no longer a technical curiosity. It is a direct input into organizational decision quality. When AI gets it right, decisions improve at scale. When AI gets it wrong, bad decisions also happen at scale — faster and with more confidence than before.
That is why decision intelligence is not a buzzword. It is a structural necessity for any organization that relies on AI to support how people work.
The Natural Connection to L&D
Now here is what struck me when I first encountered this concept. Decision intelligence is fundamentally about equipping people with what they need to make the best possible decision in a given moment. The right information, at the right level of detail, in the right context, at the right time.
That should sound familiar. Because that is also, at its core, what Learning and Development has always tried to do.
Think about it. When we design a learning program, we are trying to prepare someone to act well — to make good judgments, apply the right knowledge, respond effectively in their role. When we build performance support, we are trying to put the right guidance in front of someone at the moment it matters. When we develop skills, we are building the capacity to make better decisions under real-world conditions.
L&D and decision intelligence share the same fundamental purpose: helping people perform better by improving the quality of their decisions.
The difference is that decision intelligence explicitly extends this to AI. It asks: how do we ensure that AI — which increasingly provides the information, recommendations, and guidance people rely on — actually supports good decisions rather than undermining them?
And that question is where L&D's new opportunity begins. Because when AI becomes the primary vehicle through which people receive guidance at work, the quality of that guidance becomes a performance issue. And ensuring performance through better guidance is exactly what L&D has always done.
Let me now explain why this matters so urgently.
What If AI Gets It Wrong?
Across every topic we have covered so far, AI has played a central role. It powers the reimagined 70-20-10. It enables skills inference, granularity, and context. It automates content, personalizes learning, and drives analytics.
AI is everywhere. That conclusion is safe.
But here is what is less discussed: the quality and reliability of what AI actually produces. And in a corporate environment, this is not a theoretical concern. It is a business risk.
We have all seen examples of AI generating outputs that are factually incorrect, outdated, misleading, or — in some cases — offensive. These are often dismissed as early-stage problems of a maturing technology. But inside organizations, the stakes are higher. Companies increasingly rely on AI for advice, recommendations, explanations, and decisions. When AI gets something wrong, the consequences are no longer trivial.
The challenge is not whether AI can generate something. The challenge is whether it can generate something that is high-quality, factually correct, and aligned with how the organization actually works. You do not want AI recommending outdated procedures. You do not want it referencing products that no longer exist. You do not want it contradicting strategy, policy, or values. And you do not just need accuracy today — you need accuracy over time, as your organization evolves.
Safeguards Are Necessary but Not Sufficient
Part of the solution lies in the algorithms. Modern AI systems include restrictions on language, content filters, behavioral constraints, and alignment mechanisms. These controls matter. They reduce risk. But they are not sufficient.
Set them too tight, and AI loses much of its value. Set them too loose, and hallucinations and inappropriate outputs increase. Even if you find the right balance, you have only addressed half of the problem.
The Half That Gets Overlooked: Data
The other half is data.
AI does not operate in a vacuum. It runs on data. It learns from data — not just the prompts you give it today, but the data it was trained on and continues to be trained on. This becomes especially relevant as organizations deploy internal language models, trained on company documents, policies, procedures, reports, and knowledge bases.
In theory, this is exactly what we want: AI that understands our organization. In practice, it raises an uncomfortable question. What if the internal data is outdated, inconsistent, incomplete, or poorly structured?
If you train AI on poor-quality data, the AI will learn the wrong things — very efficiently. The old principle still holds: garbage in, garbage out. Only now, it applies at scale, at speed, and with a level of misplaced confidence that makes the errors harder to detect.
This is not just a technology risk. This is a performance risk. When AI supports daily work — giving guidance, recommending actions, explaining procedures — and that guidance is wrong, the result is ineffective work, reinforced bad practices, and mistakes that happen faster.
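To make this tangible, here is a minimal sketch of what caring for that data could look like in practice: a simple screen that flags the most common problems (superseded content, missing ownership, stale reviews) before documents feed an internal AI assistant. The field names and the one-year threshold are illustrative assumptions on my part, not a prescribed implementation.

from dataclasses import dataclass
from datetime import date, timedelta

# Hypothetical metadata for documents that feed an internal AI assistant.
# Field names and the review threshold are illustrative assumptions.
@dataclass
class Doc:
    title: str
    owner: str | None     # who is accountable for keeping the content current
    last_reviewed: date   # when a human last confirmed it is accurate
    status: str           # e.g. "current", "superseded", "draft"

MAX_REVIEW_AGE = timedelta(days=365)

def audit(docs: list[Doc], today: date) -> list[str]:
    """Flag documents the AI should not learn from as-is."""
    findings = []
    for d in docs:
        if d.status != "current":
            findings.append(f"{d.title}: status is '{d.status}', exclude it from the AI's sources")
        if d.owner is None:
            findings.append(f"{d.title}: no owner, so nobody will correct it when reality changes")
        if today - d.last_reviewed > MAX_REVIEW_AGE:
            findings.append(f"{d.title}: not reviewed in over a year, verify before the AI learns from it")
    return findings

# Example: two documents, each problematic in a different way.
docs = [
    Doc("Returns procedure v3", "Ops team", date(2023, 2, 1), "superseded"),
    Doc("Onboarding checklist", None, date(2025, 1, 15), "current"),
]
for finding in audit(docs, today=date(2025, 6, 1)):
    print(finding)

The point is not the code. The point is that every check in it encodes a judgment about what "good enough to learn from" means, and that judgment is a work and learning judgment, not an engineering one.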
Training AI Is Still Training
This is where I see the most important — and most overlooked — connection to Learning and Development.
Think about what sits at the core of artificial intelligence. Not code. Not infrastructure. It's learning. We do not tell AI what to do step by step; that would be traditional programming. Instead, we expose AI to data and allow it to learn patterns, meanings, and relationships. AI learns from examples. It internalizes behavior. It generalizes based on what it has seen. Much as humans do.
That means the quality of what AI learns from directly determines the quality of what AI produces. High-quality, well-structured, up-to-date data leads to accurate outputs, relevant recommendations, and trustworthy advice. Poor-quality data leads to the opposite.
Now here is the thought I want you to sit with for a moment.
L&D has spent decades doing exactly this — but for humans. Defining what "good" looks like. Selecting the right examples. Structuring information so it can be understood. Reinforcing correct patterns. Updating learning material as reality changes. These are not new capabilities. They are the foundation of what L&D does.
So why should training humans be where our responsibility ends?
Why should L&D not take responsibility for ensuring that AI — the system that increasingly guides how people work, decide, and perform — learns from the right data, in the right way, and stays aligned with reality as the organization evolves?
I am not suggesting L&D should own AI systems or become a technology function. I am suggesting something more specific: L&D should own the learning responsibility behind AI-supported work. What AI is allowed to learn from. What needs to be corrected or removed. How AI guidance evolves as work, processes, and strategy change.
This is not about controlling AI. It is about continuously educating it.
AI Does Not Improve on Its Own
There is a common misconception that AI will "get better over time" automatically. In reality, AI only improves if the data it relies on is updated, incorrect outputs are identified and corrected, outdated assumptions are actively removed, and changing work practices are reflected in the data and logic.
If none of that happens, AI does not improve. It stagnates — or degrades.
Ensuring this quality is not a purely technical task. It requires understanding how work is performed in practice, recognizing where context matters, knowing when rules apply and when judgment is required, and detecting when guidance no longer matches reality. This is not primarily an engineering problem. It is a work and learning problem.
And that makes it — naturally — an L&D problem.
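As a thought experiment, here is what the smallest version of that correction loop could look like: an employee flags a piece of AI guidance, and the flag is routed to whoever owns the underlying content, with L&D as the fallback. Every name here is a hypothetical sketch, consistent with the document-audit example above, not a finished system.

from dataclasses import dataclass, field
from datetime import datetime

# Hypothetical record of AI guidance that an employee flagged as wrong.
@dataclass
class GuidanceFlag:
    question: str
    ai_answer: str
    reason: str        # e.g. "outdated", "contradicts policy", "factually wrong"
    source_doc: str    # the internal document the answer was grounded in
    flagged_at: datetime = field(default_factory=datetime.now)

def route_for_correction(flag: GuidanceFlag, owners: dict[str, str]) -> str:
    """Send the flag to whoever owns the source content, with L&D as the fallback."""
    owner = owners.get(flag.source_doc, "L&D triage queue")
    return (f"To {owner}: the AI answered '{flag.ai_answer}' to the question "
            f"'{flag.question}' and was flagged as {flag.reason}. "
            f"Please review and update '{flag.source_doc}'.")

# Example: a flag raised against an outdated procedure.
flag = GuidanceFlag(
    question="How do I process a customer return?",
    ai_answer="Use form R-12 and mail it to the regional office.",
    reason="outdated",
    source_doc="Returns procedure v3",
)
print(route_for_correction(flag, owners={"Returns procedure v3": "Ops team"}))

Notice what the loop requires to work: someone who knows what the correct answer should have been, and someone accountable for updating the source. Neither of those is a technical role.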
Decision Intelligence: Moving Beyond L&D
At this point, you might be thinking: this sounds interesting, but is it realistic? Can L&D genuinely move into this space?
I believe it can. L&D has the opportunity here. Not to become a data science team or a technology function, but to evolve from delivering learning to also safeguarding the quality of AI-supported decisions — particularly where those decisions depend on knowledge, skills, context, and judgment.
What This Means for L&D
Let me be direct about something. I know this is uncomfortable territory for many L&D professionals. Most of us did not enter this field to work with data architecture, AI reliability, or decision frameworks. We came to help people learn and grow.
That purpose does not change. But the way it is expressed must.
In the AI-driven 70-20-10 model, most of the traditional learning cycle — content, delivery, administration, even parts of coaching and facilitation — will increasingly be handled by AI. Boutique Learning remains deeply human. But for the rest, L&D's differentiating value shifts.
The question is: shifts to where?
I believe the answer is towards decision intelligence for work and performance. L&D then becomes the function that ensures AI-supported guidance is trustworthy — not just technically functional, but grounded in how work is actually done, aligned with current practices, and continuously updated as the organization evolves.

Bringing Learning and Decision Intelligence together.
This positions L&D to take responsibility for the relevance and correctness of AI-supported guidance, for continuously updating the inputs that shape AI outputs, for identifying when AI advice no longer reflects reality, and for ensuring that AI helps employees do their work better, not just faster.
This Is How L&D Earns Strategic Relevance
Positioning L&D as a guardian of decision intelligence does something powerful. It allows L&D to remain highly relevant in an AI-driven world, move beyond content and programs, operate closer to strategy and execution, and take responsibility for something that truly matters at the executive level.
This is not about asking for a seat at the table. It is about owning something that makes that seat necessary.
Of course, this requires upskilling. L&D professionals will need to understand how AI learns, how data shapes behavior, how decisions are influenced by information quality. But if there is one function that should feel comfortable with the idea of learning something new — it should be Learning and Development.
A Direction, Not a Destination
I want to be clear: I am not presenting this as the only possible future for L&D. Organizations are different. Maturity levels vary. The right positioning for L&D depends on context — just like skills depend on context.
But I do believe this direction deserves serious consideration. The intersection of learning, AI reliability, and decision quality is real. It is growing. And right now, it is not fully owned by any function — not IT, not data science, not strategy. That creates an opening. Whether L&D steps into it is a choice. But it is a choice worth making deliberately rather than by default.
Getting Started
You do not need a reorganization or a new title to begin moving in this direction. Here are four practical steps that reposition L&D in practice — long before any org chart changes.
1. Shift the Conversation
Stop talking primarily about programs, courses, and completion rates. Start framing your work in terms of the decisions employees need to make, the performance outcomes that matter, and the places where people struggle despite having been trained. This is the simplest and most immediate change you can make. It shifts how others perceive L&D — from a delivery function to a performance partner.
2. Map Decision Points, Not Just Learning Needs
Pick one critical role in your organization. Instead of asking what training that role needs, ask: what are the five most important decisions someone in this role makes regularly? Then explore: where does AI already influence those decisions? Where could it help — and where could it cause harm? This exercise alone will surface risks and opportunities that traditional learning needs analyses completely miss.
3. Make AI Output Quality Visible
Start paying attention to where AI guidance in your organization works well, where it creates confusion, and where it is simply ignored. Talk to employees who use AI-supported tools daily. Ask them: when was the last time AI gave you wrong or outdated advice? What did you do? These signals are gold. They tell you where AI's learning has broken down — and where L&D could add immediate value by flagging and correcting the inputs.
4. Partner Differently
Seek out your colleagues in strategy, business intelligence, and IT. But approach them differently than you might have before. With strategy, talk about interpretation — not dashboards. With IT, talk about feedback loops — not tool selection. With the business, talk about decision quality — not satisfaction scores. These conversations position L&D as an intelligence partner. And they cost nothing but a shift in mindset.
I realize this episode asks more of L&D than the previous ones. The 70-20-10 model and evidence-based skills intelligence stretch current practice. Decision intelligence challenges our identity.
But here is what I keep coming back to. AI is not going away. It will increasingly shape how people work, how they are supported, and how decisions are made. Someone will need to ensure the quality of that support. Someone will need to make sure AI learns the right things — and keeps learning the right things as reality changes.
That responsibility sits naturally with the function that has always been in the business of helping others learn. The question is whether L&D is ready to extend that mission — from people to the systems that support them.
I believe it can. And I believe it should.

