The Perfect Training Program That Led To Being Laid Off
When I got a call from Adam, he said, “I really didn’t see it coming… and that’s what still bothers me.”
Not the layoff itself, but the fact that, in hindsight, he could trace every step that led there. What made it harder to accept was that he was good at his job. His team was talented, his programs were well-designed, and by every measure his function had ever used to evaluate itself, they were succeeding. Completion rates were up, satisfaction scores were strong, and the leadership program he'd spent the better part of six months building had been called "the best training I've taken here in years" by more than one participant. His VP of Talent had told him, personally, that it was some of the best work the L&D function had ever produced.
Eleven months later, Adam was escorted out of the company with a small severance and a lot of time to think about what had gone wrong. “I’m eligible to apply for another role in the company,” he told me, “but now I’m questioning what’s next for me.” How did this happen?
The program itself had started with everything going right. Adam's team had interviewed senior leaders across the business, mapped leadership competencies to real organizational challenges, and built a framework that was genuinely useful, clear enough to be memorable, substantive enough to matter. The facilitators were sharp, the content was engaging, and even the executive sponsors had shown up to the kickoff, which anyone who has worked in L&D long enough knows is not something you take for granted. By the time the first cohort of 120 managers completed the program, the energy felt real. People could explain the principles, point to moments where they'd applied them, and articulate what better leadership was supposed to look like on their teams.
For a stretch of time, it felt like exactly the kind of impact Adam had come into this work to have. This is the part of the story where most programs look like they’re working.
Then Q3 arrived, and with it the kind of feedback that doesn't come all at once but accumulates quietly, the way doubt does. A regional director mentioned that her managers were still avoiding the hard performance conversations. A business unit head pulled Adam aside after a town hall and said, almost apologetically, that he wasn't seeing it in the work. He couldn't quite explain what was missing, but something was. An HR Business Partner forwarded him a thread from a frustrated sales leader who said he wasn't seeing any change in behavior and was abandoning the program for an off-the-shelf solution.
Adam pulled the data. Completion rates were strong, assessment scores were solid, and survey results were positive. But he knew what he was hearing, and he knew the data wasn't telling the whole story. The problem was, it was the only story he had. And this is where the gap starts to show up. Not in the experience, but in what happens after it.
When organizational leaders started asking the harder questions about the business outcomes, Adam didn't have clean answers. He had satisfaction scores. He had completion percentages. He had qualitative feedback that ranged from enthusiastic to encouraging. What he didn't have was evidence that the program had moved the needle on anything the business actually cared about: retention, team performance, sales readiness, engagement on the teams these managers led. He hadn't built the program to capture any of that. He'd built it to deliver a great learning experience, and that's exactly what it had delivered.
It just hadn't been enough. Not because the training was weak, but because it was never designed to hold up under real conditions.
The budget conversations started in November. Adam had been through cycles like this before and knew how to navigate them — or thought he did. What he hadn't anticipated was how exposed his function would look when the CFO started asking every department to justify its spend with outcome data. Sales could point to pipeline. Marketing could point to attribution. Operations could point to efficiency metrics that were tracked quarterly. Adam's team could point to how many people had gone through the program and how much they'd enjoyed it.
In a room full of executives making decisions about where to cut, that was not a compelling case. Because the business wasn’t asking about activity. It was asking about impact.
He tried to reframe it. He talked about culture, about the long-term value of leadership capability, about the difficulty of attributing behavior change to a single program in a complex organization. All of it was true. None of it landed the way he needed it to. The people across the table from him weren't unsympathetic. They understood that learning is hard to measure and that some of its value is inherently long-term. But they were being asked to make decisions with limited information, and Adam kept showing up to those conversations without the information that would have made his function's value undeniable.
By January, the L&D team had been cut significantly. By March, Adam was part of the cut. Afterwards he told me, “If only I could have monetized the value of enjoyment, I probably would have been promoted.”
He spent the months after the layoff doing what most people in his position do: replaying the decisions, tracing the moments where something could have gone differently. And what he kept coming back to wasn't the quality of the training. It was the design choices made long before the program launched, when no one had thought carefully about what success would actually look like once the last cohort finished. What would be different in the business? How would they know? What data would they need, and where would it come from?
These weren't questions about measurement for its own sake. They were questions about whether L&D was designing for learning or designing for performance, and whether the organization would be able to tell the difference when the budget pressure came.
The framework had been excellent. The facilitation had been sharp. The experience had been exactly what they'd set out to build. But training doesn't fail in the design. It fails, or succeeds, in the moment someone needs it, and in the organization's ability to see whether it showed up when it mattered. Adam had built something that couldn't answer that question. Not because he missed something obvious, but because that question is rarely asked early enough. And when the question became unavoidable, there was nothing left to say.
This is the pattern many learning teams find themselves in. Strong programs, positive feedback, and a growing sense that something still isn’t translating. At Unboxed, we work with L&D leaders who are navigating exactly this pressure. The organizations we partner with aren't struggling because their content is weak or their teams aren't talented. They're struggling because the gap between a well-designed program and a defensible business outcome is wider than it looks, and most learning functions aren't built to close it. Adam's story is painful, but it isn't rare. We hear versions of it more often than we should.
Our approach starts before a single piece of content is created, and before success is assumed. The Skill Collision Test is a design discipline that forces the right questions early: where will this skill actually be tested under real conditions? What will make those moments hard? What does the organization stand to lose if someone isn't ready? These questions don't just make programs more effective; they force a clarity about outcomes that makes the value of learning visible, not just to learners, but to the business leaders who need to justify the investment.
From there, the Unboxed Elevate platform gives L&D teams the infrastructure to support behavior change beyond the training event itself, through manager-led coaching, AI-powered practice, reinforcement in the flow of work, and measurement that tracks what actually changes, not just who completed what. It's a system designed around the reality that training without measurement is invisible, and invisible functions don't survive hard budget seasons.
If you're building programs you believe in but struggling to connect them to outcomes the business can see, you're not alone, and the answer isn't to build more. It's to design differently, from the beginning, with the end in mind.
At Unboxed, that's where we start. Let's talk about what that could look like for your team.