Every improvement program has a version of this story. Someone identifies a change they want to make. They build the business case, get the resources, run the project. The result looks good on paper. And then, weeks later, it becomes clear: the improvement they just spent three months on wasn't actually connected to a problem anyone needed solved.
This is the most expensive failure mode in continuous improvement -- not failed projects, but successful projects aimed at the wrong target.
The pet project trap
Organizations pursuing continuous improvement tend to generate a lot of ideas. That's the point. But volume creates a sorting problem. Mixed in with genuine problem-solving work, you'll find pet projects -- changes that someone wants to make because they seem interesting, because a similar organization did it, or because the idea has been sitting on a backlog so long it feels like it must be important.
Pet projects aren't malicious. They're often championed by smart, well-intentioned people. The issue is opportunity cost. Every hour spent on a project that doesn't address a real problem is an hour not spent on one that does. And because improvement capacity is always finite, misdirected effort doesn't just waste resources -- it erodes confidence in the entire program.
The tell is usually in the framing. When someone pitches an improvement that starts with the solution ("We should build a dashboard for X" or "We need to create standard work for Y"), that's a signal to pause. The question isn't whether the solution sounds reasonable. It's whether anyone has clearly articulated the problem it would solve, who's affected, and why it matters now.
"Don't come to me with a solution. Come with a problem."
Most of us grew up hearing the opposite advice: don't bring problems to your boss without a solution attached. In improvement work, that instinct is exactly backward.
When people lead with solutions, they skip the most important step in the improvement cycle -- understanding the problem well enough to know whether their proposed fix addresses the root cause. A team that shows up with "we need a new template" has already committed to an answer before asking the question. A team that shows up with "our handoff process drops critical information 40% of the time" has given everyone -- including themselves -- the foundation to find the right countermeasure, not just a plausible one.
This doesn't mean discouraging initiative. It means structuring how improvement ideas enter the system. Make the problem statement required. Make the solution optional. When someone submits an idea, the first thing a reviewer should see is: what gap are you trying to close? If that field is empty or vague ("improve efficiency"), the idea isn't ready yet.
One practical test: can you state the problem without referencing your proposed solution? "Patients wait an average of 47 minutes between check-in and seeing a provider, against a target of 20 minutes" is a problem statement. "We should redesign the intake workflow" is a solution looking for justification. The first opens a door to analysis. The second closes it.
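That test can be made mechanical at the point of intake. Here is a minimal sketch, in Python, of a readiness check a reviewer (or an intake form) might apply: the problem statement is required, the solution is optional, and statements that are empty, vague, or solutions in disguise get bounced back. The function name, the vague-phrase list, and the solution-marker prefixes are all hypothetical illustrations, not features of any particular system.

```python
# Hypothetical intake check: an idea is reviewable only if its problem
# statement is present, specific, and not a solution in disguise.
VAGUE_PHRASES = {"improve efficiency", "make things better", "optimize", "streamline"}
SOLUTION_MARKERS = ("we should", "we need to", "let's build", "create a")

def is_ready_for_review(problem_statement: str, solution: str = "") -> bool:
    """Return True if an improvement idea is ready for review.

    The solution argument is accepted but never required -- by design.
    """
    text = problem_statement.strip().lower()
    if not text:
        return False  # problem statement is required
    if text in VAGUE_PHRASES:
        return False  # "improve efficiency" isn't a problem statement yet
    if text.startswith(SOLUTION_MARKERS):
        return False  # a solution masquerading as a problem
    return True
```

Run against the two examples above, the 47-minute wait passes and "We should redesign the intake workflow" does not -- which is exactly the sorting behavior the test is meant to encode.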
The "I don't care what your solution is" principle
This framing can sound dismissive, but it's actually the opposite. Telling someone "I want to know what your problems are, whether you know how to solve them or not" sends a powerful message: your observations matter independent of your ability to fix them. That's a fundamentally different signal than "don't bring me problems unless you also have solutions," which inadvertently tells people that if they can't figure it out themselves, leadership doesn't want to hear about it.
This distinction matters enormously for engagement. In traditional suggestion systems, only 2-3% of submitted ideas are ever implemented. One reason is that people tend to submit ideas only when they already have a solution in mind -- which means the organization never hears about the hundreds of problems that frontline workers notice but don't know how to fix. Those unspoken problems are often the most valuable ones, because they require cross-functional thinking or system-level changes that no individual contributor can design alone.
The most productive improvement cultures separate problem identification from problem solving. Anyone can (and should) flag a problem. The problem-solving method -- whether it's a quick PDSA cycle, an A3, or a full DMAIC project -- gets selected based on the complexity of the problem, not the preferences of the person who spotted it.
The target condition question
Once you've stated the problem clearly, there's a second filter that catches most remaining misdirected effort: does this move us closer to our target condition?
A target condition is a specific, time-bound description of how a process should work -- not a metric target ("reduce defects by 15%") but a description of the operating state you're working toward ("every patient receives discharge instructions in their preferred language before leaving the unit"). The target condition gives you a direction, and that direction lets you evaluate whether any given project actually belongs on the path.
This is where strategic alignment earns its keep. Teams without a clear target condition tend to optimize locally -- picking up whatever improvement looks easiest or most interesting. Teams with one can ask a pointed question about every proposed project: does this help us get there? If the answer is no, the project might still be worth doing someday. But it shouldn't be consuming resources that could go toward work that moves the needle on what actually matters.
Target conditions change over time. An initiative that made perfect sense six months ago may no longer align with current priorities. That's not a failure -- it's a signal to re-evaluate the backlog. Organizations that periodically review their open improvement projects against current target conditions often find that 20-30% of active work is no longer pointed in the right direction. Closing or pausing that work frees capacity for what is.
The backlog problem
Speaking of backlogs: they're where misdirected improvement work goes to hide. A healthy improvement system has a manageable number of active projects, each tied to a clear problem and a current target condition. An unhealthy one has a backlog of hundreds of items that "might be nice someday," accumulating like sediment.
Large backlogs create two problems. First, they make prioritization nearly impossible -- when everything is on the list, nothing stands out. Second, they create a false sense of productivity. Leaders look at the backlog and think "we have plenty of improvement ideas," when what they actually have is a graveyard of unjustified solutions mixed with a few genuine problems that are impossible to find.
The fix is periodic backlog hygiene, which sounds boring and is genuinely transformative. Review open items quarterly. For each one, ask: what problem does this solve, and is that problem still relevant? If the answer is unclear, close it. You can always reopen it later if the problem resurfaces. In the meantime, you've reduced noise and made the real priorities visible.
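The quarterly rule above can be sketched as a simple split over the backlog: keep items with a clear, still-relevant problem; close everything else. This is an illustrative Python sketch -- the `BacklogItem` fields and `prune_backlog` function are hypothetical names, and a real review would involve judgment, not just two flags.

```python
from dataclasses import dataclass

@dataclass
class BacklogItem:
    title: str
    problem: str          # the problem this item solves ("" if none was stated)
    still_relevant: bool  # is that problem still relevant today?

def prune_backlog(items: list[BacklogItem]) -> tuple[list[BacklogItem], list[BacklogItem]]:
    """Split a backlog into (keep, close) using the quarterly-review rule:
    close anything with no clear problem, or a problem no longer relevant."""
    keep, close = [], []
    for item in items:
        if item.problem.strip() and item.still_relevant:
            keep.append(item)
        else:
            close.append(item)  # closed items can be reopened if the problem resurfaces
    return keep, close
```

The point of the sketch is the asymmetry: an item earns a place on the active list by stating a live problem; everything else defaults to closed, not to "someday."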
How to build the habit
Making "are we solving the right problem?" a reflexive question -- not just an occasional gut check -- requires a few structural changes.
Put the problem statement first in your improvement workflow. Literally first. Before the solution field, before the impact estimate, before anything else. If your improvement management system allows configurable templates, make the problem statement required and the solution field optional (or hidden at the point of creation). This single design choice changes behavior more reliably than any amount of training.
Coach the question, don't just ask it. When a direct report brings you a proposed improvement, resist the urge to evaluate the solution. Instead, ask: "What problem are you trying to solve?" and "How do you know that's the problem?"
These aren't gotcha questions. They're the opening move in coaching someone to think more rigorously. Over time, people internalize the questions and start asking them of themselves before they ever bring the idea forward.
Connect improvement work to strategic priorities visibly. When teams can see how their daily improvement work connects (or doesn't) to organizational goals, they self-correct. A team that knows the target condition is "reduce time-to-first-appointment by 30%" will naturally question whether reorganizing the supply closet -- however satisfying -- is the best use of their next improvement cycle.
Review and prune regularly. Backlogs, active projects, and improvement boards all need periodic re-evaluation against current target conditions. Build this into your management cadence -- quarterly at minimum. Treat it as a form of standard work for leaders: not optional housekeeping, but a core practice that keeps the improvement portfolio pointed in the right direction.
What this looks like in practice
Consider a team that wants to build a more advanced project template for a specific workflow. The idea has been on the backlog for months. It's technically feasible, and the person championing it is enthusiastic.
Before greenlighting the work, a leader pauses to ask: what's the problem this solves? Is the current template causing errors? Are users unable to complete the workflow? Or is this a case of replacing something functional with something fancier?
On investigation, the current template works. It's not perfect, but it handles 90% of use cases. The advanced version would take significant time to build, and the team's actual target condition has shifted toward simplicity -- making tools easier to use, not more elaborate.
The leader closes the backlog item. Not because it was a bad idea, but because it was solving a problem that doesn't exist in the current context. The team's capacity goes to work that actually moves them toward their goal.
That kind of disciplined pruning -- evaluating proposed improvements against real problems and current direction -- is what separates organizations that improve steadily from those that stay busy without getting better.
The uncomfortable truth
Asking "are we solving the right problem?" will sometimes reduce the number of improvements that get logged. That's not a bug. An improvement program that generates 500 ideas, 400 of which are solutions without problems, is not actually healthier than one that generates 200 ideas, all of which address documented gaps. Volume is a means, not an end. What matters is whether the work you're doing is closing the distance between where you are and where you need to be.
The most effective improvement cultures make problem identification the first skill they teach and the first field they require. Everything else -- the methodology selection, the root cause analysis, the countermeasure testing -- follows from getting this step right.
Start there. The rest gets easier.
When improvement ideas, problem statements, and strategic priorities all live in the same system, it's straightforward to see which projects are aligned and which have drifted. That's what KaiNexus was built to do -- connect daily improvement work to the problems and goals that actually matter, so nothing gets lost and nothing gets misdirected. See KaiNexus in action ->