Here's something that happens every day in your organization: someone solves a problem. A workaround for a broken process. A fix for a recurring error. A clever adjustment that keeps things moving when the system isn't cooperating.
They "solve" it in the moment, and the moment passes, and nobody writes anything down, and three weeks later someone else hits the same problem and solves it again from scratch. Or doesn't solve it, and just absorbs the friction as part of the job.
One healthcare executive described the pattern:
"I guarantee you people solve problems in the short term, all day long, whether it's on the patient side or in the lab or in radiology. They don't just see it and let it go. They solve it short term, but they really don't pick up and solve it so that it never comes back again."
That's the gap. Not between organizations that improve and organizations that don't. Almost everyone improves, or at least tries. The gap is between organizations that fix things temporarily and organizations that fix things permanently. Between putting the fire out and actually removing the fire hazard so it can't start again.
Many CI programs live on the wrong side of that gap. They produce impressive activity metrics: completed projects, closed items, idea counts. But activity isn't the same as durability. The question nobody wants to ask in the quarterly review is: of all the improvements we implemented this year, how many are still holding?
The sustainment problem nobody talks about
CI conferences are full of talks about idea generation. Engagement strategies. How to get more people participating. How to get leadership buy-in. How to create a culture of improvement.
Almost nobody talks about what happens at month six. Or month twelve. Or the moment the improvement champion changes roles and the countermeasure quietly stops being followed because the person who understood why it mattered is gone.
This is the unsexy part of continuous improvement. Generating ideas is energizing. Launching projects feels like progress. Implementing a countermeasure has a satisfying sense of completion. But checking whether that countermeasure is still working four months later? That's the organizational equivalent of flossing. Everyone knows they should. Very few people actually do it consistently. And the consequences don't show up until much later, when the cavity is already expensive.
One project leader at a health system described building an extra phase into their workflow specifically for this:
"We may be live, but there's a little bit of outstanding work yet that we don't want to fall off of our radar."
They created a gateway between "implemented" and "truly done" because they'd learned that the gap between those two states is where improvements quietly fail.
That gap exists in every organization. The question is whether you can see it.
Why fixes don't hold
The reasons improvements fail to stick aren't mysterious. They're structural.
The most common: nobody is tracking whether the countermeasure is still working. The project gets marked complete. The team moves on to the next problem. The improvement exists as a line item in a spreadsheet somewhere, status "done," with no mechanism to verify that "done" is still true three months later. The spreadsheet is great at recording the moment of completion. It's useless at recording what happened after.
Related: the knowledge behind the improvement leaves when people leave. The reasoning, the failed approaches, the specific conditions that made the countermeasure work, all of that lives in someone's head or in a project folder that nobody will ever open again. When that person transfers to another department or leaves the organization, the institutional memory goes with them. The countermeasure remains, but the understanding of why it works (and what to do if it stops working) is gone.
And then there's the confidence problem. One CI leader described the state before his organization had systematic tracking:
"There wasn't a strong sense of feeling when we said we put in a countermeasure that it actually took care of the issue and it would not come back."
That's a remarkable admission. The people responsible for improvement work didn't trust their own countermeasures. Not because the countermeasures were bad, but because they had no way to verify whether they were holding over time.
When you can't verify, you can't learn. You can't distinguish between countermeasures that worked and countermeasures that seemed to work for a while and then quietly eroded. Everything looks the same in the rearview mirror: project complete, status green, move on. The organization accumulates a record of activity that may or may not represent actual lasting improvement, and nobody has the data to tell the difference.
What changes when you can see sustainment
The same leader who described that confidence gap described what happened after his organization started tracking countermeasure effectiveness in KaiNexus:
"Now I'm a hundred percent confident in the effectiveness of the countermeasures, and they're measured."
The shift from "we think it worked" to "we know it's working and here's the data that shows it lasted" changes how an organization relates to its own improvement work. Leaders stop treating completed projects as finished business and start treating them as ongoing commitments. The definition of success moves from "we implemented something" to "the problem hasn't come back."
That shift also changes what the organization learns. When you can see which countermeasures held and which ones eroded, you start developing real institutional knowledge about what makes improvements stick in your specific context. Which types of countermeasures tend to be durable? Which ones need reinforcement? Which departments are good at sustainment and which ones struggle? Those patterns are invisible when all you track is completion.
One executive described how that visibility creates accountability:
"As soon as they have 5, 10, 15 projects in a facility, and then we take them their dashboard, it's like, wow. Now tell me why that one right there hasn't done anything for six months."
That question, aimed at a stalled project, is equally powerful aimed at a completed project that's backsliding. The dashboard doesn't just show what's in progress. It shows what should still be working, and whether it is.
The math that makes this urgent
There's a reason this matters beyond professional pride in doing good improvement work. The financial impact is significant.
One organization tracked roughly $8 million in savings in a single year. Another reported driving approximately $11 million in costs out of their system. Those figures only mean something if the improvements behind them are still holding. An improvement that saves $200,000 in year one and erodes by year two didn't save $200,000. It deferred a cost. The savings are real only as long as the countermeasure is real, and most organizations have no way of knowing which category their improvements fall into.
This is the part that should concern you if your name is attached to the CI program's results. When you report annualized savings to your executive team, you're making an implicit promise that those savings will persist. If you don't have a way to verify sustainment, you're making that promise on faith. That's a career risk dressed up as a metrics slide.
KaiNexus tracks impact over time specifically because a completed project isn't the finish line. The platform lets organizations monitor whether improvements are holding, flag when results start to drift, and connect countermeasure data back to the original problem so that institutional knowledge survives turnover, reorganizations, and the general entropy that erodes good work when nobody is watching.
Permanent solutions
The executive who described his organization's pattern of short-term fixes had a phrase for what he was trying to build instead:
"We are putting in permanent solutions to perpetual problems."
That's a high bar. Permanent is a strong word, and in practice, most countermeasures need monitoring and occasional reinforcement. But the aspiration is exactly right. The goal of improvement work isn't to generate projects. It's to make problems go away and stay away: to improve performance. Everything else, the idea counts, the engagement rates, the project completion metrics, is a means to that end, not the end itself.
The organizations that take sustainment seriously don't just produce better results. They produce compounding results, because every improvement that holds becomes the foundation for the next one. An organization where fixes erode is running on a treadmill. An organization where fixes stick is climbing a staircase. The effort might look similar from the outside. The altitude is completely different.
Most CI programs are measured by how many problems they solve. The better question is how many they solve once.