
How to Roll Out Continuous Improvement Software Without Losing Momentum

Posted by Matt Banna

Apr 14, 2026 6:00:02 AM


Most CI software implementations don't fail because the technology is wrong. They fail because the rollout is treated as an IT project instead of a change management effort. The platform gets configured, training sessions get scheduled, and then leadership waits for adoption to happen on its own. It doesn't.

The organizations that get rollouts right treat them the way they'd treat any improvement initiative: with a clear problem statement, a phased approach, measurable targets, and leadership behaviors that reinforce the change at every level. They also accept that the rollout itself is a PDSA cycle -- you're testing your configuration and your adoption strategy against reality, learning, and adjusting.

What follows isn't a product manual. It's a playbook distilled from organizations that have rolled out KaiNexus successfully across thousands of employees, multiple facilities, and in some cases multiple countries. Their industries range from healthcare to manufacturing to construction to food and beverage. Their approaches differ in the details but share a common structure that any organization can learn from.

Build the internal case before you configure anything

The rollout starts before you touch the software. It starts with the political and organizational work of making sure the right people understand why this matters and what it replaces.

Three customer stories illustrate three different paths to the same destination:

At Tirlan, an Irish food and ingredients cooperative, the CI team went through a rigorous three-step executive approval process. First, they articulated the strategic value -- specifically how a centralized platform would eliminate the duplication of effort that plagued their Excel-based system. Second, they proved to their Architecture Review Board that no existing internal software could meet their needs. Third, they built a formal business case with financials: time savings, reduced duplication, and the cost of maintaining their current spreadsheet infrastructure. That rigor earned them genuine executive buy-in, not just a reluctant sign-off.

At Hilti, a $7 billion global construction technology company, a single executive champion made the difference. But the framing mattered as much as the sponsorship. Rather than leading with immediate ROI and KPIs, the champion positioned KaiNexus as a long-term learning platform -- a repository for institutional knowledge and a driver for organizational growth. That reframe shifted the conversation from "can we justify this quarter's spend" to "can we afford not to build this capability."

At Electrolux, an EMS leader cut through the typical business case anxiety with a single sentence: "Don't worry about showing me ROI up front. Just tell me if this can help." That kind of psychological safety at the leadership level -- permission to experiment before proving value -- created space for the team to focus on getting the implementation right rather than rushing to produce metrics that would justify the purchase.

The common thread: every successful rollout had an executive who didn't just approve the budget but actively shaped how the organization understood the change. The champion's job isn't to send a supportive email. It's to connect the platform to problems that leadership already cares about -- visibility gaps, duplicated effort, inability to prove impact -- and to keep showing up after the launch.

Assemble the people who will shape the configuration

The fastest way to build a system nobody uses is to configure it in a conference room without input from the people who will actually interact with it every day.

UMass Memorial Health got this right from the start. Before touching the configuration, they assembled an Idea System Advisory Group -- a cross-functional team representing the full range of people who would eventually use the platform. The group surfaced three principles that guided every configuration decision: keep it simple, keep it familiar, and allow teams to customize.

Those three principles sound obvious. They're not. The natural instinct during configuration is to build for complexity -- to create elaborate workflows, add approval gates, and design for every possible edge case. The Advisory Group pushed back on that instinct because they understood something critical: the first experience a frontline caregiver has with the platform determines whether they'll use it again. If that experience involves navigating a seven-step submission process, they won't.

At CareSource, the CI team worked closely with their KaiNexus customer success manager to design a dedicated Green Belt template. Their previous approach -- a single template for both general CI projects and Green Belt student projects -- created data misattribution, messy reporting, and an inability to show executives the specific impact of the Green Belt program. The fix wasn't a technology change. It was a design conversation about what questions the data needed to answer, conducted between the people who knew the workflow and the people who knew the platform.

The lesson: configuration isn't a technical task. It's a design process that requires input from the people closest to the work. Build the team before you build the system.

Design for every level of the organization

One of the most consistent patterns across successful rollouts is that the platform is configured differently for different roles. The shop floor doesn't need the same view as the C-suite. Frontline leaders have different workflows from frontline workers. Treating everyone the same produces a system optimized for no one.

Trinity Industries made this explicit. Vic Minhas, their Sr. Director of Continuous Improvement, described three organizational tiers and how each uses KaiNexus differently:

Shop floor workers see their ideas on a kanban board -- simple, visual, trackable. They can watch their idea move through stages from submission to implementation. The barrier to entry is low, and the feedback loop is visible.

Frontline leaders work primarily on A3 problem-solving projects, using the mobile app. Their projects live on a separate kanban board designed for the kind of work they do -- more structured, longer cycle time, visible to executives as a project portfolio.

Executives had been using an X-Matrix for strategy deployment but found the format overwhelming. Trinity worked with KaiNexus to create a Strategy Waterfall view that presented the same strategic alignment data in a format executives could actually scan and act on.

The result of this tiered approach: Trinity went from 4 ideas in 12 months on a physical idea board to 653 ideas in their first 6 months with KaiNexus. The platform didn't create the ideas. The role-specific design removed the friction that had been suppressing them.

Electrolux followed a similar three-part logic. Frontline workers submit ideas from any device. Leadership gets a "single pane of glass" view of improvement health across the organization. And the strategy deployment layer links individual projects to the organization's most critical strategic goals -- making sure everyone is working on things that matter, not just staying busy.

Phase the rollout -- pilot, learn, expand

No organization in this set launched to everyone on day one. Every successful rollout was phased, and the phasing wasn't just about logistics. It was about learning.

UMass Memorial Health rolled out KaiNexus -- internally branded as "Innovation Station," a name chosen by employee vote -- to 16,000 caregivers over six months. The phased approach gave their 15 CI coaches time to support each wave, troubleshoot configuration issues in real conditions, and build internal success stories that made the next wave easier to onboard.

Hilti took a similar approach with a twist: their pilot groups were voluntary. Instead of assigning teams to go first, they invited enthusiastic leaders to participate. This meant the first users were already motivated, which produced better early data, more authentic testimonials, and a train-the-trainer pipeline. Pilot participants received extra training specifically so they could evangelize and train within their own teams.

Oceania Dairy, a New Zealand dairy manufacturer operating 17 time zones away from KaiNexus headquarters, came to the partnership with a plan and a vision already in place. They won the International Customer of the Year Nexie in 2019 and the Honorary KaiNexian award in 2021 -- the second for going "above and beyond" in embedding KaiNexus into their culture, including hosting their own internal Nexie awards. That level of cultural integration doesn't happen with a big-bang launch. It happens through deliberate, phased adoption that gives the culture time to absorb the change.

The phased approach also protects you from the most common rollout failure: launching to everyone before you've learned what doesn't work. Every pilot surfaces surprises -- a workflow that makes sense on paper but confuses users in practice, a notification setting that floods inboxes, an approval gate that creates a bottleneck. Better to find those in a pilot of 200 than a launch to 16,000.

Define what adoption means and measure it

If you don't define adoption, you'll measure activity and mistake it for engagement. Logins aren't adoption. Submissions aren't adoption. Adoption is people using the system as part of how they do their work -- not as an extra task layered on top of it.

UMass Memorial Health's approach to adoption measurement evolved deliberately across three years, and each shift taught them something:

In year one, they measured participation rate -- what percentage of caregivers were actively contributing to the system. The goal was 50%, up from a 20% baseline.

In year two, they pushed the participation target to 75%.

In year three, they made a counterintuitive move. Instead of pushing for more ideas per person, they shifted the metric to team-level activity: each team should submit and complete at least one idea per month, targeting 12 completed ideas per team per year. You might expect that asking for fewer ideas would produce fewer ideas. The opposite happened. Active teams grew from 261 to 380. The metric shift forced managers to engage their teams rather than relying on a few prolific individual contributors.

That evolution is worth studying. The first metric (participation rate) answered "are people using this?" The second metric (team-level activity) answered "is improvement embedded in how teams work?" The second question is harder and more important.
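The two metric definitions above can be made concrete with a short sketch. This is a minimal illustration, not the KaiNexus API -- the record fields, team names, and headcount below are hypothetical, and a real program would pull this data from platform exports:

```python
from collections import defaultdict

# Hypothetical idea records -- field names are illustrative, not KaiNexus's
# data model. Each record: (submitter, team, month completed or None).
ideas = [
    ("ana", "ICU", "2025-01"),
    ("ben", "ICU", "2025-02"),
    ("ana", "Lab", None),        # submitted but not yet completed
    ("cho", "Lab", "2025-01"),
]
headcount = 10  # total eligible employees (assumed)

# Metric 1: participation rate -- answers "are people using this?"
participants = {submitter for submitter, _, _ in ideas}
participation_rate = len(participants) / headcount

# Metric 2: team-level activity -- answers "is improvement embedded in
# how teams work?" A team is active in a month if it completed >= 1 idea;
# the UMass target was one completed idea per team per month, 12 per year.
active_months = defaultdict(set)
for _, team, month in ideas:
    if month is not None:
        active_months[team].add(month)

print(f"participation rate: {participation_rate:.0%}")
for team, months in sorted(active_months.items()):
    print(f"{team}: active in {len(months)} month(s)")
```

The point of the sketch is the shift in the unit of analysis: the first metric aggregates over individuals, the second over teams, which is why the second forces managers to engage rather than rely on a few prolific contributors.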

Hilti's first-year engagement was strong enough to win the Outstanding Achievement Nexie Award -- an award typically given to long-tenured customers. The judges specifically cited their "excellent change management processes" as the differentiator. That recognition came not from usage metrics alone but from the visible evidence that adoption was real: employees sharing best practices, contributing to KaiNexus's own process improvement, and generating exceptional engagement within their first year.

(For a deeper look at how engagement metrics connect to impact measurement as programs mature, see How to Measure the Impact of Your Continuous Improvement Program.)

Build the coaching and support layer

Technology without coaching produces a well-configured system that nobody uses well. Every successful rollout in this set invested heavily in human support alongside the platform.

UMass designated 15 CI coaches across its system. But the coaches didn't just answer questions about how to use the software. They ran structured brainstorming sessions -- 60 to 90 minutes in which teams generated and entered ideas with coaching support. One division generated 36 ideas in a single brainstorming session. One department that participated in a session went on to implement 126 ideas in a single year. The sessions worked because they removed the two biggest barriers to first-time use: not knowing what to submit and not knowing how to submit it.

Hilti invested in a multi-channel communication strategy that went well beyond the usual launch email. They published updates on their internal intranet explaining the "why" behind KaiNexus, featuring executive videos, employee testimonials, and photos of the Hilti and KaiNexus teams during onsite visits. They repurposed a recording of their Weekly Lean Showtime Meeting to explain the vision to people who preferred video over text. Their approach recognized something most rollouts miss: different people absorb information through different channels, and a single training session doesn't reach everyone.

UMass also made the platform stickier by centralizing processes people already had to do. Their Innovation Fund grant program -- which provides funding for improvement ideas -- moved into KaiNexus from a confusing SharePoint site. Green Belt and Black Belt training templates, DMAIC templates, and union team PDSA templates all moved into the platform. Each integration gave people another reason to open KaiNexus as part of their existing workflow rather than treating it as a separate destination.

The principle: every process you move into the platform is one more reason people have to use it. Don't ask people to go somewhere new. Put the work where the people already are.

Sustain through recognition and evolving goals

The launch is not the finish line. Most CI software rollouts that lose momentum do so between months 6 and 18 -- after the initial excitement fades but before the system has become habitual.

Recognition is the most consistent sustainment lever across these stories. UMass built recognition into the platform itself with an Honor Roll feature -- CI coaches nominate outstanding ideas each month, and those ideas get highlighted in an intranet news article. Their Innovator of the Year program, now in its tenth year, awards $250,000 to fund top projects and ideas nominated by team leaders. When UMass hit 100,000 ideas, CEO Dr. Eric Dickson filmed a personal congratulatory video for the team that submitted the milestone idea.

These aren't perks. They're signals. When a CEO takes time to film a video about an improvement idea, every manager in the organization notices what leadership values. When Honor Roll ideas get published on the intranet, people see that their colleagues' contributions are visible and valued. Recognition closes the feedback loop between contributing and being acknowledged -- and that loop is what turns first-time users into habitual improvers.

Oceania Dairy took recognition a step further by hosting their own internal Nexie awards -- adapting the KaiNexus community's recognition model for their own culture. That kind of internalization signals that improvement isn't a vendor relationship. It's part of how the organization identifies itself.

Evolving goals matter too. UMass didn't set year-one goals and leave them static. They escalated from participation rate to team-level activity to quality and completion. Each shift raised the bar in a way that matched where the organization was in its maturity. Static goals produce static engagement. Goals that evolve with the program signal that leadership is paying attention and that the standard keeps rising.

What the pattern adds up to

Across these organizations -- healthcare, manufacturing, construction, food and beverage, dairy, managed care -- the rollout playbook is remarkably consistent:

- Build the case around problems leadership already feels.
- Get an executive champion who will stay visible.
- Assemble a design team that includes the people who will use the system.
- Configure for simplicity first and add complexity only when the data demands it.
- Design different experiences for different organizational levels.
- Phase the rollout and use each phase to learn.
- Define adoption metrics that evolve as the program matures.
- Invest in coaching and support that goes beyond training.
- Centralize existing processes in the platform to make it the default, not an extra.
- Sustain through recognition, visible leadership attention, and goals that keep rising.

None of this is complicated. All of it requires discipline. The organizations that get it right treat the rollout itself as an improvement project -- which, if you think about it, is exactly the right instinct for a continuous improvement team.


Frequently Asked Questions

How long does a typical KaiNexus rollout take?

It depends on the organization's size and complexity. UMass Memorial Health rolled out to 16,000 caregivers in six months using a phased approach. Hilti deployed across roughly 2,000 employees at multiple global locations within their first year. Most organizations complete initial configuration and pilot deployment within 60-90 days, with full enterprise rollout following over subsequent months.

Do we need executive sponsorship for the rollout to succeed?

Every successful rollout in this set had visible executive engagement -- not just budget approval, but active participation. At UMass, the CEO filmed videos and led monthly manager meetings about KaiNexus goals. At Hilti, an executive champion reframed the platform as a long-term learning investment. The common pattern: executives who connect the platform to strategic problems the organization already recognizes.

Should we launch to the whole organization at once?

No. Every successful rollout was phased. Phasing lets you test configuration in real conditions, build internal success stories, develop a coaching pipeline, and fix problems before they affect the entire organization. Hilti used voluntary pilot groups. UMass rolled out in waves over six months with 15 coaches supporting each phase.

How do we get middle managers to buy in?

Middle managers are the most critical and most overlooked audience in a rollout. Hilti addressed this by sharing a detailed rollout plan with managers that covered who the CI team was, how the platform fit their existing culture, a clear timeline, and an invitation to join voluntary pilot groups. UMass published monthly participation scorecards by team, which made manager engagement (or lack of it) visible to leadership.

What's the best way to drive frontline adoption?

Remove friction and close the feedback loop. Make submission fast (mobile-friendly, minimal required fields). Respond to every idea quickly. Make the status of ideas visible so people can track their contributions. UMass's brainstorming sessions -- where coaches helped teams generate and enter ideas in real time -- were one of the most effective frontline adoption tactics, producing 36 ideas in a single 60-minute session for one division.

How do we sustain engagement after the initial launch?

Recognition, evolving goals, and process centralization. UMass evolved their metrics annually (participation rate, then team-level activity). They built recognition into the platform (Honor Roll, Innovator of the Year with $250K in awards). And they moved existing processes (grant applications, training templates, DMAIC workflows) into KaiNexus so it became the default place where improvement work happens, not an extra step.

Topics: Change Management, Employee Engagement, Continuous Improvement Software, Software Adoption, Case Studies, KaiNexus Implementation
