A few years ago, I hosted a panel discussion with four colleagues from KaiNexus -- Greg Jacobson, Maggie Millard, Linda Vicaro, and Kym Guilliotti -- as a launch event for my book The Mistakes That Make Us: Cultivating a Culture of Learning and Innovation. The format was unusual for our webinar series: no slides, no outside experts, just a candid conversation about how we actually operate at KaiNexus.
I've thought about that session a lot since then. Not because we said anything particularly groundbreaking -- the ideas themselves are well-established in the continuous improvement and organizational psychology literature. But because the conversation demonstrated something that's harder to stage than it looks: a group of people from the same organization talking honestly about failure, including their own.
Here are the pieces that have stayed with me, specifically around psychological safety and what it takes to build a culture where learning from mistakes is actually possible.
You can't improve what you're afraid to admit
Greg opened with an observation that I've repeated often since: if you break continuous improvement (CI) down to its most basic form, waste is a mistake. A defect is a mistake. The whole enterprise of CI is about finding where things went wrong and asking what better would look like.
That means psychological safety isn't a nice add-on to CI culture. It's a prerequisite. If people are hiding their mistakes -- because experience has taught them that admitting a problem leads to punishment -- then the problems are still there. They're just invisible. You can't study what you can't see. You can't improve what no one will name.
Maggie made this point from a different angle: when you ask people to improve, you're also asking them to make mistakes. Running an experiment means accepting the possibility that it won't work. If the cost of a failed experiment is embarrassment or blame, people stop running experiments. They do what's known to be safe. The improvement stops.
Modeling matters more than policy
One moment from the session stands out. Someone on the team had shared, on a company-wide call, that they'd had a message sitting in their drafts folder for days -- they'd thought it was sent, but it hadn't been. They shared the story not because they were required to, but because they wanted others to be aware of the risk.
Greg's response was immediate: "I've made that exact mistake." And then he described the specific habit he'd built to prevent it -- a weekly recurring task to check his drafts folder.
Two things happened in that exchange that I think are worth naming. The person who shared the mistake was included rather than isolated. Their error became ordinary, something a CEO also experiences and has learned from. And everyone else listening got a data point about what this organization actually does when someone admits a mistake -- which is exactly the data point you need before you'll be willing to share your own.
This is what Greg does consistently, and what Maggie identified when I asked her what Greg does that makes KaiNexus work the way it does: he admits his mistakes openly. He shares his failures. He responds to others' mistakes with empathy. Not as a calculated leadership move but as a genuine reflection of how he thinks about learning.
Kym made the observation that it wasn't just one leader doing this. It was a critical mass. When multiple people at different levels of the organization are modeling the same behavior, it stops reading as an individual personality trait and starts reading as organizational culture. That's when it begins to propagate on its own.
Thanking people isn't enough -- but it's also not optional
Maggie described something that stuck with me. When she introduces a new process and her team pushes back on it, she doesn't just not get angry. She thanks them. Specifically, publicly, as an explicit acknowledgment that they took a risk by pushing back.
The distinction matters. A lot of organizations will tell people "we want you to speak up" and then respond to speaking up with neutral silence or a polite "thanks for sharing." That's not enough. It doesn't reinforce the behavior. People are making a social bet every time they decide whether to share an idea or name a problem, and the feedback they get on that bet shapes their future behavior. Neutral responses don't tip the scales.
The thing Maggie said that I've thought about most often since: you have to pull people into a psychologically safe culture. You can't push them in. Declaring that a space is safe, putting up signs, issuing policies -- none of that does the work. People come from very different backgrounds. Some of them have learned through direct experience that speaking up leads to punishment, being overlooked, or worse. That learning doesn't disappear when they join a new organization. It takes repeated personal experiences of "I said something risky and it was fine -- actually, it was welcomed" before the internal calculus shifts.
That process takes time. You can accelerate it by modeling consistently and rewarding explicitly, but you can't skip it.
Accountability and learning are not opposites
A question came up in the live session that I thought was worth sitting with: does psychological safety mean there's no accountability for mistakes?
Maggie's answer was the clearest I've heard. The issue isn't whether someone made a mistake. It's how they respond to it. Do they hide it, cover it up, stay silent? Or do they surface it, engage with it honestly, and participate in figuring out what went wrong and how to prevent it next time? That willingness to engage is accountability. It's just accountability focused on learning rather than punishment.
The distinction matters practically because punitive responses to mistakes don't reduce mistakes. They reduce reported mistakes. The actual mistake rate is probably unchanged, or worse, because the process problems that caused the mistakes stay in place. What changes is the information flow. Problems that used to surface get suppressed. Near-misses stop being shared. The organization starts managing by the visible rather than the actual.
The RaDonda Vaught case -- the nurse who disclosed a medication error and was criminally prosecuted -- is the most visible recent example of what happens when that logic operates at scale. Greg named it in our conversation. When healthcare providers watch a nurse lose her license and face criminal charges after self-disclosing, the lesson they take isn't "admit your mistakes." The lesson is much more dangerous than that.
The onboarding example is the one I keep coming back to
I wrote about this in the book, but the onboarding practice at KaiNexus is one of the most elegant examples of building CI culture structurally that I know of.
Every new employee goes through an onboarding project managed in KaiNexus. Part of that project -- required, not optional -- is to identify at least one opportunity for improvement in the onboarding process itself. The new employee, who has never seen the process before and hasn't yet normalized its quirks, becomes the best possible observer of it.
The result is that the onboarding has gotten meaningfully better with each person who's gone through it. One specific example from the conversation: someone who hadn't used Gmail before discovered that the learning curve was steeper than current employees had realized. Their improvement was a document explaining Gmail and linking to Google's own resources. That document now sits in the onboarding project for everyone who follows.
But what I think matters most about this practice isn't the improvement itself. It's what the practice teaches. Before a new employee has any real sense of the culture they've joined, they're being asked to point out a problem with something their new employer created. That's a test. And the way the organization responds to that test -- whether the improvement gets acknowledged and acted on, or disappears -- teaches the new employee something real about whether it's safe to speak up here.
Maggie put it this way: onboarding is the first time we ask someone to take the leap of faith with us. The response to that first leap determines whether they'll take the second one.
A diagnostic question worth trying
Greg offered a practical test that I've used since. If you want to know whether your organization has a culture that treats mistakes as learning opportunities, ask a leader whether they think mistakes are happening.
If the answer is no, you have one of two problems: either the leader genuinely isn't paying attention, or the culture has become so effective at suppressing the reporting of mistakes that they've turned invisible -- which is the much more serious problem.
If the answer is yes, the follow-up question matters more: what happens when they come to light? Investigation and learning points to a functional culture. Assignment of blame points to one that's producing silence.
The question is low-stakes, simple, and reveals more than most formal assessments.
The session was also, somehow, the first time Maggie had appeared in a KaiNexus webinar despite having been at the company since 2012. She ended the session without saying anything embarrassing -- by her own metric, a success.
Though I'll note: admitting she was the one who clicked the phishing link, with the whole company watching, is not exactly nothing.

