Adapting to the Moment

January 26, 2026

Something has been hanging in the air lately.

I’d had an on-call shift this weekend, so at first I thought maybe it was just that. But weekend shifts are normal, and on my commute in today I still felt weighed down.

Trying to figure it out as I walked through the emptier-than-usual (or so it seemed) streets of downtown San Francisco, I realized what it was: the recent events in Minneapolis, and the broader implications those events entail.

That situation is unfathomable and scary and angering, among a host of other emotions and thoughts. This post isn’t about that, though.

As I sat at my desk, collecting my thoughts before our team’s “plan the week” meeting, I finally realized what this moment was stirring up for me: it’s reminding me of the first weeks of COVID lockdowns.

Back then, I was on Netflix’s Critical Operations and Reliability Engineering (CORE) team. I worked with an amazing set of talented individuals and one of the first things we did when the lockdowns were announced was get together and check in on each other.

“Are you OK?”

“Are ‘your people,’ your kids, family, chosen family, close friends... are they OK?”

“Are there any operational or logistical things that we need to ensure are covered, or that we can do for each other that might help?”

High-performing incident response teams think about these sorts of questions because it helps in curating the common ground they operate within. And, at the end of the day, for a team that “holds the pager” for an entire company, it makes a lot of sense to talk about these types of logistics if they’re going to effectively play that role in... complex times.

Back in 2020, that discussion turned, as it invariably does, toward the nitty-gritty of how we all felt in the moment... what our states of mind were. At Netflix, this discussion allowed us to support each other in getting the job done, precisely because we had taken the time to understand the ways in which others needed support, and what everybody was capable of bringing to the table to help.

It also created space for each of us to acknowledge our realities and constraints. This was, of course, only to the extent that each individual wanted to share those (often vulnerable) thoughts. This helped build psychological safety through demonstrated behavior, rather than the typical performative declarations of “there shalt be psychological safety in this meeting.” So often, it started with just a single person sharing a tough thought that it turned out many others were feeling and struggling with.

These sorts of discussions embody the deployment of the adaptive capacity Resilience Engineering talks so much about. With new (in this case, external) constraints placed on the actors within a system, it’s critical not to fall into the trap of blindly assuming that sociotechnical system will respond to a perturbation (an incident, say) the same way it has historically.

But of course, in incidents, the incident commander—and the bench backing them up—aren’t the only folks involved. Back at Netflix, part of this deployment of team adaptive capacity involved a necessary conversation about organizational adaptive capacity: how will we react to change?

How will we coordinate those responses?

What load are we likely to shed, sociotechnically, either explicitly or inadvertently? And what do we do about that?

What signals of “breakdown” do we need to pay attention to?

What incident processes and “ergonomics” make sense to keep right now? (Or change?)

Where are the organizational “emergency cisterns” we might need to call upon? (And have they been refilled lately?)

And what signals of a return to a more normalized operating state would we look for to start reevaluating the above?

With the fraying of 250 years of American democracy unfolding before our eyes, every incident response team needs to be asking these questions of themselves, their systems, and their organizations. And because incident response is a team sport, it’s not enough to ponder them alone.

These are also questions anyone leading a team with an on-call rotation (which, let’s be honest, describes most software engineering teams these days) should be discussing.

Because incidents have a funny way of uncovering the nuanced (but often significant!) impacts of the broader situation... and as someone who spends a lot of time on post-incident analysis, I guarantee you: you can have the conversation in an incident review... or you can have it now.

And one of those conversations costs more...