Every Thursday, I sit down with teammates across DigitalBridge to find out what's actually happening — not the sanitized version, but the real work: the tricky parts, the small wins, the things they're still turning over in their heads. This week: Diana, Nina, and Viktor.
Sloane
Diana, let's start with you. I know ops work can feel invisible from the outside — so what's been keeping you busy lately?
Diana
Operational coordination and risk reduction, mostly. The throughline this week has been change visibility — making sure every update the team ships is well-documented and has a clear rollback plan before it touches anything in production. Viktor, Rhea, and I have been tightening up that process.
Sloane
Why is that such a focus right now?
Diana
Because "it worked in testing" is not a change management strategy. When things go sideways — and eventually they always do — you want a paper trail, not a post-mortem where everyone's reconstructing what happened from memory. The documentation isn't bureaucracy, it's insurance.
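The "rollback plan before it ships" rule Diana describes can be enforced mechanically. A minimal sketch, assuming a simple change-log entry (the field names and the `is_deployable` gate are illustrative, not DigitalBridge's actual process):

```python
from dataclasses import dataclass

@dataclass
class ChangeRecord:
    """One change-log entry: what shipped, who shipped it, how to undo it."""
    summary: str
    author: str
    rollback_plan: str  # concrete undo steps, not "revert if needed"

def is_deployable(record: ChangeRecord) -> bool:
    """Gate a change on having a real rollback plan, not a placeholder."""
    plan = record.rollback_plan.strip().lower()
    return bool(plan) and plan not in {"n/a", "tbd", "none"}
```

The point of the gate is that the check runs before deploy, so the paper trail exists before anything goes sideways rather than being reconstructed afterward.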
Sloane
Fair. You also mentioned telemetry work?
Diana
Yes — daily telemetry reviews are part of my routine. I'm looking at token usage, system performance patterns, anything that might signal an upcoming issue before it becomes an incident. I think of it as detective work, honestly. You're looking for anomalies: patterns that shouldn't be there, usage spikes that suggest something's under strain. Spotting those early means we get to intervene instead of react.
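The kind of anomaly spotting Diana describes is often a trailing-window comparison: flag any data point that sits far above the recent baseline. A simplified sketch (the window size and threshold are illustrative, not her actual tooling):

```python
from statistics import mean, stdev

def flag_spikes(samples, window=7, threshold=3.0):
    """Return indices of points more than `threshold` standard deviations
    above the mean of the preceding `window` samples."""
    flagged = []
    for i in range(window, len(samples)):
        baseline = samples[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and (samples[i] - mu) / sigma > threshold:
            flagged.append(i)
    return flagged
```

Run daily over token usage or latency counts, something like this surfaces the "under strain" spikes early enough to intervene instead of react.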
Sloane
What's next on your list?
Diana
Two things. Automating more of the reporting so I'm not manually compiling the same data every day, and pushing to improve our documentation standards across the board. Runbooks and change logs only work if they're current. Right now they're... mostly current. I want actually current.
Sloane
Nina — I heard you've been deep in API work. What's the story?
Nina
A new endpoint for complex data workflows in one of our backend services. The headline challenge was handling large payloads without sacrificing validation or error handling. Those things tend to fight each other — you can get throughput or strictness, and we needed both.
Sloane
How did you get there?
Nina
A lot of iteration with Adrian on the request and response contracts. We went back and forth on how flexible to make the schema versus how tightly to constrain it. Flexible schemas are easier to evolve; tight schemas are easier to reason about. We landed somewhere that satisfied both of us, which I'll take as a good sign.
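One common shape for the compromise Nina describes is a tightly-validated core plus an explicit extensions escape hatch: the core is easy to reason about, and the escape hatch can evolve without a schema bump. A sketch under that assumption (field names are illustrative, not the actual contract she and Adrian landed on):

```python
# Tightly-constrained core fields; anything else must live under "extensions".
CORE_FIELDS = {"workflow_id", "steps"}

def validate_request(payload: dict) -> list:
    """Return validation errors; an empty list means the payload passes."""
    errors = [f"missing required field: {name}"
              for name in sorted(CORE_FIELDS) if name not in payload]
    unknown = sorted(set(payload) - CORE_FIELDS - {"extensions"})
    errors += [f"unknown top-level field: {name}" for name in unknown]
    return errors
```

Rejecting unknown top-level fields keeps the contract strict where it matters, while the `extensions` key leaves room to evolve.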
Sloane
What was the hardest technical bit?
Nina
Transactional integrity and rollback strategies. Our existing data models have opinions about things, and some of those opinions don't play nicely with the kind of multi-step workflow we were building. We had to get creative about how to make operations atomic when the underlying primitives weren't quite designed for it.
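When the underlying primitives can't make a multi-step workflow atomic, one standard workaround is saga-style compensation: each step carries an undo action, and a failure unwinds the completed steps in reverse. A minimal sketch of that pattern (not necessarily what Nina's team shipped):

```python
def run_workflow(steps):
    """Run (action, compensate) pairs in order; if any action fails,
    run the compensations for completed steps in reverse, then re-raise."""
    completed = []
    try:
        for action, compensate in steps:
            action()
            completed.append(compensate)
    except Exception:
        for compensate in reversed(completed):
            compensate()
        raise
```

The trade-off versus a real transaction is that compensations must themselves be safe to run, which is usually where the creativity Nina mentions goes.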
Sloane
And now that the endpoint is in — what are you thinking about?
Nina
Observability. We have basic logging, which is fine for "did it fail" but not great for "why is it slow" or "which path is this taking." I want structured metrics and distributed tracing wired in. It's one of those investments that doesn't feel urgent until the day it suddenly, desperately is.
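The gap between "did it fail" and "why is it slow" is exactly what structured, per-call records close. A toy sketch of the idea using only the standard library (real tracing would go through a library like OpenTelemetry; names here are illustrative):

```python
import json
import time
import uuid
from functools import wraps

def traced(span_name):
    """Emit one structured JSON record per call: span name, trace id,
    duration, and outcome — queryable, unlike free-text logs."""
    def wrap(fn):
        @wraps(fn)
        def inner(*args, **kwargs):
            record = {"span": span_name, "trace_id": uuid.uuid4().hex}
            start = time.perf_counter()
            try:
                result = fn(*args, **kwargs)
                record["outcome"] = "ok"
                return result
            except Exception as exc:
                record["outcome"] = type(exc).__name__
                raise
            finally:
                record["duration_ms"] = round((time.perf_counter() - start) * 1000, 2)
                print(json.dumps(record))
        return inner
    return wrap
```

Because every record carries a trace id and a duration, "which path is this taking" becomes a query instead of an archaeology project.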
Sloane
Viktor, you're usually operating at the architecture level. What's been on your plate?
Viktor
Reviewing and refining architecture for our internal systems — making sure everything is aligned with best practices and security standards. The interesting tension right now is between security rigor and development flexibility. You can lock things down so tightly that nobody can experiment, or you can leave things open enough that you introduce risk. Finding the right calibration for different environments is genuinely non-trivial.
Sloane
How do you navigate that?
Viktor
Documentation, mostly. If the patterns are written down and understood by the team, people make better local decisions without needing to escalate every choice. So I've been doing a lot of work on making our infrastructure patterns accessible and repeatable — not just "here's how it's built" but "here's why, and here's how you'd do something similar."
Sloane
You mentioned new tooling integration?
Viktor
Yes — anytime you bring a new framework or component into an existing system, you're taking on risk. The new thing needs to meet your performance requirements, your security requirements, and ideally not surprise you at 2am. That requires careful planning and staged rollout. We've been deliberate about it.
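A staged rollout like the one Viktor describes usually needs stable bucketing, so a given user sees the same behavior on every request while the exposed percentage ramps up. A hedged sketch of the deterministic-hashing approach (parameters are illustrative):

```python
from hashlib import sha256

def in_rollout(user_id: str, component: str, percent: int) -> bool:
    """Deterministically assign a user to a 0-99 bucket per component;
    the rollout covers buckets below `percent`."""
    digest = sha256(f"{component}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % 100 < percent
```

Ramping `percent` from 5 to 50 to 100 gives the staged, no-surprises-at-2am rollout without any per-user state to store.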
Sloane
What's pulling your attention forward?
Viktor
Deployment and monitoring automation. There's real efficiency to be gained there, and more importantly, reducing the number of manual steps in critical processes reduces the risk of human error. I'm also keeping an eye on new technologies that could improve resilience and scalability — always evaluating whether the trade-offs make sense for us at this stage.
Diana's keeping the lights on and building systems to keep them on better. Nina's shipping backend capability while already thinking two steps ahead to observability. Viktor's laying architectural groundwork that makes everyone else's work safer and more coherent. Solid week at DigitalBridge.
— Sloane, Content & Marketing Strategist