I had the privilege of moderating a CHIME Foundation Webinar with three remarkable leaders — Dr. Zafar Chaudry of Seattle Children’s Hospital, Melanie Thomas of Nashville General Hospital, and Doug Burkott of Baptist Health Jacksonville — discussing a deceptively simple question: how do health systems define and measure ROI in digital health?
The conversation was honest, nuanced, and packed with real-world perspective—the kind you only get when leaders speak from experience, not theory. As I reflected on the discussion, several themes stood out to me as someone who’s spent years trying to align technology with clinical and operational value.
These are the insights I found most useful, particularly for health system leaders trying to bridge the gap between strategy and care delivery.
Some quotes have been slightly edited for clarity and brevity.
1. Who’s responsible for value realization?
This question led to one of the clearest insights of the day. Dr. Zafar Chaudry, Chief Digital Officer & Chief AI and Information Officer of Seattle Children’s, put it simply: “It’s operationally led, IT-enabled.” That stuck with me. It’s a reminder that while technology provides the tools and data, the real definition and delivery of value must come from the clinical and operational leaders who live the outcomes every day.
Melanie Thomas, CIO at Nashville General, helped paint the picture of what that looks like in practice. She described a dual approach—where a performance excellence team tracks organization-wide KPIs, while departments own local metrics like productivity and turnaround times. That hybrid model offers both structure and accountability.
2. Don’t forget about the soft ROI.
Doug Burkott, Sr. Director of IS Business Operations at Baptist Health, offered a grounded perspective on ROI: “Yes, you need a business case, but not all returns show up in a financial ledger.” He emphasized the importance of including both hard ROI (like cost savings from application rationalization) and soft ROI (like improved staff satisfaction or reduced friction in clinical workflows).
Dr. Chaudry expanded on this idea with a critique of surface-level ROI claims. He shared an example many of us can relate to—vendors promising to save nurses 15 minutes per shift. His response? “They’re already working an hour over their shift, and it’s unpaid. So how are you really saving 15 minutes?”
It’s a sharp reminder that not all time savings equate to real financial return, especially when those efficiencies don’t actually change staffing models or resource utilization.
Both Burkott and Dr. Chaudry emphasized the need to look beyond templates and challenge assumptions. If we don’t scrutinize ROI inputs—especially around labor savings—we risk overestimating impact. For healthcare leaders, that means getting comfortable with soft metrics and asking tough questions about how ROI is calculated, experienced, and validated.
3. Clean data is a prerequisite for ROI.
Access to vast amounts of data is supposed to be an asset, but in practice, it often creates new challenges. Dr. Chaudry outlined how data lives in hundreds of siloed systems—EMRs, LIS, RIS, billing platforms, scheduling tools—each with its own formats, definitions, and communication protocols. Integrating all of that into a coherent, usable view for ROI tracking? It’s hard, it’s resource-intensive, and it’s exhausting.
He also reminded us that even when the systems are in place, the data entry can be inconsistent. Clinical teams focus (rightly) on patient care, not perfect documentation. That means typos, missing fields, and inconsistent codes. Add to that the complications of legacy platforms and outdated formats, and it’s easy to see why ROI reporting isn’t always straightforward.
For healthcare leaders, this highlights an essential truth: measuring value requires more than just data—it requires curated, standardized, validated data. That’s where clinical informatics teams, governance models, and investment in data stewardship become just as important as any analytics tool.
4. Use EA and TBM to make smarter portfolio decisions
One of the most actionable insights came from Burkott’s deep dive into application lifecycle management. He stressed that without disciplines like Enterprise Architecture (EA) and Technology Business Management (TBM), health systems can’t meaningfully assess what tools they have, what they cost, and where the overlap is.
EA is about mapping capabilities—what are all the things your organization can do with its existing tech stack? Burkott described the value of visualizing overlaps across hundreds of systems, which helps identify candidates for rationalization. When a department wants to solve a problem, EA helps answer, “Do we already have a tool for that?”
TBM, on the other hand, is the financial counterpart. Burkott called it “a flashlight into the black box of IT spend.” It allows organizations to map costs by application and function, revealing how much each capability actually costs to maintain.
Together, EA and TBM empower leaders to prioritize based not just on clinical need, but on financial sense. And for CMIOs, these tools are invaluable allies in the push to deliver clinical innovation without blowing up the budget.
5. Attribution is messy, especially when ROI isn’t financial
One of the most important reminders came when we discussed outcomes that don’t have dollar signs attached—like staff satisfaction or reduced friction in clinical workflows.
Dr. Chaudry framed it perfectly: “ROI is often cost avoidance, not cost savings. And some benefits, like improving staff experience, are just the cost of doing business.”
That rang true for me. So much of the value in digital health is felt in reduced frustration, better team collaboration, and safer patient experiences. But to make that value visible, we need to measure it deliberately.
Tactically, that might mean running pre- and post-implementation staff satisfaction surveys tied to specific projects, tracking turnover rates or absenteeism in impacted departments, or holding structured focus groups to gather user feedback over time. These methods can help quantify how technology changes affect morale, engagement, and workflow ease.
When done consistently, they give leaders concrete inputs that can be mapped to outcomes—whether it’s reduced training costs, improved retention, or more efficient use of clinical time.
6. Innovation requires guardrails, not just enthusiasm
We all talk about “failing fast,” but Dr. Chaudry made a compelling point: it only works if your organization has the culture and structure to support it.
Too often, early pilots lack clear goals, timeframes, or off-ramps. Without that structure, you end up with “never-ending pilots” that sap energy and budget.
It’s a reminder that innovation needs rigor. Pilots should be scoped tightly, aligned with organizational priorities, and assessed with honesty.
Final Thought: It all starts with the problem you’re solving
More than once during the panel, I found myself reflecting on the importance of understanding the problem you’re trying to solve in the first place. Burkott mentioned a principle called Kydland’s Law: “A problem well defined is a problem half solved.” That really resonated with me. He even added it to Baptist Health’s business case template because it’s that critical.
Too often we rush to add technology before we’ve fully unpacked the underlying need. When that happens, we end up in cycles of rework, analysis churn, and misaligned solutions.
Health systems have the opportunity—and responsibility—to pause and ask the clarifying questions. What problem are we solving? Who feels it? Why now? And what would good look like? Without that clarity, ROI becomes a guessing game. With it, we build solutions that matter.
Thanks again to Zafar, Melanie, and Doug for their wisdom—and to all the CHIME Foundation members who joined us.