
The Next Chapter in Healthcare AI: Key Learnings from Our HLTH Webinar on Production Oversight

  • Feb 9
  • 5 min read

Last week, ALIGNMT AI hosted a forward-thinking webinar in partnership with HLTH that brought together some of healthcare's most trailblazing AI leaders. Our panel explored a critical question: how do we transform AI governance from a compliance checkbox into a competitive advantage that accelerates healthcare innovation?

The conversation revealed a fundamental truth: the gap between AI's promise and its impact isn't about technology anymore—it's about the infrastructure we build around it.

The Stakes Are Higher Than Ever

The numbers tell a compelling story. AI is now helping radiologists detect cancers with over 94% accuracy. Sepsis prediction models are giving clinicians six- to eight-hour head starts to save lives. And diagnostic error reduction—affecting 12 million Americans annually—is becoming reality through AI-powered anomaly detection.

Yet research shows that 72% of AI pilots fail to reach production. The difference? Organizations with mature AI governance are twice as likely to achieve ROI within 12 months.


Our Distinguished Panel

We were joined by leaders transforming AI governance across the healthcare ecosystem:

Natalia Summerville - Director of Applied Data Science in the Strategy and Innovation Division at Memorial Sloan Kettering Cancer Center, where her team develops data analytics products to support hospital strategy, innovations in care delivery, and cutting-edge cancer research. Previously, she led a team of Operations Research and Machine Learning experts at SAS, building analytical engines for customers across industries including Healthcare, Life Sciences, Retail, and Manufacturing. Natalia has been teaching undergraduate and graduate-level classes in Operations Research, Data Analytics, and Machine Learning since 2005, and is currently an Adjunct Professor at Duke University. She is deeply passionate about the Data4Good movement, serves on the "Pro-Bono Analytics" committee board, and is a member of the "Franz Edelman Award" committee at INFORMS. At MSK, she co-founded the AI governance committee three years ago.

Romy Alusi Hussain - Staff Vice President at Elevance Health (as of March 2025), bringing extensive experience across the healthcare analytics landscape. Previously, Romy served as Vice President of Data, ML, and Analytics at Cohere Health, leading data lifecycle management and the development of over 200 deployed models. Prior roles include Technical Advisor to the CEO at Optum, where she focused on strategic R&D planning, and Vice President of Machine Learning, fostering foundational research in AI for disease prediction and preventative care. At Johns Hopkins Medicine, Romy served as Senior Director of Healthcare Economics and Data Science, and held managerial roles at Oscar Health and Yale New Haven Health focusing on financial and clinical analytics. She holds an MBA from Yale University and master's degrees from both UC Berkeley and the University of Cambridge.

Vik Wadhwani - Chief Transformation Officer at NCQA (National Committee for Quality Assurance), where he leads the Transformation Office created to accelerate strategic initiatives and support NCQA's vision to improve American healthcare. His responsibilities include aligning NCQA measure development and data collection to expedite the creation and use of digital quality measures, as well as leading NCQA's growth of internal capabilities to provide best-in-class service as the organization expands its content, products, and services. Vik brings over 22 years of program management experience in business and technology transformation, having led digital development and served as a strategic advisor for organizations including Deloitte, Geisinger, xG Health, Cerner, and Motive Medical Intelligence.

Sean Carroll - CEO and Chairman of the Board at Onpoint Healthcare Partners, a veteran healthcare executive with over 35 years of experience. Sean joined Onpoint in 2024 as an operating partner for investor Peloton Equity and now leads the company focusing on "Practice Management as a Service" (PMaaS) to reduce administrative burdens with AI-powered solutions. Previously, he served as CEO and Executive Chairman at Arcadia, a data-driven population health platform, where he drove advancements in value-based care, data analytics, and risk coding accuracy for healthcare organizations. His extensive career includes leadership positions at Nuance Communications, Webmedx, and Rodeer Systems, working across startups to public companies on the provider side of the industry.


Five Critical Learnings

1. Differentiate, Don't Stifle: The Three-Tier Approach to AI Governance

One of the most powerful insights came from MSK's approach to segmenting AI by risk and use case:

  • Research AI - Give clinicians space to innovate without heavy oversight

  • Operational AI - Back-office automation with moderate governance

  • High-Risk Clinical AI - Full governance committee review before deployment

As Natalia emphasized: "It's important not to stifle research. Our clinicians are highly educated and we want to give them space for innovation. The governance committee comes in once technology is getting ready for deployment."

This differentiation prevents governance from becoming a bottleneck while ensuring appropriate oversight where it matters most.
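In code, this kind of tiered triage can come down to a couple of routing rules. Below is a minimal sketch of such a router; the three tiers come from the post, but the proposal fields and routing logic are illustrative assumptions, not MSK's actual process:

```python
# Hypothetical sketch of three-tier triage for AI proposals.
# Tier names follow the post; fields and routing rules are illustrative.
from dataclasses import dataclass

RESEARCH, OPERATIONAL, HIGH_RISK_CLINICAL = "research", "operational", "high_risk_clinical"

@dataclass
class AIProposal:
    name: str
    touches_patients: bool      # output influences clinical decisions
    deployed_in_workflow: bool  # has left the research sandbox

def governance_tier(p: AIProposal) -> str:
    """Route a proposal to the lightest oversight that fits its risk."""
    if not p.deployed_in_workflow:
        return RESEARCH              # innovate freely, no heavy oversight
    if not p.touches_patients:
        return OPERATIONAL           # back-office automation, moderate review
    return HIGH_RISK_CLINICAL        # full committee review before deployment

tier = governance_tier(AIProposal("sepsis-alert", touches_patients=True, deployed_in_workflow=True))
print(tier)  # high_risk_clinical
```

The point of encoding the rules, even this simply, is that a proposal's tier becomes an explicit, auditable decision rather than an ad hoc judgment call.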


2. Build the Platform Layer First

Romy Hussain shared a critical framework for scaling AI at the enterprise level: separate your foundational capabilities from your product implementations.

"Create an AI platform layer within the enterprise for capabilities that unlock value at scale—LLMs for retrieval, extraction, and inference. Make those pipelines replicable and systematic," she explained. "Then governance becomes the 'easy button' by which we are deploying safely with guardrails in place."

This infrastructure-first approach enables velocity without sacrificing safety.
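One way to picture that platform layer: shared pipeline steps (retrieval, extraction, inference) behind a single interface, with governance guardrails applied once, centrally, so every product team inherits them. This is a sketch under stated assumptions; all class and function names here are hypothetical:

```python
# Illustrative platform-layer sketch: common pipeline steps behind one
# interface, with guardrails (e.g. PHI redaction) applied centrally.
from typing import Callable, List

TextFn = Callable[[str], str]

class PlatformPipeline:
    def __init__(self, steps: List[TextFn], guardrails: List[TextFn]):
        self.steps = steps
        self.guardrails = guardrails  # applied before every product use

    def run(self, text: str) -> str:
        for check in self.guardrails:  # guardrails run first, always
            text = check(text)
        for step in self.steps:        # replicable, systematic pipeline
            text = step(text)
        return text

# A product team reuses the vetted pipeline instead of rebuilding it:
redact = lambda t: t.replace("MRN-12345", "[REDACTED]")  # toy PHI scrub
summarize = lambda t: t[:40]                             # toy "inference" step
pipeline = PlatformPipeline(steps=[summarize], guardrails=[redact])
print(pipeline.run("MRN-12345 presents with fever and cough..."))
```

Because the guardrails live in the platform rather than in each product, deploying safely really does become closer to an "easy button" than a per-project negotiation.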


3. Silent AI: The Secret to Successful Production Deployment

Why do 72% of pilots fail? Often because organizations skip a critical step between pilot and production.

The panel unanimously endorsed "silent evaluation"—running AI tools in production environments for 4-6 weeks without clinical use to collect real-world validation data. Natalia noted: "If you do the pilot properly integrated into workflows, the jump to production should not be a big leap."

Sean Carroll added another crucial practice: "The more time we spend mapping out the existing workflow before we even get to AI, the better the outcome."
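The silent-evaluation pattern the panel described is essentially shadow mode: the model scores real production cases and its outputs are logged for later comparison, but nothing reaches the clinical workflow. A minimal sketch, with a stand-in model and hypothetical field names:

```python
# Minimal "silent evaluation" (shadow mode) sketch. The model scores live
# cases and logs silently; clinicians see nothing during the silent period.
from datetime import datetime, timezone

shadow_log = []

def predict_risk(case: dict) -> float:
    """Stand-in for the deployed model; replace with the real model call."""
    return 0.9 if case.get("lactate", 0) > 2.0 else 0.1

def handle_case(case: dict) -> None:
    score = predict_risk(case)
    # Log silently; deliberately NO alert and NO UI change here.
    shadow_log.append({
        "case_id": case["id"],
        "score": score,
        "ts": datetime.now(timezone.utc).isoformat(),
    })

def evaluate_shadow_run(outcomes: dict) -> float:
    """After the silent period, compare logged scores to observed outcomes."""
    hits = sum(1 for rec in shadow_log
               if (rec["score"] >= 0.5) == outcomes[rec["case_id"]])
    return hits / len(shadow_log)

handle_case({"id": "A1", "lactate": 3.1})
handle_case({"id": "A2", "lactate": 1.0})
print(evaluate_shadow_run({"A1": True, "A2": False}))  # 1.0
```

Run this way for the 4-6 weeks the panel suggested, and the pilot-to-production jump is backed by real-world validation data rather than retrospective test sets.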

4. Human-in-the-Loop AND Human-on-the-Loop

Ensuring AI equity requires two levels of oversight:

Human-in-the-Loop - Real-time clinical decision support where AI assists but doesn't replace human judgment. MSK has clear role definitions: who owns the AI, who makes technical decisions, and who oversees ROI.

Human-on-the-Loop - Systematic monitoring of outcomes stratified by demographics, geography, and other equity factors. As Vik explained: "Certification standards should require mechanisms to monitor and stratify outcomes by population considerations—rural populations, disability, race, ethnicity—to pick up process patterns quickly."


5. Programmatic Equity from Day One

Romy Hussain challenged the industry to think differently about fairness: "Every product should ship with the telemetry required for good real-time decisions. Tie the evaluation harness to product metrics upfront so you can evaluate performance against protected classes or disease states in an instant."

This isn't a post-deployment audit—it's building equity measurement into the product itself.
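Concretely, "shipping with telemetry" means every prediction is logged alongside equity-relevant attributes, so accuracy can be stratified on demand rather than reconstructed in an audit. A small sketch with made-up subgroups and data:

```python
# Illustrative equity-telemetry sketch: predictions logged with subgroup
# attributes so performance gaps surface immediately. Data is made up.
from collections import defaultdict

telemetry = [
    # (subgroup, prediction, actual)
    ("rural", 1, 1), ("rural", 0, 1), ("urban", 1, 1), ("urban", 0, 0),
]

def stratified_accuracy(records):
    """Accuracy per subgroup; a gap between groups is an equity signal."""
    totals, hits = defaultdict(int), defaultdict(int)
    for group, pred, actual in records:
        totals[group] += 1
        hits[group] += int(pred == actual)
    return {g: hits[g] / totals[g] for g in totals}

print(stratified_accuracy(telemetry))  # {'rural': 0.5, 'urban': 1.0}
```

The same stratification extends to any axis the monitoring program cares about (geography, disability, race, ethnicity), which is exactly the human-on-the-loop monitoring Vik described.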


The Standards Evolution


A significant portion of our discussion centered on the emerging landscape of AI standards and certification. Vik Wadhwani shared NCQA's work on use case-specific implementation playbooks that address:

  • Workflow redesign for collaborative AI-human decision-making

  • Staff training requirements

  • Real-time validation of clinical outcomes

  • Standardized frameworks for comparing results across implementations

The panel agreed: we need enough standardization to enable interoperability and shared learning, but not so much that we constrain innovation.


Rapid-Fire Recommendations


We closed with our panelists' top recommendations for organizations launching AI governance programs:

Natalia: "People. The group needs representation from all relevant teams and individuals with influence who will be heard."

Romy: "Benchmarks. Set metrics upfront and define what ground truth means for your questions."

Vik: "Comparative outcomes against baselines and other implementations."

Sean: "Trust, validated through repeatability, traceability, and compliance."


The Path Forward

As we navigate this "most fascinating, exhilarating, and terrifying time in healthcare," the message from our panel was clear: thoughtful governance isn't a barrier to AI innovation—it's the foundation that makes sustainable, equitable, and effective AI possible.

The organizations winning today aren't choosing between speed and safety. They're building governance infrastructure that enables both.


Watch the Full Recording

Missed the webinar? Catch the full recording at HLTH.


Join the Conversation

At ALIGNMT AI, we're dedicated to helping healthcare organizations build governance frameworks that accelerate rather than constrain AI innovation. Ready to transform your AI governance into a competitive advantage? Contact us to learn more.

