The latest "AI First" podcast, hosted by Jon Herstein, features Mike Pfeffer and Todd Ferris discussing Stanford Medicine's AI strategy.
Key moments:
(00:00) Meet the leaders behind Stanford Medicine’s AI strategy
(04:01) Automation vs. augmentation in clinical care and research
(08:26) Rolling out ambient scribes across the organization
(13:31) Faculty-driven demand for AI innovation
(18:53) Digitizing analog workflows in clinical trials
(24:45) How Stanford sets AI priorities with its FURM framework
(30:57) Balancing innovation with trust, safety, and governance
(43:00) Predictive care and precision medicine powered by AI
(52:45) The future of personalized healthcare at Stanford
Democratizing AI in Healthcare
One of the themes of the discussion was the democratization of technology within healthcare. Mike Pfeffer emphasized how AI tools are breaking traditional silos and becoming accessible to a broader audience:
"Technology is not under the ownership of the CIO per se. It's becoming more democratized than ever before."
Stanford Medicine has embraced this democratization by creating a secure platform that allows staff across various disciplines to experiment with AI models in a structured, safe environment. Pfeffer shared:
"We have over 15 models now accessible... we said, it's all, you know, go have fun, learn, experiment, and we've seen that be very, very successful."
Sandbox Environments
Establishing an open, controlled environment for experimentation can foster grassroots innovation while ensuring compliance and security standards are met.
"...really making sure we keep those sandboxes, those tool sets available, for our own, you know, team members as well as our community so that they have that space to work and innovate because it is coming too fast and furious for any one person or one team to own."
Integration: Workflow is the Key
While AI offers transformative potential, both speakers stressed that successful implementation hinges on integrating AI into existing workflows. Mike Pfeffer pointed out:
"It is all about the workflow. If you don't get the workflow right, it doesn't matter how great your models are. It's not gonna work."
Stanford Medicine has adopted a phased approach, deploying technologies like AI Ambient Scribes gradually through piloting, training, and iterative improvements. As Pfeffer succinctly noted:
"You can plan the best, but until you actually test it, pilot it into that workflow, you just don't know."
The phased rollouts led to positive outcomes, including faster and better clinical notes that strengthen the relationship between patients and clinicians. By restoring personal attention—shifting clinicians’ focus from screens to patients—AI can enhance care delivery.
Frameworks for Evaluating AI Solutions
With vendors increasingly embedding AI into their offerings, organizations face the challenge of distinguishing solutions that deliver real value from those offering mere novelty. Stanford Medicine grounds its assessments in robust frameworks, one example being the FURM framework (Fair, Useful, Reliable Models):
"We identify things that have AI right away. So they go into a separate dashboard so we can keep an eye on that... reviewing them before they even go forward ensures we make investments that align with our mission."
FURM Framework (Fair, Useful, Reliable Models)
"Being really clear about your requirements and criteria, and using pre-established frameworks, takes some subjectivity out of the process."
Organizations should consider adopting similar methods to ensure AI solutions deliver measurable value while adhering to ethical and practical guidelines.
Ethical Considerations in AI
Deploying AI in healthcare raises ethical questions, and Stanford Medicine has taken a proactive stance. For instance, Todd Ferris outlined scenarios involving predictive models:
"If you're gonna create a model that predicts which patients are likely to no-show for their appointment... The question is, what do you do with that? You could double-book appointments or call the patient to ask how to help them get to the appointment. Right? Two totally different approaches with very different ethical implications."
To address such dilemmas, Stanford employs ethics assessments as part of their governance process, ensuring transparency and patient-centric decision-making.
This highlights the responsibility organizations have to approach AI with a mindset grounded in privacy, security, and fairness. As Pfeffer observed:
"We can't talk about healthcare without talking about trust and safety. Governance is key to maintaining trust as we deploy these innovations."
Multidisciplinary Collaboration: AI is Everyone’s Job
A recurring theme in the podcast was the importance of collaboration across teams: AI should not exist in isolation as a specialized department. Pfeffer underscored this:
"AI is everyone's job in IT, and everyone needs to be upskilled. You need an amazing infrastructure team, an amazing architecture team, and a great cybersecurity team. All those pieces need to come together."
This collaborative approach aligns with broader trends in technology adoption, where specialized teams evolve into multidisciplinary partnerships. As Jon Herstein noted:
"AI is transitioning like the early days of the Internet. No one has an Internet team anymore—it’s just part of doing business."
Challenges of Digitizing Legacy Processes
Despite rapid innovation, healthcare still faces hurdles in digitizing legacy systems. Pfeffer highlighted surprising areas, such as clinical trials and fax-based workflows, that remain paper-reliant due to regulatory requirements. AI can play a vital role in automating such processes, as exemplified by Stanford's AI model that processes incoming faxes:
"We have a model that reads incoming faxes for referrals, determines urgency, and sorts them into different queues."
Embracing AI to tackle these bottlenecks can free staff time for higher-value work, streamline operations, and improve patient outcomes.
Building Agility and Staying Mission-Driven
With rapid technological innovation comes inevitable challenges in staying agile. Both guest speakers acknowledged the difficulties of managing the pace of change while remaining focused on core values. Todd Ferris shared:
"There's so much coming at us all the time. We just have to stick to our mission and values—whether it’s patients, researchers, or education—and solve problems that matter."
Agility requires prioritization, teamwork, and a clear vision. Organizations should aim not just to adopt technology but to integrate it meaningfully to serve broader organizational goals.
Final Takeaways
The discussion between Mike Pfeffer and Todd Ferris offers clear guidance for organizations adopting AI:
- Prioritize workflow integration and collaboration across teams.
- Use structured frameworks like FURM to evaluate AI solutions critically.
- Adopt a phased, iterative rollout process for new technologies.
- Take an ethical, mission-driven approach to deployment.
- Enable grassroots innovation while managing top-down initiatives.
Question for the Box Higher Ed Community
Do you have frameworks similar to FURM that you are using to manage AI deployment?
