Where Bias Begins in Healthcare AI
Chris Hemphill
Modular Feedback
In this episode of For the Better, host Ben Cash, CEO of Reason One, speaks with Chris Hemphill, now Director at Modular Feedback and a Fellow at the NYU McSilver Institute for Poverty Policy and Research. At the time of recording, Chris was Senior Director of Commercial Intelligence at Woebot Health.
Chris shares why bias in healthcare data doesn’t start in the model: it starts in the system. From incomplete electronic medical records (EMRs) to mistrust shaped by history, Chris explains how AI in mental health must be designed with lived experience, transparency, and trust in mind.
“If you're not practicing things that acknowledge the diverse lived experiences of your population, how can you possibly be delivering good healthcare?” (01:02)
Throughout the conversation, Chris emphasizes that data doesn’t exist in a vacuum. What we see in spreadsheets and models is often the result of systems shaped by redlining, economics, and distrust.
“When people think about data, they think about spreadsheets and everything like that, but I actually think about histories and cultures and communities and how they form data.” (04:28)
Ben and Chris also discuss the real-world implications of designing technology without diverse voices, and the harm that can follow when inclusion becomes a checkbox instead of a principle.
“If you're building something and you're not bringing those voices to the table, it all goes down to listening to people. Understanding who has the knowledge about these various cultures and populations that you represent, and finding ways to engage them.” (03:37)
Chris also challenges the idea that AI is purely technical, reminding us that the people who develop, deploy, and evaluate it are accountable for its outcomes.
“Don’t let any of that allow people to put things over your head. Ask those basic questions. Hold vendors to task because you're advocating for a community of people here. They depend on you to do that.” (07:37)
At its core, this episode is about rethinking how healthcare approaches innovation—not by chasing speed, but by investing in trust, transparency, and cultural competency.
“Innovation can only move at the speed of trust.” (16:34)
Key Takeaways:
Why DEI in AI isn’t a feature—it’s a foundation
How bias in healthcare data reflects systemic gaps, not just patient behavior
What happens when tech is built without cultural and community input
Practical ways marketers and digital teams can evaluate AI tools more equitably
Why institutional trust—not just model accuracy—drives patient engagement
The role of curiosity, humility, and accountability in building inclusive systems
Sponsor for this episode...
This episode is brought to you by Reason One, a group of problem-solvers and change-makers who help those who do good, do better. Whether you work in healthcare, a nonprofit, or a mission-driven organization, we help create beautiful, effective experiences for you and the people you care about.
Start turning your meaning into the message and your audience into advocates. Visit reasonone.com today.