
Dr. Avriel Epps, a computational social scientist, R&B artist, and author of A Kid's Book About AI Bias, traces her journey from child actor navigating entertainment-industry stereotypes to AI researcher asking "Who's in charge?" Her path began with building her own MySpace and YouTube audience as a Black creative; her 2012 UCLA research found that streaming platforms offered more diverse, pro-social representations than traditional media's antisocial stereotypes, until algorithmically curated feeds changed everything. She unpacks how AI systems reflect the biases, values, and blind spots of their creators (the investor class, eugenicists, and long-termists dominating ethics conversations); why preparing kids for an AI future means prioritizing critical thinking and deep creativity over technical skills; how real community-building requires emotional discomfort and conflict resolution, not chatbot companions; why we must stop designing for the comfortable majority and instead center marginalized communities as co-creators; and how slowing down (grocery shopping with your child, reading labels together, being pleasant to workers) is a radical anti-capitalist act of resistance. Avriel also shares her restorative justice story: algorithms pushed the person who harmed her to viral fame, then later led her to a healing circle. The future requires discernment, empathy, and reclaiming our humanity.
WHAT YOU'LL LEARN
Why AI systems reflect human bias, not objective truth — algorithms mirror values, blind spots, prejudices of their creators; "who's in the room making decisions" determines impact
Who's actually in charge of AI ethics — investor class drives development; ethics conversations dominated by eugenicists, white nationalists, long-termists (sacrifice lives now for hypothetical future)
How early internet was more democratic before algorithmic curation — MySpace/early YouTube: chronological timelines, user-driven sharing; now: recommendation systems shape what we see
Why critical thinking matters more than coding for kids' futures — white-collar jobs are disappearing; the future requires deep creativity, critical thinking, and discernment over technical skills
How AI can imitate empathy but can't feel it — chatbot companions leading children down harmful paths; profit-driven systems avoid discomfort to keep users engaged
Why real community requires emotional discomfort — navigating disagreements with grace, conflict resolution, courage to encounter uncomfortable social situations; political division tied to losing these skills
What "stop designing for comfortable majority" means — center marginalized communities as co-creators; technology should serve people at margins, not make email faster for PhD holders
How slowing down is radical resistance — grocery shopping with child vs ordering online: teaching price comparison, reading labels, being pleasant to workers, connection over convenience
Why anti-efficiency is anti-capitalist — "who benefits from us feeling we need to be efficient?"; productivity isn't central tenet of being human
What restorative justice looks like — Avriel's story: algorithms pushed person who harmed her to viral fame, later led to healing circle; justice can be transformative, not just punitive

May 29, 2025
Avriel Epps
AI, Digital Justice, and Creating a Fair and Just World
"AI isn't objective. It's a mirror of who we are—and who's in the room making the decisions."
TIMESTAMPS
00:02:51 – From R&B artist to AI researcher
00:06:18 – Early internet was more democratic
00:09:35 – Algorithms shape identity
00:33:19 – Who's in charge of AI?
00:35:32 – Preparing kids for AI future
00:38:23 – AI can imitate empathy but can't feel it
00:43:43 – Stop designing for comfortable majority
00:56:30 – Slowing down as radical resistance
01:00:18 – Modeling mindful tech use to children
01:04:22 – Empowerment and digital literacy
SHOW NOTES
Dr. Dan interviews Dr. Avriel Epps, a dynamic scholar, author, and strategist whose work sits at the crossroads of transformative justice and artificial intelligence. With a PhD in Human Development and a master's in Data Science, both from Harvard University, Dr. Epps brings a fresh and critical perspective to conversations about technology, equity, and social justice.
On today’s episode, Dr. Dan and Dr. Epps explore her work around how bias in predictive technologies affects racial, gender, and sociopolitical identity development. She aims to understand the complex ways that algorithm design and computer-mediated social expectations—often communicated through artificial intelligence systems—impact the beliefs, behaviors, and health of developing humans.
LINKS & RESOURCES
Dr. Avriel Epps
Website — AvrielEpps.com
Social Media — @KingAvriel (all platforms)
Book — A Kid's Book About AI Bias (available everywhere books are sold; support local bookstores and donate copies to libraries)
AI for Abolition
Community organization increasing AI literacy in marginalized communities and building community power with data-driven technologies
GUEST BIO
Dr. Avriel Epps is a computational social scientist and civic science postdoctoral fellow at Cornell University's Citizens and Technology Lab. She completed her PhD in Education with a concentration in human development at Harvard, a master's in Data Science from Harvard's School of Engineering and Applied Sciences, and a BA in Communication Studies from UCLA. Avriel's research explores how bias in predictive technologies affects racial, gender, and sociopolitical identity development. She is co-founder of AI for Abolition, a community organization dedicated to increasing AI literacy in marginalized communities and building community power with data-driven technologies. She has spoken on algorithmic bias and fairness at Google, at TikTok, and before US courts. Avriel is the author of A Kid's Book About AI Bias (also for brave adults). In fall 2025, she joins Rutgers University as an assistant professor of fair and responsible data science. She is a former child actor and R&B artist who performed as King Avriel.