How AI Is Failing Us—and What This Harvard-Trained Scientist Says We Should Do About It

Image: OpenAI logo next to a digital human face composed of glowing data points on a laptop screen. (Hollandse Hoogte)

Summary:

  • Artificial intelligence raises questions of bias and inequity, with Dr. Avriel Epps advocating community-driven solutions to tech harm.

  • Epps focuses on AI’s intersection with social justice and youth development, aiming to demystify complex technology for families.

  • She calls for a broader framework involving technologists, social scientists, and grassroots organizers to address AI bias effectively.

As artificial intelligence continues to shape everything from education to healthcare, experts are raising questions about who it benefits and who it leaves behind. In a recent episode of The AI Download, host Shira Lazar spoke with Dr. Avriel Epps, a computational social scientist and professor at the University of California, Riverside, who studies algorithmic bias and community-driven solutions to tech harm.

Epps, who holds a Ph.D. from Harvard and completed her postdoc at Cornell, focuses on the intersection of AI, social justice, and youth development. Her latest work includes a children’s book, A Kids Book About AI Bias, which she hopes can help demystify complex technology for families and educators.

According to Epps, artificial intelligence is not neutral. “Whatever the new technology is of the time is often weaponized against the ‘undesirables’ in society,” she said. Epps explained that facial recognition, surveillance tools, and recommendation algorithms often reinforce existing inequities.

She emphasized that bias in AI systems can emerge from various points, including data collection, modeling decisions, and deployment strategies. “Sometimes the groups that have less power are just not reflected in the datasets at all,” she said.

Epps described AI bias as “systematic errors that in some way concretize or reinforce existing historical oppressions.” This includes instances where AI tools prioritize privileged groups in ways that mirror real-world inequalities, such as gender disparities in content recommendation or racial inaccuracies in facial recognition.

Rather than focusing solely on technical fixes, Epps advocated for a broader framework that includes input from technologists, social scientists, and grassroots organizers. “That is the trifecta that needs to be in constant communication,” she said.

She also critiqued purely technical solutions that do not account for the human and social conditions that create bias in the first place. “Changing the model weights or tweaking the algorithm isn’t enough,” she said.

Through her nonprofit, AI for Abolition, Epps supports a slower, community-focused model of development that works “at the speed of trust.” She described her organization as building relationships in communities that have been historically excluded from conversations about technology, aiming to produce tools and knowledge that are co-designed and contextually grounded.

Epps’s new book, A Kids Book About AI Bias, introduces young readers—and their families—to the concept of bias in machine learning. Written in simple language and paired with an interactive card game, the book explains how data is collected and used, and how it can reflect or perpetuate inequality.

“One of the things I don’t want for kids is for them to internalize something wrong with themselves because a computer changed their image,” Epps said, referring to AI-powered selfie filters that alter skin tone or facial features.

The book encourages children to think critically about technology, using examples like AI-generated images or facial recognition to spark conversations about fairness and power. “If you make something accessible to kids, you make it accessible to a lot of other people too,” she said.

In addition to education, Epps has called for more structural change, including the formation of data labor unions. “All of us who produce the raw materials that power this technology should have a say in how it’s used,” she said.

She also stressed the importance of critical media literacy and taking regular breaks from the constant flow of digital information. “It’s really hard to have good media literacy when you’re drinking from a fire hose,” she said.

While Epps said she does not endorse any one AI tool as completely bias-free, she pointed to Canva’s image generator as one that has shown more diversity in some test cases.

She also praised the work of the Collective Intelligence Project, which brings communities together to help define ethical guidelines and values for AI systems. The group uses participatory methods to shape how models should behave and be governed.

As the AI field expands rapidly, Epps said she remains focused on ensuring that technological advancement includes ethical oversight and community participation.

“There’s no silver bullet,” she said, adding that meaningful progress will depend on building tools with, and not just for, the people they impact.
