Humanitarian Frontiers in AI

Ethics and Responsibility from 30,000 Feet

Chris Hoffman and Nasim Motalebi

Episode 3

Are we ready to let AI drive humanitarian solutions, or are we rushing toward an ethical disaster? In this episode of Humanitarian Frontiers in AI, host Chris Hoffman is joined by AI experts Emily Springer, Mala Kumar, and Suzy Madigan to tackle the pressing question of who is accountable when AI systems cause harm, and how to ensure that AI truly serves those who need it most. Together, they discuss the difference between AI ethics and responsible AI, the dangers of rushing AI pilots, the importance of AI literacy, and the need for inclusive, participatory AI systems that prioritize community wellbeing over box-ticking for compliance. Emily, Mala, and Suzy also emphasize the importance of collaboration with the Global South and address the funding gaps that typically hinder progress. The panel argues that slowing down is crucial to building the infrastructure, governance, and ethical frameworks needed to ensure AI delivers sustainable and equitable impact. Be sure to tune in for a thought-provoking conversation on balancing innovation with responsibility and shaping AI into a force for good in humanitarian action!

Key Points From This Episode:

  • Responsible AI versus AI ethics and the importance of operationalizing ethical principles.
  • The divide between AI for compliance (negative rights) and AI for social good (positive rights).
  • CARE’s research advocating for “participatory AI” that centers voices from the Global South.
  • Challenges in troubleshooting AI failures and organizations’ insufficient readiness for AI’s technical demands.
  • The need for AI literacy, funding for holistic builds, and a cultural shift in understanding AI.
  • Avoiding “participation-washing” in AI and raising the standard for meaningful inclusion.
  • Ensuring proper due diligence through collaborative design and authentic engagement.
  • Why it’s essential to slow down and prioritize responsibility before rushing AI implementation.
  • The question of who is responsible for halting AI deployment until systems are ready.
  • Balancing global standards with localized needs: the value of a context-sensitive approach.
  • Building infrastructure for the future: a focus on foundational technology, not one-off solutions.
  • What goes into navigating AI in a geopolitically diverse and rapidly changing world.

Links Mentioned in Today’s Episode:

Emily Springer on LinkedIn

Emily Springer Advisory

The Inclusive AI Lab by Emily Springer

Mala Kumar

Mala Kumar on LinkedIn

MLCommons

Suzy Madigan on LinkedIn

Suzy Madigan on X

The Machine Race by Suzy Madigan

FCDO Call for Humanitarian Action and Responsible AI Research

MLCommons AI Safety Benchmark

‘Collective Constitutional AI: Aligning a Language Model with Public Input’

Nasim Motalebi

Nasim Motalebi on LinkedIn

Chris Hoffman on LinkedIn