
Humanitarian Frontiers in AI
“Humanitarian Frontiers in AI” is a groundbreaking podcast series exploring the strategic and practical dimensions of artificial intelligence (AI) in the humanitarian sector. The series brings together thought leaders from academia, humanitarian innovation, and the tech industry to discuss the opportunities, risks, and real-world applications of AI in enhancing humanitarian efforts. Over ten episodes, it delves into topics relevant to decision-makers and influencers within the sector, offering insights into how AI can be integrated into humanitarian work effectively and ethically. This podcast is graciously funded by Innovation Norway. https://en.innovasjonnorge.no/
Ethics and Responsibility from 30,000 Feet
Are we ready to let AI drive humanitarian solutions, or are we rushing toward an ethical disaster? In this episode of Humanitarian Frontiers in AI, host Chris Hoffman is joined by AI experts Emily Springer, Mala Kumar, and Suzy Madigan to tackle the pressing questions of who is accountable when AI systems cause harm and how to ensure that AI truly serves those who need it most. Together, they discuss the difference between AI ethics and responsible AI, the dangers of rushing AI pilots, the importance of AI literacy, and the need for inclusive, participatory AI systems that prioritize community wellbeing over box-ticking for compliance. Emily, Mala, and Suzy also emphasize the importance of collaborating with the Global South and address the funding gaps that typically hinder progress. The panel argues that slowing down is crucial to building the infrastructure, governance, and ethical frameworks needed to ensure AI delivers sustainable and equitable impact. Be sure to tune in for a thought-provoking conversation on balancing innovation with responsibility and shaping AI as a force for good in humanitarian action!
Key Points From This Episode:
- Responsible AI versus AI ethics and the importance of operationalizing ethical principles.
- The divide between AI for compliance (negative rights) and AI for social good (positive rights).
- CARE’s research advocating for “participatory AI” that centers voices from the Global South.
- Challenges in troubleshooting AI failures and the sector’s insufficient readiness for technical demands.
- The need for AI literacy, funding for holistic builds, and a cultural shift in understanding AI.
- Avoiding “participation-washing” in AI and raising the standard for meaningful inclusion.
- Ensuring proper due diligence through collaborative design and authentic engagement.
- Why it’s essential to slow down and prioritize responsibility before rushing AI implementation.
- The question of who is responsible for halting AI deployment until systems are ready.
- Balancing global standards with localized needs: the value of a context-sensitive approach.
- Building infrastructure for the future: a focus on foundational technology, not one-off solutions.
- What goes into navigating AI in a geopolitically diverse and rapidly changing world.
Links Mentioned in Today’s Episode:
The Inclusive AI Lab by Emily Springer
The Machine Race by Suzy Madigan
FCDO Call for Humanitarian Action and Responsible AI Research
MLCommons AI Safety Benchmark
‘Collective Constitutional AI: Aligning a Language Model with Public Input’
Nasim Motalebi
Nasim Motalebi on LinkedIn
Chris Hoffman on LinkedIn