If AI solves problems as well as creating them, can one counteract the other?: Honest Discussions at the Intersection of AI and the SDGs
On Tuesday, 16th September, I attended “Honest Discussions at the Intersection of AI and the SDGs,” an evening event jointly organized by Humane Intelligence, Compiler and Technology Salon NYC at the Doris Duke Foundation (thanks to all these organizations, especially Doris Duke, for hosting us!).
The event was at full capacity: over 250 people expressed interest, and around 75 were accommodated across the main meeting room and an overflow room. Participants included a cross-section of the tech, AI, development, academic, policy, journalism, foundation, and impact investing communities. As one attendee said to my colleague, “All the cool kids are here!”

“Instead of asking, can AI do this, let’s ask, should AI do this?”

The evening kicked off with a keynote from Dr. Rumman Chowdhury, founder of Humane Intelligence, our fellow travellers in responsible AI. Rumman highlighted the need for trust and safety in AI, but also for incentives (such as “bias bounties”) to expose vulnerabilities and biases in AI systems: social vulnerabilities, not just technical ones. She emphasized the need to work with big tech, since the greatest opportunity lies in being in the room with those designing, developing, and testing AI systems. And she reiterated her own perspective as a social scientist: “Instead of asking, can AI do this, let’s ask, should AI do this?” Rumman’s keynote reminded me of a recent call by danah boyd (lower case is her chosen styling) for policymakers and engineers to adopt an interventionist rather than a solutionist mindset. boyd argues that an interventionist approach is more iterative, non-deterministic, and aligned with the complexities of AI as a socio-technical system. The solutionist mindset, on the other hand, valorizes disruption to the benefit of capitalism but can cause great harm in the interim. If we are in the room when AI is being designed, as Rumman called for, we can be involved in both the design and the deployment of AI from a human-centred design perspective.
We continued with the first panel on AI and the SDGs, with Mala Kumar from Humane Intelligence moderating a discussion between Jona Repishti from Digital Green and Professor Marco Tedesco from Columbia Climate School. Both speakers felt that the development community is falling short on the Sustainable Development Goals (SDGs), having achieved only around 35% of the goals, with progress on poverty and hunger reduction stalling. Against this background, both speakers were relatively positive about the benefits of AI for the SDGs. Jona gave the example of Digital Green’s Farmer.chat, an AI-powered platform that has answered over 3.8 million questions from 300,000 users in 13 languages, with 45% of queries being non-textual (i.e., photo or voice queries). Jona reported that using AI has reduced the cost of communicating with a farmer by 10x.
However, the speakers recognized that questions remain about realistic skill levels and about the quality of information provided through AI, particularly since agronomic training data can be stale and formal, and goes out of date quickly given climate change. This is where real-time data integration (e.g., weather data) becomes essential. Marco also acknowledged the environmental impact of data centres (a new form of colonialism) and drew attention to ongoing efforts to harness micro nuclear energy to power data centres, with a goal of zero carbon emissions.
Finally, both speakers also recognized the power dynamics prevalent in AI, notably its tendency to replicate information asymmetries. Marco reiterated that without community-driven co-creation and a focus on climate justice, the rapid advancement of AI could add stress to an already stressed environment. As he put it, “The AI that you use today is probably the worst model you’ll ever use,” emphasizing the opaque nature of current systems and the need for continuous improvement. The goal is to build trust through community-owned data and “playgrounds” where local populations can experiment and learn, ensuring AI serves the people, not just a privileged few.
The Role of Philanthropy and the Call for Urgency

The second panel was moderated by our very own Linda Raftree (founder of The MERL Tech Initiative), with panellists Veronica Olazabal, a philanthropy and social finance executive and co-lead of the NLP-CoP’s Philanthropy Working Group; Vilas Dhar from the Patrick McGovern Foundation; and Sunil Wadhwani, donor-founder of Wadhwani AI. The spotlight was on the critical role of philanthropic capital and strategic philanthropic partnerships, particularly in the wake of severe global funding cuts. Veronica highlighted that, in the grand scheme of things, philanthropic funding is a small piece of the pie (around 7%), which underscores why it needs to be deployed as strategic leverage and why collaboration across the philanthropic community matters.
The speakers identified three major challenges:
- The vast amounts of funding being funneled to AI without a clear focus on the world’s most impoverished populations.
- The necessity of working with governments to build responsible AI systems for public services like healthcare and education.
- The gap between critiquing big tech and offering concrete, scalable solutions.
Sunil issued a call for urgency in working with big tech. He argued that the current philanthropic model is too slow and incremental, often getting stuck in pilot cycles that fail to scale. He emphasized the importance of working at the practical level to build understanding and trust grounded in community experience, at the policy level based on what is learned from practice, and on strengthening capacity and expertise in government and elsewhere, as a way to rapidly increase the impactful and sustainable integration of AI.
In their closing statements, all three speakers spoke of the urgent need for philanthropy to invest in AI, and also to use AI internally to accelerate their own grant-making processes, at a time when global development assistance is effectively collapsing.
Vilas’ final point stayed with me: “Philanthropy should not just be seen as a source of capital but rather as a source of trust.” Sunil noted that this includes a cultural shift toward funding “the plumbing,” the foundational infrastructure necessary for true social transformation. This was refreshing to hear but hard to believe, given that many foundations still run on short-term funding cycles and support initiatives such as grand challenges or innovation funds. This short-term, quick-fix approach appears even more pronounced in AI, which is moving so fast that the appeal is to fund many initiatives in the hope that a few will blossom. The combination of these two risks, “pilotitis” and the possibility that AI projects could be not just irrelevant to a community but actively harmful, is truly alarming.
While Vilas likened AI to a Trojan Horse, in the sense that one could use the huge amount of interest in AI to sneak in a range of other efforts, it is important to remember that it could also be the Trojan Horse that weakens them. Hearkening back to the first panel, a core issue that remains unresolved is how to reconcile using AI to fix the very things that AI is breaking. Two other thoughts on my mind at this event were Christina Wodtke’s exposé of the harms caused by Generative AI companies, and Aranya Sahay’s highly acclaimed film Humans in the Loop (2024), about Nehma, an Adivasi woman in Jharkhand who, as a data labeler, faces biases against her own indigenous knowledge.
To use another analogy, AI is a double-edged knife that cuts both ways depending on who wields it. The continued concerns over who owns and profits from “the knife” are something that anyone wishing to work responsibly with AI needs to think about. At MTI, we continue to work on the practical ethics that are within our reach, while raising awareness about the big-picture ethics (climate, inequality, regulation, corporate capture) that require a collective, strategic approach to ensure rights are upheld. Again, there are no simple solutions, but information systems (and now AI systems) are social systems, and we need to recognize that AI is created by us humans.
Reach out to The MERL Tech Initiative if you are interested in working with us to achieve responsible, ethical, realistic use of digital technologies and digital data at your organization, whether through research, evaluation, or strategy support.
A recording of the event is available below and on Humane Intelligence’s website.
You might also like
- Join us on May 28: Building a GenAI Sexual and Reproductive Health Chatbot in Senegal and Kenya – Technical and Operational Learnings
- Safety by Design: When AI Finds the Cracks, Who Falls Through Them?
- Event: Community listening – What have we learned about the role of technology?
- Event: The Humanitarian AI Countdown – How is AI infiltrating humanitarian aid operations with Giulio Coppi
