Design is Dead; Long Live Design: How Designers in International Development are thinking about AI

With thanks to co-leads Sara Chamberlain and Sofie Meyer for their suggestions and contributions.
Jenny Wren, the Design Lead for Claude, Anthropic’s LLM, claimed earlier this month that “the design process (…) is basically dead.” Whilst many of us won’t agree with this statement – watch this space for a rebuttal! – many will agree with the sentiment: AI is profoundly changing the way we go about designing digital tools, and the profession of Design itself, with far-reaching implications for the International Development sector.
On March 12th, 50 members of the newly founded Design & UX of AI Learning Group representing 25 countries across Africa, South & East Asia, Europe and the US, came together to discuss how AI is reshaping the role and practice of Design & User Experience (UX), why this matters, and what it means to ‘design well’ in the age of AI.

Definitions of Design and UX used to anchor the conversation.
Crying, confused, and a little nauseous
We kicked off our discussion by asking participants to share three emojis representing how they feel about AI in Design right now. In an ironic gesture that ultimately delivered a spot-on interpretation, we (anonymously) ran the results through Claude, which told us that the emojis used by participants suggest a ‘turbulent and predominantly negative’ emotional arc and ‘high arousal’ throughout – with emojis representing rage, horror, distress, and overwhelm shared frequently. “Whoever sent these was having a very eventful time!” concluded Claude.
Despite the dominance of more negative emotions, many were paired with more neutral ones expressing curiosity, even hopefulness – suggesting conflicted feelings rather than a clear emotional state. One participant mentioned, “AI is totally helping to make sense of lots of inputs and insights, qualitatively, and yet there’s just this fear that we’re contributing to (…) an extractive, dangerous business model while enjoying its benefits.”
Similarly, others mentioned how powerful AI can be as part of rapid co-creation or prototyping with users: “one specific use case where I feel AI has greatly helped in my design process is visualizing specific user journey components, which otherwise would take 3 or 4 days to design on Canva, then (…) get user input 2 weeks later, then come back and iterate. Whereas you can actually do these iterations with [AI] while you’re speaking with users at the same time [that you’re designing].”

Snapshot of emoji analysis produced by Claude Sonnet 4.6
The contradictions and concerns encapsulated by this brief exercise are playing out across the development sector, but the threat to Design as a discipline feels especially existential. Where designers once played a central role in shaping both digital tools and the human-centred processes underpinning them, many now find themselves consulted only at piecemeal junctures, while data scientists, engineers, product managers and vibe coders steam ahead – with no formative research, no Theory of Change, no meaningful involvement of intended users, and testing confined to the lab or laptop rather than the real world.
But is this disruption of the status quo actually a problem, or are digital designers just feeling ego-bruised and left behind? In a short presentation, learning group co-lead Sara Chamberlain argued that AI and Human Centered Design can diverge in ways that can limit impact and have negative consequences, including making existing inequalities worse.
The diverging lifecycles of AI and Human Centered Design
To illustrate this, Sara took us through the well-established, iterative, five-stage HCD lifecycle (Discovery, Definition, Ideation, Prototyping, Testing), which is rooted in formative research, co-creation with stakeholders including users, and real-world usability testing. Drawing on a recent working paper, we discussed how AI development increasingly bypasses these stages, skipping the foundational research needed to understand who users actually are, what barriers they face, and what contextual factors shape their lives.
When users and frontline stakeholders are excluded from the ideation, prototyping, and testing phases, AI tools risk disrupting existing workflows, creating invisible burdens, and failing in real-world conditions. For example, Google DeepMind’s AI tool for detecting diabetic retinopathy passed lab-based testing but struggled when trialled in 11 Thai clinics. Poor clinic lighting and inconsistent use of dilation drops caused 21% of images to be rejected, triggering mandatory hospital referrals that many patients could not afford and leading up to 50% of patients at some clinics to opt out. The team ultimately had to revise their protocol to involve a human ophthalmologist before any referral was made. This expensive mistake could have been avoided if frontline nurses and patients had been consulted at the outset about real-world conditions.
This example highlights a related problem: techno-centric evaluation practices. Because model accuracy poses the most obvious risk, there has been an overemphasis on model evaluation at the expense of wider evaluative practices spanning user trust and satisfaction, overall product performance, and short- and mid-term real-world outcomes or unintended impacts. Whilst recent AI evaluation frameworks like The Agency Fund’s AI Evaluation Playbook offer a more nuanced approach, they still reflect a worldview in which impact will just ‘happen’ if the technology works as intended across all these dimensions – as opposed to situating tech as only one ingredient in what should ideally be a multi-channel approach backed by a rigorous Theory of Change.
Without design at the heart of AI tool development, we risk exacerbating inequities, undermining human autonomy, disrupting workflows, diverting precious resources, and eroding trust. Putting design back at the heart means returning to first principles: formative research, ongoing user engagement, intersectional gender analysis, and treating technology in development as an iterative, long-term investment, not a quick fix.
Mistrust by Design: a counterintuitive new design principle?
Empathy is a core tenet of Human Centered Design. As such, whatever our own misgivings about AI, we need to recognise that in some contexts, and among some populations, many of our potential users are already turning to AI for information, support, and help with tasks related to education, health, or livelihoods. Research by OpenAI released last summer showed that usage in low and middle-income countries is increasing faster than in high-income countries, with youth under the age of 26 leading the charge. To identify where AI can add genuine value, or where alternatives are required, we must dig deeper into how users perceive, trust, and engage with AI.
Our final speakers, Yann Le Beux and Oluchi Audu from YUX Design, set out to do exactly this with their Cultural AI Lab, an effort to understand “how Africans are already using LLMs”. Their survey, conducted in June 2025, collected data from 411 urban respondents in Kenya and Nigeria, 95% of whom were between 18 and 34 years old. The survey showed that 42% of respondents use AI at least once a day, 23.6% use it 10 or more times a day, and 27% use a paid version. Respondents reported using AI to organize their life (41.4%) and practice difficult conversations (39.6%) – and, significantly for our sector, 39.6% use it to seek health advice and 35.1% for therapy or companionship.
Despite some geographic variations, most respondents either somewhat or completely trust AI across every aspect of their lives, from religion and spirituality to financial advice. Respondents valued clarity, accuracy, and relevance most highly, while poorly localised tools (for example, where accents weren’t recognised, local languages weren’t understood, or responses weren’t culturally relevant) were the top barriers to use and trust.
Yet the strong correlation between language and trust should give us pause. Speaking Swahili like a Kenyan doesn’t make a tool responsibly designed, and there is growing evidence that language and accuracy metrics are superseding all other benchmarks, with AI tools being deployed in live environments without consideration for the myriad other factors that determine success and prevent harm.
Critically, internet connectivity was the second-highest challenge reported by YUX’s sample, reinforcing why early and continuous feasibility research matters. It also suggests that those ploughing ahead with AI-powered interventions need to prioritise investments in tools that can work just as efficiently offline, as well as in evaluating model and product performance not just in ‘lab’ conditions but in low- or no-connection settings.
Few respondents cited data privacy as a key concern, which raises an important question: until the companies behind major LLMs prove worthy of our users’ trust, should we be designing tools that actively encourage “Informed Mistrust”? A new benchmark for responsibly designed AI tools could perhaps position those with the highest level of drop-off after Ts & Cs as setting the bar for truly informed consent. The YUX team also reiterated that trust in AI tools is not something that can be evaluated in a one-off exercise, but must be built and measured across multi-turn conversations unfolding over days and weeks.
*
Human Centered Designers and other ‘responsible tech’ advocates can sometimes feel like broken records, urging those funding AI tools and making promises around impact and efficiency gains to do their research, collaborate meaningfully and continuously with representative users, test and iterate based on feedback, and bake in gender inclusion. This feeling is compounded by the current rush of AI investment. It’s tempting to be swept along by the tide, doubting our instincts and prior experiences, asking ourselves whether we’re being too resistant to change. AI is, after all, likely here to stay, and there is no doubt that it is a transformative technology that expands what is possible. But if any consensus emerged during this first session, it is that now, more than ever, designers should be turning up the volume, not waiting on the sidelines while others change the record.
📌 If you enjoyed reading about this session, be sure to sign up for the AI CoP and to join the Design & UX Learning Group. Our next session will take a closer look at designing safe AI experiences. You can also email isabelle@merltech.org to volunteer as a speaker or to share requests for future sessions.
