Childproofing AI: a collective blindspot for the social sector?


Photo by Dall-E.

It’s fair to say that many of us may be experiencing ‘AI fatigue’, considering the amount of hype and coverage AI has generated in the past year (guilty!). However, one area which remains worryingly overlooked is the impact of AI on its youngest users – children. This is significant: one third of internet users are children, who now average over eight hours of screen time a day globally, represent a major user group in the countries where our sector is active, and have a greater appetite for embracing new technologies.

On October 3rd, the NLP CoP Ethics & Governance Working Group convened to discuss this worrying gap in both literature and policy, and share emerging best practices to mitigate it. Our speakers included:

Read on for an overview of our discussion – you can also find all the slides and meeting recordings here.

Brain plasticity & empathy gaps: the unique ways in which AI affects children

Our speakers stressed how important it is to define what we mean by ‘children’. Chances are, if you’re developing a community-facing technology product – to address a public health issue, for example – you’re not imagining it as a child-centered product. But from a legal, rights, and biological perspective, a child is generally considered anyone under the age of 18 – a threshold which extends to 25 in terms of brain development. This in itself makes designing for ‘children’ challenging: the needs of a 13- or 14-year-old are very different from those of a 17-year-old, but both groups are vulnerable in their own way.

Nomisha and Boglárka’s research highlighted a number of important risks that are unique to ‘social AI’ tools, of which chatbots are the most relevant (Nomisha also mentioned humanoid robots, but thankfully, we’re yet to see any RFPs focused on robot Community Health Workers…). 

The first is well known to most of us who have used a chatbot (and a phenomenon I flagged back in 2019). The issue of hallucinations and toxic responses is well documented, but perhaps less understood is their impact on AI’s youngest users. The often anthropomorphic design of chatbots is much more ‘emotionally potent for young users’, which means that any harm caused by a response is magnified in the child’s mind and can significantly impact their mental health and emotional development.

Similarly, just as biased training data can lead to biased responses for adult users, even highly localised models – fine-tuned by adults or on adult data – can fail to engage appropriately with child users. Children’s speech is often playful, non-literal, and filled with hyper-generational idioms, exaggerations and metaphors. Interacting with interlocutors who can actually parse these for deeper meaning and help them build on their ideas is crucial to children’s cognitive growth. Whilst, for obvious reasons, there is limited hard data on this, the emerging evidence suggests that excessive use of AI tools could undermine critical thinking, embed attitudinal biases at a crucial age, reduce creativity, amplify unrealistic social comparisons (for example through exposure to AI-created social media content), and lead to unhealthy dependencies.

Our presenters covered many other risks, including children’s tendency to share sensitive information without understanding the consequences for their privacy or data rights. As with adolescents’ use of social media, it will take time to say definitively which of these risks are the most significant. In the meantime, we need to take a risk-averse approach when designing tools for users whose brains are still developing, and who are often also growing up in adverse environments.

Legal governance of children and AI

There are no legal frameworks currently in place which exclusively legislate for children’s use of AI. Instead, children are mostly bundled together with other ‘vulnerable groups’, as in the EU AI Act, which vaguely mandates the protection of such groups by making sure the AI system ‘addresses their specific needs’. Similarly, the Act requires service providers to develop a risk-management system, including anticipating and monitoring adverse impacts on persons under the age of 18.

Guidance on how to create (or monitor) AI tools used by children is covered more extensively by non-binding policies, including the UNICEF Guidance on AI and Children 2.0. Indeed, Boglárka suggested that a rights-based approach is a useful starting point for anyone in this space – for example, considering how an initiative may violate child rights such as the rights to protection, non-discrimination, and survival and development. Similarly, the UNCRC’s General Comment 25 specifically addresses children’s rights in the digital environment, including protection from harm, privacy and data protection, but also the right to digital education and literacy.

Upholding these rights is positioned as a collective responsibility, shared between governments, parents, educators and technology providers. Unfortunately, whilst this is objectively the correct attitude to take, until the onus is legally placed on service providers, for example, to actively protect children from the risks associated with their products, we will continue to see products emerge and thrive which obviously and severely violate these rights.

Two stand-out lessons learned when designing for children

As well as the principles mentioned above, our speakers emphasised a number of practical recommendations when developing child-centred AI. You can find many of them listed in Dr Kurian’s excellent paper on the topic, whose literature review highlights 27 recommendations across child-led design, transparency and accountability, continuous monitoring and improvement, and communication and interpretation.

As with all things Ethical AI, many of the recommendations are in fact not new: human-centered design, safeguarding mechanisms, feedback loops, and privacy by design are all principles or processes we are familiar with – and yet still often struggle to abide by. But when designing for children these become must-haves, not nice-to-haves – and budgets need to reflect that.
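To make ‘safeguarding mechanisms’ a little more concrete, here is a minimal sketch – not taken from the i-ACCESS project or any tool discussed at the event – of how a child-facing chatbot might gate every model reply through a safeguarding check before it reaches the user. The function names, keyword list and referral message are all hypothetical placeholders; a real deployment would use properly validated classifiers, trained human reviewers and locally appropriate referral pathways.

```python
# Hypothetical sketch: a pre-delivery safeguarding gate for a child-facing chatbot.
# Names (flag_safeguarding_concern, SAFE_REFERRAL_MESSAGE) are illustrative only.

from dataclasses import dataclass

# Placeholder list of topics that should trigger escalation rather than a normal reply.
ESCALATION_KEYWORDS = {"self-harm", "abuse", "suicide", "violence at home"}

SAFE_REFERRAL_MESSAGE = (
    "This sounds really important. I'm not able to help with this directly, "
    "but a trusted adult or the helpline listed in the app can."
)

@dataclass
class CheckedResponse:
    text: str
    escalated: bool  # True if the reply was replaced by a referral for human follow-up


def flag_safeguarding_concern(user_message: str, model_reply: str) -> bool:
    """Very naive keyword check standing in for a real safeguarding classifier."""
    combined = f"{user_message} {model_reply}".lower()
    return any(keyword in combined for keyword in ESCALATION_KEYWORDS)


def deliver_response(user_message: str, model_reply: str) -> CheckedResponse:
    """Gate every model reply: substitute a referral (and escalate) when a concern is flagged."""
    if flag_safeguarding_concern(user_message, model_reply):
        # In a real system this would also notify a trained human reviewer.
        return CheckedResponse(text=SAFE_REFERRAL_MESSAGE, escalated=True)
    return CheckedResponse(text=model_reply, escalated=False)
```

The point is less the (deliberately crude) keyword check than the structure: no model output reaches a child unchecked, and escalations are budgeted for as part of the product, not bolted on afterwards.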

Dóra also had recent, first-hand experience of creating an AI tool aimed at children, and shared some unique insights. Co-funded by the Justice Programme of the EU and implemented in Bulgaria, Greece and Romania, the i-ACCESS My Rights chatbot aims to help children access age-appropriate, reliable information about their rights when they are in the justice system.

Diagram of process followed by Terre des Hommes to develop a child-friendly chatbot

When I first spoke to Dóra in advance of this event, what struck me was the decision not to anthropomorphize their chatbot – even if this might make it less user-friendly. As a designer, this ran utterly counter to my instincts to make the product as appealing (and addictive) as possible – but it makes complete sense in the context of child development. It’s a great example of putting best practice above vanity metrics.
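In practice, de-anthropomorphizing a chatbot is partly an interface decision and partly a configuration one. Purely as an illustration – the wording below is hypothetical and not taken from the i-ACCESS My Rights build – a system prompt for a child-facing assistant might steer the model away from persona-like behaviour:

```python
# Illustrative only: a system prompt that discourages anthropomorphic behaviour.
# Not taken from the i-ACCESS My Rights chatbot.
NON_ANTHROPOMORPHIC_SYSTEM_PROMPT = """
You are an information tool, not a person or a friend.
- Do not claim to have feelings, memories, or a body.
- Do not use phrases like "I care about you" or "I missed you".
- Refer to yourself as "this tool" rather than using a name or persona.
- Keep answers short, factual, and age-appropriate, and point to trusted adults
  or official services for anything beyond reliable legal information.
"""
```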

Dóra also warned that teams should prepare for a long runway in order to approach design and development responsibly. They painstakingly collaborated with technologists, child experts, legal experts, public administrators and children themselves – requiring patience and tradeoffs in order to meet such diverse needs. Similarly, they made sure to build in time and develop content that allowed stakeholders, especially children, to learn about AI and generate dialogue in their own communities. This step is a crucial one which I believe should be standard in all programmes with an AI component, whether the audience is young or not. It should also extend to internal teams, which, as our previous events have shown, include major disparities in AI literacy.

But perhaps most important in this process was the recommendation to work with children at every stage, in a way that accepts their agency in this emerging paradigm. After all, as early adopters of most new tech, the chances are they have just as much to teach us as we have to teach them.

***

📌Access all the resources from this event here.

📌 Read Nomisha’s paper here, and Boglárka’s paper here – both delve deep into the risks and landscape.

📌 Practical guidance for safeguarding mechanisms in AI and non-AI chatbots here.

📌 Join the NLP Community of Practice here, and sign up to the Ethics & Governance Working Group for invites to future events.
