Exploring AI for SBC Strategy Design


The Social and Behaviour Change (SBC) working group of the NLP Community of Practice, which I co-lead with Nicola Harford and Stephanie Coker, recently organised a Slack event where participants explored how existing AI tools could be used in SBC strategy design.

This hands-on experimentation event, held between 27 August and 3 September, gave SBC professionals an opportunity to experience how AI tools could enhance the design of SBC interventions, with a focus on real-world applications. Participants highlighted the potential strengths and limitations of existing AI tools in designing SBC strategies. As one participant put it, “this event showed us that the future of SBC design isn’t just about human insights, but also about how we leverage technology to amplify those insights”.

In this blog, we’ve summarised some of the key highlights.

The tasks

We created a background document that explained some basic concepts and the tools that participants could try out. Participants were asked to engage with a series of SBC-related tasks that integrated AI tools to streamline the strategy design process. Specifically, they were tasked with using these tools to:

Task 1. Define clear behavioural objectives and the potential barriers to achieving them.

Task 2. Identify and analyse the priority audience for interventions.

Task 3. Explore creative solutions and content strategies using AI-driven insights.

Each task was designed to push the boundaries of how AI could be incorporated into traditional SBC methods. The goal was to challenge participants to use technology not just for automation but to bring a richer, data-driven perspective to SBC.

Summary of the event

One of the most praised aspects was AI tools’ efficiency in data gathering. Tools like Perplexity were especially noted for their ability to quickly synthesise information and provide citations, which streamlined the research phase. As one participant put it, in comparison to some of the other AI tools that were tested, “Perplexity had better citations and provided stats, which was helpful to understand the extent of the problem.” This speed made it easier for participants to gain insights rapidly, allowing them to move through tasks more efficiently.

Another significant strength of AI tools was their ability to simplify complex concepts. Participants highlighted how tools like ChatGPT and Perplexity helped break down intricate frameworks, such as the P-Process, into more manageable steps. One participant shared, “I didn’t know about the P-Process and SBCC approach. My first prompt was to ask how to use the P-Process to improve handwashing, and the response was really useful.” This ability to clarify and teach unfamiliar frameworks showed how AI can enhance understanding and decision-making in SBC projects.

Participants also valued the flexibility of using multiple AI tools, with many experimenting across platforms like ChatGPT, Perplexity, and Claude. One user noted, “In general, I find ChatGPT and Perplexity useful if the prompts are sculpted mindfully. I feel these tools are a viable way for drafting a ‘starter dough’ for an SBC strategy in a specific context.” This cross-platform adaptability was a major strength, as different tools offered complementary insights, helping participants to create more robust strategies.

However, there were also notable limitations. Many participants found the learning curve for some AI tools, such as Elicit and Research Rabbit, steep. One participant mentioned, “Research Rabbit was too academic. It provided links to papers, but I had to read them, not summaries.” This made the tool less accessible for users who needed quick, digestible information. Concerns about accuracy and fact-checking also arose. Another participant shared frustration with Perplexity, stating, “It gave me four sources, but one of the first summaries was inaccurate. I had to dig through to correct it.” This indicates that while AI can expedite research, it still requires human oversight to ensure accuracy.

Contextual relevance was another challenge. AI tools struggled to fully capture local nuances, with one participant working on teen contraception access in Nairobi noting that despite specifying their location, “The AI didn’t seem to fully capture the local nuances of Nairobi.” This limitation highlights the need for AI tools to be more culturally adaptive when applied in specific regional contexts.

Key highlights from Task 1

Task 1 was dedicated to the first step of the P-Process: research and inquiry. Participants chose a topic (improving access to contraception, improving handwashing, or increasing antenatal care visits) and used AI tools to gather data, identify behavioural determinants, and map stakeholders.

Many participants found the AI tools useful in speeding up data collection. Perplexity emerged as a favourite for research due to its ability to cite sources and back up data with statistics. A participant focused on increasing antenatal care visits in Nairobi shared:

“Perplexity backed up the information with statistics, providing a clearer understanding of the problem’s scope. It was also great with citations”.

However, while tools like Elicit and Research Rabbit were promising for academic research, their highly academic focus was a challenge for some participants. Echoing the earlier critique, one person noted:

“Research Rabbit provided links to papers but no summary, which made it difficult to quickly digest and act on the information”.

This highlights one of the key early insights of the event: precision matters when interacting with AI. Broad or vague prompts often led to irrelevant results, forcing participants to refine their questions. As one participant explained:

“I needed to reframe my questions to get answers that were directly related to my context”.

Despite these challenges, AI helped participants streamline the research process. One participant observed:

“I think I would have found the same information online, but using AI tools saved me time”.

This reinforced a key theme throughout the event: while AI accelerates research, human oversight remains essential for ensuring relevance and accuracy.

Key highlights from Task 2

With the foundational research in place, participants moved on to designing their SBC strategies. This phase focused on identifying suitable SBC frameworks, developing a theory of change, and selecting behaviour change techniques.

Participants experimented with various tools, refining prompts to better tailor AI responses to their needs. One participant working on improving handwashing noted how ChatGPT and Perplexity were helpful in providing step-by-step guidance on using the P-Process, but also remarked that more follow-up questions were needed to narrow down broad initial responses:

“Although the first prompt was useful, it was broad. With time, I would have listed follow-up questions and continued asking”.

One notable challenge emerged when participants tried to make their AI queries more context-specific. For example, the participant working on antenatal care in Nairobi found that altering the geographic context in the prompt significantly affected the framework recommendations:

“It was interesting how changing the prompt’s specificity—’I am based in rural Kenya’—completely altered the AI’s framework recommendation”.

This led to a key takeaway: specific prompts lead to more tailored and useful outputs. Participants realised that starting with a clear understanding of the context is crucial when working with AI tools.
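
To make this concrete, here is a hypothetical illustration of our own, not a prompt used during the event. A broad prompt such as “Which framework should I use to improve handwashing?” will typically return a generic list of behaviour change models. A specific prompt such as “I am an SBC practitioner working to improve handwashing with soap among primary school children in rural Kenya; which SBC framework best fits this context, and why?” anchors the response in a defined behaviour, audience, and setting, and is far more likely to yield recommendations a practitioner can act on.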

Key highlights from Task 3

We had hoped participants would go on to use AI tools to prototype messages and interventions, and perhaps even simulate how various audiences might respond to those interventions, but they ran out of time for this task.

Key lessons on AI and SBC strategy design

  1. Speed and efficiency in data gathering: AI tools like Perplexity and ChatGPT provided a quick and efficient way to gather and summarise data, saving significant time during the research phase.
  2. The importance of precision in prompts: Refining prompts significantly improved the quality of AI output. Broad prompts led to generalised results, while specific, context-driven prompts yielded more actionable insights.
  3. Steep learning curve with AI tools: Using AI tools effectively involved a steep learning curve. While AI tools can be powerful, participants had to experiment with prompt structures and tools to get optimal results.
  4. Critical thinking is still key: While AI tools can accelerate research and strategy design, participants emphasised the need for critical thinking and human oversight. AI is most effective when paired with careful review and contextual validation.
  5. Cultural context and localisation matter: Participants learned that while AI can generate insights quickly, ensuring those insights are culturally and contextually relevant requires additional human intervention. Local expertise remains essential in interpreting AI outputs.

Recommendations for SBC practitioners

Based on the event’s findings, here are some key recommendations for SBC practitioners interested in using AI tools:

  1. Invest in training and experimentation: There is a significant learning curve when working with AI tools. Practitioners should allocate time to experiment with different tools and optimise their prompt structures to get the best results.
  2. Understand prompt design to maximise AI utility: To fully benefit from AI’s time efficiency, a basic understanding of prompt design is essential (see the illustrative template after this list). While it is possible to “play around” with different prompts, this approach can be time-consuming and resource-intensive, using up computing power, energy, and water. As a result, prompt engineering is emerging as a foundational skill.
  3. Use AI as a co-pilot, not a replacement: AI is a useful tool for supporting SBC work, but it should not replace human expertise. Use AI to speed up research and generate ideas, but always review and refine its outputs critically.
  4. Start with specific, localised prompts: The specificity of your prompts directly impacts the quality of AI output. Make sure to include clear, localised context in your queries to avoid irrelevant or overly generalised results.
  5. Validate AI outputs for cultural relevance: Ensure that AI-generated insights are relevant to the local context. Cross-check AI outputs with local practitioners and stakeholders to verify cultural appropriateness and practicality.
  6. Combine AI with traditional SBC tools: AI tools work best when integrated into existing SBC frameworks like the P-Process. AI can accelerate specific stages of the process, but traditional SBC approaches remain essential for effective strategy design.
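
As a starting point for recommendation 2, one simple way to structure a prompt (our own illustrative template, not one prescribed during the event) is to specify a role (“You are an SBC strategist”), the context (“working to increase antenatal care visits among first-time mothers in Nairobi”), the task (“suggest three evidence-based behaviour change techniques”), and any constraints (“recommendations must be feasible for a community health volunteer programme”). Each element narrows the model’s output towards something usable in a real strategy.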

A note on the process

We had anticipated more engagement during the Slack chat event, as 30 participants registered; in the end, only about 25% of those who registered engaged with the tasks. Several participants struggled to balance the event with work commitments or illness.

We also learned that participants prefer live events (for example, over Zoom) where they can work on tasks at a set time; the asynchronous format did not suit most of them. We plan to hold future skills-building events in a live format.

To join future events, please register here.
