NLP Community of Practice

The Natural Language Processing Community of Practice (NLP-CoP) brings together monitoring, evaluation, research, and learning (MERL) practitioners, artificial intelligence (AI) experts, and data responsibility advocates to learn and collaborate. We focus on responsible, appropriate, and effective applications of NLP (including Generative AI) to address demand-driven, real-world MERL challenges.

We aim to reach and support an array of audiences who may not have data science backgrounds, presenting complex and opaque concepts in accessible ways. We work together to influence the use of NLP and GenAI for MERL, and we advocate for a stronger role for MERL professionals in evaluating the effects of NLP and GenAI in social sector programming and wider society.

The NLP-CoP is part of the wider MERL Tech community, which has focused on the responsible use of digital data and technologies in monitoring, evaluation, research, and learning practices for over a decade. The MERL Tech Initiative convenes and supports the CoP.

The Goals of the NLP-CoP are to:

  1. Connect and build a diverse network of MERL practitioners committed to learning about and advocating for responsible, ethical applications of NLP and Generative AI for MERL in development, humanitarian, human rights, peace building, and philanthropy work.
  2. Democratize understanding of NLP and Generative AI technologies and what they can (and cannot or should not) do for MERL, especially in support of practitioners and the problems they face in global majority countries.
  3. Identify, explore, develop and/or pilot ethical and responsible NLP/GenAI tools, systems, code, and models for MERL and share those lessons in the public domain.
  4. Develop guidance and support materials as public goods to enable responsible uptake and scaling of NLP and Generative AI technologies for MERL, and to equip MERL professionals to better assess and evaluate NLP/GenAI models and uses more broadly.
  5. Influence how the MERL sector (and beyond) understands and views NLP and Generative AI at practical and systems levels and raise the alarm on contexts and instances when the use of NLP/GenAI technologies should be actively discouraged or discontinued.

Become a member!

Membership is free! To become a member of the NLP-CoP, fill in this form indicating your agreement and alignment with the NLP-CoP Charter and Code of Conduct.

What is Natural Language Processing (NLP)?

Natural Language Processing (NLP) combines artificial intelligence, computer programming, statistics, and linguistics to make it possible for computers to read, understand, generate, and draw insights from human language, text, and speech data. In practice, an NLP system learns patterns or rules from a given data set and then applies them to new, larger sets of data to draw conclusions and make predictions.
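
As a minimal sketch of that "learn patterns, then predict" idea, the example below trains a tiny text classifier on a handful of made-up feedback comments and then scores a new comment. It assumes Python and the scikit-learn library purely for illustration; it is not an NLP-CoP recommendation, and real MERL applications would need far more (and more representative) data.

    # Illustrative only: learn from a small labeled data set, then predict on new text.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    # A tiny, made-up set of survey-style comments labeled positive (1) or negative (0).
    texts = [
        "The training was very useful for our team",
        "Sessions started late and facilitators were unclear",
        "Great materials and practical examples",
        "The workshop did not address our questions",
    ]
    labels = [1, 0, 1, 0]

    # "Learn patterns from a given data set": associate word counts with each label.
    model = make_pipeline(CountVectorizer(), MultinomialNB())
    model.fit(texts, labels)

    # "Apply them to new data": predict the label of unseen feedback.
    print(model.predict(["The practical examples really helped us"]))  # e.g. [1]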

NLP has come into the public eye most recently with the release of ChatGPT, Bard, Claude, and other Generative AI (GenAI) tools that build on Large Language Models (LLMs) and NLP capabilities. GenAI learns from existing data and creates new content (including text, images, music, and code) that resembles the data it has been trained on. Current commercial GenAI models were largely trained on a huge corpus of digitized books and web text, with some additional ‘fine-tuning’ done by humans to improve their capacity for useful outputs.
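
For a rough sense of what "generating new content that resembles the training data" looks like in practice, the sketch below uses the open-source Hugging Face transformers library and the small, openly available GPT-2 model. This is an illustrative assumption rather than a tool the CoP endorses, and the output of such a small model is far weaker than that of commercial GenAI tools.

    # Illustrative only: generate a short text continuation with a small open model.
    from transformers import pipeline

    generator = pipeline("text-generation", model="gpt2")
    result = generator(
        "Monitoring and evaluation helps organisations",
        max_new_tokens=30,
    )
    print(result[0]["generated_text"])  # the prompt plus model-generated text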

What do I get out of joining the NLP-CoP?
  • Access to meetings and events.
  • First access to online events and in-person events where space is limited.
  • Access to NLP-CoP LinkedIn discussions, NLP-CoP Slack channels, and/or other discussion forums that the CoP might create.
  • Opportunity to propose topics and to create, lead, and/or participate in working groups (WGs).
  • First access to specific learning, materials and resources created by WGs.
  • Access to ‘expert advice’ via set ‘office hours’ or a ‘help desk’ (as it becomes available).
  • A voice in identifying NLP-CoP priorities, strategic opportunities, and thematic areas of focus.

What Working Groups can I join?
  • Sandbox Working Group: The Sandbox WG works to identify, test, and compare tools and applications; collaborate on open-source NLP-for-MERL approaches; and delve into the technical nuts and bolts of LLMs and of the NLP and GenAI applications that serve as the interface between the models and those who use them.

  • Ethics and Governance Working Group: The EGWG focuses on exploring emerging ethics and governance issues and themes; keeping tabs on and sharing AI ethics and governance policies and guidance; further exploring participation and accountability to individuals and communities; and exploring the application of feminist frameworks for AI ethics and governance.

  • Social and Behavior Change Communications (SBCC) Working Group: This group explores ways that GenAI and NLP can support behavior change efforts, with a focus on whether and how GenAI chatbots can support SBCC and the MERL of SBCC. The group also works to test or showcase results of different approaches, explore and compare cost effectiveness and impact, and develop and implement a research agenda in the area of SBCC and NLP/GenAI.

  • Philanthropy Working Group: This group convenes MERL teams and other staff working in philanthropic organizations for discussions about the application of NLP for MERL in the foundation world; sharing organizational policy guidance; strengthening foundation capacities to assess proposed uses of NLP in programs; and considering the role of foundations in the future of AI.

  • AI in African Evaluation Workstream/Working Group: This group focuses on conversations and practical learning about AI and NLP for MERL in African contexts and on “Made in Africa” AI and NLP tools. It creates space for connection and cross-learning with other NLP/AI and evaluation groups and networks in and outside of Africa.

  • Humanitarian AI Working Group: This group engages with other networks (NetHope, ALNAP, CDAC) to collaborate, co-convene, and support work on greater accountability to affected people when AI is used in conflict and crisis contexts; to open up space for voice and choice around the use of affected people’s data for AI decision-making and algorithm training; and to discuss the role of corporate data capture in crisis and humanitarian contexts.

    Working groups may shift focus over time to adapt to member needs, and new ones will likely emerge.