
You can’t have Aid…without AI: How artificial intelligence may reshape M&E

by Jacob Korenblum, CEO of Souktel Digital Solutions


Potential—And Risk

The rapid growth of Artificial Intelligence—computers behaving like humans, performing tasks that people usually carry out—promises to transform everything from car travel to personal finance. But how will it affect the equally vital field of M&E? As evaluators, most of us hate paper-based data collection—and we know that automation can help us process data more efficiently. At the same time, we’re afraid to remove the human element from monitoring and evaluation: What if the machines screw up?

Over the past year, Souktel has worked on three areas of AI-related M&E, to determine where new technology can best support project appraisals. Here are our key takeaways on what works, what doesn’t, and what might be possible down the road.

Natural Language Processing

For anyone who’s sifted through thousands of Excel entries, natural language processing sounds like a silver bullet: this application of AI interprets text responses rapidly, often matching them against existing data sets to find trends. No need for humans to review each entry by hand! But it currently has two main limitations. First, natural language processing works best on sentences with simple syntax; throw in more complex phrases, or longer text strings, and AI’s ability to grasp open-ended responses drops off quickly. Second, natural language processing works for only a limited number of (mostly European) languages—at least for now. English and Spanish AI applications? Yes. Chichewa or Pashto M&E bots? Not yet.

Given these constraints, we’ve found that AI apps are strongest at interpreting basic misspelled answer text during mobile data collection campaigns (in languages like English or French). They’re weaker at sorting open-ended responses into qualitative categories (positive, negative, neutral). Yet despite these limitations, AI can still help evaluators save time.
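To make the misspelling use case concrete, here is a minimal sketch (not Souktel’s actual pipeline) of snapping noisy free-text answers onto a known answer set with simple fuzzy matching; the answer list and similarity cutoff are illustrative assumptions:

```python
import difflib

# Hypothetical answer set for a mobile survey question (illustrative only).
VALID_ANSWERS = ["yes", "no", "sometimes", "not applicable"]

def normalize_answer(raw: str, valid=VALID_ANSWERS, cutoff=0.6):
    """Map a possibly misspelled response to the closest known answer,
    or return None when nothing is similar enough."""
    matches = difflib.get_close_matches(raw.strip().lower(), valid, n=1, cutoff=cutoff)
    return matches[0] if matches else None
```

A production system would use a trained language model rather than edit distance, but the basic idea, mapping messy input onto a controlled vocabulary, is the same.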

Object Differentiation

AI does a decent job of telling objects apart; we’ve leveraged this to build mobile applications that track supply delivery more quickly and cheaply. If a field staff member submits a photo of syringes and a photo of bandages from their mobile device, we don’t need a human to check “syringes” and “bandages” off a list of delivered items. The AI-based app will do that automatically—saving huge amounts of time and expense, especially during crisis events. Still, there are limitations here too: while AI apps can distinguish between a needle and a Band-Aid, they can’t yet tell us whether the needle is broken, or whether the Band-Aid is the exact same one we shipped. These constraints need to be weighed carefully when using AI for inventory monitoring.
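The checklist step described above is simple once an image classifier (the hard part, typically a pretrained object-recognition model) has turned each submitted photo into a label. A minimal sketch of that bookkeeping, with hypothetical item names:

```python
def update_checklist(predicted_labels, checklist_items):
    """Mark which delivery items have photographic evidence, given the
    labels an image classifier predicted for field staff's photos."""
    seen = set(predicted_labels)
    return {item: item in seen for item in checklist_items}
```

For example, photos classified as “syringes” and “bandages” would check those items off while leaving “gloves” flagged as not yet delivered.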

Comparative Facial Recognition

This may be the most exciting—and controversial—application of AI. The potential is huge: “Qualitative evaluation” takes on a whole new meaning when facial expressions can be captured by cameras on mobile devices. On a more basic level, we’ve been focusing on solutions for better attendance tracking: AI is fairly good at determining whether the people in a photo at Time A are the same people in a photo at Time B. Snap a group pic at the end of each community meeting or training, and you can track longitudinal participation automatically. Take a photo of a larger crowd, and you can rapidly estimate the number of attendees at an event.
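As a rough sketch of the attendance-matching idea: assuming a face-recognition library has already converted each detected face into a numeric encoding, counting returning participants reduces to a nearest-neighbor comparison. The 0.6 distance threshold and the toy two-dimensional encodings below are illustrative; real face encodings are typically 128-dimensional vectors.

```python
import math

def euclidean(a, b):
    """Distance between two face encodings."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def returning_attendees(encodings_a, encodings_b, threshold=0.6):
    """Count faces in the Time B photo that match some face in the
    Time A photo, i.e. likely repeat participants."""
    return sum(
        1 for b in encodings_b
        if any(euclidean(a, b) <= threshold for a in encodings_a)
    )
```

With two faces at Time A and two at Time B, one of which is close to a Time A encoding, this would report one returning attendee.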

However, AI applications in this field have been notoriously bad at recognizing diversity—possibly because they draw on databases of existing images, and most of those images contain…white men. New MIT research has suggested that “since a majority of the photos used to train [AI applications] contain few minorities, [they] often have trouble picking out those minority faces”. For the communities where many of us work (and come from), that’s a major problem.

Do’s and Don’ts

So, how should M&E experts navigate this imperfect world? Our work has yielded a few “quick wins”—areas where Artificial Intelligence can definitely make our lives easier: Tagging and sorting quantitative data (or basic open-ended text), simple differentiation between images and objects, and broad-based identification of people and groups. These applications, by themselves, can be game-changers for our work as evaluators—despite their drawbacks. And as AI keeps evolving, its relevance to M&E will likely grow as well. We may never reach the era of robot focus group facilitators—but if robo-assistants help us process our focus group data more quickly, we won’t be complaining.

MERL Tech and the World of ICT Social Entrepreneurs (WISE)

by Dale Hill, an economist/evaluator with over 35 years’ experience in development and humanitarian work. Dale led the session on “The growing world of ICT Social Entrepreneurs (WISE): Is social impact significant?” at MERL Tech DC 2018.

Roger Nathanial Ashby of OpenWise and Christopher Robert of Dobility share experiences at MERL Tech.

What happens when evaluators trying to build bridges with new private sector actors meet real social entrepreneurs? A new appreciation for the dynamic “World of ICT Social Entrepreneurs (WISE)” and the challenges they face in marketing, pricing, and financing (not to mention measurement of social impact).

During this MERL Tech session on WISE, Dale Hill, evaluation consultant, presented grant-funded research on measuring the social impact of social entrepreneurship ventures (SEVs) from three perspectives. She then invited five ICT company CEOs to comment.

The three perspectives are:

  • the public: How to hold companies accountable, particularly if they have chosen to be legal or certified “benefit corporations”?
  • the social entrepreneurs, who are plenty occupied trying to reach financial sustainability or profit goals, while also serving the public good; and
  • evaluators, who see the important influence of these new actors, but know their professional tools need adaptation to capture their impact.

Dale’s introduction covered overlapping definitions of various categories of SEVs, including legally defined “benefit corporations” and “B Corps”, which are intertwined with the certification options available to social entrepreneurs. The “new middle” of SEVs sits on a spectrum between for-profit companies on one end and not-for-profit organizations on the other. Various types of actors, including social impact investors, new certification agencies, and monitoring and evaluation (M&E) professionals, are now interested in measuring the growing social impact of these enterprises. A show of hands revealed that representatives of most of these groups were present at the session.

The five social entrepreneur panelists all had ICT businesses with global reach, but they varied in legal and certification status and in years of operation (1 to 11). All aimed to deploy new technologies to non-profit organizations or social sector agencies on high-value, low-price terms. Some had worked in non-profits in the past and hoped that venture capital rather than grant funding would prove easier to obtain. Others had worked in government and observed the need for customized solutions, which required market incentives to fully develop.

The evaluator and CEO panelists’ identification of challenges converged in some cases:

  • maintaining affordability and quality when using market pricing
  • obtaining venture capital or other financing
  • worry over “mission drift” – the risk that financial sustainability imperatives or shareholder profit-maximization preferences prevail over founders’ social impact goals; and
  • the still-present digital divide when serving global customers (insufficient bandwidth, affordability issues, and limited small-business capital in some client countries).

New issues raised by the CEOs (and some social entrepreneurs in the audience) included:

  • the need to provide incentives to customers to use quality assurance or security features of software, to avoid falling short of achieving the SEV’s “public good” goals;
  • the possibility of hostile takeover, given high value of technological innovations;
  • the fact that mention of a “social impact goal” was a red flag to some funders, who then went elsewhere to seek profit maximization.

There was also a rich discussion on the benefits and costs of obtaining certification: it was a useful “branding and market signal” to some consumers, but a negative one to some funders; also, it posed an added burden on managers to document and report social impact, sometimes according to guidelines not in line with their preferences.

Surprises?

a) Despite the “hype”, social impact investment funding proved elusive for the panelists. Their options instead included sliding-scale pricing, establishment of a complementary for-profit arm, or debt financing;

b) Many firms were not yet implementing planned monitoring and evaluation (M&E) programs, despite M&E being one of their service offerings; and

c) Legislation on reporting the social impact of benefit corporations varies considerably among the 31 states that have it, and the degree of enforcement is not clear.

A conclusion for evaluators: social entrepreneurs’ use of market solutions creates a dynamic, evolving environment that poses more complex challenges for measuring social impact. Meeting them will require new criteria and tools, ideally timed with an understanding of market ups and downs, and developed with the full participation of the business managers.