Tag Archives: data

Geospatial, location and big data: emerging MERL Tech approaches

Our first webinar in the series Emerging Data Landscapes in M&E, "Geospatial, location and big data: Where have we been and where can we go?", was held on 28 July. We had a lively discussion on the use of these innovative technologies in the world of evaluation.

First, Estelle Raimondo, Senior Evaluation Officer at the World Bank Independent Evaluation Group, framed the discussion with her introduction on Evaluation and emerging data: what are we learning from early applications? She noted how COVID-19 has been an accelerator of change, pushing the evaluation community to explore new, innovative technologies to overcome today’s challenges, and set the stage for the ethical, conceptual and methodological considerations we now face.

Next came the Case Study: Integrating geospatial methods into evaluations: opportunities and lessons from Anupam Anand, Evaluation Officer at the Global Environmental Facility Independent Evaluation Office, and Hur Hassnain, Senior Evaluation Advisor, European Commission DEVCO/ESS. After providing an overview of the advantages of using satellite and remote sensing data, particularly in fragile and conflict zones, the presenters gave examples of their use in Syria and Sierra Leone.

The second Case Study: Observing from space when you cannot observe from the field, was presented by Joachim Vandercasteelen, Young Professional at World Bank Independent Evaluation Group. This example focused on using geospatial data for evaluating a biodiversity conservation project in Madagascar, as traveling to the field was not feasible. The presentation gave an overview on how to use such technology for both quantitative and qualitative assessments, but also the downsides to consider.

Lastly, Alexandra Robinson, Co-Author of Big Data to Data Science: Moving from What to How in the MERL Tech Space, and Market Strategy and Data Ethics Lead at Threshold.World, discussed What are the organizational barriers to adopting new data types for M&E? This presentation focused on six main barriers to using big data, but also shared some key recommendations to improve its use.

The full recording of the webinar, including the PowerPoint presentations and the Questions & Answers session at the end, is available on the EES’ YouTube page.

Over the next month, we will release individual blog posts for each of the presentations, in which the speakers will answer the questions participants raised during the webinar that were not already addressed during the Q&A and provide links to further reading on the subject. These will be publicly available on the EES Blog.

The EES would like to thank our speakers for this engaging webinar, as well as our partners The Development Café, MERL Tech, and the World Bank IEG.

Stay tuned for our next webinar in the series. You can also follow the EES on Twitter, LinkedIn and Facebook, and sign up to receive our monthly newsletter EuropEval Digest for more exciting updates!

Emerging Technologies: How Can We Use Them for MERL?

Guest post from Kerry Bruce, Clear Outcomes

A new wave of technologies and approaches has the potential to influence how monitoring, evaluation, research and learning (MERL) practitioners do their work. The growth in use of smartphones and the internet, digitization of existing data sets, and collection of digital data make data increasingly available for MERL activities. This changes how MERL is conducted and, in some cases, who conducts it.

We recently completed research on emerging technologies for use in MERL as part of a wider research project on The State of the Field of MERL Tech.

We hypothesized that emerging technology is revolutionizing the types of data that can be collected and accessed and the ways that it can be processed and used for better MERL. However, improved research on and documentation of how these technologies are being used is required so the sector can better understand where, when, why, how, and for which populations and which types of MERL these emerging technologies would be appropriate.

The team reviewed the state of the field and found three key new areas of data that MERL practitioners should consider:

  • New kinds of data sources, such as application data, sensor data, data from drones and biometrics. These types of data are providing more access to information and larger volumes of data than ever before.
  • New types of systems for data storage. The most prominent of these were distributed ledger technologies (also known as blockchain), along with increasing use of cloud and edge computing. We discuss the implications of these technologies for MERL.
  • New ways of processing data, mainly from the field of machine learning: specifically, supervised and unsupervised learning techniques that could help MERL practitioners manage large volumes of both quantitative and qualitative data (see the sketch after this list).
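
To make that last point concrete, here is a minimal sketch, in Python with scikit-learn, of the kind of unsupervised technique the paper points to: clustering open-ended responses so a reviewer reads clusters rather than every entry. The responses and cluster count below are invented for illustration and are not from the paper.

```python
# Hedged sketch: group similar open-ended survey responses with k-means
# so a human reviews a handful of clusters instead of thousands of rows.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

responses = [  # invented examples
    "The clinic was too far away to visit regularly",
    "Travel to the health facility takes half a day",
    "The training materials were clear and useful",
    "I learned practical skills from the workshop",
    "Stock-outs meant no medicine was available",
    "The pharmacy often ran out of supplies",
]

# Turn free text into TF-IDF feature vectors.
vectors = TfidfVectorizer(stop_words="english").fit_transform(responses)

# Group similar responses; the analyst then names and validates each cluster.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(vectors)
for label, text in sorted(zip(labels, responses)):
    print(label, text)
```

In practice a human still names and spot-checks the clusters; the technique only reduces how much must be read line by line.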

These new technologies hold great promise for making MERL practices more precise, automated and timely. However, some challenges include:

  • A need to clearly define problems so the choice of data, tool, or technique is appropriate
  • Selection bias from non-representative sampling
  • Reduced MERL practitioner or evaluator control
  • Change management needs as organizations adapt how they manage data
  • Rapid platform changes and difficulty with assessing the costs
  • A need for systems thinking which may involve stitching different technologies together

To address emerging challenges and make the best use of new data, tools, and approaches, we found a need for capacity strengthening among MERL practitioners, greater collaboration between social scientists and technologists, increased documentation, and the incorporation of more systems thinking into MERL practice.

Finally, there remains a need for greater attention to justice, ethics and privacy in emerging technology.

Download the paper here!

Read the other papers in the series here!

Open Call for ideas: 2020 GeOnG forum

Guest post by Nina Eissen from CartONG, organizers of the GeOnG Forum.

The 7th edition of the GeOnG Forum on Humanitarian and Development Data will take place from November 2nd to 4th, 2020 in Chambéry (France). CartONG is launching an Open Call for Suggestions.

Organized by CartONG every two years since 2008, the GeOnG forum gathers humanitarian and development actors and professionals specialized in information management. The GeOnG is dedicated to addressing issues related to data in the humanitarian and development sectors, including topics related to mapping, GIS, data collection & information management. To this end, the forum is designed to allow participants to debate current and future stakes, introduce relevant and innovative solutions and share experience and best practices. The GeOnG is one of the biggest independent fora on the topic in Europe, with an average of 180 participants from 90 organizations in the last three editions.

The main theme of the 2020 edition will be: “People at the heart of Information Management: promoting responsible and inclusive practices”. More information about the choice of this main theme is available here.

We also invite you to discover the 2020 GeOnG teaser video here.

To submit your ideas, please use this online form. The Open Call for Suggestions will remain open until the end of May 2020.

A few topics we hope to see covered during the 2020 GeOnG Forum:

  • How to better integrate vulnerable populations into the data life cycle, with a particular focus on ensuring that the data collected is representative of populations at risk of discrimination.
  • How to implement the Do No Harm approach in relation to data: simple security & protection measures, streamlining of data privacy rights in programming, algorithmization of data processing, etc.
  • What role can the often-considered ‘less direct stakeholders’ of humanitarian and development data (such as civil society actors, governments, etc.) play in identifying clearer pathways to share the data that should be shared for the common good and to protect the data that clearly should not be shared?
  • How to promote data literacy beyond NGO information management and M&E staff to facilitate data-driven decision making.
  • How to ensure that tools and solutions used and promoted by humanitarian and development organizations are also sufficiently user-friendly and inclusive (for instance by limiting in-built biases and promoting human-centric design).
  • Beyond the main theme of the conference, don’t hesitate to send us any idea that you think might be relevant for the next GeOnG edition (about tools, methodologies, lessons learned, feedback from the field, etc.)!

Registration for the conference will open in the Spring of 2020.

No Data, No Problem: Extracting Insights from Data-Poor Environments

by Jonathan Tan

In data-poor environments, what can you do to get what you need? For Arpitha Peteru and Bob Lamb of the Foundation for Inclusion, the answer lies at the intersection of science, story, and simulation. 

The session, “No Data, No Problem: Extracting Insights from Data Poor Environments” began with a philosophical assertion: all data is qualitative, but some can be quantified. The speakers argued that the processes we use to extract insights from data are fundamentally influenced by our personal assumptions, interpretations and biases, and that using data without considering those fundamentals can produce unhelpful insights. As an example, they cited an unnamed cross-national study of fragile states that committed several egregious data sins:

  1. It assumed that household data aggregated at the national level was reliable.
  2. It used an incoherent unit of analysis. Using a country-level metric in Somalia, for example, makes no sense because it ignores the qualitative differences between Somaliland and the rest of the country. 
  3. It ignored the complex web of interactions among several independent variables to produce pairwise correlation metrics that themselves made no sense. 

For Peteru and Lamb, the indiscriminate application of data analysis methods without understanding the forces behind the data is a failure of imagination. They spoke about the Foundation for Inclusion’s approach to social issues, shaped by their appreciation for complex systems. They illustrated the point with a demonstration: when you pour water from a pitcher onto a table, the rate of water leaving the pitcher exactly matches the rate of water hitting the table. If you measured both and looked only at the data, the correlation would be 1, and you could conclude that the working mechanism was that the table was getting wet because water was leaving the pitcher. But what happens when there are unobserved intermediate steps? What if, for instance, the water was flowing into a cup on the table, which had to overflow before hitting the table? Or what if water was being poured into a balloon, which had to cross a certain threshold before bursting and wetting the table? The data in isolation would tell you very little about how the system actually worked. 
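
A toy simulation makes the point concrete. The sketch below is my own illustration, not the speakers’ code: it inserts a hidden “balloon” buffer between the pitcher and the table and shows how the tight correlation disappears once an unobserved intermediate step exists.

```python
# Toy model: water poured each step either hits the table directly
# (threshold=0) or accumulates in a balloon that bursts past a threshold.
import random

def simulate(threshold, steps=1000):
    poured, hit_table = [], []
    balloon = 0.0
    for _ in range(steps):
        pour = random.uniform(0, 1)       # water leaving the pitcher this step
        balloon += pour
        if balloon > threshold:           # balloon bursts: all of it hits the table
            spill, balloon = balloon, 0.0
        else:
            spill = 0.0
        poured.append(pour)
        hit_table.append(spill)
    return poured, hit_table

def corr(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

random.seed(1)
for threshold in (0.0, 5.0):  # no buffer vs. a balloon that bursts at 5 units
    p, h = simulate(threshold)
    print(f"threshold={threshold}: correlation={corr(p, h):.2f}")
```

With no buffer the correlation is exactly 1; with the balloon in place it collapses, even though the underlying pouring behavior never changed.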

What can you do in the absence of good data? Here, the Foundation for Inclusion turns to stories as a source of information. They argue that talking to domain experts, reviewing local media and gathering individual viewpoints can help by revealing patterns and allowing researchers to formulate potential causal structures. Of course, the further one gets from the empirics, the more uncertainty there must be. And that can be quantified and mitigated with sensitivity tests and the like. Peteru and Lamb’s point here was that even anecdotal information can give you enough to assemble a hypothesized system or set of systems that can then be explored and validated – by way of simulation.

Simulations were the final piece of the puzzle. With researchers gaining increasing access to the hardware and computing knowledge necessary to create simulations of complex systems – systems based on information from the aforementioned stories – the speakers argued that simulations were an increasingly viable method of exploring stories and validating hypothesized causal systems. Of course, there is no one-size-fits-all: they discussed several types of simulations – from agent-based models to Monte Carlo models – as well as when each might be appropriate. For instance, health agencies today already make use of sophisticated simulations to forecast the spread of epidemics, where collecting sufficient data would simply be too slow to act upon. By simulating thousands of potential outcomes, varying key parameters across runs, and systematically eliminating the models that had undesirable outcomes or that relied on data with high levels of uncertainty, one could, in theory, be left with a handful of simulations whose parameters would be instructive. 
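
As a rough illustration of that workflow (my own sketch, not the presenters’ models), the snippet below runs a deliberately crude epidemic-style model a thousand times with randomly varied parameters and keeps only the parameter sets whose outcomes stay within an acceptable bound:

```python
# Hedged Monte Carlo sketch: sweep parameters, run the model, filter the runs.
import random

def run_model(infection_rate, recovery_rate, days=120, population=10_000):
    """A crude SIR-flavored toy epidemic model, for illustration only."""
    s, i, r = population - 10, 10, 0
    peak = i
    for _ in range(days):
        new_inf = infection_rate * s * i / population
        new_rec = recovery_rate * i
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        peak = max(peak, i)
    return peak

random.seed(0)
surviving = []
for _ in range(1000):
    params = {
        "infection_rate": random.uniform(0.05, 0.5),
        "recovery_rate": random.uniform(0.05, 0.3),
    }
    if run_model(**params) < 1000:  # keep parameter sets with acceptable peaks
        surviving.append(params)

print(f"{len(surviving)} of 1000 parameter sets kept")
```

The surviving parameter sets are the “handful of simulations whose parameters would be instructive” that the speakers described.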

The purpose of data collection is to produce useful, actionable insights. Thus, in its absence, the Foundation for Inclusion argues that the careful application of science, story, and simulation can pave the way forward.

Collecting Data in Hard to Reach Places

Written by Stephanie Jamilla

By virtue of operating in the international development sphere, we oftentimes work in areas that are remote, isolated, and have little or no internet connection. However, as the presenters from Medic Mobile and Vitamin Angels (VA) argued in their talk, “Data Approaches in Hard-to-Reach Places,” it is possible to overcome these barriers and use technology to collect much-needed program data. The session was split neatly into three parts: a presentation by Mourice Barasa, the Impact Lead of Medic Mobile in Kenya; a presentation by Jamie Frederick, M&E Manager, and Samantha Serrano, M&E Specialist, of Vitamin Angels; and an activity for attendees. 

While both presentations discussed data collection in a global health context and used phone applications as the means of data collection, they illustrated two different situations. Barasa focused on the community health app that Medic Mobile is implementing. It is used by community health teams to better manage their health workers and to ease the process of providing care. The app serves many purposes. For example, it is a communication tool that connects managers and health workers, as well as a performance management tool that tracks the progress of health workers and the types of cases they have worked on. The overall idea is to provide near real time (NRT) data so that health teams have up-to-date information about who has been seen, what patients need, whether patients need to be seen in a health facility, etc. Medic Mobile implemented the app with the Ministry of Health in Siaya, Kenya and currently has 1,700 health workers using the tools. While the use of the app is impressive, Barasa explained various barriers that hinder the app from generating NRT data. Health teams rely on the timestamp sent with every entry to know when a household was visited by a health worker; however, a health worker may wait to upload an entry and use the default time on their phone rather than the actual time of the visit. Poor connectivity, short battery life, and internet subscription costs are also of concern. Medic Mobile is working on improvements, such as exploring the possibility of using offline servers and finding alternatives for phone charging; centralizing the billing of mobile users has already decreased costs from $2,000/month to around $100.

Frederick and Serrano expressed similar difficulties in their presentation — particularly about the timeliness of data upload. However, their situation was different: VA used their app specifically for M&E purposes. The organization wanted to validate the extent to which it was reaching its target population, delivering services at the best-practice standard, and truly filling the 30% gap in coverage that national health services miss. Their monitoring design consisted of taking a random sample of 20% of their field partners and using ODK Collect with a cloud-based survey programmed in ONA on Android devices. VA trained 30 monitors to cover the countries in Latin America and the Caribbean, Africa, and Asia in which it had partners. While the VA Home Office was able to use the data collected on the app well through the cycle from data collection to action, field partners were having trouble with the data in the analysis, reporting, and action stages. Hence, a potential solution was piloted with three partners in Latin America: VA adjusted the surveys in ONA to display a simple report with internal calculations based on the survey data. This report was generated in NRT, allowing partners to access the data quickly, and was formatted so that the data was easily consumable. VA also made sure to gather feedback from partners about the usefulness of monitoring results, to ensure that partners also valued collecting this data. 

These two presentations reinforced that while it is possible to collect data in difficult places, there will always be barriers as well, whether technical or human-related. The group discussion activity revealed other challenges. The presenters prompted the audience with four questions:

  1. What are data collection approaches you have used in hard-to-reach places?
  2. What have been some challenges with these approaches?
  3. How have the data been used?
  4. What have been some challenges with use of these data?

In my group of five, we talked mainly about hindrances to data collection in our own work, such as the cost of some technologies. Another challenge that came up was the gap between having data and visualizing it well, and ensuring that the data we do collect actually translates into action.

Overall, the session helped me think through how important it is to consider potential challenges in the initial design of the data collection and analysis process. The experiences of Medic Mobile and Vitamin Angels demonstrated what difficulties we all will face when collecting data in hard-to-reach places but also that those difficulties can ultimately be overcome.

What Are Your ICT4D Challenges? Take a DIAL Survey to Learn What Helps and Hurts Us All

By Laura Walker McDonald, founder of BetterLab.io. Originally posted on ICT Works on March 26, 2018.

[Image: DIAL ICT4D Survey]

When it comes to the impact and practice of our ICT4D work, we’re long on stories and short on evidence. My previous organization, SIMLab, developed Frameworks on Context Analysis and Monitoring and Evaluation of technology projects to try to tackle the challenge at the micro level.

But we also have little aggregated data about the macro trends and challenges of our growing sector. That’s led the Digital Impact Alliance (DIAL) to conduct an entirely new kind of data-gathering exercise, and one that would add real quantitative data to what we know about what it’s like to implement projects and develop platforms.

Please help us gather new insights from more voices

Please take our survey on the reality of delivering services to vulnerable populations in emerging markets using digital tools. We’re looking for experiences from all of DIAL’s major stakeholder groups:

  • NGO leaders from the project site to the boardroom;
  • Technology experts;
  • Platform providers and mobile network operators;
  • Governments and donors.

We’re complementing this survey with findings from in-depth interviews with 50 people from across those groups.

Please forward this survey!

We want to hear from those whose voices aren’t usually heard by global consultation and research processes. We know that the most innovative work in our space happens in projects and collaborations in the Global South – closest to the underserved communities who are our highest priority.

Please forward this survey so we can hear from those innovators – from the NGOs, government ministries, service providers and field offices who are doing the important work of delivering digitally enabled services to communities, every day.

It’s particularly important that we hear from colleagues in government, who may be supporting digital development projects in ways far removed from the usual digital development conversation.

Why should I take and share the survey?

We’ll use the data to help measure the impact of what we do – this will be a baseline for indicators of interest to DIAL. But it will also provide a unique opportunity for you to help us build a snapshot of the challenges and opportunities you face in your work, whether funding, designing, or delivering these services.

You’ll be answering questions we don’t believe are asked enough – about your partnerships, about how you cover your costs, and about the technical choices you’re making, specific to the work you do – whether you’re a businessperson, NGO worker, technologist, donor, or government employee.

How do I participate?

Please take the survey here. It will take 15-20 minutes to complete, and you’ll be answering questions, among others, about how you design and procure digital projects; how easy and how cost-effective they are to undertake; and what you see as key barriers. Your response can be anonymous.

To thank you for your time, if you leave us your email, we’ll share our findings with you and invite you into the conversation about the results. We’ll also be sharing our summary findings with the community.

We hope you’ll help us – and share this link with others.

Please help us get the word out about our survey, and help us gather more and better data about how our ecosystem really works.

What’s the Deal with Data — Bridging the Data Divide in Development

Written by Ambika Samarthya-Howard, Head of Communications, Praekelt.org. This post was originally published on March 26, 2018, on Medium.

Working on communications at Praekelt.org, I have had the opportunity to see first-hand the power of sharing stories in driving impact and changing attitudes. Over the past month I’ve attended several unrelated events all touching on data, evaluation, and digital development which have reaffirmed the importance of finding common ground to share and communicate data we value.

Storytelling and Data

I recently presented a poster on “Storytelling for Organisational Change” at the University of London’s Behavior Change Conference. Our current evaluations at Praekelt draw on work by the center, which is a game-changer in the field. But I didn’t submit an abstract on our agile, experimental investigations: I was sharing information about how I was using films and our storytelling to create change within the organisation.

After my abstract was accepted, I realized I had to present my findings as a poster. Many practitioners (like myself) really have no idea what a poster entails. Thankfully, I got advice from academics and support from design colleagues to translate my videos, photos, and storytelling deck into a visual form I could pin up. When the printers in New York told me “this is a really great poster”, I started picking up the hint that it was atypical.

Once I arrived at the poster hall at UCL, I could see why. Nearly all, if not all, of the posters in the room had charts and numbers and graphs — lots and lots of data points. On the other hand, my poster had almost no “data”. It was colorful, showed a few engaging images and the story of our human-centered design process, and was accompanied by videos playing on my laptop alongside the booth. It was definitely a departure from the “research” around the room.

This divide between research and practice showed up many times throughout the conference. For starters, this year attendees were asked to choose a sticker label based on whether they were research/academics or programme/practitioners. Many of the sessions talked about how to bridge the divide, make research more accessible to practitioners, and carry learnings from programme creators back to academia.

Thankfully for me, the tight-knit group of practitioners felt solace and connection at my chart-less poster, and perhaps the academics a bit of relief at the visuals as well: we went home with one of the best poster awards at the conference.

Data Parties and Cliques

The London conference was only the beginning of my awareness of the conversations around the data divide in digital development. “Why are we even using the word data? Does anyone else value it? Does anyone else know what it means?” Anthony Waddell, Chief Innovation Officer of IBI, asked provocatively at a breakout session at USAID’s Digital Development Forum in Washington. The conference gathered organisations from around the United States working in digital development, asking them to consider key points around the evolution of digital development in the next decade — access, inclusivity, AI, and, of course, the role of data.

This specific break-out session was about sharing best practices for using and understanding data within organisations, especially among programme teams and country office colleagues. It also expanded to sharing with beneficiaries, governments, and donors. We questioned whose data mattered, why we were valuing data, and how to get other people to care.

Samhir Vasdev, the advisor for Digital Development at IREX, spoke on the panel about MIT’s initiatives and their Data Culture Lab, which shares exercises to help people understand data. He talked about throwing data parties where teams could learn and understand that what they were creating was data, too. The gatherings allow people to explore the data they produce but perhaps have not had a chance to interrogate. The real purpose is to understand what new knowledge their own data tells them, or what further questions the data challenges them to explore. “Data parties are a great way to encourage teams to explore their data and transform it into insights or questions that they can use directly in their programs.”

Understanding data can be empowering. But being shown the road forward doesn’t necessarily mean that’s the road participants can or will take. As Vasdev noted, “Exercises like this come with their own risks. In some cases, when working with data together with beneficiaries who themselves produced that information, they might begin demanding results or action from their data. You have to be prepared to manage these expectations or connect them with resources to enable meaningful action.” One can imagine the frustration if participants saw their data leading to the need for a new clinic, yet a clinic never got built.

Big Data, Bias, and M&E

Opening the MERL (Monitoring, Evaluation, Research, and Learning) Tech Conference in London, André Clarke, Effectiveness and Learning Adviser at Bond, spoke in his keynote about the increasing importance of data in development. Many of the voices in the room resonated with the trends and concerns I’ve observed over the last month. Is data the answer? How is it the answer?

[Image: André Clarke’s keynote at MERL Tech]

“The tool is not going to solve your problem,” one speaker said during the infamous off-the-record Fail Fest where attendees present on their failures to learn from each other’s mistakes. The speaker shared examples of a new reporting initiative which hadn’t panned out as expected. She noted that “we initially thought tech would help us work faster and more efficiently, but now we are clearly seeing the importance of quality data over timely data”. Although digital data may be better and faster, that does not mean it’s solving the original problem.

In using data to evaluate problems, we have to make sure we are actually dealing with the core issues at hand, not merely under the illusion that we are. For example, during my talk on Social Network Analysis, we discussed both the opportunities and challenges of using this quantitative process in M&E. The conference consistently emphasized the importance of slower and deeper processes as opposed to faster and shorter ones driven by technology.

This holds true for how data is used in M&E practices. For example, I attended a heated debate on the role of “big data” in M&E and whether the convergence was inevitable. As one speaker mentioned, “if you close your eyes and forget the issue at hand is big data, you could feel like it was about any other tool used in M&E”. The problems around data collection, bias, inaccessibility, language, and tools were there in M&E regardless of big data or small data.

Other core issues raised were power dynamics, inclusivity, and the fact that technology is made by people and therefore it is not neutral. As Anahi Ayala Iacucci, Senior Director of Humanitarian Programs at Internews, said explicitly “we are biased, and so we are building biased tools.” In her presentation, she talked about how technology mediates and alters human relationships. If we take the slower and deeper approach we will have an ability to really explore biases and understand the value and complications of data.

“Evaluators don’t understand data, and then managers and the public don’t understand evaluation talk,” Maliha Khan of Daira said, bringing it back to my original concerns about translation and bridging gaps in the space. Many of the sessions sought to address this problem, a nice example being Cooper Smith’s Kuunika project in Malawi, which used local visual illustrations to accompany survey questions on tablets. Another speaker pushed for us to move into the measurement space, as opposed to monitoring, which has the potential to become common ground we can all agree on.

As someone who feels responsible not only for communicating our work externally but also for sharing knowledge among our programmes internally, where did all this leave me? I think I’ll take my direction from Anna Maria Petruccelli, Data Analyst at Comic Relief, who suggested that rather than committing to being data-driven, organisations could commit to being data-informed.

To go even further with this advice, at Praekelt we make the distinction between data-driven and evidence-driven, where the latter acknowledges the need to attend to research design and emphasize quality, not just quantity. Evidence encompasses the use of data but includes the idea that not all data are equal, that when interpreting data we attend to both the source of data and research design.

I feel confident that turning our data into knowledge, regardless of how we choose to use it, while staying aware of how bias informs the way we do so, can be the first step forward on a unified journey. I also think this new path forward will leverage the power of storytelling to make data accessible and organisations better informed. It’s a road less traveled, yes, but hopefully that will make all the difference.

If you are interested in joining this conversation, we encourage you to submit to the first ever MERL Tech Jozi. Abstracts due March 31st.

DataDay TV: MERL Tech Edition

What data superpower would you ask for? How would you describe data to your grandparents? What’s the worst use of data you’ve come across? 

These are a few of the questions that TechChange’s DataDay TV Show tackles in its latest episode.

The DataDay Team (Nick Martin, Samhir Vasdev, and Priyanka Pathak) traveled to MERL Tech DC last September to ask attendees some tough data-related questions. They came away with insightful, unusual, and occasionally funny answers.

If you’re a fan of discussing data, technology and MERL, join us at MERL Tech London on March 19th and 20th. 

Tickets are going fast, so be sure to register soon if you’d like to attend!

If you want to take your learning to the next level with a full-blown course, TechChange has a great 2018 schedule, including topics like blockchain, AI, digital health, data visualization, e-learning, and more. Check out their course catalog here.

What about you, what data superpower would you ask for?

Making (some) sense of data storage and presentation in Excel

By Anna Vasylytsya. Anna is in the process of completing her Master’s in Public Policy with an emphasis on research methods. She is excited about the role that data can play in improving people’s lives!

At the MERL Tech Conference, I attended a session called “The 20 skills that solve 80% of M&E problems” presented by Dr. Leslie Sage of DevResults. I was struck by the practical recommendations Leslie shared that can benefit anyone that uses Excel to store and/or present data.

I boiled down the 20 skills presented in the session into three key takeaways, below.

1. Discerning between data storage and data presentation

Data storage and data presentation serve two different functions, and never the twain shall meet. In other words, data storage is never data presentation.

Proper data storage should not contain merged cells, subheadings, color used to denote information, different data types within a cell (numbers and letters), or more than one piece of data in a cell (such as disaggregations). Additionally, in proper data storage, the columns should be the variables and the rows the observations, or vice versa. Poor data storage practices need to be avoided because they mean that you cannot use Excel’s features to present the data.

A common example of poor data storage:

[Image: example spreadsheet showing poor data storage]

One of the reasons that this is not good data storage is that you cannot manipulate the data using Excel’s features. If you needed this data in a different format, or you wanted to visualize it, you would have to rework it manually, which would be time consuming.

Here is the same data presented in a “good” storage format:

[Image: the same data in a good storage format]

Data stored this way may not look as pretty, but it is not meant to be presented or read within the sheet. This is an example of good data storage because each unique observation gets a new row in the spreadsheet. When you properly store data, it is easy for Excel to aggregate and summarize it in a pivot table, for example.
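
For readers who also work outside Excel, the same principle carries over to scripting tools. Here is a minimal pandas sketch (my own, with invented numbers, not from the session) of one-observation-per-row storage and how cheap aggregation becomes once data is stored that way:

```python
# Hedged sketch: tidy "long" storage, one observation per row.
import pandas as pd

# Hypothetical data, invented for illustration.
records = pd.DataFrame({
    "district": ["North", "North", "South", "South", "South"],
    "quarter":  ["Q1", "Q2", "Q1", "Q2", "Q2"],
    "people_trained": [120, 95, 80, 110, 40],
})

# Because storage is tidy, summarizing takes one line.
print(records.groupby("district")["people_trained"].sum())
```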

2. Use Excel’s features to organize and clean data

You do not have to spend precious time organizing or cleaning data manually. Here are a few recommendations on Excel’s data organization and cleaning features:

  • To join two cells that contain text into one cell, use the concatenate function.
  • To split text from one cell into different cells, use the text-to-columns feature.
  • To clean text data, use Excel’s functions: trim, lower, upper, proper, right, left, and len.
  • To move data from rows into columns or columns into rows, use Excel’s transpose feature.
  • There is a feature to remove duplicates from the data.
  • Create a macro to automate simple repetitive steps in Excel.
  • Insert data validation in an excel spreadsheet if you are sending a data spreadsheet to implementers or partners to fill out.
    • This restricts the type of data or values that can be entered in certain parts of the spreadsheet.
    • It also saves you time from having to clean the data after you receive it.
  • Use the vlookup function in Excel, in your offline version, to look up a Unique ID (a scripted parallel to several of these features is sketched after this list)
    • Funders or donors normally require that data is anonymized if it is made public. While not the best option for anonymizing data, you can use Excel if you haven’t been provided with specific tools or processes.
    • You can create an “online” anonymized version that contains a Unique ID and an “offline version” (not public) containing the ID and Personally Identifiable Information (PII). Then, if you needed to answer a question about a Unique ID, for example, your survey was missing data and you needed to go back and collect it, you can use vlookup to find a particular record.
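
As noted above, several of these Excel features have direct scripted parallels. The pandas sketch below is my own mapping, with invented data: it mirrors concatenate, text-to-columns, trim/proper, remove duplicates, transpose, and vlookup.

```python
# Hedged sketch of pandas equivalents to the Excel features listed above.
import pandas as pd

df = pd.DataFrame({  # invented, messy example data
    "first": [" Amina ", "JOSÉ", "Li", "Li"],
    "last":  ["Diallo", "garcía", "Wei", "Wei"],
})

# Concatenate + trim/proper: join cleaned text columns into one.
df["full_name"] = (df["first"].str.strip().str.title()
                   + " " + df["last"].str.strip().str.title())

# Text to columns: split one column into several.
df[["given", "family"]] = df["full_name"].str.split(" ", n=1, expand=True)

# Remove duplicates: drop repeated rows.
df = df.drop_duplicates()

# Transpose: swap rows and columns.
transposed = df.T

# Vlookup equivalent: join an offline Unique ID table onto the records.
ids = pd.DataFrame({"unique_id": [1, 2, 3],
                    "given": ["Amina", "José", "Li"]})
print(df.merge(ids, on="given", how="left"))
```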

3. Use Excel’s features to visualize data

One of the reasons to organize data properly is so that you can use Excel’s Pivot Table feature.

Here is an example of a pivot table made from the data in the good data storage example above (which took about a minute to make):

[Image: pivot table summarizing the stored data]

Using the pivot table, you can then use Excel’s Create a Chart Feature to quickly make a bar graph:

[Image: bar graph created from the pivot table]
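
The same pivot-then-chart workflow can also be scripted. Here is a hedged pandas parallel (mine, reusing the invented numbers from the earlier sketch; the chart line assumes matplotlib is installed):

```python
# Hedged sketch: pivot-table-to-bar-chart in pandas.
import pandas as pd

records = pd.DataFrame({  # same invented data as the earlier sketch
    "district": ["North", "North", "South", "South", "South"],
    "quarter":  ["Q1", "Q2", "Q1", "Q2", "Q2"],
    "people_trained": [120, 95, 80, 110, 40],
})

pivot = records.pivot_table(index="district", columns="quarter",
                            values="people_trained", aggfunc="sum")
print(pivot)

# One line to get the bar chart (requires matplotlib).
pivot.plot(kind="bar")
```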

In the Future

I have fallen prey to poor data storage practices in the past. Now that I have learned these best practices and features of Excel, I know I will improve my data storage and presentation practices. And now that I have shared them with you, I hope that you will too!

Please note that in this post I did not discuss how Excel’s functions or features work or how to use them. There are plenty of resources online to help you discover and explore them. Some helpful links have been included as a start. Additionally, the data presented here are fictional and created purely for demonstration purposes.

You can’t have Aid…without AI: How artificial intelligence may reshape M&E

by Jacob Korenblum, CEO of Souktel Digital Solutions

Potential—And Risk

The rapid growth of Artificial Intelligence—computers behaving like humans, and performing tasks which people usually carry out—promises to transform everything from car travel to personal finance. But how will it affect the equally vital field of M&E? As evaluators, most of us hate paper-based data collection—and we know that automation can help us process data more efficiently. At the same time, we’re afraid to remove the human element from monitoring and evaluation: What if the machines screw up?

Over the past year, Souktel has worked on three areas of AI-related M&E, to determine where new technology can best support project appraisals. Here are our key takeaways on what works, what doesn’t, and what might be possible down the road.

Natural Language Processing

For anyone who’s sifted through thousands of Excel entries, natural language processing sounds like a silver bullet: This application of AI interprets text responses rapidly, often matching them against existing data sets to find trends. No need for humans to review each entry by hand! But currently, it has two main limitations: First, natural language processing works best for sentences with simple syntax. Throw in more complex phrases, or longer text strings, and the power of AI to grasp open-ended responses goes downhill. Second, natural language processing only works for a limited number of (mostly European) languages—at least for now. English and Spanish AI applications? Yes. Chichewa or Pashto M&E bots? Not yet. Given these constraints, we’ve found that AI apps are strongest at interpreting basic misspelled answer text during mobile data collection campaigns (in languages like English or French). They’re less good at categorizing open-ended responses by qualitative category (positive, negative, neutral). Yet despite these limitations, AI can still help evaluators save time.
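
To illustrate the kind of text categorization being described, here is a minimal supervised-learning sketch in Python with scikit-learn. The responses and labels are invented, and this is not Souktel’s system; it shows the mechanics, not a fix for the accuracy limits noted above.

```python
# Hedged sketch: label a small training set by hand, train a classifier,
# and let it triage the remaining open-ended responses.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [  # invented examples
    "the program helped me a lot", "very useful training",
    "nothing has changed for us", "the sessions were a waste of time",
    "it was okay, some parts useful", "no strong opinion either way",
]
train_labels = ["positive", "positive", "negative",
                "negative", "neutral", "neutral"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(train_texts, train_labels)

print(model.predict(["the training was genuinely helpful"]))
```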

Object Differentiation

AI does a decent job of telling objects apart; we’ve leveraged this to build mobile applications which track supply delivery more quickly & cheaply. If a field staff member submits a photo of syringes and a photo of bandages from their mobile, we don’t need a human to check “syringes” and “bandages” off a list of delivered items. The AI-based app will do that automatically—saving huge amounts of time and expense, especially during crisis events. Still, there are limitations here too: While AI apps can distinguish between a needle and a BandAid, they can’t yet tell us whether the needle is broken, or whether the BandAid is the exact same one we shipped. These constraints need to be considered carefully when using AI for inventory monitoring.
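
For a sense of what off-the-shelf object differentiation looks like in code, here is a hedged sketch using a pretrained torchvision classifier; the file name is hypothetical and this is illustrative, not Souktel’s pipeline. Conveniently, ImageNet’s label set includes both “syringe” and “Band Aid”.

```python
# Illustrative only: classify a delivery photo with a pretrained ImageNet model.
import torch
from PIL import Image
from torchvision import models

weights = models.ResNet50_Weights.DEFAULT
model = models.resnet50(weights=weights)
model.eval()

# The weights object ships with the matching preprocessing transforms.
preprocess = weights.transforms()
img = preprocess(Image.open("delivery_photo.jpg")).unsqueeze(0)  # hypothetical file

with torch.no_grad():
    probs = torch.softmax(model(img)[0], dim=0)

top = torch.topk(probs, 3)
labels = weights.meta["categories"]
for idx, p in zip(top.indices, top.values):
    print(f"{labels[int(idx)]}: {p.item():.2f}")  # e.g. syringe / Band Aid
```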

Comparative Facial Recognition

This may be the most exciting—and controversial—application of AI. The potential is huge: “Qualitative evaluation” takes on a whole new meaning when facial expressions can be captured by cameras on mobile devices. On a more basic level, we’ve been focusing on solutions for better attendance tracking: AI is fairly good at determining whether the people in a photo at Time A are the same people in a photo at Time B. Snap a group pic at the end of each community meeting or training, and you can track longitudinal participation automatically. Take a photo of a larger crowd, and you can rapidly estimate the number of attendees at an event.
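
As a hedged sketch of the attendance idea (my own, using the open-source face_recognition library; the photo file names are hypothetical), the comparison takes only a few lines:

```python
# Illustrative only: did the faces from Week 1's group photo reappear in Week 2's?
import face_recognition

photo_a = face_recognition.load_image_file("meeting_week1.jpg")  # hypothetical
photo_b = face_recognition.load_image_file("meeting_week2.jpg")  # hypothetical

encodings_a = face_recognition.face_encodings(photo_a)
encodings_b = face_recognition.face_encodings(photo_b)

returned = 0
for face in encodings_b:
    # True if this Week 2 face matches any face seen in Week 1.
    if any(face_recognition.compare_faces(encodings_a, face)):
        returned += 1

print(f"Week 1 attendees detected: {len(encodings_a)}")
print(f"Week 2 attendees detected: {len(encodings_b)}, "
      f"of whom {returned} also appeared in week 1")
```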

However, AI applications in this field have been notoriously bad at recognizing diversity—possibly because they draw on databases of existing images, and most of those images contain…white men. New MIT research has suggested that “since a majority of the photos used to train [AI applications] contain few minorities, [they] often have trouble picking out those minority faces”. For the communities where many of us work (and come from), that’s a major problem.

Do’s and Don’ts

So, how should M&E experts navigate this imperfect world? Our work has yielded a few “quick wins”—areas where Artificial Intelligence can definitely make our lives easier: Tagging and sorting quantitative data (or basic open-ended text), simple differentiation between images and objects, and broad-based identification of people and groups. These applications, by themselves, can be game-changers for our work as evaluators—despite their drawbacks. And as AI keeps evolving, its relevance to M&E will likely grow as well. We may never reach the era of robot focus group facilitators—but if robo-assistants help us process our focus group data more quickly, we won’t be complaining.