Tag Archives: MERL Tech

Check out the agenda for MERL Tech Jozi!

We’re thrilled that the first MERL Tech conference is happening in Johannesburg, South Africa, August 1-2, 2018!

MERL Tech Jozi will be two days of in-depth sharing and exploration with 100 of your peers.  We’ll look at what’s been happening across the multi-disciplinary MERL field, including what we’ve been learning and the complex barriers that still need resolving. We’ll also generate lively debates around the possibilities and the challenges that our field needs to address as we move ahead.

The agenda for MERL Tech Jozi 2018 is now available. Take a look and register to attend!

Register to attend MERL Tech Jozi!

We’ll have workshops, panels, discussions, case studies, lightning talks, demo tables, community building, socializing, and an evening reception with a Fail Fest!

Session areas include:

  • digital data collection and management
  • data visualization
  • social network analysis
  • data quality
  • remote monitoring
  • organizational capacity for digital MERL
  • big data
  • small data
  • ethics, bias and privacy when using digital data in MERL
  • biometrics, spatial analysis, machine learning
  • WhatsApp, SMS, IVR and USSD

Take a look at the agenda to find the topics, themes and tools that are most interesting to you and to learn more about the different speakers and facilitators and their work.

Tickets are going fast, so be sure to snap yours up before it’s too late! (Register here)

MERL Tech Jozi is supported by:

Improve Data Literacy at All Levels within Your Humanitarian Programme

This post is by Janna Rous at Humanitarian Data. The original was published here on April 29, 2018

Imagine this picture of data literacy at all levels of a programme:

You’ve got a “donor visit” to your programme. The country director and a project officer accompany the donor on a field trip, and they all visit a household within one of the project communities. Sitting around a cup of tea, they start a discussion about data. In this discussion, the household members explain what data has been collected and why. The country director explains what surprised him or her in the data. And the donor describes how they made the decision to fund the programme based on that data. What if no one were surprised by the discussion, or by how the data was used, because they’d ALL seen and understood the data process?

Data literacy can mean lots of different things depending on who you are.  It could mean knowing how to:

  • collect, analyze and use data;
  • make sense of data and use it for management;
  • validate data and be critical of it;
  • tell good data from bad and know how credible it is;
  • ensure everyone is confident talking about data.

IS “IMPROVING DATA LITERACY FOR ALL LEVELS” A TOP PRIORITY FOR THE HUMANITARIAN SECTOR?

“YES” data literacy is a priority!  Poor data literacy is still a huge stumbling block for many people in the sector and needs to be improved at ALL levels – from community households to field workers to senior management to donors.  However, there are a few challenges in how this priority is worded.

IS “LITERACY” THE RIGHT WORD?

Suggesting someone is “illiterate” when it comes to data – that doesn’t sit well with most people.  Many aid workers – from senior HQ staff right down to beneficiaries of a humanitarian programme – are well-educated and successful. Not only are they literate, but most speak 2 or more languages!  So to insinuate “illiteracy” doesn’t feel right.

Illiteracy is insulting…

Many of these same people are not super-comfortable with “data”,  but to ask them if they “struggle” with data, or to suggest they “don’t understand” by claiming they are “data illiterate” is insulting (even if you think it’s true!).

Leadership is enticing…

The language you use is extremely important here.  Instead of “literacy”, should you be talking about “leadership”?  What if you framed it as:  Improving data leadership.  Could you harness the desirability of that skill – leadership – so that workshop and training titles played into people’s egos, instead of attacking their egos?

WHAT CAN YOU DO TO IMPROVE DATA LITERACY (LEADERSHIP) WITHIN YOUR OWN ORGANIZATION?

You might be directly involved with helping to improve data literacy within your own organization.  Here are a few ideas on how to improve general data literacy/leadership:

  • Training and courses around data literacy.

While courses that focus on data analysis using programming languages such as R or Python exist, it may be more sustainable to focus skills development on more widely used software, such as Excel. Given the high turnover of staff in the sector, complex data analysis can rarely be sustained once an advanced analyst leaves the field.

  • Donor funding to promote data use and the use of technology.

While the sector should not rely only on donors to push the agenda of data literacy forward, money is powerful.  If NGOs and agencies are required to demonstrate data literacy in order to receive funding, this will drive a paradigm shift toward a more data-driven sector.  There are still big questions about how to fund interoperable tech systems so that multiple agencies can collaborate and maximize the value of that funding.  However, donors who can provide structures and settings for collaboration will be able to promote data literacy across the sector.

  • Capitalize on “trendy” knowledge – what do people want to know about because it makes them look intelligent?

In 2015/16, everyone wanted to know “how to collect digital data”.  A couple years later, most people had shifted – they wanted to know “how to analyze data” and “make a dashboard”.  Now in 2018, GDPR and “Responsible Data” and “Blockchain” are trending – people want to know about it so they can talk about it.  While “trends” aren’t all we should be focusing on, they can often be the hook that gets people at all levels of our sector interested in taking their first steps forward in data literacy.

DATA LITERACY MEANS SOMETHING DIFFERENT FOR EACH PERSON

Data literacy means something completely different depending on who you are, your perspective within a programme, and what you use data for.

To the beneficiary of a programme…

data literacy might just mean understanding why data is being collected and what it is being used for.  It means having the knowledge and power to give and withhold consent appropriately.

To a project manager…

data literacy might mean understanding indicator targets, progress, and the calculations behind those numbers, in addition to how different datasets relate to one another in a complex setting.  Managers need to understand how data is coming together so that they can ask intelligent questions about their programme dashboards.

To an M&E officer…

data literacy might mean an understanding of statistical methods, random selection methodologies, how significant a result may be, and how to interpret results of indicator calculations.  They may need to understand uncertainty within their data and be able to explain this easily to others.

To the Information Management team…

data literacy might mean understanding how to translate programme calculations into computer code.  They may need to create data collection or data analysis or data visualization tools with an easy-to-understand user-interface.  They may ultimately be relied upon to ensure the correctness of the final “number” or the final “product”.

To the data scientist…

data literacy might mean understanding some very complex statistical calculations, using computer languages and statistical packages to find trends, insights, and predictive capabilities within datasets.

To the management team…

data literacy might mean being able to use data results (graphs, charts, dashboards) to explain needs, results, and impact in order to convince and persuade. Using data in proposals to give a good basis for why a programme should exist or using data to explain progress to the board of directors, or even as a basis for why a new programme should start up….or close down.

To the donor…

data literacy might mean understanding a “good” needs assessment vs. a “poor” one when evaluating a project proposal, knowing how to prioritize areas and amounts of funding, how to ask tough questions of an individual partner, how to be suspicious of numbers that may be too good to be true, how to weigh quality vs. quantity, or how to spot areas of collaboration between multiple partners.  Donors also need to use data to communicate international priorities to their own wider government, board, or citizens.

Use more precise wording

Data literacy means something different to everyone.  So this priority can be interpreted in many different ways depending on who you are.  Within your organization, frame this priority with a more precise wording.  Here are some examples:

  • Improve everyone’s ability to raise important questions based on data.
  • Let’s get better at discussing our data results.
  • Improve our leadership in communicating the meaning behind data.
  • Develop our skills in analyzing and using data to create an impact.
  • Improve our use of data to inform our decisions.

This blog article was based on a recent session at MERL Tech UK 2018.  Thanks to the many voices who contributed ideas.  I’ve put my own spin on them to create this article – so if you disagree, the ideas are mine.  And if you agree – kudos to the brilliant people at the conference!

****

Register now for MERL Tech Jozi, August 1-2 or MERL Tech DC, September 6-7, 2018 if you’d like to join the discussions in person!

 

Reinventing the flat tire… don’t build what is already built!

by Ricardo Santana, MERL Practitioner

One typical factor that delays many projects in international development is designing and building hardware and software from scratch to provide a certain feature or accomplish a task. While it is true that in some cases a bespoke design is required, in most cases the outputs can be achieved with solutions already available on the market.

Why is this important? Because we witness, over and over again, budgets being wasted on mismanaged projects and programs, delaying solutions, generating skepticism among funders, beneficiaries and other stakeholders, and finally delivering poor results. It is sad to realize that some of these issues could have been avoided simply by using solutions and products that are already available, proven, and reasonably priced.

Then, what do we do? It is hard to find solutions aimed at international development just by browsing the Internet. During MERL Tech London 2018, the NGO Engineering for Change presented their Solutions Library. (Disclaimer: I have contributed to the library by analysing products, software and tools in different application spaces.) In this database it is possible to explore and consult many available solutions that may help tackle a specific challenge or deliver a good result.

This doesn’t mean that it is the only place to rely on for everything, or that projects absolutely need to adapt their processes to what is available. But as a professional responsible for evaluating and optimizing projects and programs in government and international development, I know it is always a good place to consult on different technologies designed to help overcome social inequalities, increase access to services, or automate and simplify monitoring, evaluation, research and learning processes.

Through my collaboration with this platform, I came to know many different solutions for performing and effectively managing MERL processes. Some of these include Magpi, Ushahidi, Epicollect5, RapidPro, mWater, SurveyCTO and VOTO Mobile. Some are proprietary and some are open source. Some are for managing disaster scenarios, others for running polls, for health, or for other services. What is impressive is the variety of solutions.

This was a bittersweet discovery for me. Like many other professionals, I have wasted significant resources and time developing software that already existed in robust, previously tested forms that were, in many cases, more cost-effective and faster to deploy. However, knowledge is power: many solutions are now on my radar, and I have developed a clear sense of the need to explore before implementing.

And that is my humble advice to anyone responsible for deploying a Monitoring, Evaluation, Research and Learning process within their projects. Before you start working like crazy, as we all do out of strong commitment to our responsibilities, take some time to research which platforms and software already on the market might suit your needs, and evaluate whether any of them is feasible or useful before rebuilding every single thing from scratch. That will certainly improve your effectiveness and optimize your delivery cost and time.

As Mariela said in her MERL Tech Lightning Talk: Don’t reinvent the flat tire! You can submit ideas for the Solutions Library or participate as a solutions reviewer too. You can also find more information on the library and how solutions are vetted here at the Library website.

Register now for MERL Tech Jozi, August 1-2 or MERL Tech DC, September 6-7, 2018 if you’d like to join the discussions in person!

Big data or big hype: a MERL Tech debate

by Shawna Hoffman, Specialist, Measurement, Evaluation and Organizational Performance at the Rockefeller Foundation.

Both the volume of data available at our fingertips and the speed with which it can be accessed and processed have increased exponentially over the past decade.  The potential applications of this to support monitoring and evaluation (M&E) of complex development programs have generated great excitement.  But is all the enthusiasm warranted?  Will big data integrate with evaluation — or is this all just hype?

A recent debate that I chaired at MERL Tech London explored these very questions. Alongside two skilled debaters (who also happen to be seasoned evaluators!) – Michael Bamberger and Rick Davies – we sought to unpack whether integration of big data and evaluation is beneficial – or even possible.

Before we began, we used Mentimeter to see where the audience stood on the topic.

Once the votes were in, we started.

Both Michael and Rick have fairly balanced and pragmatic viewpoints; however, for the sake of a good debate, and to help unearth the nuances and complexity surrounding the topic, they embraced the challenge of representing divergent and polarized perspectives – with Michael arguing in favor of integration, and Rick arguing against.

“Evaluation is in a state of crisis,” Michael argued, “but help is on the way.” Arguments in favor of the integration of big data and evaluation centered on a few key ideas:

  • There are strong use cases for integration. Data science tools and techniques can complement conventional evaluation methodology, providing cheap, quick, complexity-sensitive, longitudinal, and easily analyzable data.
  • Integration is possible. Incentives for cross-collaboration are strong, and barriers to working together are falling. Traditionally these fields have been siloed, and their relationship has been characterized by a mutual lack of understanding (or even questioning of the other’s motivations or professional rigor).  However, data scientists are increasingly recognizing the benefits of mixed methods, and evaluators are seeing the potential of big data to increase the number and types of evaluation that can be conducted within real-world budget, time and data constraints. There are some compelling examples (explored in this UN Global Pulse Report) of where integration has been successful.
  • Integration is the right thing to do.  New approaches that leverage the strengths of data science and evaluation are potentially powerful instruments for giving voice to vulnerable groups and promoting participatory development and social justice.   Without big data, evaluation could miss opportunities to reach the most rural and remote people.  Without evaluation (which emphasizes transparency of arguments and evidence), big data algorithms can be opaque “black boxes.”

While this may paint a hopeful picture, Rick cautioned the audience to temper its enthusiasm. He warned of the risk of domination of evaluation by data science discourse, and surfaced some significant practical, technical, and ethical considerations that would make integration challenging.

First, big data are often non-representative, and the algorithms underpinning them are non-transparent. Second, “the mechanistic approaches offered by data science, are antithetical to the very notion of evaluation being about people’s values and necessarily involving their participation and consent,” he argued. It is – and will always be – critical to pay attention to the human element that evaluation brings to bear. Finally, big data are helpful for pattern recognition, but the ability to identify a pattern should not be confused with true explanation or understanding (correlation ≠ causation). Overall, there are many problems that integration would not solve for, and some that it could create or exacerbate.

The debate confirmed that this question is complex, nuanced, and multi-faceted. It was a reminder that there is cause for enthusiasm and optimism, alongside a healthy dose of skepticism. What was made very clear is that the future should leverage the respective strengths of these two fields in order to maximize good and minimize potential risks.

In the end, the side in favor of integration of big data and evaluation won the debate by a considerable margin.

The future of integration looks promising, but it’ll be interesting to see how this conversation unfolds as the number of examples of integration continues to grow.

Interested in learning more and exploring this further? Stay tuned for a follow-up post from Michael and Rick. You can also attend MERL Tech DC in September 2018 if you’d like to join in the discussions in person!

Takeaways from MERL Tech London

Written by Vera Solutions and originally published here on 16th April 2018.

In March, Zak Kaufman and Aditi Patel attended the second annual MERL Tech London conference to connect with leading thinkers and innovators in the technology for monitoring and evaluation space. In addition to running an Amp Impact demo session, we joined forces with Joanne Trotter of the Aga Khan Foundation as well as Eric Barela and Brian Komar from Salesforce.org to share lessons learned in using Salesforce as a MERL Tech solution. The panel included representatives from Pencils of Promise, the International Youth Foundation, and Economic Change, and was an inspiring showcase of different approaches to and successes with using Salesforce for M&E.

The event packed two days of introspection, demo sessions, debates, and sharing of how technology can drive more efficient program monitoring, stronger evaluation, and a more data-driven social sector. The first day concluded with a (hilarious!) Fail Fest–an open and honest session focused on sharing mistakes in order to learn from them.

At MERL Tech London in 2017, participants identified seven priority areas that the MERL Tech community should focus on:

  1. Responsible data policy and practice
  2. Improving data literacy
  3. Interoperability of data and systems
  4. User-driven, accessible technologies
  5. Participatory MERL/user-centered design
  6. Lean MERL/User-focused MERL
  7. Overcoming “extractive” data approaches

These priorities were revisited this year, and it seemed to us that almost all revolve around a recurrent theme of the two days: focusing on the end user of any MERL technology. The term “end user” was not itself without controversy–after all, most of our MERL tech tools involve more than one kind of user.

When trying to dive into the fourth, fifth, and sixth priorities, we often came back to the issue of who is the proverbial “user” for whom we should be optimizing our technologies. One participant mentioned that regardless of who it is, the key is to maintain a lens of “Do No Harm” when attempting to build user-centered tools.

The discussion around the first and seventh priorities naturally veered into a discussion of the General Data Protection Regulation (GDPR), and how we can do better as a sector by using it as a guideline for data protection beyond Europe.

A heated session with Oxfam, Simprints, and the Engine Room dove into the pros, cons, and considerations of biometrics in international development. The overall sense was that biometrics can offer tremendous value for issues like fraud prevention and healthcare, but they also heighten the sector’s challenges and risks around data protection. This is clearly a topic where much movement can be expected in the coming years.

In addition to meeting dozens of NGOs, we connected with numerous tech providers working in the space, including SimPrints, SurveyCTO, Dharma, Social Cops, and DevResults. We’re always energized to learn about others’ tools and to explore integration and collaboration opportunities.

We wrapped up the conference at a happy hour event co-hosted by ICT4D London and Salesforce.org, with three speakers focused on ‘ICT as a catalyst for gender equality’. A highlight from the evening was a passionate talk by Seyi Akiwowo, Founder of Glitch UK, a young organization working to reduce online violence against women and girls. Seyi shared her experience as a victim of online violence and how Glitch is turning the tables to fight back.

We’re looking forward to the first MERL Tech Johannesburg, taking place August 1-2, 2018.

 

Technologies in monitoring and evaluation | 5 takeaways

Bloggers: Martijn Marijnis and Leonard Zijlstra. This post originally appeared on the ICCO blog on April 3, 2018.


On March 19 and 20, ICCO participated in MERL Tech 2018 in London. The conference explores the possibilities of technology in monitoring, evaluation, learning and research in development. About 200 like-minded participants from various countries took part. Key issues on the agenda were data privacy, data literacy within and beyond your organization, human-centred monitoring design and user-driven technologies. Interesting practices were shared, among others in using blockchain technologies and machine learning. Here are our most important takeaways:

1)  In many NGOs data gathering still takes place in silos

Oxfam UK shared some valuable insights and practical tips on putting in place an infrastructure that combines data: start small and test, e.g. by building up a strong country use case; discuss with and learn from others; ensure privacy by design; and make sure senior leadership is involved. ICCO Cooperation currently faces a similar challenge, in particular in combining our household data with our global result indicators.

2)  Machine learning has potential for NGOs

While ICCO recently started to test machine learning in the food security field (see this blog), other organisations showcased interesting examples. The Wellcome Trust shared a case in which they tried to answer the following question: is the organization informing and influencing policy, and if so, how? Wellcome paired its data lab with its insight & analysis team and started to use open APIs to pull data, combined with natural language processing, to identify relevant cases of research supported by the organization. With around 8,000 publications a year, this would be a daunting task for a human team. First, publications linked to Wellcome funding were extracted from a European database (EPMC), in combination with end-of-grant reports. Then WHO’s reference sections were scraped to see whether, and to what extent, WHO policy had been influenced, and to identify potentially interesting cases for Wellcome’s policy team.
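To make the first step of that pipeline concrete, here is a minimal Python sketch (a hypothetical illustration, not Wellcome’s actual code) that pulls funder-linked publications from the open Europe PMC (EPMC) REST API. The GRANT_AGENCY query field and the JSON layout are assumptions based on the public API documentation, and the natural language processing and WHO-scraping steps are not shown.

```python
import requests

EPMC_SEARCH = "https://www.ebi.ac.uk/europepmc/webservices/rest/search"

def funded_publications(funder="Wellcome Trust", page_size=25):
    """Return publications that Europe PMC links to the given funder."""
    params = {
        "query": f'GRANT_AGENCY:"{funder}"',  # assumed field name; check the EPMC search syntax
        "format": "json",
        "pageSize": page_size,
    }
    resp = requests.get(EPMC_SEARCH, params=params, timeout=30)
    resp.raise_for_status()
    return resp.json().get("resultList", {}).get("result", [])

# Print a few titles as a sanity check.
for pub in funded_publications()[:5]:
    print(pub.get("title"), "|", pub.get("pubYear"))
```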

3)  Use a standardized framework for digital development

See digitalprinciples.org. It gives, among other things, practical guidelines on how to use open standards and open data, how data can be reused, how privacy and security can be addressed, and how users can and should be involved when using technologies in development projects. It is a useful framework for evaluating your design.

4)  Many INGOs get nervous these days about blockchain technology

What is it: a new hype or a real game changer? For many it is just untested technology with high risks and little upside for the developing world. But for INGOs working in agricultural value chains or in humanitarian relief operations, for example, its potential is definitely consequential enough to merit a closer look. It starts with the underlying principle that users of a so-called blockchain can transfer value, or assets, between each other without the need for a trusted intermediary. The running history of these transactions is called the blockchain, and transactions are grouped into blocks. All transactions are recorded in a ledger that is shared by all users of a blockchain.
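To make that underlying principle tangible, below is a toy Python sketch of hash chaining (purely illustrative: no consensus mechanism, peer-to-peer network or cryptocurrency, and all names and amounts are invented). Each block stores the hash of the previous block, so a shared ledger can detect tampering with earlier records.

```python
import hashlib
import json
import time

def block_hash(block):
    """Hash every field except the stored hash itself."""
    payload = {k: v for k, v in block.items() if k != "hash"}
    return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()

def make_block(transaction, prev_hash):
    block = {"transaction": transaction, "prev_hash": prev_hash, "timestamp": time.time()}
    block["hash"] = block_hash(block)
    return block

# A tiny shared ledger: two payments recorded one after the other.
ledger = [make_block({"from": "buyer", "to": "farmer", "amount": 120}, prev_hash="0" * 64)]
ledger.append(make_block({"from": "coop", "to": "farmer", "amount": 40}, ledger[-1]["hash"]))

def is_valid(chain):
    # Recompute each hash; editing any earlier transaction breaks the link to the next block.
    return all(curr["prev_hash"] == block_hash(prev) for prev, curr in zip(chain, chain[1:]))

print(is_valid(ledger))                       # True
ledger[0]["transaction"]["amount"] = 999      # tamper with the first record
print(is_valid(ledger))                       # False: the tampering is detectable
```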

The upside of blockchain applications is the considerable time and money they can save. Users rely on this shared ledger to provide a transparent view into the details of the assets or values, including who owns them, as well as descriptive information such as quality or location. Smallholder farmers could benefit (e.g. real-time payment on delivery, access to credit), as could international sourcing companies (e.g. traceability of produce without certification), banks (e.g. cost reduction, risk reduction), and refugees and the landless (e.g. registration, identification). Although we haven’t yet seen large-scale adoption of blockchain technology in the development sector, investors like the Bill and Melinda Gates Foundation and various venture capitalists are paying attention to this space.

But one of the main downsides, or challenges, for blockchain, as with agricultural technology at large, is connecting the technology to viable business models and compelling use cases. With or without tested technology, this is hard enough as it is, and it requires innovation, perseverance and a focus on real value for the end user; ICCO’s G4AW projects are gaining experience with blockchain.

5)  Start thinking about data-use incentives

Over the years, ICCO has made significant investments in monitoring & evaluation and data skills training. Yet, as in many other organizations, limited measurable results in terms of increased data use can be seen. US-based development consultancy Cooper&Smith shared revealing insights into data-use incentives. They studied three INGOs working across five regions globally. The hypothesis was that better alignment of data-use training incentives leads to increased data use later on. They looked at both financial and non-financial rewards that motivate individuals to behave in a particular way. Incentives included different training formats (e.g. individual, blended), different hardware (e.g. desktop, laptop, mobile phone), recognition (e.g. certificate, presentation at a conference), forms of feedback and support (e.g. one-on-one, peer group) and leisure time during the training (e.g. 2 hours/week, 12 hours/week). Data use was defined as the practice of collecting, managing, analyzing and interpreting data for making program policy and management decisions.

They found considerable differences in how these attributes were valued. For instance, respondents overwhelmingly preferred a certificate in data management, but currently most receive no recognition, or recognition only from their supervisor. One region preferred a certificate while another preferred attending an international conference as a reward. Respondents preferred one-on-one feedback but instead received only peer-to-peer support. The lesson is that while most organizations apply a ‘one-size-fits-all’ reward system (or have no reward system at all), this study points to the need for a culturally sensitive and geographically smart reward system if we want to see a real increase in data usage.

For many NGOs the data revolution has just begun, but we are underway!

Please Submit Session Ideas for MERL Tech Jozi

We’re thrilled to announce that we’re organizing MERL Tech Jozi for August 2018!

Please submit your session ideas or reserve your demo table now, to explore what’s happening with innovation, digital data, and new technologies across the monitoring, evaluation, research, and learning (MERL) fields.

MERL Tech Jozi will be in Johannesburg, South Africa, August 1-2, 2018!

At MERL Tech Jozi, we’ll build on earlier MERL Tech conferences in DC and London, engaging 100 practitioners from across the development and technology ecosystems for a two-day conference seeking to turn theories of MERL technology into effective practices that deliver real insight and learning in our sector.

MERL Tech is a lively, interactive, community-driven conference.  We’re actively seeking a diverse set of practitioners in monitoring, evaluation, research, learning, program implementation, management, data science, and technology to lead every session.

Submit your session ideas now.

We’re looking for sessions that focus on:

  • Discussions around good practice and evidence-based review
  • Innovative MERL approaches that incorporate technology
  • Future-focused, thought-provoking ideas and examples
  • Conversations about ethics, inclusion, and responsible policy and practice in MERL Tech
  • Exploration of complex MERL Tech challenges and emerging good practice
  • Workshop sessions with practical, hands-on exercises and approaches
  • Lightning Talks to showcase new ideas or to share focused results and learning
Submission Deadline: Saturday, March 31, 2018.

Session submissions are reviewed and selected by our steering committee. Presenters and session leads will have priority access to MERL Tech tickets. We will notify you in late April whether your session idea was selected, and if selected, you will be asked to submit the final session title, summary and detailed session outline by June 1st, 2018.

If you’d prefer to showcase your technology tool or platform to MERL Tech participants, you can reserve your demo table here.

MERL Tech is dedicated to creating a safe, inclusive, welcoming and harassment-free experience for everyone through our Code of Conduct.

MERL Tech Jozi is organized by Kurante and supported by the following sponsors. Contact Linda Raftree if you’d like to be a sponsor of MERL Tech Jozi too.


MERL Tech 101: Google forms

by Daniel Ramirez-Raftree, MERL Tech volunteer

In his MERL Tech DC session on Google Forms, Samhir Vesdev from IREX led a hands-on workshop on Google Forms and laid out some of the software’s capabilities and limitations. Much of the session focused on Google Forms’ central concepts and the practicality of building a form.

At its most fundamental level, a form is made up of several sections, and each section is designed to contain a question or prompt. The centerpiece of a section is the question cell, which is, as one would imagine, the cell dedicated to the question. Next to the question cell there is a drop down menu that allows one to select the format of the question, which ranges from multiple-choice to short answer.


At the bottom right hand corner of the section you will find three dots arranged vertically. When you click this toggle, a drop-down menu will appear. The options in this menu vary depending on the format of the question. One common option is to include a few lines of description, which is useful in case the question needs further elaboration or instruction. Another is the data validation option, which restricts the kinds of text a respondent can input. This is useful when, for example, the question is in a short answer format but the form administrators need the responses to be limited to numerals for the sake of analysis.

The session also covered functions available in the “response” tab, which sits at the top of the page. Here one can find a toggle labeled “accepting responses” that can be turned off or on depending on the needs for the form.

Additionally, in the top right corner of this tab, there are three dots arranged vertically; this is the options menu for the tab. Here you will find options such as enabling email notifications for each new response, which can be used in case you want to be alerted when someone responds to the form. Also in this drop-down, you can click “select response destination” to link the Google Form with Google Sheets, which simplifies later analysis. The green Sheets icon next to the options drop-down will take you to the sheet that contains the collected data.
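Once the form is linked to a Sheet, the responses can also be exported (for example via File > Download as CSV in Google Sheets) and summarized outside of Forms. Below is a small, hypothetical pandas sketch of that kind of follow-on analysis; the file name and question columns are placeholders for your own export, not something shown in the session.

```python
import pandas as pd

# "responses.csv" and the question columns below are placeholders for your own export.
df = pd.read_csv("responses.csv")                   # one row per form submission
df["Timestamp"] = pd.to_datetime(df["Timestamp"])   # Forms adds a Timestamp column to the Sheet

print(df["Did you like the workshop?"].value_counts())   # quick tally of a choice question
print(df.groupby(df["Timestamp"].dt.date).size())         # submissions per day
```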

Other capabilities in Google Forms include the option to change the color scheme, which you can access by clicking the palette icon at the top of the screen. Also, by clicking the settings button at the top of the screen, you can limit respondents to one response each so that people cannot skew the data by submitting multiple responses, or you can enable response editing after submission to allow respondents to go back and correct their responses.

Branching is another important tool in Google Forms. It can be used in the case that you want a particular response to a question (say, a multiple choice question) to lead the respondent to another related question only if they respond in a certain way.

For example, suppose one section asks “did you like the workshop?” with the answer options “yes” and “no,” and you only want to ask what respondents didn’t like if they answer “no.” You can design the form to take respondents to a section with the question “what didn’t you like about the workshop?” only when they answer “no,” and then bring them back to the main workflow after they’ve answered that additional question.

To do this, create at least two new sections (by clicking “add section” in the small menu to the right of the sections), one for each path that a person’s response will lead them down. Then, in the options menu on the lower right hand side select “go to section based on answer” and using the menu that appears, set the path that you desire.

These are just some of the tools that Google Forms offers, but with just these it is possible to build an effective form to collect the data you need. Samhir ended with a word of caution: Google has been known to shut down popular apps, so be wary of building an organizational strategy around Google Forms.

Qualitative Coding: From Low Tech to High Tech Options

by Daniel Ramirez-Raftree, MERL Tech volunteer

In their MERL Tech DC session on qualitative coding, Charles Guedenet and Anne Laesecke from IREX together with Danielle de Garcia of Social Impact offered an introduction to the qualitative coding process followed by a hands-on demonstration on using Excel and Dedoose for coding and analyzing text.

They began by defining content analysis as any effort to make sense of qualitative data that takes a volume of qualitative material and attempts to identify core consistencies and meanings. More concretely, it is a research method that uses a set of procedures to make valid inferences from text. They also shared their thoughts on what makes for a good qualitative coding method.

Their belief is that a good qualitative coding method should:

  • consider what is already known about the topic being explored
  • be logically grounded in this existing knowledge
  • use existing knowledge as a basis for looking for evidence in the text being analyzed

With this definition laid out, they moved to a discussion about the coding process where they elaborated on four general steps:

  1. develop codes and a codebook
  2. decide on a sampling plan
  3. code your data (and go back and do it again!)
  4. test for reliability

Developing codes and a codebook is important for establishing consistency in the coding process, especially if there will be multiple coders working on the data. A good way to start developing these codes is to consider what is already known. For example, you can think about literature that exists on the subject you’re studying. Alternatively, you can simply turn to the research questions the project seeks to answer and use them as a guide for creating your codes. Beyond this, it is also useful to go through the content and think about what you notice as you read. Once a codebook is created, it will lend stability and some measure of objectivity to the project.
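As a purely illustrative aside (not part of the presenters’ method), a codebook can even be expressed in code as a simple mapping from codes to definitions and example keywords, which makes it easy to run a keyword-assisted first pass that suggests candidate codes for human coders to confirm or reject. The codes and keywords below are invented for the example.

```python
# Invented codebook: each code has a definition and example keywords.
codebook = {
    "ACCESS":  {"definition": "barriers to reaching services", "keywords": ["distance", "transport", "cost"]},
    "QUALITY": {"definition": "perceptions of service quality", "keywords": ["staff", "waiting", "clean"]},
}

def suggest_codes(text, codebook):
    """Suggest candidate codes for a human coder to confirm or reject."""
    text = text.lower()
    return [code for code, entry in codebook.items()
            if any(keyword in text for keyword in entry["keywords"])]

print(suggest_codes("The clinic is too far and transport is expensive.", codebook))  # ['ACCESS']
```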

The next important issue is the question of sampling. When determining sample size, though a larger sample will yield more robust results, one must of course consider the practical constraints of time, cost and effort. Does the benefit of higher quality results justify the additional investment? Fortunately, the type of data will often inform sampling. For example, if there is a huge volume of data, it may be impossible to analyze it all, but it would be prudent to sample at least 30% of it. On the other hand, usually interview and focus group data will all be analyzed, because otherwise the effort of obtaining the data would have gone to waste.

Regarding sampling method, session leads highlighted two strategies that produce sound results. One is systematic random sampling and the other is quota sampling–a method employed to ensure that the proportions of demographic group data are fairly represented.
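For readers who want to see what systematic random sampling looks like in practice, here is a short, hypothetical Python sketch that keeps roughly 30% of records (the session’s rule of thumb for very large datasets) by taking every k-th row after a random start; the column name and fraction are placeholders.

```python
import random
import pandas as pd

def systematic_sample(df, fraction=0.30, seed=42):
    """Keep roughly `fraction` of rows by taking every k-th record after a random start."""
    k = max(int(round(1 / fraction)), 1)       # 30% -> roughly every 3rd record
    start = random.Random(seed).randrange(k)   # random starting point
    return df.iloc[start::k]

records = pd.DataFrame({"response_id": range(1, 101)})   # placeholder dataset of 100 records
sample = systematic_sample(records)
print(len(sample), "records sampled out of", len(records))
```

Quota sampling would instead sample within demographic groups so that each group’s share of the sample matches its share of the population.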

Once these key decisions have been made, the actual coding can begin. Here, all coders should work from the same codebook and apply the codes to the same unit of analysis. Typical units of analysis are: single words, themes, sentences, paragraphs, and items (such as articles, images, books, or programs). Consistency is essential. A way to test the level of consistency is to have a 10% overlap in the content each coder analyzes and aim for 80% agreement between their coding of that content. If the coders are not applying the same codes to the same units this could either mean that they are not trained properly or that the code book needs to be altered.
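The overlap check described above can be computed very simply. Here is a hedged sketch of percent agreement between two coders on a shared 10% sample (the codes are invented); Cohen’s kappa, e.g. via sklearn.metrics.cohen_kappa_score, is a stricter alternative that corrects for chance agreement.

```python
# Invented codes applied by two coders to the same overlap sample.
coder_a = ["ACCESS", "QUALITY", "ACCESS", "COST", "QUALITY", "ACCESS"]
coder_b = ["ACCESS", "QUALITY", "COST",   "COST", "QUALITY", "ACCESS"]

agreement = sum(a == b for a, b in zip(coder_a, coder_b)) / len(coder_a)
print(f"Percent agreement: {agreement:.0%}")   # 83% here, just above the 80% target
```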

Along a similar vein, the fourth step in the coding process is to test for reliability. Challenges in producing stable and consistent results in coding could include: using a unit of analysis that is too large for a simple code to be reliably applied, coding themes or concepts that are ambiguous, and coding nonverbal items. For each of these, the central problem is that the units of analysis leave too much room for subjective interpretation that can introduce bias. Having a detailed codebook can help to mitigate against this.

After giving an overview of the coding process, the session leads suggested a few possible strategies for data visualization. One is to use a word tree, which helps one look at the context in which a word appears. Another is a bubble chart, which is useful if one has descriptive data and demographic information. Thirdly, correlation maps are good for showing what sorts of relationships exist among the data. The leads suggested visiting the website stephanieevergreen.com/blog for more ideas about data visualization.

Finally, the leads covered low-tech and high-tech options for coding. On the low-tech end of the spectrum, paper and pen get the job done. They are useful when there are few data sources to analyze, when the coding is simple, and when there is limited tech literacy among the coders. Next up the scale is Excel, which works when there are few data sources and when the coders are familiar with Excel. Then the session leads closed their presentation with a demonstration of Dedoose, which is a qualitative coding tool with advanced capabilities like the capacity to code audio and video files and specialized visualization tools. In addition to Dedoose, the presenters mentioned Nvivo and Atlas as other available qualitative coding software.

Despite the range of qualitative content available for analysis, a few core principles can help ensure that it is analyzed well; these include consistency and a disciplined methodology. And if qualitative coding will be an ongoing part of your organization’s operations, there are several specialized software options available for you to explore. [Click here for links and additional resources from the session.]

You can’t have Aid…without AI: How artificial intelligence may reshape M&E

by Jacob Korenblum, CEO of Souktel Digital Solutions


Potential—And Risk

The rapid growth of Artificial Intelligence—computers behaving like humans, and performing tasks which people usually carry out—promises to transform everything from car travel to personal finance. But how will it affect the equally vital field of M&E? As evaluators, most of us hate paper-based data collection—and we know that automation can help us process data more efficiently. At the same time, we’re afraid to remove the human element from monitoring and evaluation: What if the machines screw up?

Over the past year, Souktel has worked on three areas of AI-related M&E, to determine where new technology can best support project appraisals. Here are our key takeaways on what works, what doesn’t, and what might be possible down the road.

Natural Language Processing

For anyone who’s sifted through thousands of Excel entries, natural language processing sounds like a silver bullet: This application of AI interprets text responses rapidly, often matching them against existing data sets to find trends. No need for humans to review each entry by hand! But currently, it has two main limitations: First, natural language processing works best for sentences with simple syntax. Throw in more complex phrases, or longer text strings, and the power of AI to grasp open-ended responses goes downhill. Second, natural language processing only works for a limited number of (mostly European) languages—at least for now. English and Spanish AI applications? Yes. Chichewa or Pashto M&E bots? Not yet. Given these constraints, we’ve found that AI apps are strongest at interpreting basic misspelled answer text during mobile data collection campaigns (in languages like English or French). They’re less good at categorizing open-ended responses by qualitative category (positive, negative, neutral). Yet despite these limitations, AI can still help evaluators save time.
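As an illustration of the “basic misspelled answer text” use case, the sketch below maps free-text answers onto a known list of valid options using fuzzy string matching from Python’s standard library; it is a stand-in for that idea rather than any specific service Souktel used, and the answer options are invented.

```python
import difflib

VALID_ANSWERS = ["Maize", "Cassava", "Sorghum", "Groundnuts"]   # invented answer list

def normalize(answer, choices=VALID_ANSWERS, cutoff=0.6):
    """Map a free-text answer to the closest valid option, or None if nothing is close."""
    match = difflib.get_close_matches(answer.strip().title(), choices, n=1, cutoff=cutoff)
    return match[0] if match else None   # None = flag the answer for human review

print(normalize("casava"))    # -> "Cassava"
print(normalize("sorgum"))    # -> "Sorghum"
print(normalize("plantain"))  # -> None (not in the valid list)
```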

Object Differentiation

AI does a decent job of telling objects apart; we’ve leveraged this to build mobile applications which track supply delivery more quickly & cheaply. If a field staff member submits a photo of syringes and a photo of bandages from their mobile, we don’t need a human to check “syringes” and “bandages” off a list of delivered items. The AI-based app will do that automatically—saving huge amounts of time and expense, especially during crisis events. Still, there are limitations here too: While AI apps can distinguish between a needle and a BandAid, they can’t yet tell us whether the needle is broken, or whether the BandAid is the exact same one we shipped. These constraints need to be considered carefully when using AI for inventory monitoring.
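For a sense of what off-the-shelf object differentiation looks like, here is a hedged Python sketch using a pretrained ImageNet classifier from torchvision (not Souktel’s application). ImageNet’s label set happens to include supply-like classes such as “syringe” and “Band Aid,” but a production tool would be fine-tuned on its own supply catalogue; the image path is a placeholder.

```python
import torch
from PIL import Image
from torchvision import models

weights = models.ResNet50_Weights.DEFAULT          # pretrained ImageNet weights
model = models.resnet50(weights=weights)
model.eval()

# "delivery_photo.jpg" is a placeholder for a field staff photo upload.
img = weights.transforms()(Image.open("delivery_photo.jpg")).unsqueeze(0)
with torch.no_grad():
    probs = torch.softmax(model(img).squeeze(0), dim=0)

# Report the three most likely labels and their probabilities.
top = probs.topk(3)
for p, idx in zip(top.values, top.indices):
    print(f"{weights.meta['categories'][idx]}: {p:.2f}")
```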

Comparative Facial Recognition

This may be the most exciting—and controversial—application of AI. The potential is huge: “Qualitative evaluation” takes on a whole new meaning when facial expressions can be captured by cameras on mobile devices. On a more basic level, we’ve been focusing on solutions for better attendance tracking: AI is fairly good at determining whether the people in a photo at Time A are the same people in a photo at Time B. Snap a group pic at the end of each community meeting or training, and you can track longitudinal participation automatically. Take a photo of a larger crowd, and you can rapidly estimate the number of attendees at an event.

However, AI applications in this field have been notoriously bad at recognizing diversity—possibly because they draw on databases of existing images, and most of those images contain…white men. New MIT research has suggested that “since a majority of the photos used to train [AI applications] contain few minorities, [they] often have trouble picking out those minority faces”. For the communities where many of us work (and come from), that’s a major problem.

Do’s and Don’ts

So, how should M&E experts navigate this imperfect world? Our work has yielded a few “quick wins”—areas where Artificial Intelligence can definitely make our lives easier: Tagging and sorting quantitative data (or basic open-ended text), simple differentiation between images and objects, and broad-based identification of people and groups. These applications, by themselves, can be game-changers for our work as evaluators—despite their drawbacks. And as AI keeps evolving, its relevance to M&E will likely grow as well. We may never reach the era of robot focus group facilitators—but if robo-assistants help us process our focus group data more quickly, we won’t be complaining.