Tag Archives: MERL Tech

How to Create a MERL Culture within Your Organization

Written by Jana Melpolder, MERL Tech DC Volunteer and former ICT Works Editor. Find Jana on Twitter:  @JanaMelpolder

As organizations grow, they become increasingly aware of how important MERL (Monitoring, Evaluation, Research, and Learning) is to their international development programs. To meet this challenge, new hires need to be brought on board, but more importantly, changes need to happen in the organization’s culture.

How can nonprofits and organizations change to include more MERL? Friday afternoon’s MERL Tech DC session “Creating a MERL Culture at Your Nonprofit” set out to answer that question. Representatives from Salesforce.org and Samaschool.org were part of the discussion.

Salesforce.org staff members Eric Barela and Morgan Buras-Finlay emphasized that their organization has set aside resources (financial and otherwise) for internal and external M&E. “A MERL culture is the foundation for the effective use of technology!” shared Eric Barela.

Data is a vital part of MERL, but those providing it to organizations often need to “hold the hands” of those on the receiving end. What is especially vital is helping people understand this data and gain deeper insight from it. It’s not just about the numbers – it’s about what is meant by those numbers and how people can learn and improve using the data.

According to Salesforce.org, an organization’s MERL culture comprises its understanding of the benefit of defining, measuring, understanding, and learning for social impact with rigor. Building or maintaining a MERL culture doesn’t mean letting the data team do whatever they like or putting them in charge; instead, it’s vital to focus on outcomes. Salesforce.org discussed how its MERL staff prioritize keeping a foot in the door in many places and meeting often with people from different departments.

Where does technology fit into all of this? According to Salesforce.org, the push is on to keep the technology ethical. Morgan Buras-Finlay described it well, saying “technology goes from building a useful tool to a tool that will actually be used.”

Another participant on Friday’s panel was Samaschool’s Director of Impact, Kosar Jahani. Samaschool describes itself as a San Francisco-based nonprofit focused on preparing low-income populations to succeed as independent workers. The organization has “brought together a passionate group of social entrepreneurs and educators who are reimagining workforce development for the 21st century.”

Samaschool creates a MERL culture through Learning Calls for its different audiences and funders. These Learning Calls are held regularly, follow a clear agenda, and sometimes even happen openly on Facebook Live.

By ensuring a high level of transparency, Samaschool is also aiming to create a culture of accountability where it can learn from failures as well as successes. Using social media opens doors and gives people easier access to information that would otherwise be difficult to obtain.

Kosar also noted the downsides of this kind of transparency: putting information in such a public place carries risk and can cost the organization future investment. On balance, however, the organization feels this approach has helped build relationships and enhanced interactions.

Sadly, flight delays prevented a third organization, Big Elephant Studios, and its founder Andrew Means from attending MERL Tech. Luckily, his slides were presented by Eric Barela. Andrew’s slides highlighted the following three things that are needed to create a MERL culture:

  • Tools – investments in tools that help an organization acquire, access, and analyze the data it needs to make informed decisions
  • Processes – investments in time to focus on utilizing data and supporting decision-making
  • Culture – organizational values that ensure that data is invested in, utilized, and listened to

One of Andrew’s main points was that generally, people really do want to gain insight and learn from data. The other members of the panel reiterated this as well.

A few lingering questions from the audience included:

  • How do you measure how culture is changing within an organization?
  • How does one determine if an organization’s culture is more focused on MERL than previously?
  • Which social media platforms and strategies can be used to create a MERL culture that provides transparency to clients, funders, and other stakeholders?

What about you? How do you create and measure the “MERL Culture” in your organization?

Report back on MERL Tech DC

Day 1, MERL Tech DC 2018. Photo by Christopher Neu.

The MERL Tech Conference explores the intersection of Monitoring, Evaluation, Research and Learning (MERL) and technology. The main goals of “MERL Tech” as an initiative are to:

  • Transform and modernize MERL in an intentionally responsible and inclusive way
  • Promote ethical and appropriate use of tech (for MERL and more broadly)
  • Encourage diversity & inclusion in the sector & its approaches
  • Improve development, tech, data & MERL literacy
  • Build/strengthen community, convene, help people talk to each other
  • Help people find and use evidence & good practices
  • Provide a platform for hard and honest talks about MERL and tech and the wider sector
  • Spot trends and future-scope for the sector

Our fifth MERL Tech DC conference took place on September 6-7, 2018, with a day of pre-workshops on September 5th. Some 300 people from 160 organizations joined us for the two days, and another 70 people attended the pre-workshops.

Attendees came from a wide diversity of professions and disciplines:

What professional backgrounds did we see at MERL Tech DC in 2018?

An unofficial estimate on speaker racial and gender diversity is here.

Gender balance on panels

At this year’s conference, we focused on 5 themes (See the full agenda here):

  1. Building bridges, connections, community, and capacity
  2. Sharing experiences, examples, challenges, and good practice
  3. Strengthening the evidence base on MERL Tech and ICT4D approaches
  4. Facing our challenges and shortcomings
  5. Exploring the future of MERL

As always, sessions were related to: technology for MERL; MERL of ICT4D and digital development programs; MERL of MERL Tech; digital data for adaptive decisions/management; ethical and responsible data approaches; and cross-disciplinary community building.

Big Data and Evaluation Session. Photo by Christopher Neu.

Sessions included plenaries, lightning talks and breakout sessions. You can find a list of sessions here, including any presentations that have been shared by speakers and session leads. (Go to the agenda and click on the session of interest. If we have received a copy of the presentation, there will be a link to it in the session description).

One topic that we explored more in-depth over the two days was the need to get better at measuring ourselves and understanding both the impact of technology on MERL (the MERL of MERL Tech) and the impact of technology overall on development and societies.

As Anahi Ayala Iacucci said in her opening talk — “let’s think less about what technology can do for development, and more about what technology does to development.” As another person put it, “We assume that access to tech is a good thing and immediately helps development outcomes — but do we have evidence of that?”

Feedback from participants

Some 17.5% of participants filled out our post-conference feedback survey, and 70% of them rated their experience either “awesome” or “good”. Another 7% of participants rated individual sessions through the “Sched” app, with an average session satisfaction rating of 8.8 out of 10.

Topics that survey respondents suggested for next time include: more basic tracks and more advanced tracks, more sessions relating to ethics and responsible data, and a greater focus on accountability in the sector. Read the full Feedback Report here!

What’s next? State of the Field Research!

In order to arrive at an updated sense of where the field of technology-enabled MERL is, a small team of us is planning to conduct some research over the next year. At our opening session, we did a little crowdsourcing to gather input and ideas about what the most pressing questions are for the “MERL Tech” sector.

We’ll be keeping you informed here on the blog about this research and welcome any further input or support! We’ll also be sharing more about individual sessions here.

MERL on the Money: Are we getting funding for data right?

By Paige Kirby, Senior Policy Advisor at Development Gateway

Time for a MERL pop quiz: Out of the US $142.6 billion spent on ODA each year, how much goes to M&E?

A)  $14.1-17.3 billion
B)  $8.6-10 billion
C)  $2.9-4.3 billion

It turns out, the correct answer is C. An average of only $2.9-$4.3 billion — or just 2-3% of all ODA spending — goes towards M&E.
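The math behind that answer is a quick back-of-the-envelope check:

```python
# Back-of-the-envelope check on the M&E share of ODA spending.
oda_total = 142.6e9             # annual ODA spending, USD
me_low, me_high = 2.9e9, 4.3e9  # estimated annual M&E spending range, USD

share_low = me_low / oda_total * 100    # ≈ 2.0%
share_high = me_high / oda_total * 100  # ≈ 3.0%
print(f"M&E share of ODA: {share_low:.1f}%–{share_high:.1f}%")
```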

That’s all we get. And despite the growing breadth of logframes and depth of donor reporting requirements, our MERL budgets are not likely to suddenly scale up.

So, how can we use our drop in the bucket better, to get more results for the same amount of money?

At Development Gateway, we’ve been doing some thinking and applied research on this topic, and have three key recommendations for making the most of MERL funding.

Teamwork

Image Credit: Kjetil Korslien CC BY NC 2.0

When seeking information for a project baseline, midline, endline, or anything in between, it has become second nature to budget for collecting (or commissioning) primary data ourselves.

Really, it would be more cost- and time-effective for all involved if we got better at asking peers in the space for already-existing reports or datasets. This is also an area where our donors – particularly those with large country portfolios – could help with introductions and matchmaking.

Consider the Public Option

Image Credit: Development Gateway

And speaking of donors, a second point: why are we implementers responsible for collecting MERL-relevant data in the first place?

If partner governments and donors invested in country statistical and administrative data systems, we implementers would not have such incentive or need to conduct one-off data collection.

For example, one DFID Country Office we worked with noted that a lack of solid population and demographic data limited their ability to monitor all DFID country programming. As a result, DFID decided to co-fund the country’s first census in 30 years – which benefited DFID and non-DFID programs.

The term “country systems” can sound a bit esoteric, pretty OECD-like – but it really can be a cost-effective public good, if properly resourced by governments (or donor agencies), and made available.

Flip the Paradigm

Image Credit: Rafael J M Souza CC BY 2.0

And finally, a third way to get more bang for our buck is – ready or not – Results Based Financing, or RBF. RBF is coming (and, for folks in health, it’s probably arrived). In an RBF program, payment is made only when pre-determined results have been achieved and verified.

But another way to think about RBF is as an extreme paradigm shift of putting M&E first in program design. RBF may be the shake-up we need, in order to move from monitoring what already happened, to monitoring events in real-time. And in some cases – based on evidence from World Bank and other programming – RBF can also incentivize data sharing and investment in country systems.
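The core RBF mechanic – pay only for verified results, not reported ones – can be sketched in a few lines. All figures below are invented for illustration:

```python
# Hypothetical RBF disbursement: payment is released only for results
# that an independent verifier has confirmed, not for reported results.
unit_price = 50.0        # USD paid per verified outcome (illustrative)
reported_results = 1200  # outcomes claimed by the implementer
verified_results = 1050  # outcomes confirmed by the verifier

payment = unit_price * verified_results
print(f"Disbursed ${payment:,.0f} for {verified_results} of "
      f"{reported_results} reported results")
```

The gap between reported and verified results is exactly where the M&E-first design pressure comes from: the verification step has to be specified before the program starts.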

Ultimately, the goal of MERL should be using data to improve decisions today. Through better sharing, systems thinking, and (maybe) a paradigm shake-up, we stand to gain a lot more mileage with our 3%.

 

Integrating big data into program evaluation: An invitation to participate in a short survey

As we all know, big data and data science are becoming increasingly important in all aspects of our lives. There is a similar rapid growth in the applications of big data in the design and implementation of development programs. Examples include the use of satellite images and remote sensors in emergency relief and the identification of poverty hotspots; the use of mobile phones to track migration and to estimate changes in income (by tracking airtime purchases); social media analysis to track sentiment and predict increases in ethnic tension; and the use of smartphones and Internet of Things (IoT) devices to monitor health through biometric indicators.

Despite the rapidly increasing role of big data in development programs, there is speculation that evaluators have been slower to adopt big data than have colleagues working in other areas of development programs. Some of the evidence for the slow take-up of big data by evaluators is summarized in “The future of development evaluation in the age of big data”.  However, there is currently very limited empirical evidence to test these concerns.

To try to fill this gap, my colleagues Rick Davies and Linda Raftree and I would like to invite those of you who are interested in big data and/or the future of evaluation to complete the attached survey. The survey, which takes about 10 minutes to complete, asks evaluators to report on the data collection and analysis techniques they use in the evaluations they design, manage, or analyze, while also asking data scientists how familiar they are with evaluation tools and techniques.

The survey was originally designed to obtain feedback from participants in the MERL Tech conferences on “Exploring the Role of Technology in Monitoring, Evaluation, Research and Learning in Development” that are held annually in London and Washington, DC, but we would now like to broaden the focus to include a wider range of evaluators and data scientists.

One of the ways in which the findings will be used is to help build bridges between evaluators and data scientists by designing integrated training programs for both professions that introduce the tools and techniques of both conventional evaluation practice and data science, and show how they can be combined to strengthen both evaluations and data science research. “Building bridges between evaluators and big data analysts” summarizes some of the elements of a strategy to bring the two fields closer together.

The findings of the survey will be shared through this and other sites, and we hope this will stimulate a follow-up discussion. Thank you for your cooperation and we hope that the survey and the follow-up discussions will provide you with new ways of thinking about the present and potential role of big data and data science in program evaluation.

Here’s the link to the survey – please take a few minutes to fill it out!

You can also join me, Kerry Bruce and Pete York on September 5th for a full day workshop on Big Data and Evaluation in Washington DC.

MERL Tech Jozi Feedback Report

MERL Tech Jozi took place on August 1-2, 2018. Below are some highlights from the post-conference survey that was sent to participants requesting feedback on their MERL Tech Jozi experience. Thirty-four percent of our attendees filled out the post-conference survey via Google Forms.

Overall Experience

Here’s how survey participants rated their overall experience:

Participants’ favorite sessions

The sessions that were most frequently mentioned as favorites and some reasons why included:

Conducting a Baseline of the ICT Ecosystem – Genesis Analytics and DIAL

 

…interactive session and felt practical. I could easily associate with what the team was saying. I really hope these learnings make it to implementation and start informing decision-making around funding! The presenters were also great.

… interesting and engaging, findings were really relevant to the space.

…shared lessons and insights resonated with my own professional experience. The discussions were fruitful and directly relevant to my line of work.

…incredibly useful.

The study confirmed a lot of my perceptions as an IT developer in the MERL space, but now I have some more solid backup. I will use this in my webinars and consulting on “IT for M&E”

Datafication Discrimination — Media Monitoring Africa, Open Data Durban, Amandla.mobi and Oxfam South Africa

 

Linked both MERL and Tech to programme and focussed on the impact of MERL Tech in terms of sustainable, inclusive development.

Great panel, very knowledgeable, something different to the usual M&E. interactive and diverse.

… probably most critical and informative in terms of understanding where the sector was at … the varied level of information across the audience and the panel was fascinating – if slightly worrying about how unclear we are as an M&E sector.

When WhatsApp Becomes About More Than Messaging – Genesis Analytics, Every1Mobile and Praekelt.org

 

As an evaluator, I have never thought of using WhatsApp as a way of communicating with potential beneficiaries. It made me think about different ways of getting in touch with beneficiaries of programme, and getting them to participate in a survey.

The different case studies included examples, great media, good Q&A session at the end, and I learnt new things. WhatsApp is only just reaching its potential in mHealth, so it was good to learn real-life lessons.

Hearing about the opportunities and challenges of applying a tool in different contexts and for different purposes gave good all-around insights

Social Network Analysis – Data Innovators and Praekelt.org

 

I was already very familiar with SNA but had not had the opportunity to use it for a couple of years. Hearing this presentation with examples of how others have used it really inspired me and I’ve since sketched out a new project using SNA on data we’re currently gathering for a new product! I came away feeling really inspired and excited about doing the analysis.
Least favorite sessions

Where participants rated sessions as their “least favorite,” it was because:

  • The link to technology was not clear
  • It felt like a sales pitch
  • It felt extractive
  • Speaker went on too long
  • Views on MERL or Tech seemed old fashioned
Topics that need more focus in the future

Unpack the various parts of “M” “E” “R” “L”

  • Technology across MERL, not just monitoring. There was a lot of technology for data collection & tracking but little for ERL in MERL
  • More evaluation?
  • The focus was very much on evaluation (from the sessions I attended) and I feel like we did not talk about the monitoring, research and learning so much. This is huge for overall programme implementation and continuously learning from our data. Next time, I would like to talk a bit more about how organisations are actually USING data day-to-day to make decisions (monitoring) and learning from it to adapt programmes.
  • The R of MERL is hardly discussed at all. Target this for the next MERL Tech.

New digital approaches / data science

  • AI and how it can introduce biases, machine learning, Python
  • A data science-y stream could open new channels of communication and collaboration

Systems and interoperability

  • Technology for data management between organizations and teams.
  • Integrations between platforms.
  • Public Health, Education. Think of how do we discuss and bring more attention to the various systems out there, and ensure interoperability and systems that support the long term visions of countries.
  • Different types of MERL systems. We focused a lot on data collection systems, but there is a range of monitoring systems that programme managers can use to make decisions.

 Scale and sustainability

  • How to engage and educate governments on digital data collection systems.
  • The debate on open source: in the development sector it is pushed as the holy grail, whereas most other software worldwide is proprietary for a reason (safety, maintenance, continued support, custom solutions), and open source doesn’t mean free.
  • Business opportunities. MERL as a business tool. How MERL Tech has proved ROI in business and real market settings, even if those settings were in the NGO/NPO space. What is the business case behind MERL Tech and MERL Tech developments?
Ah ha! Moments

Learning about technology / tech approaches

  • I found the design workshops enlightening, and, as an evaluator, did not realise how much time techies put into user testing.
  • I am a tech dinosaur – so everything I learned about a new technology and how it can be applied in evaluation was an ‘aha!’

New learning and skills

  • The SNA [social network analysis] inspiration that struck me was my big takeaway! I can’t wait to get back to the office and start working on it.
  • Really enjoyed learning about WhatsApp for SBCC.
  • The qualitative difference in engagement, structure, analysis and resource need between communicating via SMS versus IM. (And realising again how old school I am for a tech person!)

Data privacy, security, ethics

  • Ah ha moment was around how we could improve handling data
  • Data security
  • Our sector (including me) doesn’t really understand ‘big data,’ how it can discriminate, and what that might mean to our programmes.

Talking about failure

  • The fail fest was wonderful. We all theoretically know that it’s good to be honest about failure and to share what that was like, but this took honest reflection to a whole new level and set the tone for Day 2.

I’m not alone!

  • The challenges I am facing with introducing tech for MERL in my organisations aren’t unique to me.
  • There are other MERL Tech practitioners with a journalism/media background! This is exciting and makes me feel I am in the right place. The industry seems to want to gatekeep (academia, rigorous training), so this is interesting to consider going forward, but it also excites me to challenge this through mentorship opportunities and opening the space to others like me who were given a chance and gained experience along the way. Also had many Aha moments for using WhatsApp and its highly engaging format.
  • Learning that many other practitioners support learning on your own.
  • There are people locally who are interested in connecting and learning from one another.
Recommendations for future MERL Tech events

More of most everything…

  • More technical sessions
  • More panel discussions
  • More workshops
  • More in-depth sessions!
  • More time for socializing and guided networking like the exercise with the coloured stickers on Day 1
  • More NGOs involved, especially small NGOs.
  • More and better marketing to attract more people
  • More demo tables, or have new people set up demo tables each day
  • More engagement: is there a way that MERL Tech could be used further to shape, drive and promote the agenda of using technology for better MERL? Maybe through a joint session where we identify important future topics to focus on? Just as something that gives those who want the opportunity to further engage with and contribute to MERL Tech and its agenda-setting?
  • The conversations generally were very ‘intellectual’. Too many conversations revolved around how the world had to move on to better appreciate the value of MERL, rather than how MERL was adapted, used and applied in the real world. [It was] too dominated by MERL early adopters and proponents, rather than MERL customers… Or am I missing the point, which may be that MERL (in South Africa) is still a subculture for academic minded researchers. Hope not.
  • More and better wine!
 Kudos
  • For some reason this conference – as opposed to so many other conferences I have been to – actually worked. People were enthused, they were kind, willing to talk – and best of all, by day 2 they hadn’t dropped out like flies (which is such an issue with conferences!). So whatever you did, do it again next time!
  • Very interactive and group-focused! This was well balanced with informative sessions. I think creative group work is good but it wouldn’t be good to have the whole conference like this. However, this was the perfect amount of it and it was well led and organized.
  • I really had a great time at this conference. The sessions were really interesting and it was awesome to get so many different people in the same place to discuss such interesting topics and issues. Lunch was also really delicious.
  • Loved the lightning talks! Also the breakaway sessions were great. The coffee was amazing, thank you. Fail fest is such a cool concept, and we’re looking to introduce this kind of thinking into our own organisation more – we all struggle with the same things, and it was good to be around likeminded professionals.
  • I really appreciated the fairly “waste-free” conference with no plastic bottles, unnecessary programmes and other things that I’ll just throw away afterwards. This was a highlight for me!
  • I really enjoyed this conference. Firstly the food was amazing (always a win). But most of all the size was perfect. It was really clever the way you forced us to sit in small lunch sizes and that way by the end of the conference I really had the confidence to speak to people. Linda was a great organiser – enthusiastic and punctual.
Who attended MERL Tech Jozi?

Who presented at MERL Tech Jozi?

 

If you’d like to experience MERL Tech, sign up now to attend in Washington, DC on September 5-7, 2018!

Using WhatsApp to improve family health

Guest post from ​Yolandi Janse van Rensburg, Head of Content & Communities at Every1Mobile. This post first appeared here.

I recently gave a talk at the MERL Tech 2018 conference in Johannesburg about the effectiveness of WhatsApp as a communication channel to reach low-income communities in the urban slums of Nairobi, Kenya, and to understand their health behaviours and needs.

Communicating more effectively with a larger audience in hard-to-reach areas has never been easier (Mobile Economy Report 2018). Instead of relying on paper questionnaires or instructing field workers to knock on doors, you can now communicate directly with your users, no matter where you are in the world.

With this in mind, some may choose to create a WhatsApp group, send a batch of questions, and wait for quality insights to stream in – but in reality, they receive little to no participation from their users.

Why, you ask? WhatsApp can be a useful tool to engage your users, but there are a few lessons we’ve learnt along the way to encourage high levels of participation and generate important insights.

Building trust comes first

Establishing a relationship with the communities you’re targeting can easily be overlooked. Between project deadlines, coordination and insight gathering, it can be easy to neglect forging a connection with our users, offering a window into our thinking, so they can learn more about who we are and what we’re trying to achieve. This is the first step in building trust and acquiring your users’ buy-in to your programme. This lies at the core of Every1Mobile’s programming. The relationship you build with your users can unlock honest feedback that is crucial to the success of your programme going forward.

In late 2017, Every1Mobile ran a 6-week WhatsApp pilot with young mothers and mothers-to-be in Kibera and Kawangware, Nairobi, to better understand their hygiene and nutrition practices in terms of handwashing and preparing a healthy breakfast for their families. The U Afya pilot kicked off with a series of on-the-ground breakfast clubs, where we invited community members to join. It was an opportunity for the mothers to meet us, as well as one another, which made them feel more comfortable participating in the WhatsApp groups.

Having our users meet beforehand and become acquainted with our local project team ensured that they felt confident enough to share honest feedback, talk amongst themselves and enjoy the WhatsApp chats. As a result, 60% of our users attended every WhatsApp session and 84% attended more than half of the sessions.

Design content using SBCC

At Every1Mobile, we do not simply create engaging copy; our content design is based on research into user behaviour, analytics and feedback, tailored with a human-centric approach to inspire creative content strategies and solutions that nurture an understanding of our users.

When we talk about content design, we mean taking a user need and presenting it in the best way possible. Applying content design principles means we do the hard work for the user. And the reward is communication that is simpler, clearer and faster for our communities.

For the U Afya pilot, we incorporated our partner Unilever’s behaviour change approach, the Five Levers for Change, to influence attitudes and behaviours and improve family health and nutrition. The approach aims to create sustainable habits using social and behaviour change communication (SBCC) techniques like signposting, pledging, prompts and cues, and peer support. Each week covered a different topic, including pregnancy, a balanced diet, an affordable and healthy breakfast, breastfeeding, hygiene, and weaning for infants.

Localisation means more than translating words

Low adult literacy in emerging markets can have a negative impact on the outcomes of your behaviour change campaigns. In Kenya, roughly 38.5% of the adult population is illiterate, with bottom-of-the-pyramid communities having little formal education. This means translating your content into a local language may not be enough.

To address this challenge for the U Afya pilot, our Content Designers worked closely with our in-country Community Managers to localise the WhatsApp scripts so they would be applicable to the daily lives of our users. We translated our WhatsApp scripts into Sheng, even though English and Kiswahili are the official languages in Kenya. Sheng is a local slang blend of English, Kiswahili and ethnic words from other cultures. It is widely spoken by urban communities and has over 3,900 words, idioms and phrases. It’s a language that changes and evolves constantly, which means we needed a translator with street knowledge of urban life in Nairobi.

Beyond translating our scripts, we integrated real-life references applicable to our target audience. We worked with our project team to find out what the daily lives of the young mothers in Kibera and Kawangware looked like. What products are affordable and accessible? Do they have running water? What do they cook for their families and what time is supper served? Answers to these questions had a direct impact on our use of emojis, recipes and advice in our scripts. For example, we integrated local foods into the content like uji and mandazi for breakfast and indigenous vegetables including ndengu, ngwashi and nduma.

Can WhatsApp drive behaviour change?

The answer is ‘yes’: mobile has the potential to drive SBCC. We observed an interesting link between shifts in attitude and engagement, with increased self-reported assimilation of new behaviours among women who actively posted during the WhatsApp sessions.

To measure the impact of our pilot on user knowledge, attitudes and behaviours, we designed interactive pre- and post-surveys, which triggered airtime incentives once completed. Surprisingly, the results showed little impact on knowledge, with pre-scores registering higher than anticipated. However, we saw a notable decrease in perceived barriers to adopting these new behaviours and a positive impact on self-efficacy and confidence.
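A pre/post comparison like the one described can be summarized per construct. The sketch below is purely illustrative – the construct names, participant counts, and scores are invented, not the pilot’s actual data:

```python
# Each list holds one participant's score per position, before and after
# the pilot, on a 1-5 scale (all numbers invented for illustration).
pre  = {"knowledge": [4, 5, 4, 5],
        "perceived_barriers": [4, 4, 5, 3],
        "self_efficacy": [2, 3, 2, 3]}
post = {"knowledge": [5, 5, 4, 5],
        "perceived_barriers": [2, 3, 3, 2],
        "self_efficacy": [4, 4, 3, 4]}

for construct in pre:
    # Paired differences: same participant's post-score minus pre-score.
    deltas = [b - a for a, b in zip(pre[construct], post[construct])]
    mean_change = sum(deltas) / len(deltas)
    print(f"{construct}: mean change {mean_change:+.2f}")
```

With these invented numbers the output mirrors the pattern reported above: knowledge barely moves (high pre-scores leave little headroom), perceived barriers fall, and self-efficacy rises.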

WhatsApp can inform the programme design

Your audience can become collaborators and help you design your programme. We used the insights gathered through the U Afya WhatsApp pilot to create a brand-new online community platform that offers young mothers in Nairobi a series of online courses called Tunza Class.

We built the community platform based on the three key life stages identified within the motherhood journey, namely pregnancy and birth, newborn care, and mothers with children under five. The platform includes an interactive space called Sistaz Corner where users can share their views, experiences and advice with other mothers in their community.

With a range of SBCC techniques built into the platform, users can get peer support anonymously, and engage field experts on key health issues. Our Responsible Social Network functionality allows users to make friends, build their profile and show off their community activity which further drives overall user engagement on the site. The Every1Mobile platform is built in a way that enables users to access the online community using the most basic web-enabled feature phone, at the lowest cost for our end user, with fast loading and minimal data usage.

Following the site launch in early August 2018, we are continuing to use our WhatsApp groups to gather real-time feedback on site navigation, design, functionality, labelling and content, so we can apply iterative design and ensure the mobile platform is exactly what our users want it to be.

 

September 5th: MERL Tech DC pre-workshops

This year at MERL Tech DC, in addition to the regular conference on September 6th and 7th, we’re offering two full-day, in-depth workshops on September 5th. Join us for a deeper look into the possibilities and pitfalls of Blockchain for MERL and Big Data for Evaluation!

What can Blockchain offer MERL? with Shailee Adinolfi, Michael Cooper, and Val Gandhi, co-hosted by Chemonics International, 1717 H St. NW, Washington, DC 20016. 

Tired of the blockchain hype, but still curious about how it will impact MERL? Join us for a full-day workshop with development practitioners who have implemented blockchain solutions with social impact goals in various countries. Gain knowledge of the technical promises and drawbacks of blockchain technology as it stands today, and brainstorm how it may solve some of the challenges in MERL in the future. Learn about ethical design principles for blockchain and how to engage with blockchain service providers to ensure that your ideas and programs are realistic and avoid harm. See the agenda here.

Register now to claim a spot at the blockchain and MERL pre-workshop!

Big Data and Evaluation with Michael Bamberger, Kerry Bruce and Peter York, co-hosted by the Independent Evaluation Group at the World Bank – “I” Building, Room: I-1-200, 1850 I St NW, Washington, DC 20006

Join us for a one-day, in-depth workshop on big data and evaluation where you’ll get an introduction to Big Data for Evaluators. We’ll provide an overview of applications of big data in international development evaluation and discuss ways that evaluators are (or could be) using big data and big data analytics in their work. You’ll also learn about the various tools of data science and their potential applications, and run through specific cases where evaluators have employed big data as one of their methods. We will also address the important question of why many evaluators have been slower and more reluctant to incorporate big data into their work than their colleagues in research, program planning, management and other areas such as emergency relief programs. Lastly, we’ll discuss the ethics of using big data in our work. See the agenda here!

Register now to claim a spot at the Big Data and Evaluation pre-workshop!

You can also register here for the main conference on September 6-7, 2018!

 

Check out the agenda for MERL Tech DC!

MERL Tech DC is coming up quickly!

This year we’ll have two pre-workshops on September 5th: What Can Blockchain Offer MERL? (hosted by Chemonics) and Big Data and Evaluation (hosted by the World Bank).

On September 6-7, 2018, we’ll have our regular two days of lightning talks, break-out sessions, panels, Fail Fest, demo tables, and networking with folks from diverse sectors who all converge at the intersection of MERL and Tech!

Registration is open – and we normally sell out, so get your tickets now while there is still space!

Take a peek at the agenda – we’re excited about it — and we hope you’ll join us!

 

MERL Tech Jozi: Highlights, Takeaways and To Dos

Last week 100 people gathered at JoziHub for MERL Tech Jozi — two days of sharing, learning and exploring what’s happening at the intersection of Monitoring, Evaluation, Research and Learning (MERL) and Tech.

This was our first MERL Tech event outside of Washington DC, New York or London, and it was really exciting to learn about the work that is happening in South Africa and nearby countries. The conference vibe was energetic, buzzy and friendly, with lots of opportunities to meet people and discuss this area of work.

Participants spanned backgrounds and types of institutions – one of the things that makes MERL Tech unique! Much of what we aim to do is to bridge gaps and encourage people from different approaches to talk to each other and learn from each other, and MERL Tech Jozi provided plenty of opportunity for that.

Sessions covered a range of topics, from practical, hands-on workshops on Excel, responsible data, and data visualization, to experience sharing on data quality, offline data capture, video measurement, and social network analysis, to big picture discussions on the ICT ecosystem, the future of evaluation, the fourth industrial revolution, and the need to enhance evaluator competencies when it comes to digital tools and new approaches.

Demo tables gave participants a chance to see what tools are out there and to chat with developers about their specific needs. Lightning Talks offered a glimpse into new approaches and reflections on the importance of designing with users and understanding the context in which these new approaches are used. And at the evening “Fail Fest” we heard about evaluation failures, challenges using mobile technology for evaluation, and sustainable tool selection.

Access the MERL Tech Jozi agenda with presentations here or all the presentations here.

3 Takeaways

One key takeaway for me was that there’s a gap between the ‘new school’ of younger, more tech-savvy MERL practitioners and the more established, older evaluation community. Some familiar tensions were present between those with years of experience in MERL but less expertise in tech, and those who are newer to the MERL side yet highly proficient in tech-enabled approaches. The number of people who identify as having skills that span both areas is growing and will continue to do so.

It’s going to be important to continue to learn from one another and work together to bring our MERL work to the next level, both in terms of how we form MERL teams with the necessary expertise internally and how we engage with each other and interact as a whole sector. As one participant put it, we are not going to find all these magical skills in one person — the “MERL Tech Unicorn” — so we need to be cognizant of how we form teams that have the right variety of skills and experiences, including data management and data science where necessary.

It is critical that we all have a better understanding of the wider impacts of technologies, beyond our projects, programs, platforms and evaluations. If we don’t have a strong grip on how technology is affecting wider society, how will we understand how social change happens in increasingly digital contexts? How will we negotiate data privacy? How will we wrestle with corporate data use and the potential for government surveillance? If evaluators’ understanding of technology and the information society is low, how will they offer relevant and meaningful insights? How do diversity, inclusion and bias manifest themselves in a tech-enabled world and in tech-enabled MERL, and what do evaluators need to know in order to ensure representation? How do we understand data in its newer forms and manifestations? How do we ensure ethical and sound approaches? We need all the various sectors that form part of the MERL Tech community to work together to come to a better understanding of both the tangible and intangible impacts of technology in development work, evaluation, and wider society.

A second key takeaway is that we need to do a better job of documenting and evaluating the use of technology in development and in MERL (e.g., the MERL of ICT4D and the MERL of tech-enabled MERL). I learned so much from the practical presentations and experience sharing during MERL Tech Jozi. In many cases, the challenges and learning were very similar across projects and efforts. We need to find better ways of ensuring that this kind of learning is found and accessed, and that it is put into practice when creating new initiatives. We also need to understand more about the power dynamics, negative incentives and other barriers that prevent us from using what we know.

As “MERL Tech”, we are planning to pull some resources and learning together over the next year or two, to trace the shifts in the space over the past 5 years, and to highlight some of the trends we are seeing for the future. (Please get in touch with me if you’d like to participate in this “MERL of MERL Tech” research with a case study, an academic paper, other related research, or as a key informant!)

A third takeaway, as highlighted by Victor Naidu from the South African Monitoring and Evaluation Association (SAMEA), is that we need to focus on developing the competencies that evaluators require for the near future. And we need to think about how the tech sector can better serve the MERL community. SAMEA has created a set of draft competencies for evaluators, but these are missing digital competencies. SAMEA would love your comments and thoughts on what digital competencies evaluators require. They would also like to see you as part of their community and at their next event! (More info on joining SAMEA).

What digital competencies should be added to this list of evaluator competencies? Please add your suggestions and comments here on the google doc.

MERL Tech will be collaborating more closely with SAMEA to include a “MERL Tech Track” at SAMEA’s 2019 conference, and we hope to be back at JoziHub again in 2020 with MERL Tech Jozi as its own separate event.

Be sure to follow us on Twitter or sign up (in the side bar) to receive MERL Tech news if you’d like to stay in the loop! And thanks to all our sponsors – Genesis Analytics, Praekelt.org, The Digital Impact Alliance  (DIAL) and JoziHub!

MERL Tech DC is coming up on September 6-7, with pre-workshops on September 5 on Big Data and Evaluation and Blockchain and MERL! Register here.

 

Evaluating for Trust in Blockchain Applications

by Mike Cooper

This is the fourth in a series of blog posts aimed at discussing and soliciting feedback on how the blockchain can benefit MEL practitioners in their work. The series includes: What does Blockchain Offer to MERL, Blockchain as an M&E Tool, How Can MERL Inform Maturation of the Blockchain, this post, and future posts on integrating blockchain into MEL practices. The series leads into a MERL Tech Pre-Workshop on September 5th, 2018 in Washington D.C. that will go into depth on possibilities and examples of MEL blockchain applications. Register here!

Enabling trust in an efficient manner is the primary innovation that the blockchain delivers, through the use of cryptography and consensus algorithms. Trust is usually built through painstaking, iterative relationship-building. The blockchain reduces the resources required to build this trust, but that does not mean stakeholders will automatically trust a blockchain application. Any blockchain application will still need trust-building mechanisms, and MEL practitioners are uniquely situated to inform how these trust relationships can mature.

Function of trust in the blockchain

Trust is expensive. You pay fees to banks, who provide confidence to sellers that they will receive funds when they take your debit card as payment. Agriculture buyers pay fees to third parties (who can certify that produce is organic, etc.) to validate quality control on products coming through the value chain. Sellers often do not see the money from a debit card transaction in their accounts immediately, and agriculture actors perpetually face the pressures of being paid for goods and/or services they provided weeks earlier. The blockchain could alleviate many of these harmful effects by substituting trust in math for trust in humans.

We pay these third parties because they are trusted agents, yet trusted agents can at times be destructive rent seekers, extracting profits that add no value to the goods and services they handle. End users in these transactions are used to standard payment services for utility bills, school fees, etc. This history of iterative transactions has produced a level of trust in these processes. It may not be equitable, but it is what many are used to, and introducing an innovation like blockchain will require an understanding of how these processes influence stakeholders, what their needs are, and how they might be nudged to trust something different like a blockchain application.
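The idea of substituting trust in math for trust in humans can be illustrated with a toy hash chain. This is a deliberately minimal sketch, not any production blockchain, and the payment records are invented: each record is hashed together with the previous block's hash, so tampering with any earlier record invalidates every later hash.

```python
import hashlib
import json

def block_hash(record: dict, prev_hash: str) -> str:
    # Hash the record together with the previous block's hash, so
    # altering any earlier record changes every hash after it.
    payload = json.dumps(record, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()

# Build a tiny chain of invented payment records.
records = [
    {"payee": "farmer_1", "amount": 100},
    {"payee": "farmer_2", "amount": 250},
]
chain = []
prev = "0" * 64  # genesis value before any blocks exist
for rec in records:
    h = block_hash(rec, prev)
    chain.append({"record": rec, "hash": h})
    prev = h

# Tampering with the first record no longer produces the stored hash,
# which in turn breaks the link to every later block.
tampered = dict(records[0], amount=999)
assert block_hash(tampered, "0" * 64) != chain[0]["hash"]
```

Real blockchains layer distributed consensus on top of this tamper-evidence, which is where the savings on institutional trust described above come from.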

How MEL can help understand and build trust

Just as microfinance introduced new methods of sending and receiving money and access to new financial services, and required piloting different possible solutions to build understanding, so will blockchain applications. This is an area where MEL can add value to achieving mass impact: designing the methods to iteratively build this understanding and test solutions.

MEL has done this before. Any project that requires relationship building should be based on an understanding of the mindset and incentives for relevant actions (behavior) amongst stakeholders, which informs the design of the “nudge” (the treatment) intended to shift behavior.

Many of the programs we work on as MEL practitioners involve various forms and levels of relationship building, which is essentially “trust”. There have been many evaluations of relationship building, whether in microfinance, agriculture value chains or policy reform. In each case, “trust” must be defined as a behavior-change outcome that is “nudged” based on the framing (mindset) of the stakeholder. This means that each stakeholder, depending on their mindset and the behavior required to facilitate blockchain uptake, will require a customized nudge.

The role of trust in project selection and design: What does that mean for MEL

Defining “trust” should begin during project selection and design. Project selection and design criteria and due diligence are invaluable for MEL. Many of the dimensions of evaluability assessments refer back to the work done in the project selection/design phase (which is why some argue evaluability assessments are essentially project design tools). When it comes to blockchain, the USAID Blockchain Primer provides some of the earliest thinking on how to select and design blockchain projects, hence it is a valuable resource for MEL practitioners who want to start thinking about how they will evaluate blockchain applications.

What should we be thinking about?

Relationship building and trust are behaviors, so blockchain theories of change should state outcomes as behavior changes by specific stakeholders (hence the value of tools like stakeholder analysis and outcome mapping). However, these Theories of Change (TOCs) are only as good as what informs them, so building a knowledge base of blockchain applications, together with lessons learned from evidence on relationship building and trust, will be critical to developing a MEL strategy for blockchain applications.

If you’d like to discuss this and related aspects, join us on September 5th in Washington, DC, for a one-day workshop on “What can the blockchain offer MERL?”

Michael Cooper is a former Associate Director at Millennium Challenge Corporation and the U.S. State Dept in Policy and Evaluation.  He now heads Emergence, a firm that specializes in MEL and Blockchain services. He can be reached at emergence.cooper@gmail.com or through the Emergence website.