Tag Archives: MERL Tech

Blockchain for International Development: Using a Learning Agenda to Address Knowledge Gaps

Guest post by John Burg, Christine Murphy, and Jean Paul Pétraud, international development professionals who presented a one-hour session at the  MERL Tech DC 2018 conference on Sept. 7, 2018. Their presentation focused on the topic of creating a learning agenda to help MERL practitioners gauge the value of blockchain technology for development programming. Opinions and work expressed here are their own.

As a trio of monitoring, evaluation, research, and learning (MERL) practitioners in international development, we are keenly aware of the quickly growing interest in blockchain technology. Blockchain is a type of distributed database that creates a nearly unalterable record of cryptographically secure peer-to-peer transactions without a central, trusted administrator. While it was originally designed for digital financial transactions, it is also being applied to a wide variety of interventions, including land registries, humanitarian aid disbursement in refugee camps, and evidence-driven education subsidies. International development actors, including government agencies, multilateral organizations, and think tanks, are looking at blockchain to improve effectiveness or efficiency in their work.

Naturally, as MERL practitioners, we wanted to learn more. Could this radically transparent, shared database, managed by its users, have important benefits for data collection, management, and use? As MERL practice evolves to better suit adaptive management, what role might blockchain play? For example, one inherent feature of blockchain is the unbreakable, traceable linkage between blocks of data. How might such a feature improve the efficiency or effectiveness of data collection, management, and use? What are the advantages of blockchain over other, more commonly used technologies? To guide our learning, we started with an inquiry designed to help us determine if, and to what degree, the various features of blockchain add value to the practice of MERL. With our agenda established, we set out eagerly to find a blockchain case study to examine, with the goal of presenting our findings at the September 2018 MERL Tech DC conference.
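To make the “unbreakable and traceable linkages” concrete, here is a toy hash chain in Python. It is a simplified sketch of the general idea, not any particular blockchain platform (the `add_block` and `verify` helpers are our own illustrative names): each block stores the hash of the previous block, so tampering with any earlier record invalidates every later link.

```python
import hashlib
import json

def add_block(chain, record):
    """Append a record, linking it to the previous block by its hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    block = {"record": record, "prev_hash": prev_hash}
    block["hash"] = hashlib.sha256(
        json.dumps({"record": record, "prev_hash": prev_hash}, sort_keys=True).encode()
    ).hexdigest()
    chain.append(block)

def verify(chain):
    """Recompute every hash; any tampered record breaks the chain."""
    prev_hash = "0" * 64
    for block in chain:
        expected = hashlib.sha256(
            json.dumps({"record": block["record"], "prev_hash": block["prev_hash"]},
                       sort_keys=True).encode()
        ).hexdigest()
        if block["prev_hash"] != prev_hash or block["hash"] != expected:
            return False
        prev_hash = block["hash"]
    return True

chain = []
add_block(chain, {"indicator": "households_surveyed", "value": 120})
add_block(chain, {"indicator": "households_surveyed", "value": 135})
print(verify(chain))               # True
chain[0]["record"]["value"] = 999  # tamper with an early record
print(verify(chain))               # False
```

Real blockchain systems add consensus, digital signatures, and replication across many nodes on top of this linkage; the sketch only shows why a recorded value is hard to alter silently.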

What we did

We documented 43 blockchain use-cases through internet searches, most of which were described with glowing claims like “operational costs… reduced up to 90%,” or with the assurance of “accurate and secure data capture and storage.” We found a proliferation of press releases, white papers, and persuasively written articles. However, we found no documentation or evidence of the results blockchain was purported to have achieved in these claims. We also did not find lessons learned or practical insights, as are available for other technologies in development.

We fared no better when we reached out directly to several blockchain firms, via email, phone, and in person. Not one was willing to share data on program results, MERL processes, or adaptive management for potential scale-up. Despite all the hype about how blockchain will bring unprecedented transparency to processes and operations in low-trust environments, the industry is itself opaque. From this, we determined that the lack of evidence supporting value claims of blockchain in the international development space is a critical gap for potential adopters.

What we learned

Blockchain firms supporting development pilots are not practicing what they preach — improving transparency — by sharing data and lessons learned about what is working, what isn’t working, and why. There are many generic decision trees and sales pitches available to convince development practitioners of the value blockchain will add to their work. But, there is a lack of detailed data about what happens when development interventions use blockchain technology.

Since the function of MERL is to bridge knowledge gaps and help decision-makers take action informed by evidence, we decided to explore the crucial questions MERL practitioners may ask before determining whether blockchain will add value to data collection, management, and use. More specifically, rather than a go/no-go decision tool, we propose using a learning agenda to probe the role of blockchain in data collection, data management, and data use at each stage of project implementation.

“Before you embark on that shiny blockchain project, you need to have a very clear idea of why you are using a blockchain.”

Avoiding the Pointless Blockchain Project, Gideon Greenspan (2015)

Typically, “A learning agenda is a set of questions, assembled by an organization or team, that identifies what needs to be learned before a project can be planned and implemented.” The process of developing and finding answers to learning questions is most useful when it’s employed continuously throughout the duration of project implementation, so that changes can be made based on what is learned about changes in the project’s context, and to support the process of applying evidence to decision-making in adaptive management.

We explored various learning agenda questions for data collection, management, and use that should continue to be developed and answered throughout the project cycle. However, because the content of a learning agenda is highly context-dependent, we focused on general themes. Examples of questions that might be asked by beneficiaries, implementing partners, donors, and host-country governments include:

  • What could each of a project’s stakeholder groups gain from the use of blockchain across the stages of design and implementation, and, would the benefits of blockchain incentivize them to participate?
  • Can blockchain resolve trust or transparency issues between disparate stakeholder groups, e.g. to ensure that data reported represent reality, or that they are of sufficient quality for decision-making?
  • Are there existing technologies that are less expensive, more appropriate, or easier to execute, and that already meet each group’s MERL needs?
  • Are there unaddressed MERL management needs blockchain could help address, or capabilities blockchain offers that might inspire new and innovative thinking about what is done, and how it gets done?

This approach resonated with other MERL for development practitioners

We presented this approach to a diverse group of professionals at MERL Tech DC, including other MERL practitioners and IT support professionals, representing organizations from multilateral development banks to US-based NGOs. In a participatory roundtable, session participants discussed how MERL professionals could use learning agendas to help their organizations both decide whether blockchain is appropriate for intervention design and guide learning during implementation to strengthen adaptive management.

Questions and issues raised by the session participants ranged widely, from how blockchain works, to expressing doubt that organizational leaders would have the risk appetite required to pilot blockchain when time and costs (financial and human resource) were unknown. Session participants demonstrated an intense interest in this topic and our approach. Our session ran over time and side conversations continued into the corridors long after the session had ended.

Next Steps

Our approach, as it turns out, echoes others in the field who question whether the benefits of blockchain add value above and beyond existing technologies, or accrue to stakeholders beyond the donors that fund them. This trio of practitioners will continue to explore ways MERL professionals can help their teams learn about the benefits of blockchain technology for international development. But, in the end, it may turn out that the real value of blockchain is not the application of the technology itself, but its role as an impetus to question what we do, why we do it, and how we could do it better.

Creative Commons License
Blockchain for International Development: Using a Learning Agenda to Address Knowledge Gaps by John Burg, Christine Murphy, and Jean-Paul Petraud is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License

MERL and the 4th Industrial Revolution: Submit your AfrEA abstract now!

by Dhashni Naidoo, Genesis Analytics

Digitization is everywhere! Digital technologies and data have changed the way we engage with each other and how we work. We cannot escape the effects of digitization. Whether in our personal capacity — how our own data is being used — or in our professional capacity, in terms of understanding how to use data and technology. These changes are exciting! But we also need to consider the challenges they present to the MERL community and their impact on development.

The advent and proliferation of big data has the potential to change how evaluations are conducted. New skills are needed to process and analyse big data. Mathematics, statistics and analytical skills will be ever more important. As evaluators, we need to be discerning about the data we use. In a world of copious amounts of data, we need to ensure we have the ability to select the right data to answer our evaluation questions.

We also have an ethical and moral duty to manage data responsibly. We need new strategies and tools to guide the ways in which we collect, store, use and report data. Evaluators need to improve our skills as related to processing and analysing data. Evaluative thinking in the digital age is evolving and we need to consider the technical and soft skills required to maintain integrity of the data and interpretation thereof.

Though technology can make data collection faster and cheaper, two important considerations are access to technology by vulnerable groups and data integrity. Women, girls, and people in rural areas normally do not have the same levels of access to technology as men and boys. This affects our ability to rely solely on technology to collect data from these population groups, because we need to be aware of inclusion, bias, and representativity. Equally, we need to consider how to maintain the quality of data being collected through new technologies such as mobile phones, and to understand how the use of new devices might change or alter how people respond.

In a rapidly changing world where technologies such as AI, Blockchain, Internet of Things, drones and machine learning are on the horizon, evaluators need to be robust and agile in how we change and adapt.

For this reason, a new strand has been introduced at the African Evaluation Association (AfrEA) conference, taking place from 11 – 15 March 2019 in Abidjan, Cote d’Ivoire. This stream, The Fourth Industrial Revolution and its Impact on Development: Implications for Evaluation, will focus on five sub-themes:

  • Guide to Industry 4.0 and Next Generation Tech
  • Talent and Skills in Industry 4.0
  • Changing World of Work
  • Evaluating youth programmes in Industry 4.0
  • MERLTech

Genesis Analytics will be curating this strand.  We are excited to invite experts working in digital development and practitioners at the forefront of technological innovation for development and evaluation to submit abstracts for this strand.

The deadline for abstract submissions is 16 November 2018. For more information please visit the AfrEA Conference site!

Does your MERL Tech effort need innovation or maintenance?

by Stacey Berlow, Managing Partner at Project Balance and Jana Melpolder, MERL Tech DC Volunteer and Communications Manager at Inveneo. Find Jana on Twitter:  @JanaMelpolder

At MERL Tech DC 2018, Project Balance’s Stacey Berlow led a session titled “Application Maintenance Isn’t Sexy, But Critical to Success.” In her session and presentation, she outlined several reasons why software maintenance planning and funding is essential to the sustainability of an M&E software solution.

The problems that arise with software or applications go well beyond day-to-day care and management. A foundational study on software maintenance by B. Lientz and E. Swanson [1] looked at the activities of 487 IT organizations and found that maintenance activities can be broken down into four types:

  • Corrective (bug fixing),
  • Adaptive (impacts due to changes outside the system),
  • Perfective (enhancements), and
  • Preventive (monitoring and optimization)

The table below outlines the percentage of time IT departments spend on the different types of maintenance. Note that most of the time dedicated to maintenance is spent not on defect fixing (corrective maintenance) but on enhancing the tool or system (perfective maintenance).

Maintenance type and effort breakdown:

  • Corrective (total: 21.7%): emergency fixes, 12.4%; routine debugging, 9.3%
  • Adaptive (total: 23.6%): changes to data inputs and files, 17.4%; changes to hardware and system software, 6.2%
  • Perfective (total: 51.3%): customer enhancements, 41.8%; improvements to documentation, 5.5%; optimization, 4.0%
  • Other (total: 3.4%): various, 3.4%
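As a quick arithmetic check on the figures above, the subcategory percentages do add up to the category totals, and the four totals sum to 100% (a minimal sketch; the numbers are simply those cited from the study):

```python
# Effort percentages as cited from the Lientz and Swanson study.
maintenance = {
    "Corrective": {"Emergency fixes": 12.4, "Routine debugging": 9.3},
    "Adaptive": {
        "Changes to data inputs and files": 17.4,
        "Changes to hardware and system software": 6.2,
    },
    "Perfective": {
        "Customer enhancements": 41.8,
        "Improvements to documentation": 5.5,
        "Optimization": 4.0,
    },
    "Other": {"Various": 3.4},
}

# Sum each category's subitems, rounding away float noise.
totals = {kind: round(sum(parts.values()), 1) for kind, parts in maintenance.items()}
print(totals)
# {'Corrective': 21.7, 'Adaptive': 23.6, 'Perfective': 51.3, 'Other': 3.4}
print(round(sum(totals.values()), 1))  # 100.0
```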

The study also pointed out some of the most common maintenance problems:

  • Poor quality application system documentation
  • Excessive demand from customers
  • Competing demands for maintenance personnel time
  • Inadequate training of user personnel
  • Turnover in the user organizations

Does Your Project Need Innovations or Just Maintenance?

Organizations often prioritize innovation over maintenance. They have a list of enhancing strategies or improvements they want to make, and they’ll start new projects when what they should really be focusing on is maintenance. International development organizations often want to develop new software with the latest technology — they want NEW software for their projects. In reality, what is usually needed is software maintenance and enhancement of an existing product.

Moreover, when an organization is considering adopting a new piece of software, it’s absolutely vital that it think about the cost of maintenance in addition to the cost of development. Experts estimate that the cost of maintenance can vary from 40%-90% of the original build cost [2]. Maintenance costs a lot more than many organizations realize.

It’s also not easy to know beforehand or to estimate what the actual cost of maintenance will be. Creating a Service Level Agreement (SLA), which specifies the time required to respond to issues or deploy enhancements as part of a maintenance contract, is vital to having a handle on the human resources, price levels and estimated costs of maintenance.

As Stacey emphasizes, “Open Source does not mean ‘free’. Updates to DHIS2 versions, Open MRS, Open HIE, Drupal, WordPress, and more WILL require maintenance to custom code.”

It’s All About the Teamwork

Another point to consider when it comes to the cost of maintenance for your app or software is the time and money spent on staff. Members of your team will not always be well-versed in a certain type of software. Also, when transferring a software asset to a funder or ministry/government entity, consider the skill level of the receiving team as well as the time availability of team members. Many software products cannot be well maintained by teams that were not involved in developing them. As a result, they often fall into disrepair and become unusable. A software vendor may be better equipped to monitor and respond to issues than the receiving team.

What Can You Do?

So what are effective ways to ensure the sustainability of software tools? There are a few strategies you can use. First, ensure that your IT staff members are involved in the planning of your project or organization’s RFP process. They will give you valuable metrics on effort and cost, right up front, so that you can secure funding. Second, scale the size of your project so that your tool budget matches your funds. Consider the minimum software functionality you need, and enhance the tools later. Third, invite the right stakeholders and IT staff members to meetings and conference calls as soon as the project begins. Having the right people on board early on will make a huge difference in how you manage and transition software to country stakeholders at the end of the project!

The session at MERL Tech ended with a discussion of the incredible need and value of involving local skills and IT experts as part of the programming team. Local knowledge and IT expertise is one of the most important, if not the most important, pieces of the application maintenance puzzle. One of the key ideas I learned was that application maintenance should start at the local level and grow from there. Local IT personnel will be able to answer many technical questions and address many maintenance issues. Furthermore, IT staff members from international development agencies will be able to learn from local IT experts as well, giving a boost in the capacity of all staff members across the board.

Application maintenance may not be the most interesting part of an international development project, but it is certainly one of the most vital to help ensure the project’s success and ongoing sustainability.

Check out this great Software Maintenance/Monitoring Checklist to ensure you’ve considered everything you need when planning your next MERL Tech (or other) effort!

[1] B.P. Lientz and E.B. Swanson, Software Maintenance Management: A Study of the Maintenance of Computer Application Software in 487 Data Processing Organizations, Addison-Wesley, 1980.

[2] Jeff Hanby, Software Maintenance: Understanding and Estimating Costs, https://bit.ly/2Ob3iOn

How to Create a MERL Culture within Your Organization

Written by Jana Melpolder, MERL Tech DC Volunteer and former ICT Works Editor. Find Jana on Twitter:  @JanaMelpolder

As organizations grow, they become increasingly aware of how important MERL (Monitoring, Evaluation, Research, and Learning) is to their international development programs. To meet this challenge, new hires need to be brought on board, but more importantly, changes need to happen in the organization’s culture.

How can nonprofits and organizations change to include more MERL? Friday afternoon’s MERL Tech DC session “Creating a MERL Culture at Your Nonprofit” set out to answer that question. Representatives from Salesforce.org and Samaschool.org were part of the discussion.

Salesforce.org staff members Eric Barela and Morgan Buras-Finlay emphasized that their organization has set aside resources (financial and otherwise) for international and external M&E. “A MERL culture is the foundation for the effective use of technology!” shared Eric Barela.

Data is a vital part of MERL, but those providing it to organizations often need to “hold the hands” of those on the receiving end. What is especially vital is helping people understand this data and gain deeper insight from it. It’s not just about the numbers – it’s about what is meant by those numbers and how people can learn and improve using the data.

According to Salesforce.org, an organization’s MERL culture consists of its understanding of the benefit of defining, measuring, understanding, and learning for social impact with rigor. And building or maintaining a MERL culture doesn’t just mean letting the data team do whatever they like, or putting them in charge. Instead, it’s vital to focus on outcomes. Salesforce.org discussed how its MERL staff prioritize keeping a foot in the door in many places and meeting often with people from different departments.

Where does technology fit into all of this? According to Salesforce.org, the push is on to keep the technology ethical. Morgan Buras-Finlay described it well, saying “technology goes from building a useful tool to a tool that will actually be used.”

Another participant on Friday’s panel was Samaschool’s Director of Impact, Kosar Jahani. Samaschool describes itself as a San Francisco-based nonprofit focused on preparing low-income populations to succeed as independent workers. The organization has “brought together a passionate group of social entrepreneurs and educators who are reimagining workforce development for the 21st century.”

Samaschool creates a MERL culture through Learning Calls for their different audiences and funders. These Learning Calls are done regularly, they have a clear agenda, and sometimes they even happen openly on Facebook LIVE.

By ensuring a high level of transparency, Samaschool is also aiming to create a culture of accountability where it can learn from failures as well as successes. By using social media, doors are opened and people have an easier time gaining access to information that would otherwise have been difficult to obtain.

Kosar also explained a few negative aspects of this kind of transparency: there is a risk to putting information in such a public place, and it can lead to lost future investment. However, the organization feels the practice has helped build relationships and enhanced interactions.

Sadly, flight delays prevented a third organization, Big Elephant Studios, and its founder Andrew Means from attending MERL Tech. Luckily, his slides were presented by Eric Barela. Andrew’s slides highlighted the following three things that are needed to create a MERL culture:

  • Tools – investments in tools that help an organization acquire, access, and analyze the data it needs to make informed decisions
  • Processes – Investments in time to focus on utilizing data and supporting decision making
  • Culture – Organizational values that ensure that data is invested in, utilized, and listened to

One of Andrew’s main points was that generally, people really do want to gain insight and learn from data. The other members of the panel reiterated this as well.

A few lingering questions from the audience included:

  • How do you measure how culture is changing within an organization?
  • How does one determine if an organization’s culture is more focused on MERL than previously?
  • Which social media platforms and strategies can be used to create a MERL culture that provides transparency to clients, funders, and other stakeholders?

What about you? How do you create and measure the “MERL Culture” in your organization?

Report back on MERL Tech DC

Day 1, MERL Tech DC 2018. Photo by Christopher Neu.

The MERL Tech Conference explores the intersection of Monitoring, Evaluation, Research and Learning (MERL) and technology. The main goals of “MERL Tech” as an initiative are to:

  • Transform and modernize MERL in an intentionally responsible and inclusive way
  • Promote ethical and appropriate use of tech (for MERL and more broadly)
  • Encourage diversity & inclusion in the sector & its approaches
  • Improve development, tech, data & MERL literacy
  • Build/strengthen community, convene, help people talk to each other
  • Help people find and use evidence & good practices
  • Provide a platform for hard and honest talks about MERL and tech and the wider sector
  • Spot trends and future-scope for the sector

Our fifth MERL Tech DC conference took place on September 6-7, 2018, with a day of pre-workshops on September 5th. Some 300 people from 160 organizations joined us for the two days, and another 70 people attended the pre-workshops.

Attendees came from a wide diversity of professions and disciplines:

What professional backgrounds did we see at MERL Tech DC in 2018?

An unofficial estimate on speaker racial and gender diversity is here.

Gender balance on panels

At this year’s conference, we focused on 5 themes (See the full agenda here):

  1. Building bridges, connections, community, and capacity
  2. Sharing experiences, examples, challenges, and good practice
  3. Strengthening the evidence base on MERL Tech and ICT4D approaches
  4. Facing our challenges and shortcomings
  5. Exploring the future of MERL

As always, sessions were related to: technology for MERL, MERL of ICT4D and Digital Development programs, MERL of MERL Tech, digital data for adaptive decisions/management, ethical and responsible data approaches and cross-disciplinary community building.

Big Data and Evaluation Session. Photo by Christopher Neu.

Sessions included plenaries, lightning talks and breakout sessions. You can find a list of sessions here, including any presentations that have been shared by speakers and session leads. (Go to the agenda and click on the session of interest. If we have received a copy of the presentation, there will be a link to it in the session description).

One topic that we explored more in-depth over the two days was the need to get better at measuring ourselves and understanding both the impact of technology on MERL (the MERL of MERL Tech) and the impact of technology overall on development and societies.

As Anahi Ayala Iacucci said in her opening talk — “let’s think less about what technology can do for development, and more about what technology does to development.” As another person put it, “We assume that access to tech is a good thing and immediately helps development outcomes — but do we have evidence of that?”

Feedback from participants

Some 17.5% of participants filled out our post-conference feedback survey, and 70% of them rated their experience either “awesome” or “good”. Another 7% of participants rated individual sessions through the “Sched” app, with an average session satisfaction rating of 8.8 out of 10.

Topics that survey respondents suggested for next time include: more basic tracks and more advanced tracks, more sessions relating to ethics and responsible data and a greater focus on accountability in the sector.  Read the full Feedback Report here!

What’s next? State of the Field Research!

In order to arrive at an updated sense of where the field of technology-enabled MERL is, a small team of us is planning to conduct some research over the next year. At our opening session, we did a little crowdsourcing to gather input and ideas about what the most pressing questions are for the “MERL Tech” sector.

We’ll be keeping you informed here on the blog about this research and welcome any further input or support! We’ll also be sharing more about individual sessions here.

MERL on the Money: Are we getting funding for data right?

By Paige Kirby, Senior Policy Advisor at Development Gateway

Time for a MERL pop quiz: Out of US $142.6 billion spent in ODA each year, how much goes to M&E?

A)  $14.1-17.3 billion
B)  $8.6-10 billion
C)  $2.9-4.3 billion

It turns out, the correct answer is C. An average of only $2.9-$4.3 billion — or just 2-3% of all ODA spending — goes towards M&E.
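The 2-3% figure follows directly from the numbers cited; a quick back-of-the-envelope check:

```python
# Share of annual ODA spending that goes to M&E, per the figures above.
oda_total = 142.6           # US$ billions per year
me_low, me_high = 2.9, 4.3  # US$ billions per year spent on M&E

print(f"{me_low / oda_total:.1%} to {me_high / oda_total:.1%}")  # 2.0% to 3.0%
```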

That’s all we get. And despite the growing breadth of logframes and depth of donor reporting requirements, our MERL budgets are not likely to suddenly scale up.

So, how can we use our drop in the bucket better, to get more results for the same amount of money?

At Development Gateway, we’ve been doing some thinking and applied research on this topic, and have three key recommendations for making the most of MERL funding.

Teamwork

Image Credit: Kjetil Korslien CC BY NC 2.0

When seeking information for a project baseline, midline, endline, or anything in between, it has become second nature to budget for collecting (or commissioning) primary data ourselves.

Really, it would be more cost- and time-effective for all involved if we got better at asking peers in the space for already-existing reports or datasets. This is also an area where our donors – particularly those with large country portfolios – could help with introductions and matchmaking.

Consider the Public Option

Image Credit: Development Gateway

And speaking of donors, a second point: why are we implementers responsible for collecting MERL-relevant data in the first place?

If partner governments and donors invested in country statistical and administrative data systems, we implementers would not have such incentive or need to conduct one-off data collection.

For example, one DFID Country Office we worked with noted that a lack of solid population and demographic data limited their ability to monitor all DFID country programming. As a result, DFID decided to co-fund the country’s first census in 30 years – which benefited DFID and non-DFID programs.

The term “country systems” can sound a bit esoteric and OECD-like, but such systems really can be a cost-effective public good, if properly resourced by governments (or donor agencies) and made available.

Flip the Paradigm

Image Credit: Rafael J M Souza CC BY 2.0

And finally, a third way to get more bang for our buck is – ready or not – Results Based Financing, or RBF. RBF is coming (and, for folks in health, it’s probably arrived). In an RBF program, payment is made only when pre-determined results have been achieved and verified.

But another way to think about RBF is as an extreme paradigm shift of putting M&E first in program design. RBF may be the shake-up we need, in order to move from monitoring what already happened, to monitoring events in real-time. And in some cases – based on evidence from World Bank and other programming – RBF can also incentivize data sharing and investment in country systems.

Ultimately, the goal of MERL should be using data to improve decisions today. Through better sharing, systems thinking, and (maybe) a paradigm shake-up, we stand to gain a lot more mileage with our 3%.

 

Integrating big data into program evaluation: An invitation to participate in a short survey

As we all know, big data and data science are becoming increasingly important in all aspects of our lives. There is similarly rapid growth in the applications of big data in the design and implementation of development programs. Examples range from the use of satellite images and remote sensors in emergency relief and the identification of poverty hotspots, to the use of mobile phones to track migration and estimate changes in income (by tracking airtime purchases), social media analysis to track sentiment and predict increases in ethnic tension, and the use of smartphones and Internet of Things (IoT) devices to monitor health through biometric indicators.

Despite the rapidly increasing role of big data in development programs, there is speculation that evaluators have been slower to adopt big data than have colleagues working in other areas of development programs. Some of the evidence for the slow take-up of big data by evaluators is summarized in “The future of development evaluation in the age of big data”.  However, there is currently very limited empirical evidence to test these concerns.

To try to fill this gap, my colleagues Rick Davies and Linda Raftree and I would like to invite those of you who are interested in big data and/or the future of evaluation to complete the attached survey. The survey, which takes about 10 minutes to complete, asks evaluators to report on the data collection and data analysis techniques they use in the evaluations they design, manage, or analyze, while also asking data scientists how familiar they are with evaluation tools and techniques.

The survey was originally designed to obtain feedback from participants in the MERL Tech conferences on “Exploring the Role of Technology in Monitoring, Evaluation, Research and Learning in Development” that are held annually in London and Washington, DC, but we would now like to broaden the focus to include a wider range of evaluators and data scientists.

One of the ways in which the findings will be used is to help build bridges between evaluators and data scientists by designing integrated training programs for both professions that introduce the tools and techniques of both conventional evaluation practice and data science, and show how they can be combined to strengthen both evaluations and data science research. “Building bridges between evaluators and big data analysts” summarizes some of the elements of a strategy to bring the two fields closer together.

The findings of the survey will be shared through this and other sites, and we hope this will stimulate a follow-up discussion. Thank you for your cooperation and we hope that the survey and the follow-up discussions will provide you with new ways of thinking about the present and potential role of big data and data science in program evaluation.

Here’s the link to the survey – please take a few minutes to fill it out!

You can also join me, Kerry Bruce and Pete York on September 5th for a full day workshop on Big Data and Evaluation in Washington DC.

MERL Tech Jozi Feedback Report

MERL Tech Jozi took place on August 1-2, 2018. Below are some highlights from the post-conference survey that was sent to participants requesting feedback on their MERL Tech Jozi experience. Thirty-four percent of our attendees filled out the post-conference survey via Google Forms.

Overall Experience

Here’s how survey participants rated their overall experience:

Participants’ favorite sessions

The sessions that were most frequently mentioned as favorites and some reasons why included:

Conducting a Baseline of the ICT Ecosystem – Genesis Analytics and DIAL

  • …interactive session and felt practical. I could easily associate with what the team was saying. I really hope these learnings make it to implementation and start informing decision-making around funding! The presenters were also great.
  • …interesting and engaging, findings were really relevant to the space.
  • …shared lessons and insights resonated with my own professional experience. The discussions were fruitful and directly relevant to my line of work.
  • …incredibly useful.
  • The study confirmed a lot of my perceptions as an IT developer in the MERL space, but now I have some more solid backup. I will use this in my webinars and consulting on “IT for M&E”.

Datafication Discrimination — Media Monitoring Africa, Open Data Durban, Amandla.mobi and Oxfam South Africa

  • Linked both MERL and Tech to programme and focussed on the impact of MERL Tech in terms of sustainable, inclusive development.
  • Great panel, very knowledgeable, something different to the usual M&E. Interactive and diverse.
  • …probably most critical and informative in terms of understanding where the sector was at … the varied level of information across the audience and the panel was fascinating – if slightly worrying about how unclear we are as an M&E sector.

When WhatsApp Becomes About More Than Messaging – Genesis Analytics, Every1Mobile and Praekelt.org

  • As an evaluator, I have never thought of using WhatsApp as a way of communicating with potential beneficiaries. It made me think about different ways of getting in touch with beneficiaries of a programme, and getting them to participate in a survey.
  • The different case studies included examples, great media, a good Q&A session at the end, and I learnt new things. WhatsApp is only just reaching its potential in mHealth, so it was good to learn real-life lessons.
  • Hearing about the opportunities and challenges of applying a tool in different contexts and for different purposes gave good all-around insights.

Social Network Analysis – Data Innovators and Praekelt.org

  • I was already very familiar with SNA but had not had the opportunity to use it for a couple of years. Hearing this presentation with examples of how others have used it really inspired me, and I’ve since sketched out a new project using SNA on data we’re currently gathering for a new product! I came away feeling really inspired and excited about doing the analysis.
Least favorite sessions

Where participants rated sessions as their “least favorite,” it was because:

  • The link to technology was not clear
  • It felt like a sales pitch
  • It felt extractive
  • Speaker went on too long
  • Views on MERL or Tech seemed old fashioned
Topics that need more focus in the future

Unpack the various parts of “M” “E” “R” “L”

  • Technology across MERL, not just monitoring. There was a lot of technology for data collection & tracking but little for ERL in MERL
  • More evaluation?
  • The focus was very much on evaluation (from the sessions I attended) and I feel like we did not talk about the monitoring, research and learning so much. This is huge for overall programme implementation and continuously learning from our data. Next time, I would like to talk a bit more about how organisations are actually USING data day-to-day to make decisions (monitoring) and learning from it to adapt programmes.
  • The R of MERL is hardly discussed at all. Target this for the next MERL Tech.

New digital approaches / data science

  • AI and how it can introduce biases, machine learning, Python
  • A data science-y stream could open new channels of communication and collaboration

Systems and interoperability

  • Technology for data management between organizations and teams.
  • Integrations between platforms.
  • Public Health, Education. Think about how we discuss and bring more attention to the various systems out there, and how we ensure interoperability and systems that support the long-term visions of countries.
  • Different types of MERL systems. We focused a lot on data collection systems, but there is a range of monitoring systems that programme managers can use to make decisions.

Scale and sustainability

  • How to engage and educate governments on digital data collection systems.
  • The debate on open source: in the development sector it is pushed as the holy grail, whereas most other software worldwide is proprietary for a reason (safety, maintenance, continued support, custom solutions), and open source doesn’t mean free.
  • Business opportunities. MERL as a business tool. How MERL Tech has proved ROI in business and real market settings, even if those settings were in the NGO/NPO space. What is the business case behind MERL Tech and MERL Tech developments?
Ah ha! Moments

Learning about technology / tech approaches

  • I found the design workshops enlightening, and, as an evaluator, did not realise how much time techies put into user testing.
  • I am a tech dinosaur – so everything I learned about a new technology and how it can be applied in evaluation was an ‘aha!’

New learning and skills

  • The SNA [social network analysis] inspiration that struck me was my big takeaway! I can’t wait to get back to the office and start working on it.
  • Really enjoyed learning about WhatsApp for SBCC.
  • The qualitative difference in engagement, structure, analysis and resource need between communicating via SMS versus IM. (And realising again how old school I am for a tech person!)

Data privacy, security, ethics

  • My ‘ah ha’ moment was around how we could improve data handling.
  • Data security
  • Our sector (including me) doesn’t really understand ‘big data,’ how it can discriminate, and what that might mean to our programmes.

Talking about failure

  • The fail fest was wonderful. We all theoretically know that it’s good to be honest about failure and to share what that was like, but this took honest reflection to a whole new level and set the tone for Day 2.

I’m not alone!

  • The challenges I am facing with introducing tech for MERL in my organisations aren’t unique to me.
  • There are other MERL Tech practitioners with a journalism/media background! This is exciting and makes me feel I am in the right place. The industry seems to want to gate keep (academia, rigorous training) so this is interesting to consider going forward, but also excites me to challenge this through mentorship opportunities and opening the space to others like me who were given a chance and gained experience along the way. Also had many Aha moments for using WhatsApp and its highly engaging format.
  • Learning that many other practitioners support learning on your own.
  • There are people locally who are interested in connecting with and learning from each other.
Recommendations for future MERL Tech events

More of most everything…

  • More technical sessions
  • More panel discussions
  • More workshops
  • More in-depth sessions!
  • More time for socializing and guided networking like the exercise with the coloured stickers on Day 1
  • More NGOs involved, especially small NGOs.
  • More and better marketing to attract more people
  • More demo tables, or have new people set up demo tables each day
  • More engagement: is there a way that MERL Tech could be used further to shape, drive and promote the agenda of using technology for better MERL? Maybe through a joint session where we identify important future topics to focus on? Just as something that gives those who want the opportunity to further engage with and contribute to MERL Tech and its agenda-setting?
  • The conversations generally were very ‘intellectual’. Too many conversations revolved around how the world had to move on to better appreciate the value of MERL, rather than how MERL was adapted, used and applied in the real world. [It was] too dominated by MERL early adopters and proponents, rather than MERL customers… Or am I missing the point, which may be that MERL (in South Africa) is still a subculture for academic minded researchers. Hope not.
  • More and better wine!
Kudos
  • For some reason this conference – as opposed to so many other conferences I have been to – actually worked. People were enthused, they were kind, willing to talk – and best of all by day 2 they hadn’t dropped out like flies (which is such an issue with conferences!). So whatever you did do it again next time!
  • Very interactive and group-focused! This was well balanced with informative sessions. I think creative group work is good but it wouldn’t be good to have the whole conference like this. However, this was the perfect amount of it and it was well led and organized.
  • I really had a great time at this conference. The sessions were really interesting and it was awesome to get so many different people in the same place to discuss such interesting topics and issues. Lunch was also really delicious.
  • Loved the lightning talks! Also the breakaway sessions were great, and the coffee was amazing – thank you. Fail fest is such a cool concept and we’re looking to introduce this kind of thinking into our own organisation more. We all struggle with the same things; it was good to be around like-minded professionals.
  • I really appreciated the fairly “waste-free” conference with no plastic bottles, unnecessary programmes and other things that I’ll just throw away afterwards. This was a highlight for me!
  • I really enjoyed this conference. Firstly the food was amazing (always a win). But most of all the size was perfect. It was really clever the way you forced us to sit in small lunch sizes and that way by the end of the conference I really had the confidence to speak to people. Linda was a great organiser – enthusiastic and punctual.
Who attended MERL Tech Jozi?

Who presented at MERL Tech Jozi?


If you’d like to experience MERL Tech, sign up now to attend in Washington, DC on September 5-7, 2018!

Using WhatsApp to improve family health

Guest post from ​Yolandi Janse van Rensburg, Head of Content & Communities at Every1Mobile. This post first appeared here.

I recently gave a talk at the MERL Tech 2018 conference in Johannesburg about the effectiveness of WhatsApp as a communication channel to reach low-income communities in the urban slums of Nairobi, Kenya, and to understand their health behaviours and needs.

As the Mobile Economy Report 2018 shows, communicating more effectively with a larger audience in hard-to-reach areas has never been easier. Instead of relying on paper questionnaires or instructing field workers to knock on doors, you can now communicate directly with your users, no matter where you are in the world.

With this in mind, some may choose to create a WhatsApp group, send a batch of questions and wait for quality insights to stream in, but in reality they receive little to no participation from their users.

Why, you ask? WhatsApp can be a useful tool to engage your users, but there are a few lessons we’ve learnt along the way to encourage high levels of participation and generate important insights.

Building trust comes first

Establishing a relationship with the communities you’re targeting can easily be overlooked. Between project deadlines, coordination and insight gathering, it can be easy to neglect forging a connection with our users, offering a window into our thinking, so they can learn more about who we are and what we’re trying to achieve. This is the first step in building trust and acquiring your users’ buy-in to your programme. This lies at the core of Every1Mobile’s programming. The relationship you build with your users can unlock honest feedback that is crucial to the success of your programme going forward.

In late 2017, Every1Mobile ran a 6-week WhatsApp pilot with young mothers and mothers-to-be in Kibera and Kawangware, Nairobi, to better understand their hygiene and nutrition practices in terms of handwashing and preparing a healthy breakfast for their families. The U Afya pilot kicked off with a series of on-the-ground breakfast clubs, where we invited community members to join. It was an opportunity for the mothers to meet us, as well as one another, which made them feel more comfortable to participate in the WhatsApp groups.

Having our users meet beforehand and become acquainted with our local project team ensured that they felt confident enough to share honest feedback, talk amongst themselves and enjoy the WhatsApp chats. As a result, 60% of our users attended every WhatsApp session and 84% attended more than half of the sessions.

Design content using SBCC

At Every1Mobile, we do not simply create engaging copy. Our content design is based on research into user behaviour, analytics and feedback, tailored with a human-centric approach to inspire creative content strategies and solutions that nurture an understanding of our users.

When we talk about content design, we mean taking a user need and presenting it in the best way possible. Applying content design principles means we do the hard work for the user. And the reward is communication that is simpler, clearer and faster for our communities.

For the U Afya pilot, we incorporated our partner Unilever’s behaviour change approach, the Five Levers for Change, to influence attitudes and behaviours, and improve family health and nutrition. The approach aims to create sustainable habits using social and behaviour change communication (SBCC) techniques like signposting, pledging, prompts and cues, and peer support. Each week covered a different topic, including pregnancy, a balanced diet, an affordable and healthy breakfast, breastfeeding, hygiene and weaning for infants.

Localisation means more than translating words

Low adult literacy in emerging markets can have a negative impact on the outcomes of your behaviour change campaigns. In Kenya, roughly 38.5% of the adult population is illiterate, with bottom-of-the-pyramid communities having little formal education. This means translating your content into a local language may not be enough.

To address this challenge for the U Afya pilot, our Content Designers worked closely with our in-country Community Managers to localise the WhatsApp scripts so they would be applicable to the daily lives of our users. We translated our WhatsApp scripts into Sheng, even though English and Kiswahili are the official languages in Kenya. Sheng is a local slang blend of English, Kiswahili and ethnic words from other cultures. It is widely spoken in urban communities, with over 3,900 words, idioms and phrases. It’s a language that changes and evolves constantly, which means we needed a translator with street knowledge of urban life in Nairobi.

Beyond translating our scripts, we integrated real-life references applicable to our target audience. We worked with our project team to find out what the daily lives of the young mothers in Kibera and Kawangware looked like. What products are affordable and accessible? Do they have running water? What do they cook for their families and what time is supper served? Answers to these questions had a direct impact on our use of emojis, recipes and advice in our scripts. For example, we integrated local foods into the content like uji and mandazi for breakfast and indigenous vegetables including ndengu, ngwashi and nduma.

Can WhatsApp drive behaviour change?

The answer is ‘yes’: mobile has the potential to drive SBCC. We observed an interesting link between shifts in attitude and engagement, with increased self-reported assimilation of new behaviours among women who actively posted during the WhatsApp sessions.

To measure the impact of our pilot on user knowledge, attitudes and behaviours, we designed interactive pre- and post-surveys, which triggered airtime incentives once completed. Surprisingly, the results showed little impact on knowledge, with pre-scores registering higher than anticipated; however, we saw a notable decrease in the perceived barriers to adopting these new behaviours and a positive impact on self-efficacy and confidence.
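For readers new to this kind of pre/post design, the comparison can be sketched very simply: pair each respondent’s pre- and post-survey score on a given construct and look at the average change. The sketch below uses entirely made-up scores and construct names for illustration; it is not Every1Mobile’s actual data or analysis.

```python
from statistics import mean

# Hypothetical paired survey scores (one (pre, post) pair per respondent),
# grouped by construct, loosely mirroring a pre/post design that measures
# knowledge, perceived barriers and self-efficacy. All numbers are invented.
scores = {
    "knowledge":          [(4, 4), (5, 5), (4, 5), (5, 5)],
    "perceived_barriers": [(4, 2), (5, 3), (3, 2), (4, 3)],  # lower is better
    "self_efficacy":      [(2, 4), (3, 4), (2, 3), (3, 5)],
}

def mean_change(pairs):
    """Average post-minus-pre difference across respondents."""
    return mean(post - pre for pre, post in pairs)

for construct, pairs in scores.items():
    print(f"{construct}: mean change = {mean_change(pairs):+.2f}")
```

With invented numbers like these, a pattern similar to the one described above would show up as a near-zero change on knowledge (pre-scores already high), a negative change on perceived barriers, and a positive change on self-efficacy. A real analysis would also test whether such differences are statistically meaningful.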

WhatsApp can inform the programme design

Your audience can become collaborators and help you design your programme. We used the insights gathered through the U Afya WhatsApp pilot to create a brand new online community platform that offers young mothers in Nairobi a series of online courses called Tunza Class.

We built the community platform based on the three key life stages identified within the motherhood journey, namely pregnancy and birth, newborn care, and mothers with children under five. The platform includes an interactive space called Sistaz Corner where users can share their views, experiences and advice with other mothers in their community.

With a range of SBCC techniques built into the platform, users can get peer support anonymously, and engage field experts on key health issues. Our Responsible Social Network functionality allows users to make friends, build their profile and show off their community activity which further drives overall user engagement on the site. The Every1Mobile platform is built in a way that enables users to access the online community using the most basic web-enabled feature phone, at the lowest cost for our end user, with fast loading and minimal data usage.

Following the site launch in early August 2018, we are now continuing to use our WhatsApp groups so we can gather real-time feedback on site navigation, design, functionality, labelling and content, in order to apply iterative design and ensure the mobile platform is exactly what our users want it to be.

 

September 5th: MERL Tech DC pre-workshops

This year at MERL Tech DC, in addition to the regular conference on September 6th and 7th, we’re offering two full-day, in-depth workshops on September 5th. Join us for a deeper look into the possibilities and pitfalls of Blockchain for MERL and Big Data for Evaluation!

What can Blockchain offer MERL? with Shailee Adinolfi, Michael Cooper, and Val Gandhi, co-hosted by Chemonics International, 1717 H St. NW, Washington, DC 20016. 

Tired of the blockchain hype, but still curious about how it will impact MERL? Join us for a full-day workshop with development practitioners who have implemented blockchain solutions with social impact goals in various countries. Gain knowledge of the technical promises and drawbacks of blockchain technology as it stands today, and brainstorm how it may be able to solve some of the challenges in MERL in the future. Learn about ethical design principles for blockchain and how to engage with blockchain service providers to ensure that your ideas and programs are realistic and avoid harm. See the agenda here.

Register now to claim a spot at the blockchain and MERL pre-workshop!

Big Data and Evaluation with Michael Bamberger, Kerry Bruce and Peter York, co-hosted by the Independent Evaluation Group at the World Bank – “I” Building, Room: I-1-200, 1850 I St NW, Washington, DC 20006

Join us for a one-day, in-depth workshop on big data and evaluation where you’ll get an introduction to Big Data for Evaluators. We’ll provide an overview of applications of big data in international development evaluation and discuss ways that evaluators are (or could be) using big data and big data analytics in their work. You’ll also learn about the various tools of data science and their potential applications, and run through specific cases where evaluators have employed big data as one of their methods. We will also address the important question of why many evaluators have been slower and more reluctant to incorporate big data into their work than their colleagues in research, program planning, management and other areas such as emergency relief programs. Lastly, we’ll discuss the ethics of using big data in our work. See the agenda here!

Register now to claim a spot at the Big Data and Evaluation pre-workshop!

You can also register here for the main conference on September 6-7, 2018!