MERL Tech News

Report back on MERL Tech DC

Day 1, MERL Tech DC 2018. Photo by Christopher Neu.

The MERL Tech Conference explores the intersection of Monitoring, Evaluation, Research and Learning (MERL) and technology. The main goals of “MERL Tech” as an initiative are to:

  • Transform and modernize MERL in an intentionally responsible and inclusive way
  • Promote ethical and appropriate use of tech (for MERL and more broadly)
  • Encourage diversity & inclusion in the sector & its approaches
  • Improve development, tech, data & MERL literacy
  • Build/strengthen community, convene, help people talk to each other
  • Help people find and use evidence & good practices
  • Provide a platform for hard and honest talks about MERL and tech and the wider sector
  • Spot trends and future-scope for the sector

Our fifth MERL Tech DC conference took place on September 6-7, 2018, with a day of pre-workshops on September 5th. Some 300 people from 160 organizations joined us for the two days, and another 70 people attended the pre-workshops.

Attendees came from a wide diversity of professions and disciplines:

What professional backgrounds did we see at MERL Tech DC in 2018?

An unofficial estimate of speaker racial and gender diversity is here.

Gender balance on panels

At this year’s conference, we focused on 5 themes (See the full agenda here):

  1. Building bridges, connections, community, and capacity
  2. Sharing experiences, examples, challenges, and good practice
  3. Strengthening the evidence base on MERL Tech and ICT4D approaches
  4. Facing our challenges and shortcomings
  5. Exploring the future of MERL

As always, sessions were related to: technology for MERL, MERL of ICT4D and Digital Development programs, MERL of MERL Tech, digital data for adaptive decisions/management, ethical and responsible data approaches, and cross-disciplinary community building.

Big Data and Evaluation Session. Photo by Christopher Neu.

Sessions included plenaries, lightning talks and breakout sessions. You can find a list of sessions here, including any presentations that have been shared by speakers and session leads. (Go to the agenda and click on the session of interest. If we have received a copy of the presentation, there will be a link to it in the session description).

One topic that we explored more in-depth over the two days was the need to get better at measuring ourselves and understanding both the impact of technology on MERL (the MERL of MERL Tech) and the impact of technology overall on development and societies.

As Anahi Ayala Iacucci said in her opening talk — “let’s think less about what technology can do for development, and more about what technology does to development.” As another person put it, “We assume that access to tech is a good thing and immediately helps development outcomes — but do we have evidence of that?”

Feedback from participants

Some 17.5% of participants filled out our post-conference feedback survey, and 70% of them rated their experience either “awesome” or “good”. Another 7% of participants rated individual sessions through the “Sched” app, with an average session satisfaction rating of 8.8 out of 10.

Topics that survey respondents suggested for next time include: more basic tracks and more advanced tracks, more sessions relating to ethics and responsible data, and a greater focus on accountability in the sector. Read the full Feedback Report here!

What’s next? State of the Field Research!

In order to arrive at an updated sense of where the field of technology-enabled MERL is, a small team of us is planning to conduct some research over the next year. At our opening session, we did a little crowdsourcing to gather input and ideas about what the most pressing questions are for the “MERL Tech” sector.

We’ll be keeping you informed here on the blog about this research and welcome any further input or support! We’ll also be sharing more about individual sessions here.

MERL on the Money: Are we getting funding for data right?

By Paige Kirby, Senior Policy Advisor at Development Gateway

Time for a MERL pop quiz: Out of US $142.6 billion spent in ODA each year, how much goes to M&E?

A)  $14.1-17.3 billion
B)  $8.6-10 billion
C)  $2.9-4.3 billion

It turns out, the correct answer is C. An average of only $2.9-$4.3 billion — or just 2-3% of all ODA spending — goes towards M&E.
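The quiz arithmetic is easy to verify: both bounds of the range work out to roughly 2-3% of the $142.6 billion total.

```python
# Quick check of the quiz arithmetic: what share of annual ODA goes to M&E?
oda_total = 142.6           # US$ billions of ODA per year (figure from the post)
me_low, me_high = 2.9, 4.3  # US$ billions estimated to go to M&E

low_pct = me_low / oda_total * 100
high_pct = me_high / oda_total * 100
print(f"{low_pct:.1f}% to {high_pct:.1f}%")  # prints "2.0% to 3.0%"
```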

That’s all we get. And despite the growing breadth of logframes and depth of donor reporting requirements, our MERL budgets are not likely to suddenly scale up.

So, how can we use our drop in the bucket better, to get more results for the same amount of money?

At Development Gateway, we’ve been doing some thinking and applied research on this topic, and have three key recommendations for making the most of MERL funding.

Teamwork

Image Credit: Kjetil Korslien CC BY NC 2.0

When seeking information for a project baseline, midline, endline, or anything in between, it has become second nature to budget for collecting (or commissioning) primary data ourselves.

Really, it would be more cost- and time-effective for all involved if we got better at asking peers in the space for already-existing reports or datasets. This is also an area where our donors – particularly those with large country portfolios – could help with introductions and matchmaking.

Consider the Public Option

Image Credit: Development Gateway

And speaking of donors, a second point: why are we implementers responsible for collecting MERL-relevant data in the first place?

If partner governments and donors invested in country statistical and administrative data systems, we implementers would not have such incentive or need to conduct one-off data collection.

For example, one DFID Country Office we worked with noted that a lack of solid population and demographic data limited their ability to monitor all DFID country programming. As a result, DFID decided to co-fund the country’s first census in 30 years – which benefited DFID and non-DFID programs.

The term “country systems” can sound a bit esoteric, pretty OECD-like – but it really can be a cost-effective public good, if properly resourced by governments (or donor agencies), and made available.

Flip the Paradigm

Image Credit: Rafael J M Souza CC BY 2.0

And finally, a third way to get more bang for our buck is – ready or not – Results Based Financing, or RBF. RBF is coming (and, for folks in health, it’s probably arrived). In an RBF program, payment is made only when pre-determined results have been achieved and verified.

But another way to think about RBF is as an extreme paradigm shift of putting M&E first in program design. RBF may be the shake-up we need, in order to move from monitoring what already happened, to monitoring events in real-time. And in some cases – based on evidence from World Bank and other programming – RBF can also incentivize data sharing and investment in country systems.
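The core RBF mechanic described above, disbursing only against verified, pre-agreed results, can be sketched in a few lines. Everything below (indicator names, targets, unit prices) is hypothetical and not any agency's actual formula:

```python
# Illustrative sketch of a results-based financing (RBF) disbursement rule:
# pay only for verified results, and never beyond the pre-agreed target.

def rbf_disbursement(targets, verified_results, price_per_unit):
    """Return the payment due, given verified results against targets."""
    payment = 0.0
    for indicator, target in targets.items():
        achieved = verified_results.get(indicator, 0)
        payment += min(achieved, target) * price_per_unit[indicator]
    return payment

# Hypothetical program: pay per verified facility delivery and child vaccinated
targets = {"facility_deliveries": 1000, "children_vaccinated": 5000}
verified = {"facility_deliveries": 850, "children_vaccinated": 5200}
prices = {"facility_deliveries": 20.0, "children_vaccinated": 2.5}

print(rbf_disbursement(targets, verified, prices))  # 850*20 + 5000*2.5 = 29500.0
```

Note how the cap on `children_vaccinated` means over-achievement is not paid for; the verification step, not the implementer's own report, triggers payment.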

Ultimately, the goal of MERL should be using data to improve decisions today. Through better sharing, systems thinking, and (maybe) a paradigm shake-up, we stand to gain a lot more mileage with our 3%.

 

The Art and Necessity of Building a Data Culture

By Ben Mann, Policy & engineering nerd. Technology & data evangelist. Working for @DAIGlobal. The original appears here.

We live in the digital era. And the digital era is built on data. Everyone in your business, organization, agency, family, and friend group needs data. We don’t always realize it. Some won’t acknowledge it. But everyone needs and uses data every day to make decisions. One of my colleagues constantly reminds me that we are all data junkies who need that fix to “get sh*t done.”

 

So we all agree that we need data, right? Right.

Now comes the hard part: how do we actually use data? And not just to inform what we should buy on Amazon or who we should follow on Twitter, but how do we do the impossible (and overused buzzword of the century) and “make data-driven decisions?” As I often hear from frustrated friends at conferences or over coffee, there is a collectively identified need for improving data literacy and, at the same time, collective angst over actually improving the who/what/where/when/why/how of data at our companies or organizations.

The short answer: We need to build our own data culture.

It needs to be inclusive and participatory for all levels of data users. It needs to leverage appropriate technology that is paired with responsible processes. It needs champions and data evangelists. It needs to be deep and wide and complex and welcoming, with no stupid questions.

The long answer: We need to build our own data cultures. And it’s going to be hard. And expensive. And it’s an unreachable destination.

I was blessed to hear Shash Hegde (Microsoft Data Guru extraordinaire) talk about modern data strategies for organizations. He lays out 6 core elements of a data strategy that any team needs to address to build a culture that is data-friendly and data-engaged:

Vision: Does your organization know their current state of data? Is there a vision for how it can be used and put to work?

People: Maybe more important than anything else on this list, people matter. They are the core of your user group, the ones who will generate most of your data, manage the systems, and consume the insights. Do you know their habits, needs, and desires?

Structure: Not to be confused with stars or snowflakes — we mean the structure of your organization. How business units are formed, who manages what, who controls what resources, and how the pieces fit together.

Process: As a systems thinking person, I know that there is always a process in play. Even the absence of process is a process in and of itself. Knowing the process and workflow of your data is critical to the flow and use of data in your culture.

Rules: They govern us. They set boundaries and guiding rails, defining our workspaces and playing fields.

Tools+Tech: We almost always start here, but I’d argue it is the least important. With the cloud and modern data platforms, with a sprinkling of AI and ML, it is rarely the bottleneck anymore. It’s important, but should never be the priority.

Building a data culture is a journey. It can be endless. You may never achieve it. And unlike the Merry Pranksters, we need a destination to drive towards in building data literacy, use, and acceptance. And if anyone tells you that they can do it cheaply or for free, please show them the exit ASAP.

Starting your data adventure

At MERL Tech DC, we recently hosted a panel on organizational data literacy and our desperate need for more of it. Experts (smarter than me) weighed in on how the heck we get ourselves, our teams, and our companies onto the path to data literacy and a data-loving culture.

Three tangible things we agreed on:

💪🏼Be the champion.

Because someone has to, why not you?

👩🏾‍💼Get a senior sponsor.

Unless you are the CEO, you need someone with executive level weight behind you. Trust us (& learn from our own failures).

🧗🏽‍♂️Keep marching on. And invite everyone to join you.

You will face obstacles. You’ll face failures. You may feel like you’re alone. But helping lead organizational change is a rewarding experience — especially with something as awesome as data. It’s a journey everyone should be on and I encourage you to bring along as many coworkers/coconspirators/collaborators as possible. Preferably everyone.

So don’t wait any longer. Start your adventure in your organization today!

Integrating big data into program evaluation: An invitation to participate in a short survey

As we all know, big data and data science are becoming increasingly important in all aspects of our lives. There is similarly rapid growth in the application of big data to the design and implementation of development programs. Examples range from the use of satellite images and remote sensors in emergency relief and the identification of poverty hotspots, to the use of mobile phones to track migration and estimate changes in income (by tracking airtime purchases), social media analysis to track sentiment and predict increases in ethnic tension, and the use of smartphones and Internet of Things (IoT) devices to monitor health through biometric indicators.
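One of the simpler applications mentioned here, monitoring social media for signs of rising tension, can be illustrated with a toy sketch. The keyword list and posts below are invented, and a real system would use trained language models rather than keyword matching:

```python
# Toy illustration of keyword-based tension monitoring on social media posts.
# The lexicon is hypothetical; production systems use trained classifiers.
TENSION_WORDS = {"clash", "riot", "attack", "protest"}

def tension_score(posts):
    """Share of posts containing at least one tension-related keyword."""
    if not posts:
        return 0.0
    flagged = sum(1 for p in posts if TENSION_WORDS & set(p.lower().split()))
    return flagged / len(posts)

posts = [
    "market day went well",
    "reports of a clash near the border",
    "new clinic opened today",
    "protest planned for friday",
]
print(tension_score(posts))  # 0.5 (2 of 4 posts flagged)
```

Tracked over time, even a crude score like this hints at how analysts look for upward trends that might warrant closer attention.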

Despite the rapidly increasing role of big data in development programs, there is speculation that evaluators have been slower to adopt big data than have colleagues working in other areas of development programs. Some of the evidence for the slow take-up of big data by evaluators is summarized in “The future of development evaluation in the age of big data”.  However, there is currently very limited empirical evidence to test these concerns.

To try to fill this gap, my colleagues Rick Davies and Linda Raftree and I would like to invite those of you who are interested in big data and/or the future of evaluation to complete the attached survey. This survey, which takes about 10 minutes to complete, asks evaluators to report on the data collection and data analysis techniques they use in the evaluations they design, manage or analyze, while at the same time asking data scientists how familiar they are with evaluation tools and techniques.

The survey was originally designed to obtain feedback from participants in the MERL Tech conferences on “Exploring the Role of Technology in Monitoring, Evaluation, Research and Learning in Development” that are held annually in London and Washington, DC, but we would now like to broaden the focus to include a wider range of evaluators and data scientists.

One of the ways in which the findings will be used is to help build bridges between evaluators and data scientists by designing integrated training programs for both professions that introduce the tools and techniques of both conventional evaluation practice and data science, and show how they can be combined to strengthen both evaluations and data science research. “Building bridges between evaluators and big data analysts” summarizes some of the elements of a strategy to bring the two fields closer together.

The findings of the survey will be shared through this and other sites, and we hope this will stimulate a follow-up discussion. Thank you for your cooperation and we hope that the survey and the follow-up discussions will provide you with new ways of thinking about the present and potential role of big data and data science in program evaluation.

Here’s the link to the survey – please take a few minutes to fill it out!

You can also join me, Kerry Bruce and Pete York on September 5th for a full day workshop on Big Data and Evaluation in Washington DC.

MERL Tech Jozi Feedback Report

MERL Tech Jozi took place on August 1-2, 2018. Below are some highlights from the post-conference survey that was sent to participants requesting feedback on their MERL Tech Jozi experience. Thirty-four percent of our attendees filled out the post-conference survey via Google Forms.

Overall Experience

Here’s how survey participants rated their overall experience:

Participants’ favorite sessions

The sessions most frequently mentioned as favorites, and some of the reasons why, included:

Conducting a Baseline of the ICT Ecosystem – Genesis Analytics and DIAL

  • “…interactive session and felt practical. I could easily associate with what the team was saying. I really hope these learnings make it to implementation and start informing decision-making around funding! The presenters were also great.”
  • “…interesting and engaging, findings were really relevant to the space.”
  • “…shared lessons and insights resonated with my own professional experience. The discussions were fruitful and directly relevant to my line of work.”
  • “…incredibly useful.”
  • “The study confirmed a lot of my perceptions as an IT developer in the MERL space, but now I have some more solid backup. I will use this in my webinars and consulting on ‘IT for M&E’.”

Datafication Discrimination – Media Monitoring Africa, Open Data Durban, Amandla.mobi and Oxfam South Africa

  • “Linked both MERL and Tech to programme and focussed on the impact of MERL Tech in terms of sustainable, inclusive development.”
  • “Great panel, very knowledgeable, something different to the usual M&E. Interactive and diverse.”
  • “…probably most critical and informative in terms of understanding where the sector was at… the varied level of information across the audience and the panel was fascinating – if slightly worrying about how unclear we are as an M&E sector.”

When WhatsApp Becomes About More Than Messaging – Genesis Analytics, Every1Mobile and Praekelt.org

  • “As an evaluator, I have never thought of using WhatsApp as a way of communicating with potential beneficiaries. It made me think about different ways of getting in touch with beneficiaries of a programme, and getting them to participate in a survey.”
  • “The different case studies included examples, great media, good Q&A session at the end, and I learnt new things. WhatsApp is only just reaching its potential in mHealth, so it was good to learn real-life lessons.”
  • “Hearing about the opportunities and challenges of applying a tool in different contexts and for different purposes gave good all-around insights.”

Social Network Analysis – Data Innovators and Praekelt.org

  • “I was already very familiar with SNA but had not had the opportunity to use it for a couple of years. Hearing this presentation with examples of how others have used it really inspired me, and I’ve since sketched out a new project using SNA on data we’re currently gathering for a new product! I came away feeling really inspired and excited about doing the analysis.”

Least favorite sessions

Where participants rated sessions as their “least favorite,” it was because:

  • The link to technology was not clear
  • It felt like a sales pitch
  • It felt extractive
  • Speaker went on too long
  • Views on MERL or Tech seemed old fashioned
Topics that need more focus in the future

Unpack the various parts of “M” “E” “R” “L”

  • Technology across MERL, not just monitoring. There was a lot of technology for data collection & tracking but little for ERL in MERL
  • More evaluation?
  • The focus was very much on evaluation (from the sessions I attended) and I feel like we did not talk about the monitoring, research and learning so much. This is huge for overall programme implementation and continuously learning from our data. Next time, I would like to talk a bit more about how organisations are actually USING data day-to-day to make decisions (monitoring) and learning from it to adapt programmes.
  • The R of MERL is hardly discussed at all. Target this for the next MERL Tech.

New digital approaches / data science

  • AI and how it can introduce biases, machine learning, Python
  • A data science-y stream could open new channels of communication and collaboration

Systems and interoperability

  • Technology for data management between organizations and teams.
  • Integrations between platforms.
  • Public health and education: how do we discuss and bring more attention to the various systems out there, and ensure interoperability and systems that support the long-term visions of countries?
  • Different types of MERL systems. We focused a lot on data collection systems, but there is a range of monitoring systems that programme managers can use to make decisions.

Scale and sustainability

  • How to engage and educate governments on digital data collection systems.
  • The debate on open source: in the development sector it is pushed as the holy grail, whereas most other software worldwide is proprietary for a reason (safety, maintenance, continued support, custom solutions), and open source doesn’t mean free.
  • Business opportunities. MERL as a business tool. How MERL Tech has proved ROI in business and real market settings, even if those settings were in the NGO/NPO space. What is the business case behind MERL Tech and MERL Tech developments?
Ah ha! Moments

Learning about technology / tech approaches

  • I found the design workshops enlightening, and did not, as an evaluator, realise how much time techies put into user testing.
  • I am a tech dinosaur – so everything I learned about a new technology and how it can be applied in evaluation was an ‘aha!’

New learning and skills

  • The SNA [social network analysis] inspiration that struck me was my big takeaway! I can’t wait to get back to the office and start working on it.
  • Really enjoyed learning about WhatsApp for SBCC.
  • The qualitative difference in engagement, structure, analysis and resource need between communicating via SMS versus IM. (And realising again how old school I am for a tech person!)

Data privacy, security, ethics

  • Ah ha moment was around how we could improve handling data
  • Data security
  • Our sector (including me) doesn’t really understand ‘big data,’ how it can discriminate, and what that might mean to our programmes.

Talking about failure

  • The fail fest was wonderful. We all theoretically know that it’s good to be honest about failure and to share what that was like, but this took honest reflection to a whole new level and set the tone for Day 2.

I’m not alone!

  • The challenges I am facing with introducing tech for MERL in my organisations aren’t unique to me.
  • There are other MERL Tech practitioners with a journalism/media background! This is exciting and makes me feel I am in the right place. The industry seems to want to gatekeep (academia, rigorous training), so this is interesting to consider going forward, but it also excites me to challenge this through mentorship opportunities and opening the space to others like me who were given a chance and gained experience along the way. Also had many Aha moments for using WhatsApp and its highly engaging format.
  • Learning that many other practitioners support learning on your own.
  • There are people locally to connect with and learn from.
Recommendations for future MERL Tech events

More of most everything…

  • More technical sessions
  • More panel discussions
  • More workshops
  • More in-depth sessions!
  • More time for socializing and guided networking like the exercise with the coloured stickers on Day 1
  • More NGOs involved, especially small NGOs.
  • More and better marketing to attract more people
  • More demo tables, or have new people set up demo tables each day
  • More engagement: is there a way that MERL Tech could be used further to shape, drive and promote the agenda of using technology for better MERL? Maybe through a joint session where we identify important future topics to focus on – something that gives those who want it the opportunity to further engage with and contribute to MERL Tech and its agenda-setting.
  • The conversations generally were very ‘intellectual’. Too many conversations revolved around how the world had to move on to better appreciate the value of MERL, rather than how MERL was adapted, used and applied in the real world. [It was] too dominated by MERL early adopters and proponents, rather than MERL customers… Or am I missing the point, which may be that MERL (in South Africa) is still a subculture for academic minded researchers. Hope not.
  • More and better wine!
Kudos
  • For some reason this conference – as opposed to so many other conferences I have been to – actually worked. People were enthused, they were kind, willing to talk – and best of all by day 2 they hadn’t dropped out like flies (which is such an issue with conferences!). So whatever you did do it again next time!
  • Very interactive and group-focused! This was well balanced with informative sessions. I think creative group work is good but it wouldn’t be good to have the whole conference like this. However, this was the perfect amount of it and it was well led and organized.
  • I really had a great time at this conference. The sessions were really interesting and it was awesome to get so many different people in the same place to discuss such interesting topics and issues. Lunch was also really delicious!
  • Loved the lightning talks! Also the breakaway sessions were great. The coffee was amazing – thank you! Fail fest is such a cool concept, and we’re looking to introduce this kind of thinking into our own organisation more. We all struggle with the same things, and it was good to be around like-minded professionals.
  • I really appreciated the fairly “waste-free” conference with no plastic bottles, unnecessary programmes and other things that I’ll just throw away afterwards. This was a highlight for me!
  • I really enjoyed this conference. Firstly the food was amazing (always a win). But most of all the size was perfect. It was really clever the way you forced us to sit in small lunch sizes and that way by the end of the conference I really had the confidence to speak to people. Linda was a great organiser – enthusiastic and punctual.
Who attended MERL Tech Jozi?

Who presented at MERL Tech Jozi?

 

If you’d like to experience MERL Tech, sign up now to attend in Washington, DC on September 5-7, 2018!

Using WhatsApp to improve family health

Guest post from ​Yolandi Janse van Rensburg, Head of Content & Communities at Every1Mobile. This post first appeared here.

I recently gave a talk at the MERL Tech 2018 conference in Johannesburg about the effectiveness of WhatsApp as a communication channel to reach low-income communities in the urban slums of Nairobi, Kenya, and to understand their health behaviours and needs.

Mobile Economy Report 2018.

Communicating more effectively with a larger audience in hard-to-reach areas has never been easier. Instead of relying on paper questionnaires or instructing field workers to knock on doors, you can now communicate directly with your users, no matter where you are in the world.

With this in mind, some may choose to create a WhatsApp group, send a batch of questions, and wait for quality insights to stream in – but in reality, they receive little to no participation from their users.

Why, you ask? WhatsApp can be a useful tool to engage your users, but there are a few lessons we’ve learnt along the way to encourage high levels of participation and generate important insights.

Building trust comes first

Establishing a relationship with the communities you’re targeting can easily be overlooked. Between project deadlines, coordination and insight gathering, it can be easy to neglect forging a connection with our users, offering a window into our thinking, so they can learn more about who we are and what we’re trying to achieve. This is the first step in building trust and acquiring your users’ buy-in to your programme. This lies at the core of Every1Mobile’s programming. The relationship you build with your users can unlock honest feedback that is crucial to the success of your programme going forward.

In late 2017, Every1Mobile ran a 6-week WhatsApp pilot with young mothers and mothers-to-be in Kibera and Kawangware, Nairobi, to better understand their hygiene and nutrition practices in terms of handwashing and preparing a healthy breakfast for their families. The U Afya pilot kicked off with a series of on-the-ground breakfast clubs, where we invited community members to join. It was an opportunity for the mothers to meet us, as well as one another, which made them feel more comfortable participating in the WhatsApp groups.

Having our users meet beforehand and become acquainted with our local project team ensured that they felt confident enough to share honest feedback, talk amongst themselves and enjoy the WhatsApp chats. As a result, 60% of our users attended every WhatsApp session and 84% attended more than half of the sessions.

Design content using SBCC

At Every1Mobile, we do not simply create engaging copy; our content design is based on research into user behaviour, analytics and feedback, tailored with a human-centric approach to inspire creative content strategies and solutions that nurture an understanding of our users.

When we talk about content design, we mean taking a user need and presenting it in the best way possible. Applying content design principles means we do the hard work for the user. And the reward is communication that is simpler, clearer and faster for our communities.

For the U Afya pilot, we incorporated our partner Unilever’s behaviour change approach, namely the Five Levers for Change, to influence attitudes and behaviours, and improve family health and nutrition. The approach aims to create sustainable habits using social and behaviour change communication (SBCC) techniques like signposting, pledging, prompts and cues, and peer support. Each week covered a different topic, including pregnancy, a balanced diet, an affordable and healthy breakfast, breastfeeding, hygiene and weaning for infants.

Localisation means more than translating words

Low adult literacy in emerging markets can have a negative impact on the outcomes of your behaviour change campaigns. In Kenya, roughly 38.5% of the adult population is illiterate, with bottom-of-the-pyramid communities having little formal education. This means translating your content into a local language may not be enough.

To address this challenge for the U Afya pilot, our Content Designers worked closely with our in-country Community Managers to localise the WhatsApp scripts so that they were applicable to the daily lives of our users. We translated our WhatsApp scripts into Sheng, even though English and Kiswahili are the official languages of Kenya. Sheng is a local slang blend of English, Kiswahili and words from other ethnic languages. It is widely spoken in urban communities and has over 3,900 words, idioms and phrases. It’s a language that changes and evolves constantly, which means we needed a translator with street knowledge of urban life in Nairobi.

Beyond translating our scripts, we integrated real-life references applicable to our target audience. We worked with our project team to find out what the daily lives of the young mothers in Kibera and Kawangware looked like. What products are affordable and accessible? Do they have running water? What do they cook for their families and what time is supper served? Answers to these questions had a direct impact on our use of emojis, recipes and advice in our scripts. For example, we integrated local foods into the content like uji and mandazi for breakfast and indigenous vegetables including ndengu, ngwashi and nduma.

Can WhatsApp drive behaviour change?

The answer is ‘yes’: mobile has the potential to drive SBCC. We observed an interesting link between shifts in attitude and engagement, with increased self-reported adoption of new behaviours among women who actively posted during the WhatsApp sessions.

To measure the impact of our pilot on user knowledge, attitudes and behaviours, we designed interactive pre- and post-surveys, which triggered airtime incentives once completed. Surprisingly, the results showed little impact on knowledge, with pre-scores registering higher than anticipated. However, we saw a notable decrease in the perceived barriers to adopting these new behaviours and a positive impact on self-efficacy and confidence.
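The pre/post comparison described here boils down to a construct-level difference in mean scores. A minimal sketch, with invented numbers (the real U Afya scores are not published in this post):

```python
# Hedged sketch of a pre/post survey comparison: for each construct measured,
# compute the change in mean score between the baseline and endline surveys.
# All scores below are hypothetical 0-10 ratings from matched respondents.

def mean(xs):
    return sum(xs) / len(xs)

pre = {"knowledge": [8, 9, 8], "perceived_barriers": [7, 6, 8], "self_efficacy": [4, 5, 4]}
post = {"knowledge": [8, 9, 9], "perceived_barriers": [4, 3, 5], "self_efficacy": [7, 8, 7]}

for construct in pre:
    change = mean(post[construct]) - mean(pre[construct])
    print(construct, round(change, 2))
# e.g. knowledge +0.33, perceived_barriers -3.0, self_efficacy +3.0
```

Invented as they are, the numbers mirror the pattern reported: little movement on knowledge (high pre-scores leave little headroom), a drop in perceived barriers, and a rise in self-efficacy.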

WhatsApp can inform the programme design

Your audience can become collaborators and help you design your programme. We used the insights gathered through the U Afya WhatsApp pilot to create a brand new online community platform that offers young mothers in Nairobi a series of online courses called Tunza Class.

We built the community platform based on the three key life stages identified within the motherhood journey, namely pregnancy and birth, newborn care, and mothers with children under five. The platform includes an interactive space called Sistaz Corner where users can share their views, experiences and advice with other mothers in their community.

With a range of SBCC techniques built into the platform, users can get peer support anonymously, and engage field experts on key health issues. Our Responsible Social Network functionality allows users to make friends, build their profile and show off their community activity which further drives overall user engagement on the site. The Every1Mobile platform is built in a way that enables users to access the online community using the most basic web-enabled feature phone, at the lowest cost for our end user, with fast loading and minimal data usage.

Following the site launch in early August 2018, we are continuing to use our WhatsApp groups to gather real-time feedback on site navigation, design, functionality, labelling and content, so we can apply iterative design and ensure the mobile platform is exactly what our users want it to be.

 

September 5th: MERL Tech DC pre-workshops

This year at MERL Tech DC, in addition to the regular conference on September 6th and 7th, we’re offering two full-day, in-depth workshops on September 5th. Join us for a deeper look into the possibilities and pitfalls of Blockchain for MERL and Big Data for Evaluation!

What can Blockchain offer MERL? with Shailee Adinolfi, Michael Cooper, and Val Gandhi, co-hosted by Chemonics International, 1717 H St. NW, Washington, DC 20016. 

Tired of the blockchain hype, but still curious about how it will impact MERL? Join us for a full-day workshop with development practitioners who have implemented blockchain solutions with social impact goals in various countries. Gain knowledge of the technical promises and drawbacks of blockchain technology as it stands today, and brainstorm how it may be able to solve some of the challenges in MERL in the future. Learn about ethical design principles for blockchain and how to engage with blockchain service providers to ensure that your ideas and programs are realistic and avoid harm. See the agenda here.

Register now to claim a spot at the blockchain and MERL pre-workshop!

Big Data and Evaluation with Michael Bamberger, Kerry Bruce and Peter York, co-hosted by the Independent Evaluation Group at the World Bank – “I” Building, Room: I-1-200, 1850 I St NW, Washington, DC 20006

Join us for a one-day, in-depth workshop on big data and evaluation where you’ll get an introduction to big data for evaluators. We’ll provide an overview of applications of big data in international development evaluation and discuss ways that evaluators are (or could be) using big data and big data analytics in their work. You’ll also learn about the various tools of data science and their potential applications, and run through specific cases where evaluators have employed big data as one of their methods. We will also address the important question of why many evaluators have been slower and more reluctant to incorporate big data into their work than their colleagues in research, program planning, management and other areas such as emergency relief programs. Lastly, we’ll discuss the ethics of using big data in our work. See the agenda here!

Register now to claim a spot at the Big Data and Evaluation pre-workshop!

You can also register here for the main conference on September 6-7, 2018!

 

How MERL Tech Jozi helped me bridge my own data gap

Guest post from Praekelt.org. The original post appeared on August 15 here.

Our team had the opportunity to enjoy a range of talks at the first ever MERL Tech in Johannesburg. Here are some of their key learnings:

During “Designing the Next Generation of MERL Tech Software” by Mobenzi’s CEO Andi Friedman, we were challenged to apply design thinking techniques to critique both our own and our partners’ current projects. I have previously worked on an educational tool aimed at improving the quality of learning for students in a disadvantaged community in the Eastern Cape, South Africa. I learned that language barriers are a serious concern when it comes to effectively implementing a new tool.

We mapped out a visual representation of how to solve a communication issue that one of the partners faced in an educational programme implemented in rural Eastern Cape, which included drawing various shapes on paper. Our solution was to replace the posters with written instructions with clear visuals that the students were familiar with, inspired by the idea that visuals resonate with people more than words do.

-Perez Mnkile, Project Manager

Amy Green Presenting on Video Metrics

I really enjoyed the presentation on video metrics from Girl Effect’s Amy Green. She spoke to us about video engagement on Hara Huru Dara, a vlog series featuring social media influencers. What I found really interesting is how hard it is to measure impact or engagement. Different platforms (YouTube vs Facebook) have different definitions for various measurements (e.g. views) and also use a range of algorithms to reach these measurements. Her talk really helped me understand just how hard MERL can be in a digital age! As our projects expand into new technologies, I’ll definitely be more aware of how complicated seemingly simple metrics (for example, views on a video) may be.

-Jessica Manim, Project Manager

Get it right by getting it wrong: embracing failure as a tool for learning and improvement was a theme visible throughout the two-day MERL Tech conference. One session highlighting this theme was led by Annie Martin, a Research Associate at Akros, who explored challenges in offline data capture.

She referenced a project in Zambia that tracked participants of an HIV prevention program, highlighting some of the technical challenges the project faced along the way. The project involved equipping field workers with Android tablets and an application developed to capture data offline and sync it when connectivity was available. A number of bugs caused by insufficient user testing of the system, along with server hosting issues, meant that field workers were often unable to send data or create user IDs.

The lesson, which I believe we strive to include in our development processes, is to focus on iterative piloting, testing and learning before deployment. This doesn’t guarantee a bug-free system or service, but it does focus our attention on the needs, expectations and requirements of end users and stakeholders.

-Neville Tietz, Service Designer

Slide from Panel on WhatsApp and engagement

Sometimes we don’t fully understand the problems we are trying to solve. Siziwe Ngcwabe from the African Evidence Network gave the opening talk on evidence-based work, showing me the importance of fully understanding the problem we are solving and identifying the markers of success or failure before we start rolling out solutions. Once we have established all this, we can create effective solutions. Rachel Sibande from DIAL gave a talk on how her organisation is now using data from mobile network providers to anticipate how a disease outbreak will spread, based on the movement patterns of the network’s subscribers. Using this data they can advise ministries to run campaigns in certain areas and increase medical supplies in others. Together, the talks really showed me how much easier it is to create an effective solution, and to measure progress effectively, once you fully understand the problem.

-Katlego Maakane, Project Manager

I really enjoyed the panel discussion on Datafication Discrimination with William Bird, Director of Media Monitoring Africa; Richard Gevers, Director of Open Data Durban; and Koketso Moeti, Executive Director of amandla.mobi, moderated by Siphokazi Mthathi, Executive Director of Oxfam South Africa. The mass collection of data can potentially be used to further discriminate against communities, especially when they are not aware of what their data will be used for. For example, information about sexuality can be used to target individuals at a time when anti-discrimination laws are being rapidly reversed in many countries.

I also thought it was interesting how the projection models used in South African cities for population movement and for planning new residential developments and public infrastructure are flawed: government outsources the development of these models to the private sector, and different government departments often use different forecasts. Essentially, the various departments are all planning cities with different projections, further preventing the poorest people from accessing quality services and infrastructure.

For me this session really highlighted the responsibility we have when collecting data from vulnerable individuals in our projects, and the need to interrogate what we intend to use this data for. As part of our process, we must investigate how the data could potentially be exploited. We need to empower people to take control of the information they share and to make decisions in their best interest.

-Benjamin Vermeulen, Project Manager

Check out the agenda for MERL Tech DC!

MERL Tech DC is coming up quickly!

This year we’ll have two pre-workshops on September 5th: What Can Blockchain Offer MERL? (hosted by Chemonics) and Big Data and Evaluation (hosted by the World Bank).

On September 6-7, 2018, we’ll have our regular two days of lightning talks, break-out sessions, panels, Fail Fest, demo tables, and networking with folks from diverse sectors who all coincide at the intersection of MERL and Tech!

Registration is open – and we normally sell out, so get your tickets now while there is still space!

Take a peek at the agenda – we’re excited about it — and we hope you’ll join us!


MERL Tech Jozi: Highlights, Takeaways and To Dos

Last week 100 people gathered at JoziHub for MERL Tech Jozi — two days of sharing, learning and exploring what’s happening at the intersection of Monitoring, Evaluation, Research and Learning (MERL) and Tech.

This was our first MERL Tech event outside of Washington DC, New York or London, and it was really exciting to learn about the work that is happening in South Africa and nearby countries. The conference vibe was energetic, buzzy and friendly, with lots of opportunities to meet people and discuss this area of work.

Participants spanned backgrounds and types of institutions – one of the things that makes MERL Tech unique! Much of what we aim to do is to bridge gaps and encourage people from different approaches to talk to each other and learn from each other, and MERL Tech Jozi provided plenty of opportunity for that.

Sessions covered a range of topics, from practical, hands-on workshops on Excel, responsible data, and data visualization, to experience sharing on data quality, offline data capture, video measurement, and social network analysis, to big picture discussions on the ICT ecosystem, the future of evaluation, the fourth industrial revolution, and the need to enhance evaluator competencies when it comes to digital tools and new approaches.

Demo tables gave participants a chance to see what tools are out there and to chat with developers about their specific needs. Lightning Talks offered a glimpse into new approaches and reflections on the importance of designing with users and understanding the context in which these new approaches are used. And at the evening “Fail Fest” we heard about evaluation failures, challenges using mobile technology for evaluation, and sustainable tool selection.

Access the MERL Tech Jozi agenda with presentations here or all the presentations here.

3 Takeaways

One key takeaway for me was that there’s a gap between the ‘new school’ of younger, more tech-savvy MERL practitioners and the more established, older evaluation community. Some familiar tensions were present between those with years of experience in MERL and less expertise in tech and those who are newer to the MERL side yet highly proficient in tech-enabled approaches. The number of people who identify as having skills that span both areas is growing and will continue to do so.

It’s going to be important to continue to learn from one another and work together to bring our MERL work to the next level, both in terms of how we form MERL teams with the necessary expertise internally and how we engage with each other and interact as a whole sector. As one participant put it, we are not going to find all these magical skills in one person (the “MERL Tech Unicorn”), so we need to be cognizant of how we form teams that have the right variety of skills and experiences, including data management and data science where necessary.

It is critical that we all have a better understanding of the wider impacts of technologies, beyond our projects, programs, platforms and evaluations. If we don’t have a strong grip on how technology is affecting wider society, how will we understand how social change happens in increasingly digital contexts? How will we negotiate data privacy? How will we wrestle with corporate data use and the potential for government surveillance? If evaluator understanding of technology and the information society is low, how will evaluators offer relevant and meaningful insights? How do diversity, inclusion and bias manifest themselves in a tech-enabled world and in tech-enabled MERL, and what do evaluators need to know about that in order to ensure representation? How do we understand data in its newer forms and manifestations? How do we ensure ethical and sound approaches? We need all the various sectors that form part of the MERL Tech community to work together toward a better understanding of both the tangible and intangible impacts of technology in development work, evaluation, and wider society.

A second key takeaway is that we need to do a better job of documenting and evaluating the use of technology in development and in MERL (e.g., the MERL of ICT4D and the MERL of tech-enabled MERL). I learned so much from the practical presentations and experience sharing during MERL Tech Jozi. In many cases, the challenges and learning were very similar across projects and efforts. We need to find better ways of ensuring that this kind of learning is found, accessed, and put into practice when creating new initiatives. We also need to understand more about the power dynamics, negative incentives and other barriers that prevent us from using what we know.

As “MERL Tech”, we are planning to pull some resources and learning together over the next year or two, to trace the shifts in the space over the past 5 years, and to highlight some of the trends we are seeing for the future. (Please get in touch with me if you’d like to participate in this “MERL of MERL Tech” research with a case study, an academic paper, other related research, or as a key informant!)

A third takeaway, as highlighted by Victor Naidu from the South African Monitoring and Evaluation Association (SAMEA), is that we need to focus on developing the competencies that evaluators require for the near future. And we need to think about how the tech sector can better serve the MERL community. SAMEA has created a set of draft competencies for evaluators, but these are missing digital competencies. SAMEA would love your comments and thoughts on what digital competencies evaluators require. They would also like to see you as part of their community and at their next event! (More info on joining SAMEA).

What digital competencies should be added to this list of evaluator competencies? Please add your suggestions and comments here on the google doc.

MERL Tech will be collaborating more closely with SAMEA to include a “MERL Tech Track” at SAMEA’s 2019 conference, and we hope to be back at JoziHub again in 2020 with MERL Tech Jozi as its own separate event.

Be sure to follow us on Twitter or sign up (in the side bar) to receive MERL Tech news if you’d like to stay in the loop! And thanks to all our sponsors – Genesis Analytics, Praekelt.org, the Digital Impact Alliance (DIAL) and JoziHub!

MERL Tech DC is coming up on September 6-7, with pre-workshops on September 5 on Big Data and Evaluation and Blockchain and MERL! Register here.