
Report back on MERL Tech DC

Day 1, MERL Tech DC 2018. Photo by Christopher Neu.

The MERL Tech Conference explores the intersection of Monitoring, Evaluation, Research and Learning (MERL) and technology. The main goals of “MERL Tech” as an initiative are to:

  • Transform and modernize MERL in an intentionally responsible and inclusive way
  • Promote ethical and appropriate use of tech (for MERL and more broadly)
  • Encourage diversity & inclusion in the sector & its approaches
  • Improve development, tech, data & MERL literacy
  • Build/strengthen community, convene, help people talk to each other
  • Help people find and use evidence & good practices
  • Provide a platform for hard and honest talks about MERL and tech and the wider sector
  • Spot trends and future-scope for the sector

Our fifth MERL Tech DC conference took place on September 6-7, 2018, with a day of pre-workshops on September 5th. Some 300 people from 160 organizations joined us for the two days, and another 70 people attended the pre-workshops.

Attendees came from a wide diversity of professions and disciplines:

What professional backgrounds did we see at MERL Tech DC in 2018?

An unofficial estimate of speaker racial and gender diversity is here.

Gender balance on panels

At this year’s conference, we focused on five themes (see the full agenda here):

  1. Building bridges, connections, community, and capacity
  2. Sharing experiences, examples, challenges, and good practice
  3. Strengthening the evidence base on MERL Tech and ICT4D approaches
  4. Facing our challenges and shortcomings
  5. Exploring the future of MERL

As always, sessions were related to: technology for MERL; MERL of ICT4D and Digital Development programs; MERL of MERL Tech; digital data for adaptive decisions/management; ethical and responsible data approaches; and cross-disciplinary community building.

Big Data and Evaluation Session. Photo by Christopher Neu.

Sessions included plenaries, lightning talks and breakout sessions. You can find a list of sessions here, including any presentations that have been shared by speakers and session leads. (Go to the agenda and click on the session of interest. If we have received a copy of the presentation, there will be a link to it in the session description).

One topic that we explored more in-depth over the two days was the need to get better at measuring ourselves and understanding both the impact of technology on MERL (the MERL of MERL Tech) and the impact of technology overall on development and societies.

As Anahi Ayala Iacucci said in her opening talk — “let’s think less about what technology can do for development, and more about what technology does to development.” As another person put it, “We assume that access to tech is a good thing and immediately helps development outcomes — but do we have evidence of that?”

Feedback from participants

Some 17.5% of participants filled out our post-conference feedback survey, and 70% of them rated their experience either “awesome” or “good”. Another 7% of participants rated individual sessions through the “Sched” app, with an average session satisfaction rating of 8.8 out of 10.

Topics that survey respondents suggested for next time include: more basic tracks and more advanced tracks, more sessions relating to ethics and responsible data, and a greater focus on accountability in the sector. Read the full Feedback Report here!

What’s next? State of the Field Research!

In order to arrive at an updated sense of where the field of technology-enabled MERL is, a small team of us is planning to conduct some research over the next year. At our opening session, we did a little crowdsourcing to gather input and ideas about what the most pressing questions are for the “MERL Tech” sector.

We’ll be keeping you informed here on the blog about this research and welcome any further input or support! We’ll also be sharing more about individual sessions here.

MERL Tech Jozi Feedback Report

MERL Tech Jozi took place on August 1-2, 2018. Below are some highlights from the post-conference survey that was sent to participants requesting feedback on their MERL Tech Jozi experience. Thirty-four percent of our attendees filled out the post-conference survey via Google Forms.

Overall Experience

Here’s how survey participants rated their overall experience:

Participants’ favorite sessions

The sessions most frequently mentioned as favorites, and some of the reasons why, included:

Conducting a Baseline of the ICT Ecosystem – Genesis Analytics and DIAL

  • “…interactive session and felt practical. I could easily associate with what the team was saying. I really hope these learnings make it to implementation and start informing decision-making around funding! The presenters were also great.”
  • “…interesting and engaging, findings were really relevant to the space.”
  • “…shared lessons and insights resonated with my own professional experience. The discussions were fruitful and directly relevant to my line of work.”
  • “…incredibly useful.”
  • “The study confirmed a lot of my perceptions as an IT developer in the MERL space, but now I have some more solid backup. I will use this in my webinars and consulting on ‘IT for M&E’.”

Datafication Discrimination — Media Monitoring Africa, Open Data Durban, Amandla.mobi and Oxfam South Africa

  • “Linked both MERL and Tech to programme and focussed on the impact of MERL Tech in terms of sustainable, inclusive development.”
  • “Great panel, very knowledgeable, something different to the usual M&E. Interactive and diverse.”
  • “… probably most critical and informative in terms of understanding where the sector was at … the varied level of information across the audience and the panel was fascinating – if slightly worrying about how unclear we are as an M&E sector.”

When WhatsApp Becomes About More Than Messaging – Genesis Analytics, Every1Mobile and Praekelt.org

  • “As an evaluator, I have never thought of using WhatsApp as a way of communicating with potential beneficiaries. It made me think about different ways of getting in touch with beneficiaries of a programme, and getting them to participate in a survey.”
  • “The different case studies included examples, great media, good Q&A session at the end, and I learnt new things. WhatsApp is only just reaching its potential in mHealth so it was good to learn real-life lessons.”
  • “Hearing about the opportunities and challenges of applying a tool in different contexts and for different purposes gave good all-around insights.”

Social Network Analysis – Data Innovators and Praekelt.org

  • “I was already very familiar with SNA but had not had the opportunity to use it for a couple of years. Hearing this presentation with examples of how others have used it really inspired me and I’ve since sketched out a new project using SNA on data we’re currently gathering for a new product! I came away feeling really inspired and excited about doing the analysis.”

Least favorite sessions

Where participants rated sessions as their “least favorite,” it was because:

  • The link to technology was not clear
  • It felt like a sales pitch
  • It felt extractive
  • Speaker went on too long
  • Views on MERL or Tech seemed old fashioned
Topics that need more focus in the future

Unpack the various parts of “M” “E” “R” “L”

  • Technology across MERL, not just monitoring. There was a lot of technology for data collection & tracking but little for ERL in MERL
  • More evaluation?
  • The focus was very much on evaluation (from the sessions I attended) and I feel like we did not talk about the monitoring, research and learning so much. This is huge for overall programme implementation and continuously learning from our data. Next time, I would like to talk a bit more about how organisations are actually USING data day-to-day to make decisions (monitoring) and learning from it to adapt programmes.
  • The R of MERL is hardly discussed at all. Target this for the next MERL Tech.

New digital approaches / data science

  • AI and how it can introduce biases, machine learning, Python
  • A data science-y stream could open new channels of communication and collaboration

Systems and interoperability

  • Technology for data management between organizations and teams.
  • Integrations between platforms.
  • Public Health, Education. Think about how we discuss and bring more attention to the various systems out there, and how we ensure interoperability and systems that support the long-term visions of countries.
  • Different types of MERL systems. We focused a lot on data collection systems, but there is a range of monitoring systems that programme managers can use to make decisions.

Scale and sustainability

  • How to engage and educate governments on digital data collection systems.
  • The debate on open source: in the development sector it is pushed as the holy grail, whereas most other software worldwide is proprietary for a reason (safety, maintenance, continued support, custom solutions), and open source doesn’t mean free.
  • Business opportunities. MERL as a business tool. How MERL Tech has proved ROI in business and real market settings, even if those settings were in the NGO/NPO space. What is the business case behind MERL Tech and MERL Tech developments?
Ah ha! Moments

Learning about technology / tech approaches

  • I found the design workshops enlightening; as an evaluator, I did not realise how much time techies put into user testing.
  • I am a tech dinosaur – so everything I learned about a new technology and how it can be applied in evaluation was an ‘aha!’

New learning and skills

  • The SNA [social network analysis] inspiration that struck me was my big takeaway! I can’t wait to get back to the office and start working on it.
  • Really enjoyed learning about WhatsApp for SBCC.
  • The qualitative difference in engagement, structure, analysis and resource need between communicating via SMS versus IM. (And realising again how old school I am for a tech person!)

Data privacy, security, ethics

  • My ‘ah ha’ moment was around how we could improve data handling
  • Data security
  • Our sector (including me) doesn’t really understand ‘big data,’ how it can discriminate, and what that might mean to our programmes.

Talking about failure

  • The fail fest was wonderful. We all theoretically know that it’s good to be honest about failure and to share what that was like, but this took honest reflection to a whole new level and set the tone for Day 2.

I’m not alone!

  • The challenges I am facing with introducing tech for MERL in my organisations aren’t unique to me.
  • There are other MERL Tech practitioners with a journalism/media background! This is exciting and makes me feel I am in the right place. The industry seems to want to gatekeep (academia, rigorous training), so this is interesting to consider going forward, but it also excites me to challenge this through mentorship opportunities and opening the space to others like me who were given a chance and gained experience along the way. Also had many Aha moments for using WhatsApp and its highly engaging format.
  • Learning that many other practitioners support learning on your own.
  • There are people locally to connect with and learn from.
Recommendations for future MERL Tech events

More of most everything…

  • More technical sessions
  • More panel discussions
  • More workshops
  • More in-depth sessions!
  • More time for socializing and guided networking like the exercise with the coloured stickers on Day 1
  • More NGOs involved, especially small NGOs.
  • More and better marketing to attract more people
  • More demo tables, or have new people set up demo tables each day
  • More engagement: is there a way that MERL Tech could be used further to shape, drive and promote the agenda of using technology for better MERL? Maybe through a joint session where we identify important future topics to focus on? Just as something that gives those who want the opportunity to further engage with and contribute to MERL Tech and its agenda-setting?
  • The conversations generally were very ‘intellectual’. Too many conversations revolved around how the world had to move on to better appreciate the value of MERL, rather than how MERL was adapted, used and applied in the real world. [It was] too dominated by MERL early adopters and proponents, rather than MERL customers… Or am I missing the point, which may be that MERL (in South Africa) is still a subculture for academic minded researchers. Hope not.
  • More and better wine!
Kudos
  • For some reason this conference – as opposed to so many other conferences I have been to – actually worked. People were enthused, they were kind, willing to talk – and best of all by day 2 they hadn’t dropped out like flies (which is such an issue with conferences!). So whatever you did do it again next time!
  • Very interactive and group-focused! This was well balanced with informative sessions. I think creative group work is good but it wouldn’t be good to have the whole conference like this. However, this was the perfect amount of it and it was well led and organized.
  • I really had a great time at this conference. The sessions were really interesting and it was awesome to get so many different people in the same place to discuss such interesting topics and issues. Lunch was also really delicious.
  • Loved the lightning talks! Also the breakaway sessions were great. The coffee was amazing, thank you. Fail Fest is such a cool concept, and we’re looking to introduce this kind of thinking into our own organisation more – we all struggle with the same things, and it was good to be around like-minded professionals.
  • I really appreciated the fairly “waste-free” conference with no plastic bottles, unnecessary programmes and other things that I’ll just throw away afterwards. This was a highlight for me!
  • I really enjoyed this conference. Firstly the food was amazing (always a win). But most of all the size was perfect. It was really clever the way you forced us to sit in small lunch sizes and that way by the end of the conference I really had the confidence to speak to people. Linda was a great organiser – enthusiastic and punctual.
Who attended MERL Tech Jozi?

Who presented at MERL Tech Jozi?

 

If you’d like to experience MERL Tech, sign up now to attend in Washington, DC on September 5-7, 2018!

Using WhatsApp to improve family health

Guest post from Yolandi Janse van Rensburg, Head of Content & Communities at Every1Mobile. This post first appeared here.

I recently gave a talk at the MERL Tech 2018 conference in Johannesburg about the effectiveness of WhatsApp as a communication channel to reach low-income communities in the urban slums of Nairobi, Kenya, and to understand their health behaviours and needs.

Mobile Economy Report 2018

Communicating more effectively with a larger audience in hard-to-reach areas has never been easier. Instead of relying on paper questionnaires or instructing field workers to knock on doors, you can now communicate directly with your users, no matter where you are in the world.

With this in mind, some may choose to create a WhatsApp group, send a batch of questions and wait for quality insights to stream in, but in reality they receive little to no participation from their users.

Why, you ask? WhatsApp can be a useful tool to engage your users, but there are a few lessons we’ve learnt along the way about how to encourage high levels of participation and generate important insights.

Building trust comes first

Establishing a relationship with the communities you’re targeting can easily be overlooked. Between project deadlines, coordination and insight gathering, it can be easy to neglect forging a connection with our users, offering a window into our thinking, so they can learn more about who we are and what we’re trying to achieve. This is the first step in building trust and acquiring your users’ buy-in to your programme. This lies at the core of Every1Mobile’s programming. The relationship you build with your users can unlock honest feedback that is crucial to the success of your programme going forward.

In late 2017, Every1Mobile ran a 6-week WhatsApp pilot with young mothers and mothers-to-be in Kibera and Kawangware, Nairobi, to better understand their hygiene and nutrition practices in terms of handwashing and preparing a healthy breakfast for their families. The U Afya pilot kicked off with a series of on-the-ground breakfast clubs, to which we invited community members. It was an opportunity for the mothers to meet us, as well as one another, which made them feel more comfortable participating in the WhatsApp groups.

Having our users meet beforehand and become acquainted with our local project team ensured that they felt confident enough to share honest feedback, talk amongst themselves and enjoy the WhatsApp chats. As a result, 60% of our users attended every WhatsApp session and 84% attended more than half of the sessions.

Design content using SBCC

At Every1Mobile, we do not simply create engaging copy; our content design is based on research into user behaviour, analytics and feedback, tailored with a human-centric approach to inspire creative content strategies and solutions that nurture an understanding of our users.

When we talk about content design, we mean taking a user need and presenting it in the best way possible. Applying content design principles means we do the hard work for the user, and the reward is communication that is simpler, clearer and faster for our communities.

For the U Afya pilot, we incorporated our partner Unilever’s behaviour change approach, the Five Levers for Change, to influence attitudes and behaviours and improve family health and nutrition. The approach aims to create sustainable habits using social and behaviour change communication (SBCC) techniques like signposting, pledging, prompts and cues, and peer support. Each week covered a different topic, including pregnancy, a balanced diet, an affordable and healthy breakfast, breastfeeding, hygiene, and weaning for infants.

Localisation means more than translating words

Low adult literacy in emerging markets can have a negative impact on the outcomes of your behaviour change campaigns. In Kenya, roughly 38.5% of the adult population is illiterate, with bottom-of-the-pyramid communities having little formal education. This means translating your content into a local language may not be enough.

To address this challenge for the U Afya pilot, our Content Designers worked closely with our in-country Community Managers to localise the WhatsApp scripts so that they were applicable to the daily lives of our users. We translated our WhatsApp scripts into Sheng, even though English and Kiswahili are the official languages in Kenya. Sheng is a local slang that blends English, Kiswahili and words from other ethnic languages. It is widely spoken by urban communities and has over 3,900 words, idioms and phrases. It’s a language that changes and evolves constantly, which meant we needed a translator with street knowledge of urban life in Nairobi.

Beyond translating our scripts, we integrated real-life references applicable to our target audience. We worked with our project team to find out what the daily lives of the young mothers in Kibera and Kawangware looked like. What products are affordable and accessible? Do they have running water? What do they cook for their families and what time is supper served? Answers to these questions had a direct impact on our use of emojis, recipes and advice in our scripts. For example, we integrated local foods into the content like uji and mandazi for breakfast and indigenous vegetables including ndengu, ngwashi and nduma.

Can WhatsApp drive behaviour change?

The answer is ‘yes’: mobile has the potential to drive SBCC. We observed an interesting link between shifts in attitude and engagement, with increased self-reported assimilation of new behaviours among women who actively posted during the WhatsApp sessions.

To measure the impact of our pilot on user knowledge, attitudes and behaviours, we designed interactive pre- and post-surveys, which triggered airtime incentives once completed. Surprisingly, the results showed little impact on knowledge, with pre-scores registering higher than anticipated; however, we saw a notable decrease in perceived barriers to adopting these new behaviours and a positive impact on self-efficacy and confidence.

WhatsApp can inform the programme design

Your audience can become collaborators and help you design your programme. We used the insights gathered through the U Afya WhatsApp pilot to create a brand-new online community platform that offers young mothers in Nairobi a series of online courses called Tunza Class.

We built the community platform based on the three key life stages identified within the motherhood journey, namely pregnancy and birth, newborn care, and mothers with children under five. The platform includes an interactive space called Sistaz Corner where users can share their views, experiences and advice with other mothers in their community.

With a range of SBCC techniques built into the platform, users can get peer support anonymously, and engage field experts on key health issues. Our Responsible Social Network functionality allows users to make friends, build their profile and show off their community activity which further drives overall user engagement on the site. The Every1Mobile platform is built in a way that enables users to access the online community using the most basic web-enabled feature phone, at the lowest cost for our end user, with fast loading and minimal data usage.

Following the site launch in early August 2018, we are continuing to use our WhatsApp groups to gather real-time feedback on site navigation, design, functionality, labelling and content, in order to apply iterative design and ensure the mobile platform is exactly what our users want it to be.

 

Feedback Report from MERL Tech London 2018

MERL Tech London took place on March 19-20, 2018. Here are some highlights from session-level feedback and the post-conference survey on the overall MERL Tech London experience.

If you attended MERL Tech London, please get in touch if you have any further questions about the feedback report or if you would like us to send you detailed (anonymized) feedback about a session you led. Please also be sure to send us your blog posts & session summaries so that we can post them on MERL Tech News!

Background on the data

  • 54 participants (~27%) filled out the post-conference survey via Google Forms.
  • 59 (~30%) rated and/or commented on individual sessions via the Sched app. Participants chose from three ‘emoji’ options: a happy face 🙂, a neutral face 😐, and a sad face 🙁. Participants could also leave comments on individual sessions.
  • We received 616 session ratings/comments via Sched. Some people rated the majority of sessions they attended; others only rated 1-2 sessions.
  • Some reported that they did not feel comfortable rating sessions in the Sched app because they were unclear about whether session leads and the public could see the rating. In future, we will let participants know that only Sched administrators can see the identity of commenters and the ratings given to sessions.
  • We do not know whether there is overlap between those who rated sessions in Sched and those who gave feedback via Google Forms, because the Google Forms survey was anonymous.

Overall feedback

Here’s how survey participants rated the overall experience:

Breakout sessions – 137 ratings: 69% 🙂 30% 😐 and 13% 🙁

Responses were fairly consistent across both Sched ratings and Google Forms (the form asked people to identify their favorite session). Big data and data science sessions stand out with the highest number of favorable ratings and comments. General Data Protection Regulation (GDPR) and responsible data made an important showing, as did the session on participatory video in evaluation.

Sessions with favorable comments tended to include or combine elements of:

  • an engaging format
  • good planning and facilitation
  • strong levels of expertise
  • clear and understandable language and examples
  • and strategic use of case studies to point to a bigger picture that is replicable in other situations.

Below are the breakout sessions that received the most favorable ratings and comments overall. (Plenty of other sessions were also rated well but did not make the “top-top.”)

Be it resolved: In the near future, conventional evaluation and big data will be successfully integrated

  • “Brilliant session! Loved the format! Fantastic to have such experts taking part. Really appreciated the facilitation and that there was a time at the end for more open questions/discussion.”

Innovative Use of Theory-Based and Data Science Evaluation Approaches

  • “Most interesting talk of the day (maybe more for the dedicated evaluation practitioners), very practical and easy to understand and I’m really looking forward to hearing more about the results as the work progresses!”

Unpacking How Change Happened (or Didn’t): Participatory Video and Most Significant Change

  • “Right amount of explanation, and using case studies to illustrate points and respond to questions rather than just stand-alone case studies.”

GDPR – What Is It and What Do We Do About It?

  • “Great presentation starting off with some historical background, explaining with clarity how this new legislation is a rights-based approach and concluding on how for Oxfam this is not a compliance project but a modification in data culture. Amazing, innovative and the speaker knew his area very well.”

The Best of Both Worlds? Combining Data Science and Traditional M&E to Understand Impact

  • “I learned so much from this session and was completely inspired by the presenter and the content. Clear – well paced – honest – open – collaborative and packed with really good insight. Amazing.”

Big Data, Adaptive Management, and the Future of MERL

  • “Quite a mixed bag of presenters, with focus on different pieces of the overall topic. The speakers from Novometrics were particularly engaging and stimulated some good discussion.”

Blockchain: Getting Past the Hype and Considering its Value for MERL

  • “Great group with good facilitation. The open-ended question left lots of room for discussion without bias towards a particular outcome. Learned lots, and not just about blockchain.”

LEAP, and How to Bring Data to Life in Your Organization

  • “Really great session, highly interactive, rich in concepts clearly and convincingly explained. No questions were left unanswered. Very insightful suggestions shared between the presenters/facilitators and the audience. Should be on the agenda of the next MERL Tech Conference as well.”

Who Watches the Watchers? Good Practice for Ethical MERL(Tech)

  • “I came out with some really helpful material. Collaborative session and good workshop participants willing to share and mind map. Perhaps the lead facilitator could have been a bit more contextual. Not always clear. But our table session was really helpful and the output useful.”

The GDPR is coming! Now what?! Practical Steps to Help You Get Ready

  • “Good session. Appreciated the handouts….”

What could session leads improve on?

We also had a few sessions that were ranked closer to 😐 (somewhere around a 6 or 6.5 on a scale of 1-10). Why did participants rate some sessions lower?

  • “Felt like a product pitch”
  • “Title was misleading”
  • Participatory activity was unclear
  • Poor time management
  • “Case studies did not expand to learning for the sector – too much ‘this is what we did’ and not enough ‘this is what it means.’”
  • Poor facilitation/moderation
  • “Too unstructured, meandering”
  • Low energy
  • “Only a chat among panelists, very little time for Q&A. No space to engage”

Additionally, some sessions had participants with very diverse levels of expertise and varied backgrounds and expectations, which seemed to affect session ratings.

Lightning Talks – 182 ratings: 74% 🙂 22% 😐 4% 🙁

Lightning talks consistently get the highest ratings at MERL Tech, and this year was no exception. As one participant said, “my favorite sessions were the lightning talks because they gave a really quick overview of really concrete uses of technology in M&E work. This really helped in getting an overview of the type of activities and projects that were going on.”  Participants rated almost all the lightning talks positively.

Plenary sessions – 192 ratings: 77% 🙂 21% 😐 and 2% 🙁

Here we include: the welcome, discussions on MERL Tech priorities on Day 1, opening talks on both days, the summary of Day 1, the panel with donors, the closing ‘fishbowl’, and the Fail Fest.

Opening Talks:

  • People appreciated André Clarke’s stage-setting talk on Day 1. “Clear, accessible and thoughtful.” “Nice deck!”
  • Anahi Ayala Iacucci’s opening talk on Day 2 was a hit: “Great keynote! Anahi is very engaging but also her content was really rich. Useful that she used a lot of examples and provided a lot of history.” And “Like Anahi says ‘The question to ask is what does technology do _to_ development, rather than what can technology do _for_ development.'”

Deep Dive into Priorities for the Sector:

  • Most respondents enjoyed the self-directed conversations around the various topics.
  • “Great way to set the tone for the following sessions….” “Some really valuable and practical insights shared.” “Great group, very interesting discussion, good way to get to know a few people.”

Fail Fest:

  • The Fail Fest was enjoyed by virtually everyone. “Brilliantly honest! Well done for having created the space and thank you to those who shared so openly.” “Awesome! Anahi definitely stole the show. What an amazing way to share learning, so memorable. Again, one to steal….” “I thought this was a fun way to end the first day. All the presenters were really good and the session was well framed and facilitated by Wayan.”

Fishbowl:

  • There were mixed reactions to the “Fishbowl.”
  • “Great session and way to close the event!” “Fascinating – especially insights from Michael and Veronica.” “Not enough people volunteered to speak.” “Some speakers went on too long.”

Lunchtime Demos – 23 ratings: 52% 🙂 34% 😐 and 13% 🙁

We know that many MERL Tech participants are wary of being “sold” to. Feedback from past conferences has been that participants don’t like sales pitches disguised as breakout sessions and lightning talks. So, this year we experimented with the idea of lunchtime demo sessions. The idea was that these optional sessions would allow people with a specific interest in a tool or product to have dedicated time with the tool creators for a demo or Q&A. We hoped that doing demo sessions separately from breakout sessions would make the nature of the sessions clear. Judging from the feedback, we didn’t hit the mark. We’ll try to do better next time!

What went wrong?

  • Timing: “The schedule was too tight.” “Give more time to the lunch time demo sessions or change the format. I missed the Impact Mapper session on day 1 as there was insufficient time to eat, go for a comfort break and network. This is really disappointing. I suggest a dedicated hour in the programme on the first day to visit all the ICT provider stalls.”
  • Content: “Demo sessions were more like advertising sessions by the respective companies, while nicely titled as if they were to explore topical issues. Demo sessions were all promising the world to us, while we know how much challenge technology application faces in the real world. Overall, so many demo sessions within a short 2-day conference compromised the agenda.”
  • Framing and intent: “I don’t know that framing the lunch sessions as ‘product demos’ makes a ton of sense. Maybe force people to have real case studies or practical (hands-on) sessions, and make them regular sessions? Not sure.” “I think more care needs to be taken to frame the sessions run by the software companies with proper declarations of interest…. Sessions led by software reps were a little less transparent in that they pitched their product, but through some other topic that people would be interested in. I think that it would be wise to make it a DOI [declaration of interest] that is scripted, where people who have an interest declare their interest up front for every panel discussion at the beginning, even if they did a previous one. I think that way the rules would be a little clearer.”

General Comments

Because we bring together such a diverse group in terms of field, experience, focus and interest, expectations are varied, and we often see conflicting suggestions. Whereas some would like more MERL content, others want more Tech content. Whereas some learn a lot, others feel they have heard much of this before. Here are a few of the overall comments from Sched and the Google Form.

Who else should be at MERL Tech?

  • More donors “EU, DFID, MCC, ADB, WB, SIDA, DANIDA, MFA Finland”
  • “People working with governments in developing countries”
  • “People from the ‘field’. It was mentioned in one of the closing comments that the term ‘field’ is outdated and we are not sure what we mean anymore. Wrong. There couldn’t be a more striking difference in discussions during those two days between those with solid field experience and those lacking in it.”
  • “More Brits? There were a lot of Americans that came in from DC…”

Content that participants would like to see in the future

  • More framing: “An opening session that explains what MERL Tech is and all the different ways it’s being or can be used”
  • More specifics on how to integrate technology for specific purposes and for new purposes: “besides just allowing quicker and faster data collection and analysis”
  • More big data/data science: “Anything which combines data science, stats and qualitative research is really interesting for me and seems to be the direction a lot of organisations are going in.”
  • Less big data/data science: “The big data stuff was less relevant to me”
  • More MERL-related sessions: “I have a tech background, so I would personally have liked to have seen more MERL-related sessions.”
  • More tech-related sessions: “It was my first MERLTech, so enjoyed it. I felt that many of the presentations could have been more on-point with respect to the Tech side, rather than the MERL end (or better focus on the integration of the two).”
  • More “R” (Research): Institutional learning and research (evaluations as a subset of research).
  • More “L”: “Learning treated as a topic of its own. By this I mean the capture of tacit knowledge and good practice, and the use of this learning for adaptive management. Compared to my last MERL Tech, I felt this meeting better featured evaluation, or at least spoke of ‘E’ as its own independent letter. I would like to see this for ‘L.’”
  • More opportunities for smaller organisations to get best practice lessons.
  • More ethics discussions: “Consequences/whether or not we should be using personal data held by privately owned companies (like call details records from telecomms companies)” “The conceptual issues around the power dynamics and biases in data and tech ownership, collection, analysis and use and what it means for the development sector.”
  • Hands-on tutorials “for applying some of the methods people have used would be amazing, although may be beyond the remit of this conference.”
  • Coaching sessions: “one-on-ones to discuss challenges in setting up good M&E systems in smaller organisations – the questions we had, and the challenges we face did not feel like they would have been of relevance to the INGOs in the room.”

Some “Ah ha! Moments”

  • “The whole tech discussion needs to be framed around evaluation practice and theory – it seems to me that people come at this being somewhat data obsessed and driven but not starting from what we want to know and why that might be useful.”
  • “There is still quite a large gap between the data scientist and the M&E world – we really need to think more on how to bridge that gap. Despite the fact that it is recognized, I do feel that much of the tech stuff was ‘because we can’ and not because it is useful and answers a concrete problem. On the other hand, some of the tech was so complex that I also couldn’t assess whether it was really useful and what the possible risks could be.”
  • “I was surprised to see the scale of the impact GDPR is apparently making. Before the conference, I usually felt that most people didn’t have much of an interest in data privacy and responsible data.”
  • “That people were being honest and critical about tech!”
  • “That the tech world remains data hungry and data obsessed!”
  • “That this group is seriously confused about how tech and MERL can be used effectively as a general business practice.”
  • “This community is learning fast!”
  • “Hot topics like Big Data and Block Chain are only new tools, not silver bullets. Like RCTs a few years ago, we are starting to understand their best use and specific added value.”

Kudos

  • “A v useful conference for a growing sector. Well done!”
  • “Great opportunity for bringing together different sectors – sometimes it felt we were talking across tech, merl, and programming without much clarity of focus or common language but I suppose that shows the value of this space to discuss and work towards a common understanding and debate.”
  • “Small but meaningful to me – watching session leads attend other sessions and actively participating was great. We have such an overlap in purpose and in some cases almost no overlap in skillsets. Really felt like MERLTech was a community taking turns to learn from each other, which is pretty different from the other conferences I’ve been to, where the same people often present the same idea to a slightly different audience each year.”
  • “I loved the vibe. I’m coming to this late in my career but was made to feel welcome. I did not feel like an idiot. I found it so informative and some sessions were really inspiring. It will probably become an annual must-go-to event for me.”
  • “I was fully blown away. I haven’t learnt so much in a long time during a conference. The mix of types of sessions helps massively make the most of the knowledge in room, so keep up that format in the future.”
  • “I absolutely loved it. It felt so good to be with like-minded people who have similar concerns and values….”

Thanks again to everyone who filled out the feedback forms and rated their sessions. This really does help us to adapt and improve. We take your ideas and opinions seriously!

If you’d like to experience MERL Tech, please join us in Johannesburg August 1-2, 2018, or Washington, DC, September 6-7, 2018!  The call for session ideas for MERL Tech DC is open through April 30th – please submit yours now!

Buckets of data for MERL

by Linda Raftree, Independent Consultant and MERL Tech Organizer

It can be overwhelming to get your head around all the different kinds of data and the various approaches to collecting or finding data for development and humanitarian monitoring, evaluation, research and learning (MERL).

Though there are many ways of categorizing data, lately I find myself conceptually organizing data streams into four general buckets when thinking about MERL in the aid and development space:

  1. ‘Traditional’ data. How we’ve been doing things for(pretty much)ever. Researchers, evaluators and/or enumerators are in relative control of the process. They design a specific questionnaire or a data gathering process and go out and collect qualitative or quantitative data; they send out a survey and request feedback; they do focus group discussions or interviews; or they collect data on paper and eventually digitize the data for analysis and decision-making. Increasingly, we’re using digital tools for all of these processes, but they are still quite traditional approaches (and there is nothing wrong with traditional!).
  2. ‘Found’ data. The Internet, digital data and open data have made it much easier to find, share, and re-use datasets collected by others, whether internally in our own organizations, with partners or just in general. These tend to be datasets collected in traditional ways, such as government or agency data sets. In cases where the datasets are digitized, have proper descriptions and clear provenance, consent has been obtained for use/re-use, and care has been taken to de-identify them, they can eliminate the need to collect the same data over again. Data hubs are springing up that aim to collect and organize these data sets to make them easier to find and use.
  3. ‘Seamless’ data. Development and humanitarian agencies are increasingly using digital applications and platforms in their work — whether bespoke or commercially available ones. Data generated by users of these platforms can provide insights that help answer specific questions about their behaviors, and the data is not limited to quantitative data. This data is normally used to improve applications and platform experiences, interfaces, content, etc., but it can also provide clues into a host of other online and offline behaviors, including knowledge, attitudes, and practices. One cautionary note is that because this data is collected seamlessly, users of these tools and platforms may not realize that they are generating data or understand the degree to which their behaviors are being tracked and used for MERL purposes (even if they’ve checked “I agree” to the terms and conditions). This has big implications for privacy that organizations should think about, especially as new regulations are being developed, such as the EU’s General Data Protection Regulation (GDPR). The commercial sector is great at this type of data analysis, but the development sector is only just starting to get more sophisticated at it.
  4. ‘Big’ data. In addition to data generated ‘seamlessly’ by platforms and applications, there are also ‘big data’ and other data that exist on the Internet and can be ‘harvested’ if one only knows how. The term ‘big data’ describes the application of analytical techniques to search, aggregate, and cross-reference large data sets in order to develop intelligence and insights. (See this post for a good overview of big data and some of the associated challenges and concerns.) Data harvesting is a term used for the process of finding and turning ‘unstructured’ content (message boards, a webpage, a PDF file, Tweets, videos, comments) into ‘semi-structured’ data so that it can then be analyzed; a minimal sketch of this step follows this list. (Estimates are that 90 percent of the data on the Internet exists as unstructured content.) Currently, big data seems to be more apt for predictive modeling than for looking backward at how well a program performed or what impact it had. Development and humanitarian organizations (self included) are only just starting to better understand concepts around big data and how it might be used for MERL. (This is a useful primer.)
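
To make that ‘harvesting’ step concrete, here is a minimal sketch, assuming a hypothetical forum page: the URL, the ‘div.post’ selector and the ‘data-author’ attribute are placeholders that would differ for any real source. It uses the widely available requests and BeautifulSoup libraries to fetch and parse, and writes the result to a CSV for later analysis.

```python
# A minimal, hypothetical sketch of 'data harvesting': turning an
# unstructured web page into semi-structured records for analysis.
# Requires: pip install requests beautifulsoup4
import csv

import requests
from bs4 import BeautifulSoup

URL = "https://example.org/community-forum"  # placeholder source


def harvest(url):
    """Fetch a page and extract each post as a structured record."""
    response = requests.get(url, timeout=30)
    response.raise_for_status()
    soup = BeautifulSoup(response.text, "html.parser")

    records = []
    # 'div.post' and 'data-author' are placeholders; inspect the real
    # page to find the elements and attributes that hold the content.
    for post in soup.select("div.post"):
        records.append({
            "author": post.get("data-author", "unknown"),
            "text": post.get_text(strip=True),
        })
    return records


if __name__ == "__main__":
    rows = harvest(URL)
    # Save the semi-structured result so it can be coded and analyzed later.
    with open("harvested_posts.csv", "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=["author", "text"])
        writer.writeheader()
        writer.writerows(rows)
```

The same pattern applies to PDFs, tweets or comment threads: fetch, parse, keep only the fields you need, and ask the usual consent and privacy questions before any of it feeds into MERL.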

Thinking about these four buckets of data can help MERL practitioners to identify data sources and how they might complement one another in a MERL plan. Categorizing them as such can also help to map out how the different kinds of data will be responsibly collected/found/harvested, stored, shared, used, and maintained/retained/destroyed. Each type of data also has certain implications in terms of privacy, consent and use/re-use and how it is stored and protected. Planning for the use of different data sources and types can also help organizations choose the data management systems needed and identify the resources, capacities and skill sets required (or needing to be acquired) for modern MERL.
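
As an illustration of that kind of mapping, here is a small, hypothetical sketch of a data inventory; the buckets, fields and retention notes are examples I have made up, not a prescribed standard.

```python
# A hypothetical sketch of a simple MERL data inventory: one entry per
# data source, mapped to its bucket and to basic responsible-data choices.
from dataclasses import dataclass


@dataclass
class DataSource:
    name: str
    bucket: str             # 'traditional' | 'found' | 'seamless' | 'big'
    consent_obtained: bool  # was informed consent collected for this use?
    contains_pii: bool      # does it hold personally identifiable info?
    retention: str          # e.g. 'de-identify after analysis'


inventory = [
    DataSource("household survey", "traditional", True, True,
               "de-identify after analysis; destroy raw data after 2 years"),
    DataSource("open government dataset", "found", False, False,
               "retain; already public and aggregated"),
    DataSource("platform usage logs", "seamless", False, True,
               "minimize, de-identify, review terms of service"),
]

# Flag any source that needs a privacy review before it is used for MERL.
for source in inventory:
    if source.contains_pii and not source.consent_obtained:
        print(f"Review needed: {source.name} ({source.bucket} data)")
```

Writing such an inventory down, in whatever format, forces the consent, privacy and retention questions to be answered per data source rather than as an afterthought.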

Organizations and evaluators are increasingly comfortable using mobile phones and/or tablets to do traditional data gathering, but they often are not using ‘found’ datasets. This may be because these datasets are not very ‘find-able,’ because organizations are not creating them, because re-using data is not a common practice for them, because the data are of questionable quality/integrity, because there are no descriptors, or for a variety of other reasons.

The use of ‘seamless’ data is something that development and humanitarian agencies might want to get better at. Even though large swaths of the populations that we work with are not yet online, this is changing. And if we are using digital tools and applications in our work, we shouldn’t let that data go to waste if it can help us improve our services or better understand the impact and value of the programs we are implementing. (At the very least, we had better understand what seamless data the tools, applications and platforms we’re using are collecting, so that we can manage our users’ data privacy and security and ensure these are not being violated by third parties!)

Big data is also new to the development sector, and there may be good reason it is not yet widely used. Many of the populations we are working with are not producing much data — though this is also changing as digital financial services and mobile phone use have become almost universal and the use of smart phones is on the rise. Organizations normally require new knowledge, skills, partnerships and tools to access and use existing big data sets or to do any data harvesting. Some say that big data, along with ‘seamless’ data, will one day replace our current form of MERL. As artificial intelligence and machine learning advance, who knows… (and it’s not only MERL practitioners who will be out of a job — but that’s a conversation for another time!)

Not every organization needs to be using all four of these kinds of data, but we should at least be aware that they are out there and consider whether they are of use to our MERL efforts, depending on what our programs look like, who we are working with, and what kind of MERL we are tasked with.

I’m curious how other people conceptualize their buckets of data, and where I’ve missed something or defined these buckets erroneously…. Thoughts?