All posts by Linda Raftree

About Linda Raftree

Linda Raftree supports strategy, program design, research, and technology in international development initiatives. She co-founded MERL Tech in 2014 and Kurante in 2013. Linda advises Girl Effect on digital safety, security and privacy and supports the organization with research and strategy. She is involved in developing responsible data policies for both Catholic Relief Services and USAID. Since 2011, she has been advising The Rockefeller Foundation’s Evaluation Office on the use of ICTs in monitoring and evaluation. Prior to becoming an independent consultant, Linda worked for 16 years with Plan International. Linda runs Technology Salons in New York City and advocates for ethical approaches to using ICTs and digital data in the humanitarian and development space. She is the co-author of several publications on technology and development, including Emerging Opportunities: Monitoring and Evaluation in a Tech-Enabled World with Michael Bamberger. Linda blogs at Wait… What? and tweets as @meowtree. See Linda’s full bio on LinkedIn.

Report back on MERL Tech DC

Day 1, MERL Tech DC 2018. Photo by Christopher Neu.

The MERL Tech Conference explores the intersection of Monitoring, Evaluation, Research and Learning (MERL) and technology. The main goals of “MERL Tech” as an initiative are to:

  • Transform and modernize MERL in an intentionally responsible and inclusive way
  • Promote ethical and appropriate use of tech (for MERL and more broadly)
  • Encourage diversity & inclusion in the sector & its approaches
  • Improve development, tech, data & MERL literacy
  • Build/strengthen community, convene, help people talk to each other
  • Help people find and use evidence & good practices
  • Provide a platform for hard and honest talks about MERL and tech and the wider sector
  • Spot trends and future-scope for the sector

Our fifth MERL Tech DC conference took place on September 6-7, 2018, with a day of pre-workshops on September 5th. Some 300 people from 160 organizations joined us for the two days, and another 70 people attended the pre-workshops.

Attendees came from a wide diversity of professions and disciplines:

What professional backgrounds did we see at MERL Tech DC in 2018?

An unofficial estimate of speaker racial and gender diversity is here.

Gender balance on panels

At this year’s conference, we focused on 5 themes (See the full agenda here):

  1. Building bridges, connections, community, and capacity
  2. Sharing experiences, examples, challenges, and good practice
  3. Strengthening the evidence base on MERL Tech and ICT4D approaches
  4. Facing our challenges and shortcomings
  5. Exploring the future of MERL

As always, sessions were related to: technology for MERL, MERL of ICT4D and Digital Development programs, MERL of MERL Tech, digital data for adaptive decisions/management, ethical and responsible data approaches and cross-disciplinary community building.

Big Data and Evaluation Session. Photo by Christopher Neu.

Sessions included plenaries, lightning talks and breakout sessions. You can find a list of sessions here, including any presentations that have been shared by speakers and session leads. (Go to the agenda and click on the session of interest. If we have received a copy of the presentation, there will be a link to it in the session description).

One topic that we explored more in-depth over the two days was the need to get better at measuring ourselves and understanding both the impact of technology on MERL (the MERL of MERL Tech) and the impact of technology overall on development and societies.

As Anahi Ayala Iacucci said in her opening talk — “let’s think less about what technology can do for development, and more about what technology does to development.” As another person put it, “We assume that access to tech is a good thing and immediately helps development outcomes — but do we have evidence of that?”

Feedback from participants

Some 17.5% of participants filled out our post-conference feedback survey, and 70% of them rated their experience either “awesome” or “good”. Another 7% of participants rated individual sessions through the “Sched” app, with an average session satisfaction rating of 8.8 out of 10.

Topics that survey respondents suggested for next time include: more basic tracks and more advanced tracks, more sessions relating to ethics and responsible data, and a greater focus on accountability in the sector. Read the full Feedback Report here!

What’s next? State of the Field Research!

In order to arrive at an updated sense of where the field of technology-enabled MERL is, a small team of us is planning to conduct some research over the next year. At our opening session, we did a little crowdsourcing to gather input and ideas about what the most pressing questions are for the “MERL Tech” sector.

We’ll be keeping you informed here on the blog about this research and welcome any further input or support! We’ll also be sharing more about individual sessions here.

MERL Tech Jozi Feedback Report

MERL Tech Jozi took place on August 1-2, 2018. Below are some highlights from the post-conference survey that was sent to participants requesting feedback on their MERL Tech Jozi experience. Thirty-four percent of our attendees filled out the post-conference survey via Google Forms.

Overall Experience

Here’s how survey participants rated their overall experience:

Participants’ favorite sessions

The sessions that were most frequently mentioned as favorites and some reasons why included:

Conducting a Baseline of the ICT Ecosystem – Genesis Analytics and DIAL

  • …interactive session and felt practical. I could easily associate with what the team was saying. I really hope these learnings make it to implementation and start informing decision-making around funding! The presenters were also great.
  • …interesting and engaging, findings were really relevant to the space.
  • …shared lessons and insights resonated with my own professional experience. The discussions were fruitful and directly relevant to my line of work.
  • …incredibly useful.
  • The study confirmed a lot of my perceptions as an IT developer in the MERL space, but now I have some more solid backup. I will use this in my webinars and consulting on “IT for M&E”.

Datafication Discrimination – Media Monitoring Africa, Open Data Durban, Amandla.mobi and Oxfam South Africa

  • Linked both MERL and Tech to programme and focussed on the impact of MERL Tech in terms of sustainable, inclusive development.
  • Great panel, very knowledgeable, something different to the usual M&E. Interactive and diverse.
  • …probably most critical and informative in terms of understanding where the sector was at… the varied level of information across the audience and the panel was fascinating – if slightly worrying about how unclear we are as an M&E sector.

When WhatsApp Becomes About More Than Messaging – Genesis Analytics, Every1Mobile and Praekelt.org

  • As an evaluator, I have never thought of using WhatsApp as a way of communicating with potential beneficiaries. It made me think about different ways of getting in touch with beneficiaries of a programme, and getting them to participate in a survey.
  • The different case studies included examples, great media, a good Q&A session at the end, and I learnt new things. WhatsApp is only just reaching its potential in mHealth, so it was good to learn real-life lessons.
  • Hearing about the opportunities and challenges of applying a tool in different contexts and for different purposes gave good all-around insights.

Social Network Analysis – Data Innovators and Praekelt.org

  • I was already very familiar with SNA but had not had the opportunity to use it for a couple of years. Hearing this presentation with examples of how others have used it really inspired me and I’ve since sketched out a new project using SNA on data we’re currently gathering for a new product! I came away feeling really inspired and excited about doing the analysis.

Least favorite sessions

Where participants rated sessions as their “least favorite,” it was because:

  • The link to technology was not clear
  • It felt like a sales pitch
  • It felt extractive
  • Speaker went on too long
  • Views on MERL or Tech seemed old fashioned
Topics that need more focus in the future

Unpack the various parts of “M” “E” “R” “L”

  • Technology across MERL, not just monitoring. There was a lot of technology for data collection & tracking but little for ERL in MERL
  • More evaluation?
  • The focus was very much on evaluation (from the sessions I attended) and I feel like we did not talk about the monitoring, research and learning so much. This is huge for overall programme implementation and continuously learning from our data. Next time, I would like to talk a bit more about how organisations are actually USING data day-to-day to make decisions (monitoring) and learning from it to adapt programmes.
  • The R of MERL is hardly discussed at all. Target this for the next MERL Tech.

New digital approaches / data science

  • AI and how it can introduce biases, machine learning, Python
  • A data science-y stream could open new channels of communication and collaboration

Systems and interoperability

  • Technology for data management between organizations and teams.
  • Integrations between platforms.
  • Public health and education. Think about how we discuss and bring more attention to the various systems out there, and how we ensure interoperability and systems that support the long-term visions of countries.
  • Different types of MERL systems. We focused a lot on data collection systems, but there is a range of monitoring systems that programme managers can use to make decisions.

Scale and sustainability

  • How to engage and educate governments on digital data collection systems.
  • The debate on open source: in the development sector it is pushed as the holy grail, whereas most other software worldwide is proprietary for a reason (safety, maintenance, continued support, custom solutions), and open source doesn’t mean free.
  • Business opportunities. MERL as a business tool. How MERL Tech has proved ROI in business and real market settings, even if those settings were in the NGO/NPO space. What is the business case behind MERL Tech and MERL Tech developments?
Ah ha! Moments

Learning about technology / tech approaches

  • I found the design workshops enlightening, and did not, as an evaluator, realise how much time techies put into user testing.
  • I am a tech dinosaur – so everything I learned about a new technology and how it can be applied in evaluation was an ‘aha!’

New learning and skills

  • The SNA [social network analysis] inspiration that struck me was my big takeaway! I can’t wait to get back to the office and start working on it.
  • Really enjoyed learning about WhatsApp for SBCC.
  • The qualitative difference in engagement, structure, analysis and resource need between communicating via SMS versus IM. (And realising again how old school I am for a tech person!)

Data privacy, security, ethics

  • Ah ha moment was around how we could improve handling data
  • Data security
  • Our sector (including me) doesn’t really understand ‘big data,’ how it can discriminate, and what that might mean to our programmes.

Talking about failure

  • The fail fest was wonderful. We all theoretically know that it’s good to be honest about failure and to share what that was like, but this took honest reflection to a whole new level and set the tone for Day 2.

I’m not alone!

  • The challenges I am facing with introducing tech for MERL in my organisations aren’t unique to me.
  • There are other MERL Tech practitioners with a journalism/media background! This is exciting and makes me feel I am in the right place. The industry seems to want to gatekeep (academia, rigorous training), so this is interesting to consider going forward, but it also excites me to challenge this through mentorship opportunities and opening the space to others like me who were given a chance and gained experience along the way. Also had many Aha moments for using WhatsApp and its highly engaging format.
  • Learning that many other practitioners support learning on your own.
  • There are people locally who are interested in connecting and learning from each other.
Recommendations for future MERL Tech events

More of most everything…

  • More technical sessions
  • More panel discussions
  • More workshops
  • More in-depth sessions!
  • More time for socializing and guided networking like the exercise with the coloured stickers on Day 1
  • More NGOs involved, especially small NGOs.
  • More and better marketing to attract more people
  • More demo tables, or have new people set up demo tables each day
  • More engagement: is there a way that MERL Tech could be used further to shape, drive and promote the agenda of using technology for better MERL? Maybe through a joint session where we identify important future topics to focus on? Just as something that gives those who want the opportunity to further engage with and contribute to MERL Tech and its agenda-setting?
  • The conversations generally were very ‘intellectual’. Too many conversations revolved around how the world had to move on to better appreciate the value of MERL, rather than how MERL was adapted, used and applied in the real world. [It was] too dominated by MERL early adopters and proponents, rather than MERL customers… Or am I missing the point, which may be that MERL (in South Africa) is still a subculture for academic minded researchers. Hope not.
  • More and better wine!
Kudos
  • For some reason this conference – as opposed to so many other conferences I have been to – actually worked. People were enthused, they were kind, willing to talk – and best of all by day 2 they hadn’t dropped out like flies (which is such an issue with conferences!). So whatever you did do it again next time!
  • Very interactive and group-focused! This was well balanced with informative sessions. I think creative group work is good but it wouldn’t be good to have the whole conference like this. However, this was the perfect amount of it and it was well led and organized.
  • I really had a great time at this conference. The sessions were really interesting and it was awesome to get so many different people in the same place to discuss such interesting topics and issues. Lunch was also really delicious.
  • Loved the lightning talks! Also, the breakaway sessions were great. The coffee was amazing, thank you. Fail Fest is such a cool concept, and we are looking to introduce this kind of thinking into our own organisation more – we all struggle with the same things, and it was good to be around like-minded professionals.
  • I really appreciated the fairly “waste-free” conference with no plastic bottles, unnecessary programmes and other things that I’ll just throw away afterwards. This was a highlight for me!
  • I really enjoyed this conference. Firstly the food was amazing (always a win). But most of all the size was perfect. It was really clever the way you forced us to sit in small lunch sizes and that way by the end of the conference I really had the confidence to speak to people. Linda was a great organiser – enthusiastic and punctual.
Who attended MERL Tech Jozi?

Who presented at MERL Tech Jozi?

If you’d like to experience MERL Tech, sign up now to attend in Washington, DC on September 5-7, 2018!

September 5th: MERL Tech DC pre-workshops

This year at MERL Tech DC, in addition to the regular conference on September 6th and 7th, we’re offering two full-day, in-depth workshops on September 5th. Join us for a deeper look into the possibilities and pitfalls of Blockchain for MERL and Big Data for Evaluation!

What can Blockchain offer MERL? with Shailee Adinolfi, Michael Cooper, and Val Gandhi, co-hosted by Chemonics International, 1717 H St. NW, Washington, DC 20006.

Tired of the blockchain hype, but still curious on how it will impact MERL? Join us for a full day workshop with development practitioners who have implemented blockchain solutions with social impact goals in various countries. Gain knowledge of the technical promises and drawbacks of blockchain technology as it stands today and brainstorm how it may be able to solve for some of the challenges in MERL in the future. Learn about ethical design principles for blockchain and how to engage with blockchain service providers to ensure that your ideas and programs are realistic and avoid harm. See the agenda here.

Register now to claim a spot at the blockchain and MERL pre-workshop!

Big Data and Evaluation with Michael Bamberger, Kerry Bruce and Peter York, co-hosted by the Independent Evaluation Group at the World Bank – “I” Building, Room: I-1-200, 1850 I St NW, Washington, DC 20006

Join us for a one-day, in-depth workshop on big data and evaluation where you’ll get an introduction to Big Data for Evaluators. We’ll provide an overview of applications of big data in international development evaluation and discuss ways that evaluators are (or could be) using big data and big data analytics in their work. You’ll also learn about the various tools of data science and potential applications, and run through specific cases where evaluators have employed big data as one of their methods. We will also address the important question of why many evaluators have been slower and more reluctant to incorporate big data into their work than their colleagues in research, program planning, management and other areas such as emergency relief programs. Lastly, we’ll discuss the ethics of using big data in our work. See the agenda here!

Register now to claim a spot at the Big Data and Evaluation pre-workshop!

You can also register here for the main conference on September 6-7, 2018!

Check out the agenda for MERL Tech DC!

MERL Tech DC is coming up quickly!

This year we’ll have two pre-workshops on September 5th: What Can Blockchain Offer MERL? (hosted by Chemonics) and Big Data and Evaluation (hosted by the World Bank).

On September 6-7, 2018, we’ll have our regular two days of lightning talks, break-out sessions, panels, Fail Fest, demo tables, and networking with folks from diverse sectors who all coincide at the intersection of MERL and Tech!

Registration is open – and we normally sell out, so get your tickets now while there is still space!

Take a peek at the agenda – we’re excited about it — and we hope you’ll join us!

MERL Tech Jozi: Highlights, Takeaways and To Dos

Last week 100 people gathered at Jozihub for MERL Tech Jozi — two days of sharing, learning and exploring what’s happening at the intersection of Monitoring, Evaluation, Research and Learning (MERL) and Tech.

This was our first MERL Tech event outside of Washington DC, New York or London, and it was really exciting to learn about the work that is happening in South Africa and nearby countries. The conference vibe was energetic, buzzy and friendly, with lots of opportunities to meet people and discuss this area of work.

Participants spanned backgrounds and types of institutions – one of the things that makes MERL Tech unique! Much of what we aim to do is to bridge gaps and encourage people from different approaches to talk to each other and learn from each other, and MERL Tech Jozi provided plenty of opportunity for that.

Sessions covered a range of topics, from practical, hands-on workshops on Excel, responsible data, and data visualization, to experience sharing on data quality, offline data capture, video measurement, and social network analysis, to big picture discussions on the ICT ecosystem, the future of evaluation, the fourth industrial revolution, and the need to enhance evaluator competencies when it comes to digital tools and new approaches.

Demo tables gave participants a chance to see what tools are out there and to chat with developers about their specific needs. Lightning Talks offered a glimpse into new approaches and reflections on the importance of designing with users and understanding context in which these new approaches are utilized. And at the evening “Fail Fest” we heard about evaluation failures, challenges using mobile technology for evaluation, and sustainable tool selection.

Access the MERL Tech Jozi agenda with presentations here or all the presentations here.

3 Takeaways

One key take-away for me was that there’s a gap between the ‘new school’ of younger, more tech-savvy MERL practitioners and the more established, older evaluation community. Some familiar tensions were present between those with years of experience in MERL but less expertise in tech, and those who are newer to the MERL side yet highly proficient in tech-enabled approaches. The number of people who identify as having skills that span both areas is growing and will continue to do so.

It’s going to be important to continue to learn from one another and work together to bring our MERL work to the next level, both in terms of how we form MERL teams with the necessary expertise internally and how we engage with each other and interact as a whole sector. As one participant put it, we are not going to find all these magical skills in one person (the “MERL Tech Unicorn”), so we need to be cognizant of how we form teams that have the right variety of skills and experiences, including data management and data science where necessary.

It is critical that we all have a better understanding of the wider impacts of technologies, beyond our projects, programs, platforms and evaluations. If we don’t have a strong grip on how technology is affecting wider society, how will we understand how social change happens in increasingly digital contexts? How will we negotiate data privacy? How will we wrestle with corporate data use and the potential for government surveillance? If evaluator understanding of technology and the information society is low, how will evaluators offer relevant and meaningful insights? How do diversity, inclusion and bias manifest themselves in a tech-enabled world and in tech-enabled MERL, and what do evaluators need to know about that in order to ensure representation? How do we understand data in its newer forms and manifestations? How do we ensure ethical and sound approaches? We need all the various sectors that form part of the MERL Tech community to work together to come to a better understanding of both the tangible and intangible impacts of technology in development work, evaluation, and wider society.

A second key takeaway is that we need to do a better job of documenting and evaluating the use of technology in development and in MERL (e.g., the MERL of ICT4D and the MERL of tech-enabled MERL). I learned so much from the practical presentations and experience sharing during MERL Tech Jozi. In many cases, the challenges and learning were very similar across projects and efforts. We need to find better ways of ensuring that this kind of learning is found and accessed, and that it is put into practice when creating new initiatives. We also need to understand more about the power dynamics, negative incentives and other barriers that prevent us from using what we know.

As “MERL Tech”, we are planning to pull some resources and learning together over the next year or two, to trace the shifts in the space over the past 5 years, and to highlight some of the trends we are seeing for the future. (Please get in touch with me if you’d like to participate in this “MERL of MERL Tech” research with a case study, an academic paper, other related research, or as a key informant!)

A third takeaway, as highlighted by Victor Naidu from the South African Monitoring and Evaluation Association (SAMEA), is that we need to focus on developing the competencies that evaluators require for the near future. And we need to think about how the tech sector can better serve the MERL community. SAMEA has created a set of draft competencies for evaluators, but these are missing digital competencies. SAMEA would love your comments and thoughts on what digital competencies evaluators require. They would also like to see you as part of their community and at their next event! (More info on joining SAMEA).

What digital competencies should be added to this list of evaluator competencies? Please add your suggestions and comments here on the google doc.

MERL Tech will be collaborating more closely with SAMEA to include a “MERL Tech Track” at SAMEA’s 2019 conference, and we hope to be back at JoziHub again in 2020 with MERL Tech Jozi as its own separate event.

Be sure to follow us on Twitter or sign up (in the side bar) to receive MERL Tech news if you’d like to stay in the loop! And thanks to all our sponsors – Genesis Analytics, Praekelt.org, The Digital Impact Alliance  (DIAL) and JoziHub!

MERL Tech DC is coming up on September 6-7, with pre-workshops on September 5 on Big Data and Evaluation and Blockchain and MERL! Register here.

Integrating Big Data into Evaluation: a conversation with Michael Bamberger and Rick Davies

At MERL Tech London, 2018, we invited Michael Bamberger and Rick Davies to debate the question of whether the enthusiasm for Big Data in Evaluation is warranted. At their session, through a formal debate (skillfully managed by Shawna Hoffman from The Rockefeller Foundation), they discussed whether Big Data and Evaluation would eventually converge, whether one would dominate the other, how they can and should relate to each other, and what risks and opportunities there are in this relationship.

Following the debate, Michael and Rick wanted to continue the discussion — this time exploring the issues in a more conversational mode on the MERL Tech Blog, because in practice both of them see more than one side to the issue.

So, what do Rick and Michael think — will big data integrate with evaluation — or is it all just hype?

Rick: In the MERL Tech debate I put a lot of emphasis on the possibility that evaluation, as a field, would be overwhelmed by big data / data science rhetoric. But since then I have been thinking about a countervailing development, which is that evaluative thinking is pushing back against unthinking enthusiasm for the use of data science algorithms. I emphasise “evaluative thinking” rather than “evaluators” as a category of people, because a lot of this pushback is coming from people who would not identify themselves as evaluators. There are different strands to this evaluative response.

One is a social justice perspective, reflected in recent books such as “Weapons of Math Destruction”, “Automating Inequality”, and “Algorithms of Oppression”, which emphasise the human cost of poorly designed and/or poorly supervised use of algorithms that apply large amounts of data to welfare and justice administration. Another strand is more like a form of exploratory philosophy, and has focused on how it might be possible to define “fairness” when designing and evaluating algorithms that have consequences for human welfare [see 1, 2, 3, 4]. Another strand is perhaps more technical in focus, but still has a value concern. This is the literature on algorithmic transparency. Without transparency it is difficult to assess fairness [see 5, 6]. Neural networks are often seen as a particular challenge. Associated with this are discussions about “the right to explanation” and what this means in practice [1].
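
To make the fairness strand concrete, here is a minimal sketch (in Python, with entirely invented data) of one check this literature calls for: comparing an algorithm’s false positive rate across two demographic groups. Nothing here reflects any real system; it only illustrates why transparency about decisions and outcomes is a precondition for assessing fairness.

```python
# A toy fairness check: does the algorithm wrongly flag members of one
# group more often than another? All records below are invented.

records = [
    # (group, actual_outcome, algorithm_decision); 1 = flagged as a risk
    ("A", 0, 1), ("A", 0, 0), ("A", 1, 1), ("A", 0, 1),
    ("B", 0, 0), ("B", 0, 0), ("B", 1, 1), ("B", 0, 1),
]

def false_positive_rate(rows):
    """Share of true negatives that the algorithm wrongly flagged."""
    negatives = [r for r in rows if r[1] == 0]
    flagged = [r for r in negatives if r[2] == 1]
    return len(flagged) / len(negatives) if negatives else 0.0

for group in ("A", "B"):
    rows = [r for r in records if r[0] == group]
    print(f"group {group}: false positive rate = {false_positive_rate(rows):.2f}")
```

A large gap between the two printed rates would mean that errors fall more heavily on one group, which is exactly the kind of disparity that stays invisible without algorithmic transparency.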

In parallel there is also some infiltration of data science thinking into mainstream evaluation practice. DFID is funding the latest call by the World Bank’s Strategic Impact Evaluation Fund (SIEF) for “nimble evaluations” [7]. These are described as rapid and low cost, likely to take the form of an RCT, but focused on improving implementation rather than assessing overall impact [8]. This type of RCT is directly equivalent to the A/B testing used by the internet giants to improve the way their platforms engage with their users. Hopefully these nimble approaches may bring a more immediate benefit to people’s lives than RCTs that have tried to assess the impact of a whole project and then inform the design of subsequent projects.
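
Since “nimble evaluation” and A/B testing share the same logic, a bare-bones sketch may help readers less familiar with the latter. The scenario and all numbers below are hypothetical: two versions of an implementation detail (say, an SMS reminder) are randomly assigned, and a two-proportion z-test checks whether the difference in uptake is likely to be real.

```python
from math import sqrt

# Hypothetical A/B comparison: variant B of an SMS reminder vs. the
# status quo A, each randomly assigned to 1,000 recipients.
successes_a, n_a = 120, 1000  # uptake under the current design
successes_b, n_b = 150, 1000  # uptake under the variant

p_a, p_b = successes_a / n_a, successes_b / n_b
p_pool = (successes_a + successes_b) / (n_a + n_b)      # pooled rate
se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))  # standard error
z = (p_b - p_a) / se

print(f"uptake A = {p_a:.1%}, uptake B = {p_b:.1%}, z = {z:.2f}")
# |z| > 1.96 suggests the difference is unlikely to be chance (5% level).
```

This is the whole statistical core of an A/B test; the “nimble” part is running it quickly and cheaply on an implementation question rather than on overall project impact.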

Another recent development is the World Bank’s Data Science competition [9], where participants are challenged to develop predictive models of household poverty status, based on World Bank Household Survey data. The intention is that these should provide a cheaper means of identifying poor households than relying on what can be very expensive and time-consuming nationwide household surveys. At present the focus of the supporting website is very technical. As far as I can see, there is no discussion of how the winning prediction model will be used or how any risks of adverse effects might be monitored and managed. Yet as I suggested at MERL Tech London, most algorithms used for prediction modelling will have errors. The propensity to generate false positives and false negatives is machine learning’s equivalent of original sin. It is to be expected, so it should be planned for. Plans should include systematic monitoring of errors and a public policy for correction, redress and compensation.
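
As a concrete illustration of that last point, here is a small hypothetical sketch of what systematic error monitoring could look like: a ground-verified audit sample is compared against the model’s poverty classifications, and the two error rates that matter for redress are reported. The households, labels and resulting rates are invented for illustration only.

```python
# Hypothetical audit of a poverty-prediction model: (predicted, actual)
# for a small sample of households verified on the ground.
# 1 = classified/verified as poor, 0 = not poor. Figures are invented.
pairs = [
    (1, 1), (1, 0), (0, 0), (1, 1), (0, 1), (0, 0), (1, 1), (0, 0),
]

false_pos = sum(1 for pred, act in pairs if pred == 1 and act == 0)
false_neg = sum(1 for pred, act in pairs if pred == 0 and act == 1)
actual_poor = sum(act for _, act in pairs)
actual_nonpoor = len(pairs) - actual_poor

# A false negative is a poor household the targeting would miss;
# a false positive is support sent to a household that doesn't qualify.
print(f"false negative rate: {false_neg / actual_poor:.0%}")
print(f"false positive rate: {false_pos / actual_nonpoor:.0%}")
```

Publishing numbers like these on a regular basis, together with a policy for correction and compensation, is one way the “planned for” part could be made real.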

Michael: These are both important points, and it is interesting to think about what conclusions we can draw for the question before us. Concerning the important issue of algorithmic transparency (AT), Rick notes that a number of widely discussed books and articles have pointed out the risk that the lack of AT poses for democracy, and particularly for poor and vulnerable groups. Virginia Eubanks, one of the authors cited by Rick, talks about the “digital poorhouse” and how unregulated algorithms can help perpetuate an underclass. However, I think we should examine more carefully how evaluators are contributing to this discussion. My impression, based on very limited evidence, is that evaluators are not at the center — or even perhaps the periphery — of this discussion. Much of the concern about these issues is being generated by journalists, public administration specialists or legal specialists. I argued in an earlier MERL Tech post that many evaluators are not very familiar with big data and data analytics and are often not very involved in these debates. This is a hypothesis that we hope readers can help us to test.

Rick’s second point, about the infiltration of data science into evaluation, is obviously very central to our discussion. I would agree that the World Bank is one of the leaders in the promotion of data science, and the example of “nimble evaluation” may be a good example of convergence between data science and evaluation. However, there are other examples where the Bank is on the cutting edge of promoting new information technology, but where potential opportunities to integrate technology and evaluation do not seem to have been taken up. An example would be the Bank’s very interesting Big Data Innovation Challenge, which produced many exciting new applications of big data to development (e.g. climate-smart agriculture, promoting financial inclusion, securing property rights through geospatial data, and mapping poverty through satellites). The use of data science to strengthen evaluation of the effectiveness of these interventions, however, was not mentioned as one of the objectives or outputs of this very exciting program.

It would also be interesting to explore to what extent the World Bank Data Science competition that Rick mentions resulted in the convergence of data science and evaluation, or whether it was simply testing new applications of data science.

Finally, I would like to mention two interesting chapters in Cybersociety, Big Data and Evaluation, edited by Petersson and Breul (2017, Transaction Publishers). One chapter (by Hojlund et al.) reports on a survey which found that only 50% of professional evaluators claimed to be familiar with the basic concepts of big data, and only about 10% reported having used big data in an evaluation. In another chapter, Forss and Noren reviewed a sample of Terms of Reference (TOR) for evaluations conducted by different development agencies, and found that none of the 25 TOR specifically required the evaluators to incorporate big data into their evaluation design.

It is difficult to find hard evidence on the extent to which evaluators are familiar with, sympathetic to, or incorporating big data into their evaluations, but the examples mentioned above show that there are important questions about the progress made towards the convergence of evaluation and big data.

We invite readers to share their experiences, both on how the two professions are starting to converge and on the challenges that slow down, or even constrain, the process of convergence.

Take our survey on Big Data and Evaluation!

Or sign up for Michael’s full-day workshop on Big Data and Evaluation in Washington, DC, on September 5th, 2018! 

Check out the agenda for MERL Tech Jozi!

We’re thrilled that the first MERL Tech conference is happening in Johannesburg, South Africa, August 1-2, 2018!

MERL Tech Jozi will be two days of in-depth sharing and exploration with 100 of your peers.  We’ll look at what’s been happening across the multi-disciplinary MERL field, including what we’ve been learning and the complex barriers that still need resolving. We’ll also generate lively debates around the possibilities and the challenges that our field needs to address as we move ahead.

The agenda for MERL Tech Jozi 2018 is now available. Take a look and register to attend!

Register to attend MERL Tech Jozi!

We’ll have workshops, panels, discussions, case studies, lightning talks, demo tables, community building, socializing, and an evening reception with a Fail Fest!

Session areas include:

  • digital data collection and management
  • data visualization
  • social network analysis
  • data quality
  • remote monitoring
  • organizational capacity for digital MERL
  • big data
  • small data
  • ethics, bias and privacy when using digital data in MERL
  • biometrics, spatial analysis, machine learning
  • WhatsApp, SMS, IVR and USSD

Take a look at the agenda to find the topics, themes and tools that are most interesting to you and to learn more about the different speakers and facilitators and their work.

Tickets are going fast, so be sure to snap yours up before it’s too late! (Register here)

MERL Tech Jozi is supported by:

Feedback Report from MERL Tech London 2018

MERL Tech London happened on March 19-20, 2018. Here are some highlights from session level feedback and the post-conference survey on the overall MERL Tech London experience.

If you attended MERL Tech London, please get in touch if you have any further questions about the feedback report or if you would like us to send you detailed (anonymized) feedback about a session you led. Please also be sure to send us your blog posts & session summaries so that we can post them on MERL Tech News!

Background on the data

  • 54 participants (~27%) filled out the post-conference survey via Google Forms.
  • 59 (~30%) rated and/or commented on individual sessions via the Sched app. Participants chose from three ’emoji’ options: a happy face 🙂 , a neutral face 😐 , and a sad face 🙁 . Participants could also leave their comments on individual sessions.
  • We received 616 session ratings/comments via Sched. Some people rated the majority of sessions they attended; others only rated 1-2 sessions.
  • Some reported that they did not feel comfortable rating sessions in the Sched app because they were unclear about whether session leads and the public could see the rating. In future, we will let participants know that only Sched administrators can see the identity of commenters and the ratings given to sessions.
  • We do not know if there is an overlap between those who filled out Sched and those who fed back via Google Forms, because the Google Forms survey is anonymous.

Overall feedback

Here’s how survey participants rated the overall experience:

Breakout sessions – 137 ratings: 69% 🙂, 30% 😐, and 13% 🙁

Responses were fairly consistent across both Sched ratings and Google Forms (the form asked people to identify their favorite session). Big data and data science sessions stand out with the highest number of favorable ratings and comments. General Data Protection Regulation (GDPR) and responsible data made an important showing, as did the session on participatory video in evaluation.

Sessions with favorable comments tended to include or combine elements of:

  • an engaging format
  • good planning and facilitation
  • strong levels of expertise
  • clear and understandable language and examples
  • and strategic use of case studies to point at a bigger picture that is replicable to other situations.

Below are the breakout sessions that received the most favorable ratings and comments overall. (Plenty of other sessions were also rated well but did not make the “top-top.”)

Be it resolved: In the near future, conventional evaluation and big data will be successfully integrated

  • Brilliant session! Loved the format! Fantastic to have such experts taking part. Really appreciated the facilitation and that there was a time at the end for more open questions/discussion.

Innovative Use of Theory-Based and Data Science Evaluation Approaches

  • Most interesting talk of the day (maybe more for the dedicated evaluation practitioners), very practical and easy to understand, and I’m really looking forward to hearing more about the results as the work progresses!

Unpacking How Change Happened (or Didn’t): Participatory Video and Most Significant Change

  • Right amount of explanation, and using case studies to illustrate points and respond to questions rather than just stand-alone case studies.

GDPR – What Is It and What Do We Do About It?

  • Great presentation, starting off with some historical background, explaining with clarity how this new legislation is a rights-based approach, and concluding on how for Oxfam this is not a compliance project but a modification in data culture. Amazing, innovative, and the speaker knew his area very well.

The Best of Both Worlds? Combining Data Science and Traditional M&E to Understand Impact

  • I learned so much from this session and was completely inspired by the presenter and the content. Clear – well paced – honest – open – collaborative and packed with really good insight. Amazing.

Big Data, Adaptive Management, and the Future of MERL

  • Quite a mixed bag of presenters, with focus on different pieces of the overall topic. The speaker from Novometrics was particularly engaging and stimulated some good discussion.

Blockchain: Getting Past the Hype and Considering its Value for MERL

  • Great group with good facilitation. The open-ended question left lots of room for discussion without bias towards a particular outcome. Learned lots, and not just about blockchain.

LEAP, and How to Bring Data to Life in Your Organization

  • Really great session, highly interactive, rich in concepts clearly and convincingly explained. No questions were left unanswered. Very insightful suggestions shared between the presenters/facilitators and the audience. Should be on the agenda of the next MERL Tech Conference as well.

Who Watches the Watchers? Good Practice for Ethical MERL(Tech)

  • I came out with some really helpful material. Collaborative session and good workshop participants willing to share and mind map. Perhaps the lead facilitator could have been a bit more contextual. Not always clear. But our table session was really helpful and the output useful.

The GDPR is coming! Now what?! Practical Steps to Help You Get Ready

  • Good session. Appreciated the handouts.

What could session leads improve on?

We also had a few sessions that were ranked closer to 😐 (somewhere around a 6 or 6.5 on a scale of 1-10). Why did participants rate some sessions lower?

  • “Felt like a product pitch”
  • “Title was misleading”
  • Participatory activity was unclear
  • Poor time management
  • “Case studies did not expand to learning for the sector – too much ‘this is what we did’ and not enough ‘this is what it means.’”
  • Poor facilitation/moderation
  • “Too unstructured, meandering”
  • Low energy
  • “Only a chat among panelists, very little time for Q&A. No space to engage”

Additionally, some sessions had participants with very diverse levels of expertise and varied backgrounds and expectations, which seemed to affect session ratings.

Lightning Talks – 182 ratings: 74% 🙂, 22% 😐, 4% 🙁

Lightning talks consistently get the highest ratings at MERL Tech, and this year was no exception. As one participant said, “my favorite sessions were the lightning talks because they gave a really quick overview of really concrete uses of technology in M&E work. This really helped in getting an overview of the type of activities and projects that were going on.”  Participants rated almost all the lightning talks positively.

Plenary sessions – 192 ratings: 77% 🙂, 21% 😐, and 2% 🙁

Here we include: the welcome, discussions on MERL Tech priorities on Day 1, opening talks on both days, the summary of Day 1, the panel with donors, the closing ‘fishbowl’, and the Fail Fest.

Opening Talks:

  • People appreciated André Clarke’s stage-setting talk on Day 1. “Clear, accessible and thoughtful.” “Nice deck!”
  • Anahi Ayala Iacucci’s opening talk on Day 2 was a hit: “Great keynote! Anahi is very engaging but also her content was really rich. Useful that she used a lot of examples and provided a lot of history.” And “Like Anahi says ‘The question to ask is what does technology do _to_ development, rather than what can technology do _for_ development.'”

Deep Dive into Priorities for the Sector:

  • Most respondents enjoyed the self-directed conversations around the various topics.
  • “Great way to set the tone for the following sessions….” “Some really valuable and practical insights shared.” “Great group, very interesting discussion, good way to get to know a few people.”

Fail Fest:

  • The Fail Fest was enjoyed by virtually everyone. “Brilliantly honest! Well done for having created the space, and thank you to those who shared so openly.” “Awesome! Anahi definitely stole the show. What an amazing way to share learning, so memorable. Again, one to steal….” “I thought this was a fun way to end the first day. All the presenters were really good and the session was well framed and facilitated by Wayan.”

Fishbowl:

  • There were mixed reactions to the “Fish Bowl”:
  • “Great session and way to close the event!” “Fascinating – especially insights from Michael and Veronica.” “Not enough people volunteered to speak.” “Some speakers went on too long.”

Lunchtime Demos – 23 ratings: 52% 🙂, 34% 😐, and 13% 🙁

We know that many MERL Tech participants are wary of being “sold” to. Feedback from past conferences has been that participants don’t like sales pitches disguised as breakout sessions and lightning talks. So, this year we experimented with the idea of lunchtime demo sessions. The idea was that these optional sessions would allow people with a specific interest in a tool or product to have dedicated time with the tool creators for a demo or Q&A. We hoped that doing demo sessions separately from breakout sessions would make the nature of the sessions clear. Judging from the feedback, we didn’t hit the mark. We’ll try to do better next time!

What went wrong?

  • Timing: “The schedule was too tight.” “Give more time to the lunch time demo sessions or change the format. I missed the Impact Mapper session on day 1 as there was insufficient time to eat, go for a comfort break and network. This is really disappointing. I suggest a dedicated hour in the programme on the first day to visit all the ICT provider stalls.”
  • Content: “Demo sessions were more like advertising sessions by respective companies, while nicely titled as if they were to explore topical issues. Demo sessions were all promising the world to us while we know how much challenge technology application faces in real-world. Overall so many demo sessions within a short 2-day conference compromised the agenda”
  • Framing and intent: “I don’t know that framing the lunch sessions as ‘product demos’ makes a ton of sense. Maybe force people to have real case studies or practical (hands-on) sessions, and make them regular sessions? Not sure.” “I think more care needs to be taken to frame the sessions run by the software companies with proper declarations of interests…. Sessions led by software reps were a little less transparent in that they pitched their product, but through some other topic that people would be interested in. I think that it would be wise to make it a DOI [declaration of interest] that is scripted, when people who have an interest declare their interest up front for every panel discussion at the beginning, even if they did a previous one. I think that way the rules would be a little clearer.”

General Comments

Because we bring such a diverse group together in terms of field, experience, focus and interest, expectations are varied, and we often see conflicting suggestions. Whereas some would like more MERL content, others want more Tech content. Whereas some learn a lot, others feel they have heard much of this before. Here are a few of the overall comments from Sched and the Google Form.

Who else should be at MERL Tech?

  • More donors “EU, DFID, MCC, ADB, WB, SIDA, DANIDA, MFA Finland”
  • “People working with governments in developing countries”
  • “People from the ‘field’. It was mentioned in one of the closing comments that the term ‘field’ is outdated and we are not sure what we mean anymore. Wrong. There couldn’t be a more striking difference in discussions during those two days between those with solid field experience and those lacking in it.”
  • “More Brits? There were a lot of Americans that came in from DC…”

Content that participants would like to see in the future

  • More framing: “An opening session that explains what MERL Tech is and all the different ways it’s being or can be used”
  • More specifics on how to integrate technology for specific purposes and for new purposes: “besides just allowing quicker and faster data collection and analysis”
  • More big data/data science: “Anything which combines data science, stats and qualitative research is really interesting for me and seems to be the direction a lot of organisations are going in.”
  • Less big data/data science: “The big data stuff was less relevant to me”
  • More MERL-related sessions: “I have a tech background, so I would personally have liked to have seen more MERL-related sessions.”
  • More tech-related sessions: “It was my first MERLTech, so enjoyed it. I felt that many of the presentations could have been more on-point with respect to the Tech side, rather than the MERL end (or better focus on the integration of the two).”
  • More “R” (Research): Institutional learning and research (evaluations as a subset of research).
  • More “L”: “Learning treated as a topic of its own. By this I mean the capture of tacit knowledge and good practice, and the use of this learning for adaptive management. Compared to my last MERL Tech, I felt this meeting better featured evaluation, or at least spoke of ‘E’ as its own independent letter. I would like to see this for ‘L.’”
  • More opportunities for smaller organisations to get best practice lessons.
  • More ethics discussions: “Consequences/whether or not we should be using personal data held by privately owned companies (like call details records from telecomms companies)” “The conceptual issues around the power dynamics and biases in data and tech ownership, collection, analysis and use and what it means for the development sector.”
  • Hands-on tutorials “for applying some of the methods people have used would be amazing, although may be beyond the remit of this conference.”
  • Coaching sessions: “one-on-ones to discuss challenges in setting up good M&E systems in smaller organisations – the questions we had, and the challenges we face did not feel like they would have been of relevance to the INGOs in the room.”

Some “Ah ha! Moments”

  • “The whole tech discussion needs to be framed around evaluation practice and theory – it seems to me that people come at this being somewhat data obsessed and driven but not starting from what we want to know and why that might be useful.”
  • “There is still quite a large gap between the data scientist and the M&E world – we really need to think more on how to bridge that gap. Despite the fact that it is recognized I do feel that much of the tech stuff was ‘ because we can’ and not because it is useful and answers to a concrete problem. On the other hand some of the tech was so complex that I also couldn’t assess whether it was really useful and what possible risks could be”
  • “I was surprised to see the scale of the impact GDPR is apparently making. Before the conference, I usually felt that most people didn’t have much of an interest in data privacy and responsible data.”
  • “That people were being honest and critical about tech!”
  • “That the tech world remains data hungry and data obsessed!”
  • “That this group is seriously confused about how tech and MERL can be used effectively as a general business practice.”
  • “This community is learning fast!”
  • “Hot topics like Big Data and Block Chain are only new tools, not silver bullets. Like RCTs a few years ago, we are starting to understand their best use and specific added value.”

Kudos

  • “A v useful conference for a growing sector. Well done!”
  • “Great opportunity for bringing together different sectors – sometimes it felt we were talking across tech, merl, and programming without much clarity of focus or common language but I suppose that shows the value of this space to discuss and work towards a common understanding and debate.”
  • “Small but meaningful to me – watching session leads attend other sessions and actively participating was great. We have such an overlap in purpose and in some cases almost no overlap in skillsets. Really felt like MERLTech was a community taking turns to learn from each other, which is pretty different from the other conferences I’ve been to, where the same people often present the same idea to a slightly different audience each year.”
  • “I loved the vibe. I’m coming to this late in my career but was made to feel welcome. I did not feel like an idiot. I found it so informative and some sessions were really inspiring. It will probably become an annual must go to event for me.”
  • “I was fully blown away. I haven’t learnt so much in a long time during a conference. The mix of types of sessions helps massively make the most of the knowledge in room, so keep up that format in the future.”
  • “I absolutely loved it. It felt so good to be with like minded people who have similar concerns and values….”

Thanks again to everyone who filled out the feedback forms and rated their sessions. This really does help us to adapt and improve. We take your ideas and opinions seriously!

If you’d like to experience MERL Tech, please join us in Johannesburg August 1-2, 2018, or Washington, DC, September 6-7, 2018!  The call for session ideas for MERL Tech DC is open through April 30th – please submit yours now!

Present or lead a session at MERL Tech DC!

Please sign up to present, register to attend, or reserve a demo table for MERL Tech DC 2018 on September 6-7, 2018 at FHI 360 in Washington, DC.

We will engage 300 practitioners from across the development ecosystem for a two-day conference seeking to turn the theories of MERL technology into effective practice that delivers real insight and learning in our sector.

MERL Tech DC 2018, September 6-7, 2018

Digital data and new media and information technologies are changing monitoring, evaluation, research and learning (MERL). The past five years have seen technology-enabled MERL growing by leaps and bounds. We’re also seeing greater awareness and concern for digital data privacy and security coming into our work.

The field is in constant flux with emerging methods, tools and approaches, such as:

  • Adaptive management and developmental evaluation
  • Faster, higher quality data collection
  • Remote data gathering through sensors and self-reporting by mobile
  • Big data, data science, and social media analytics
  • Story-triggered methodologies

Alongside these new initiatives, we are seeing increasing documentation and assessment of technology-enabled MERL initiatives. Good practice guidelines are emerging and agency-level efforts are making new initiatives easier to start, build on and improve.

The swarm of ethical questions related to these new methods and approaches has spurred greater attention to areas such as responsible data practice and the development of policies, guidelines and minimum ethical standards for digital data.

Championing the above is a growing and diversifying community of MERL practitioners, assembling from a variety of fields; hailing from a range of starting points; espousing different core frameworks and methodological approaches; and representing innovative field implementers, independent evaluators, and those at HQ that drive and promote institutional policy and practice.

Please sign up to present, register to attend, or reserve a demo table for MERL Tech DC to experience 2 days of in-depth sharing and exploration of what’s been happening across this cross-disciplinary field, what we’ve been learning, complex barriers that still need resolving, and debate around the possibilities and the challenges that our field needs to address as we move ahead.

Submit Your Session Ideas Now

Like previous conferences, MERL Tech DC will be a highly participatory, community-driven event and we’re actively seeking practitioners in monitoring, evaluation, research, learning, data science and technology to facilitate every session.

Please submit your session ideas now. We are looking for a range of topics, including:

  • Experiences and learning at the intersection of MERL and tech
  • Ethics, inclusion, safeguarding, and data privacy
  • Data (big data, data science, data analysis)
  • Evaluation of ICT-enabled efforts
  • The future of MERL
  • Tech-enabled MERL Failures

Visit the session submission page for more detail on each of these areas.

Submission Deadline: Monday, April 30, 2018 (at midnight EST)

Session leads receive priority for the available seats at MERL Tech and a discounted registration fee. You will hear back from us in early June and, if selected, you will be asked to submit the final session title, summary and outline by June 30.

Register Now

Please sign up to present or register to attend MERL Tech DC 2018 to examine these trends with an exciting mix of educational keynotes, lightning talks, and group breakouts, including an evening reception and Fail Fest to foster needed networking across sectors and an exploration of how we can learn from our mistakes.

We are charging a modest fee to better allocate seats and we expect to sell out quickly again this year, so buy your tickets or demo tables now. Event proceeds will be used to cover event costs and to offer travel stipends for select participants implementing MERL Tech activities in developing countries.

You can also submit session ideas for MERL Tech Jozi, coming up on August 1-2, 2018! Those are due on March 31st, 2018!

Please Submit Session Ideas for MERL Tech Jozi

We’re thrilled to announce that we’re organizing MERL Tech Jozi for August 2018!

Please submit your session ideas or reserve your demo table now, to explore what’s happening with innovation, digital data, and new technologies across the monitoring, evaluation, research, and learning (MERL) fields.

MERL Tech Jozi will be in Johannesburg, South Africa, August 1-2, 2018!

At MERL Tech Jozi, we’ll build on earlier MERL Tech conferences in DC and London, engaging 100 practitioners from across the development and technology ecosystems for a two-day conference seeking to turn theories of MERL technology into effective practices that deliver real insight and learning in our sector.

MERL Tech is a lively, interactive, community-driven conference.  We’re actively seeking a diverse set of practitioners in monitoring, evaluation, research, learning, program implementation, management, data science, and technology to lead every session.

Submit your session ideas now.

We’re looking for sessions that focus on:

  • Discussions around good practice and evidence-based review
  • Innovative MERL approaches that incorporate technology
  • Future-focused, thought-provoking ideas and examples
  • Conversations about ethics, inclusion, and responsible policy and practice in MERL Tech
  • Exploration of complex MERL Tech challenges and emerging good practice
  • Workshop sessions with practical, hands-on exercises and approaches
  • Lightning Talks to showcase new ideas or to share focused results and learning
Submission Deadline: Saturday, March 31, 2018.

Session submissions are reviewed and selected by our steering committee. Presenters and session leads will have priority access to MERL Tech tickets. We will notify you in late April whether your session idea was selected and, if selected, you will be asked to submit the final session title, summary and detailed session outline by June 1st, 2018.

If you’d prefer to showcase your technology tool or platform to MERL Tech participants, you can reserve your demo table here.

MERL Tech is dedicated to creating a safe, inclusive, welcoming and harassment-free experience for everyone through our Code of Conduct.

MERL Tech Jozi is organized by Kurante and supported by the following sponsors. Contact Linda Raftree if you’d like to be a sponsor of MERL Tech Jozi too.