Tech Is Easy, People Are Hard: Behavioral Design Considerations to Improve Mobile Engagement

By Cathy Richards

Mobile platforms are often a go-to when it comes to monitoring and evaluation in developing communities and markets. One provider of these platforms, Echo Mobile, is often asked, “What sort of response rate can I expect for my SMS survey?” or, “What percentage of my audience will engage with my IVR initiative?” In this session at MERL Tech DC in September, Boris Maguire, CEO of Echo Mobile, walked participants through case studies highlighting that the answer largely depends on a project’s individual context and that there is ultimately no one-size-fits-all solution.

Echo Mobile is a platform that allows users to have powerful conversations over SMS, voice, and USSD for purposes such as monitoring and evaluation, field reporting, feedback, information access, market research and customer service. The platform’s user segments include consumer goods (20%), education and health (16%), M&E/Research (15%), agriculture and conservation (14%), finance and consulting (13%) and media and advocacy (7%). Its user types are primarily business (35%), non-profit (31%) and social enterprises (29%). 

The team at Echo Mobile has learned that regardless of the chosen mobile engagement technology, achieving MERL goals often rests on the design and psychology behind the mobile engagement strategy – the content, tone, language, and timing of communications and the underlying incentives of the audience. More often than not, the most difficult parts of mobile engagement are the human aspects (psychological, emotional, strategic) rather than the technological implementation.

Because of this, Echo Mobile chose to dive deeper into the factors they believed influenced mobile engagement the most. Their working hypotheses included:

  • Responder characteristics: Who are you trying to engage? Identify your audience and tailor your strategy to them.
  • Social capital and trust: Do these responders have a reason to trust you? What is the nature of your relationship with them?
  • Style, tone & content: What specific words are you using to engage with them? Are you showing that you want to know more and that you care about them?
  • Convenience: What is the level of effort, time and resources that responders have to invest in order to engage with your organization?
  • Incentives/relevance: Do they have a reason to engage with your organization? Do they think you’ll understand them better? Will they get more of what they need?

Through informal analysis, Echo Mobile found that the factor most highly correlated with high engagement rates is the time of day at which recipients receive the messaging, followed by reminders to engage. Financial incentives were found to be the least effective. However, the case studies show that context is ultimately the most important element of any mobile engagement strategy.

In the first case study, a BBOXX team in Rwanda sought to understand the welfare impact of solar consumption amongst their customers via SMS surveys. They first ran a set of small experiments, modifying survey financial incentives, timing, length, and language to see which moved the needle on response rates, and compared the results to what customers told them in focus groups. Echo Mobile found that sending reminders in the morning and surveys in the evening nearly doubled response rates. Whether respondents chose to opt in or were taken straight into the survey also affected response rates.
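For teams looking to replicate this kind of experimentation, the analysis behind such comparisons can be quite simple. The sketch below is not from the session; it assumes a hypothetical log of survey invitations (with made-up column names such as variant and responded) and shows one way to compare response rates across experimental conditions with pandas.

```python
# Hypothetical sketch: comparing SMS survey response rates across
# experimental variants (e.g., reminder timing or incentive amount).
# Column names and figures are illustrative, not Echo Mobile's data.
import pandas as pd

# One row per survey invitation: the variant the recipient was assigned to,
# and whether they completed the survey (1 = responded, 0 = did not).
sends = pd.DataFrame({
    "variant": [
        "morning_reminder", "morning_reminder", "morning_reminder",
        "evening_reminder", "evening_reminder", "evening_reminder",
    ],
    "responded": [1, 0, 0, 1, 1, 0],
})

# Response rate and sample size for each variant, highest rate first.
summary = (
    sends.groupby("variant")["responded"]
         .agg(response_rate="mean", n="count")
         .sort_values("response_rate", ascending=False)
)
print(summary)
```

In practice, a team would run this over its real invitation log and could add a simple significance test before rolling the winning design out to the full audience.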

In the second case study, a UN agency nearly doubled SMS engagement rates from 40,000 Kenyan teachers by dropping financial incentives and tweaking the structure, tone, and content of their messaging. Once again, incentive amounts did little to increase engagement; instead, the choice to opt in or dive straight in, reminders, and content and tone made the biggest difference.

In short, Echo Mobile’s biggest takeaways are that:

  • Convenience is king
  • One can harass but not bore
  • Financial incentives are often overrated

Several participants also shared their experiences with mobile engagement and cited factors such as survey length and consent as important. 

5 Insights from MERL Tech 2015

By Katherine Haugh, a visual note-taker who summarizes content in a visually simple manner while preserving the complexity of the subject matter. Originally published on Katherine’s blog on October 20, 2015, and on ICT Works on January 18, 2016.

Recently, I had the opportunity to participate in the 2015 MERL Tech conference that brought together over 260 people from 157 different organizations. I joined the conference as a “visual note-taker,” and I documented the lightning talks, luncheon discussions, and breakout sessions with a mix of infographics, symbols and text.

Experiencing several “a-ha” moments myself, I thought it would be helpful to go a step further than just documenting what was covered and add some insights of my own. Five clear themes stood out to me: 1) there is such a thing as “too much data”; 2) “lessons learned” is like a song on repeat; 3) humans > computers; 4) sharing is caring; and 5) social impact investment is crucial.

1) There is such a thing as “too much data.”

MERLTech 2015 began with a presentation by Ben Ramalingham, who explained that “big data is like teenage sex. No one knows how to do it and everyone thinks that everyone else is doing it.” The line was the most widely tweeted quote of the conference, eliciting a lot of laughter and nods of approval, and Ben’s underlying point was well received by the audience. The fervor for collecting more and more data has, ironically, been limiting the ability of organizations to meaningfully understand their data and carry out data-driven decision-making.

Additionally, I attended the breakout session on “data minimalism” with Vanessa Corlazzoli, Monalisa Salib (from USAID LEARN), and Teresa Crawford, which further emphasized this point.

The session covered the ways that we can identify key learning questions and pinpoint need-to-have data (not nice-to-have data) to be able to answer those questions. [What this looks like in practice: a survey with only five questions. Yes, just five questions.] This approach to data collection reinforces the need to think critically at each step about what is absolutely necessary, as opposed to collecting as much as possible and then thinking about what is “usable” later.

2) “Lessons learned” is like a song on repeat.

Similar to a popular song, the term “lessons learned” has been on repeat for many M&E practitioners (including myself). How many reports have we seen that conclude with lessons learned that are never actually learned? Having concluded my own capstone project with a set of “lessons learned,” I am at fault for this as well. In her lightning talk on “Lessons Not Learned in MERL,” Susan Davis explained that, “while it’s OK to re-invent the wheel, it’s not OK to re-invent a flat tire.”

It seems that we are learning the same “lessons” over and over again in the M&E-tech field and never implementing or adapting in accordance with those lessons. Susan suggested we retire the “MERL” acronym and update to “MERLA” (monitoring, evaluation, research, learning and adaptation).

How do we bridge the gap between M&E findings and organizational decision-making? Dave Algoso has some answers. (In fact, just to get a little meta here: Dave Algoso wrote about “lessons not learned” last year at M&E Tech, and now we’re learning about “lessons not learned” again at MERLTech 2015. Just some food for thought.) A tip from Susan for not re-inventing a flat tire: find other practitioners who have done similar work and look over their “lessons learned” before writing your own. Stay tuned for more on this at FailFest 2015 in December!

3) Humans > computers.

Who would have thought that at a tech-related conference, a theme would be the need for more human control and insight? Not me, that’s for sure! A funny aside: I have (for a very long time) been fearful that the plot of the Will Smith movie “I, Robot” would become a reality. I now feel slightly more assured that this won’t happen, given that there was a consensus at this conference and others on the need for humans in the M&E process (and in the world). As Ben Ramalingham so eloquently explained, “you can’t use technology to substitute humans; use technology to understand humans.”

4) Sharing is caring.

Circling back to the “lessons learned on repeat” point, “sharing is caring” is definitely one we’ve heard before. Jacob Korenblum emphasized the need for more sharing in the M&E field and suggested three mechanisms for publicizing M&E results: 1) understanding the existing ecosystem (e.g., the choice between using WhatsApp in Jordan versus Malawi); 2) building feedback loops directly into M&E design; and 3) creating and tracking indicators related to sharing. Dave Algoso also expands on this concept in TechChange’s TC111 course on Technology for Monitoring and Evaluation, where he explains that bridging the gaps between the different levels of learning (individual, organizational, and sectoral) is necessary for building the overall knowledge of the field, which spans beyond the scope of a singular project.

5) Social impact investment is crucial.

I’ve heard this at other conferences I’ve attended, like the Millennial Action Project’s Congressional Summit on Next Generation Leadership. As a panelist on “The Future of MERLTech: A Donor View,” Nancy McPherson got right down to business: she addressed the elephant in the room by asking “who the data is really for” and “what projects are really about.” Nancy emphasized the need for role reversal if we as practitioners and researchers are genuine in our pursuit of “locally-led initiatives.” I couldn’t agree more. In addition to explaining that social impact investing is the new frontier for donors in this space, she also gave a brief synopsis of trends in the evaluation field (a topic that my brilliant colleague Deborah Grodzicki and I will be expanding on; stay tuned!).