When seeking information for a project baseline, midline, endline, or anything in between, it has become second nature to budget for collecting (or commissioning) primary data ourselves.
Really, it would be more cost- and time-effective for all involved if we got better at asking peers in the space for already-existing reports or datasets. This is also an area where our donors – particularly those with large country portfolios – could help with introductions and matchmaking.
Consider the Public Option
And speaking of donors, a second point: why are we implementers responsible for collecting MERL-relevant data in the first place?
For example, one DFID Country Office we worked with noted that a lack of solid population and demographic data limited their ability to monitor all DFID country programming. As a result, DFID decided to co-fund the country’s first census in 30 years – which benefited DFID and non-DFID programs.
The term “country systems” can sound a bit esoteric and rather OECD-like, but such systems really can be a cost-effective public good if properly resourced by governments (or donor agencies) and made available.
Flip the Paradigm
And finally, a third way to get more bang for our buck is – ready or not – Results Based Financing, or RBF. RBF is coming (and, for folks in health, it’s probably arrived). In an RBF program, payment is made only when pre-determined results have been achieved and verified.
But another way to think about RBF is as an extreme paradigm shift of putting M&E first in program design. RBF may be the shake-up we need, in order to move from monitoring what already happened, to monitoring events in real-time. And in some cases – based on evidence from World Bank and other programming – RBF can also incentivize data sharing and investment in country systems.
Ultimately, the goal of MERL should be using data to improve decisions today. Through better sharing, systems thinking, and (maybe) a paradigm shake-up, we stand to gain a lot more mileage with our 3%.
We’ve been working hard over the past several weeks to finish up the agenda for MERL Tech London 2018, and it’s now ready!
We’ve got workshops, panels, discussions, case studies, lightning talks, demos, community building, socializing, and an evening reception with a Fail Fest!
Topics range from mobile data collection, to organizational capacity, to learning and good practice for information systems, to data science approaches, to qualitative methods using mobile ethnography and video, to biometrics and blockchain, to data ethics and privacy and more.
You can search the agenda to find the topics, themes and tools that are most interesting, identify sessions that are most relevant to your organization’s size and approach, pick the session methodologies that you prefer (some of us like participatory and some of us like listening), and learn more about the different speakers and facilitators and their work.
Tickets are going fast, so be sure to snap yours up before it’s too late! (Register here!)
In his MERL Tech DC session on Google Forms, Samhir Vesdev from IREX led a hands-on workshop on Google Forms and laid out some of the software’s capabilities and limitations. Much of the session focused on Google Forms’ central concepts and the practicality of building a form.
At its most fundamental level, a form is made up of several sections, and each section is designed to contain a question or prompt. The centerpiece of a section is the question cell, which holds the question text. Next to the question cell is a drop-down menu that lets you select the format of the question, ranging from multiple choice to short answer.
At the bottom right-hand corner of the section you will find three dots arranged vertically. Clicking this toggle opens a drop-down menu whose options vary depending on the format of the question. One common option is to include a few lines of description, which is useful if the question needs further elaboration or instruction. Another is the data validation option, which restricts the kind of text that a respondent can input. This is useful when, for example, the question is in a short-answer format but the form administrators need the responses limited to numerals for the sake of analysis.
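Where validation was not set up in the form itself, a similar check can be run after the fact on the exported responses. Here is a minimal pandas sketch (the column names and values are made up for illustration) that flags short-answer responses that are not numerals:

```python
import pandas as pd

# Toy responses standing in for a form export; the column names are invented.
responses = pd.DataFrame({
    "respondent_id": [1, 2, 3, 4],
    "household_size": ["4", "five", "3", "12"],
})

# Coerce the short-answer field to numeric; values that would fail numeric validation become NaN.
responses["household_size_num"] = pd.to_numeric(responses["household_size"], errors="coerce")

# Flag the rows that Google Forms' numeric validation would have rejected at entry time.
invalid = responses[responses["household_size_num"].isna()]
print(invalid[["respondent_id", "household_size"]])
```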
The session also covered functions available in the “Responses” tab, which sits at the top of the page. Here one can find a toggle labeled “accepting responses” that can be turned on or off depending on whether the form should currently accept submissions.

Additionally, in the top right corner of this tab there are three dots arranged vertically; this is the options menu for the tab. Here you will find options such as enabling email notifications for each new response, which is useful if you want to be alerted whenever someone responds to the form. Also in this drop-down, you can click “Select response destination” to link the Google Form with Google Sheets, which simplifies later analysis. The green Sheets icon next to the options drop-down will take you to the sheet that contains the collected data.
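Once the form is linked to a Sheet, the responses can also be pulled straight into an analysis script. Below is a minimal sketch using the open-source gspread library; the credentials file path and spreadsheet key are placeholders, and your column headers will match your question text:

```python
import gspread
import pandas as pd

# Authenticate with a Google service account that has access to the linked spreadsheet.
gc = gspread.service_account(filename="service_account.json")  # placeholder path
sheet = gc.open_by_key("YOUR_SPREADSHEET_KEY")                  # placeholder key

# Form responses land in the first worksheet, with question text as the column headers.
worksheet = sheet.sheet1
records = worksheet.get_all_records()

df = pd.DataFrame(records)
print(df.head())                   # inspect the most recent responses
print(len(df), "responses so far")
```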
Other capabilities in Google Forms include the option of changing the color scheme, which you can access by clicking the palette icon at the top of the screen. Also, by clicking the settings button at the top of the screen you can limit respondents to a single submission, which restricts people’s ability to skew the data by submitting multiple responses, or you can enable response editing after submission so that respondents can go back and correct their answers.
Branching is another important tool in Google Forms. It is useful when you want a particular response to a question (say, a multiple-choice question) to lead the respondent to a related follow-up question only if they answer in a certain way.

For example, suppose one section asks “Did you like the workshop?” with the answer options “yes” and “no,” and you only want to ask what respondents didn’t like if they answer “no.” You can design the form to take the respondent to a section with the question “What didn’t you like about the workshop?” only when they answer “no,” and then bring them back to the main workflow after they’ve answered this additional question.

To do this, create at least two new sections (by clicking “Add section” in the small menu to the right of the sections), one for each path a person’s response can lead them down. Then, in the options menu on the lower right-hand side, select “Go to section based on answer” and, using the menu that appears, set the path that you want.
These are just some of the tools that Google Forms offers, but with these alone it is possible to build an effective form to collect the data you need. Samhir ended with a word of caution: Google has been known to shut down popular apps, so you should be wary of building an organizational strategy around Google Forms.
by Roger Nathanial Ashby, Co-Founder & Principal Consultant, OpenWise.
The universe of MERL Tech solutions has grown exponentially. In 2008 monitoring and evaluating tech within global development could mostly be confined to mobile data collection tools like Open Data Kit (ODK), and Excel spreadsheets to analyze and visualize survey data. In the intervening decade a myriad of tools, companies and NGOs have been created to advance the efficiency and effectiveness of monitoring, evaluation, research and learning (MERL) through the use of technology. Whether it’s M&E platforms or suites, satellite imagery, remote sensors, or chatbots, new innovations are being deployed every day in the field.
However, how do we evaluate the impact when MERL Tech is the intervention itself? That was the question and task put to participants of the “M&E Squared” workshop at MERL Tech 2017.
Workshop participants were separated into three groups that were each given a case study to discuss and analyze. One group was given a case about improving the learning efficiency of health workers in Liberia through the mHero Health Information System (HIS). The system was deployed as a possible remedy to some of the information communication challenges identified during the 2014 West African Ebola outbreak. A second group was given a case about the use of RapidPro to remind women to attend antenatal care (ANC) for preventive malaria medicine in Guinea. The USAID StopPalu project goal was to improve the health of infants by increasing the percent of women attending ANC visits. The final group was given a case about using remote images to assist East African pastoralists. The Satellite Assisted Pastoral Resource Management System (SAPARM) informs pastoralists of vegetation through remote sensing imagery so they can make better decisions about migrating their livestock.
After familiarizing ourselves with the particulars of the case studies, each group pondered a series of questions and was then tasked with presenting its findings to all participants. Some of the issues under discussion included:
(1) “How would you assess your MERL Tech’s relevance?”
(2) “How would you evaluate the effectiveness of your MERL Tech?”
(3) “How would you measure efficiency?” and
(4) “How will you assess sustainability?”
Each group came up with some innovative answers to the questions posed and our facilitators and session leads (Alexandra Robinson & Sutyajeet Soneja from USAID and Molly Chen from RTI) will soon synthesize the workshop findings and notes into a concise written brief for the MERL Tech community.
Relevance – The extent to which the technology choice is appropriately suited to the priorities and capacities of the context of the target group or organization.
Effectiveness – A measure of the extent to which an information and communication channel, technology tool, technology platform, or a combination of these attains its objectives.
Efficiency – Measure of the outputs (qualitative and quantitative) in relation to the inputs.
Impact – The positive and negative changes produced by the introduction of a technology, or by a change in a technology tool or platform, on the overall development intervention (directly or indirectly; intended or unintended).
Sustainability – Measure of whether the benefits of a technology tool or platform are likely to continue after donor funding has been withdrawn.
Coherence – The extent to which the technology relates to the broader policy context (development, market, communication networks, data standards & interoperability mandates, and national & international law) within which it was developed and implemented.
While it’s unfortunate that SIMLab stopped most operations in early September 2017, their exceptional work in this and other areas lives on and you can access the full framework here.
I learned a great deal in this session from the facilitators and my colleagues attending the workshop. I would encourage everyone in the MERL Tech community to take the ideas generated during this workshop and the great work done by SIMLab into their development practice. We certainly intend to integrate much of these insights into our work at OpenWise. Read more about “The Evidence Agenda” here on SIMLab’s blog.
The rapid growth of Artificial Intelligence—computers behaving like humans, and performing tasks which people usually carry out—promises to transform everything from car travel to personal finance. But how will it affect the equally vital field of M&E? As evaluators, most of us hate paper-based data collection—and we know that automation can help us process data more efficiently. At the same time, we’re afraid to remove the human element from monitoring and evaluation: What if the machines screw up?
Over the past year, Souktel has worked on three areas of AI-related M&E, to determine where new technology can best support project appraisals. Here are our key takeaways on what works, what doesn’t, and what might be possible down the road.
Natural Language Processing
For anyone who’s sifted through thousands of Excel entries, natural language processing sounds like a silver bullet: This application of AI interprets text responses rapidly, often matching them against existing data sets to find trends. No need for humans to review each entry by hand! But currently, it has two main limitations: First, natural language processing works best for sentences with simple syntax. Throw in more complex phrases, or longer text strings, and the power of AI to grasp open-ended responses goes downhill. Second, natural language processing only works for a limited number of (mostly European) languages—at least for now. English and Spanish AI applications? Yes. Chichewa or Pashto M&E bots? Not yet. Given these constraints, we’ve found that AI apps are strongest at interpreting basic misspelled answer text during mobile data collection campaigns (in languages like English or French). They’re less good at categorizing open-ended responses by qualitative category (positive, negative, neutral). Yet despite these limitations, AI can still help evaluators save time.
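As an illustration only (not the toolchain Souktel used), the snippet below uses the open-source TextBlob library to correct misspelled English answers and attach a rough positive/negative/neutral label. The sample answers are invented, and TextBlob’s corpora must be downloaded once with `python -m textblob.download_corpora`:

```python
from textblob import TextBlob

# Invented survey answers with typos, standing in for open-ended mobile responses.
answers = [
    "The traning was very helpfull",
    "I did not recieve the SMS remindr",
]

for raw in answers:
    corrected = TextBlob(raw).correct()      # basic spelling correction
    polarity = corrected.sentiment.polarity  # ranges from -1 (negative) to +1 (positive)
    label = "positive" if polarity > 0.1 else "negative" if polarity < -0.1 else "neutral"
    print(f"{raw!r} -> {corrected} [{label}]")
```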
Image Recognition

AI does a decent job of telling objects apart; we’ve leveraged this to build mobile applications which track supply delivery more quickly & cheaply. If a field staff member submits a photo of syringes and a photo of bandages from their mobile, we don’t need a human to check “syringes” and “bandages” off a list of delivered items. The AI-based app will do that automatically—saving huge amounts of time and expense, especially during crisis events. Still, there are limitations here too: While AI apps can distinguish between a needle and a Band-Aid, they can’t yet tell us whether the needle is broken, or whether the Band-Aid is the exact same one we shipped. These constraints need to be considered carefully when using AI for inventory monitoring.
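As a rough sketch of the idea (not Souktel’s actual application), the code below runs a pretrained ImageNet classifier over a delivery photo. The file path is a placeholder, and a real inventory system would need a model fine-tuned on the specific supply items being tracked:

```python
import numpy as np
from tensorflow.keras.applications.mobilenet_v2 import (
    MobileNetV2, preprocess_input, decode_predictions,
)
from tensorflow.keras.preprocessing import image

model = MobileNetV2(weights="imagenet")  # off-the-shelf classifier, downloaded on first use

# Load and preprocess a single field photo (placeholder file name).
img = image.load_img("delivery_photo.jpg", target_size=(224, 224))
x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))

# Print the top three predicted labels; the ImageNet label set happens to include
# classes such as 'syringe' and 'Band_Aid', which is why this kind of differentiation works.
preds = model.predict(x)
for _, label, score in decode_predictions(preds, top=3)[0]:
    print(f"{label}: {score:.2f}")
```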
Comparative Facial Recognition
This may be the most exciting—and controversial—application of AI. The potential is huge: “Qualitative evaluation” takes on a whole new meaning when facial expressions can be captured by cameras on mobile devices. On a more basic level, we’ve been focusing on solutions for better attendance tracking: AI is fairly good at determining whether the people in a photo at Time A are the same people in a photo at Time B. Snap a group pic at the end of each community meeting or training, and you can track longitudinal participation automatically. Take a photo of a larger crowd, and you can rapidly estimate the number of attendees at an event.
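Here is a minimal sketch of that attendance idea, using the open-source face_recognition library rather than any particular commercial service; the file names and the matching tolerance are assumptions for illustration:

```python
import face_recognition

# Two group photos from the same training, taken a week apart (placeholder file names).
photo_a = face_recognition.load_image_file("meeting_week1.jpg")
photo_b = face_recognition.load_image_file("meeting_week2.jpg")

# One 128-dimensional encoding per detected face.
encodings_a = face_recognition.face_encodings(photo_a)
encodings_b = face_recognition.face_encodings(photo_b)

print(f"Estimated attendance: week 1 = {len(encodings_a)}, week 2 = {len(encodings_b)}")

# Count week-1 faces that appear to recur in the week-2 photo.
returning = sum(
    any(face_recognition.compare_faces(encodings_b, enc, tolerance=0.6))
    for enc in encodings_a
)
print(f"Returning participants (approximate): {returning}")
```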
However, AI applications in this field have been notoriously bad at recognizing diversity—possibly because they draw on databases of existing images, and most of those images contain…white men. New MIT research has suggested that “since a majority of the photos used to train [AI applications] contain few minorities, [they] often have trouble picking out those minority faces”. For the communities where many of us work (and come from), that’s a major problem.
Do’s and Don’ts
So, how should M&E experts navigate this imperfect world? Our work has yielded a few “quick wins”—areas where Artificial Intelligence can definitely make our lives easier: Tagging and sorting quantitative data (or basic open-ended text), simple differentiation between images and objects, and broad-based identification of people and groups. These applications, by themselves, can be game-changers for our work as evaluators—despite their drawbacks. And as AI keeps evolving, its relevance to M&E will likely grow as well. We may never reach the era of robot focus group facilitators—but if robo-assistants help us process our focus group data more quickly, we won’t be complaining.
by Alvaro Cobo-Santillan, Catholic Relief Services (CRS); Jeff Lundberg, CRS; Paul Perrin, University of Notre Dame; and Gillian Kerr, LogicalOutcomes Canada.
In 2017, with all of us holding a mini-computer at all hours of the day and night, it’s probably not too hard to imagine that “a teenager in Africa today has access to more information than the President of the United States had 15 years ago.” So it also stands to reason that the need to appropriately and ethically grapple with the use of that immense amount of information has grown proportionately.
What do we mean when we say that the world of development data – particularly evaluation data – is murky? A major factor in this sentiment is the ambiguous distinction between research and evaluation data.
“Research seeks to prove; evaluation seeks to improve.” – CDC
“Research studies involving human subjects require IRB review. Evaluative studies and activities do not.”
This has led to debates as to the actual relationship between research and evaluation. Some see them as related, but separate activities, others see evaluation as a subset of research, and still others might posit that research is a specific case of evaluation.
Regardless, though the motivations of the two may differ, research and evaluation often look the same in terms of their stakeholders, participants, and methods.
If that statement is true, then we must hold both to similar protections!
What are some ways to make the waters less murky?
Deeper commitment to informed consent
Reasoned use of identifiers
Need to know vs. nice to know
Data security and privacy protocols
Data use agreements and protocols for outside parties
Revisit NGO primary and secondary data IRB requirements
Alright then, what can we practically do within our individual agencies to move the needle on data protection?
In short, governance. Responsible data is absolutely a cross-cutting responsibility, but it can be championed primarily through close partnerships between the M&E and IT departments.
Think about ways to increase usage of digital M&E – this can ease the implementation of responsible data practices
Can existing agency processes and resources be leveraged?
Plan and expect to implement gradual behavior change and capacity building as a pre-requisite for a sustainable implementation of responsible data protections
Take an iterative approach: gradually introduce guidelines, tools and training materials
Plan for business and technical support structures to support protections
Is anyone doing any of the practical things you’ve mentioned?
Yes! Gillian Kerr from LogicalOutcomes spoke about highlights from an M&E system her company is launching to provide examples of the type of privacy and security protections they are doing in practice.
As a basis for the mindset behind their work, she presented a fascinating and simple comparison of high-risk vs. low-risk personal information: year of birth, gender, and 3-digit zip code together are unique for just 0.04% of US residents, but if we instead include a 5-digit zip code, over 50% of US residents could be uniquely identified. Yikes.
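The underlying check is straightforward to run on your own data. Below is a minimal pandas sketch (with a handful of fabricated rows) that computes how many records are unique on a given set of quasi-identifiers; the column names are assumptions:

```python
import pandas as pd

# Fabricated records, standing in for a de-identified client dataset.
df = pd.DataFrame({
    "birth_year": [1984, 1984, 1990, 1990, 1975],
    "gender":     ["F", "F", "M", "M", "F"],
    "zip3":       ["208", "208", "606", "606", "941"],
    "zip5":       ["20850", "20852", "60614", "60614", "94110"],
})

def pct_unique(data, quasi_identifiers):
    """Share of records that are the only member of their quasi-identifier group."""
    group_sizes = data.groupby(quasi_identifiers)[quasi_identifiers[0]].transform("size")
    return (group_sizes == 1).mean() * 100

print(pct_unique(df, ["birth_year", "gender", "zip3"]))   # coarser geography, lower risk
print(pct_unique(df, ["birth_year", "gender", "zip5"]))   # finer geography, higher risk
```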
In that vein, they are not collecting names or identification numbers, and they collect only year of birth (not month or day). They aim for minimal sensitive data, defining data elements by level of risk to the client (e.g., city of residence – low, glucose level – medium, HIV status – high).
In addition, they ask for permission not only in the original agency permission form, but also in each survey. Their technical system maintains two instances – one containing individual-level personal information with tight permissions even for administrators, and another with aggregated data, with attention to small cell sizes. Other security measures, such as multi-factor authentication and encryption, and critical governance measures, such as regular audits, are also in place.
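One common way to protect an aggregated instance is to suppress any cell below a minimum size before it is shared. Here is a minimal sketch of that idea; the threshold and toy data are assumptions, not LogicalOutcomes’ actual rules:

```python
import pandas as pd

MIN_CELL_SIZE = 5  # assumed suppression threshold; actual policies vary

# Fabricated individual-level records kept in the restricted instance.
individual = pd.DataFrame({
    "city":   ["Accra"] * 12 + ["Kumasi"] * 7 + ["Tamale"] * 2,
    "gender": ["F", "M"] * 10 + ["F"],
})

# Aggregate counts that would go into the shared instance.
counts = individual.groupby(["city", "gender"]).size().reset_index(name="n")

# Mask any cell smaller than the threshold so individuals cannot be singled out.
counts["n"] = counts["n"].where(counts["n"] >= MIN_CELL_SIZE)
print(counts)
```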
It goes without saying that we collectively have ethical responsibilities to protect personal information about vulnerable people – here are final takeaways:
If you can’t protect sensitive information, don’t collect it.
If you can’t keep up with current security practices, outsource your M&E systems to someone who can.
Your technology roadmap should aspire to give control of personal information to the people who provide it (a substantial undertaking).
In the meantime, be more transparent about how data is being stored and shared.
What happens when evaluators trying to build bridges with new private sector actors meet real social entrepreneurs? A new appreciation for the dynamic “World of ICT Social Entrepreneurs (WISE)” and the challenges they face in marketing, pricing, and financing (not to mention measurement of social impact).
During this MERL Tech session on WISE, Dale Hill, evaluation consultant, presented grant-funded research on measuring the social impact of social entrepreneurship ventures (SEVs) from three perspectives. She then invited five ICT company CEOs to comment.
The three perspectives are:
the public: How to hold companies accountable, particularly if they have chosen to be legal or certified “benefit corporations”?
the social entrepreneurs, who are plenty occupied trying to reach financial sustainability or profit goals, while also serving the public good; and
evaluators, who see the important influence of these new actors, but know their professional tools need adaptation to capture their impact.
Dale’s introduction covered overlapping definitions of various categories of SEVs, including legally defined “benefit corporations” and “B Corps”, which are intertwined with the options of certification available to social entrepreneurs. The “new middle” of SEVs sits on a spectrum between for-profit companies on one end and not-for-profit organizations on the other. Various types of funders, including social impact investors, new certification agencies, and monitoring and evaluation (M&E) professionals, are now interested in measuring the growing social impact of these enterprises. A show of hands revealed that representatives of most of these types of actors were present at the session.
The five social entrepreneur panelists all had ICT businesses with global reach, but they varied in legal and certification status and the number of years operating (1 to 11). All aimed to deploy new technologies to non-profit organizations or social sector agencies on high value, low price terms. Some had worked in non-profits in the past and hoped that venture capital rather than grant funding would prove easier to obtain. Others had worked for Government and observed the need for customized solutions, which required market incentives to fully develop.
The evaluator and CEO panelists’ identification of challenges converged in some cases:
maintaining affordability and quality when using market pricing
obtaining venture capital or other financing
worry over “mission drift” – if financial sustainability imperatives or shareholder profit maximization preferences prevail over founders’ social impact goals; and
the still-present digital divide when serving global customers (insufficient bandwidth, affordability issues, and limited small-business capital in some client countries).
New issues raised by the CEOs (and some social entrepreneurs in the audience) included:
the need to provide incentives to customers to use quality assurance or security features of software, to avoid falling short of achieving the SEV’s “public good” goals;
the possibility of hostile takeover, given high value of technological innovations;
the fact that mention of a “social impact goal” was a red flag to some funders who then went elsewhere to seek profit maximization.
There was also a rich discussion on the benefits and costs of obtaining certification: it was a useful “branding and market signal” to some consumers, but a negative one to some funders; also, it posed an added burden on managers to document and report social impact, sometimes according to guidelines not in line with their preferences.
a) Despite the “hype”, social impact investment funding proved elusive to the panelists. Options for them included: sliding scale pricing; establishment of a complementary for-profit arm; or debt financing;
b) Many firms were not yet implementing planned monitoring and evaluation (M&E) programs, despite M&E being one of their service offerings; and
c) The legislation on reporting social impact of benefit corporations among the 31 states varies considerably, and the degree of enforcement is not clear.
A conclusion for evaluators: Social entrepreneurs’ use of market solutions indeed provides an evolving, dynamic environment which poses more complex challenges for measuring social impact, and requires new criteria and tools, ideally timed with an understanding of market ups and downs, and developed with full participation of the business managers.
Building on MERL Tech London 2017, we will engage 200 practitioners from across the development and technology ecosystems for a two-day conference seeking to turn the theories of MERL technology into effective practice that delivers real insight and learning in our sector.
MERL Tech London 2018
Digital data and new media and information technologies are changing MERL practices. The past five years have seen technology-enabled MERL growing by leaps and bounds, including:
Adaptive management and ‘developmental evaluation’
Faster, higher quality data collection
Remote data gathering through sensors and self-reporting by mobile
Big Data and social media analytics
Alongside these new initiatives, we are seeing increasing documentation and assessment of technology-enabled MERL initiatives. Good practice guidelines and new frameworks are emerging and agency-level efforts are making new initiatives easier to start, build on and improve.
The swarm of ethical questions related to these new methods and approaches has spurred greater attention to areas such as responsible data practice and the development of policies, guidelines and minimum ethical frameworks and standards for digital data.
Like previous conferences, MERL Tech London will be a highly participatory, community-driven event and we’re actively seeking practitioners in monitoring, evaluation, research, learning, data science and technology to facilitate every session.
Innovations: Brand new, untested technologies or approaches and their application to MERL(Tech)
Debates: Lively discussions, big picture conundrums, thorny questions, contentious topics related to MERL Tech
Management: People, organizations, partners, capacity strengthening, adaptive management, change processes related to MERL Tech
Evaluating MERL Tech: comparisons or learnings about MERL Tech tools/approaches and technology in development processes
Failures: What hasn’t worked and why, and what can be learned from this?
Demo Tables: to share MERL Tech approaches, tools, and technologies
Other topics we may have missed!
Session Submission Deadline: Friday, November 10, 2017.
Session leads receive priority for the available seats at MERL Tech and a discounted registration fee. You will hear back from us in early December and, if selected, you will be asked to submit an updated and final session title, summary and outline by Friday, January 19th, 2018.
Please register to attend, or reserve a demo table for MERL Tech London 2018 to examine these trends with an exciting mix of educational keynotes, lightning talks, and group breakouts, including an evening Fail Festival reception to foster needed networking across sectors.
We are charging a modest fee to better allocate seats and we expect to sell out quickly again this year, so buy your tickets or demo tables now. Event proceeds will be used to cover event costs and to offer travel stipends for select participants implementing MERL Tech activities in developing countries.
by Maliha Khan, a development practitioner in the fields of design, measurement, evaluation and learning, who led the Maturity Model sessions at MERL Tech DC; and Linda Raftree, independent consultant and lead organizer of MERL Tech.
MERL Tech is a platform for discussion, learning and collaboration around the intersection of digital technology and Monitoring, Evaluation, Research, and Learning (MERL) in the humanitarian and international development fields. The MERL Tech network is multidisciplinary and includes researchers, evaluators, development practitioners, aid workers, technology developers, data analysts and data scientists, funders, and other key stakeholders.
One key goal of the MERL Tech conference and platform is to bring people from diverse backgrounds and practices together to learn from each other and to coalesce MERL Tech into a more cohesive field in its own right — a field that draws from the experiences and expertise of these various disciplines. MERL Tech tends to bring together six broad communities:
traditional M&E practitioners, who are interested in technology as a tool to help them do their work faster and better;
development practitioners, who are running ICT4D programs and beginning to pay more attention to the digital data produced by these tools and platforms;
business development and strategy leads in organizations who want to focus more on impact and keep their organizations up to speed with the field;
tech people who are interested in the application of newly developed digital tools, platforms and services to the field of development, but may lack knowledge of the context and nuance of that application;
data people, who are focused on data analytics, big data, and predictive analytics, but similarly may lack a full grasp of the intricacies of the development field; and
donors and funders who are interested in technology, impact measurement, and innovation.
Since our first series of Technology Salons on ICT and M&E in 2012 and the first MERL Tech conference in 2014, the aim has been to create stronger bridges between these diverse groups and to encourage the formation of a new field with an identity of its own. In other words, we want to move people beyond identifying as, say, an “evaluator who sometimes uses technology,” and towards identifying as members of the MERL Tech space (or field or discipline), with a clearer understanding of how these various elements work together and play off one another, and how they influence (and are influenced by) the shifts and changes happening in the wider ecosystem of international development.
By building and strengthening these divergent interests and disciplines into a field of their own, we hope that the community of practitioners can begin to better understand their own internal competencies and what they, as a unified field, offer to international development. This is a challenging prospect: beyond a shared use of technology to gather, analyze, and store data, and an interest in better understanding how, when, why, and where these tools work for MERL and for development/humanitarian programming, there aren’t many similarities between participants.
At the MERL Tech London and MERL Tech DC conferences in 2017, we made a concerted effort to get to the next level in the process of creating a field. In London in February, participants created a timeline of technology and MERL and identified key areas that the MERL Tech community could work on strengthening (such as data privacy and security frameworks and more technological tools for qualitative MERL efforts). At MERL Tech DC, we began trying to understand what a ‘maturity model’ for MERL Tech might look like.
What do we mean by a ‘maturity model’?
Broadly, maturity models seek to qualitatively assess people/culture, processes/structures, and objects/technology to craft a predictive path that an organization, field, or discipline can take in its development and improvement.
Initially, we considered constructing a “straw” maturity model for MERL Tech and presenting it at the conference. The idea was that our straw model’s potential flaws would spark debate and discussion among participants. In the end, however, we decided against this approach because (a) we were worried that our straw model would unduly influence people’s opinions, and (b) we were not very confident in our own ability to construct a good maturity model.
Instead, we opted to facilitate a creative space over three sessions to encourage discussion on what a maturity model might look like and what it might contain. Our vision for these sessions was to get participants to brainstorm in mixed groups containing different types of people; we didn’t want small subsets of participants to create models independently without the input of others.
In the first session, “Developing a MERL Tech Maturity Model”, we invited participants to consider what a maturity model might look like. Could we begin to imagine a graphic model that would enable self-evaluation and allow informed choices about how to best develop competencies, change and adjust processes and align structures in organizations to optimize using technology for MERL or indeed other parts of the development field?
In the second session, “Where do you sit on the Maturity Model?”, we asked participants to use the ideas that emerged from our brainstorm in the first session to consider their own organizations and work, and to compare them against potential maturity models. We encouraged participants to assess themselves using green (young sapling) to yellow (somewhere in the middle) to red (mature MERL Tech ninja!), and to strike up conversations with other people during the breaks about why they chose that color.
In our third session, “Something old, something new”, we consolidated and synthesized the various concepts participants had engaged with throughout the conference. Everyone was encouraged to reflect on their own learning, lessons for their work, and what new ideas or techniques they may have picked up on and might use in the future.
The Maturity Models
As can be expected when over 300 people take markers and crayons to paper, many a creative model emerged. We asked participants to do a gallery walk of the models during the breaks the next day and to vote on their favorites.
We won’t go into detail on what all 24 of the models showed, but some common themes emerged from the ones that got the most votes: almost all maturity models included dimensions (elements, components) and stages, and a depiction of potential progression from early to later stages across each dimension. They all also showed who the key stakeholders or players were, and some included details on what might be expected of them at different stages of maturity.
Two of the models (MERLvana and the Data Appreciation Maturity Model – DAMM) depicted the notion that reaching maturity was never really possible and that the process was an almost infinite loop. As the presenters explained MERLvana: “it’s impossible to reach the ideal state, but one must keep striving for it, in ever closer and tighter loops with fewer and fewer gains!”
“MERL-tropolis” had clearly defined categories (universal understanding, learning culture and awareness, common principles, and programmatic strategy) and the structures/buildings needed for those (staff, funding, tools, standard operating procedures, skills).
The most popular was “The Data Turnpike,” which showed the route from the start of “Implementation with no data” to the finish line of “Technology, capacity and interest in data and adaptive management,” with the pitfalls along the way (misuse, untimely data, low ethics, etc.) marked to warn of the dangers.
As organizers of the session, we found the exercises both interesting and enlightening, and we hope they helped participants begin thinking about their own MERL Tech practice in a more structured way. Participant feedback on the session was at polar extremes. Some people loved the exercise and felt that it allowed them to step back and think about how they and their organization were approaching MERL Tech and how they could move forward more systematically with building greater capacities and higher quality work. Some told us that they left with clear ideas on how they would work within their organizations to improve and enhance their MERL Tech practice, and that they had a better understanding of how to go about that. A few did not like that we had asked them to “sit around drawing pictures,” and some others felt that the exercise was unclear and that we should have provided a model instead of asking people to create one. [Note: This is an ongoing challenge when bringing together so many types of participants from such diverse backgrounds and varied ways of thinking and approaching things!]
We’re curious if others have worked with “maturity models” and if they’ve been applied in this way or to the area of MERL Tech before. What do you think about the models we’ve shared? What is missing? How can we continue to think about this field and strengthen our theory and practice? What should we do at MERL Tech London in March 2018 and beyond to continue these conversations?