
How to buy M&E software and not get bamboozled

by Josh Mandell, a Director at DevResults where he leads strategy and business development. Josh can be reached at josh@devresults.com.

While there is no way to guarantee that M&E software will solve all of your problems or make all of your colleagues happy, there absolutely are things you can do during the discovery, procurement, and contracts stages to mitigate the risk of getting bamboozled.

#1 – Trust no one. Test everything.

Most development practitioners I speak with are balancing a heavy load of client work, internal programmatic and BD support, and other organizational initiatives. I can appreciate that time is scarce and that testing software you may not buy can feel like a giant waste of time.

However, when it comes to reducing uncertainty and building confidence in your decision, the single most productive use of your time is testing. When you don’t test, what evidence do you have to base your decision on? The vendor’s marketing and proposal materials. Don’t take the BD guy’s word for it, and whatever you do, don’t trust screenshots, brochures, or proposals. Like a well-curated social media profile, marketing collateral gives you a sense of what’s possible, but it probably isn’t the most accurate reflection of reality. If you really want to understand usability, performance, and culture fit, you simply need to see for yourself.

We have found that the organizations that take the time to identify and test what they’ll actually be doing in DevResults are much better off than those who buy based on what they see in documentation and presentations, or based on someone else’s recommendation.

And it makes our lives easier too! We may have to spend a little more time upfront in the discovery and procurement phases, but by properly setting expectations early on, we have to provide far less support over the long term. This makes for smoother, lower-cost implementations and happier customers.

#2 – Document what success looks like in plain language.

We obviously need contracts for defining the scope of work, payment terms, SLAs, and other legalese, but the reality is that the people leading procurement and contracts are often not the people leading the day-to-day data operations.

Contracts are also typically dense and hard to use as a point of reference for frequent, human communication. So it’s incredibly important that the implementation leads themselves define what success looks like in their own words, and that this definition is what drives the implementation.

It took us years to figure this out, but we’ve taken the lesson to heart. What we do now with each of our engagements is create an Implementation Charter that documents, in the words of the implementation leads, things like a summary baseline, roles and responsibilities, and a list of desired outcomes, i.e. ‘what success looks like.’ We then use the charter as the primary point of reference for determining whether we’re doing a good job, and we evaluate ourselves against the charter quarterly.

Similar to the point about testing above, we have found this practice to dramatically increase transparency, properly set expectations, and establish more effective channels for communication, all of which are crucial in enterprise software implementations.

#3 – Plan for the long haul and create the right financial incentives. Spread out the payments.

Whether at the project or organizational level, M&E software implementations are long-term efforts. Unlike custom, external-facing websites, where the bulk of the work is done up front and the rest is mostly maintenance, enterprise software is constantly evolving. Rapidly changing technology and industry trends, shifting user requirements, and quality user experience all require persistent attention and ongoing development.

Your contract and payment structure should reflect that reality.

The easiest way to achieve this alignment is to spread the payments out over time. I’m not going to get into the merits of the software as a service (SaaS) business model here (we’ll be putting out another post on that in the coming weeks), but suffice it to say that you get better service when your technology partner needs to continuously earn your money month after month and year after year.

This not only shifts the focus from checking boxes in a contract to delivering actual utility for users over the long term, but it also hedges against the prospect of paying for unused software (or even paying for vaporware, as in the BMGF case against Saama).

We know from experience that shifting to a new way of doing things can be difficult. We used to be a custom web development shop, and we did pretty well in that old model. The transition to a SaaS offering was painful because we had to work harder to earn our money and expectations went up dramatically. Nonetheless, we know the pain has been worth it because our customers are holding us to a different standard, and it’s forcing us to deliver the best product we’re capable of. As a result, we’ll not only have happier customers, but a stronger, more sustainable business doing what we love.

Stop the bamboozling.

If you have any tips or recommendations for buying software, please share those in the comments below, or feel free to reach out to me directly. We’re always looking to share what we know and learn from others. Good luck!


12 ways to ensure your data management implementation doesn’t become a dumpster fire

By Jason Rubin, PCI; Kate Mueller, DevResults; and Mike Klein, ISG. They led the session “One system to rule them all? Balancing organization-wide data standards and project data needs.”

Let’s face it: failed information system implementations are not uncommon in our industry, and as a result, we often have a great deal of skepticism toward new tools and processes.

We addressed this topic head-on during our 2017 MERL Tech session, “One system to rule them all?”

The session discussed the tension between the need for enterprise data management solutions that can be used across the entire organization and solutions that meet the needs of specific projects. The three of us presented our lessons learned on this topic from our respective roles as M&E advisor, M&E software provider, and program implementer.

We then asked attendees to list their top do’s and don’ts based on their own experiences, and reviewed the feedback to identify key themes.

Here’s a rundown on the themes that emerged from participants’ feedback:

Organizational Systems

Think of these as systems broadly, not just M&E systems. For example: How do HR practices affect technology adoption? Does your organization have a federated structure that makes standard indicator development difficult? Do you require separate reporting for management and donor partners? These are all organizational systems that need to be properly considered before system selection and implementation. The group’s top takeaways for making sure your implementation goes smoothly:

1. Form Follows Function: This seems like an obvious theme, but since we received so much feedback about folks’ experiences, it bears repeating: define your goals and purpose first, then design a system to meet them, not the other way around. Don’t go looking for a solution to a problem you don’t have. This means that if the ultimate goal for a system is to improve field staff data collection, don’t build a system to improve data visualization.

2. HR & Training: One of the areas our industry seems to struggle with is long-term capacity building and knowledge transfer around new systems. Suggestions in this theme included embedding information system training in standard HR processes, ongoing knowledge sharing and training for field staff, and prioritizing the hiring of staff with the right skill mix to make use of information systems.

3. Right-Sized Deployment for Your Organization: There were a number of horror stories about organizations that tried to implement a single system simultaneously across all projects and failed, either because they bit off more than they could chew or because the selected tool didn’t meet the needs of most of their projects. The general consensus here was that small pilots, incremental roll-outs, and other learn-and-iterate approaches are a best practice. As one participant put it: start small, scale slowly, iterate, and adapt.

M&E Systems

We wanted to get feedback on best and worst practices around M&E system implementations specifically—how tools should be selected, necessary planning or analysis, etc.

4. Get Your M&E Right: Resoundingly, participants stressed that a critical component of implementing an M&E information system is having well-organized M&E, particularly indicators. We received a number of comments about creating standardized indicators first, auditing and reconciling existing indicators, and so on (a sketch of what a standardized indicator definition can look like follows after this list).

5. Diagnose Your Needs: Participants also echoed the need for an effective diagnosis of the current state of M&E data and workflows and of the desired end state. Feedback in this theme focused on data, process, and tool audits, and on putting more tool-selection power in the hands of M&E experts rather than upper management or IT.

6. Scope It Out: One of the flaws each of us has seen in our respective roles is too generalized or vague a sense of why a given M&E tool is being implemented in the first place. All three of us talked about the need to define the problem and the desired end state of an implementation. Participants’ feedback supported this stance. One of the key takeaways from this theme was to define who the M&E is actually for, and what purpose it’s serving: Donors? Internal management? Local partner selection/management? Public accountability/marketing?
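
To make the standardized-indicator point in #4 concrete, here is a minimal, purely hypothetical sketch of what an organization-wide indicator registry and a reconciliation step might look like. Every ID, field name, and alias below is invented for illustration and not drawn from any real indicator framework.

```python
from typing import Optional

# A hypothetical entry in an organization-wide indicator registry.
STANDARD_INDICATORS = {
    "OUT-001": {
        "name": "Number of farmers trained",
        "definition": "Unique individuals completing at least one training session",
        "unit": "people",
        "disaggregations": ["sex", "age_group", "district"],
        "reporting_frequency": "quarterly",
    },
}

# Ad-hoc names projects have historically used for the same measure.
ALIASES = {
    "# farmers trained": "OUT-001",
    "farmers reached with training": "OUT-001",
}

def reconcile(project_indicator_name: str) -> Optional[str]:
    """Map a project's ad-hoc indicator name onto a standard ID, if one exists."""
    return ALIASES.get(project_indicator_name.strip().lower())

assert reconcile("# Farmers Trained") == "OUT-001"
```

Auditing existing indicators then amounts to listing every project-level name that reconciles to nothing and deciding whether it needs a new standard entry or a new alias.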

Technical Specifications

The first two categories are more about the how and why of system selection, roll-out, and implementation. This category is all about working to define and articulate what any type of system needs to be able to do.

7. UX Matters: It seems like a lot of folks have had experience with systems that aren’t particularly user-friendly. We received a lot of feedback about consulting the people who will actually use the system, building the tech around them rather than forcing them to adapt to it, and avoiding “clunkiness” in tool interfaces. This feels obvious but is, in fact, often hard to do in practice.

8. Keep It Simple, Stupid: This theme echoed Right-Sized Deployment for Your Organization above: take baby steps; keep things simple; prioritize the problems you want to solve; and don’t try to make a single tool solve all of them at once. We might add to this: many organizations have never had a successful information system implementation. Keeping the scope and focus tight at first and getting some wins on those roll-outs will help change internal perceptions of success and make it easier to implement broader, more elaborate changes over the long term.

9. Failing to Plan Is Planning to Fail: The consensus in the feedback was that it pays to take more time upfront to identify user and system needs and figure out which are required and which are nice to have. If interoperability with other tools or systems is a requirement, think about it from day one. Work directly with stakeholders at all levels to determine specs and needs; conduct internal readiness assessments to see what the actual needs are; and use this process to identify hierarchies of permissions and security, as in the sketch below.
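
On the permissions point, here is a minimal sketch of the kind of role hierarchy such a readiness assessment might surface. Every role name, scope, and action is hypothetical rather than drawn from any particular tool.

```python
# Hypothetical permission hierarchy: broader scopes carry broader actions.
ROLES = {
    "org_admin":       {"scope": "organization", "actions": {"read", "write", "configure", "manage_users"}},
    "country_lead":    {"scope": "country",      "actions": {"read", "write", "approve"}},
    "project_officer": {"scope": "project",      "actions": {"read", "write"}},
    "data_collector":  {"scope": "activity",     "actions": {"submit"}},
    "donor_viewer":    {"scope": "project",      "actions": {"read"}},
}

def allowed(role: str, action: str) -> bool:
    """Return True if the given role may perform the given action."""
    return action in ROLES.get(role, {}).get("actions", set())

# The kinds of checks worth walking through with stakeholders during planning:
assert allowed("country_lead", "approve")
assert not allowed("donor_viewer", "write")
```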

Change Management

Last, but not least, there’s how systems will be introduced and rolled out to users. We got the most feedback on this section and there was a lot of overlap with other sections. This seems to be the piece that organizations struggle with the most.

10. Get Buy-in/Identify Champions: Half the feedback we received on change management revolved around this theme. For implementations to be successful, you need both a top-down approach (buy-in from senior leadership) and a bottom-up approach (local champions/early adopters). To help facilitate this buy-in, participants suggested creating incentives (especially for management), giving local practitioners ownership, including programs and operations in the process, and not letting the IT department lead the initiative. The key here is that no matter which group the implementation ultimately benefits most, everyone needs to be on the same page about the implementation’s goals and why the organization needs it.

11. Communicate: Part of how you get buy-in is to communicate early and often. Communicate the rationale behind why tools were selected, what they’re good (and bad) at, and what value and benefits the tools offer, and be transparent about the roll-out, what it hopes to achieve, and progress towards those goals. Consider things like behavior change campaigns, brown bags, etc.

12. Shared Vision: This is a step beyond communication: merely telling people what’s going on is not enough. There must be a larger vision of what the tool and its implementation are trying to achieve, and that vision needs to be clearly articulated. How will it benefit each type of user? A shared vision can help overcome people’s natural tendencies to resist change, hold onto “their” data, or cover up failures or inconsistencies.

We have a data problem

by Emily Tomkys, ICT in Programmes at Oxfam GB

Following my presentation at MERL Tech, I have realised that it’s not only Oxfam who have a data problem; many of us have a data problem. In the humanitarian and development space, we collect a lot of data; whether via mobile phone or a paper process, the amount each project generates is staggering. Some of this data goes into our MIS (Management Information Systems), but all too often data remains in Excel spreadsheets on computer hard drives, in unconnected cloud storage systems, or in Access and bespoke databases.

(Watch Emily’s MERL Tech London Lightning Talk!)

This is an issue because the majority of our programme data is analysed in silos, on a survey-to-survey basis or at best a project-to-project basis. What about when we want to analyse data between projects, between countries, or even globally? It would currently take a lot of time and resources to bring the data together in usable formats. Furthermore, issues of data security, limited support for country teams, data standards, and the cost of systems and support mean there is a sustainability problem that it is in many people’s interests to solve.

The demand from Oxfam’s country teams is high: one of the most common requests the ICT in Programmes team receives centres on databases and data analytics. Teams want to be able to store and analyse their data easily and safely, and there is growing demand for cross-border analytics. Our humanitarian managers want to see statistics on the type of feedback we receive globally. Our livelihoods team wants to be able to monitor prices at markets on a national and regional scale. This motivated us to look for a data solution, but it’s something we know we can’t take on alone.

That’s why MERL Tech represented a great opportunity to check in with peers about potential solutions and areas for collaboration. For now, our proposal is to design a data hub where, no matter what the type of data (unstructured, semi-structured, or structured) and no matter how we collect it (mobile data collection tools or paper), our data can be integrated into one database. This isn’t about creating new tools; rather, it’s about focusing on interoperability and smooth transitions between tools and storage options. We plan to set this up so data can be pulled through into a reporting layer, which may offer a mixture of options for quantitative analysis, qualitative analysis, and GIS mapping. We also know we need to give our micro-programme data a home, put everything in one place regardless of its source or format, and make it easy to pull it through for analysis.
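
As a purely illustrative sketch of what that kind of normalisation step could look like (none of the field, source, or project names here come from Oxfam’s actual design), each incoming record, whatever tool or paper form it came from, might be mapped into one common shape before it reaches the reporting layer:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class HubRecord:
    """One normalised record, regardless of the tool or form it came from."""
    source: str                # e.g. "mobile_survey_export" or "paper_entry"
    project: str               # the programme/project the data belongs to
    country: str
    collected_on: date
    payload: dict = field(default_factory=dict)  # structured or semi-structured answers

def from_mobile_export(row: dict) -> HubRecord:
    """Map one row of a hypothetical mobile data collection export."""
    return HubRecord(
        source="mobile_survey_export",
        project=row["project_code"],
        country=row["country"],
        collected_on=date.fromisoformat(row["submission_date"]),
        payload={k: v for k, v in row.items() if k.startswith("q_")},
    )

# Once every source maps into one shape, cross-project or cross-border
# questions become simple queries over a single store rather than
# per-survey spreadsheet work:
records = [
    from_mobile_export({"project_code": "LIV-01", "country": "Kenya",
                        "submission_date": "2017-05-02", "q_market_price": 42}),
]
kenya_prices = [r.payload["q_market_price"] for r in records if r.country == "Kenya"]
```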

In this way we can explore data holistically, spot trends on a wider scale, and really know more about our programmes and act accordingly. Not only should this reduce our cost of analysis; we will also be able to analyse our data more efficiently and effectively. Moreover, taking a holistic view of the data life cycle will enable us to do data protection by design, and the result will be easier to support because the processes and tools being used will be streamlined. We know that one tool does not and cannot do everything we require when we work in such vast contexts, so a challenge will be how to streamline while factoring in contextual nuances.

Sounds easy, right? We will be starting to explore our options and working on the data hub in the coming months. MERL Tech was a great start for making connections, but we are keen to hear from others about how you are approaching “the data problem” and eager to set up something that can also be used by other actors. So please add your thoughts in the comments or get in touch if you have ideas!