Tag Archives: systems

Good, Cheap, Fast — Pick Two!

By Chris Gegenheimer, Director of Monitoring, Evaluation and Learning Technology at Chemonics International; and Leslie Sage, Director of Data Science at DevResults. (Originally posted here.)

Back in September, Chemonics and DevResults spoke at MERL Tech DC about the inherent compromise involved when purchasing enterprise software. In short, if you want good software that does everything you want exactly the way you want it, cheap software that is affordable and sustainable, and fast software that is available immediately and responsive to emerging needs, you may have to relax one of those requirements. In other words: “good, cheap, fast – pick two!”

Of course, no buyer or vendor would ever completely neglect any one of those dimensions to maximize the other two; instead, we all try to balance these competing priorities as best our circumstances allow. It’s not an “all or nothing” compromise. It’s not even a monolithic compromise: both buyer and vendor can choose which services and domains will prioritize quality and speed over affordability, or affordability and quality over speed, or affordability and speed over quality (although that last one does sometimes come back to bite).

Chemonics and DevResults have been working together to support Chemonics’ projects and its monitoring and evaluation (M&E) needs since 2014, and we’ve had to learn from each other how best to achieve the mythical balance of quality, affordability, and speed. We haven’t always gotten it right, but we do have a few suggestions on how technological partnerships can ensure long-term success.

Observations from an implementer

As a development implementer, Chemonics recognizes that technology advances development outcomes and enables us to do our work faster and more efficiently. While we work in varied contexts, we generally don’t have time to reinvent technology solutions for each project. Vendors bring value when they can supply configurable products that meet our needs in the real world faster and cheaper than building something custom. Beyond the core product functionality, vendors offer utility with staff who maintain the IT infrastructure, continually upgrade product features, and ensure compliance with standards, such as the General Data Protection Regulation (GDPR) or the International Aid Transparency Initiative (IATI). Not every context is right for off-the-shelf solutions. Just because a product exists, it doesn’t mean the collaboration with a software vendor will be successful. But, from an implementer’s perspective, here are a few key factors for success:

Aligned incentives

Vendors should have a keen interest in ensuring that their product meets your requirements. When they are primarily interested in your success in delivering your core product or service — and not just selling you a product — the relationship is off to a good start. If the vendor does not understand or have a fundamental interest in your core business, this can lead to diverging paths, both in features and in long-term support. In some cases, fresh perspectives from non-development outsiders are constructive, but being the outlier client can contribute to project failure.

Inclusion in roadmap

Assuming the vendor’s incentives are aligned with your own, it should be interested in your feedback as well as responsive to making changes, even to some core features. As our staff puts systems through their paces, we regularly come up with feature requests, user interface improvements, and other feedback. We realize that not every feature request will make it into code, but behind every request is a genuine need, and vendors should be willing to talk through each need to figure out how to address it.

Straight talk

There’s a tendency for tech vendors, especially sales teams, to have a generous interpretation of system capabilities. Unmet expectations can result from a client’s imprecise requirements or a vendor’s excessive zeal, which leads to disappointment when you get what you asked for, but not what you wanted. A good vendor will clearly state up front what its product can do, cannot do, and will not ever do. In return, implementers have a responsibility to make their technical requirements as specific, well-scoped, and operational as possible.

Establish support liaisons

Many vendors offer training, help articles, on-demand support, and various other resources for turning new users into power users, but relying on the vendor to shoulder this burden serves no one. By establishing a solid internal front-line support system, you can act as intermediaries and translators between end users and the software vendor. Doing so has meant that our users don’t have to be conversant in developer-speak or technical language, nor does our technology partner have to field requests coming from every corner of our organization.

Observations from a tech vendor

DevResults’ software is used to manage international development data in 145 countries, and we support M&E projects around the world. We’ve identified three commonalities among organizations that implement our software most effectively: 1) the person who does the most work has the authority to make decisions, 2) the person with the most responsibility has technical aptitude and a whatever-it-takes attitude, and 3) breadth of adoption is achieved when the right responsibilities are delegated to the project staff, building capacity and creating buy-in.

Organizational structure

We’ve identified two key factors that predict organizational success: dedicated staff resources and their level of authority. Most of our clients are implementing a global M&E system for the first time, so the responsibility for managing the rollout is often added to someone’s already full list of duties, which is a recipe for burnout. Even if a “system owner” is established and space is made in their job description, if they don’t have the authority to request resources or make decisions, it restricts their ability to do their job well. Technology projects are regularly entrusted to younger, more junior employees, who are often fast technical learners, but their effectiveness is hindered by having to constantly appeal to their boss’ boss’ boss about every fork in the road. Middle-sized organizations are typically advantaged here because they have enough staff to dedicate to managing the rollout, yet few enough layers of bureaucracy that such a person can act with authority.

Staffing

Technical expertise is critical when it comes to managing software implementations. Too often, technical duties are foisted upon under-prepared (or less-than-willing) staffers. This may be a reality in an era of constrained budgets, but asking experts in one thing to operate outside of their wheelhouse is another recipe for burnout. In the software industry, we conduct technical exams for all new hires. We would be thrilled to see the practice extended across the ICT4D space, even for roles that don’t involve programming but do involve managing technical products. Even so, there’s a certain aspect of the ideal implementation lead that comes down to personality and resourcefulness. The most successful teams we work with have at least one person who has the willingness and the ability to do whatever it takes to make a new system work. Call it ‘ownership,’ call it a ‘can-do’ attitude, but whatever it is, it works!

Timing and resource allocation

Change management is hard, and introducing a new system requires a lot of work up front. There’s a lot that headquarters personnel can do to unburden project staff (configuring the system, developing internal guidance and policies, etc.), but sometimes it’s better to involve project staff directly and early. When project staff are involved in the system configuration and decision-making process, we’ve seen them demonstrate more ownership of the system and less resentment of “another thing coming down from headquarters.” System setup and configuration can also be a training opportunity, further developing internal capacity across the organization. Changing systems requires conversations across the entire org chart; well-designed software can facilitate those conversations. But even when implementers do everything right, they should always expect challenges, plan for change management, and adopt an agile approach to managing a system rollout.

Good, cheap, fast: pick THREE!

As we said, there are ways to balance these three dimensions. We’ve managed to strike a successful balance in this partnership because we understand the incentives, constraints, and priorities of our counterpart. The software as a service (SaaS) model is instrumental here because it ensures software is well-suited to multiple clients across the industry (good), more affordable than custom builds (cheap), and immediately available on day one (fast). The implicit tradeoff is that no one client can control the product roadmap, but when every customer has a say, the end product represents the collective wisdom, best practice, and feedback of everyone. It may not be perfectly tailored to each client’s preferences, but in the end, that’s usually a good thing.

Using Social Network Analysis and Feedback to Measure Systems Change

by Alexis Smart, Senior Technical Officer, and Alexis Banks, Technical Officer, at Root Change

As part of their session at MERL Tech DC 2018, Root Change launched Pando, an online platform that makes it possible to visualize, learn from, and engage with the systems where you work. Pando harnesses the power of network maps and feedback surveys to help organizations strengthen systems and improve their impact.

Decades of experience in the field of international development have taught our team that trust and relationships are at the heart of social change. Our research shows that achieving and sustaining development outcomes depends on the contributions of multiple actors embedded in thick webs of social relationships and interactions. However, traditional MERL approaches have failed to help us understand the complex dynamics within those relationships. Pando was created to enable organizations to measure trust, relationships, and accountability between development actors.

Relationship Management & Network Maps

Grounded in social network analysis, Pando uses web-based relationship surveys to identify diverse organizations within a system and track relationships in real time. The platform automatically generates a network map that visualizes the organizations and relationships within a system. Data filters and analysis tools help uncover key actors, areas of collaboration, and network structures and dynamics.
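The basic mechanics here, turning pairwise survey responses into a graph and ranking actors by connectedness, can be sketched in a few lines of Python. The organization names and survey rows below are invented, and degree centrality stands in for whatever analysis Pando actually runs; this is an illustration of the general technique, not Pando’s implementation.

```python
from collections import defaultdict

# Hypothetical relationship-survey responses: (reporting org, named partner).
survey_responses = [
    ("NGO A", "Ministry of Health"),
    ("NGO B", "Ministry of Health"),
    ("NGO A", "NGO B"),
    ("Donor X", "NGO A"),
    ("Donor X", "Ministry of Health"),
]

# Build an undirected adjacency list: the raw material of the network map.
graph = defaultdict(set)
for reporter, partner in survey_responses:
    graph[reporter].add(partner)
    graph[partner].add(reporter)

# Degree centrality: each actor's connections as a share of possible ones.
n = len(graph)
centrality = {org: len(nbrs) / (n - 1) for org, nbrs in graph.items()}

# The most-connected actors surface as candidate "key actors" in the system.
for org, score in sorted(centrality.items(), key=lambda kv: -kv[1]):
    print(f"{org}: {score:.2f}")
```

In practice a tool like Pando layers filters (by sector, geography, relationship type) on top of a structure like this, and would use richer measures than degree centrality.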

Feedback Surveys & Analysis

Pando is integrated with Keystone Accountability’s Feedback Commons, an online tool that gives map administrators the ability to collect and analyze feedback about levels of trust and relationship quality among map participants. The combined power of network maps and feedback surveys helps create a holistic understanding of the system of organizations that impact a social issue, facilitate dialogue, and track change over time as actors work together to strengthen the system.

Examples of Systems Analysis

During Root Change’s session, “Measuring Complexity: A Real-Time Systems Analysis Tool,” Root Change Co-Founder Evan Bloom and Senior Technical Officer Alexis Smart highlighted four examples from our work of using network analysis to create social change:

  • Evaluating Local Humanitarian Response Systems: We worked with the Harvard Humanitarian Institute (HHI) to evaluate the effect of local capacity development efforts on local ownership within humanitarian response networks in the Philippines, Kenya, Myanmar, and Ethiopia. Using social network analysis, Root Change and HHI assessed the roles of local and international organizations within each network to determine the degree to which each system was locally led.
  • Supporting Collective Impact in Nigeria: Network mapping has also been used in the USAID-funded Strengthening Advocacy and Civic Engagement (SACE) project in Nigeria. Over five years, more than 1,300 organizations and 2,000 relationships across 17 advocacy issue areas were identified and tracked. Nigerian organizations used the map to form meaningful partnerships, set common agendas, coordinate strategies, and hold the government accountable.
  • Informing Project Design in Kenya: Root Change and the Aga Khan Foundation (AKF) collected relationship data from hundreds of youth and organizations supporting youth opportunities in coastal Kenya. Analysis revealed gaps in expertise within the system and opportunities to improve relationships among organizations and youth. These insights helped inform AKF’s program design, and ongoing mapping will be used to monitor system change.
  • Tracking Local Ownership: This year, under USAID Local Works, Root Change is working with USAID missions to measure local ownership of development initiatives using newly designed localization metrics on Pando. USAID Bosnia and Herzegovina (BiH) launched a national Local Works map, identifying over 1,000 organizations working together on community development. Root Change and USAID BiH are exploring a pilot to use this map to continue collecting data, track localization metrics, and train a local organization to support the process.
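The localization metrics mentioned above are not specified in this post, so as a rough sketch only: one simple family of measures is the share of reported ties that involve local actors. All names and the local/international labels below are invented for illustration, not drawn from any actual Pando map.

```python
# Hypothetical ties from a relationship survey (pairs of organizations).
ties = [
    ("Local CSO 1", "INGO A"),
    ("Local CSO 1", "Local CSO 2"),
    ("INGO A", "INGO B"),
    ("Local CSO 2", "Municipality"),
]

# Which actors count as "local" is itself a design decision for the metric.
local_actors = {"Local CSO 1", "Local CSO 2", "Municipality"}

# Two illustrative localization measures:
# 1) ties that touch at least one local actor,
# 2) ties held entirely between local actors.
local_ties = [t for t in ties if set(t) & local_actors]
local_to_local = [t for t in ties if set(t) <= local_actors]

print(f"Ties involving a local actor: {len(local_ties) / len(ties):.0%}")
print(f"Local-to-local ties: {len(local_to_local) / len(ties):.0%}")
```

Tracking such ratios over successive survey rounds is one way a map could show whether a system is becoming more locally led.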

Join the MERL Tech DC Network Map

As part of the MERL Tech DC 2018 conference, Root Change launched a map of the MERL Tech community. Event participants were invited to join this collaborative mapping effort to identify and visualize the relationships between organizations working to design, fund, and implement technology that supports monitoring, evaluation, research, and learning (MERL) efforts in development.

It’s not too late to join! Email info@mypando.org for an invitation to join the MERL Tech DC map and a chance to explore Pando.

Learn more about Pando

Pando is the culmination of more than a decade of experience providing training and coaching on the use of social network analysis and feedback surveys to design, monitor, and evaluate systems change initiatives. Initial feedback from international and local NGOs, governments, community-based organizations, and more is promising. But don’t take our word for it. We want to hear from you about ways that Pando could be useful in your social impact work. Contact us to discuss ways Pando could be applied in your programs.

12 ways to ensure your data management implementation doesn’t become a dumpster fire

By Jason Rubin, PCI; Kate Mueller, DevResults; and Mike Klein, ISG. They led the session on “One system to rule them all? Balancing organization-wide data standards and project data needs.”

Let’s face it: failed information system implementations are not uncommon in our industry, and as a result, we often have a great deal of skepticism toward new tools and processes.

We addressed this topic head-on during our 2017 MERL Tech session, “One system to rule them all?”

The session discussed the tension between the need for enterprise data management solutions that can be used across the entire organization and solutions that meet the needs of specific projects. The three of us presented our lessons learned on this topic from our respective roles as M&E advisor, M&E software provider, and program implementer.

We then asked attendees to provide a list of their top do’s and don’ts related to their own experiences – and then reviewed the feedback to identify key themes.

Here’s a rundown on the themes that emerged from participants’ feedback:

Organizational Systems

Think of these as systems broadly—not M&E specific. For example: How do HR practices affect technology adoption? Does your organization have a federated structure that makes standard indicator development difficult? Do you require separate reporting for management and donor partners? These are all organizational systems that need to be properly considered before system selection and implementation. Top takeaways from the group include these insights to help you ensure your implementation goes smoothly:

1. Form Follows Function: This seems like an obvious theme, but since we received so much feedback about folks’ experiences, it bears repeating: define your goals and purpose first, then design a system to meet those, not the other way around. Don’t go looking for a solution that doesn’t address an existing problem. This means that if the ultimate goal for a system is to improve field staff data collection, don’t build a system to improve data visualization.

2. HR & Training: One of the areas our industry seems to struggle with is long-term capacity building and knowledge transfer around new systems. Suggestions in this theme were that training on information systems become embedded in standard HR processes with ongoing knowledge sharing and training of field staff, and putting a priority on hiring staff with adequate skill mixes to make use of information systems.

3. Right-Sized Deployment for Your Organization: There were a number of horror stories around organizations that tried to implement a single system simultaneously across all projects and failed because they bit off more than they could chew, or because the selected tool really didn’t meet a majority of their organization’s projects’ needs. The general consensus here was that small pilots, incremental roll-outs, and other learn-and-iterate approaches are a best practice. As one participant put it: Start small, scale slowly, iterate, and adapt.

M&E Systems

We wanted to get feedback on best and worst practices around M&E system implementations specifically—how tools should be selected, necessary planning or analysis, etc.

4. Get Your M&E Right: Resoundingly, participants stressed that a critical component of implementing an M&E information system is having well-organized M&E, particularly indicators. We received a number of comments about creating standardized indicators first, auditing and reconciling existing indicators, and so on.

5. Diagnose Your Needs: Participants also chorused the need for effective diagnosis of the current state of M&E data and workflows and what the desired end-state is. Feedback in this theme focused on data, process, and tool audits and putting more tool-selection power in M&E experts’ hands rather than upper management or IT.

6. Scope It Out: One of the flaws each of us has seen in our respective roles is having too generalized or vague of a sense of why a given M&E tool is being implemented in the first place. All three of us talked about the need to define the problem and desired end state of an implementation. Participants’ feedback supported this stance. One of the key takeaways from this theme was to define who the M&E is actually for, and what purpose it’s serving: donors? Internal management? Local partner selection/management? Public accountability/marketing?

Technical Specifications

The first two categories are more about the how and why of system selection, roll-out, and implementation. This category is all about working to define and articulate what any type of system needs to be able to do.

7. UX Matters: It seems like a lot of folks have had experience with systems that aren’t particularly user-friendly. We received a lot of feedback about consulting users who actually have to use the system, building the tech around them rather than forcing them to adapt, and avoiding “clunkiness” in tool interfaces. This feels obvious but is, in fact, often hard to do in practice.

8. Keep It Simple, Stupid: This theme echoed the Right-Sized Deployment for Your Organization: take baby steps; keep things simple; prioritize the problems you want to solve; and don’t try to make a single tool solve all of them at once. We might add to this: many organizations have never had a successful information system implementation. Keeping the scope and focus tight at first and getting some wins on those roll-outs will help change internal perception of success and make it easier to implement broader, more elaborate changes long-term.

9. Failing to Plan Is Planning to Fail: The consensus in feedback was that it pays to take more time upfront to identify user/system needs and figure out which are required and which are nice to have. If interoperability with other tools or systems is a requirement, think about it from day one. Work directly with stakeholders at all levels to determine specs and needs; conduct internal readiness assessments to see what the actual needs are; and use this process to identify hierarchies of permissions and security.

Change Management

Last, but not least, there’s how systems will be introduced and rolled out to users. We got the most feedback on this section and there was a lot of overlap with other sections. This seems to be the piece that organizations struggle with the most.

10. Get Buy-in/Identify Champions: Half the feedback we received on change management revolved around this theme. For implementations to be successful, you need both a top-down approach (buy-in from senior leadership) and a bottom-up approach (local champions/early adopters). To help facilitate this buy-in, participants suggested creating incentives (especially for management), giving local practitioners ownership, including programs and operations in the process, and not letting the IT department lead the initiative. The key here is that no matter which group the implementation ultimately benefits most, getting everyone on the same page about the implementation’s goals and why the organization needs it is essential.

11. Communicate: Part of how you get buy-in is to communicate early and often. Communicate the rationale behind why tools were selected, what they’re good—and bad—at, and what the tool’s value and benefits are; be transparent about the roll-out, what it hopes to achieve, and progress toward those goals. Consider things like behavior change campaigns, brown bags, etc.

12. Shared Vision: This is a step beyond communication: merely telling people what’s going on is not enough. There must be a larger vision of what the tool/implementation is trying to achieve and this, particularly, needs to be articulated. How will it benefit each type of user? Shared vision can help overcome people’s natural tendencies to resist change, hold onto “their” data, or cover up failures or inconsistencies.