Progress in Integrating Big Data into Program Evaluation: Case studies and lessons learned
September 4, 9am-5pm
Hosted at Independent Evaluation Group (IEG) at the World Bank
“I” Building (room to be announced)
1850 I St NW, Washington, DC 20006
Join us for a one-day workshop on Big Data and Evaluation!
Does “big data” still seem out of reach for you and your organization? Join us for a one-day workshop where we will “demystify” big data for evaluators and program managers. We’ll show that there is a wide range of tools and techniques that can be incorporated into evaluations, and we’ll aim to advance discussions on how to build bridges between the evaluation and big data communities.
A core part of the workshop will be the presentation and discussion of case studies illustrating big data and data analytics tools and techniques that are already being used (or tested) in program evaluation, including:
- Satellites and drones
- Mobile phones and phone record analysis
- Economical methods for the collection and analysis of evaluation data that permit significant increases in sample size
- Social media analytics
- Predictive analytics
- Artificial intelligence
- Constructing integrated databases
The workshop will conclude with a general discussion of the opportunities and challenges for integrating big data into program evaluation and some of the next steps.
Michael Bamberger has a Ph.D. in Sociology from the London School of Economics. He has been involved for over forty years in the evaluation of development programs in Africa, Asia, and Latin America, focusing on poverty, social exclusion, gender equality and women’s empowerment, and urban development. He has taught and written extensively on how to conduct methodologically sound evaluations in real-world development contexts. Over the past few years he has worked on the opportunities and challenges of integrating new information technology into the evaluation of development programs. His recent publications include “Dealing with complexity in development evaluation”, “RealWorld Evaluation: working under budget, time, data and political constraints”, and “Evaluating the Sustainable Development Goals through equity-focused and gender-responsive evaluations”. He regularly organizes workshops on “Evaluating complex development programs”, “Evaluation in the age of big data”, “Identifying unintended outcomes of development programs”, and “Mixed method evaluations”.
Case Studies from:
Peter York is BCT Partners’ Chief Data Scientist. He has over 20 years of experience in research, evaluation, and data analytics, and has served as a national spokesperson on social impact and impact measurement issues for the government, nonprofit, and philanthropic sectors. Mr. York has authored book chapters and peer-reviewed academic and professional articles on the use of machine learning algorithms with administrative data to build predictive, prescriptive, and rigorous evaluation models. His publications include a book, “Funder’s Guide to Evaluation: Leveraging Evaluation to Improve Nonprofit Effectiveness”; a book chapter, “The Application of Predictive Analytics and Machine Learning to Risk Assessment in Juvenile Justice”; and other articles and papers on the use of big data and machine learning for evaluation. He is a popular speaker on evaluation and data science/analytics, presenting regularly at the American Evaluation Association, Alliance for Nonprofit Management, Data Analysts for Social Good, Monitoring Evaluation Research & Learning, and more.
Kecia Bertermann is Director of Learning & Impact at Luminate. An experienced leader in global research, evaluation, and strategy for social change, Kecia was previously Director of Digital Research and Learning at Girl Effect, where she led the organisation’s digital research and evaluation, set the strategic agenda for digital learning across its portfolio, and designed frameworks to measure changes in knowledge, behaviour, and social norms. Prior to joining Girl Effect, Kecia was Senior Monitoring, Learning and Results Manager at Nike Foundation, where she developed innovative tools such as the Girl Impact Map, a mapping platform layering data sets related to teenage girls in Rwanda. She also designed and managed the Girl Research Unit, an award-winning group of young Rwandan women trained in qualitative research methods. She previously led monitoring and evaluation, strategic design, and programmatic support for Medical Teams International’s Asia program portfolio.
Jos Vaessen (Ph.D., Maastricht University) is adviser on evaluation methods at the Independent Evaluation Group, World Bank Group. Since 1998 he has been involved in evaluation research activities, first as an academic and consultant to bilateral and multilateral development organizations, and from 2011 to 2015 as an evaluation manager at UNESCO. Jos firmly believes in a strong link between research and practice; his ongoing involvement in evaluation-related research and teaching as honorary lecturer at Maastricht University contributes to the necessary cross-fertilization between these two domains. Jos has authored several internationally peer-reviewed publications, including three books. Notable examples are: Impact evaluations and development – NONIE guidance on impact evaluation (2009, with Frans Leeuw; NONIE/World Bank) and Mind the gap: perspectives on policy evaluation and the social sciences (2009, with Frans Leeuw; Transaction Publishers). His most recent book is Dealing with complexity in development evaluation: a practical approach (2015, with Michael Bamberger and Estelle Raimondo; SAGE Publications). Jos regularly serves on reference groups of evaluations for different institutions and is a member of the Board of the European Evaluation Society.
Dustin Homer is the Director of Solutions at Fraym.
Co-hosted and sponsored by:
Questions? Contact Linda Raftree