Prior to coming to MIT, I worked with non-profits in the greater Washington, DC area to measure their outcomes and evaluate their impact. I was doing this as a research associate at a research organization, itself a non-profit. Entering the project, I knew nothing about logic models, performance measurement, or theories of change. I was in for a steep learning curve.
The project was funded by a large development organization that primarily worked internationally. In this case, however, the funder was interested in building the capacity of local non-profits to connect their efforts to data-driven outcomes, set up integrated data systems, and analyze and communicate their results. Rather than leaving these non-profits to fly blind, we were helping them see their efforts more clearly and improve the work they did to serve their target populations. We were responsible for setting up a community of practice for data practitioners at non-profits in the region, as well as for providing one-on-one technical assistance to a few of them.
In my first few meetings advising non-profits on how to better understand their outcomes, I felt in over my head. I had read a few papers on the value of measurement and evaluation and had a beginner's understanding of the general topics, but I felt totally unready to give advice. I did my best to listen to the issues the data staff at these non-profits were running into. At first, I was content to be their therapist: helping them manage power dynamics and work through office conflicts. But soon I was being asked to help with issues like building integrated data systems, vetting survey procedures, and managing organizational culture change.
In my work I realized two things: we were far enough removed that I couldn't see our impact on the city residents these non-profits served, and many of the workshops and advising sessions I was conducting leaned toward one-size-fits-all approaches. This was especially true at the community of practice sessions, where we encouraged data practitioners to share their successes with one another, despite the fact that we were pulling together people who worked in sectors as varied as health care, homeless and housing services, education and job training, and legal services. They were measuring different populations with different cultures and vulnerabilities, across different dimensions, using different tools, and under different reporting requirements. Because most data departments had small staffs, these constraints meant that most practitioners were fighting uphill just to meet compliance goals. Our suggestions and trainings geared toward greater long-run capacity were out of reach for many members of our community.
Ultimately, though, I found one problem I was very well-positioned to solve: connecting practitioners to one another to collaborate on shared challenges and aspirations. I was responsible for intake before any organization joined our community of practice (to ensure they were far enough along in their thinking and institutional commitment to data collection to benefit from a peer group), so I knew the struggles, and some of the successes, each of them had. I was also tasked with regularly communicating with the community of practice, and most of its members came to know and trust me. During meetings, I would often get updates on their work and could suggest a group (within their sector or outside it) better suited to talk through a problem than I was. In focusing on building this community of socially minded data nerds, I hope we were able to help them help each other.