Modeling the Future?

I had just graduated from college and felt like I had hit the first-job jackpot: I was moving down to Washington, DC to work at a policy research organization. While the topic area, health policy, would take me away from my undergraduate interest in urban policy, I was promised an exciting position conducting policy simulations of a wide range of health insurance policies. Moreover, the Affordable Care Act was about to be implemented, the data on health policy was about to get much more interesting, and I was excited to expand my knowledge of statistics and coding.

My first year on the job was fantastic. I was able to make an immediate impact on the microsimulation model by restructuring its underlying framework to better match that of the ACA, and I began to see papers that I co-authored effecting meaningful change in policy decision-making. It was exciting to conduct what-if analyses for a range of proposed national, state, and local policies, to look into a crystal ball and see the next great solution for expanding access to affordable health insurance. I felt energized to harness quantitative tools to improve the lives of ordinary citizens.

Around that time, an external organization that had spun off from the president’s campaign analytics team came to our office to give a lunch presentation on their work. They had developed an extensive methodology to identify the people most likely to be uninsured so that enrollment assistants could provide targeted support to those newly eligible for Medicaid and subsidized plans. Although these populations were the least likely to show up in traditional survey data sets, the organization had augmented that data with a range of individual-level proprietary data on what struck me as incredibly personal and sensitive things, like consumption patterns, voter registration, and financial wellbeing. Still, they seemed to be using these tools for good, so I tried not to think too hard about the creepiness of the detailed data they were using.

A couple of years later, the Democrats suffered a major defeat in the midterms, losing a significant number of seats due to politically motivated redistricting, also known as gerrymandering. When I heard about the proprietary data sets and geospatial tools Republicans had used to “pack” and “crack” liberal voters in critical swing states so as to dilute the value of their votes, I felt sick to my stomach. It sounded very similar to the procedure used to identify uninsured citizens and, to some extent, to the work I had done on the microsimulation model in my first job. By this time, I had moved back into research on urban policy, but this example of using GIS and data for disenfranchisement spurred me to understand more thoughtfully how data was being used to make policy decisions, especially when that analysis was done at the national level.

I noticed that data could be used for ill. Data could be used for progressive governance, too. But either way, as long as the rise of data diminished the extent to which policymakers looked to their electorate rather than to the numbers, I could see the democratic process begin to wither. This experience ultimately pushed me to come to MIT to understand how data-driven policymaking can be made more inclusive of the communities it is meant to represent.
