Measure for Measure

As a strategy consultant in the private sector for much of my (admittedly short) professional career, I have had limited opportunities to work on projects that explicitly aimed to effect large-scale social change. However, technology played a central role in essentially all of the solutions we ended up recommending to our clients. One project in particular resonated with a number of themes that we have already touched upon in this course.

A US telecommunications provider was looking to restructure the incentivization strategy for software developers working on internal (non-customer-facing) projects. The leadership believed that existing metrics and KPIs were contributing to a bloated budget, with too much focus on "lines of code" productivity as opposed to high-quality output.
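To make the concern concrete, here is a minimal, hypothetical sketch (in Python) of the kind of raw "lines of code" metric the leadership was worried about. The use of git history, the repository path, and the ranking logic are all illustrative assumptions on my part, not the client's actual system; the point is simply that a volume-based measure like this rewards verbosity and churn rather than quality, and is trivially easy to game.

```python
# Hypothetical sketch of a naive "lines of code" productivity metric.
# It tallies lines added per author from git history and ranks developers
# by raw volume -- rewarding verbosity and churn, not quality.
import subprocess
from collections import defaultdict


def lines_added_per_author(repo_path: str) -> dict[str, int]:
    """Count lines added per author using `git log --numstat`."""
    log = subprocess.run(
        ["git", "-C", repo_path, "log", "--numstat", "--pretty=format:AUTHOR:%an"],
        capture_output=True, text=True, check=True,
    ).stdout

    totals: dict[str, int] = defaultdict(int)
    author = None
    for line in log.splitlines():
        if line.startswith("AUTHOR:"):
            # A new commit: remember whose changes the following stats belong to.
            author = line[len("AUTHOR:"):]
        elif author and line.strip():
            # Numstat lines look like "<added>\t<deleted>\t<path>".
            parts = line.split("\t")
            if len(parts) == 3 and parts[0].isdigit():  # binary files show "-"
                totals[author] += int(parts[0])
    return dict(totals)


if __name__ == "__main__":
    # Ranking by raw volume: a developer can "improve" simply by padding,
    # copy-pasting, or avoiding refactors that delete code.
    for name, added in sorted(lines_added_per_author(".").items(),
                              key=lambda kv: -kv[1]):
        print(f"{name}: {added} lines added")
```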

This was a notoriously difficult issue, especially in cases where the software or technical maintenance provided had no direct revenue associated with it. How do you measure productivity, in dollar terms, for people working on maintaining intangible assets that support, but do not directly impact, other business functions? Often, the success or failure of a project could only be assessed well after it had been deployed, too late to factor into ongoing performance reviews, and that assumes it could somehow be traced back to a specific team or individual in the first place.

The likely consequences of this project for the existing workforce were significant. Although the project was ostensibly motivated by a desire to improve productivity in the short term, its eventual cost-cutting implications were obvious. We knew that our recommendations would effectively decide which resources were seen as "better" performing, and thus more essential than their peers.

From the outset we approached this problem as one of motivating desirable outcomes simply by aligning metrics with contributing actions. But more fundamentally, our search was driven by the emerging "opportunity" presented by the new paradigm of data generation and collection. There was a genuine sense that in the world of high tech and big data, we could now "measure" everything, and thus finding the right metric was not a technical challenge, but a logical one.

Although I was uneasy with our work on this project for reasons I could not quite articulate at the time, in hindsight there were two fundamental issues at play.

The first and most obvious issue is common to most consulting engagements. Is a small team of external observers, with a relatively one-dimensional view developed over a few short months, the best-positioned party to resolve such deep-seated and consequential structural issues? This is especially troubling given that consultants are most often incentivized to unearth existing problems rather than spend time understanding why things have come to be the way they are.

The second is a broader issue: the obsession with using technology to find "objective" representations of truth through metrics that can then be optimized, often in isolation and devoid of their larger context.

Both of these points represent some of the key problems that drive detrimental unintended social consequences from technological interventions, in both the private and public spheres. Self-professed serial problem-solvers (such as Shane Snow and virtually all consultants) believe that there is more value in modular solutions than in embracing and respecting context and complexity. Compounding this is the well-documented tendency of the human targets of metrics to game them to their advantage, often at the expense of the original goals and objectives.

The impact of such thinking is clear to see in interventions small and large. Some of the best examples of this context-free, metric-driven impact assessment methodology come from the world of international development, through organizations such as the UN and USAID. If we are to avoid repeating the mistakes of our past, the first step toward correcting the inherent myopia of this form of solutionism will have to be a greater willingness to engage with target populations, and to investigate the impact of technological solutions on them, more deeply and empathetically.
