Saturday, January 25, 2014

Campbell's Law

I'd like this blog to be a repository of possibly-helpful ideas for Hamilton Central's short-term, medium-term, and long-term options... and of course at the moment it's going to focus on the 2014-15 Budget Process, and especially on the next Upcoming Public Event:

Please Come To The 5:30PM Jan 29th High School Library Meeting

Bring Ideas



On the other hand, I do want to get back to putting up posts about interesting articles/books/TED talks that I think are relevant, even if only indirectly. This morning I see a Wired magazine article, Why Quants Don’t Know Everything, about the methods by which we evaluate options, and specifically about the limits of the sort of quantitative methods that geeks like me get involved with implementing:
...all these new systems—metrics, algorithms, automated decisionmaking processes—result in humans gaming the system in rational but often unpredictable ways. Sociologist Donald T. Campbell noted this dynamic back in the ’70s, when he articulated what’s come to be known as Campbell’s law: “The more any quantitative social indicator is used for social decision-making,” he wrote, “the more subject it will be to corruption pressures and the more apt it will be to distort and corrupt the social processes it is intended to monitor.”
On a managerial level, once the quants come into an industry and disrupt it, they often don’t know when to stop. ...As soon as managers pick a numerical metric as a way to measure whether they’re achieving their desired outcome, everybody starts maximizing that metric rather than doing the rest of their job—just as Campbell’s law predicts.
Policing is a good example, as explained by Harvard sociologist Peter Moskos in his book Cop in the Hood: My Year Policing Baltimore’s Eastern District. Most cops have a pretty good idea of what they should be doing, if their goal is public safety: reducing crime, locking up kingpins, confiscating drugs. It involves foot patrols, deep investigations, and building good relations with the community. But under statistically driven regimes, individual officers have almost no incentive to actually do that stuff. Instead, they’re all too often judged on results—specifically, arrests. ...
The same goes for the rise of “teaching to the test” in public schools, or the perverse incentives placed on snowplow operators, who, paid by the quantity of snow cleared, might simply ignore patches of lethal black ice. Even with the 2012 Obama campaign, it became hard to learn about the candidate’s positions by visiting his website, because it was so optimized for maximizing donations—an easy and obvious numerical target—that all other functions fell by the wayside.
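
To make that gaming dynamic concrete, here's a toy sketch in Python. Everything in it is invented for illustration: an agent splits a fixed budget of effort between work the dashboard counts and work it doesn't, and the real outcome needs both. Optimize the outcome and you split your effort; optimize the metric and the unmeasured half of the job simply stops getting done.

# A toy model of Campbell's law. All names and numbers here are made up:
# "measured" work is whatever the dashboard counts (arrests, test scores,
# tons of snow plowed); "unmeasured" work is the rest of the job.

def true_outcome(measured, unmeasured):
    # The real goal needs both kinds of work, with diminishing returns.
    return measured ** 0.5 + unmeasured ** 0.5

def metric(measured, unmeasured):
    # The dashboard only ever sees the measured half.
    return measured

def best_split(objective, steps=100):
    # Brute-force the effort split (out of a fixed budget of 1.0)
    # that maximizes the given objective.
    candidates = [i / steps for i in range(steps + 1)]
    return max(candidates, key=lambda m: objective(m, 1.0 - m))

honest = best_split(true_outcome)  # optimize the real goal
gamed = best_split(metric)         # optimize the metric instead

print(f"optimizing the outcome: {honest:.2f} effort on measured work,"
      f" true outcome {true_outcome(honest, 1 - honest):.2f}")
print(f"optimizing the metric:  {gamed:.2f} effort on measured work,"
      f" true outcome {true_outcome(gamed, 1 - gamed):.2f}")

The metric-chaser posts a perfect number on the dashboard and does measurably worse at the actual job. That's Campbell's law in twenty lines.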

At best, your measurements make a model: as George Box put it, all models are wrong, but some are useful. And as he added: "Since all models are wrong the scientist cannot obtain a 'correct' one by excessive elaboration. On the contrary... overelaboration and overparameterization is often the mark of mediocrity." A little quantification, a little measurement, can be very helpful; more is often worse. (The world would be easier for me to deal with if this weren't so.)
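
Here's Box's point as a quick numerical sketch (synthetic data, invented noise level, needs numpy): fit polynomials of increasing degree to a dozen noisy measurements of a smooth curve, then score each fit against points it never saw. In most runs the middling model comes out ahead: it's wrong, but useful, while the heavily-parameterized one mostly memorizes the noise.

import numpy as np

# Overelaboration, numerically: past some point, extra parameters make
# the model worse on new data. The curve, noise level, and degrees below
# are all invented for illustration.
rng = np.random.default_rng(0)
x_train = np.linspace(0, 3, 12)
y_train = np.sin(x_train) + rng.normal(0, 0.2, x_train.size)  # noisy data
x_test = np.linspace(0.1, 2.9, 50)                            # unseen points
y_test = np.sin(x_test)

for degree in (1, 3, 7):
    coeffs = np.polyfit(x_train, y_train, degree)   # fit a polynomial
    mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree}: held-out mean squared error {mse:.3f}")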


Or then again, maybe not.
