Machine Learning and Unintended Consequences

Everyone I know is familiar with the idea of unintended consequences. Here are four well-known examples (pulled from the Internet, so take the details with a grain of salt).

The cobra effect: The British government was concerned about the number of cobras in Delhi and offered a bounty for dead snakes. Large numbers of cobras were killed for the reward, but eventually entrepreneurs began breeding cobras just to collect it. The government cancelled the reward program, and the breeders set their now-worthless snakes free, leaving Delhi with more cobras than before. Not good. (“The Cobra Effect: Good Intentions, Perverse Outcomes”, Psychology Today).

The welfare effect: In many states, welfare and social assistance pay more than recipients would earn by working. For example, in Massachusetts recipients can get the equivalent of over $55,000 per year, and unwed mothers are paid more for having more children. The unintended consequences can be that people on welfare don’t look for jobs, and that women are incentivized not to marry and to have more children out of wedlock. (“Intended and Unintended Effects of the War on Poverty”, Journal of Policy Analysis and Management).

The diversity effect: Companies that set hiring goals for underrepresented groups and genders (for all intents and purposes hiring quotas, especially when executive compensation is tied to the percentages) can end up, when there just aren’t enough qualified candidates, with employees who are less qualified than those hired strictly on the basis of ability. Employees hired as part of diversity initiatives can feel overwhelmed, lose confidence, perform poorly, and end up reinforcing the stereotypes held by other employees. (“Unintended Consequences of Diversity Initiatives: Types, Causes, and Interventions”, Harvard Business Review).


Is this machine learning system biased? I wouldn’t want someone who doesn’t understand ML to answer that question.

The discrimination effect: Government regulations intended to prevent discrimination in the workplace can backfire spectacularly. For example, statistically, any company with more than 500 employees is virtually guaranteed to be discriminatory in a mathematical sense (“Unintended Consequences of EEO Enforcement Policies: Being Big is Worse than Being Bad”, Journal of Business). This can encourage lawsuits against large companies and have a chilling effect on their hiring practices.
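To see how “discriminatory in a mathematical sense” can happen, note that the significance tests typically used to detect hiring disparity gain statistical power as headcount grows, so a small deviation from the benchmark labor pool that is unremarkable at a small firm becomes a flagged disparity at a large one. Here is a minimal sketch of that effect using a two-sided binomial test; the benchmark share, hiring share, and company sizes are hypothetical numbers chosen for illustration, not figures from the cited paper.

```python
# Minimal sketch: the same small hiring disparity is flagged as "significant"
# only when the company is large, because the test gains power with headcount.
# The benchmark share, hiring share, and company sizes are hypothetical.
from scipy.stats import binomtest

benchmark = 0.50  # hypothetical share of a group in the qualified labor pool
hired     = 0.45  # hypothetical share of that group among the company's hires

for n_employees in (100, 200, 500, 2000):
    k = round(hired * n_employees)  # observed number of hires from the group
    p_value = binomtest(k, n_employees, benchmark).pvalue
    verdict = "significant disparity" if p_value < 0.05 else "no significant disparity"
    print(f"{n_employees:5d} employees: {k:4d} hires from the group, "
          f"p = {p_value:.3f} -> {verdict}")
```

With these made-up numbers, the 100- and 200-employee companies pass the test while the 500- and 2,000-employee companies fail it, even though all four hire the group at exactly the same 45 percent rate. That is the sense in which being big is worse than being bad.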

OK, the list of examples of unintended consequences could go on forever. What does this have to do with machine learning?

In many cases, policies that have negative unintended consequences are put in place by well-meaning people who just don’t have the technical knowledge to fully understand the problem at hand. In machine learning, everyone seems to think they’re an expert in areas such as bias and ethics. My colleagues have told me about machine learning ethics committees at their large tech companies that are composed of managers who have never coded an ML system, graduate students who are woefully inexperienced, and a sprinkling of diverse members included just for appearances. I’m glad I don’t work in such an environment (“[Company] Cancels AI Ethics Board in Response to Outcry”, BBC Technical News).

My thought is that policies related to machine learning and artificial intelligence should be carefully considered by a combination of technical experts (who understand the nuts and bolts of machine learning prediction systems) and a wide range of policy experts (who understand big picture consequences). There are lots of unintended consequences related to machine learning that are just waiting to jump up and bite.

