Artificial intelligence will eliminate more bias than it creates

While the stories we hear about potential biases tend to be the disastrous ones, there can also be ‘good AI bias,’ as organizations try to use machine learning to pursue particular goals.

Lately, there has been a lot of discussion about potential biases in artificial intelligence systems. But what do we actually mean by bias? And, on balance, will AI create more or less of it?

The first question doesn’t get the attention it deserves. While the origins of the word bias are obscure, its root meaning revolves around some sort of slant or angle. In English lawn bowling, “the bias” is a deliberate overweighting of one side of the ball so that it is easier to curve when rolling. Nothing positive or negative is implied in any of these definitions; it’s only in more recent times that bias has taken on its current—almost entirely pejorative—meaning.

That there are many downsides to human bias is undeniable. We know that criminal sentencing varies greatly depending upon the deciding judge; loan and credit approval biases have a long and painful history; teachers at all levels have been shown to grade student papers differently depending upon who they believe wrote them; managers tend to hire people who seem similar to themselves; journalists have vastly different views regarding what issues should be covered.

Behavioral science has identified so many sources of human bias that the phenomenon is said to be over-explained. We rely too much on current societal stereotypes; we overly value our own personal experiences, especially our recent ones; we are quick to embrace anecdotes that confirm our existing views; we generally have a poor sense of statistics and probabilities; and we over-rate the reliability of our personal gut feel. Perhaps most importantly, we perpetuate these biases through our reluctance to admit mistakes, a trait that often gets stronger the more educated and successful one becomes.

But not all biases are bad. In many ways, our biases are inseparable from our beliefs, hopes, interests, opinions, allegiances, preferences, inclinations, tastes and values—the very things that define us as individuals and give society such energy and dynamism. Who would want a bias-free world where everybody thinks and acts in the same way? That’s the stuff of dystopian science fiction.

Of course, AI systems can introduce their own bad biases, as Apple has recently learned with its Apple Card. If a machine learning system is based on a data set that is biased by age, gender, race or some other factor, or if there are biases in the way a system is conceived or constructed, the system’s outputs will tend to reflect those biases. But while the stories we hear about will tend to be the disastrous ones, there can also be “good AI bias,” as organizations try to use machine learning to pursue particular goals.
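The mechanism is easy to see in miniature. The toy sketch below uses entirely hypothetical numbers: a simple frequency-based "model" is fit to skewed historical approval decisions, and its learned approval rates simply reproduce the historical skew for each group.

```python
# Toy illustration (hypothetical data): a model trained on biased historical
# decisions reproduces that bias. Past approvals here favored group A.
from collections import defaultdict

# (group, outcome) pairs: 1 = approved, 0 = denied -- invented for illustration
historical = [("A", 1)] * 80 + [("A", 0)] * 20 + [("B", 1)] * 40 + [("B", 0)] * 60

approved = defaultdict(int)
total = defaultdict(int)
for group, outcome in historical:
    approved[group] += outcome
    total[group] += 1

# The "learned" approval rate per group mirrors the skew in the training data.
for g in sorted(total):
    print(f"group {g}: predicted approval rate {approved[g] / total[g]:.0%}")
```

The point is not that the model is malicious, but that it faithfully encodes whatever pattern, fair or unfair, its training data contains.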

Given that there are both good and bad biases, stemming from humans and machines alike, there are three main ways of assessing the impact of artificial intelligence. Can we use AI to reduce “bad” human biases? Can we minimize the bad bias that might be embedded in an AI system? And is the overall balance between good and bad bias positive or negative?

When viewed along these dimensions, the outlook for AI becomes much more sanguine than today’s headlines imply.

First, data analytics and machine learning are now routinely used to confront the many human biases listed above, and thus they clearly can reduce bad bias. So this is a huge plus.

Second, data scientists and statisticians can examine the representativeness of a system’s underlying data in pretty much the same way that they assess the representativeness of a survey sample. Once biases are identified, adjustments can be made, typically without the emotional resistance that comes with confronting human prejudices. This means that unfair machine learning biases will tend to decrease over time, another big plus.
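One common form such an adjustment takes is reweighting. The minimal sketch below (the group names and percentages are hypothetical) compares a training set's demographic shares to known population shares and computes the weight each group's records should receive so that the reweighted sample matches the population.

```python
# A minimal sketch of a representativeness check followed by reweighting.
# All group names and proportions are hypothetical.

# Known population shares vs. shares observed in the training data
population = {"group_a": 0.50, "group_b": 0.30, "group_c": 0.20}
training_sample = {"group_a": 0.70, "group_b": 0.20, "group_c": 0.10}

# A group's weight is how much each of its records should count so that
# the reweighted sample matches the population distribution.
weights = {g: population[g] / training_sample[g] for g in population}

for group, w in sorted(weights.items()):
    status = "over-represented" if w < 1 else "under-represented"
    print(f"{group}: weight {w:.2f} ({status})")
```

This is essentially the same post-stratification arithmetic survey statisticians apply to polling samples, which is why the analogy in the paragraph above holds.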

Perhaps most decisively, in areas of high-stakes decision-making, human expertise will still be vital because making the least biased decision is usually less important than making the most correct and/or fair one. Just as human/machine cooperation is still the optimal way to play chess, invest money or make medical diagnoses, so it is with reducing bad bias.

Whether we are seeking more equitable criminal sentencing, student grading, or loan/credit approvals, the combination of machine learning and human judgement will likely produce the best and fairest results. It’s another plus for those organizations willing to embrace a slightly less algorithmic path.

The bottom line is that long-standing human biases have created many societal inequities that well-designed and ever-improving computer systems can significantly reduce. While there will surely be many mistakes and even scandals along the way, over time the combination of machine and human decision-making will be more accurate, more fair, and less biased than anything we saw in the pre-AI era. We should welcome the change.
