Making sure AI is fair

Fairness is one of the most critical challenges AI presents to Legal and Compliance leaders in enterprises deploying the technology. Here are some key things every Legal and Compliance team should know.

Prevent AI bias by making sure you feed it balanced training data

In early 2018, researchers from MIT and Microsoft Research published an unexpected finding about AI software that analyzes human faces. They discovered that the online facial analysis services offered by some of the biggest names in the industry, including IBM and Microsoft, had markedly higher error rates on dark-skinned women than on light-skinned men.

Subject faces in the MIT and Microsoft Research face recognition study
(See the original research report here.)

This finding troubled the conscience of the AI community and provoked a storm of media coverage. That reaction is understandable, because over the past few years AI-based face recognition has become widespread, used not only by social media services and personal computing devices, but also by corporations and governments, including many law enforcement agencies. A mistaken identification may be trivial on Facebook, but it is less so when it causes police to arrest the wrong person.

But the good news is that this finding has already spurred significant improvements in the performance of face recognition algorithms, along with greater awareness of why AI fairness is not something we can take for granted. Both IBM and Microsoft reacted by quickly overhauling their facial analysis AI services. Within a few months, both had rolled out improved services that scored dramatically better on minorities, especially on darker-skinned women.

The truth is that, despite this temporary setback, face recognition software really has made extraordinary progress since computer scientists first started exploring it more than 50 years ago. The paradox is that even as algorithms have become much better in recent years, awareness of the risks their deficiencies carry has also increased.

But you might be wondering: if today’s AI software is so smart, how could it make such blunders on something as seemingly simple as telling the difference between women’s and men’s faces?

The real breakthrough in facial analysis has come only in the past five or six years, with the rise of an approach known as deep learning, which is at the heart of modern AI. Thanks to deep learning, accuracy in face recognition as measured on standard benchmarks has improved from around 92% in 2012 to better than 98.8% today.

But these benchmarks are not necessarily representative of all face recognition tasks in the real world. The problem with the original biased versions of the IBM and Microsoft facial analysis services was that they were trained on data sets that did not contain a sufficiently representative sample of the full diversity of human faces. Once the developers of these services learned of their skewed results, they were able to fix the problem by boosting the gender and skin tone balance of faces in their training data.
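The kind of skew described above is straightforward to detect once you measure accuracy separately for each demographic group rather than in aggregate. The sketch below illustrates the idea; the group labels, data, and function name are purely illustrative, not taken from the IBM or Microsoft services.

```python
from collections import defaultdict

def error_rates_by_group(records):
    """Compute the error rate for each demographic group.

    records: iterable of (group, predicted_label, true_label) tuples.
    Returns a dict mapping each group to its error rate.
    """
    totals = defaultdict(int)
    errors = defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        if predicted != actual:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Illustrative audit data: an aggregate error rate can look low
# even when one subgroup is misclassified far more often.
audit = [
    ("lighter-skinned male", "male", "male"),
    ("lighter-skinned male", "male", "male"),
    ("darker-skinned female", "male", "female"),   # misclassification
    ("darker-skinned female", "female", "female"),
]
print(error_rates_by_group(audit))
# → {'lighter-skinned male': 0.0, 'darker-skinned female': 0.5}
```

A large gap between groups in such an audit is the signal that the training data may be unrepresentative, which is exactly what prompted the vendors to rebalance gender and skin tone in their training sets.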

The moral of this story is that AI requires careful human supervision

When a technology as powerful as face recognition spreads into every corner of society and the economy, as is now happening, we need to think carefully about how it will be controlled. It should perhaps not surprise you, then, that Microsoft's President and Chief Legal Officer Brad Smith last summer issued a forceful public call for government regulation of face recognition AI, proposing the creation of a bipartisan expert commission to advise Congress on suitable legislation.

In the months and years ahead, we are going to hear a lot more about government regulation of AI. But necessary as such initiatives are, it is also important that we not handicap AI’s extraordinary potential to help humans in many domains. Getting that balance right will take careful thought by the tech industry, public policymakers, and the enterprises and consumers who use AI.

“As in so many times in the past, we need to ensure that new inventions serve our democratic freedoms pursuant to the rule of law. Given the global sweep of this technology, we’ll need to address these issues internationally, in no small part by working with and relying upon many other respected voices. We will all need to work together, and we look forward to doing our part.”

Brad Smith, Microsoft's President and Chief Legal Officer