Thinking Outside the Black Box: Why Transparency in AI is Just the Beginning

As algorithms increasingly make decisions that affect people’s lives, the world is pushing for greater transparency into how they work. But while transparency is a necessary starting point, it’s not the end goal. You might have a perfectly transparent system, but that doesn’t mean it’s an unbiased one.

Holding AI and ML systems accountable is a multifaceted endeavor. Doing so requires an understanding of the “why” behind their decision making. We want to know why we were rejected for a bank loan, or why apparently objective machines are participating in racial profiling.

But being able to see and understand the “why” doesn’t mean that “why” is without problems. Take our bank loan example. A system might be transparent in that it lists thousands of factors influencing its decision making, none of which are outwardly problematic. But seemingly innocuous factors such as zip code or a parent’s profession could be functioning as proxies for race.
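To make the proxy problem concrete, here is a minimal sketch of the kind of audit a lender could run before training anything: check how well each supposedly neutral feature predicts a protected attribute in historical data. The toy data, column names and the 0.9 cutoff below are all illustrative assumptions, not a prescribed method.

```python
import pandas as pd

# Hypothetical historical loan data; every column name and value here is made up.
df = pd.DataFrame({
    "zip_code":       ["02139", "02139", "60644", "60644", "02139", "60644"],
    "parent_job":     ["lawyer", "teacher", "janitor", "clerk", "lawyer", "janitor"],
    "applicant_race": ["white", "white", "black", "black", "white", "black"],
})

protected = "applicant_race"

for feature in df.columns.drop(protected):
    # For each value of the feature, what share of rows belong to the most
    # common protected-attribute class? (1.0 means the value pins it down exactly.)
    purity = df.groupby(feature)[protected].agg(
        lambda s: s.value_counts(normalize=True).max()
    )
    # Weight each value's purity by how often it occurs to get one overall score.
    score = (purity * df[feature].value_counts(normalize=True)).sum()
    if score > 0.9:  # arbitrary cutoff for this sketch
        print(f"'{feature}' is acting as a strong proxy for {protected} (score={score:.2f})")
```

Real data is rarely this clean, but the same idea (measuring how much an innocuous-looking feature reveals about a protected one) underlies most proxy audits.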

Transparency helps us flag bias – but it isn’t enough to counteract it. So what can we do to ensure that our AIs are acting fairly and equitably?

The evil is in the data (or lack thereof)

Where algorithms go bad is, in large part, the data they’re trained on. Those datasets hold all the biases, prejudices and blind spots we bring to them, and those flaws come out to play once the data is fed into an algorithm. If the data is racist or sexist, those issues will show up even if you exclude racial and gendered variables. Your AI will just use proxy measures instead.

To minimize bias, it’s essential to deal with these issues as close to the source as possible. That means holding ourselves – both as companies and citizens – accountable for how we collect, clean and treat our data.

Biased data is absolutely an issue in AI and ML. But it’s not the only issue that arises when dealing with large datasets. Others include the context around how our datasets are built and how the decisions of an algorithm are ultimately applied. Take the college admissions process as an example. Factors such as test scores, grades and extracurriculars can be weighted to highlight promising candidates. But what about prospective students whose achievements don’t show up on those measures? How can the system identify and recognize their accomplishments?
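As a sketch of why such a rubric can miss people, here is a toy weighted score of the kind described above. The weights, field names and example applicant are assumptions made for illustration, not a real admissions model.

```python
# Toy rubric: weights and field names are invented for this example.
WEIGHTS = {"test_score": 0.5, "gpa": 0.3, "extracurriculars": 0.2}

def admissions_score(candidate: dict) -> float:
    """Weighted sum over the measured factors; anything unmeasured contributes zero."""
    return sum(WEIGHTS[k] * candidate.get(k, 0.0) for k in WEIGHTS)

# A student who spent evenings working to support their family has no
# "extracurriculars" field to fill in, so that effort is simply invisible.
applicant = {"test_score": 0.7, "gpa": 0.8}
print(admissions_score(applicant))  # 0.59 on a 0-1 scale
```

The rubric isn’t malicious; it just can’t score what it never measures.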

Machines shouldn’t be making decisions that can have a major impact on people’s lives. They can’t possibly know that they’re not getting the whole picture. They’re just machines. They lack a true understanding of ethics, compassion and plain old common sense.

What we need is to admit fallibility

The solution would seem to be that humans should be the ones making those decisions – after all, we have the benefit of context, empathy and experience.

But humans also suffer from, well, being human. We all have our own internal biases and prejudices, and we also suffer from things like cognitive overload and changeable moods. Judges, for example, are more likely to be lenient early in the day or after a lunch break. We’re the reason our data is biased in the first place, and relying on humans to counteract the prejudice put into the system by humans has limited value.

The more we hear about AI “fails,” the more it becomes clear that despite our best efforts the correct decision in a situation isn’t always obvious, and that bias is hard to detect. What we need isn’t just transparency, but systems that acknowledge their own fallibility – and seek to correct it.

Algorithms may be biased, but that’s because we are. Those issues are compounded by the fact that another part of being human is that people make mistakes. We might misunderstand what an algorithm is telling us, or find bias popping up in an unexpected place. We might find that our business goals don’t map clearly to variables that can be optimized. Or we might be misled into thinking that a transparent system is necessarily a fair one.

Yes, we should be demanding transparency around the data that AI and ML learn from. But we should also be demanding it around the laws, processes and procedures that surround any workflow where algorithms are present. Checks and balances are needed to reduce bias and error both in our data and in the way we understand and apply the results of our algorithms in the real world.
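One concrete check-and-balance is to audit outcomes, not just inputs. The sketch below applies an “80% rule” style screen (flag any group whose approval rate falls below 80% of the best-off group’s rate) to a hypothetical decision log; the group labels, log and threshold are illustrative assumptions.

```python
from collections import defaultdict

def approval_rates(decisions):
    """decisions: iterable of (group_label, approved: bool) pairs."""
    counts = defaultdict(lambda: [0, 0])  # group -> [approved, total]
    for group, approved in decisions:
        counts[group][0] += int(approved)
        counts[group][1] += 1
    return {g: approved / total for g, (approved, total) in counts.items()}

def disparate_impact_screen(decisions, threshold=0.8):
    """Flag any group whose approval rate is below `threshold` times the highest rate."""
    rates = approval_rates(decisions)
    best = max(rates.values())
    return {g: (rate, rate / best >= threshold) for g, rate in rates.items()}

# Hypothetical audit log of (group, decision) pairs.
log = [("A", True), ("A", True), ("A", False),
       ("B", True), ("B", False), ("B", False)]
print(disparate_impact_screen(log))
# Group B approves at half of group A's rate, so it fails the screen.
```

A screen like this doesn’t explain why a gap exists; it just makes the gap visible, which is where the surrounding processes and human review come in.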

After all, transparency is meaningless unless paired with an effort to identify, mitigate and act upon the issues it surfaces. Transparency is just a small part of a much larger problem.

Bio:

Paul Barba, Chief Scientist, Lexalytics

Paul Barba has years of experience developing, architecting, researching and generally thinking about AI/machine learning, text analytics and NLP software. He has been working on growing system understanding while reducing human intervention and bringing text analytics to web scale. Paul has expertise in diverse areas of NLP and machine learning, from sentiment analysis and machine summarization to genetic programming and bootstrapping algorithms. Paul continues to bring cutting-edge research to everyday business problems, while working on new “big ideas” to push the whole field forward. Paul earned a degree in Computer Science and Mathematics from UMass Amherst.