As organizations rely more on artificial intelligence and machine learning models, how can they ensure those models are trustworthy?
The promise of artificial intelligence (AI) suggests that machines will augment human understanding by automating decision-making. Josh Parenteau, Director of Market Intelligence at Tableau, explained how artificial intelligence and machine learning will act as another perspective, "helping uncover those insights that have gone previously undiscovered." Gartner research indicates that by 2020, "85% of CIOs will be piloting artificial intelligence programs through a combination of buy, build, and outsource efforts." But as organizations become more reliant on machine learning models, how can humans be sure that the models' recommendations are trustworthy?
Many machine learning applications don’t currently have a way to "look under the hood" to understand the algorithms or logic behind decisions and recommendations, so organizations piloting AI programs are rightfully concerned about widespread adoption. As outlined by Adrian Weller, senior research fellow in machine learning at the University of Cambridge, "Transparency is often deemed critical to enable effective real-world deployment of intelligent systems" like machine learning. Transparency matters for several reasons: chiefly, to verify that models are working as designed and to establish trust with users so they can confidently make decisions based on a model's predictions.
The need for transparency has led to the growth of explainable AI, the practice of understanding and presenting transparent views into machine learning models. Decision makers expect to be able to ask follow-up questions about why a model says something, how confident it is, and what it would say if the inputs were different, much as a leader would question a human expert when making critical decisions. As Richard Tibbetts, Product Manager for AI at Tableau, notes, "Decision makers are right to be skeptical when answers provided by AI and machine learning cannot be explained. Analytics and AI should assist—but not completely replace—human expertise and understanding."
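To make those three questions concrete, the sketch below shows, under stated assumptions, how a data science team might answer them for a simple model. The loan-approval scenario, feature names, and data are hypothetical, and the code uses a plain scikit-learn logistic regression rather than any particular vendor's tooling.

```python
# A minimal sketch of the three questions a decision maker might ask of a model:
# why did it say that, how confident is it, and what would it say if inputs changed?
# The loan-approval features and values below are hypothetical, for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["income", "debt_ratio", "years_employed"]

# Toy training data standing in for a real, governed dataset.
X_train = np.array([
    [65, 0.2, 10],
    [40, 0.5, 2],
    [90, 0.1, 15],
    [30, 0.7, 1],
    [55, 0.3, 6],
    [25, 0.8, 0],
])
y_train = np.array([1, 0, 1, 0, 1, 0])  # 1 = approve, 0 = decline

model = LogisticRegression().fit(X_train, y_train)
applicant = np.array([[50, 0.4, 4]])

# "How confident is it?" -- the predicted class probability.
print(f"P(approve) = {model.predict_proba(applicant)[0, 1]:.2f}")

# "Why does it say that?" -- per-feature contribution to the log-odds
# (coefficient * feature value), a simple linear-model explanation.
contributions = model.coef_[0] * applicant[0]
for name, contribution in zip(feature_names, contributions):
    print(f"{name}: {contribution:+.2f}")

# "What if the inputs were different?" -- perturb one input and re-score.
what_if = applicant.copy()
what_if[0, 1] = 0.7  # raise the debt ratio
print(f"P(approve | higher debt ratio) = {model.predict_proba(what_if)[0, 1]:.2f}")
```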
Line of business leaders in organizations, particularly risk-sensitive organizations such as financial services and pharmaceutical companies, are demanding that data science teams use models that are more explainable and that they provide documentation or an audit trail showing how models are constructed. As data scientists are tasked with explaining these models to business users, they are leaning on BI platforms as an interactive way to explore and validate a model's conclusions.
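As one illustration of what such documentation might look like, the sketch below records basic provenance for a model as a JSON audit record. Every field name, file name, and value shown is a hypothetical assumption rather than a prescribed standard.

```python
# A minimal sketch of the kind of audit record a data science team might attach
# to a model so business reviewers can see how it was constructed.
# All identifiers, file names, and metrics below are hypothetical.
import json
from datetime import datetime, timezone

audit_record = {
    "model_name": "credit_approval_v3",            # hypothetical identifier
    "algorithm": "LogisticRegression",
    "training_data": "loans_2018_q4.csv",          # hypothetical data source
    "features": ["income", "debt_ratio", "years_employed"],
    "hyperparameters": {"C": 1.0, "penalty": "l2"},
    "trained_at": datetime.now(timezone.utc).isoformat(),
    "trained_by": "data-science-team",
    "validation_auc": 0.87,                        # illustrative metric
}

# Write the record alongside the model artifact for later review.
with open("credit_approval_v3_audit.json", "w") as f:
    json.dump(audit_record, f, indent=2)
```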
Ultimately, companies have embraced the value of artificial intelligence and machine learning. But to make a disruptive impact in organizations, AI has to be trusted. It must justify its conclusions intelligibly and as simply as possible, and it must answer follow-up questions dynamically, all to help humans better understand their data.