How Important Is Explainability?

Insisting on understanding the inner workings of ML algorithms can distract from realizing the benefits of good predictions.

Explainability Explained

With the rise of artificial intelligence has come skepticism. The mystery of how AI reaches its conclusions makes questioning the “black box” natural. “Explainability” refers to being able to trace and follow the logic AI algorithms use to form their conclusions. In some cases, particularly in unsupervised learning, the honest answer is, “We don’t know.” How disconcerting is that? Whether or not the answers are valid, not being able to “show your work” engenders suspicion.

Supervised learning is different. Algorithms such as decision trees and linear regression have clearly defined math that humans could follow to arrive at the same answers as automated machine learning (AutoML), if only they had time to work through millions of calculations. Nevertheless, unfamiliarity with the data science behind supervised learning also causes doubts.

Explainability could be a concern when …

  • Making life-or-death decisions
  • Proving court-of-law-style certainty
  • Completely eradicating biases

Fortunately, most practical business decisions do not need to pass those tests.

How to Convince Decision Makers to Trust AutoML

First, emphasize the ROI of good predictions. In marketing, one of the most common use cases for AutoML, predictions need only be a little better than a coin flip to deliver large returns. Next, show the evidence:

  • Model accuracy is calculated by comparing AutoML predictions to known results held out from training. Accuracy on that held-out data estimates how the model will perform on new, unseen data, provided the new data resembles the training data.
  • Squark shows lists of the factors that were most predictive, which explains enough of the model’s logic to inspire confidence.
  • Data features can easily be added and removed to test biases and understand predictive behaviors.
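The ROI point above can be made concrete with simple arithmetic. The sketch below uses entirely hypothetical campaign numbers (list size, contact cost, revenue per conversion, and conversion rates are assumptions, not Squark data) to show why a model that is only modestly better than random targeting can still pay off handsomely.

```python
# Hypothetical campaign numbers -- illustrative assumptions, not real data.
contacts = 100_000              # total prospect list
cost_per_contact = 0.50         # dollars per outreach
revenue_per_conversion = 40.0   # dollars per converted prospect
base_rate = 0.02                # 2% of prospects convert if contacted at random

def campaign_profit(conversion_rate: float, targeted: int) -> float:
    """Expected profit from contacting `targeted` prospects at the
    given conversion rate: revenue from conversions minus contact costs."""
    conversions = conversion_rate * targeted
    return conversions * revenue_per_conversion - targeted * cost_per_contact

# Baseline: contact a random half of the list; the 2% rate is unchanged.
random_profit = campaign_profit(base_rate, contacts // 2)

# "A little better than a coin flip": suppose the model's chosen half
# converts at 3% instead of 2% (an assumed 1.5x lift).
model_profit = campaign_profit(0.03, contacts // 2)

print(f"random targeting: ${random_profit:,.0f}")   # $15,000
print(f"modest model:     ${model_profit:,.0f}")    # $35,000
```

Even under these conservative assumed numbers, a one-percentage-point improvement in the targeted conversion rate more than doubles the campaign's profit, which is the kind of evidence decision makers respond to.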

A Useful Analogy

Before using Google Maps to find the best route to your destination, do you investigate which AI algorithms it used and why it chose that exact route? Of course not. The reasons are simple:

  • The algorithms are not understandable unless you are a data scientist.
  • The results are usually very good.
  • Time wasted checking the process delays reaching your goal.

Squark displays algorithm performance information along with prediction results. This shows how many models were evaluated to make sure the best one was used. Think of the lower-ranked models as the gray, “12 minutes slower” routes and start your journey with confidence.

Squark is a no-code AI as a Service platform that helps data-literate business users make better decisions with their data. Squark is used across a variety of industries and use cases to uncover AI-driven insights from tabular and textual data, prioritize decisions, and take informed action. The Squark platform is designed to be easy to use, accurate, scalable, and secure.

Copyright © 2023 Squark. All Rights Reserved | Privacy Policy