With the rise of artificial intelligence has come skepticism. The mystery of how AI works makes questioning the “black box” natural. “Explainability” refers to being able to trace and follow the logic AI algorithms use to reach their conclusions. In some cases, particularly in unsupervised learning, the honest answer is, “We don’t know.” How disconcerting is that? Whether or not the answers are valid, an algorithm that cannot “show its work” invites suspicion.
Supervised learning is different. Algorithms such as decision trees and linear regression follow clearly defined math that humans could trace to arrive at the same answers as automated machine learning (AutoML), if only they had time to work through millions of calculations. Nevertheless, unfamiliarity with the data science behind supervised learning also breeds doubt.
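To make that concrete, here is a minimal, hypothetical sketch (Python with scikit-learn, on made-up numbers, and not Squark’s internals) of how a supervised model can “show its work”: a linear regression is just coefficients you can apply by hand, and a decision tree prints as readable if/then rules.

```python
# Illustrative sketch only: traceability of supervised models.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor, export_text

# Toy marketing-style data: [ad_spend, email_opens] -> revenue (made up)
X = np.array([[100, 5], [200, 9], [300, 14], [400, 20], [500, 24]])
y = np.array([1000, 1900, 3100, 4050, 4900])

linreg = LinearRegression().fit(X, y)
print("Coefficients:", dict(zip(["ad_spend", "email_opens"], linreg.coef_)))
print("Intercept:", linreg.intercept_)
# Anyone can redo the math by hand: prediction = intercept + sum(coef * feature)

tree = DecisionTreeRegressor(max_depth=2, random_state=0).fit(X, y)
print(export_text(tree, feature_names=["ad_spend", "email_opens"]))
# The printed rules are the tree's entire decision logic -- its "shown work".
```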
Explainability could be a concern when …
Fortunately, most practical business decisions do not need to pass those tests.
First, emphasize the ROI of good predictions. In marketing, one of the most common use cases for AutoML, predictions only need to be a little better than a coin flip to deliver huge returns. Next, show the evidence:
Before following the route Google Maps recommends to your destination, do you investigate which AI algorithms it used and why it chose that exact route? Of course not. The reasons are simple:
Squark displays algorithm performance information along with prediction results, showing that a range of models was evaluated to make sure the best one was used. Think of the lower-ranked models as the gray, “12 minutes slower” routes and start your journey with confidence.
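As an illustration of the idea only (a sketch on synthetic data, not Squark’s actual evaluation pipeline), several candidate models can be scored the same way and ranked into a leaderboard, so the top-ranked model is the one used for predictions:

```python
# Hedged sketch of a model "leaderboard": score candidates identically, rank best-first.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, n_features=10, random_state=0)

candidates = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "decision_tree": DecisionTreeClassifier(max_depth=5, random_state=0),
    "random_forest": RandomForestClassifier(n_estimators=100, random_state=0),
}

# Evaluate every candidate with the same cross-validated metric, then sort.
leaderboard = sorted(
    ((name, cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean())
     for name, model in candidates.items()),
    key=lambda item: item[1],
    reverse=True,
)

for rank, (name, auc) in enumerate(leaderboard, start=1):
    print(f"{rank}. {name}: mean AUC = {auc:.3f}")
# The top row is the "fastest route"; the rest are the gray alternatives.
```

Scoring every candidate with the same metric is what makes the ranking fair, just as every route is judged by the same travel time.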
Judah Phillips