Why healthcare wants to crack the ‘black box’ surrounding AI

For clinicians to trust applications that use artificial intelligence, they must understand how the technology reaches its decisions.


How can end users of artificial intelligence applications trust the results they receive when they have no idea how the AI reached its conclusions? This black-box paradox haunts proponents of AI in every industry, but it’s particularly problematic in healthcare, where the method used to reach a conclusion is vitally important.

Yet the problem seems almost insurmountable—the most effective machine learning models are notoriously opaque, offering few clues as to how they arrive at their conclusions.

In healthcare, physicians are especially reluctant to trust technologies they can’t explain. Trained as they are in the scientific method, with responsibility for making life-and-death decisions, physicians are understandably unwilling to entrust those decisions to a black box.

For AI to take hold in healthcare, it has to be explainable. The many technical, operational and organizational challenges posed by AI pale in comparison to the trust challenge.

To address the problem, many healthcare AI companies are willing to offer full transparency. But while transparency is a fundamental requirement, it has little bearing on the problem of trust.

What the world needs is justification, which is something entirely different from transparency.

To explain, let’s define transparency. Transparency means identifying which algorithm was used and which parameters were learned from the data. We have seen some companies expose the algorithm’s source code and its learned parameters. But that revelation provides minimal insight; it’s akin to checking whether an Oracle database can join tables correctly. There is little value in knowing that your computer can do matrix multiplication or basic linear algebra operations.
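To make the distinction concrete, the short Python sketch below shows roughly what “full transparency” amounts to: the algorithm’s name and its learned parameters are exposed. The synthetic data and the choice of logistic regression are assumptions made purely for illustration, not any vendor’s actual system; the point is that seeing the parameters does not explain any individual decision.

```python
# A minimal sketch of "transparency": expose the algorithm and its learned
# parameters. The data, features and model choice are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))                                  # 4 hypothetical patient features
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# "Full transparency": the algorithm name and the learned parameters.
print(type(model).__name__)                                    # LogisticRegression
print(model.coef_, model.intercept_)

# But the coefficients alone do not say *why* this particular patient was
# flagged -- that is the gap justification is meant to fill.
patient = X[0:1]
print(model.predict(patient), model.predict_proba(patient))
```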

This is not to suggest there is no utility in transparency—far from it. There is great value in understanding what has been done, with enough precision to enable its replication. Transparency might also reveal why calculations were designed in a particular way. But this is essentially QA; it fails to reveal why the machine came to its conclusions.

The concept of justification is far more robust than transparency, and that’s what is required to move AI into production in healthcare. Like transparency, justification identifies the algorithm that was used and the parameters that were applied, but it also provides the additional ingredient of intuition—the ability to see what the machine is thinking: “When x, because y.”

Justification tells us, for every atomic operation, the reason or reasons behind it. For every classification, prediction, regression, event, anomaly or hotspot, we can identify matching examples in the data as proof. These are presented in human-understandable output and expressed in terms of the variables, the ingredients of the model.

Getting to the atomic level is the key to cracking the AI black box. So how might we achieve that in practice?

Machine learning is, at its core, the practice of optimization: every algorithm maximizes or minimizes some objective. An important feature of optimization is the distinction between global and local optima. Finding a global optimum is difficult because the mathematical conditions used to check whether we are near an optimum cannot distinguish a global optimum from a local one. In other words, from local information alone it is hard to know when you have actually found the highest peak.

If this sounds obscure, consider the well-worn but highly effective example of climbing a hill in the fog. Your visibility is highly constrained, often just a few feet. How do you know when you are at the top? Is it when you start to descend? What if you crested a false summit? You wouldn’t know it, but you would claim victory as you began to descend.
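The same trap can be shown in a few lines of code. Below is a minimal sketch of gradient ascent on an invented two-bump objective; the function, step size and iteration count are assumptions made purely for illustration. Both starting points stop where the slope is essentially zero, yet only one of them is standing on the true summit.

```python
# Hedged sketch: gradient ascent on a toy objective with two "summits".
# Locally, the stopping condition (gradient near zero) cannot tell a false
# summit from the true one.
import numpy as np

def f(x):
    # Two bumps: a global maximum near x = 2 (height ~1.0)
    # and a local maximum near x = -2 (height ~0.6).
    return np.exp(-(x - 2) ** 2) + 0.6 * np.exp(-(x + 2) ** 2)

def grad(x, eps=1e-6):
    return (f(x + eps) - f(x - eps)) / (2 * eps)   # numerical derivative

def climb(x, step=0.1, iters=500):
    for _ in range(iters):
        x = x + step * grad(x)                      # walk uphill in the fog
    return x

for start in (-3.0, 1.0):
    top = climb(start)
    print(f"start={start:+.1f} -> stop at x={top:+.2f}, height={f(top):.2f}")

# Starting at -3.0 we stop on the smaller bump (a false summit, height ~0.6);
# starting at 1.0 we reach the global maximum (height ~1.0).
```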

But what if you had a GPS—a map and a way to locate yourself in the fog?

This is one of the areas where Topological Data Analysis (TDA), a type of AI that can illuminate the black box, is particularly effective. Unlike other AI solutions, TDA produces visual “maps” of data. So in the example of climbing a hill in the fog, TDA would tell you whether you were at the global optimum (the summit) or merely at a local maximum (a false summit), because you could literally see your location in the network.

In fact, for every atomic operation or “action,” TDA can locate us somewhere in the network. As a result, we always know where we are, where we came from and, to the extent the prediction is correct, where we are going next.
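To make that less abstract, here is a toy sketch of a Mapper-style network, one common construction in topological data analysis. The synthetic data, the lens, the cover parameters and the clustering choice are all illustrative assumptions rather than the implementation described here, but the sketch shows how a map of the data is built and how a single record gets an address in it.

```python
# Hedged, minimal Mapper-style "map" of data: project through a lens, cover
# the lens with overlapping intervals, cluster within each interval, and
# connect clusters that share records. All parameters are illustrative.
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(1)
# Two noisy blobs standing in for "patient" records.
X = np.vstack([rng.normal(0, 0.3, (100, 2)), rng.normal(2, 0.3, (100, 2))])

# 1. Lens: project each record to a single number (here, its first coordinate).
lens = X[:, 0]

# 2. Cover the lens range with overlapping intervals.
n_intervals, overlap = 6, 0.3
lo, hi = lens.min(), lens.max()
width = (hi - lo) / n_intervals
nodes = []                                   # each node is a set of row indices
for i in range(n_intervals):
    a = lo + i * width - overlap * width
    b = lo + (i + 1) * width + overlap * width
    idx = np.where((lens >= a) & (lens <= b))[0]
    if len(idx) == 0:
        continue
    # 3. Cluster the records that fall in this interval; each cluster is a node.
    labels = DBSCAN(eps=0.4, min_samples=3).fit_predict(X[idx])
    for lab in set(labels) - {-1}:
        nodes.append(set(idx[labels == lab]))

# 4. Connect nodes that share records -- this gives the network ("map").
edges = [(i, j) for i in range(len(nodes)) for j in range(i + 1, len(nodes))
         if nodes[i] & nodes[j]]
print(f"{len(nodes)} nodes, {len(edges)} edges")

# 5. "Where am I?": locate a single record in the network.
row = 150
print("row", row, "sits in nodes", [i for i, n in enumerate(nodes) if row in n])
```

In a real TDA pipeline the lens, cover and clustering would be chosen far more carefully; the point is only that every record ends up with a location in the network.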

Critically, for the purpose of cracking the black box, with this knowledge we also know why.

For example, a study that attempts to predict the likelihood of a person contracting lung cancer based on personal behavioral data might strongly predict that a particular person will develop the disease. In that case, transparency would tell you what inputs were used, what algorithm was applied and with what parameters, but not why the prediction was made.

By contrast, justification would tell you everything that transparency revealed, while also explaining the prediction by highlighting that the person is a heavy smoker or, more precisely, that the person is in a high-dimensional neighborhood where a statistically overwhelming number of other people are heavy smokers. This information builds intuition at the atomic decision level.
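A rough sketch of that kind of neighborhood-based justification might look like the following; the behavioral features, the cohort and the use of a plain nearest-neighbor query are assumptions for illustration, not any particular product’s method.

```python
# Hedged sketch: justify a flag by inspecting the person's neighborhood in
# feature space. Data and features are invented for illustration.
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(2)
n = 1000
smoker = rng.random(n) < 0.3                       # hypothetical behavioral flag
features = np.column_stack([
    smoker * rng.normal(30, 5, n),                 # pack-years (0 for non-smokers)
    rng.normal(50, 10, n),                         # age
    rng.normal(25, 4, n),                          # BMI
])

nn = NearestNeighbors(n_neighbors=25).fit(features)

def justify(person):
    """Return the share of heavy smokers among the person's nearest neighbors."""
    _, idx = nn.kneighbors(person.reshape(1, -1))
    return smoker[idx[0]].mean()

person = features[np.argmax(smoker)]               # pick a smoker as the query
print(f"{justify(person):.0%} of this person's 25 nearest neighbors are heavy smokers")
```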

Justification differs from transparency in that it concerns the output of a methodology, rather than describing what was done computationally.

Furthermore, one can, with topological data analysis, continue to move “upstream,” understanding the role of the local models, the groups within those local models, the rows within a node and ultimately the single point of data. This is extremely powerful, not just for its ability to justify a model’s behavior no matter how complex it may be, but also for understanding how to repair the model. That is why so many organizations are now applying TDA as a microscope to their existing machine learning models to determine where those models are failing, even when they were not built with TDA.

Justification is not simply a “feature” of AI; it is core to the success of the technology. The amount of work underway to move beyond transparency is a testament to justification’s importance.

TDA gets us there today, without sacrificing performance. In the AI arms race, that is worth something.
