In a post last month, I pointed out that the EU white paper on Artificial Intelligence falls short of the EU Commission's stated ambition to be a global leader on AI governance, presenting tentative ideas at best. AI practitioners should instead look to Asian regulators, who are well positioned to lead the international community towards more responsible AI governance. One specific, important topic that Singapore's publications champion, but the EU white paper essentially ignores, is the need for Explainability in AI systems.
Explainability By the Numbers
Oddly, the EU Commission's white paper makes no direct attempt to address the topic of AI Explainability. The word explainability itself appears only once in the 355 paragraphs, 564 sentences and 14,240 words that make up the document, and closely related words, such as explain, explainable and explanation, do not appear even once. Terms that are often substituted for explainability, such as interpret (2), traceability (1) and transparency (6), appear only sparingly. It would seem that something about AI Explainability and AI Transparency makes the authors of the EU white paper reluctant to discuss them.
In comparison, Singapore's Model Artificial Intelligence Governance Framework includes nearly 50 uses of the words explainability, explain, explainable and explanation. Terms that are often substituted for explainability, such as interpret (8), traceability (13) and transparency (10), also appear considerably more often than in the EU white paper. Singapore's focus on Explainable AI is in keeping with publications from other regulators and industry bodies, including Hong Kong's HKMA, the UK's Bank of England and the World Economic Forum. On this evidence, Singapore is at the forefront of establishing responsible AI governance practices, while Europe lags behind.
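For readers who want to check these figures against future drafts, here is a rough sketch of how such term counts can be reproduced. The file names are placeholders for locally saved plain-text copies of each publication, and exact totals will vary with how each document is converted to text.

```python
# Rough term-frequency count over a plain-text copy of a policy document.
# File names below are placeholders, not official download names.
import re
from collections import Counter

TERMS = ["explainability", "explain", "explainable", "explanation",
         "interpret", "traceability", "transparency"]

def term_counts(path):
    text = open(path, encoding="utf-8").read().lower()
    words = re.findall(r"[a-z]+", text)   # crude tokenisation into words
    counts = Counter(words)
    return {term: counts[term] for term in TERMS}

print(term_counts("eu_ai_white_paper.txt"))
print(term_counts("sg_model_ai_governance_framework.txt"))
```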
Why Asia Should Keep the Focus on Explainability
One could surmise that Explainable AI was omitted from the EU white paper because the authors were persuaded, by those they regulate, that AI Explainability is too difficult to achieve. Yet it is hard to imagine EU regulators taking a similar laissez-faire position on human-generated decisions, so why let mechanical black boxes off the hook? AI Explainability is, in any case, far less difficult when engineers design transparency into a system from the beginning rather than attempting post hoc explanations. If you have the time, listen to Fei-Fei Li of Stanford's Human-Centered AI Institute discuss her optimism that engineers will address the challenges that arise from AI.
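To make "transparency from the beginning" concrete, here is a minimal sketch of an interpretable-by-design model. The loan-approval setting, feature names and data are purely illustrative assumptions, not drawn from any regulator's guidance; the point is that a shallow decision tree's reasoning can be read off directly, with no post hoc explanation layer.

```python
# A minimal sketch of transparency by design: a shallow decision tree whose
# decision rules are human-readable as trained. Data and features are toy values.
from sklearn.tree import DecisionTreeClassifier, export_text

features = ["annual_income_k", "existing_debt_k"]
X = [[30, 20], [45, 10], [60, 5], [25, 30], [80, 2], [40, 25]]
y = [0, 1, 1, 0, 1, 0]   # 1 = loan approved, 0 = rejected

model = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# The full decision logic prints as plain if/else rules: an explanation that
# exists by construction rather than being reverse-engineered later.
print(export_text(model, feature_names=features))
```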
In February 2020, Professor Brent Mittelstadt of the Oxford Internet Institute (OII) gave a talk at Imperial College London in which he set out which aspects of AI decision-making should be explained, to whom they should be understandable and how that can be achieved. One approach the OII has explored, alongside broader questions and solutions, is counterfactual explanations. These help both scientists and non-scientists understand how the output of a black-box algorithm would have differed in a meaningful way had a given input variable taken a different value. The point is that Explainability needs to be designed into all AI systems, regardless of their level of criticality, and regardless of how difficult some practitioners may claim that is.
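The idea is easy to illustrate. The sketch below is not the OII's method; it is a deliberately naive counterfactual search over one feature, reusing the toy loan data from the earlier sketch, with all names and values assumed for illustration. It shows the shape of the explanation a lay person receives: "had your income been X, the outcome would have been different."

```python
# Naive counterfactual explanation for a toy credit model. All data, feature
# names and the search step are illustrative assumptions, not a real scoring model.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Features: [annual_income_k, existing_debt_k]; label: 1 = approved, 0 = rejected
X = np.array([[30, 20], [45, 10], [60, 5], [25, 30], [80, 2], [40, 25]])
y = np.array([0, 1, 1, 0, 1, 0])
model = LogisticRegression().fit(X, y)

applicant = np.array([[35.0, 22.0]])
print("Original decision:", model.predict(applicant)[0])   # expected: rejected on this toy data

# Counterfactual search: raise income in small steps, holding debt constant,
# until the decision flips. The smallest such change is the explanation.
candidate = applicant.copy()
while model.predict(candidate)[0] == 0 and candidate[0, 0] < 200:
    candidate[0, 0] += 1.0

print(f"Counterfactual: an income of {candidate[0, 0]:.0f}k "
      f"(rather than {applicant[0, 0]:.0f}k) would have changed the outcome.")
```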
What's Next?
Let's hope that the second draft of the EU's white paper on Artificial Intelligence, due later this year, devotes fewer words to vague "Ecosystems" of AI Excellence and AI Trust and more to articulating a practical framework for responsible AI governance, one that sets clear expectations for Explainable AI. In the meantime, look to Asian regulators if you want to stay ahead of the international curve.
--------------------------------
About the Author: Alan Hammes is a Technology Governance Specialist with experience helping large, multinational organisations adopt emerging technologies in a safe, controlled and ethical manner. Alan has led global and Asia-specific initiatives governing the use of Artificial Intelligence/Machine Learning (AI/ML), Robotic Process Automation (RPA) and Public Cloud. He is equally dedicated to helping companies navigate the evolving regulatory landscape for other emerging technologies, including Distributed Ledger Technology/Blockchain (DLT), Internet of Things (IoT), Big Data Management/Data Privacy, 5G, FinTech and RegTech.