High Risk Use Cases Get the Focus
The EU white paper appears to miss at least two opportunities to move the global AI governance conversation forward. First, it declares that the EU needs a regulatory framework to govern AI, but then immediately recommends that government scrutiny under that framework be limited to "high-risk" AI use cases. This kind of risk-based approach has already been introduced in guidelines and reports from other jurisdictions, and the EU white paper does not appear to add anything new to the discussion. For example, it takes no clear position on the criteria that would be used to identify high-risk AI applications.
Previously, in 2018, the Monetary Authority of Singapore (MAS) recommended using "materiality" to determine how much government scrutiny an AI system might require, and in 2019 the Hong Kong Monetary Authority (HKMA) recommended utilising a risk-based approach to govern AI. Yet the EU white paper surprisingly does not list finance as a representative sector for high-risk AI applications. To be fair, the list that the EU paper provides is not meant to be exhaustive. Nevertheless, omissions like this raise questions about the bloc's determination to lead international AI governance with the same authority that it showed with the GDPR. One has to wonder whether sufficient effort is being expended to offset China's activities within international standards bodies, including the International Organization for Standardization/International Electrotechnical Commission (ISO/IEC) and the International Telecommunication Union (ITU).
Can the EU Be Trusted on Trust?
The EU white paper's second missed opportunity to show leadership arises from the authors' recommendation that Europe build a "unique AI Ecosystem of trust." This lofty-sounding proposal certainly seems reasonable, but the international community has already highlighted the importance of trustworthiness in responsibly adopting AI. In fact, the European Commission's own High-Level Expert Group on Artificial Intelligence (AI HLEG) published its "Ethics guidelines for trustworthy AI" in April 2019, and the OECD released five principles for trustworthy AI in May 2019. The recent EU white paper does not build on that earlier work, but perhaps EU regulators simply need more time.
Laying the Groundwork for Convergence
One interesting tidbit in the EU white paper that might positively influence international regulation of AI is a recommendation that assessments of high-risk AI applications be required for "all economic actors addressed by the requirements, regardless of their place of establishment." It is not clear whether this recommendation is intended to apply only within EU member states, but a reasonable assumption is that AI owners from other regions will be expected to comply. If so, this seems like a step in the right direction for the convergence of AI regulations across the globe.
It is possible that the ambitions European Commission President Ursula von der Leyen laid out in 2019 can still be met by the European Commission's final AI strategy report, due to be released in May 2020. In the meantime, China will continue to build momentum through backdoor efforts to influence international standards. Consequently, AI owners and regulators in Asia and the rest of the world may have to step up their own efforts to provide GDPR-like leadership for the governance of AI.
--------------------------------
About the Author: Alan Hammes is a Technology Governance Specialist with experience helping large, multinational organisations adopt emerging technologies in a safe, controlled and ethical manner. Alan has led global and Asia-specific initiatives governing the use of Artificial Intelligence/Machine Learning (AI/ML), Robotic Process Automation (RPA) and Public Cloud. He is equally dedicated to helping companies navigate the evolving regulatory landscape for other emerging technologies, including Distributed Ledger Technology/Blockchain (DLT), Internet of Things (IoT), Big Data Management/Data Privacy, 5G, FinTech and RegTech.