March 6, 2020

Europe's non-GDPR-like Proposal for Governing Artificial Intelligence

Written by Alan Hammes

Back in 2016, when the European Union (EU) adopted its General Data Protection Regulation (GDPR), lawmakers around the world welcomed it as a pioneering model to study and cite. So, when European Commission President Ursula von der Leyen announced in 2019 that the Commission intended to show GDPR-like leadership in regulating Artificial Intelligence (AI), technologists and governance professionals across the globe took note. As Ms. von der Leyen told the European Parliament last November, "With the General Data Protection Regulation we set the pattern for the world. We have to do the same with artificial intelligence."

Roughly 100 days later, in February 2020, the EU published the strategy paper, "White Paper on Artificial Intelligence - A European approach to excellence and trust." Disappointingly, an initial reading of the document suggests that regulators in Asia and the rest of the world should not expect GDPR-like leadership from Europe on the responsible use of AI. The authors of the EU white paper were certainly constrained by the tight, 100-day deadline imposed upon them. Nevertheless, from an AI governance perspective, their report and its proposals seem timid rather than bold. Little in the document matches the ambition that the GDPR showed in protecting data privacy. Consequently, the direction of AI governance may continue to be driven by countries like China, whose 2017 New Generation Artificial Intelligence Development Plan (新一代人工智能发展规划) signalled a focus on quietly influencing international standards.

High Risk Use Cases Get the Focus

The EU white paper appears to miss at least two opportunities to move the global AI governance conversation forward. First, it declares that the EU needs a regulatory framework to govern AI, but then immediately recommends that government scrutiny under that framework be limited to "high-risk" AI use cases. This kind of risk-based approach has already appeared in guidelines and reports from other jurisdictions, and the EU white paper adds little new to the discussion. For example, it takes no clear position on the criteria that would be used to identify high-risk AI applications.

As early as 2018, the Monetary Authority of Singapore (MAS) recommended using "materiality" to determine how much government scrutiny an AI system might require, and in 2019 the Hong Kong Monetary Authority (HKMA) recommended a risk-based approach to governing AI. Yet the EU white paper, surprisingly, does not list finance as a representative sector for high-risk AI applications. To be fair, the list the paper provides is not meant to be exhaustive. Nevertheless, omissions like this raise questions about the bloc's determination to lead international AI governance with the authority it showed with the GDPR. One has to wonder whether sufficient effort is being expended to offset China's activities within international standards bodies, including the International Organization for Standardization/International Electrotechnical Commission (ISO/IEC) and the International Telecommunication Union (ITU).

Can the EU Be Trusted on Trust?

The EU white paper's second missed opportunity to show leadership arises from the authors' recommendation that Europe build a "unique AI Ecosystem of trust." This lofty-sounding proposal certainly seems reasonable, but the international community has already highlighted the importance of trustworthiness in responsibly adopting AI. In fact, the European Commission's own High-Level Expert Group on Artificial Intelligence (AI HLEG) published its "Ethics guidelines for trustworthy AI" in April 2019, and the OECD released five principles for trustworthy AI in May 2019. The recent white paper does not build on that earlier work, but perhaps EU regulators simply need more time.

Laying the Groundwork for Convergence

One interesting tidbit in the EU white paper that might positively influence international regulation of AI is a recommendation that assessments of high-risk AI applications be required for "all economic actors addressed by the requirements, regardless of their place of establishment." It is not clear whether this recommendation is intended to apply only within EU member states, but a reasonable assumption is that AI owners from other regions will be expected to comply. If so, this seems like a step in the right direction for the convergence of AI regulations across the globe.

It is possible that the ambitions European Commission President Ursula von der Leyen laid out in 2019 can still be met by the European Commission's final AI strategy report, due to be released in May 2020. In the meantime, China will maintain its momentum, quietly working to influence international standards. Consequently, AI owners and regulators in Asia and the rest of the world may have to step up their own efforts to provide GDPR-like leadership for the governance of AI.

-------------------------------- 

About the Author: Alan Hammes is a Technology Governance Specialist with experience helping large, multinational organisations adopt emerging technologies in a safe, controlled and ethical manner. Alan has led global and Asia-specific initiatives governing the use of Artificial Intelligence/Machine Learning (AI/ML), Robotic Process Automation (RPA) and Public Cloud. He is equally dedicated to helping companies navigate the evolving regulatory landscape for other emerging technologies, including Distributed Ledger Technology/Blockchain (DLT), Internet of Things (IoT), Big Data Management/Data Privacy, 5G, FinTech and RegTech.