A report by Ernst & Young (EY), one of the Big Four accounting firms, on the global AI regulatory landscape is receiving renewed interest after President Biden signed a sweeping executive order on Monday that aims to monitor and regulate the risks of artificial intelligence (AI) while also harnessing its potential.
The EY report, titled “The Artificial Intelligence (AI) global regulatory landscape: Policy trends and considerations to build confidence in AI,” was published last month. Its goal is to clarify the global AI regulatory environment, providing policymakers and businesses with a roadmap to understand and navigate this complex landscape.
The report is based on an analysis of eight major jurisdictions that have shown significant AI legislative and regulatory activity: Canada, China, the European Union, Japan, Korea, Singapore, the United Kingdom and the United States.
It reveals that despite their different cultural and regulatory contexts, these jurisdictions share many of the same objectives and approaches for AI governance. They all aim to minimize potential harms from AI while maximizing benefits to society, and they all align with the OECD AI principles endorsed by the G20 in 2019, which emphasize human rights, transparency, risk management and other ethical considerations.
However, the report also identifies some divergences and challenges in the global AI regulatory environment. For instance, the report says the EU has taken one of the most proactive stances globally, proposing a comprehensive AI Act that would impose mandatory requirements for high-risk AI uses, such as biometric identification or critical infrastructure.
China has also shown a willingness to regulate core aspects of AI, such as content recommendation or facial recognition. On the other hand, the report says the U.S. has adopted a light-touch approach, focusing on voluntary industry guidance and sector-specific rules.
The global AI regulatory landscape is dynamic and evolving
Notably, EY’s analysis of U.S. AI regulation is now outdated because of President Biden’s executive order signed earlier this week. Many experts consider the executive order the most significant action on AI taken by any government, and it goes beyond the voluntary industry guidance and sector-specific rules that the report described as the U.S. approach.
The executive order builds off voluntary commitments made earlier this year by 15 tech companies, including Microsoft and Google, to allow outside testing of their AI systems before public release and to develop ways to identify AI-generated content.
The White House last year also rolled out an “AI Bill of Rights,” offering companies guidelines aimed at protecting consumers using automated systems, though that guidance was non-binding.
As our lead AI reporter Sharon Goldman wrote earlier this week, the executive order will require developers of powerful AI systems to share the results of their safety tests with the federal government before those systems are released to the public, and to notify the government if their AI models pose national security, economic or health risks. The order also addresses other issues such as immigration, biotechnology, labor and content moderation.
Other major developments since the EY report was published last month are also reshaping the global AI regulatory landscape. For example, the U.K. government published an AI White Paper outlining the country’s proposed framework for AI regulation. The U.K. framework is based on four principles — proportionality, accountability, transparency, and ethics — and dovetails with the EU’s approach.
These developments show that the global AI regulatory landscape is dynamic and rapidly evolving, and that policymakers and businesses need to stay current on the latest trends and best practices. The EY report remains a valuable resource for understanding and navigating the regulatory environment, though it will need to be supplemented as new rules and initiatives emerge.
Tangible insights for policymakers and businesses
The EY report highlights several trends and best practices in AI regulation that remain relevant, such as:
- A risk-based approach that tailors oversight to the intended use case and risk profile of AI systems.
- Consideration of sector-specific risks and oversight needs, such as healthcare, finance or transportation.
- Initiatives addressing AI’s impacts on adjacent policy areas, such as data privacy, cybersecurity and content moderation.
- Use of regulatory sandboxes to develop AI rules collaboratively with stakeholders.
The EY report concludes with a call for ongoing engagement among government officials, corporate executives, and other stakeholders to strike the right balance between regulation and innovation.
Overall, EY’s report provides a roadmap for understanding the rapidly evolving AI regulatory landscape. It underlines the need for increased dialogue among policymakers, the private sector, and civil society to close the AI confidence gap, prevent policy fragmentation, and realize the full potential of AI. It is a must-read for anyone seeking to navigate the complex ethical challenges relating to AI and understand the dynamic AI policy landscape on a global scale.