Regulatory Definitions of AI: A Global Perspective

Author: Amirhossein Amini

Introduction

Artificial intelligence (AI) is often seen as elusive and difficult to define. As John McCarthy, one of AI’s pioneers, famously put it in 1956, AI is "a machine that behaves in a way that would be deemed intelligent if a human acted thus." As technology evolves, so does the challenge of defining AI in legal frameworks. With AI playing an increasingly critical role across industries, how do legal systems across different regions adapt to this evolving technology?

This article explores the legal definitions of AI across different jurisdictions and examines how these definitions shape regulations and impact governance. We will address both the technical challenges and other issues inherent in these definitions.

I. Artificial Intelligence in International Law

OECD’s Functional Approach

The Organization for Economic Co-operation and Development (OECD) defines AI as "a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments." It operates "by interpreting structured or unstructured data, assessing the relevance of information, and determining the best course of action to achieve human-set goals." This definition emphasizes the role of humans in setting objectives and AI’s capacity to generate predictive outputs and adapt post-deployment.

UNESCO’s Ethical Considerations

In contrast, UNESCO's 2021 Recommendation on the Ethics of Artificial Intelligence defines AI systems as "technological systems that can process data and act autonomously based on the recognition of patterns to perform tasks that would normally require human intelligence." This definition highlights AI’s cognitive abilities, including reasoning, learning, perception, and control, thus framing AI as capable of imitating aspects of human intelligence.

Comparing OECD and UNESCO

While the OECD focuses on AI’s functional adaptability—emphasizing decision-making based on data inputs and human-set objectives—UNESCO's definition centers more on AI’s ability to replicate human-like cognitive tasks. Both organizations consider the entire lifecycle of AI, from design to decommissioning, shaping how AI is governed internationally.

II. Artificial Intelligence Definitions in European Law

EU’s Comprehensive Definition

The EU defines an AI system in its June 2024 AI Act as "a machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment." These systems, for explicit or implicit objectives, infer from the input they receive how to generate outputs, such as predictions, content, recommendations, or decisions, that can influence both physical and virtual environments.

Legal Implications in Europe

The EU's definition of AI carries significant legal weight, especially when applied to industries like autonomous driving. For instance, liability becomes a complex issue when AI systems can learn post-deployment. If an AI system makes a decision that causes harm, determining responsibility—whether the manufacturer, developer, or user—is crucial for legal accountability.

III. North American Definitions of Artificial Intelligence

Canada’s Bill C-27

According to Canada’s Bill C-27, AI is defined as "a technological system that autonomously or partly autonomously processes data related to human activities through techniques such as genetic algorithms, neural networks, or machine learning to generate content or make decisions, recommendations, or predictions." This definition focuses on the technical processes used by AI systems, emphasizing their ability to operate independently or semi-independently in processing data and generating outputs.

The US Approach

The National Artificial Intelligence Initiative Act (2020) in the United States defines AI as "a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments," echoing similar elements found in the OECD definition. The broader Blueprint for an AI Bill of Rights refers to AI as "automated systems" with the potential to significantly impact society, focusing on their autonomy and influence on human rights.

Similarly, California’s Senate Bill 1047 (SB 1047) defines artificial intelligence as “an engineered or machine-based system that varies in its level of autonomy and that can, for explicit or implicit objectives, infer from the input it receives how to generate outputs that can influence physical or virtual environments.” This aligns with the National Artificial Intelligence Initiative Act’s definition, focusing on AI systems that operate autonomously to achieve specific objectives and impact both real and virtual environments.

Legal Implications in North America

The definitions in Canada’s Bill C-27 and the US approaches highlight AI’s autonomy and technical processes, providing flexibility for regulation across different sectors. However, this flexibility leads to challenges in harmonizing regulations between industries and jurisdictions, particularly in cross-border applications. Without a unified approach, legal frameworks may become fragmented, complicating compliance and enforcement.

IV. Artificial Intelligence Definitions in Asian Countries

Japan’s and Singapore’s Evolving Definitions

Japan’s Guidelines for AI define an AI system as "a system, such as a machine, robot, or cloud infrastructure, that operates autonomously and incorporates a learning function" (JIS X 22989:2023). These include advanced models like generative AI, which produce outputs such as predictions and recommendations from input data. AI services leverage these systems, combining technology with human oversight and stakeholder communication.

In Singapore, the Model AI Governance Framework defines AI as "a set of technologies that simulate human traits such as knowledge, reasoning, problem-solving, perception, learning, and planning." Depending on the AI model, these technologies produce outputs such as predictions, recommendations, or classifications. AI systems rely on algorithms to generate models, which are then selected and deployed in production environments.

Taiwan’s AI 2.0 Action Plan

In Taiwan, while the country hasn't yet formalized an AI definition in law, the Financial Supervisory Commission's "Guidelines for the Use of AI in the Financial Sector" (June 2024) offers a working definition. It describes AI systems as capable of learning from extensive datasets and employing machine learning techniques to perform human-like tasks such as perception, prediction, decision-making, planning, reasoning, and communication.

Taiwan's approach to AI extends beyond definition. The government's "AI 2.0 Action Plan" (2023) focuses on talent development, technological advancement, and addressing societal impacts. Furthermore, Taiwan aspires to become an "Artificial Intelligence Island," as declared by President Lai in May 2024, aiming to integrate AI across various sectors to bolster national capabilities.

V. Technical Gaps in Legal Definitions

While these legal definitions serve essential purposes in framing AI regulation, they do not fully account for certain technical realities.

Autonomy and Adaptability

Legal definitions often emphasize the autonomy and adaptability of AI systems, especially given the rise of advanced models like GPT-4. However, AI systems today still depend heavily on pre-defined datasets and objectives, with limited ability to redefine their goals. Even the most advanced AI systems, such as those using reinforcement learning or self-supervised learning, cannot operate with full autonomy. Legal frameworks should better reflect these technical limitations.
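To make this concrete, the short sketch below (a toy illustration, not drawn from any of the cited frameworks) shows a tabular Q-learning agent in a five-state environment. The reward function that defines its objective is written by a human designer; the agent adapts its behaviour to maximize that reward, but it has no mechanism for redefining the goal itself.

```python
# Toy sketch, not a reference implementation: a tabular Q-learning agent whose
# objective is entirely fixed by the human-written reward() function below.
# The agent adapts its behaviour *within* that objective; it cannot redefine it.
import random

N_STATES = 5                      # a tiny 1-D chain: states 0..4
ACTIONS = [-1, +1]                # step left or right
GOAL = N_STATES - 1

def reward(state: int) -> float:
    """Human-defined objective: only reaching the right end of the chain pays off."""
    return 1.0 if state == GOAL else 0.0

q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.2   # learning rate, discount factor, exploration rate

for _ in range(500):
    s = 0
    while s != GOAL:
        # epsilon-greedy: explore occasionally, otherwise exploit learned values
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: q[(s, act)])
        s_next = min(max(s + a, 0), N_STATES - 1)
        r = reward(s_next)                        # the goal is imposed from outside the agent
        q[(s, a)] += alpha * (r + gamma * max(q[(s_next, b)] for b in ACTIONS) - q[(s, a)])
        s = s_next

# Learned policy: move right from every non-goal state
print({s: max(ACTIONS, key=lambda act: q[(s, act)]) for s in range(N_STATES - 1)})
```

In regulatory terms, the "human-defined objectives" in the OECD and US definitions correspond to that reward function: adaptability operates inside an externally imposed objective, not over it.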

Continuous Learning and Foundation Models

Many legal definitions reference continuous learning, yet most AI models, including GPT-4 and Meta’s LLaMA, do not learn in real time. Instead, these models are updated through periodic retraining. A more accurate framing would describe systems that are updated after deployment through discrete retraining cycles, rather than systems that learn continuously from live data inputs, as some legal definitions suggest.
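A minimal sketch of this distinction follows; the names and structure are hypothetical, not any vendor's actual API. At serving time the model's parameters are frozen, and "adaptiveness after deployment" takes the form of a separate, periodic retraining step that produces a new model version.

```python
# Illustrative sketch only; the names and structure are hypothetical, not any
# vendor's API. Deployed foundation models typically answer requests with frozen
# weights; "learning" happens later, in a separate offline retraining job.
from dataclasses import dataclass

@dataclass(frozen=True)
class ModelVersion:
    name: str
    trained_on: str    # date of the training-data snapshot this version saw

def serve(model: ModelVersion, prompt: str) -> str:
    # Inference: parameters do not change, no matter how many requests are
    # answered. Live user inputs are not learned from in real time.
    return f"[{model.name}, data cutoff {model.trained_on}] response to: {prompt}"

def periodic_retrain(old: ModelVersion, new_snapshot: str) -> ModelVersion:
    # Offline step, run weeks or months apart: a *new* version is trained on an
    # updated snapshot and swapped in. In practice, this discrete cycle is what
    # "adaptiveness after deployment" usually amounts to.
    return ModelVersion(name=old.name + ".next", trained_on=new_snapshot)

deployed = ModelVersion("demo-llm-v1", trained_on="2023-12")
print(serve(deployed, "What changed last week?"))   # weights unchanged by the query
deployed = periodic_retrain(deployed, "2024-06")    # a discrete update, not continuous learning
print(serve(deployed, "What changed last week?"))
```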

Multi-Modal AI and Human Intelligence Imitation

AI systems are increasingly multi-modal, processing text, images, and even video simultaneously, as seen with GPT-4's ability to handle image inputs. However, the idea that AI can fully imitate human intelligence—highlighted in certain legal definitions—is an overstatement. AI remains a tool designed to simulate specific cognitive tasks based on training data and lacks the general intelligence that defines human cognition.
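The sketch below, with toy numbers and purely illustrative encoders, shows the basic idea behind multi-modal processing: each modality is mapped into a shared sequence of vectors that a single model can attend over. The mechanism is narrow pattern processing over training-derived representations, not general intelligence.

```python
# Conceptual sketch with toy numbers, not a real model: a multi-modal system maps
# each modality into a shared embedding space and processes the combined sequence.
# It simulates specific trained tasks; it does not possess general intelligence.
from typing import List

EMBED_DIM = 4

def embed_text(text: str) -> List[List[float]]:
    # Stand-in for a tokenizer + text embedding layer: one vector per token.
    return [[float(len(token))] * EMBED_DIM for token in text.split()]

def embed_image(pixels: List[List[float]]) -> List[List[float]]:
    # Stand-in for an image encoder: one vector per image patch (here, per row).
    return [[sum(row) / len(row)] * EMBED_DIM for row in pixels]

def multimodal_sequence(text: str, pixels: List[List[float]]) -> List[List[float]]:
    # Both modalities end up as the same kind of object: a single sequence of
    # vectors that one transformer-style model can attend over jointly.
    return embed_text(text) + embed_image(pixels)

seq = multimodal_sequence("describe this image", [[0.1, 0.2], [0.3, 0.4]])
print(len(seq), "combined text tokens and image patches")   # 3 tokens + 2 patches = 5
```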

VI. Other Challenges in AI Definitions

In addition to these technical considerations, other challenges affect the effectiveness of AI definitions in legal frameworks.

Lack of Uniformity Across Jurisdictions

Different regions have varied definitions of AI, leading to inconsistent regulation and enforcement. The OECD, EU, and US define AI differently, which complicates global governance for multinational companies.

Ethical and Societal Impact Overlooked

Current definitions often focus on the functionality of AI but fail to address the ethical and societal impacts, such as privacy concerns, biases in decision-making, or potential job displacement. For instance, the OECD and EU definitions focus on technical aspects without integrating the broader societal consequences.

Conclusion and Future Outlook

As AI continues to evolve, legal systems must regularly update definitions to align with technological advancements. While autonomy, adaptability, and continuous learning remain pivotal to legal frameworks, these concepts must be accurately framed within the technical limitations of AI today. Furthermore, non-technical challenges, such as the lack of uniformity across jurisdictions and AI’s ethical and societal impacts, must be addressed for AI governance to be truly effective. By refining legal definitions, we can better regulate AI’s role in society while preparing for its future capabilities.


Sources

  1. John McCarthy - Wikipedia. Information about John McCarthy, one of the pioneers of AI who provided early definitions of artificial intelligence.
    https://fr.wikipedia.org/wiki/John_McCarthy

  2. OECD - Artificial Intelligence. The OECD's official page, detailing its definition and governance framework for artificial intelligence.
    https://www.oecd.org/

  3. UNESCO - Artificial Intelligence. UNESCO's 2021 Recommendation on the Ethics of Artificial Intelligence, focusing on AI’s cognitive abilities and ethical considerations.
    https://www.unesco.org/en/artificial-intelligence

  4. EU Commission - AI Regulation. The European Union’s comprehensive approach to AI regulation, including the June 2024 AI Act.
    https://commission.europa.eu/index_fr

  5. Canada’s Bill C-27. Official details on Canada’s Bill C-27, which defines AI and outlines its legal implications in Canadian law.
    https://www.justice.gc.ca/eng/csj-sjc/pl/charter-charte/c27_1.html

  6. National Artificial Intelligence Initiative Act (2020). The National Artificial Intelligence Initiative Act of 2020, outlining the US’s approach to AI governance and research.
    https://www.congress.gov/bill/116th-congress/house-bill/6216

  7. Blueprint for an AI Bill of Rights. A US framework focused on the rights and protections related to AI, aiming to ensure AI’s societal impact is considered in governance.
    https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf

  8. Japan’s Guidelines for AI. Japan’s official guidelines for AI, focusing on the development and ethical use of AI technologies in society.
    https://www.meti.go.jp/shingikai/mono_info_service/ai_shakai_jisso/pdf/20240419_9.pdf

  9. Singapore’s Model AI Governance Framework. A voluntary framework for AI governance in Singapore, highlighting ethical principles for AI use across industries.
    https://www.pdpc.gov.sg/help-and-resources/2020/01/model-ai-governance-framework

  10. Taiwan’s AI 2.0 Action Plan. Taiwan’s AI 2.0 Action Plan, which focuses on the societal impacts of AI and the country’s plans to advance AI technology.
    https://practiceguides.chambers.com/practice-guides/fintech-2024/taiwan/trends-and-developments

  11. Reinforcement Learning - Wikipedia. A detailed explanation of reinforcement learning, a key technique in AI’s development of autonomous capabilities.
    https://en.wikipedia.org/wiki/Reinforcement_learning

  12. Self-Supervised Learning - Wikipedia. Information about self-supervised learning, a key approach in modern AI training.
    https://en.wikipedia.org/wiki/Self-supervised_learning

AI/Arb. All rights reserved. © 2024