Arbitration

AI and Legal Tech: Compliance Pathways for Downstream Providers

Authors: Gabriel Durán and Josafat Leyva Ferzuli

Artificial intelligence is steadily integrating into legal practice, transforming the way legal professionals conduct research, draft contracts, and analyse case law. As General-Purpose AI (GPAI) models gain traction in law firms and legal technology providers, their influence is becoming increasingly tangible. However, with this integration comes heightened regulatory scrutiny, as policymakers seek to balance innovation with accountability.

The European Union Artificial Intelligence Act (EU AI Act), the world’s most comprehensive legal framework for artificial intelligence, introduces a new era of compliance obligations. This regulation classifies AI systems based on risk tiers, imposing strict requirements on high-risk applications, including those deployed in legal services. Of particular significance is the August 2, 2025 deadline, when obligations for GPAI models and their downstream providers will come into force. For software companies developing AI-driven legal tools, early compliance is not merely advisable—it is imperative to sustain competitiveness in a rapidly evolving regulatory environment.


This article explores the regulatory obligations faced by downstream providers developing specialized AI platforms on top of Large Language Model (LLM) Application Programming Interfaces (APIs), examining the legal frameworks that govern their development and deployment. Unlike AI systems used directly in judicial or arbitral decision-making—classified as high-risk under the EU AI Act—this analysis focuses on AI tools designed to support legal professionals and law firms, which generally fall outside the high-risk category because they perform advisory and assistive functions rather than issuing binding legal decisions.

The EU AI Act’s Risk-Based Framework

The EU AI Act classifies AI systems based on their potential harm, imposing regulatory obligations proportional to their risk level. It distinguishes between prohibited AI (banned outright), high-risk AI (subject to strict compliance requirements), limited-risk AI (subject to transparency obligations), and minimal-risk AI (largely unregulated).

AI systems classified as presenting an unacceptable risk are prohibited outright under Article 5. These include systems that manipulate human behaviour using subliminal techniques, exploit vulnerable individuals, or attempt to evaluate people through opaque “social scoring” mechanisms. The Act also bans the use of AI for predictive policing based on profiling, for emotion recognition in workplaces or schools, and for constructing facial recognition databases by scraping images from public sources.

High-risk AI systems defined under Article 6 and detailed in Annex III are permitted but subject to strict legal requirements due to their potential to significantly affect individuals’ lives, for example through decisions on education, employment, access to public services, or law enforcement. Particularly relevant in legal contexts, systems that assist in the administration of justice—including those used by judges to interpret law or assess facts—fall squarely within this high-risk category. To be deployed lawfully, such systems must comply with comprehensive requirements for risk management, data governance, transparency, human oversight, and ongoing monitoring.

In contrast, limited-risk AI systems are subject only to transparency obligations. These are systems that interact with humans or generate content, such as chatbots, document automation tools, or legal research assistants. Users must be clearly informed that they are engaging with an AI system. 

Finally, the vast majority of AI applications—such as spam filters, recommendation engines, or internal productivity tools—are considered to pose minimal or no risk.

Understanding Downstream Providers Using LLM APIs

A significant portion of AI-driven legal technology does not originate from companies developing their own artificial intelligence models from the ground up. Instead, many legal tech firms act as downstream providers, leveraging LLM APIs—such as OpenAI’s GPT, Anthropic’s Claude, or Google’s Gemini—to build specialized AI platforms tailored to specific legal workflows.

An API allows these firms to access and use powerful AI models over the internet, without needing to host or train the models themselves. In other words, the API acts as a bridge between the legal tech application and the advanced AI model, enabling developers to integrate pre-trained capabilities directly into their tools.
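To make the “bridge” image concrete, the sketch below shows how a downstream legal tool might assemble a request for an upstream LLM API. It assumes Python and the widely used chat-completions request shape; the helper function, system prompt, and model name are illustrative, not any particular vendor’s required interface.

```python
import json

# Hypothetical sketch: building a chat-completions-style request body for a
# contract-review task. The downstream provider never hosts the model; this
# payload would simply be posted over HTTPS to the upstream provider's API.

SYSTEM_PROMPT = (
    "You are a contract-review assistant. Flag clauses that may pose "
    "liability, termination, or data-protection risks."
)

def build_review_request(contract_text: str, model: str = "gpt-4o") -> dict:
    """Assemble the JSON body for a chat-completions-style API call."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": contract_text},
        ],
        "temperature": 0.0,  # deterministic output suits review tasks
    }

if __name__ == "__main__":
    body = build_review_request("The supplier may terminate without notice.")
    print(json.dumps(body, indent=2))
```

Everything specific to the legal workflow lives in the downstream tool (the prompt, the payload, the handling of the response); the model itself remains entirely on the upstream provider’s infrastructure.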

Unlike AI model creators, who invest in large-scale training and infrastructure, downstream providers integrate pre-trained LLMs into their own applications, refining their capabilities for legal-specific tasks. These platforms support a wide range of AI-enhanced legal services, including evidence discovery, contract analysis, due diligence, and even predictive analytics for judicial or arbitral decision-making.

By using LLM APIs, downstream providers leverage the technological advancements of major AI companies without the need to develop foundational models themselves—an undertaking that is both highly complex and cost-prohibitive. However, their dependence on external AI models does not exempt them from regulatory scrutiny. As these AI-powered tools become central to legal practice, regulators are increasingly focusing on how they are developed, modified, and deployed, raising critical compliance considerations under the EU AI Act. Indeed, this reliance is legally significant, as it can trigger obligations attached to GPAI models, as explained below.

In this context, the classification of AI systems based on a tailored LLM under the Act depends significantly on who is using the system, what the system does, and how it influences the outcome of a legal process.

i. Decision Making

a. Judges

AI systems used to assist judges in making decisions—such as in sentencing, case assessment, or legal interpretation—are subject to the high-risk requirements in Annex III, Section 8 of the Act, including robust risk management, data governance, transparency, human oversight, and post-market monitoring.

b. Arbitrators

For AI systems assisting arbitrators, the classification under the AI Act is more nuanced. Arbitration often involves commercial parties, frequently corporations, meaning fundamental rights of individuals may not be directly at stake. In such cases, the AI system might not be classified as high-risk. However, in arbitration cases involving individuals, the same concerns arise as in court proceedings. Thus, depending on the type of dispute and the parties involved, systems used in arbitration could still qualify as high-risk.

ii. Lawyers, Law Firms, and Legal Support

AI systems used by lawyers and legal support staff—for tasks such as document review, legal research, contract drafting, and case prediction—will not generally be considered high-risk systems. These systems assist professionals rather than directly affecting individuals’ legal status or rights. As such, they will be considered limited-risk AI systems. This classification mandates only basic transparency obligations, such as disclosing that an AI system is being used. 

In summary, the application of the EU AI Act in legal contexts is highly contextual. Where AI directly influences decisions with legal effect on individuals (e.g., judicial or some arbitral decisions), systems are subject to stringent high-risk regulations. Conversely, when AI functions as a support tool for legal professionals without directly affecting individuals’ rights, it is regulated under the more lenient limited-risk category. 

General Purpose Models and Further Modifications by Downstream Providers 

Article 3(63) of the EU AI Act defines a GPAI model as an AI model that, regardless of how it is placed on the market, displays significant generality by competently performing a wide range of distinct tasks and can be integrated into various downstream systems and applications.

To clarify, Recital 98 explains that AI models with at least a billion parameters, trained on large amounts of data using self-supervision at scale, are considered to display significant generality and perform a wide range of tasks. Recital 99 further states that large generative AI models (like those that create text, images, audio, or video) are typical examples of GPAI models because they can adapt to different tasks and generate content in multiple formats. This includes LLMs, which, even within a single modality like text, are considered GPAI because of their versatility across numerous tasks.

Importantly, GPAI models can be modified or fine-tuned to create new models. This is the downstream activity discussed above, and the entities that make these changes may be considered providers of new models under the AI Act. In such cases, the legal obligations that apply to GPAI providers also apply to these downstream providers—but only for the parts they modified or fine-tuned, not the entire original model.

An example of this is a law firm that uses a general-purpose language model, such as GPT, through an API. The firm builds a contract review tool by fine-tuning the model on thousands of its own legal documents, improving its ability to detect risky clauses specific to its clients' needs.

Because the firm has modified the original model for a specialised legal use, it may now be considered a provider of a new AI model under the EU AI Act. This means the firm must comply with certain legal obligations—but only in connection with the modifications it introduced, not the base model delivered through the API.
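As a rough sketch of what such a modification involves in practice, fine-tuning of this kind typically starts from the firm’s annotated documents converted into a training file. The example below assumes Python and the JSONL chat format commonly used by fine-tuning APIs; the clause annotations, labels, and prompts are invented for illustration.

```python
import json

# Hypothetical sketch: converting a firm's annotated clauses into JSONL
# training records of the kind commonly accepted by fine-tuning APIs.
# The clauses and risk labels below are invented.

annotated_clauses = [
    ("The supplier may terminate without notice.",
     "RISKY: unilateral termination without notice period"),
    ("Either party may terminate on 30 days' written notice.",
     "OK: mutual termination with notice period"),
]

def to_training_record(clause: str, label: str) -> str:
    """Serialise one annotated clause as a JSONL chat-format line."""
    record = {
        "messages": [
            {"role": "system", "content": "Classify the contract clause."},
            {"role": "user", "content": clause},
            {"role": "assistant", "content": label},
        ]
    }
    return json.dumps(record)

def write_training_file(path: str) -> None:
    """Write one JSON record per line, the usual JSONL convention."""
    with open(path, "w", encoding="utf-8") as fh:
        for clause, label in annotated_clauses:
            fh.write(to_training_record(clause, label) + "\n")
```

It is this training data and the resulting behavioural change, not the underlying base model, that would anchor the firm’s obligations as a provider of the modified model.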

Next Steps for Downstream Providers

With the August 2, 2025 deadline approaching, downstream providers using general-purpose AI models—especially in the legal sector—should take proactive steps to prepare for compliance under the EU AI Act. By this date, you should be able to demonstrate that:

  • You are sourcing GPAI models from compliant providers (or have documented their compliance status and addressed any known gaps). This includes requesting technical documentation, transparency summaries, and information on training data from your LLM API providers.

  • You have implemented core transparency and documentation measures in your own AI tools. Even if full obligations for high-risk systems only apply in 2026, early adoption signals seriousness to regulators and clients alike.

  • You are not using or integrating prohibited AI practices, such as emotion recognition in sensitive environments or covert biometric surveillance—these practices are already banned, as the prohibition took effect six months after the EU AI Act entered into force.

  • Your internal team understands the AI Act obligations relevant to their roles, and you have trained staff to handle audits or respond to inquiries from regulators, clients, or data subjects.

  • You’ve assessed your tools' risk classification and role in legal processes, ensuring that assistive legal tools are properly treated as limited-risk systems, and any features that might influence legal outcomes are reviewed for potential high-risk classification.

  • You maintain up-to-date internal documentation on how your AI tools function, how they were developed or fine-tuned, and how risks are being managed post-deployment.
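One lightweight way to keep the internal documentation described above current is to maintain it in machine-readable form alongside the codebase. The sketch below is a minimal, hypothetical record structure in Python; the field names loosely track the checklist above and are not a schema prescribed by the EU AI Act.

```python
from dataclasses import dataclass, field, asdict
from datetime import date

# Hypothetical internal documentation record for an AI-powered legal tool.
# All field names are illustrative; the AI Act prescribes no specific format.

@dataclass
class AIToolRecord:
    tool_name: str
    upstream_model: str                 # GPAI model accessed via API
    risk_classification: str            # e.g. "limited-risk"
    modifications: list = field(default_factory=list)   # fine-tuning, prompts
    transparency_notice: bool = False   # are users told they interact with AI?
    last_reviewed: date = field(default_factory=date.today)

    def to_dict(self) -> dict:
        """Export for audits or client due-diligence requests."""
        d = asdict(self)
        d["last_reviewed"] = self.last_reviewed.isoformat()
        return d

record = AIToolRecord(
    tool_name="ClauseChecker",
    upstream_model="vendor-llm-api",
    risk_classification="limited-risk",
    modifications=["fine-tuned on firm precedent clauses"],
    transparency_notice=True,
)
```

Keeping such a record under version control gives a ready answer when a regulator or client asks how a tool was built, modified, and classified.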

Looking Ahead: The GPAI Code of Practice

As downstream providers prepare for the obligations ahead, further guidance is on the horizon. A key development is the Code of Practice for General-Purpose AI Models, which aims to support compliance with the EU AI Act, especially for providers of GPAI and models with systemic risk. The third draft, published on March 11, 2025, offers valuable insights into expected transparency, copyright, and risk management practices. The final version of the Code is expected in May 2025 and will serve as a practical reference for AI developers and downstream providers alike. Once published, we will return to this topic to offer a more detailed analysis of how the Code may shape implementation strategies and regulatory alignment in the legal tech sector.


AI/Arb. All rights reserved. © 2024
