Can Common Law–Trained AI Think Like a Civil Lawyer?

Legal professionals increasingly turn to large language models (LLMs) for assistance in drafting submissions, evaluating claims, and structuring procedural documents in arbitration. These AI models can mimic expert reasoning with uncanny fluency. But beneath their precision lies a quiet risk: they reflect the legal traditions and linguistic structures they were trained in.

Most LLMs are trained on English-language legal texts dominated by US and UK practice. Studies have shown that models exhibit cultural bias toward English-centric norms even in multilingual moral-reasoning tasks, such as those based on the MFQ-2 framework. In law, this bias often surfaces as procedural assumptions rooted in adversarial, common law logic, even when the model is asked to operate under civil law frameworks like those in France or Switzerland. The result is not just legal drift but linguistic and epistemic drift: a “jurisdictional drift” that can shape how AI tools frame fairness, structure document production, or apply soft law instruments.

To test this, we ran a simple experiment.

The Experiment

We posed the same procedural question, how to structure document production in international arbitration, to two advanced LLMs (GPT-4o and Claude 3.7). The only difference was the legal context: one prompt placed the arbitration in New York, the other in Paris under French arbitration law.

Prompt A (New York Arbitration):

“You are a legal assistant in an international arbitration seated in New York. The tribunal is considering how to structure document production. What principles apply? What role do the IBA Rules play? Should parties exchange Redfern Schedules?”

Prompt B (French Arbitration):

“You are a legal assistant in an international arbitration seated in Paris under French arbitration law. The tribunal is considering how to structure document production. What principles apply? What role does the IBA Rules play, if any? Should parties exchange Redfern Schedules?”

We asked GPT-4o and Claude 3.7 to respond to both questions, using a new chat for each prompt to avoid hinting at context or inviting revisionist answers.
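For readers who prefer to reproduce the comparison programmatically rather than through the chat interfaces, a minimal sketch along the following lines would work. It assumes the official OpenAI and Anthropic Python SDKs; the Claude model identifier is illustrative and should be replaced with whatever string the provider currently exposes for Claude 3.7.

```python
# pip install openai anthropic
from openai import OpenAI
from anthropic import Anthropic

PROMPT_A = (
    "You are a legal assistant in an international arbitration seated in New York. "
    "The tribunal is considering how to structure document production. "
    "What principles apply? What role do the IBA Rules play? "
    "Should parties exchange Redfern Schedules?"
)
PROMPT_B = (
    "You are a legal assistant in an international arbitration seated in Paris "
    "under French arbitration law. The tribunal is considering how to structure "
    "document production. What principles apply? What role does the IBA Rules play, "
    "if any? Should parties exchange Redfern Schedules?"
)

openai_client = OpenAI()        # reads OPENAI_API_KEY from the environment
anthropic_client = Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def ask_gpt4o(prompt: str) -> str:
    # Each call is a fresh, stateless request, the API equivalent of a new chat.
    response = openai_client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

def ask_claude(prompt: str) -> str:
    # Model string is illustrative; use the current Claude 3.7 identifier.
    response = anthropic_client.messages.create(
        model="claude-3-7-sonnet-latest",
        max_tokens=1500,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.content[0].text

for seat, prompt in [("New York", PROMPT_A), ("Paris", PROMPT_B)]:
    print(f"--- GPT-4o, seat: {seat} ---\n{ask_gpt4o(prompt)}\n")
    print(f"--- Claude 3.7, seat: {seat} ---\n{ask_claude(prompt)}\n")
```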

Results

Table: Comparative Summary of Model Responses

| Model | Legal Seat | Tone and Reasoning | Treatment of IBA Rules and Redfern Schedules |
| --- | --- | --- | --- |
| Claude 3.7 | New York | Confident, pragmatic, practice-oriented | Assumes IBA/Redfern as standard tools without caveats |
| GPT-4o | New York | Formalist, cautious, structured | Treats IBA/Redfern as common but conditional |
| Claude 3.7 | Paris | Adjusted tone, but still normative and prescriptive | Frames IBA/Redfern as helpful upgrades, not cultural imports |
| GPT-4o | Paris | Context-sensitive, doctrinally accurate | Recommends IBA/Redfern gently, with harmonization logic |

New York Scenario

Claude’s response to the New York-seated arbitration question correctly identified the general principles guiding document production in international arbitration:

  • Disclosure in arbitration is more limited than in U.S. litigation.

  • Document requests must meet the standards of specificity, materiality, and proportionality.

  • Arbitral tribunals enjoy broad discretion in procedural matters.

  • The IBA Rules are widely accepted but non-binding, serving as soft law.

  • The seat of arbitration (New York) does not automatically import domestic discovery standards.

It even gestures toward institutional diversity, noting that the seat’s influence does not automatically trigger domestic procedural rules.

Yet this well-reasoned response also reveals something deeper: a quiet bias. Claude’s suggestions to adopt the IBA Rules at the procedural hearing and propose Redfern Schedules are presented as procedural common sense, not context-sensitive choices. There is no reference to party agreement, the tribunal’s legal background, or possible limitations arising from applicable institutional rules. In doing so, Claude treats tools of procedural convenience as global infrastructure.

ChatGPT’s output in response to the same prompt illustrates a contrasting tendency. While also grounded in international arbitration norms, ChatGPT’s response is more formalistic and cautious:

  • It emphasizes that the tribunal retains broad procedural discretion.

  • It defers to institutional rules and party agreement.

  • It states clearly that a New York seat does not imply adoption of U.S.-style discovery.

  • It walks the user through key principles such as relevance, materiality, specificity, proportionality, and confidentiality.

  • The IBA Rules are introduced as soft law and carefully explained, but are not assumed to be binding unless expressly adopted.

  • Redfern Schedules are described as a widely used mechanism, but their use is framed as conditional on party and tribunal preferences.

This marks an important difference. Claude simulates arbitration as practiced, particularly from a globalized common law vantage. ChatGPT, by contrast, offers a more abstracted and rule-based view, emphasizing principles without always connecting them to legal cultures. The result is that while Claude presents procedural norms as expected, ChatGPT wraps them in disclaimers. Yet neither model fully captures the socio-legal context in which such decisions are made. One drifts into normalization; the other abstracts into procedural idealism. In both cases, the AI provides answers that are technically sound but contextually incomplete.

Paris Scenario

Claude’s reply to the Paris-seated arbitration prompt reflects a stronger awareness of civil law procedural constraints. It correctly notes that French arbitration law, codified in Articles 1464 to 1475 of the French Code of Civil Procedure:

  • Provides a minimalist procedural baseline,

  • Places significant discretion in the hands of the tribunal,

  • Favors a more restrictive approach to document production,

  • Limits disclosure obligations,

  • Requires each party to carry its burden of proof, with no expectation of proactive document exchange.

Yet despite this context-sensitive foundation, Claude still subtly promotes common law procedural tools as default enhancements:

  • The IBA Rules are introduced as “valuable soft law guidance” and a “common reference point.”

  • Redfern Schedules are described as “frequently used” and “common practice.”

  • Implementation is encouraged using prescriptive language like “consider adopting,” “implement Redfern Schedules,” and “discuss early.”

This framing suggests that these instruments are upgrades to French practice, rather than potentially foreign to its procedural DNA.

ChatGPT’s reply to the Paris-seated arbitration prompt is more cautious and conceptually refined. It begins by acknowledging that French arbitration law is minimalist and flexible:

  • It highlights the relevant provisions of the French Code of Civil Procedure.

  • It emphasizes party autonomy as a foundational principle.

  • It notes that tribunals retain discretion, albeit discretion shaped by civil law logic.

  • It reiterates that broad disclosure is disfavored.

  • It reinforces that parties must prove their claims without extensive document exchange.

As the analysis moves to the IBA Rules and Redfern Schedules, ChatGPT subtly shifts toward procedural harmonization:

  • The IBA Rules are introduced as non-binding soft law but are often incorporated “in whole or in part” by tribunals.

  • Redfern Schedules are described as standard procedural tools—common and advisable.

Although its tone is more reserved than Claude’s, the practical implication is similar. Even while avoiding prescriptive language, ChatGPT centers hybrid instruments as procedural defaults. It does not question whether such tools might override core civil law principles. Instead, it narrates them as neutral features of international arbitration.

Both models adapt to civil law frameworks with improved sensitivity, but they do so by smoothing over legal pluralism. Procedural culture is framed as a matter of preference or efficiency, not legal tradition.

What if the Prompt Were More Precise?

To test how far prompting can go, we modified Prompt B: “You are a legal assistant in an international arbitration seated in Paris under French arbitration law. The tribunal is considering how to structure document production. In civil law systems, document production is typically narrow, limited to key identified materials, and tribunal-led. The IBA Rules are not mandatory, and Redfern Schedules are uncommon. How should the tribunal proceed?”

With this framing, both models adjusted. GPT-4o stated that broad adversarial discovery may not be appropriate, while Claude emphasized proportionality and tribunal discretion. This suggests that drift can be mitigated through structured prompting, but only if the user is aware of the need to do so.
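One way to operationalize this kind of structured prompting is to build the jurisdictional assumptions into a reusable template rather than retyping them each time. The sketch below is illustrative only: the helper name and the wording of each procedural frame are ours, not part of the original experiment, and should be checked against the actual lex arbitri and institutional rules in play.

```python
# Illustrative prompt template for jurisdiction-aware document production queries.
# The framing text for each tradition is an assumption the user should verify.

PROCEDURAL_FRAMES = {
    "civil_law": (
        "In civil law systems, document production is typically narrow, "
        "limited to key identified materials, and tribunal-led. "
        "The IBA Rules are not mandatory, and Redfern Schedules are uncommon."
    ),
    "common_law": (
        "In common law-influenced practice, parties commonly exchange targeted "
        "document requests, often organized through Redfern Schedules, with the "
        "IBA Rules as a frequent but non-binding reference point."
    ),
}

def build_prompt(seat: str, governing_law: str, tradition: str, question: str) -> str:
    """Assemble a prompt that states the procedural tradition explicitly,
    instead of leaving the model to infer it from the seat alone."""
    frame = PROCEDURAL_FRAMES[tradition]
    return (
        f"You are a legal assistant in an international arbitration seated in {seat} "
        f"under {governing_law}. {frame} {question}"
    )

prompt = build_prompt(
    seat="Paris",
    governing_law="French arbitration law",
    tradition="civil_law",
    question=(
        "The tribunal is considering how to structure document production. "
        "How should the tribunal proceed?"
    ),
)
print(prompt)
```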

Does This Drift Matter?

Many arbitrators are pragmatic, and hybrid procedures are common. Using the IBA Rules or Redfern Schedules in a French-seated arbitration isn’t necessarily wrong, especially if the parties agree. Yet, when AI tools simulate procedural reasoning, they risk presenting some approaches as universal when they are not. If procedural culture is flattened in the name of efficiency, tribunals may lose sight of the pluralism that arbitration was built to accommodate.

Practical Takeaways: Using AI in Cross-Jurisdictional Arbitration

  1. Prompt precisely, not vaguely. Stating "under French arbitration law" may not suffice. Clarify procedural assumptions.

  2. Treat AI outputs as drafts, not doctrine. Use them to organize and structure, but always review critically.

  3. Watch for legal cultural bleed. AI may not hallucinate facts, but it can replicate norms too broadly.

  4. Push for jurisdictionally diverse training data. Models trained on plural legal traditions will yield more context-sensitive results.

Final Thought

In arbitration, where procedural flexibility meets normative variation, the difference between adjusting tone and genuinely shifting legal frameworks matters. If we want AI to assist across legal traditions, we can’t assume it knows how to shift procedural gears. We have to teach it how.

That need is especially relevant for new professionals. Seasoned practitioners may navigate procedural plurality by instinct or training, often shaping bespoke frameworks informed by years of practice. However, for newer entrants, especially those engaging with AI for procedural scaffolding, these tools often carry implicit weight. The tendency to consult AI for quick reference, particularly under time pressure or uncertainty, amplifies the normative impact of procedural assumptions embedded in LLMs.
