
Generative AI in Mediation: Emerging Duties for Lawyers and Mediators

by Pierfrancesco C. Fasano


Introduction: AI Enters the Mediation Room


While generative AI (GenAI) is rapidly transforming legal practice, its use becomes even more delicate when the tool is employed by a lawyer acting as mediator or by a lawyer representing a party in mediation.


The CCBE Guide does not expressly address this scenario; however, its core principles—confidentiality, independence, competence and transparency—apply in mediation with heightened intensity due to the confidential, informal and trust-based nature of the process.


AI is not incompatible with mediation, but its use requires a markedly cautious, informed and regulated approach.


1. AI in Mediation: A High-Sensitivity Ethical Environment


Mediation introduces two structural conditions that amplify the ethical challenges associated with AI use.


• Strengthened Confidentiality


Everything disclosed in mediation—statements, documents, caucus discussions, proposals—is protected by strict confidentiality and inadmissibility rules.


Using GenAI to:

• summarise sessions,

• analyse confidential documents,

• draft proposals or negotiation scenarios,


may inadvertently transfer privileged information into a system that can retain, reuse, or expose it.


In mediation, therefore, prompts must not contain any identifying, confidential or case-specific data, unless the tool operates under robust safeguards (local environment, zero-retention, contractual protections and technical isolation).
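To make this concrete, the sketch below (in Python, with invented names, patterns and example data) illustrates the kind of sanitisation step this rule implies: stripping party names, contact details and case references from text before it reaches any external model. It is deliberately simple and is offered only as an illustration; a script of this kind supplements, and never replaces, the organisational and contractual safeguards listed above.

```python
import re

# Illustrative only: the patterns and placeholder labels below are
# hypothetical and deliberately simple; they are not a complete or
# reliable anonymisation tool.
REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\+?\d[\d ()./-]{7,}\d"),
    "CASE_REF": re.compile(r"\b(?:Case|Ref\.?|File)\s*(?:No\.?|#)?\s*[\w/-]+", re.IGNORECASE),
}

def sanitise(text: str, party_names: list[str]) -> str:
    """Replace known identifiers with neutral placeholders before any AI use."""
    for name in party_names:
        text = text.replace(name, "[PARTY]")    # names supplied by the lawyer
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)  # generic identifier patterns
    return text

# Invented example data:
raw = "Acme Ltd (Case No. 2024/117) offered EUR 50,000; contact j.rossi@acme.example."
print(sanitise(raw, party_names=["Acme Ltd"]))
# -> [PARTY] ([CASE_REF]) offered EUR 50,000; contact [EMAIL].
```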


• A Non-Adversarial and Collaborative Setting


GenAI tends to:

• simplify narratives,

• reinforce pre-existing biases,

• generate “optimal solutions” disconnected from the parties’ real interests.


In a negotiation context, such outputs may distort expectations or influence the bargaining dynamic in ways inconsistent with the mediator’s duty to maintain a balanced and fair process.


2. The Lawyer Representing a Party: Duties, Risks and Limitations


2.1 Professional Competence and Verification of Outputs


Under the CCBE principles, lawyers must verify all AI-generated material.

In mediation this obligation is even stronger, as unverified outputs may:

• mislead the client about realistic settlement options;

• introduce inaccurate legal reasoning;

• shape negotiations on the basis of fabricated or biased content.


The lawyer must filter, validate and, where necessary, rewrite any AI-generated proposal before presenting it to the other side or to the mediator.


2.2 Transparency Towards the Client


Clients should be informed if the lawyer uses GenAI to:

• draft preparatory notes or summaries,

• assess potential settlement scenarios,

• produce draft settlement agreements.


This is because AI may influence the client’s perception of what constitutes an acceptable outcome, directly affecting the lawyer’s duty to provide clear, independent and reasoned advice.


2.3 Conflicts of Interest


Large-scale GenAI systems may be trained on datasets containing information from multiple users.

In mediation, this raises heightened risks:

• data relating to one party may be indirectly combined with data from past or current clients;

• confidential information could be captured and reused by the model;

• inadvertent cross-contamination of information between separate matters becomes a real possibility.


In a non-adversarial context, these risks undermine both confidentiality and the structural integrity of the process.


3. The Lawyer Acting as Mediator: AI and the Challenge to Impartiality


3.1 Independence and Neutrality Threatened


A mediator must ensure full independence, neutrality and even-handedness.

Using GenAI to:

• generate settlement options,

• evaluate BATNA/WATNA scenarios,

• summarise private caucus information,


may inadvertently introduce algorithmic bias, distort the mediator’s judgement or create the impression that the proposed solutions originate from the software rather than from the mediator’s own neutral analysis.


3.2 The Risk of Unintentional “Automated Mediation”


Even when AI is used privately as an internal support tool, it may influence the mediator’s reasoning.

This creates what the CCBE describes as “automation complacency”—the risk that professional judgement is subtly replaced by the output of a system whose reasoning cannot be scrutinised.


3.3 Confidentiality in the Mediator’s Hands


The mediator often holds information asymmetrically, typically material disclosed by one party in private caucus that the other party has not seen.

Introducing such information into an AI tool—whether anonymised or not—may breach:

• statutory confidentiality obligations,

• the duty of neutrality,

• international standards such as the UNCITRAL, ICC and CEDR guidelines.


4. Recommended Good Practice for Lawyers and Mediators Using Generative AI


For lawyers representing parties

• Use GenAI only for non-confidential, non-identifying tasks.

• Never enter information from caucus sessions or confidential exchanges into prompts.

• Abstract or heavily sanitise any document before using AI tools.

• Independently verify, correct and re-draft any AI-generated text.

• Inform clients whenever AI is used for preparatory or drafting purposes.


For mediators

• Do not use GenAI to evaluate settlement options or generate proposals.

• Avoid cloud-based models for any activity connected to the mediation.

• Do not rely on AI-generated assessments of parties’ behaviour, tone or interests.

• Use only isolated, local, zero-retention systems for administrative tasks (if at all).

• Include explicit information on AI usage, where relevant, in mediation protocols.


Conclusion: AI May Support Preparation, But Mediation Remains a Human Process


GenAI can assist lawyers in preparing strategies and documents, but it cannot replace the human judgement, neutrality and relational competence that lie at the heart of civil and commercial mediation.


For mediators, the use of AI should remain exceptional, strictly controlled and technically secure.


For lawyers, the tool may be useful, but only within strict boundaries: no confidential prompts, full verification of outputs, and complete transparency with the client.


In mediation, more than anywhere else, AI may be a supporting instrument—never a decision-maker, never an adviser, never a third party.

 
 
 
