
SafePassage: Bringing Trust and Traceability to AI-Powered Mass Tort Workflows

Written by Joe Barrow | Oct 13, 2025

In recent years, AI has become capable of reading and analyzing text at extraordinary speed, processing and cross-referencing information far faster than any human could. That power brings enormous potential, but in law, speed without accuracy is a risk.

The legal world has already seen firsthand what happens when AI models hallucinate. From briefs that cite non-existent cases to extractions that don’t match the source record, the risks are real, and in mass tort litigation, they scale quickly.

That challenge sparked a focused effort by several members of our team at Pattern. Machine learning engineers Joe Barrow and Raj Patel, along with AI analysts Misha Kharkovski, Ben Davies, and Ryan Schmitt, asked an important question: 

How can we build guardrails that make every extraction verifiable? 

Their answer is SafePassage, a research framework that strengthens our existing quality systems with an additional layer of evidence-based validation, making it easier for legal teams to see not just what the AI extracted, but why it's correct. It’s detailed in the latest research paper: SafePassage: High-Fidelity Information Extraction with Black Box LLMs.

More than a technical achievement, SafePassage was built to earn trust. For legal teams managing thousands of claims, SafePassage provides the structure and safeguards needed to make AI an asset rather than a risk.

What is SafePassage?

In mass tort and complex litigation, AI models are tasked with extracting structured data from large volumes of source documents, including medical records, claim forms, court filings, and more. But using AI models for extraction introduces a new risk: hallucinations. Legal teams need to be able to trust that extracted outputs are accurate, explainable, and verifiable. 

This is the goal of SafePassage. It is a three-step approach designed to make AI outputs more trustworthy. 


  1.  Generate
    The system generates structured fields (for example, Hearing Date or Presiding Judge) along with a snippet of context directly from the record itself. For instance, if the field is Presiding Judge, the system might output “Judge Smith” paired with the snippet “…before the Honorable Judge Smith in the Superior Court…”.

  2.  Align
    That context snippet is checked against the actual document to confirm it truly exists in the source, even if there are OCR errors, formatting quirks, or small changes. This prevents the system from introducing fabricated or mistranscribed text.

  3. Score
    A lightweight verifier evaluates whether the context actually supports the extracted answer. If the system labeled “plaintiff Joe Barrow” as the Presiding Judge, the verifier flags the extraction as unsupported. This ensures that extractions are correctly tied to the right field.

Together, these steps act as a guardrail system of protective checks that ensures every answer is grounded in evidence.
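The three steps can be sketched in a few lines of Python. This is a minimal illustration only, not Pattern's implementation: the fuzzy character-overlap alignment, its threshold, and the keyword-containment `score()` are all assumed stand-ins for the paper's actual alignment method and trained verifier.

```python
# Minimal sketch of the generate/align/score guardrail. The align()
# threshold and the containment-based score() are illustrative
# stand-ins, not Pattern's actual alignment or verifier model.
from difflib import SequenceMatcher


def align(snippet: str, document: str, threshold: float = 0.9) -> bool:
    """Step 2: confirm the snippet really occurs in the source document,
    tolerating OCR errors and formatting quirks via fuzzy matching."""
    matcher = SequenceMatcher(None, document, snippet, autojunk=False)
    matched = sum(block.size for block in matcher.get_matching_blocks())
    return matched / max(len(snippet), 1) >= threshold


def score(field: str, value: str, snippet: str) -> float:
    """Step 3: judge whether the snippet supports the extracted value.
    In practice this is a small trained verifier; a trivial containment
    check stands in for it here."""
    return 1.0 if value.lower() in snippet.lower() else 0.0


def safe_passage(field: str, value: str, snippet: str,
                 document: str, min_score: float = 0.5) -> bool:
    """Accept an extraction only when its evidence both aligns with the
    source and supports the value."""
    if not align(snippet, document):
        return False  # fabricated or mistranscribed context
    return score(field, value, snippet) >= min_score


document = "...before the Honorable Judge Smith in the Superior Court..."
ok = safe_passage("Presiding Judge", "Judge Smith",
                  "before the Honorable Judge Smith", document)
bad = safe_passage("Presiding Judge", "plaintiff Joe Barrow",
                   "before the Honorable Judge Smith", document)
```

In this sketch, the `ok` extraction passes both checks, while `bad` aligns with the document but fails the support check, mirroring the Presiding Judge example above. The fuzzy threshold is what lets a snippet survive small OCR discrepancies while still rejecting fabricated text.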

Why SafePassage Matters for Mass Tort Workflows

Built-in traceability and trustworthiness

Every data point extracted by SafePassage includes a supporting snippet from the original record and is double-checked in two ways:

  1. Traceability – we can link the extracted information to content from the document.
  2. Trustworthiness – confirmation that the evidence actually supports the extracted information.

This ensures that reviewers are working with evidence-backed information they can rely on during audits, negotiations, and settlement reviews, with the source material readily available for inspection.

Fewer re-dos and faster throughput

Unsupported extractions are filtered or flagged before reviewers ever see them. This reduces the time spent chasing incorrect data or reworking files. In practice, the SafePassage approach significantly reduces error rates and is already driving improvements to Pattern Data’s AutoReview engine, the system that automatically extracts and classifies information from case records prior to human review.

Lower operating costs at scale

SafePassage includes a built-in verification layer that checks each extraction for accuracy before it’s delivered. Instead of relying on large, general-purpose AI models to perform this task, it utilizes a smaller model trained specifically for legal data review. That enables it to perform thousands of quality checks per second at a fraction of the cost, reducing expenses and accelerating large-scale reviews without compromising accuracy.

Better setup, better outcomes

SafePassage scoring can also be used to measure how well an AI model is configured before it goes live. For example, when teams adjust how a model identifies key fields, such as exposure dates or injuries, a higher SafePassage score indicates that the model is making stronger, evidence-backed connections. This provides a fast way to confirm quality early, reducing the need for large-scale rework once reviews begin.
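As a toy illustration of configuration scoring, a candidate setup can be summarized by the fraction of its extractions whose evidence passes verification. The `supports()` check and the sample extractions below are hypothetical stand-ins, assumed for the example; the real system uses a trained verifier.

```python
# Toy sketch: compare two candidate model configurations by their
# aggregate SafePassage pass rate. supports() is a hypothetical
# stand-in for the learned verifier.
def supports(value: str, snippet: str) -> bool:
    """Trivial support check: does the snippet contain the value?"""
    return value.lower() in snippet.lower()


def config_score(extractions) -> float:
    """Fraction of extractions whose snippet supports the value."""
    if not extractions:
        return 0.0
    return sum(supports(e["value"], e["snippet"])
               for e in extractions) / len(extractions)


# Hypothetical sample outputs from two field-identification setups.
config_a = [
    {"value": "Judge Smith", "snippet": "before the Honorable Judge Smith"},
    {"value": "2024-05-01", "snippet": "complaint filed 2024-05-01"},
]
config_b = [
    {"value": "Judge Smith", "snippet": "before the Honorable Judge Smith"},
    {"value": "Judge Smith", "snippet": "counsel for the plaintiff"},
]
```

Here `config_a` scores higher than `config_b` because both of its extractions are evidence-backed, so a team would promote the first setup before large-scale processing begins.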

Right-sized human oversight

The SafePassage paper underscores a crucial truth: no language model will ever be perfect. Instead of trying to eliminate human involvement, SafePassage focuses it where it matters most. Reviewers spend their time on exceptions, rare or ambiguous cases, and final validations, the areas that truly require human judgment, while the system handles repetitive checks in the background. This balance helps ensure every case is reviewed efficiently and defensibly.

Where You Will See It in Practice

  • Case Intake/Screening – SafePassage supports the early review of medical records, claim forms, and exposure documents by providing evidence-backed extractions and flagging unsupported fields before they reach reviewers.

  • Case Development – Teams can use SafePassage scores to test and refine AI model setups, improving extraction quality before large-scale processing begins.

  • Settlement Preparation – Because every data point is tied back to its source, SafePassage reduces disputes and rework during case tiering, scoring, and settlement audits.

SafePassage is one more example of how research at Pattern Data connects directly to real-world legal workflows. It extends our commitment to pairing AI performance with evidence, transparency, and reliability: the foundation of any defensible case.

If you would like to learn more, you can read the full paper SafePassage: High-Fidelity Information Extraction with Black Box LLMs.