
More informed, not less involved: what MTMP Spring 2026 revealed about AI in mass tort litigation

Written by Ashley Grodnitzky | Apr 23, 2026


The question that shaped the conversation at MTMP Spring 2026 was one most firms are quietly wrestling with: do you actually need purpose-built legal technology, or is a general AI subscription enough?

That question sat just beneath the surface throughout the event, where Pattern Data CPO James Nix joined Nathan Walter (Briefpoint), Eric Baum (Filevine), Tim Short (Supio), and Luis Prasad-Bernier (EvenUp) to discuss what’s actually working—and what isn’t—when AI meets real litigation pressure.

It’s clear that firms are adopting AI. The question has shifted to where it fits, and whether it can meet the demands and standards of real legal workflows.

Some are still using isolated tools. Others are moving toward connected systems that support the full litigation lifecycle from screening through settlement.

Here’s what stood out.

The conversation has changed

A few years ago, the question at events like MTMP was whether firms should be using AI at all. That question has been settled. The moderator opened with a show of hands, and the majority of attendees confirmed they are already using AI in some form. The conversation has moved on.

What it has moved to is harder: how do you know if what you are doing actually works? And how do you build on it in a way that holds up under real litigation pressure?

The wide spectrum of how law firms are using AI

James opened by acknowledging something firms need to hear directly: the definition of "using AI" spans an enormous range right now.

On one end, some firms are asking questions in Adobe Acrobat's assistant tool and calling it AI. On the other, some aspire to train their own models. Most are somewhere in the middle, with a handful of attorneys using general LLMs on their own, without firm-wide structure or oversight.

That lack of structure creates a different kind of risk. Not just inconsistent outputs, but no shared way to measure accuracy, track what’s been reviewed, or enforce the guardrails and data controls firms rely on.

The meaningful distinction James drew is not about the sophistication of the tool. It is about whether AI is integrated into how the firm operates or whether it sits outside of it.

Firms are beginning to separate into two distinct approaches. Some are embedding AI directly into intake, triage, and case workflows before a human ever opens a file, while others are using it on the margins, with individual attorneys applying general-purpose tools on their own time.

Both are using AI. The outcomes are not the same.

The difference shows up earliest at intake, where structured screening and eligibility validation determine whether a case moves forward or creates downstream rework.

Why general AI tools fall short at scale for legal use cases

The central debate of the session was the one the moderator framed directly: why not just use Claude or another general model? It is capable. It is cheap. Why pay for something on top of it?

The panelists gave several answers that are worth understanding clearly.

The processing problem

Case records in mass tort litigation aren't clean, searchable files. Each one requires contextual interpretation to mean anything, and there are thousands of them across thousands of claimants. General tools process one document at a time, within a single context window. Purpose-built legal platforms run hundreds or thousands of documents in parallel, with discrete checks firing on each one.

James shared that when Pattern processes a medical record, it spawns somewhere around 800 to 900 individual checks across that single file, a mix of model-driven analysis and mandatory verification steps that are part of the platform architecture. That is not something you replicate by asking a general model a question.
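To make that contrast concrete, here is a minimal sketch of what per-document checks running in parallel can look like. Everything below is hypothetical: the check names, `evaluate_record`, and `evaluate_docket` are invented for illustration and are not Pattern Data's actual architecture or API.

```python
import concurrent.futures

# Hypothetical illustration only: these checks are invented for this
# sketch, not Pattern Data's actual pipeline.
CHECKS = [
    ("mentions_diagnosis", lambda text: "diagnosis" in text.lower()),
    ("names_a_provider", lambda text: "provider" in text.lower()),
    # ...a production platform would register hundreds of checks per record
]

def evaluate_record(text: str) -> dict[str, bool]:
    """Run every registered check against a single record."""
    return {name: check(text) for name, check in CHECKS}

def evaluate_docket(records: dict[str, str]) -> dict[str, dict[str, bool]]:
    """Fan out across the docket so records are evaluated in parallel,
    instead of feeding one document at a time into one context window."""
    with concurrent.futures.ThreadPoolExecutor() as pool:
        futures = {pool.submit(evaluate_record, text): doc_id
                   for doc_id, text in records.items()}
        return {futures[done]: done.result()
                for done in concurrent.futures.as_completed(futures)}

results = evaluate_docket({
    "claimant-001": "Diagnosis confirmed by treating provider in 2021.",
    "claimant-002": "Records incomplete; no provider identified.",
})
print(results["claimant-001"])  # {'mentions_diagnosis': True, 'names_a_provider': True}
```

The point of the pattern is not the thread pool. It is that each check is a discrete, named unit that fires on every record, so nothing depends on one long prompt to one model.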

The validation problem

When you use a general tool, you have no way to measure how reliable the output is across a class of similar documents. Purpose-built platforms can give you that.

James was direct: Pattern can tell clients, with confidence, what their success rate is on a given data point. That number matters when you are using AI-assisted output to make decisions about a docket. You need to know the risk you are taking on.
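As a rough sketch of what a measurable success rate looks like in practice: score the system's extracted values against a hand-labeled gold set, field by field. The field name and data below are hypothetical; this illustrates the shape of the measurement, not Pattern's validation pipeline.

```python
# Hypothetical sketch: compare extracted values to hand-labeled answers
# so the error rate on each data point is a number, not a guess.
def success_rate(extracted: list[dict], gold: list[dict], field: str) -> float:
    """Fraction of records where the extracted value matches the label."""
    matches = sum(1 for e, g in zip(extracted, gold) if e.get(field) == g.get(field))
    return matches / len(gold)

extracted = [{"diagnosis_date": "2021-03-04"}, {"diagnosis_date": "2020-11-30"}]
gold      = [{"diagnosis_date": "2021-03-04"}, {"diagnosis_date": "2020-12-01"}]

print(f"diagnosis_date accuracy: {success_rate(extracted, gold, 'diagnosis_date'):.0%}")
# -> diagnosis_date accuracy: 50%
```

A general chat tool offers no equivalent of this number, because there is no fixed, repeated task to benchmark it against.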

The data problem

General models are trained on publicly available content, which includes a lot of unreliable sources. Legal platforms draw from your own files, your own institutional knowledge, and verified records. As several panelists noted, the results you get are only as trustworthy as the data that produced them. The sources matter.

Understanding what separates purpose-built legal AI from general tools starts with understanding what those tools are actually doing and what they're not.

"More informed, not less involved"

Pattern's internal principle is: more informed, not less involved. AI should make attorneys and staff more capable of exercising judgment, not act as a substitute for it. The goal is not to remove the professional from the process. The goal is to make sure that when they do engage, they have better information and less noise to work through.

This is a cultural posture as much as a technical one. Several panelists pointed to cases where firms handed off AI output without review and faced consequences, including sanctions for filings that contained fabricated case citations. The platform guardrails help. The firm culture matters at least as much.

Hallucinations are real. They are also manageable.

Every panelist was honest about this: AI systems make mistakes. The question is how you build around that reality.

James noted that 2023 was a different world, and that current model performance has improved considerably. But the honest answer is that the guardrails, validation pipelines, and task-specific checks that legal platforms build are specifically designed to reduce and quantify that risk. A good platform tells you its failure rate on specific tasks. A general tool does not.

Nathan Walter from Briefpoint made a point worth repeating: verification only works if it takes less time than doing the task from scratch. Purpose-built platforms design for that. They surface the work so a professional can check it efficiently, not re-do it entirely.

How Pattern approaches this differently

Pattern's platform is built around a different premise than most of what was described on the panel. Pattern is not a document processing tool or a discovery tool. It is a case evaluation platform designed to give firms visibility across an entire docket.

The platforms on this panel each occupy a specific workflow. Pattern's position is at the docket level. The question Pattern is built to answer is not "what does this document say?" but "what does this docket tell you, where do you need to focus, and are you prepared for what comes next?"

That kind of inventory-level intelligence carries through from screening at intake to prioritization during development to export-ready work product at settlement. The data and logic built at intake inform every stage that follows. Nothing gets re-reviewed from scratch.

For firms managing large inventories, that matters. Mass tort litigation does not reward reviewing cases one at a time.

How to rethink AI in your firm

The panel converged on a few practical points for firms that are still figuring this out.

Understand where AI will actually be applied before you buy anything. The best tool is the one that fits the workflow it is meant to improve. Generic implementations produce generic results.

Ask vendors about their data governance, SOC 2 Type 2 compliance, HIPAA compliance, and data processor agreements. The last one tends to get overlooked. When a vendor uses downstream services, your data may be flowing places you have not thought through.

Build a firm culture around verification, not rubber-stamping. AI should increase the information available to your team. Someone still has to review it.

The bigger picture

Across all five panelists, the consistent message was this: AI is no longer optional in mass tort practice. Firms that build structured, validated workflows around it now are creating a durable advantage. Firms that treat it as a curiosity or limit it to individual use cases are falling behind.

The conversation has moved from "should we?" to "how do we do this right?" MTMP was a useful reminder of where that line is.

If you want to talk through how Pattern fits into your current workflow, reach out to us at info@patterndata.ai or schedule a conversation here.


Frequently Asked Questions

What is the difference between general AI and purpose-built legal AI?

General AI tools process one document at a time, draw from publicly available sources, and have no way to measure accuracy on specific legal tasks. Purpose-built legal AI is trained on domain-specific data including case files, medical records, and litigation criteria, and runs thousands of documents in parallel with validation checks on each one. The difference isn't just capability. It's accountability: purpose-built platforms can tell you their success rate on a given data point. General tools cannot.

How do legal AI platforms prevent hallucinations?

No AI system eliminates hallucinations entirely, but purpose-built platforms reduce and quantify the risk. They ground outputs in your firm's own case files, apply validation guardrails before results are delivered, and build verification systems designed to make checking AI output faster than redoing the work from scratch. As Nathan Walter of Briefpoint put it: "Verification is only good if the time and effort it takes to verify is less than the time and effort it would take to do it from scratch."

What should law firms ask vendors about data security?

Ask whether the vendor has a current SOC 2 Type 2 report and can document HIPAA compliance, and ask to see both. Ask whether the system is closed, meaning your data never leaves their infrastructure. And review data processor agreements. A vendor can be SOC 2 compliant while still passing data to third-party downstream services. Those agreements get overlooked more than they should.

Why can't mass tort firms just use Claude or ChatGPT?

General models don't scale to mass tort demands. They process one document at a time, draw from unverified sources, and provide no measurable accuracy rate on specific tasks, which means no way to quantify the risk you're taking on. Purpose-built platforms run thousands of documents in parallel, validate against known benchmarks, and keep data within a closed, compliant system.