Legal AI in 2025: How AI legal agents and reasoning models are reshaping mass tort litigation

Author: Ashley Grodnitzky


What shape will the legal AI landscape take, and how will it evolve? It’s a question more law firms are asking, especially those managing high-volume, document-heavy work like mass tort litigation, where AI can offer substantial support. Two concepts sit at the forefront of the discussion: LLM agents and reasoning models.

At Pattern Data, we’ve been rethinking how we talk about our platform and, more importantly, how we build it. There’s been a lot of buzz around AI agents lately. With AI legal agents, often powered by reasoning models, automating document review and reducing errors in claim eligibility checks, we’re driving and seeing real progress in how legal tech AI can ease the burden on case teams and improve litigation and settlement outcomes. Some of it’s hype. Some of it’s valid. And some of it gets real when you’re actually doing the work of automating complex legal workflows.

We recently hosted a session with Joe Barrow, one of our expert machine learning engineers, to unpack what AI agents are (and aren’t), how reasoning models are pushing boundaries, and how these ideas are shaping our product roadmap. Here's our take.

What is an AI Agent?

Defining an AI agent can be tricky. There isn't one universally agreed-upon definition. Some might say it’s a buzzword, but a more practical way to think about it is: an LLM in a loop with tools and a goal to accomplish.

Breaking it down, an agent involves (a rough code sketch follows the list):

  • Goal: A specific task or objective to achieve.
  • Loop: A process that continues until the goal is met.
  • LLM: A large language model providing the intelligence (common commercial examples include ChatGPT and Claude).
  • Tools: External functions or resources the LLM can use.
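
A minimal sketch of that loop in Python might look like the following. The LLM call, the tool set, and the stopping check are all illustrative placeholders, not Pattern Data’s implementation.

```python
# A toy agent loop: an LLM, a goal, a set of tools, and a loop that repeats
# until the goal is met. Every name here is illustrative.

def call_llm(prompt: str) -> dict:
    """Placeholder for a real LLM API call. Returns a canned "done" response
    so the example runs end to end."""
    return {"done": True, "answer": "Example: claimant appears to meet the criteria"}

TOOLS = {
    "fetch_documents": lambda claimant_id: ["medical_record.pdf"],  # pull case documents
    "extract_dates": lambda text: ["2016-04-01"],                   # find product-use dates
}

def run_agent(goal: str, max_steps: int = 10) -> str:
    history = [f"Goal: {goal}"]
    for _ in range(max_steps):                    # the loop
        response = call_llm("\n".join(history))   # the LLM decides what to do next
        if response.get("done"):                  # goal met: stop and answer
            return response["answer"]
        result = TOOLS[response["tool"]](response["argument"])  # otherwise, call a tool
        history.append(f"{response['tool']} -> {result}")
    return "Unresolved: route to human review"    # fall back gracefully

print(run_agent("Determine whether the claimant meets the injury eligibility criteria"))
```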

In the legal world, this can look like a “reviewer agent” designed to support the analysis of evidence. The agent pulls in relevant case documents and medical records, interprets diagnostic testing, or summarizes claimant intake forms, looping through each case until it’s fully reviewed and classified.

A clear example of an MDL-specific goal for an AI agent might be: “Determine whether a claimant meets the injury eligibility criteria for a specific products liability program.”

To get there, the agent would (see the sketch after this list):

  • Pull and analyze case documents, including medical records
  • Check for evidence of product use and extract dates
  • Select and interpret medical records relating to injuries
  • Apply the correct program criteria, such as the scoring rules
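
To make the eligibility goal concrete, the tools such an agent calls might look like the stubs below. The record fields, keywords, and scoring threshold are hypothetical stand-ins for whatever a specific program actually defines.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class MedicalRecord:
    record_date: date
    description: str

def product_use_dates(records: list[MedicalRecord]) -> list[date]:
    """Stand-in for extracting dates that evidence product use."""
    return [r.record_date for r in records if "earplug" in r.description.lower()]

def injury_score(records: list[MedicalRecord]) -> int:
    """Stand-in for a program's scoring rules (here, a toy count of qualifying records)."""
    return sum(1 for r in records if "hearing loss" in r.description.lower())

def meets_eligibility(records: list[MedicalRecord], min_score: int = 1) -> bool:
    """Combine the checks: evidence of product use plus a score over the threshold."""
    return bool(product_use_dates(records)) and injury_score(records) >= min_score

records = [
    MedicalRecord(date(2016, 4, 1), "Issued Combat Arms earplugs"),
    MedicalRecord(date(2021, 9, 15), "Diagnosed with bilateral hearing loss"),
]
print(meets_eligibility(records))  # True in this toy example
```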

Other example goals for agents in mass tort litigation include:

  • “Generate a draft notice for an expedited pay claimant based on hearing loss score.”
  • “Identify missing documentation required for claim submission.”
  • “Match a claimant’s injury profile to the correct compensation tier.”

 

Why Aren't Agents Everywhere Yet?

While agents sound powerful, they aren't yet in widespread use. There are still open questions around how to decide when an agent has completed its task and what steps it should take next. It's an area ripe for innovation and development.

Reasoning Models: Trading Compute for Better Answers

Reasoning models offer a different approach. They let you trade additional test-time compute, typically by generating more tokens, for a better response. Essentially, the model spends more time “thinking” to produce a more accurate or detailed output.
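
A simple way to picture that trade-off is self-consistency sampling: ask the model for several independently reasoned answers and keep the majority vote, spending more tokens in exchange for a steadier result. This is a generic illustration of the idea, not a description of how any particular reasoning model works internally.

```python
from collections import Counter

def ask_model(question: str) -> str:
    """Placeholder for a single LLM call that reasons and returns an answer."""
    return "Tier 2"  # canned answer so the example runs

def answer_with_more_compute(question: str, samples: int = 5) -> str:
    """Spend more test-time compute (more sampled answers) for a more reliable result."""
    answers = [ask_model(question) for _ in range(samples)]
    majority, _count = Counter(answers).most_common(1)[0]
    return majority

print(answer_with_more_compute("Which compensation tier does this claimant match?"))
```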

This is crucial for complex tasks like visual analogical reasoning, where LLMs often struggle. Reasoning models can help bridge the gap, enabling AI to solve problems that are easy for humans but difficult for machines.

This is especially useful in legal work, where AI often has to (see the toy example after this list):

  • Compare timelines and facts across documents
  • Interpret ambiguous data (like fluctuating audiograms)
  • Make a series of structured decisions (e.g., does this person qualify? What tier? What’s missing?)
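
As a toy illustration of that last point, the chain of structured decisions can be written out as a small function: check what’s missing, decide whether the claimant qualifies, then assign a tier. The field names and thresholds below are invented for illustration.

```python
def review_claim(injury_score: int, has_product_use_proof: bool,
                 has_medical_records: bool) -> dict:
    """Toy decision chain: missing items -> qualifies? -> which tier?"""
    missing = []
    if not has_product_use_proof:
        missing.append("proof of product use")
    if not has_medical_records:
        missing.append("medical records")

    qualifies = not missing and injury_score > 0
    tier = None
    if qualifies:
        tier = "Tier 1" if injury_score >= 5 else "Tier 2"  # invented threshold
    return {"qualifies": qualifies, "tier": tier, "missing": missing}

print(review_claim(injury_score=6, has_product_use_proof=True, has_medical_records=True))
```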

In the 3M Combat Arms Earplug Settlement Program, for example, reasoning models played a role in scoring injury levels, selecting the most appropriate audiograms, and helping identify edge cases that needed human review.

The Connection Between Agents and Reasoning Models

Reasoning models and agents are closely related. Deciding which tool an agent should use, or whether its goal has been accomplished, often requires reasoning. Models fine-tuned for reasoning can be particularly helpful in this context.

Legal AI tools that support litigation and settlement, like Pattern Data, often embed reasoning models within agentic loops. The reasoning model acts as the intelligent driver, using the agentic loop to access tools and data and to pursue a defined goal through complex decision-making.

In essence, the reasoning model is the agent, orchestrating the workflow and making the intricate decisions to achieve its objective. 

What This Means for Law Firms and Mass Tort Cases

The AI legal agent is no longer theoretical—it’s a working part of litigation, settlement processing and analysis at Pattern. We’ve already used it to process hundreds of thousands of claims, generate notices, calculate eligibility, and flag exceptions for human intervention and review.

Here’s why this matters:

  • Faster document review = faster resolutions
  • More accurate eligibility checks = fewer disputes and faster payments
  • Real-time tracking transparency = better visibility for clients and co-counsel
  • Less manual work = more time for strategic legal decisions

The AI landscape is moving fast—especially when it comes to LLM agents and reasoning models. At Pattern Data, we're focused on turning these advancements into real, practical tools for the legal world. We see huge potential to simplify complex litigation workflows, cut down on manual effort, and improve accuracy across the board. As we continue building, we'll share what we’re learning and how it applies to your work. Our goal is to help law firms and claims administrators apply AI where it makes a difference—early, often, and with confidence.

If you're exploring how AI fits into your legal workflows, here are a few resources that dig deeper into the realities, challenges, and best practices:

