February 27, 2024

Jennifer Hoekstra on Navigating ESI and AI in Mass Torts

Author: Pawan Murthy


Jennifer Hoekstra, a partner at Aylstock, Witkin, Kreis & Overholtz, delves into the intersection of AI, ESI (Electronically Stored Information), and mass tort litigation. After years of handling plaintiff-specific discovery related to ESI, Jennifer offers tips on how law firms can benefit from AI software while maintaining proper protocols for client data confidentiality.

Q: What is ESI, and how does it relate to using AI in law? 

A: I frequently address this topic in my monthly webinars for Thomson Reuters. Electronically Stored Information, or ESI, encompasses the various digital documents and data that might be relevant in legal proceedings. When we refer to ESI, we mean all data that is stored electronically and accessible in the ordinary course of business. This includes emails, billing records, shared files, and data on individual computers.

ESI, in essence, is any conceivable form of electronically stored and accessible data. It's akin to asking where documents are stored physically, like in a file cabinet or closet, but in a digital context. The same thoroughness applied to physical document storage must be applied to electronic storage to ensure comprehensive collection. With the prevalence of AI in everyday devices, we must recognize that data lives in many places: Google search histories, social media algorithms, a company's internal chat systems, the data trails within those tools, and corporate social media profiles. For example, understanding whether a company uses Office 365 or a different version of Microsoft Office is vital. That distinction affects how data is shared, such as through email attachments or embedded file links, which in turn influences collection methods.

Regarding the intersection with AI, many companies unknowingly use AI to organize and manage their files. A commonly used program might be AI-based, intelligently organizing and prioritizing documents for easier access. When collecting ESI, it is imperative to assume that an AI system may be managing this information. Understanding the specifics of that AI system is critical to ensure no data is overlooked during the collection process.

Q:  Do you have any tips or best practices you can share with law firms considering AI tools? 

A: It’s essential to recognize that you might already be using AI, perhaps unknowingly. If you’re operating with the latest versions of software like Microsoft Office or G-Suite, there’s a likelihood that your search algorithms are AI-driven. The same applies if you’ve recently acquired a client management system or similar software – it likely incorporates some form of AI.

When considering AI for tasks such as generating client letters or conducting research, it's crucial to have proper protocols in place, just as with the adoption of any new technology. Best practices include deploying the software in a closed environment to avoid disrupting existing systems and ensuring that client materials and confidential information remain isolated and secure.

A common oversight I've observed involves document translation. For instance, translating a document protected under a confidentiality order through a public, web-based platform like Google Translate can inadvertently breach that order. Nowadays, tools like Google Translate are often addressed explicitly in confidentiality and foreign-language protocols, reflecting how AI has changed our approach to such tasks.

My primary recommendation is to treat AI not as an isolated tool but as a component that requires pilot testing before mainstream implementation. In a typical law firm structure, only a few individuals may need ongoing access to AI. It’s helpful to think of AI as akin to a new associate or senior paralegal – consider its role within your organization, define the scope of its usage, and ensure that its integration is deliberate and controlled rather than a haphazard embrace of the latest technology.

Q: What role do you think AI will play in the future of mass torts? Are judges becoming more aware of their capabilities (good or bad) in the courtroom? 

A: The awareness of AI is undoubtedly growing in courtrooms. The publicity surrounding poorly executed, AI-drafted briefs has contributed to this increased awareness. While some judges might react by imposing blanket bans on AI technology in their courtrooms, they may not fully comprehend what such a ban entails. It could inadvertently prohibit essential functions like Google searches or voice dictation, since those tools also incorporate AI elements.

However, I believe most judges, especially those overseeing mass tort cases, are more informed and understand that technology, including AI, plays a significant role in litigation and discovery processes.

Looking toward the future of mass torts, I foresee AI evolving to predict litigation opportunities. It might analyze adverse event reports submitted to the FDA and identify potential issues with certain drugs, suggesting new avenues for legal action. AI could also detect trends and patterns in client intake materials or medical records, indicating a commonality in medication usage and side effects among many clients. While we're not fully there yet, some firms are actively pursuing such predictive capabilities, viewing AI as a natural progression from data mining client files to identifying future litigation.

