How AI tool Elsa could shape the FDA review process

Jul 15th, 2025

By Nicole Witowski

As federal agencies look to modernize operations with artificial intelligence, the FDA is moving ahead with its own in-house tool: an AI assistant called Elsa. The agency rolled out the technology in June, describing it as a way to help streamline regulatory review, from summarizing adverse event reports to performing label comparisons.

The rollout came ahead of schedule, meeting the agency’s internal goal to scale AI tools agency-wide by mid-year. But while the launch marks a milestone in the FDA’s digital transformation efforts, early reports suggest Elsa may be more of a starting point than a finished product.

AI aimed at improving internal workflows

Elsa, short for “Electronic Language System Assistant,” was built on Anthropic’s Claude model and runs inside AWS’s secure GovCloud environment. According to the FDA, the tool is not trained on confidential submissions from drug or device sponsors. Instead, it is designed to support FDA staff by drawing on internal documents to generate summaries, draft code for database development, and aid inspection planning.
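The FDA has not published Elsa’s prompts, model configuration, or integration code, but the workflow it describes, a Claude model summarizing internal documents, follows a familiar pattern. The sketch below shows roughly what such a summarization call looks like using Anthropic’s public Python SDK; the model name, prompt, and function are illustrative assumptions, not the agency’s actual implementation.

```python
import anthropic

# Illustrative sketch only: the FDA has not published Elsa's code.
# This shows the general shape of an LLM summarization call using
# Anthropic's public Python SDK.
client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def summarize_report(report_text: str) -> str:
    """Return a short, structured summary of an internal document."""
    response = client.messages.create(
        model="claude-sonnet-4-20250514",  # placeholder; Elsa's model is not public
        max_tokens=512,
        messages=[{
            "role": "user",
            "content": (
                "Summarize this adverse event report in three bullet points "
                "and flag any fields that appear to be missing:\n\n"
                + report_text
            ),
        }],
    )
    return response.content[0].text
```

In a deployment like the one the FDA describes, the same pattern would run against models hosted inside GovCloud rather than the public API, keeping documents and summaries within the agency’s own environment.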

The agency has framed the tool as a productivity enhancer, particularly for reviewers tasked with navigating large volumes of information. Leadership has pointed to early pilot use cases that reportedly saved staff time on administrative tasks. Still, the FDA has shared limited detail about how the tool was trained and tested, and with a relatively short pilot period, some aspects of its real-world performance remain unclear.

Internal observations and early limitations

Currently, the tool seems to function primarily as a summarization engine and is not yet embedded in core FDA systems used for product review or decision-making. Some staff have voiced concern that the tool was deployed too quickly and without sufficient safeguards or training protocols in place.

These early concerns have been echoed in initial user feedback, which points to a few growing pains. Staff have reported difficulties integrating the tool with existing systems and uploading documents, along with inconsistent output quality when summarizing information.

Concerns about accuracy are especially salient given recent public examples of generative AI’s limitations. For instance, the Department of Health and Human Services’ “Make America Healthy Again” report, released in May 2025, came under fire for citing studies that do not exist—a well-documented risk associated with current AI models.
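Hallucinated citations are also one of the easier failure modes to screen for mechanically. As a purely hypothetical illustration, not an FDA or HHS process, a review team could at least confirm that an AI-cited PubMed ID resolves to a real record using NCBI’s public E-utilities API:

```python
import requests

# Hypothetical safeguard, not an FDA process: confirm that a PubMed ID
# cited in AI-generated text resolves to a real record via NCBI's
# public E-utilities API.
EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esummary.fcgi"

def pmid_exists(pmid: str) -> bool:
    """Return True if the given PubMed ID resolves to a real record."""
    resp = requests.get(
        EUTILS,
        params={"db": "pubmed", "id": pmid, "retmode": "json"},
        timeout=10,
    )
    resp.raise_for_status()
    record = resp.json().get("result", {}).get(pmid, {})
    # E-utilities embeds an "error" field in the record for unknown IDs.
    return bool(record) and "error" not in record
```

A check like this only catches fabricated identifiers; a citation can resolve to a real paper and still be misquoted, so human verification of the cited content remains necessary.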

While Elsa runs in a more controlled environment, some reviewers have pointed out that the tool's responses sometimes contain inaccuracies or incomplete data. In a regulatory context where evidence standards are critical, even small inconsistencies could complicate rather than speed up the review process.

Part of a broader shift, but still early days

The launch of Elsa comes amid a broader push across the federal government to integrate AI into operational workflows. Within the FDA, the tool has been promoted as a step toward reducing manual burdens on staff and addressing resource gaps exacerbated by recent layoffs and hiring freezes. Some see the move as a pragmatic response to agency-wide staffing constraints, while others worry it could reflect a premature bet on technology not yet ready to support core regulatory work.

The FDA’s use of AI for internal operations mirrors what’s happening in the broader life sciences industry. Over the last two decades, the agency has reviewed and authorized more than 950 AI-enabled devices, the bulk of them within the last five years. For companies operating in the life sciences, this trend suggests a dual transformation: as biopharma and medtech build and validate products that use AI, they also need to prepare for a regulatory environment that’s evolving in parallel.

Emerging considerations for sponsors

As AI tools like Elsa become more embedded in regulatory workflows, questions around transparency and accountability are beginning to surface. Industry stakeholders are asking whether sponsors will be informed when AI plays a role in the review of their submissions. If an AI-generated summary or analysis contributes to a delay or rejection, companies may have limited visibility into how those outputs influenced the outcome, raising concerns about unvalidated or opaque reasoning.

Looking ahead, life sciences companies should also consider how AI-generated outputs could influence day-to-day interactions with regulatory review teams. While tools like Elsa may improve efficiency in processing application materials, early-stage limitations, such as potential misinterpretations or hallucinated information, could result in unclear or inconsistent feedback. This, in turn, may require additional back-and-forth with FDA staff to resolve discrepancies. Being able to spot and clarify those inconsistencies early will be essential to keeping submissions on track.

Legal experts have also noted that the use of AI in regulatory decision-making could become a factor in future disputes, particularly if sponsors seek to challenge decisions based on how those tools were applied. As adoption grows, clear governance and disclosure frameworks will be needed, not just for the FDA but also for sponsors navigating a shifting regulatory landscape.

Cautious optimism, tempered by technical realities

AI is expected to play a larger role in how regulatory agencies operate, but the FDA’s experience with Elsa underscores the challenges of turning that vision into operational change. The tool may eventually help reduce administrative overhead and support reviewers in specific scenarios, but the current version falls short of being a fully integrated solution.

As with many early AI deployments, Elsa’s future impact will likely depend on continued refinement, greater integration with FDA systems, and clear governance policies around its use. For now, the agency appears to be laying the groundwork for transformation. It remains to be seen whether this transformation can be achieved while maintaining regulatory rigor.

Empowering teams beyond the approval process

As agencies like the FDA continue testing AI for internal use, life sciences organizations should track these developments closely. Regulatory expectations, workflows, and data standards are evolving, and early tools like Elsa offer a glimpse into the future of how regulators may engage with scientific and clinical information at scale.

While these shifts shape the front end of the approval process, teams across medical affairs and commercial strategy must also adapt. Definitive Healthcare helps organizations see these landscapes clearly with expert data, real-world insights, and analytics that power stakeholder engagement and strategic decision-making. Start a free trial now.

About the Author

Nicole Witowski

Nicole Witowski is a Senior Content Writer at Definitive Healthcare. She brings more than 10 years of experience writing about the healthcare industry. Her work has been…
