The warning letter to Purolea Cosmetics reads, at first glance, like a cautionary tale about artificial intelligence in pharmaceutical manufacturing. But what it actually documents is something far more familiar and problematic: a breakdown in fundamental GMP understanding paired with the misplaced confidence that technology can compensate for it.
The FDA’s observations were not subtle. Following a three-day inspection, the agency cited failures across core quality systems: inadequate oversight by the quality control unit, lack of batch record review, absence of process validation, and insufficient production controls. Layered on top of that was the company’s reliance on AI to generate specifications, procedures, and master records—without meaningful human review.
That combination is what turned a compliance gap into a warning letter.
The Regulatory Issue Isn’t AI—It’s Accountability
One line from the FDA warning letter stands out for its clarity: If AI is used in document creation, firms “must review the AI generated documents to ensure they were accurate and actually compliant with CGMP.”
This is not new regulatory thinking. It’s reinforcement of a principle as old as GMP itself: you are responsible for your systems, regardless of how they are built. The quality unit remains explicitly accountable for approving or rejecting all procedures and ensuring compliance. Delegating authorship (or review) to AI does not dilute that responsibility. If anything, it raises the bar – because now the “author” cannot explain its reasoning, and the burden shifts even more heavily to the reviewer.
The Purolea response—that they were “not aware” of process validation requirements because AI did not surface them—is particularly telling. It reframes a knowledge gap as a tooling issue. Not surprisingly, the FDA did not accept that explanation.
A Familiar Pattern, Just With New Tools
Strip away the AI layer, and the observations read like a textbook example of early-stage GMP immaturity:
- No assurance that procedures were followed
- No documented review of batch records prior to release
- No validated manufacturing processes
- Inadequate laboratory testing for microbial contamination
- Facility conditions allowing contamination risks (including direct exposure to the external environment)
The presence of insects, debris, and uncontrolled ingress points is not a technology problem; it is a basic control problem. In that context, AI didn't create the compliance risk; it simply amplified it. It gave the appearance of structure (documents, specifications, records) without any underlying, functional quality system. That's a particularly dangerous failure mode in regulated environments, because it looks compliant from the outside.
Outputs Matter More Than Inputs—At Least for Now
One of the more interesting aspects of this letter is what FDA chose not to emphasize: there is no direct scrutiny of the AI system itself—no discussion of model validation, training data, or system qualification.
That omission is not accidental.
FDA expectations around computerized systems have evolved. There was a time when even standard off-the-shelf tools like Excel or Word were subject to heavy validation expectations. More recently, the agency has taken a more risk-based view—particularly for widely used commercial tools—focusing less on validating the tool itself and more on how it is used and controlled. This letter reflects that shift.
There’s an implicit distinction emerging:
- Commercial, off-the-shelf AI tools: unlikely to require traditional validation in the GMP sense, provided their outputs are rigorously reviewed and verified by qualified personnel.
- Internally developed AI systems: far more likely to trigger expectations around validation, documentation, and control, because the firm owns the logic and the risk profile.
In both cases, FDA’s position is consistent: the output is what matters. Whether it comes from a validated system, a commercial tool, or a human author, if it’s wrong, it’s a compliance failure.
The Real Control Point: Human Expertise
If this warning letter has an overarching theme, it is not technology. It is the role, and the limits, of human oversight: AI doesn’t remove the need for expertise. It shifts where that expertise is applied.
Instead of writing documents from scratch, experienced professionals are now expected to critically evaluate outputs generated by systems that can sound authoritative while being fundamentally incorrect. That is a different skill set – and arguably a more demanding one – because catching what’s missing is harder than creating something from the ground up.
And this is where the implications extend beyond a single warning letter. Newer professionals are relying on increasingly complex AI tools to learn, despite the fact that the tools themselves are not reliable in a GMP context. Which raises a difficult question: How do you build the next generation of GMP expertise in an environment where the first draft is increasingly written by a system that doesn’t fully understand the rules?
Unfortunately, this is not a hypothetical concern. It’s already happening.
What This Means for the Industry
It might be tempting to read this warning letter as FDA drawing a line against AI. But that doesn’t hold up under scrutiny: nothing in the letter suggests the agency objects to AI-assisted document generation. What the FDA rejects is the idea that AI can stand in for a functioning quality system, or for the people responsible for it.
In other words, the issue isn’t automation—it’s abdication. For manufacturers, the implication is straightforward, even if uncomfortable: AI does not reduce the need for GMP expertise. It increases it. Because now, in addition to understanding the regulation, you also need to understand when the system helping you might be wrong.
A Final Observation
There’s a certain irony in this case. The company used AI to try to align with FDA expectations—and in doing so, demonstrated they didn’t understand those expectations in the first place. They also didn’t fully understand that technology has a way of exposing organizational truths rather than fixing them. In regulated environments, those truths surface quickly—and publicly.
AI will continue to find its place in pharmaceutical manufacturing. But it will do so on GMP’s terms, not the other way around. And GMP still expects something AI cannot provide: informed judgment.
Regulatory requirements surrounding AI are constantly changing – and so should your approach to them.
Here are some other related topics that you might find interesting:
- Blog: Biocompatibility Testing and Fraudulent Data in the Age of AI
- Video: QMSR, AI Reviewers and the Value of a Good FDA or Internal Audit
Fair warning: none of it is particularly glamorous—but it could make or break your program.

