Beyond Knowledge, Toward Wisdom
Case Study: Regulatory Affairs in MedTech
Before artificial intelligence entered the picture, expertise in regulatory affairs, as in most specialized professions, was built on three foundations.
First, access was scarce. Regulations such as the EU Medical Device Regulation (MDR) or the In Vitro Diagnostic Regulation (IVDR) were notoriously dense, scattered across long texts, guidance papers, and internal memos. Accessing and making sense of them was not straightforward, and only those with the right background and experience could navigate them effectively.
Second, expertise relied heavily on networks. Much of regulatory practice was not just about reading the law, but about knowing how it was applied. How would a notified body interpret a particular clause? What evidence was likely to convince a reviewer? These insights were rarely written down; they were shared through professional networks, conferences, and years of accumulated experience.
Finally, there was a premium on specialization. Because mistakes were costly, sometimes delaying a product launch by years, companies were willing to pay significant fees for consultants or in-house experts who could reduce risk. Their real value was not only knowing the rules but also knowing how to apply them in practice.
In that world, an expert was a living library, a trusted interpreter who could translate complexity into guidance.
How AI Changes the Equation
AI has altered this balance by transforming how information is accessed and processed.
Knowledge that once required hours of research can now be surfaced in minutes. Tools like GPT, Gemini, or Claude can generate a step-by-step regulatory pathway for EU or U.S. market entry on demand. They can digest thousands of pages of documentation and deliver a concise summary that no individual could produce so quickly.
For many, this creates an illusion of competence. A MedTech startup founder might ask, “What do I need to launch a Class IIa device in Europe?” and receive a confident, polished response. On the surface, it feels as if the expert has been replaced.
But this shift introduces a new challenge: when information is abundant and instantly available, how do we distinguish what is correct from what is misleading?
The Core Challenge: Evaluating AI’s Accuracy
The most pressing problem is not the information AI delivers, but our ability to evaluate it.
A regulatory professional might recognize when an AI-generated roadmap contains gaps or errors. A startup founder or product manager without that background, however, cannot easily make that judgment. This creates a paradox: those most likely to rely on AI are often the least equipped to verify its accuracy.
AI-generated expertise has several pitfalls. Sometimes, models “hallucinate,” producing references or interpretations that appear credible but are fabricated. Other times, they oversimplify, missing subtle exceptions or context. Training data may also be outdated, meaning answers lag behind evolving regulations. Perhaps most dangerously, the structured, confident tone of AI output can mask errors that are hard to spot.
There are ways to mitigate these risks. One approach is to compare results across multiple models and look for consistency. Another is to ask AI for citations and check them against official MDR or IVDR texts, or FDA sources. A third—and perhaps the most reliable—is to combine AI’s speed with the oversight of a human expert, who can validate and refine its recommendations.
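To make the first two checks repeatable, a team can wrap them in a small script. The sketch below is a minimal illustration rather than a production tool: the function names, the example question, the stub models, and the citation pattern are assumptions of mine, not an established workflow. It sends the same prompt to several models (supplied here as simple callables, which you would replace with real API calls to the services you use), then extracts anything that looks like a cited MDR article, annex, standard, or MDCG guidance so a human reviewer can verify each reference against the official texts.

```python
import re
from typing import Callable, Dict, List

# Pattern for references a human reviewer should verify, e.g. "Article 52",
# "Annex VIII", "ISO 14971", or "MDCG 2020-1". Purely illustrative.
CITATION_PATTERN = re.compile(
    r"(Article\s+\d+|Annex\s+[IVXL]+|ISO\s+\d{4,5}|MDCG\s+\d{4}-\d+)",
    re.IGNORECASE,
)


def collect_answers(prompt: str, models: Dict[str, Callable[[str], str]]) -> Dict[str, dict]:
    """Ask the same question of several models and gather each answer plus
    any regulatory references it cites, so they can be checked manually."""
    results: Dict[str, dict] = {}
    for name, ask in models.items():
        answer = ask(prompt)
        citations: List[str] = sorted(set(CITATION_PATTERN.findall(answer)))
        results[name] = {"answer": answer, "citations": citations}
    return results


def report(results: Dict[str, dict]) -> None:
    """Print answers side by side so a reviewer can spot disagreements and
    verify each cited clause against the official MDR/IVDR or FDA texts."""
    for name, data in results.items():
        print(f"=== {name} ===")
        print(data["answer"][:500])  # truncate long answers for readability
        print("Citations to verify:", ", ".join(data["citations"]) or "none found")
        print()


if __name__ == "__main__":
    question = "What is the EU regulatory pathway for a Class IIa wearable cardiac monitor?"
    # Stub callables stand in for real API calls to whichever AI services you use.
    stub_models = {
        "model_a": lambda p: "Classify under Annex VIII, then follow Article 52 MDR ...",
        "model_b": lambda p: "Prepare the technical file per Annex II and apply ISO 14971 ...",
    }
    report(collect_answers(question, stub_models))
```

Consistency across models is a useful screen, not proof of correctness: two models can agree on the same outdated or oversimplified answer, which is why the cited clauses still need to be checked against the source texts.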
Even with these tools, however, the role of the human professional remains central.
What People Pay for Today
In an era where AI can produce roadmaps and summaries, companies are less willing to pay simply for access to “the rules.” The value of expertise has shifted.
What organizations now seek is trust. They need assurance that decisions will not expose them to costly delays or compliance failures. They pay for contextual interpretation, because regulations are not applied in a vacuum—each notified body, each jurisdiction, has its own nuances. They also value foresight. A consultant who can anticipate upcoming MDCG guidance or spot regulatory trends helps companies prepare today for tomorrow’s challenges.
Practical experience continues to matter as well. Regulations may be written texts, but they are lived differently in practice across countries and organizations. Finally, experts bring something AI cannot replicate: advocacy. They can represent a company to regulators, negotiate requirements, and carry professional credibility into conversations where trust is as important as evidence.
In short, companies are paying not for facts but for judgment, foresight, and reduced risk.
The Future Value of Experts
Looking ahead, the role of experts in MedTech regulatory affairs—and beyond—will continue to evolve.
Rather than serving as gatekeepers of knowledge, experts will act as curators, validating and contextualizing AI outputs. They will work less as interpreters of regulatory text and more as strategic advisors, guiding decisions that align regulatory obligations with business objectives. The strongest professionals will not resist AI but integrate it into their work, teaching organizations how to benefit from these tools while avoiding over-reliance.
Crucially, experts will provide foresight. While AI can summarize what is written today, only human professionals can anticipate how regulations will likely change tomorrow, and what those changes will mean for a company’s strategy.
Case Illustration: The MedTech Startup Dilemma
Consider a MedTech startup developing a wearable cardiac monitor. The team relies solely on AI to plan its EU regulatory pathway. The AI provides a solid roadmap: classify the device as Class IIa, prepare the technical file, engage a notified body, and apply for CE marking.
But the AI misses essential details. The notified body they approach has recently adopted a stricter stance on clinical evidence for wearables. Germany’s national authorities impose additional expectations on post-market surveillance. And the startup underestimates its timeline, assuming nine months when the reality is closer to eighteen.
The consequences are costly: delays in approval and strained investor relations. Only after bringing in a regulatory consultant does the team recognize the blind spots. The consultant doesn’t replace the AI; instead, they validate the parts of the roadmap that are correct, fill in what’s missing, and help the team adopt a safer, more reliable process. The outcome is a hybrid model: AI accelerates the work, while the expert ensures accuracy.
Conclusion: Beyond Knowledge, Toward Wisdom
AI has democratized access to knowledge, but knowledge alone is not wisdom.
In today’s environment, companies are not paying experts for access to information. They pay for insight, judgment, and foresight: the ability to evaluate, contextualize, and act with confidence.
In MedTech regulatory affairs, experts are not just providers of rules but strategic partners. The professionals who will thrive in this new landscape embrace AI as a tool while offering what AI cannot: human judgment, contextual understanding, and credibility.
The future of expertise lies not in competing with AI, but in elevating it, using technology to accelerate the work while ensuring that the outcomes remain trustworthy, strategic, and wise.