
FDA and EMA jointly issue guiding principles of good AI practice in drug development

In January 2026, the FDA and the European Medicines Agency (the “EMA”) published a set of guiding principles for the use of AI across the medicine lifecycle, spanning research, clinical development, manufacturing and post‑market monitoring. These principles are intended to support international regulatory convergence and underpin future guidance in both jurisdictions. The principles are:

  1. Human-centric by design – to align with human and ethical values;

  2. Risk-based approach;

  3. Adherence to relevant standards (e.g. GxP);

  4. Clear context of use;

  5. Multidisciplinary expertise;

  6. Data governance and documentation;

  7. Model design and development practices;

  8. Risk-based performance assessment;

  9. Life cycle management; and

  10. Clear, essential information (i.e. plain language).

AI and machine‑learning technologies are increasingly being incorporated into medical devices, with applications ranging from imaging systems that assist in identifying skin cancer to smart sensors capable of estimating the risk of cardiac events. Unlike traditional software, AI‑enabled medical devices can adapt and improve their performance over time as they are exposed to real‑world data.

Regulators have recognised both the potential benefits of these technologies and the limitations of regulatory frameworks designed around static software. No regulator has yet finalised guidance in this space, but the FDA published two draft guidance documents in January 2025: one on lifecycle management and marketing submission considerations for AI‑enabled medical devices, and one on the use of AI to support regulatory decision‑making for drugs and biological products.

The EMA’s approach, reflected in the European Commission’s proposed Biotech Act and ongoing pharmaceutical reform, similarly accommodates broader use of AI in regulatory decision‑making and encourages controlled experimentation with innovative AI methods.

Both regulators emphasise the importance of human oversight and ethical governance when deploying AI technologies in healthcare. As AI adoption continues to expand, regulators internationally are expected to complement principles‑based frameworks with further practical guidance, seeking to enable responsible innovation while maintaining patient safety.

