FDA for AI
Proposed regulatory framework to vet new AI models before release.
First Mentioned
5/10/2026, 4:50:25 AM
Last Updated
5/10/2026, 4:54:15 AM
Research Retrieved
5/10/2026, 4:54:15 AM
Summary
The 'FDA for AI' is a proposed regulatory framework currently under consideration by the White House and the US Government to enforce AI safety and cybersecurity standards. Modeled after the Food and Drug Administration's oversight of medical products, this concept gained momentum following the release of advanced AI models like Mythos, which raised significant security concerns. The proposal has sparked a divide between those advocating for strict regulation, such as Bernie Sanders, and those favoring a pro-innovation approach, including Donald Trump. Tech industry figures like David Sacks and Brad Gerstner have expressed strong opposition to the idea, viewing it as a potential hindrance to technological progress. This discussion occurs alongside the existing FDA's active efforts to regulate AI-enabled medical devices and drug development processes through risk-based credibility assessment frameworks.
Referenced in 1 Document
Research Data
Extracted Attributes
Proposed By
White House and US Government
Regulatory Model
Food and Drug Administration (FDA)
Triggering Event
Development of advanced AI models like Mythos
Primary Objective
Enforce AI safety and cybersecurity in the AI era
FDA Staffing Status
Down approximately 2,500 staff (nearly 15%) from 2023 levels, as of September 2025
Existing FDA AI Authorizations
Over 1,016 AI/ML-enabled medical devices authorized as of 2025
Timeline
- 2023-05-01: FDA published a discussion paper on the use of AI in drug development. (Source: FDA Web Search)
- 2024-03-01: FDA published 'Artificial Intelligence and Medical Products: How CBER, CDER, CDRH, and OCP are Working Together'. (Source: FDA Web Search)
- 2024-08-06: FDA held a hybrid public workshop to discuss guiding principles for responsible AI use in drug products. (Source: FDA Web Search)
- 2025-02-01: FDA revised its publication on AI and medical product alignment. (Source: FDA Web Search)
- 2025-09-01: Reports indicated FDA staffing levels were down by approximately 2,500 from 2023 levels. (Source: Bipartisan Policy Center)
- 2025-10-07: FDA held a second public workshop regarding AI in the development of safe and effective drug products. (Source: FDA Web Search)
Wikipedia
AI agent
In the context of generative artificial intelligence, AI agents (also referred to as compound AI systems or agentic AI) are a class of intelligent agents that can pursue goals, use tools, and take actions with varying degrees of autonomy. In practice, they usually operate within human-defined objectives, constraints, and available tools.
Web Search Results
- FDA Oversight: Understanding the Regulation of Health AI ...
The Food and Drug Administration (FDA) is the primary federal agency responsible for regulating medical devices. This issue brief explains how the FDA regulates medical devices enabled by artificial intelligence (AI) across the product lifecycle. It recounts how FDA frameworks were built for static devices with specific indications, describes the steps the agency is taking to modernize oversight for advanced AI technologies, and highlights key challenges ahead. This is the second policy brief in the Bipartisan Policy Center’s series on the regulatory and coverage landscape for health AI. Part one covered oversight of health AI outside of the FDA’s jurisdiction, and part three will focus on how the Centers for Medicare & Medicaid Services (CMS) pays for health AI. [...] As technologies become more sophisticated and raise complex regulatory questions, the FDA faces workforce and capacity constraints that limit its ability to evaluate AI-enabled medical devices quickly and comprehensively. As of September 2025, staffing levels were down by approximately 2,500, or nearly 15%, from 2023. During recent congressional hearings, lawmakers raised the possibility of using AI to support regulatory functions, such as adverse event monitoring. [...] In certain cases, the FDA exercises “enforcement discretion” when a tool technically meets the definition of a medical device but poses low risk. For devices under enforcement discretion, the FDA does not expect manufacturers to submit a premarket review application or to register their device with the agency. The FDA often applies this approach to software that supports general wellness or self-management (e.g., weight logging, medication reminders). The FDA’s Digital Health Policy Navigator can help developers determine whether their product meets the definition of a device and, if so, whether it falls under enforcement discretion. If the answer is unclear, developers are advised to seek legal or regulatory advice from experts.
- Artificial Intelligence-Enabled Medical Devices - FDA
The devices in this list have met the FDA’s applicable premarket requirements, including a focused review of the device’s overall safety and effectiveness, which includes an evaluation of study appropriateness for the device’s intended use and technological characteristics. A direct link to the FDA’s database entry of an AI-enabled medical device is provided. The database entry contains releasable information, such as summaries of safety and effectiveness. Note, the summaries are not all inclusive and do not include most of the information that may be submitted in an application. [...] To support transparency in the use of modern AI technologies, the FDA will explore methods to identify and tag medical devices that incorporate foundation models encompassing a wide range of AI systems, from large language models (LLMs) to multimodal architectures. This identification will help innovators, healthcare providers, and patients recognize when LLM-based functionality is present in a medical device. To facilitate the FDA’s development of methods to identify AI-enabled medical devices more easily, including identifying those devices incorporating LLM-based functionality in a future update of this list, sponsors are encouraged to include appropriate information in their public summaries.
- FDA Guidelines on AI Models: Ensuring Safe & Effective Drug Development | Saama
## Implications for the Future of AI in Drug Development The FDA’s draft guidance is a landmark development in the responsible use of AI in drug development. It provides a clear, structured framework for sponsors to follow, making it easier to navigate the complexities of regulatory compliance. With AI playing a central role in transforming the drug development process, these guidelines will likely encourage even more innovation and integration of AI into regulatory decision-making. [...] Artificial Intelligence (AI) and Machine Learning (ML) have become key players in reshaping how we develop drugs, but with this potential comes the critical need for proper regulation. How do we make sure these advanced technologies are used safely in regulatory processes? Recently, the U.S. Food and Drug Administration (FDA) released a draft guidance titled “Considerations for the Use of Artificial Intelligence To Support Regulatory Decision-Making for Drug and Biological Products” to help answer this question. [...] This guidance is a key step toward ensuring that AI technologies are responsibly and effectively integrated into the regulatory process for drug development. In this blog, we’ll explore the FDA’s guidelines and what they mean for the future of AI in drug regulation. ## A Risk-Based Credibility Assessment Framework The FDA’s draft guidance provides recommendations for sponsors, such as pharmaceutical companies and other stakeholders, on how to use AI to generate data that can be used in regulatory decisions related to the safety, effectiveness, or quality of drugs. The guidance emphasizes the importance of a risk-based credibility assessment framework to ensure that AI models used in this context are trustworthy and robust.
- How AI is used in FDA-authorized medical devices: a taxonomy across 1,016 authorizations | npj Digital Medicine
Over one thousand artificial intelligence (AI)/machine learning (ML)-enabled medical devices have been authorized by the US Food and Drug Administration (FDA). Given the vast range of potential applications of AI in clinical care and the increasing focus on translating AI into routine practice, it is critical to understand what types of AI devices are currently authorized for clinical use, and how these use cases are evolving over time. The FDA’s current classification systems for medical devices, including product codes and device classes, provide a broad characterization that does not fully capture key dimensions of AI use. Recent work has provided insights into subsets of devices, but a comprehensive characterization of [...] ## Abstract We reviewed 1016 FDA authorizations of AI/ML-enabled medical devices to develop a taxonomy capturing key variations in clinical and AI-related features. Quantitative image analysis remains the most common application, but its relative proportion has declined recently. Over 100 devices leverage AI for data generation, though none yet involve LLMs. Our taxonomy clarifies current AI usage in medical devices and provides a foundation for tracking developments as applications evolve. [...] LLM-assisted strategies combined with rule-based logic could help infer AI/ML usage even when it is not explicitly mentioned. Given that the FDA updates its general databases of authorization summaries weekly, such approaches could facilitate more continuous identification of AI/ML-enabled products and, in turn, updates to our taxonomy database.
- Artificial Intelligence for Drug Development | FDA
discussion paper published in May 2023 on AI use in drug development; (3) CDER’s experience with over 500 submissions with AI components from 2016 to 2023; and (4) hybrid public workshops for interested parties held on August 6, 2024 and October 7, 2025 to discuss the guiding principles for the responsible use of AI in the development of safe and effective drug and biological products. [...] These activities also helped inform a recent publication titled: “Artificial Intelligence and Medical Products: How CBER, CDER, CDRH, and OCP are Working Together,” published in March 2024 (revised in February 2025), which describes how FDA’s medical product Centers plan to align their efforts to advance the responsible use of AI for medical products. This entails building regulatory approaches that, to the extent feasible, can be applied across various medical products and uses within the health care delivery system. AI will undoubtedly play a critical role in the drug development life cycle and CDER plans to continue developing and adopting a risk-based regulatory framework that promotes innovation and protects patient safety. [...] CDER is committed to ensuring that drugs are safe and effective while facilitating innovation. FDA published a draft guidance in 2025 titled, “Considerations for the Use of Artificial Intelligence to Support Regulatory Decision Making for Drug and Biological Products.” This guidance provides recommendations to industry on the use of AI to produce information or data intended to support regulatory decision-making regarding safety, effectiveness, or quality for drugs. The content of this draft guidance was informed by (1) feedback received in December 2022 as part of an expert workshop convened by the Duke Margolis Institute for Health Policy on behalf of CDER/FDA; (2) over 800 comments received from external parties on the discussion paper published in May 2023 on AI use in drug