To hear health officials in the Trump administration talk, artificial intelligence has arrived in Washington to fast-track new life-saving drugs to market, streamline work at the vast, multibillion-dollar health agencies, and serve as a key assistant in the quest to slash wasteful government spending without jeopardizing the agencies' work.
“The AI revolution has arrived,” Health and Human Services Secretary Robert F. Kennedy Jr. has declared at congressional hearings in the past few months.
“We are using this technology already at HHS to manage health care data, perfectly securely, and to increase the speed of drug approvals,” he told the House Energy and Commerce Committee in June. The enthusiasm — among some, at least — was palpable.
Weeks earlier, the US Food and Drug Administration, the division of HHS that oversees vast portions of the American pharmaceutical and food system, had unveiled Elsa, an artificial intelligence tool intended to dramatically speed up drug and medical device approvals.
Yet behind the scenes, the agency’s slick AI project has been greeted with a shrug — or outright alarm.
Six current and former FDA officials who spoke on the condition of anonymity to discuss sensitive internal work told CNN that Elsa can be useful for generating meeting notes and summaries, or email and communiqué templates.
But it has also invented nonexistent studies, a failure known as AI "hallucination," and misrepresented real research, according to three current FDA employees and documents seen by CNN. That makes it unreliable for their most critical work, the employees said.
“Anything that you don’t have time to double-check is unreliable. It hallucinates confidently,” said one employee — a far cry from what has been publicly promised.
“AI is supposed to save our time, but I guarantee you that I waste a lot of extra time just due to the heightened vigilance that I have to have” to check for fake or misrepresented studies, a second FDA employee said.
Currently, Elsa cannot help with review work, the lengthy assessment agency scientists undertake to determine whether drugs and devices are safe and effective, two FDA staffers said. That's because it cannot access many relevant documents, such as industry submissions, to answer basic questions: how many times a company has filed for FDA approval, what related products it has on the market, or other company-specific information.
All this raises serious questions about the integrity of a tool that FDA Commissioner Dr. Marty Makary has boasted will transform the system for approving drugs and medical devices in the US, at a time when there is almost no federal oversight for assessing the use of AI in medicine.
“The agency is already using Elsa to accelerate clinical protocol reviews, shorten the time needed for scientific evaluations, and identify high-priority inspection targets,” the FDA said in a statement on its launch in June.
But speaking to CNN at the FDA's White Oak headquarters this week, Makary said that right now, most of the agency's scientists are using Elsa for its "organization abilities," like finding studies and summarizing meetings.
The FDA’s head of AI, Jeremy Walsh, admitted that Elsa can hallucinate nonexistent studies.
“Elsa is no different from lots of [large language models] and generative AI,” he told CNN. “They could potentially hallucinate.”
Walsh also said Elsa's shortcomings in answering questions about industry information should be resolved soon, as the FDA updates the program in the coming weeks to let users upload documents to their own libraries.
Asked about the mistakes Elsa is making, Makary noted that staff are not required to use the AI.
“I have not heard those specific concerns, but it’s optional,” he said. “They don’t have to use Elsa if they don’t find it to have value.”
Challenged on how the efficiency gains he has publicly touted square with FDA staff telling CNN they must double-check Elsa's work, he said: "You have to determine what is reliable information that [you] can make major decisions based on, and I think we do a great job of that."