
The FDA’s exciting new list of artificial intelligence- and machine learning-enabled devices highlights opportunities for improvement.

Craig Coombs and Qiang Kou, Nyquist Data

In September, the FDA released a list of cleared or approved artificial intelligence- and machine learning-enabled (AI/ML) devices, documenting much of the agency’s work in this innovative area.

Extracting this information from the FDA’s decades-old database is labor-intensive at best, and often impossible. Despite the time the FDA spent compiling this new list, the lack of even basic text-search capability makes the list itself cumbersome and time-consuming to review.

Wouldn’t it be better if there were an AI resource that could quickly compile a list of the FDA’s AI/ML clearances and approvals, allowing searches in seconds rather than hours?

Like many databases, the FDA database uses text-matching to find relevant entries. The weakness of text-matching is that it does not treat the search term as a whole word. For example, if you search for “pain,” all records containing the string “pain” will be reported, along with records containing “Spain” and “painting.”
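As a minimal sketch of the problem (illustrative only, not the FDA’s actual query logic), compare a naive substring match with a whole-word search over a few made-up records:

```python
import re

records = [
    "Device relieves chronic pain in adults",
    "Manufactured in Spain under ISO 13485",
    "Coating applied by painting the electrode surface",
]

# Naive substring matching: flags "Spain" and "painting" as hits for "pain".
substring_hits = [r for r in records if "pain" in r.lower()]

# Whole-word matching with word boundaries avoids those false positives.
whole_word = re.compile(r"\bpain\b", re.IGNORECASE)
word_hits = [r for r in records if whole_word.search(r)]

print(substring_hits)  # all three records
print(word_hits)       # only the first record
```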

And text-matching often misses relevant results. When users search “pediatric,” the FDA database won’t match alternative spellings such as “paediatric” or related terms such as “neonate,” “newborn,” “infant” and “children.”
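One simple way a search layer can compensate is to expand the query with alternative spellings and related terms before matching. The sketch below uses a hand-made, purely illustrative term list; a production system would presumably derive related terms from medical ontologies or learned models rather than a hard-coded dictionary:

```python
import re

# Illustrative expansion vocabulary -- not any vendor's actual term list.
EXPANSIONS = {
    "pediatric": ["pediatric", "paediatric", "neonate", "newborn",
                  "infant", "child", "children"],
}

def expanded_search(query: str, records: list[str]) -> list[str]:
    # Fall back to the raw query if no expansion is defined.
    terms = EXPANSIONS.get(query.lower(), [query.lower()])
    pattern = re.compile(r"\b(" + "|".join(map(re.escape, terms)) + r")\b",
                         re.IGNORECASE)
    return [r for r in records if pattern.search(r)]

records = [
    "Indicated for paediatric patients aged 2-12 years",
    "Monitors vital signs in neonates",
    "For adult use only",
]
print(expanded_search("pediatric", records))  # matches the first two records
```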

The strength of modern AI-powered search algorithms is that they can understand the content and help users find what they are looking for. A JAMA paper published in July used an AI technique known as natural language processing (NLP) to identify an additional 23% of FDA database reports involving patient deaths that were not classified as deaths by text-match searching. Many of those misclassified reports never mentioned “death,” but instead said the patient “expired” or “could not be resuscitated.” This demonstrates the limitation of text-matching and why the medical device industry should switch to modern AI-powered methods, not just for database searching, but also to interrogate adverse event reporting, recalls, and application review details.
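To illustrate how semantic matching can catch such paraphrases, the sketch below scores adverse event narratives against the concept “the patient died” using off-the-shelf sentence embeddings. This is an assumption-laden illustration, not the method used in the JAMA study or in any FDA tool:

```python
# Minimal sketch of semantic matching with sentence embeddings.
# Requires: pip install sentence-transformers
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

query = "the patient died"
narratives = [
    "Patient expired shortly after the procedure.",
    "The patient could not be resuscitated.",
    "Device alarm sounded; no injury reported.",
]

query_emb = model.encode(query, convert_to_tensor=True)
narrative_embs = model.encode(narratives, convert_to_tensor=True)
scores = util.cos_sim(query_emb, narrative_embs)[0]

for text, score in zip(narratives, scores):
    # Narratives phrased without the word "death" still score highly.
    print(f"{float(score):.2f}  {text}")
```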

We used Nyquist Data’s commercially available AI-powered search engine to evaluate the FDA’s database with the keywords “machine learning,” “artificial intelligence,” “deep learning” and “neural network.” We used only the publicly available FDA database, which includes the summary files disclosed by the FDA. We quickly generated a list of 222 devices and compared it with the FDA’s list, finding three key differences:

1. Text-searching typically omits relevant records.

NLP methods discovered more than 20 AI/ML devices that were not in the FDA’s list but were approved before June 2021. For example, the RapidScreen RS-2000 system, approved in 2001, clearly stated that it used an artificial neural network for classification, and the Pathwork Diagnostics Tissue of Origin Test, approved in 2008, clearly stated that it used a machine learning approach based on marker selection to build a predictive model.

2. A text-matching search will always be out of date compared with an AI-enabled search, because AI can update results automatically and nearly instantly.

The latest device in the FDA’s list is the Precise Position from Philips, approved on June 17, 2021. According to our research, the FDA approved at least 24 AI/ML devices after the list’s cutoff date. The FDA says it will periodically update the list, but an AI/ML-enabled search engine using NLP algorithms can update automatically in milliseconds.

3. AI/ML search engines are more flexible, yet have clear search and exclusion criteria.

The FDA said it created its list “by searching FDA’s publicly-facing information, as well as by reviewing information in the publicly available resources cited below and in other publicly available materials published by the specific manufacturers.” How the agency searched that information, and with which keywords, is unclear.

The Gili Pro BioSensor is on the list, but the publicly available Reclassification Order mentions nothing about “machine learning” or “artificial intelligence,” only that the device uses an optical sensor system and software algorithms to obtain and analyze video signals and estimate vital signs.

The RX-1 Rhythm Express Remote Cardiac Monitoring System is also included, but its public information does not explicitly mention “machine learning” or “artificial intelligence.” It does mention that an embedded algorithm processes the acquired ECG to detect arrhythmias, compress the ECG, and remove most in-band noise without distorting ECG morphology. At least 10 other ECG analysis devices on the FDA’s list likewise never explicitly mention “machine learning” or “artificial intelligence” in their public information. If these examples belong on the list, the FDA may have drawn on personal knowledge of the devices during a time-consuming search, which underscores the deficiencies of text-matching searches.

Filtering for further intelligence

Beyond those differences between text-matching and AI algorithms, AI search results can be further interrogated to surface useful insights. What if you wanted to know which AI/ML devices required a clinical trial for market clearance? The FDA’s list requires you to call up each device individually to review the content of its submission. Using Nyquist Data’s commercially available AI search engine, we immediately generated a list of AI/ML devices and simultaneously filtered it for descriptions of clinical trials.
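As a rough illustration of that kind of combined filtering (using made-up device IDs and summary snippets, not actual FDA records or Nyquist Data’s engine), one could scan locally stored summary text for AI/ML keywords and clinical-trial language in a single pass:

```python
import re

# Hypothetical local corpus: {device_id: summary text from public FDA files}.
summaries = {
    "DEVICE-A": "The device uses a deep learning model. A clinical study of "
                "250 subjects supported the performance claims.",
    "DEVICE-B": "The software applies a neural network to the image. Bench "
                "testing demonstrated substantial equivalence.",
    "DEVICE-C": "Rule-based thresholding is applied to the ECG signal.",
}

AI_ML_TERMS = re.compile(
    r"\b(machine learning|artificial intelligence|deep learning|neural network)\b",
    re.IGNORECASE)
TRIAL_TERMS = re.compile(
    r"\b(clinical (study|trial|investigation)|subjects enrolled)\b",
    re.IGNORECASE)

# First filter: devices whose summaries mention AI/ML keywords.
ai_ml_devices = {k: v for k, v in summaries.items() if AI_ML_TERMS.search(v)}
# Second filter, applied at the same time: clinical-trial language.
with_trials = [k for k, v in ai_ml_devices.items() if TRIAL_TERMS.search(v)]

print(sorted(ai_ml_devices))  # ['DEVICE-A', 'DEVICE-B']
print(with_trials)            # ['DEVICE-A']
```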

Most AI/ML devices are radiological FDA clearances that did not require clinical trials to establish substantial equivalence to predicate devices. However, five AI/ML devices submitted clinical trial data to support their claims of substantial equivalence or of safety and effectiveness: one in ophthalmology (K200667), one in microbiology (K142677) and three in radiology (P200003, DEN170073 and K183019). This information is critical for determining potential test criteria for innovations in the same fields. This additional search takes milliseconds with a commercial AI search engine but could take many hours with the FDA database search engine. Beyond the extra work required by non-AI search algorithms, one cannot discount the valuable information they easily miss.

We are excited by the innovation, flexibility and intelligence the FDA has demonstrated in approving and clearing medical devices associated with AI and ML. Its list improves regulatory intelligence for all.

Craig Coombs [Photo courtesy of Nyquist Data]

Nonetheless, as regulatory professionals, we should be using AI/ML to unlock the FDA databases for improved regulatory intelligence. The higher-quality regulatory intelligence that comes from using AI algorithms can lead to better business practices in regulatory and clinical affairs, quality management, and business strategy development.

Craig Coombs is president of Coombs Medical Device Consulting, an instructor in medical device submissions at the University of California, Santa Cruz, and on the board of advisors for Nyquist Data.    

Qiang Kou [Photo courtesy of Nyquist Data]

Qiang Kou is the tech co-founder of Nyquist Data. He holds a PhD in bioinformatics from Indiana University.

The opinions expressed in this blog post are the authors’ only and do not necessarily reflect those of MassDevice.com or its employees.