Applying the machine learning form of artificial intelligence to medicine is hampered by the sensitivity of the data that will be used to train the models.
A new effort known as “federated” training of AI aims to keep the data private, but allow algorithm developers and practitioners to benefit from the interaction of real data sets and new ML models.
Also: Google’s MedPaLM Emphasizes Human Physicians in Medical AI
MedPerf, a group formed by the nonprofit MLCommons Association, an industry association that benchmarks computer chips for their performance on AI tasks, aims to solve the data impasse, as described in an inaugural position paper published Monday in the prestigious scientific journal Nature Machine Intelligence.
MedPerf takes benchmark AI models and sends them to the clinicians who hold the data; the practitioners then report back how the models performed against that data. The group says this means developers of AI programs can have their models evaluated against private datasets they would never otherwise have access to, while physicians get to see whether AI can make useful predictions about their patients’ health. Throughout the exchange, the data never leaves the physicians’ secure facilities.
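The exchange can be sketched as a simple loop: the model travels to each data site, predictions are scored locally, and only aggregate metrics — never patient records — come back to the developer. The names below (`Site`, `evaluate_locally`, `run_benchmark`) are illustrative only, not MedPerf’s actual API:

```python
# Sketch of federated benchmarking: models go to the data,
# only summary metrics leave the clinical site.
from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple

@dataclass
class Site:
    """A clinical site holding private (measurement, label) pairs."""
    name: str
    private_data: List[Tuple[float, int]]  # never leaves the site

def evaluate_locally(site: Site, model: Callable[[float], int]) -> Dict[str, float]:
    """Run the model inside the site's facility; return summary metrics only."""
    correct = sum(1 for x, y in site.private_data if model(x) == y)
    return {"accuracy": correct / len(site.private_data)}

def run_benchmark(sites: List[Site], model: Callable[[float], int]) -> Dict[str, float]:
    """Collect per-site metrics; raw patient data is never transmitted."""
    return {s.name: evaluate_locally(s, model)["accuracy"] for s in sites}

# A toy classifier standing in for a benchmark model.
def model(x: float) -> int:
    return int(x > 0.5)

sites = [
    Site("hospital_a", [(0.9, 1), (0.2, 0), (0.7, 1), (0.4, 0)]),
    Site("hospital_b", [(0.6, 1), (0.3, 0), (0.8, 0)]),
]
report = run_benchmark(sites, model)  # per-site accuracy, e.g. {"hospital_a": 1.0, ...}
```

The key design point is that `run_benchmark` only ever sees the metrics dictionary, mirroring how MedPerf keeps datasets inside each institution.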
“The objective of this approach is the widespread adoption of medical AI, leading to more efficient, reproducible and cost-effective clinical practice, ultimately improving patient outcomes,” the group wrote in the paper, “Federated Benchmarking of Medical Artificial Intelligence with MedPerf,” published in Nature Machine Intelligence, an imprint of Nature.
The paper was written by lead author Alexandros Karargyris of the University of Strasbourg in France and 76 other contributors, representing more than 20 companies, including Nvidia and Microsoft, 20 academic institutions, and nine hospitals across 13 countries and five continents.
Karargyris and team say the initial use of MedPerf has been in radiology and surgery, in sample benchmark tests. But, they write, the platform “could be easily adapted to other biomedical tasks such as computational pathology, genomics, natural language processing (NLP), or the use of structured data from patient medical records.”
The basic ideas of the approach are presented in a summary schematic on the MLCommons web site, along with a blog post.
Also: Should AI come to your doctor’s office? The co-founders of OpenAI think so
“Medical AI is vital because of the potential impact it can have on everyone across the planet, and I am especially proud of the broad community engagement we have seen with MedPerf: researchers, hospitals, technologists and more,” David Kanter, executive director of MLCommons, said in an emailed statement.
“MedPerf has been a huge community effort, and we are excited to see it continue to grow and flourish, which will ultimately improve medical care for everyone,” Kanter said.
MedPerf’s platform includes MLCube, a method for building secure application containers similar to Docker. The platform uses three different MLCubes: one for data preparation, one for hosting the model, and a third for evaluating the output to assess the model’s performance on benchmark tests.
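The three roles can be pictured as a pipeline of independent stages. The sketch below is purely illustrative — the function names (`data_prep_cube`, `model_cube`, `metrics_cube`) and toy logic are assumptions, not the real MLCube interface, which is container-based rather than in-process:

```python
# Illustrative pipeline mirroring MedPerf's three MLCube roles:
# data preparation -> model prediction -> metric evaluation.
from typing import Dict, List

def data_prep_cube(raw: List[str]) -> List[float]:
    """Data-preparation stage: convert raw records into model inputs."""
    return [float(r) for r in raw]

def model_cube(inputs: List[float]) -> List[int]:
    """Model stage: exposes a single predict-style function."""
    return [int(x > 1.0) for x in inputs]

def metrics_cube(preds: List[int], labels: List[int]) -> Dict[str, float]:
    """Evaluation stage: score predictions against reference labels."""
    hits = sum(p == y for p, y in zip(preds, labels))
    return {"accuracy": hits / len(labels)}

raw_records = ["0.5", "1.5", "2.0", "0.8"]
labels = [0, 1, 1, 1]
report = metrics_cube(model_cube(data_prep_cube(raw_records)), labels)
```

Keeping the stages separate is the point of the design: each container can be swapped or audited independently, and the evaluation stage never needs to know how the data was prepared.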
Also: These are my 5 favorite AI tools for work
As Karargyris and team describe in the article:

The model MLCube includes a pre-trained AI model to be evaluated as part of the benchmark. It provides a single function, predict, which calculates predictions on the output of the data-preparation MLCube. In the future case of an API-only model, this would be the container hosting the API wrapper to access the private model.
MedPerf has also partnered with Hugging Face, the popular repository of AI models. The authors write, “The Hugging Face Hub can also facilitate automated evaluation of models and provide a leaderboard of the best models based on benchmark specifications.”
Another partner is Sage Bionetworks, which develops the Synapse platform for data sharing, used in crowd-sourced data challenges. “Many of the ad hoc components required for MedPerf-FeTS integration were built on the Synapse platform,” say the authors. “Synapse supports research data sharing and can be used to aid in the execution of community challenges.”
Also: AI bots are excelling in medical school exams, but should they be your doctor?
The MedPerf approach has already been tested in a challenge organized by several academic institutions, known as the Federated Tumor Segmentation (FeTS) Challenge, in which neural nets are challenged to identify brain tumors, specifically gliomas, in MRI images. The FeTS 2022 challenge, in which MedPerf participated, took place across 32 participating sites on six continents.
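Segmentation challenges of this kind are typically scored with the Dice coefficient, which measures the overlap between a predicted tumor mask and the ground-truth mask. Below is a minimal pure-Python version on flattened binary masks; it is a sketch of the metric only — the real challenge operates on 3D MRI volumes with per-region scores:

```python
# Dice coefficient: 2*|A ∩ B| / (|A| + |B|) for two binary masks.
from typing import List

def dice(pred: List[int], truth: List[int]) -> float:
    """Overlap score in [0, 1]; 1.0 means a perfect match."""
    intersection = sum(p and t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    # Convention: two empty masks count as a perfect match.
    return 2 * intersection / total if total else 1.0

pred  = [1, 1, 0, 0, 1]   # predicted tumor voxels (flattened)
truth = [1, 0, 0, 1, 1]   # ground-truth tumor voxels
score = dice(pred, truth)  # 2*2 / (3+3) = 0.666...
```

A model that evaluates locally with code like this can report just the score per site, which is exactly the kind of summary statistic that flows back through a federated benchmark.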
“In addition, MedPerf was validated through a series of pilot studies with academic groups involved in a multi-institutional collaboration for the purposes of research and development of medical AI models,” the authors said.
MedPerf hopes to expand the platform to many more participants, writing, “We are currently working on general purpose evaluation of healthcare AI through larger collaborations.”
MedPerf is described in the paper as having now passed the initial “proof of concept” stage, and is in the midst of transitioning from alpha to beta stage. Next steps include opening the benchmarking task generally to outside participants.
Also: Generative AI can reduce drug prices. Here’s how
Part of the paper calls for stakeholders to come forward and contribute, asking “healthcare stakeholders to form benchmark committees that define specifications and oversee analyses,” and “data owners (e.g., healthcare organizations, physicians) to register their data” on the platform (no data sharing required).
The code for MedPerf is posted on GitHub.