Federated Learning Enables Big Data for Rare Cancer Boundary Detection

@article{Pati2022FederatedLE,
  title={Federated Learning Enables Big Data for Rare Cancer Boundary Detection},
  author={Sarthak Pati and Ujjwal Baid and Brandon Edwards and Micah J. Sheller and Shih-Han Wang and G. Anthony Reina and Patrick Foley and Alexey Gruzdev and Deepthi Karkada and Christos Davatzikos and Chiharu Sako and Satyam Ghodasara and Michel Bilello and Suyash Mohan and Philipp Vollmuth and Gianluca Brugnara and Chandrakanth Jayachandran Preetha and Felix Sahm and Klaus H. Maier-Hein and Maximilian Zenk and Martin Bendszus and Wolfgang Wick and Evan Calabrese and Jeffrey D. Rudie and Javier E. Villanueva-Meyer and So Yeon Cha and Madhura Ingalhalikar and Manali Jadhav and Umang Pandey and Jitender Saini and John Garrett and Matthew H Larson and Robert Jeraj and Stuart Currie and Russell Frood and Kavi Fatania and Raymond Y Huang and Ken Chang and C B Quintero and J. Capellades and Josep Puig and Johannes Trenkler and Josef Pichler and Georg Necker and Andreas Haunschmidt and Stephan Meckel and Gaurav Shukla and Spencer Liem and Gregory S. Alexander and Joseph Lombardo and Joshua David Palmer and Adam E. Flanders and Adam P. Dicker and Haris I. Sair and Craig K. Jones and Archana Venkataraman and Meirui Jiang and Tiffany Y. So and Cheng Chen and Pheng-Ann Heng and Qi Dou and Michal Kozubek and Filip Lux and Jan Mich{\'a}lek and Petr Matula and Milo{\v s} Ke{\v r}kovsk{\'y} and Tereza Kop{\v r}ivov{\'a} and Marek Dost{\'a}l and V{\'a}clav Vyb{\'i}hal and Michael A. Vogelbaum and James R. Mitchell and J.M. Farinhas and Joseph A. Maldjian and Chandan Ganesh Bangalore Yogananda and Marco C. Pinho and Divya Reddy and James Holcomb and Benjamin C. Wagner and Benjamin M. Ellingson and Timothy F. Cloughesy and Catalina Raymond and Talia C. Oughourlian and Akifumi Hagiwara and Chencai Wang and Minh-Son To and Sargam Bhardwaj and Chee Chong and Marc Agzarian and Alexandre Xavier Falc{\~a}o and Samuel Botter Martins and Bernardo Teixeira and Fl{\'a}via Sprenger and D. Menotti and Diego Rafael Lucio and Pamela J. LaMontagne and Daniel Marcus and Benedikt Wiestler and Florian Kofler and Ivan Ezhov and Marie Metz and Rajan Jain and Matthew C. H. Lee and Yvonne W. Lui and Richard McKinley and Johannes Slotboom and P. Radojewski and Raphael Meier and Roland Wiest and Derrick Murcia and Eric Fu and Rourke Haas and Johnna Kellie Fox Thompson and David Ryan Ormond and Chaitra Badve and Andrew E. Sloan and Vachan Vadmal and Kristin A Waite and Rivka R. Colen and Linmin Pei and Murat Ak and Ashok Srinivasan and J.R. Bapuraj and Arvind Rao and Nicholas C. Wang and Ota Yoshiaki and Toshio Moritani and Sevcan Turk and Joonsan Lee and Snehal Prabhudesai and Fanny Mor{\'o}n and Jacob J Mandel and Konstantinos Kamnitsas and Ben Glocker and Luke Dixon and Matthew Williams and Peter Zampakis and Vasileios Panagiotopoulos and Panagiotis Tsiganos and Sotiris Alexiou and Ilias Haliassos and Evangelia I. Zacharaki and Konstantinos Moustakas and Christina Kalogeropoulou and Dimitrios Kardamakis and Yoon Seong Choi and Seung-Koo Lee and Jong-Hee Chang and Sung Soo Ahn and Bing Luo and Laila M. Poisson and Ning Wen and Pallavi Tiwari and Ruchika Verma and Rohan Bareja and Ipsa Yadav and Jonathan Chen and Neeraj Kumar and Marion Smits and Sebastian R. van der Voort and Ahmed T. A. Alafandi and Fatih Incekara and Maarten M J Wijnenga and Georgios Kapsas and Renske Gahrmann and Joost W. Schouten and Hendrikus J. Dubbink and Arnaud J. P. E. Vincent and Martin J. Bent and Pim J. French and Stefan Klein and Yading Yuan and Sonam Sharma and T. C. Tseng and Saba Adabi and Simone P. Niclou and Olivier Keunen and Annika Hau and Martin Valli{\`e}res and David Fortin and Martin Lepage and Bennett A. Landman and Karthik Ramadass and Kaiwen Xu and Silky Chotai and Lola B. Chambless and Akshitkumar M. Mistry and Reid C. Thompson and Yuriy Gusev and Krithika Bhuvaneshwar and Anousheh Sayah and Camelia Bencheqroun and Anas Belouali and Subha Madhavan and Thomas C. Booth and Alysha Chelliah and Marc Modat and Haris Shuaib and Carmen Dragos and Aly H Abayazeed and Kenneth E. Kolodziej and Michael Hill and Ahmed Abbassy and Shady Mohamed Tarek Gamal and Mahmoud Mekhaimar and Mohamed Qayati and Mauricio Reyes and Ji Eun Park and Jihye Yun and Ho Sung Kim and Abhishek Mahajan and Mark Muzi and Sean Benson and Regina Beets-Tan and Jonas Teuwen and Alejandro Herrera-Trujillo and Mar{\'i}a Trujillo and William Escobar and Ana Lorena Abello and Jos{\'e} Bernal and Jhonny C. G{\'o}mez and Josephine Choi and Stephen Seung-Yeob Baek and Yusung Kim and Heba Ismael and Bryan G. Allen and John M. Buatti and Aikaterini Kotrotsou and Hongwei Li and Tobias Weiss and Michael Weller and Andrea Bink and Bertrand Pouymayou and Hassan Fathallah Shaykh and Joel H. Saltz and Prateek Prasanna and Sampurna Shrestha and K. M. Mani and David Payne and Tahsin M. Kurç and Enrique Pel{\'a}ez and Heydy Franco-Maldonado and Francis R. Loayza and Sebasti{\'a}n Quevedo and Pamela Guevara and Esteban Torche and Crist{\'o}bal Mendoza and Franco Vera and Elvis R{\'i}os and Eduardo L{\'o}pez and Sergio A. Velast{\'i}n and Godwin I. Ogbole and Dotun Oyekunle and Olubunmi Odafe-Oyibotha and Babatunde Osobu and Mustapha Shu'aibu and Adeleye Dorcas and Mayowa Soneye and Farouk Dako and Amber L. Simpson and Mohammad Hamghalam and Jacob J. Peoples and Ricky Hu and Anh N Tran and D A Cutler and Fabio Ynoe Moraes and Michael A Boss and James F. Gimpel and Deepak Kattil Veettil and Kendall Schmidt and Brian Bialecki and Sai Rama Raju Marella and Cynthia Price and Lisa Cimino and Charles Apgar and Prashant Shah and Bjoern H Menze and Jill S. Barnholtz-Sloan and Jason Martin and Spyridon Bakas},
  journal={ArXiv},
  year={2022},
  volume={abs/2204.10836}
}
Abstract

Although machine learning (ML) has shown promise in numerous domains, there are concerns about its generalizability to out-of-sample data. This is currently addressed by centrally sharing ample, and importantly diverse, data from multiple sites. However, such centralization is challenging to scale (or even not feasible) due to various limitations. Federated ML (FL) provides an alternative for training accurate and generalizable ML models by sharing only numerical model updates. Here we present…
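The abstract's key idea, sharing only numerical model updates rather than patient data, is typically realized by federated averaging: each site trains locally, and a server combines the resulting weights. A minimal sketch of that aggregation step is below; the function name and plain-list weight representation are illustrative, not taken from the paper's actual codebase.

```python
def fed_avg(site_weights, site_sizes):
    """Aggregate per-site model weights into a consensus model.

    site_weights: one weight vector (list of floats) per site
    site_sizes:   number of local training samples at each site,
                  used to weight that site's contribution
    """
    total = sum(site_sizes)
    n_params = len(site_weights[0])
    consensus = [0.0] * n_params
    for weights, size in zip(site_weights, site_sizes):
        for i, w in enumerate(weights):
            # Each site contributes in proportion to its data volume;
            # only these numeric weights ever leave the site.
            consensus[i] += w * (size / total)
    return consensus

# Example: two sites with 3-parameter models; the larger site
# (300 samples vs. 100) pulls the consensus toward its weights.
print(fed_avg([[1.0, 2.0, 3.0], [3.0, 4.0, 5.0]], [100, 300]))
# → [2.5, 3.5, 4.5]
```

In a real federation this step repeats over many rounds, with the consensus model redistributed to all sites for further local training.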


MammoDL: Mammographic Breast Density Estimation using Federated Learning

MammoDL uses the U-Net deep learning architecture to quantitatively assess breast tissue density and complexity from mammograms, and preserves data privacy through federated learning.

OpenFL: the open federated learning library

This manuscript presents OpenFL, summarizing its motivation and design with the aim of easing its adoption for existing ML/DL model training in production environments, and describes the first real-world healthcare federations that use the OpenFL library.

The federated tumor segmentation (FeTS) tool: an open-source solution to further solid tumor research

The primary aim of the FeTS tool is to facilitate this harmonized processing and the generation of gold standard reference labels for tumor sub-compartments on brain magnetic resonance imaging, and further to enable federated training of a tumor sub-compartment delineation model across numerous sites distributed across the globe, without the need to share patient data.

References

Showing 1-10 of 104 references

Federated Learning for Breast Density Classification: A Real-World Implementation

This study investigates the use of federated learning (FL) to build medical imaging classification models in a real-world collaborative setting, and shows that, despite substantial differences among the sites' datasets and without centralizing data, AI models can be successfully trained in federation.

Multi-task Federated Learning for Heterogeneous Pancreas Segmentation

This work investigates heterogeneous optimization methods that improve the automated segmentation of the pancreas and pancreatic tumors in abdominal CT images under FL settings.

Distributed deep learning networks among institutions for medical imaging

It is shown that distributing deep learning models is an effective alternative to sharing patient data, and this finding has implications for any collaborative deep learning study.

Federated learning in medicine: facilitating multi-institutional collaborations without sharing patient data

It is shown that federated learning among 10 institutions results in models reaching 99% of the model quality achieved with centralized data, and the effects of data distribution across collaborating institutions on model quality and learning patterns are investigated.

Multi-Institutional Deep Learning Modeling Without Sharing Patient Data: A Feasibility Study on Brain Tumor Segmentation

This study introduces the first use of federated learning for multi-institutional collaboration, enabling deep learning modeling without sharing patient data, and demonstrates that the performance of federated semantic segmentation models on multimodal brain scans is similar to that of models trained by sharing data.

GaNDLF: A Generally Nuanced Deep Learning Framework for Scalable End-to-End Clinical Workflows in Medical Imaging

The Generally Nuanced Deep Learning Framework (GaNDLF) is proposed, which aims to provide an end-to-end solution for all DL-related tasks, to tackle problems in medical imaging and provide a robust application framework for deployment in clinical workflows.

Federated learning for predicting clinical outcomes in patients with COVID-19

This study facilitated rapid data science collaboration without data exchange and generated a model that generalized across heterogeneous, unharmonized datasets for prediction of clinical outcomes in patients with COVID-19, setting the stage for the broader use of FL in healthcare.

The future of digital health with federated learning

This paper considers key factors contributing to this issue, explores how federated learning (FL) may provide a solution for the future of digital health and highlights the challenges and considerations that need to be addressed.

OpenFL: An open-source framework for Federated Learning

The first use of the OpenFL framework to train consensus ML models in a consortium of international healthcare organizations, as well as how it facilitates the first computational competition on FL are described.
...