Corpus ID: 208513437

ModelHub.AI: Dissemination Platform for Deep Learning Models

Authors: Ahmed Hosny, Michael Schwier, Christoph Berger, Evin Pinar Örnek, Mehmet Turan, Phi Vu Tran, Leon Weninger, Fabian Isensee, Klaus Maier-Hein, Richard McKinley, Michael T. Lu, Udo Hoffmann, Bjoern H. Menze, Spyridon Bakas, Andriy Y. Fedorov, Hugo J.W.L. Aerts
Recent advances in artificial intelligence research have led to a profusion of studies that apply deep learning to problems in image analysis and natural language processing, among others. Additionally, the availability of open-source computational frameworks has lowered the barrier to implementing state-of-the-art methods across multiple domains. Although these advances have led to major performance breakthroughs in some tasks, the effective dissemination of deep learning algorithms remains challenging, inhibiting…


GaNDLF: A Generally Nuanced Deep Learning Framework for Scalable End-to-End Clinical Workflows in Medical Imaging

The Generally Nuanced Deep Learning Framework (GaNDLF) is proposed, aiming to provide an end-to-end solution for all DL-related tasks in medical imaging and a robust application framework for deployment in clinical workflows.

Portable framework to deploy deep learning segmentation models for medical images

The open-source, GPL-licensed framework developed in this work has been successfully used to deploy five in-house developed and published deep learning segmentation models, facilitating multi-institutional outcomes modeling studies.

Distributed Collaborative Framework for Deep Learning in Object Detection (M.S. thesis, Sirisha Rella, University of Missouri-Kansas City)

A distributed collaborative framework for building practical object detection models is proposed, based on a novel collaborative group-inferencing approach that handles multiple objects in a distributed manner through a single-class, single-model mechanism.

A framework for fostering transparency in shared artificial intelligence models by increasing visibility of contributions

This model was chosen because it highlights a complex scenario in which the training data comes from multiple sources, and illustrates how to address the visibility of contributions for models that use transfer learning.

Domain adaptation for segmentation of critical structures for prostate cancer therapy

A semi-supervised domain adaptation (DA) method is proposed to refine the model's performance in the target domain; unlike the majority of recent domain adaptation strategies, it does not require any further data from the source domain.

SensiX: A Platform for Collaborative Machine Learning on the Edge

SensiX is presented, a personal edge platform that sits between sensor data and sensing models, ensuring best-effort inference under any condition while coping with device and data variability without demanding model engineering.

NCI Imaging Data Commons

Imaging Data Commons provides access to curated imaging collections, accompanied by documentation, a user forum, and a growing number of analysis use cases that aim to demonstrate the value of a data commons framework applied to cancer imaging research.

Meta-repository of screening mammography classifiers

This paper presents a meta-repository of screening mammography classifiers, packaging published models in a common format so that they can be evaluated and compared across datasets.

An Empirical Study of Artifacts and Security Risks in the Pre-trained Model Supply Chain

Deep neural networks achieve state-of-the-art performance on many tasks, but require increasingly complex architectures and costly training procedures. Engineers can reduce costs by reusing a pre-trained model and adapting it to their own task.

DLPaper2Code: Auto-generation of Code from Deep Learning Research Papers

A novel extensible approach, DLPaper2Code, is proposed to extract and understand the deep learning design-flow diagrams and tables available in a research paper and convert them to an abstract computational graph in real time.
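
The idea of rendering an abstract computational graph into framework code can be sketched in a few lines. The layer names, template strings, and graph schema below are illustrative assumptions, not the paper's actual intermediate representation.

```python
# Sketch: an "abstract computational graph" as an ordered list of layer
# specs, rendered to framework-style code via string templates.
TEMPLATES = {
    "conv":  "x = Conv2D({filters}, {kernel})(x)",
    "pool":  "x = MaxPooling2D({size})(x)",
    "dense": "x = Dense({units})(x)",
}

def generate(graph):
    """Emit one line of code per layer in the abstract graph."""
    return "\n".join(
        TEMPLATES[layer["op"]].format(**layer["args"]) for layer in graph
    )

graph = [
    {"op": "conv", "args": {"filters": 32, "kernel": 3}},
    {"op": "pool", "args": {"size": 2}},
    {"op": "dense", "args": {"units": 10}},
]
print(generate(graph))
```

In this toy form, supporting a second target framework would only require a second template table, which is what makes the graph abstraction useful.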

DeepInfer: open-source deep learning deployment toolkit for image-guided therapy

The proposed DeepInfer is an open-source toolkit for developing and deploying deep learning models within the 3D Slicer medical image analysis platform that allows clinical researchers and biomedical engineers to deploy a trained model selected from the public registry, and apply it to new data without the need for software development or configuration.

Caffe: Convolutional Architecture for Fast Feature Embedding

Caffe provides multimedia scientists and practitioners with a clean and modifiable framework for state-of-the-art deep learning algorithms and a collection of reference models for training and deploying general-purpose convolutional neural networks and other deep models efficiently on commodity architectures.

Towards Unified Data and Lifecycle Management for Deep Learning

A high-level domain-specific language (DSL), inspired by SQL, is proposed to raise the abstraction level and thereby accelerate the modeling process, and to manage the variety of data artifacts, especially the large volumes of checkpointed floating-point parameters.
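
A minimal sketch of managing checkpointed artifacts with declarative, SQL-flavored lookup might look like the following. The class, hashing scheme, and query interface are assumptions for illustration, not the paper's actual DSL.

```python
import hashlib
import json

class ModelStore:
    """Content-addressed store for training artifacts: each checkpoint is
    keyed by a hash of its parameters plus metadata, so identical
    checkpoints deduplicate automatically."""

    def __init__(self):
        self._artifacts = {}

    def save(self, params, **metadata):
        payload = json.dumps([params, metadata], sort_keys=True).encode()
        key = hashlib.sha256(payload).hexdigest()[:12]
        self._artifacts[key] = {"params": params, "meta": metadata}
        return key

    def query(self, **conditions):
        """Declarative metadata lookup, in the spirit of a SQL-like DSL."""
        return [k for k, a in self._artifacts.items()
                if all(a["meta"].get(f) == v for f, v in conditions.items())]

store = ModelStore()
k1 = store.save([0.1, 0.2], epoch=1, run="a")
k2 = store.save([0.3, 0.1], epoch=2, run="a")
print(store.query(epoch=2))  # only the epoch-2 checkpoint key
```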

DLHub: Model and Data Serving for Science

This work presents the Data and Learning Hub for science (DLHub), a multi-tenant system that provides both model repository and serving capabilities with a focus on science applications and shows that relative to other model serving systems, DLHub provides greater capabilities, comparable performance without memoization and batching, and significantly better performance when the latter two techniques can be employed.
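
Memoization, one of the two serving optimizations mentioned above, amounts to caching inference results keyed by input. A minimal sketch, with a squaring function standing in for an expensive model (an assumption for illustration):

```python
from functools import lru_cache

calls = 0  # counts how many "real" inferences actually run

@lru_cache(maxsize=1024)
def serve(x):
    """Stand-in for an expensive model inference; memoized so repeated
    queries with identical inputs skip recomputation."""
    global calls
    calls += 1
    return x * x  # placeholder "model"

results = [serve(v) for v in (3, 3, 4, 3)]
print(results, calls)  # four answers, but only two real inferences
```

The same idea applies whenever inputs are hashable and the model is deterministic; batching, the other optimization, instead amortizes per-request overhead across grouped inputs.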

Rethinking the Inception Architecture for Computer Vision

This work explores ways to scale up networks that utilize the added computation as efficiently as possible, through suitably factorized convolutions and aggressive regularization.
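
The parameter savings from factorized convolutions can be shown with simple arithmetic: two stacked 3x3 convolutions cover the same 5x5 receptive field as a single 5x5 convolution but with fewer weights. The channel count below is an illustrative choice, not a figure from the paper.

```python
def conv_params(kernel, c_in, c_out):
    """Weight count of a single 2-D convolution layer (bias ignored)."""
    return kernel * kernel * c_in * c_out

c = 64  # example channel width

# One 5x5 convolution vs. two stacked 3x3 convolutions with the same
# receptive field and the same channel width.
single_5x5 = conv_params(5, c, c)        # 25 * c * c weights
stacked_3x3 = 2 * conv_params(3, c, c)   # 18 * c * c weights

savings = 1 - stacked_3x3 / single_5x5
print(f"5x5: {single_5x5}, two 3x3: {stacked_3x3}, savings: {savings:.0%}")
# savings: 28%
```

The ratio 18/25 is independent of the channel width, so the 28% saving holds at any layer size; the stacked version also adds an extra nonlinearity between the two convolutions.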

Recent Trends in Deep Learning Based Natural Language Processing [Review Article]

This paper reviews significant deep learning related models and methods that have been employed for numerous NLP tasks and provides a walk-through of their evolution.

Clipper: A Low-Latency Online Prediction Serving System

Clipper is introduced, a general-purpose low-latency prediction serving system that introduces a modular architecture to simplify model deployment across frameworks and applications and improves prediction throughput, accuracy, and robustness without modifying the underlying machine learning frameworks.
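
The modular architecture described above can be sketched as a frontend that routes queries to framework-agnostic model containers behind a uniform predict interface. Class names and the toy model below are illustrative assumptions, not Clipper's actual API.

```python
class ModelContainer:
    """Wraps a model from any framework behind a uniform predict()
    interface, in the spirit of Clipper's container abstraction."""

    def __init__(self, name, predict_fn):
        self.name = name
        self._predict = predict_fn

    def predict(self, batch):
        return [self._predict(x) for x in batch]

class Frontend:
    """Routes application queries to registered model containers by name,
    decoupling applications from the underlying ML frameworks."""

    def __init__(self):
        self._containers = {}

    def register(self, container):
        self._containers[container.name] = container

    def query(self, model_name, batch):
        return self._containers[model_name].predict(batch)

frontend = Frontend()
frontend.register(ModelContainer("doubler", lambda x: 2 * x))
print(frontend.query("doubler", [1, 2, 3]))  # [2, 4, 6]
```

Because the frontend only sees the container interface, swapping a TensorFlow model for a scikit-learn one requires no application changes, which is the point of the modular design.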

Kipoi: accelerating the community exchange and reuse of predictive models for genomics

Kipoi, a collaborative initiative to define standards and to foster reuse of trained models in genomics, is presented, providing a unified framework to archive, share, access, use, and build on models developed by the community.
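
The archive/share/access pattern behind such model zoos reduces, at its core, to a registry mapping model names to standardized metadata plus a loader. The field names and toy model below are assumptions for illustration, not Kipoi's actual schema.

```python
# Minimal model-registry sketch: each entry carries standardized metadata
# so downstream users can discover and load models uniformly.
REGISTRY = {}

def register(name, *, task, loader):
    """Archive a model under a name with its metadata and loader."""
    REGISTRY[name] = {"task": task, "loader": loader}

def load(name):
    """Access a registered model through the uniform interface."""
    return REGISTRY[name]["loader"]()

# "Share" a trivial model: a function that adds one to its input.
register("toy/add-one", task="regression", loader=lambda: (lambda x: x + 1))

model = load("toy/add-one")
print(model(41))  # 42
```

Real registries add versioning, dependency pinning, and input/output schemas on top of this shape, but the uniform load-by-name contract is what enables reuse across groups.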

Understanding Convolution for Semantic Segmentation

Dense upsampling convolution (DUC) is designed to generate pixel-level predictions, capturing and decoding detailed information that is generally missing in bilinear upsampling, and a hybrid dilated convolution (HDC) framework is proposed for the encoding phase.
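
The motivation for HDC's varied dilation rates can be demonstrated with a small 1-D coverage computation: stacking dilated convolutions with a constant rate leaves periodic holes in the receptive field (the "gridding" problem), while a suitably increasing sequence of rates covers every offset. The specific rates below are illustrative choices.

```python
from itertools import product

def coverage(dilations, k=3):
    """Set of 1-D input offsets reachable by stacking dilated
    convolutions of kernel size k with the given dilation rates."""
    taps = [[d * t for t in range(-(k // 2), k // 2 + 1)]
            for d in dilations]
    return {sum(combo) for combo in product(*taps)}

# Increasing rates (HDC-style) cover every offset in the receptive field...
hdc = coverage([1, 2, 5])
# ...while a constant rate samples only even offsets: gridding.
constant = coverage([2, 2, 2])

print(sorted(hdc) == list(range(-8, 9)))    # no holes
print(all(o % 2 == 0 for o in constant))    # odd offsets never sampled
```

Both stacks have comparable receptive fields, so the difference is purely in which input pixels ever influence the output, which is exactly the artifact HDC is designed to avoid.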