Corpus ID: 203952148

ExpertMatcher: Automating ML Model Selection for Clients using Hidden Representations

@article{Sharma2019ExpertMatcherAM,
  title={ExpertMatcher: Automating ML Model Selection for Clients using Hidden Representations},
  author={Vivek Sharma and Praneeth Vepakomma and Tristan Swedish and Kenglun Chang and Jayashree Kalpathy-Cramer and Ramesh Raskar},
  journal={ArXiv},
  year={2019},
  volume={abs/1910.03731}
}
Recently, there has been the development of Split Learning, a framework for distributed computation in which model components are split between the client and server (Vepakomma et al., 2018b). As Split Learning scales to include many different model components, a method is needed for matching client-side model components with the best server-side model components. A solution to this problem was introduced in the ExpertMatcher framework (Sharma et al., 2019), which uses autoencoders to match…
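The matching idea described above can be sketched roughly as follows. This is a minimal illustration, not the paper's exact method: the expert names, the toy data, and the use of a linear (PCA-equivalent) autoencoder per expert are all assumptions made for the sake of a runnable example; the core idea shown is routing a client's hidden representation to the server-side expert whose autoencoder reconstructs it with the lowest error.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_linear_autoencoder(X, k=2):
    # A linear autoencoder is equivalent to PCA: keep the top-k
    # principal directions of this expert's training data.
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k]

def reconstruction_error(x, model):
    mu, V = model
    z = (x - mu) @ V.T        # encode into the expert's subspace
    x_hat = mu + z @ V        # decode back to input space
    return float(np.sum((x - x_hat) ** 2))

# Two hypothetical expert domains living on disjoint coordinate subspaces.
A = np.zeros((200, 8)); A[:, :2] = rng.normal(size=(200, 2))
B = np.zeros((200, 8)); B[:, 6:] = rng.normal(size=(200, 2))
models = {"expert_A": fit_linear_autoencoder(A),
          "expert_B": fit_linear_autoencoder(B)}

def match(x):
    # Route the client's representation to the expert whose
    # autoencoder reconstructs it best (lowest error).
    return min(models, key=lambda name: reconstruction_error(x, models[name]))

client_repr = np.zeros(8)
client_repr[6], client_repr[7] = 3.0, 4.0   # lies in expert B's subspace
print(match(client_repr))                    # → expert_B
```

Because the client's representation sits in expert B's subspace, B's autoencoder reconstructs it almost perfectly while A's leaves a large residual, so the router selects expert B without ever evaluating task performance on either model.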
Citations

AdaSplit: Adaptive Trade-offs for Resource-constrained Distributed Deep Learning
  • Ayush Chopra, Surya Kant Sahu, +4 authors Ramesh Raskar
  • Computer Science
  • 2021
Distributed deep learning frameworks like federated learning (FL) and its variants are enabling personalized experiences across a wide range of web clients and mobile/IoT devices. However, these…
Split Learning for collaborative deep learning in healthcare
This work proves the significant benefit of distributed learning in healthcare and paves the way for future real-world implementations of split learning-based approaches in the medical field.
Self-supervised Face Representation Learning
This thesis investigates fine-tuning deep face features in a self-supervised manner for discriminative face representation learning, wherein we develop methods to automatically generate pseudo-labels…
Towards a Universal Gating Network for Mixtures of Experts
Multiple data-free methods for combining heterogeneous neural networks are proposed, ranging from the use of simple output-logit statistics to training specialized gating networks.
Advances and Open Problems in Federated Learning
Motivated by the explosive growth in FL research, this paper discusses recent advances and presents an extensive collection of open problems and challenges.
From Server-Based to Client-Based Machine Learning
A literature review on the progressive development of machine learning from server-based to client-based approaches, which revisits a number of widely used methods and applications and discusses the challenges and future directions.
IMULet: A Cloudlet for Inertial Tracking
This paper proposes an edge cloud-based inertial tracking architecture that overcomes the above limitations, together with a cloud-side ML model that tracks the temporal dynamics of inertial signals.
Unleashing the Tiger: Inference Attacks on Split Learning
This paper exposes vulnerabilities of the split learning protocol and demonstrates its inherent insecurity by introducing general attack strategies targeting the reconstruction of clients' private training sets and by extending previously devised attacks on federated learning.
FedML: A Research Library and Benchmark for Federated Machine Learning
FedML is introduced: an open research library and benchmark that facilitates the development of new federated learning algorithms and fair performance comparisons, providing an efficient and reproducible means of developing and evaluating algorithms for the federated learning research community.

References

Showing 1–10 of 18 references
ExpertMatcher: Automating ML Model Selection for Users in Resource Constrained Countries
ExpertMatcher, a method for automating deep learning model selection using autoencoders, is introduced; it allows resource-constrained clients in developing countries to use the most relevant ML models for a given task without having to evaluate the performance of each ML model.
Expert Gate: Lifelong Learning with a Network of Experts
A model of lifelong learning based on a Network of Experts is proposed, with a set of gating autoencoders that learn a representation for the task at hand and, at test time, automatically forward the test sample to the relevant expert.
Split learning for health: Distributed deep learning without sharing raw patient data
This paper compares the performance and resource-efficiency trade-offs of splitNN against other distributed deep learning methods such as federated learning and large-batch synchronous stochastic gradient descent, and shows highly encouraging results for splitNN.
Distilling the Knowledge in a Neural Network
This work shows that the acoustic model of a heavily used commercial system can be significantly improved by distilling the knowledge in an ensemble of models into a single model, and introduces a new type of ensemble composed of one or more full models and many specialist models that learn to distinguish fine-grained classes the full models confuse.
An Analysis of Single-Layer Networks in Unsupervised Feature Learning
The results show that large numbers of hidden nodes and dense feature extraction are critical to achieving high performance; so critical, in fact, that when these parameters are pushed to their limits, state-of-the-art performance is achieved on both CIFAR-10 and NORB using only a single layer of features.
No Peek: A Survey of private distributed deep learning
The distributed deep learning methods of federated learning, split learning, and large-batch stochastic gradient descent are compared, in addition to private and secure approaches such as differential privacy, homomorphic encryption, oblivious transfer, and garbled circuits, in the context of neural networks.
RCV1: A New Benchmark Collection for Text Categorization Research
This work describes the coding policy and quality-control procedures used in producing the RCV1 data, the intended semantics of the hierarchical category taxonomies, and the corrections necessary to remove erroneous data.
Network of Experts for Large-Scale Image Categorization
A tree-structured network architecture for large-scale image classification is proposed that can be built from any existing convolutional neural network (CNN); its generality is demonstrated by adapting four popular CNNs for image categorization into networks of experts.
Adaptive Mixtures of Local Experts
A new supervised learning procedure is presented for systems composed of many separate networks, each of which learns to handle a subset of the complete set of training cases.
Flash Photography for Data-Driven Hidden Scene Recovery
It is shown that, contrary to what was previously thought, the area that extends beyond the corner is essential for accurate object localization and identification.