An Overview of Deep Learning Architectures in Few-Shot Learning Domain

@article{Jadon2020AnOO,
  title={An Overview of Deep Learning Architectures in Few-Shot Learning Domain},
  author={Shruti Jadon},
  journal={ArXiv},
  year={2020},
  volume={abs/2008.06365}
}
Since 2012, deep learning has revolutionized Artificial Intelligence and achieved state-of-the-art outcomes in domains ranging from image classification to speech generation. Despite its great potential, current architectures come with the prerequisite of large amounts of data. Few-Shot Learning (also known as one-shot learning) is a sub-field of machine learning that aims to create models that can learn the desired objective with little data, similar to how humans…
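For readers new to the setting, the sketch below illustrates the standard N-way K-shot episode construction that few-shot methods are typically trained and evaluated on; the dataset format and helper names are illustrative assumptions, not code from the paper.

import random
from collections import defaultdict

def sample_episode(dataset, n_way=5, k_shot=1, n_query=5):
    """Sample one N-way K-shot episode from (example, label) pairs."""
    by_class = defaultdict(list)
    for x, y in dataset:
        by_class[y].append(x)
    classes = random.sample(list(by_class), n_way)
    support, query = [], []
    for episode_label, c in enumerate(classes):
        examples = random.sample(by_class[c], k_shot + n_query)
        support += [(x, episode_label) for x in examples[:k_shot]]
        query += [(x, episode_label) for x in examples[k_shot:]]
    return support, query

# A 5-way 1-shot episode from a toy dataset of 20 classes with 10 examples each.
toy = [(f"img_{c}_{i}", c) for c in range(20) for i in range(10)]
support_set, query_set = sample_episode(toy, n_way=5, k_shot=1, n_query=5)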
Citations

Prototype-Based Personalized Pruning
TLDR
A dynamic personalization method called prototype-based personalized pruning (PPP) is proposed that considers both personalization and model efficiency: the network can be pruned with a prototype representing the characteristics of personal data, and it performs well without retraining or fine-tuning.
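As a rough illustration of the idea summarized above (not the authors' exact PPP algorithm), the sketch below treats the mean per-channel activation over a user's personal data as a prototype and masks the channels with the smallest prototype magnitude, with no retraining; the layer sizes and keep ratio are assumptions.

import torch
import torch.nn as nn

def prototype_channel_mask(layer: nn.Conv2d, personal_batch: torch.Tensor,
                           keep_ratio: float = 0.5) -> torch.Tensor:
    """Return a 0/1 mask over the layer's output channels."""
    with torch.no_grad():
        acts = layer(personal_batch)              # (N, C, H, W)
        prototype = acts.mean(dim=(0, 2, 3))      # per-channel prototype, shape (C,)
        k = max(1, int(keep_ratio * prototype.numel()))
        mask = torch.zeros_like(prototype)
        mask[prototype.abs().topk(k).indices] = 1.0
    return mask

# Prune by masking the layer's output in the forward pass.
layer = nn.Conv2d(3, 16, kernel_size=3, padding=1)
personal_data = torch.randn(8, 3, 32, 32)          # stand-in for a user's personal data
mask = prototype_channel_mask(layer, personal_data, keep_ratio=0.25)
pruned_out = layer(personal_data) * mask.view(1, -1, 1, 1)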
AdaptSum: Towards Low-Resource Domain Adaptation for Abstractive Summarization
TLDR
A study of domain adaptation for the abstractive summarization task across six diverse target domains in a low-resource setting, which finds that continued pre-training can lead to catastrophic forgetting in the pre-trained model and that a learning method with less forgetting can alleviate this issue.
COVID-19 detection from scarce chest x-ray image data using few-shot deep learning approach
  • Shruti Jadon
  • Computer Science, Engineering
  • Medical Imaging
  • 2021
TLDR
This work experiments with well-known solutions for data scarcity in deep learning to detect COVID-19 using siamese networks, and proposes a custom few-shot learning approach that achieves 96.4% accuracy, an improvement over the 83% of baseline models.
Challenges and approaches to time-series forecasting in data center telemetry: A Survey
TLDR
This survey summarizes and evaluates the performance of well-known time-series forecasting techniques for telemetry data, and aims to provide a comprehensive summary that supports innovation in forecasting approaches for telemetry data.
Knowledge-Assisted Deep Reinforcement Learning in 5G Scheduler Design: From Theoretical Framework to Implementation
TLDR
A knowledge-assisted deep reinforcement learning (DRL) algorithm to design wireless schedulers in fifth-generation (5G) cellular networks with time-sensitive traffic, and an architecture for online training and inference in which K-DDPG initializes the scheduler offline and then fine-tunes it online to handle the mismatch between offline simulations and non-stationary real-world systems.

References

Showing 1-10 of 32 references
Optimization as a Model for Few-Shot Learning
Matching Networks for One Shot Learning
TLDR
This work employs ideas from metric learning based on deep neural features and from recent advances that augment neural networks with external memories to learn a network that maps a small labelled support set and an unlabelled example to its label, obviating the need for fine-tuning to adapt to new class types.
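A minimal PyTorch sketch of the matching-networks idea: a query is classified by attending over an embedded support set with cosine similarity. The full architecture also conditions the embeddings on the whole support set, which is omitted here, and the toy embedding network is an assumption for illustration.

import torch
import torch.nn.functional as F

def matching_predict(embed, support_x, support_y, query_x, n_classes):
    """Classify queries as an attention-weighted sum of support labels."""
    s = F.normalize(embed(support_x), dim=1)            # (S, D) support embeddings
    q = F.normalize(embed(query_x), dim=1)              # (Q, D) query embeddings
    attn = F.softmax(q @ s.t(), dim=1)                  # (Q, S) cosine-similarity attention
    one_hot = F.one_hot(support_y, n_classes).float()   # (S, C)
    return attn @ one_hot                               # (Q, C) label distributions

# Toy 5-way 1-shot usage with a random linear embedding.
embed = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(28 * 28, 64))
support_x, support_y = torch.randn(5, 1, 28, 28), torch.arange(5)
query_x = torch.randn(3, 1, 28, 28)
probs = matching_predict(embed, support_x, support_y, query_x, n_classes=5)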
Siamese Neural Networks for One-Shot Image Recognition
TLDR
A method for learning siamese neural networks which employ a unique structure to naturally rank similarity between inputs; the approach achieves strong results that exceed those of other deep learning models, with near state-of-the-art performance on one-shot classification tasks.
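A minimal PyTorch sketch in the spirit of this approach: a shared encoder embeds both inputs, and a sigmoid over the weighted L1 distance between the embeddings scores whether the pair belongs to the same class; one-shot classification then picks the class whose single exemplar scores highest. The small encoder and input shapes are illustrative assumptions.

import torch
import torch.nn as nn

class SiameseNet(nn.Module):
    def __init__(self, in_dim=784, emb_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Flatten(), nn.Linear(in_dim, 256),
                                     nn.ReLU(), nn.Linear(256, emb_dim))
        self.head = nn.Linear(emb_dim, 1)    # weights the per-dimension |difference|

    def forward(self, x1, x2):
        e1, e2 = self.encoder(x1), self.encoder(x2)
        return torch.sigmoid(self.head(torch.abs(e1 - e2)))  # same-class probability

# One-shot classification: compare the query against one exemplar per class.
net = SiameseNet()
exemplars = torch.randn(5, 1, 28, 28)                  # one image per class
query = torch.randn(1, 1, 28, 28)
scores = net(query.expand(5, -1, -1, -1), exemplars)   # (5, 1) similarity scores
predicted_class = scores.argmax().item()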
Improving Siamese Networks for One Shot Learning using Kernel Based Activation functions
TLDR
This paper presents a method to improve the accuracy of siamese networks using Kafnets (kernel-based non-parametric activation functions for neural networks), learning proper embeddings in relatively fewer epochs and achieving strong results that exceed those of ReLU-based deep learning models.
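A minimal sketch of a kernel activation function (KAF) of the kind Kafnets use: the non-linearity is a learned mixture of Gaussian kernels placed on a fixed dictionary of points, so the activation itself is trainable. Dictionary size, bandwidth, and initialization below are assumptions for illustration.

import torch
import torch.nn as nn

class KAF(nn.Module):
    def __init__(self, num_features, dict_size=20, boundary=3.0):
        super().__init__()
        d = torch.linspace(-boundary, boundary, dict_size)       # fixed dictionary of points
        self.register_buffer("dictionary", d)
        self.gamma = 1.0 / (2.0 * (d[1] - d[0]).item() ** 2)     # kernel bandwidth
        # one set of learnable mixing coefficients per feature
        self.alpha = nn.Parameter(0.1 * torch.randn(num_features, dict_size))

    def forward(self, x):                                        # x: (N, F)
        k = torch.exp(-self.gamma * (x.unsqueeze(-1) - self.dictionary) ** 2)  # (N, F, D)
        return (k * self.alpha).sum(dim=-1)                      # (N, F)

# Drop-in replacement for a ReLU inside an embedding network.
embed = nn.Sequential(nn.Linear(784, 128), KAF(128), nn.Linear(128, 64))
out = embed(torch.randn(4, 784))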
Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks
We propose an algorithm for meta-learning that is model-agnostic, in the sense that it is compatible with any model trained with gradient descent and applicable to a variety of different learning…
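A minimal first-order sketch of the MAML loop: each task adapts a copy of the model on its support set, and the query losses of the adapted copies update the shared initialization. The original algorithm differentiates through the inner loop; the first-order shortcut and the toy task shapes here are simplifications.

import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

def maml_outer_step(model, tasks, inner_lr=0.01, inner_steps=1, meta_lr=0.001):
    meta_opt = torch.optim.SGD(model.parameters(), lr=meta_lr)
    meta_opt.zero_grad()
    for support_x, support_y, query_x, query_y in tasks:
        fast = copy.deepcopy(model)                        # task-specific copy
        opt = torch.optim.SGD(fast.parameters(), lr=inner_lr)
        for _ in range(inner_steps):                       # inner-loop adaptation
            opt.zero_grad()
            F.cross_entropy(fast(support_x), support_y).backward()
            opt.step()
        opt.zero_grad()
        F.cross_entropy(fast(query_x), query_y).backward() # query loss of the adapted copy
        for p, fp in zip(model.parameters(), fast.parameters()):
            p.grad = fp.grad if p.grad is None else p.grad + fp.grad
    meta_opt.step()                                        # update the shared initialization

# Toy usage with one random 5-way task.
model = nn.Sequential(nn.Flatten(), nn.Linear(784, 5))
task = (torch.randn(5, 1, 28, 28), torch.arange(5),
        torch.randn(5, 1, 28, 28), torch.arange(5))
maml_outer_step(model, [task])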
One-shot Learning with Memory-Augmented Neural Networks
TLDR
The ability of a memory-augmented neural network to rapidly assimilate new data, and to leverage that data to make accurate predictions after only a few samples, is demonstrated.
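A hypothetical sketch of the external-memory mechanism this line of work relies on: (embedding, label) pairs are written to a key-value memory and read back with cosine-similarity attention. The actual MANN uses a trained controller and a learned least-recently-used write rule, both omitted here.

import torch
import torch.nn.functional as F

class EpisodicMemory:
    def __init__(self):
        self.keys, self.values = [], []

    def write(self, embedding: torch.Tensor, label: int):
        self.keys.append(F.normalize(embedding, dim=0))
        self.values.append(label)

    def read(self, query: torch.Tensor, n_classes: int) -> torch.Tensor:
        keys = torch.stack(self.keys)                               # (M, D) stored keys
        attn = F.softmax(keys @ F.normalize(query, dim=0), dim=0)   # attention over memory slots
        one_hot = F.one_hot(torch.tensor(self.values), n_classes).float()
        return attn @ one_hot                                       # class distribution

# Write one embedding per class, then classify a query embedding.
memory = EpisodicMemory()
for c in range(5):
    memory.write(torch.randn(64), label=c)
prediction = memory.read(torch.randn(64), n_classes=5)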
Image Deformation Meta-Networks for One-Shot Learning
TLDR
This work combines a meta-learner with an image deformation sub-network that produces additional training examples, and optimizes both models in an end-to-end manner to significantly outperform state-of-the-art approaches on widely used one-shot learning benchmarks.
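A hypothetical sketch of the data-synthesis intuition, not the learned deformation sub-network itself: extra one-shot training examples are created by blending patches of a probe image with a gallery image. In the paper the blending is learned end-to-end; the random patch weights below are a stand-in.

import torch

def deform(probe: torch.Tensor, gallery: torch.Tensor, grid: int = 3) -> torch.Tensor:
    """Blend a grid of patches from probe and gallery images of shape (C, H, W)."""
    c, h, w = probe.shape
    ph, pw = h // grid, w // grid
    out = probe.clone()
    weights = torch.rand(grid, grid)     # per-patch blending weights (random stand-in)
    for i in range(grid):
        for j in range(grid):
            sl = (slice(None), slice(i * ph, (i + 1) * ph), slice(j * pw, (j + 1) * pw))
            out[sl] = weights[i, j] * probe[sl] + (1 - weights[i, j]) * gallery[sl]
    return out

# Synthesize a few extra support examples for a one-shot class.
probe, gallery = torch.randn(3, 84, 84), torch.randn(3, 84, 84)
augmented = [deform(probe, gallery) for _ in range(4)]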
Learning Deep Representation for Imbalanced Classification
TLDR
The representation learned by this approach, when combined with a simple k-nearest neighbor (kNN) algorithm, shows significant improvements over existing methods on both high- and low-level vision classification tasks that exhibit imbalanced class distribution.
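A minimal sketch of the evaluation recipe described above: once an encoder has learned a representation, classification is plain k-nearest-neighbour voting in that embedding space. The random encoder in the usage lines is only a placeholder for the learned one.

import torch
import torch.nn as nn

def knn_predict(encoder: nn.Module, train_x, train_y, test_x, k: int = 5):
    with torch.no_grad():
        train_emb = encoder(train_x)                     # (N, D) embeddings
        test_emb = encoder(test_x)                       # (M, D)
    dists = torch.cdist(test_emb, train_emb)             # (M, N) Euclidean distances
    nn_idx = dists.topk(k, largest=False).indices        # indices of k nearest neighbours
    return train_y[nn_idx].mode(dim=1).values            # majority vote per test point

# Toy usage on random data with a random encoder.
encoder = nn.Sequential(nn.Flatten(), nn.Linear(784, 32))
train_x, train_y = torch.randn(100, 1, 28, 28), torch.randint(0, 10, (100,))
pred = knn_predict(encoder, train_x, train_y, torch.randn(7, 1, 28, 28))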
Finding Task-Relevant Features for Few-Shot Learning by Category Traversal
TLDR
A Category Traversal Module is introduced that can be inserted as a plug-and-play module into most metric-learning based few-shot learners, identifying task-relevant features based on both intra-class commonality and inter-class uniqueness in the feature space.
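A hypothetical sketch of the intuition named above, not the paper's exact module: per-class means of the support embeddings capture intra-class commonality, their spread across classes is read as inter-class uniqueness, and the result is used as a soft mask that re-weights feature dimensions before metric comparison.

import torch

def task_relevant_mask(support_feats: torch.Tensor, support_y: torch.Tensor,
                       n_classes: int) -> torch.Tensor:
    """support_feats: (S, D) embeddings of the task's support set."""
    class_means = torch.stack([support_feats[support_y == c].mean(dim=0)
                               for c in range(n_classes)])        # (C, D) intra-class commonality
    uniqueness = class_means.var(dim=0, unbiased=False)           # (D,) spread across classes
    return torch.softmax(uniqueness, dim=0) * uniqueness.numel()  # soft per-dimension mask

# Re-weight support and query embeddings with the task-specific mask.
support_feats, support_y = torch.randn(25, 64), torch.arange(5).repeat(5)
mask = task_relevant_mask(support_feats, support_y, n_classes=5)
reweighted_query = torch.randn(10, 64) * mask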
Weakly Supervised One-Shot Detection with Attention Siamese Networks
TLDR
The attention Siamese networks are evaluated on a one-shot detection task from the audio domain, where they detect audio keywords in spoken utterances; they considerably outperform a baseline approach and yield 42.6% average precision for detection across 10 unseen classes.