Few-Shot Decoding of Brain Activation Maps

@inproceedings{Bontonou2021FewShotDO,
  title={Few-Shot Decoding of Brain Activation Maps},
  author={Myriam Bontonou and Giulia Lioi and Nicolas Farrugia and Vincent Gripon},
  booktitle={2021 29th European Signal Processing Conference (EUSIPCO)},
  year={2021},
  pages={1326--1330}
}
Few-shot learning addresses problems for which a limited number of training examples are available. So far, the field has been mostly driven by applications in computer vision. Here, we are interested in adapting recently introduced few-shot methods to solve problems dealing with neuroimaging data, a promising application field. To this end, we create a neuroimaging benchmark dataset for few-shot learning and compare multiple learning paradigms, including meta-learning, as well as various… 
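To make the few-shot setting described above concrete, here is a minimal sketch (not from the paper; all names and parameters are illustrative) of how an N-way K-shot evaluation episode is typically sampled from a labelled feature set:

```python
import numpy as np

def sample_episode(features, labels, n_way=5, k_shot=1, n_query=15, rng=None):
    """Sample one N-way K-shot episode from a labelled feature set.

    Returns (support_x, support_y, query_x, query_y); class labels are
    re-indexed to 0..n_way-1 within the episode.
    """
    rng = rng or np.random.default_rng()
    classes = rng.choice(np.unique(labels), size=n_way, replace=False)
    support_x, support_y, query_x, query_y = [], [], [], []
    for new_label, c in enumerate(classes):
        idx = rng.permutation(np.where(labels == c)[0])
        support_x.append(features[idx[:k_shot]])          # K labelled examples
        support_y += [new_label] * k_shot
        query_x.append(features[idx[k_shot:k_shot + n_query]])  # held-out queries
        query_y += [new_label] * n_query
    return (np.concatenate(support_x), np.array(support_y),
            np.concatenate(query_x), np.array(query_y))
```

A classifier is adapted on the small support set and evaluated on the query set, and accuracy is averaged over many such episodes.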

Figures and Tables from this paper

Pruning Graph Convolutional Networks to select meaningful graph frequencies for fMRI decoding

TLDR
A deep learning architecture is introduced and a pruning methodology is adapted to automatically identify the graph frequencies that are the most useful to decode fMRI signals, and it is shown that low graph frequencies are consistently identified as the most important for fMRI decoding.
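As background for the TLDR above: "graph frequencies" are the eigenvectors of the graph Laplacian ordered by eigenvalue, with small eigenvalues corresponding to signals that vary smoothly over the graph. A minimal NumPy sketch (not the paper's implementation; names are illustrative):

```python
import numpy as np

def graph_frequencies(adj):
    """Graph Fourier basis from the combinatorial Laplacian L = D - A.
    Eigenvectors ordered by ascending eigenvalue are the graph
    'frequencies'; small eigenvalues = smooth (low-frequency) modes."""
    lap = np.diag(adj.sum(axis=1)) - adj
    eigvals, eigvecs = np.linalg.eigh(lap)
    return eigvals, eigvecs

def low_pass(signal, adj, keep=2):
    """Project a graph signal onto its `keep` lowest graph frequencies."""
    _, U = graph_frequencies(adj)
    coeffs = U.T @ signal       # graph Fourier transform
    coeffs[keep:] = 0.0         # discard high-frequency components
    return U @ coeffs           # inverse transform
```

Pruning the high-frequency components, as the paper's finding suggests, amounts to a low-pass filter of this kind on the fMRI signal defined over the brain graph.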

On the benefits of self-taught learning for brain decoding

TLDR
It is shown that such a self-taught learning process always improves the performance of the classifiers, but the magnitude of the benefits strongly depends on the number of examples available for pre-training and on the complexity of the targeted downstream task.

A Novel Semi-supervised Meta Learning Method for Subject-transfer Brain-computer Interface

TLDR
The proposed SSML learns a meta model with the existing subjects, then tunes the model in a semi-supervised learning manner, i.e. using few labeled and many unlabeled samples of target subject for calibration.

Easy—Ensemble Augmented-Shot-Y-Shaped Learning: State-of-the-Art Few-Shot Classification with Simple Components

TLDR
This work proposes a simple way to train few-shot classification models, with the aim of reaching top performance on multiple standardized benchmarks in the field.

References

SHOWING 1-10 OF 32 REFERENCES

Transferability of Brain decoding using Graph Convolutional Networks

TLDR
The results indicate that in contrast to natural images, the scanning condition, instead of task domain, has a larger impact on feature transfer for medical imaging.

Functional annotation of human cognitive states using deep graph convolution

Information-based functional brain mapping.

TLDR
The development of high-resolution neuroimaging and multielectrode electrophysiological recording provides neuroscientists with huge amounts of multivariate data, but the local averaging standardly applied to this end may obscure the effects of greatest neuroscientific interest.

Hype versus hope: Deep learning encodes more predictive and robust brain imaging representations than standard machine learning

TLDR
A large-scale systematic comparison of SML approaches versus DL profiled in a ten-way age and gender-based classification task on 12,314 structural MRI images shows that DL methods, if implemented and trained following the prevalent DL practices, have the potential to substantially improve compared to SML approaches.

Leveraging the Feature Distribution in Transfer-based Few-Shot Learning

TLDR
A transfer-based novel method that builds on two steps: preprocessing the feature vectors so that they become closer to Gaussian-like distributions, and leveraging this preprocessing using an optimal-transport inspired algorithm.
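The preprocessing step summarised above can be sketched as follows. This is an illustrative approximation only (a simple power transform followed by centring and L2-normalisation), not the authors' exact pipeline, and all names are made up:

```python
import numpy as np

def gaussianize(feats, beta=0.5, eps=1e-6):
    """Make non-negative feature vectors closer to Gaussian-like
    distributions: element-wise power transform, then centring on the
    episode mean and projection onto the unit sphere."""
    f = np.power(feats + eps, beta)                    # power transform (beta=0.5 ~ sqrt)
    f = f - f.mean(axis=0, keepdims=True)              # centre per dimension
    f = f / np.linalg.norm(f, axis=1, keepdims=True)   # L2-normalise each vector
    return f
```

The transformed features are then handed to the optimal-transport-inspired assignment step described in the TLDR.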

How to train your MAML

TLDR
This paper proposes various modifications to MAML that not only stabilize the system, but also substantially improve the generalization performance, convergence speed and computational overhead of MAML, which it is called MAML++.

Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks

We propose an algorithm for meta-learning that is model-agnostic, in the sense that it is compatible with any model trained with gradient descent and applicable to a variety of different learning problems.
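A minimal sketch of the meta-update described above, on toy scalar regression tasks. This uses the first-order approximation (FOMAML, which drops the second-derivative term of full MAML), and every name and task choice here is illustrative:

```python
import numpy as np

def fomaml_step(w, tasks, inner_lr=0.1, outer_lr=0.05, rng=None):
    """One first-order MAML meta-update for a scalar linear model
    y = w * x with squared loss.  Each task is a target slope a: the
    inner loop adapts w with one gradient step on a support set, and
    the outer loop applies the post-adaptation (query-set) gradient
    to the shared initialisation."""
    rng = rng or np.random.default_rng()
    meta_grad = 0.0
    for a in tasks:
        x = rng.uniform(-1, 1, size=20)
        grad = np.mean(2 * (w * x - a * x) * x)   # dL/dw on the support set
        w_adapted = w - inner_lr * grad           # inner adaptation step
        x_q = rng.uniform(-1, 1, size=20)         # query set for this task
        meta_grad += np.mean(2 * (w_adapted * x_q - a * x_q) * x_q)
    return w - outer_lr * meta_grad / len(tasks)  # outer (meta) step
```

Iterating this step drives the initialisation toward a point from which one inner gradient step performs well on every task, which for symmetric tasks sits near the mean of the task optima.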

A Closer Look at Few-shot Classification

TLDR
The results reveal that reducing intra-class variation is an important factor when the feature backbone is shallow, but not as critical when using deeper backbones, and a baseline method with a standard fine-tuning practice compares favorably against other state-of-the-art few-shot learning algorithms.

Charting the Right Manifold: Manifold Mixup for Few-shot Learning

TLDR
This work observes that regularizing the feature manifold, enriched via self-supervised techniques, with Manifold Mixup significantly improves few-shot learning performance, and proposes the method S2M2, which beats the current state-of-the-art accuracy on standard few-shot learning datasets.

Image Deformation Meta-Networks for One-Shot Learning

TLDR
This work combines a meta-learner with an image deformation sub-network that produces additional training examples, and optimize both models in an end-to-end manner to significantly outperform state-of-the-art approaches on widely used one-shot learning benchmarks.