Continual Learning at the Edge: Real-Time Training on Smartphone Devices

@article{Pellegrini2021ContinualLA,
  title={Continual Learning at the Edge: Real-Time Training on Smartphone Devices},
  author={Lorenzo Pellegrini and Vincenzo Lomonaco and Gabriele Graffieti and Davide Maltoni},
  journal={ArXiv},
  year={2021},
  volume={abs/2105.13127}
}
On-device training for personalized learning is a challenging research problem. Being able to quickly adapt deep prediction models at the edge is necessary to better suit personal user needs. However, adaptation at the edge raises questions about both the efficiency and sustainability of the learning process and about the ability to work under shifting data distributions. Indeed, naively fine-tuning a prediction model only on the newly available data results in catastrophic forgetting, a sudden…
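
The forgetting effect the abstract describes is easy to reproduce. Below is a minimal, self-contained sketch (synthetic data, PyTorch; not code from the paper) in which naive fine-tuning on a second, distribution-shifted task erases performance on the first.

# Minimal sketch (not from the paper): naive fine-tuning on new data only,
# illustrating catastrophic forgetting on synthetic two-task data.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 2))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

def make_task(center):
    # Two Gaussian blobs around +/- center, labeled 0 and 1.
    x = torch.cat([torch.randn(200, 2) + torch.tensor(center),
                   torch.randn(200, 2) - torch.tensor(center)])
    y = torch.cat([torch.zeros(200), torch.ones(200)]).long()
    return x, y

def train(x, y, epochs=50):
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()

def accuracy(x, y):
    return (model(x).argmax(1) == y).float().mean().item()

xa, ya = make_task([4.0, 0.0])  # task A: separable along the first axis
xb, yb = make_task([0.0, 4.0])  # task B: separable along the second axis

train(xa, ya)
print("task A after training on A:", accuracy(xa, ya))  # high
train(xb, yb)  # naive fine-tuning on the new data only
print("task A after training on B:", accuracy(xa, ya))  # typically collapses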

Online Continual Learning for Embedded Devices

This work identifies criteria that online continual learners must meet to perform real-time, on-device learning effectively, and measures their performance, memory usage, compute requirements, and ability to generalize to out-of-domain inputs.

SparCL: Sparse Continual Learning on the Edge

This work proposes Sparse Continual Learning (SparCL), the first framework to leverage sparsity for cost-effective continual learning on edge devices; it consistently improves the training efficiency of existing state-of-the-art CL methods and further improves SOTA accuracy.
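
A hedged sketch of the core idea only (this is not the SparCL algorithm itself, which uses more refined, dynamically adjusted criteria): keep a binary mask over the weights so that storage and updates touch only a sparse subset.

# Illustrative magnitude-based sparse training mask (an assumption for
# exposition, not SparCL's actual weight-selection scheme).
import torch
import torch.nn as nn

layer = nn.Linear(128, 64)
sparsity = 0.8  # fraction of weights forced to zero

with torch.no_grad():
    k = int(layer.weight.numel() * (1 - sparsity))
    threshold = layer.weight.abs().flatten().topk(k).values.min()
    mask = (layer.weight.abs() >= threshold).float()
    layer.weight.mul_(mask)

def reapply_mask():
    # Call after each optimizer step so pruned weights stay at zero
    # and gradients/storage only matter for the surviving subset.
    with torch.no_grad():
        layer.weight.mul_(mask)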

Federated Learning - Methods, Applications and beyond

This paper provides a brief overview of existing methods and applications in the fields of vertical and horizontal federated learning, as well as federated transfer learning.

Generative Negative Replay for Continual Learning

It is shown that, while the generated data are usually not able to improve the classification accuracy for the old classes, they can be effective as negative examples (or antagonists) to better learn the new classes, especially when the learning experiences are small and contain examples of just one or a few classes.
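
One way to read the "negative example" role, sketched below as a hypothetical loss term (my reading for exposition, not the paper's exact formulation): generated replay samples are discouraged from being assigned to the new classes, which sharpens the new-class decision boundaries.

# Hypothetical negative-replay loss sketch.
import torch
import torch.nn.functional as F

def negative_replay_loss(logits_new, y_new, logits_gen, new_class_ids):
    # Standard supervised loss on real samples from the new classes.
    ce = F.cross_entropy(logits_new, y_new)
    # Penalize probability mass that generated (old-distribution) samples
    # place on the new classes, i.e., treat them as antagonists.
    p_gen = F.softmax(logits_gen, dim=1)
    neg = p_gen[:, new_class_ids].sum(dim=1).mean()
    return ce + neg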

Online Continual Learning for Embedded Devices

These methods were chosen due to their ability to learn one sample at a time in a single pass over a dataset without task labels (i.e., online continual learning) with low memory and compute requirements.
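The protocol these methods target reduces to a simple loop, sketched here with placeholder names (model, opt, loss_fn, stream are assumptions): every sample is seen exactly once, triggers an immediate update, and carries no task label.

# Minimal online continual learning loop: single pass, one sample at a time.
def online_learn(model, opt, loss_fn, stream):
    for x, y in stream:  # each (x, y) pair is seen exactly once
        opt.zero_grad()
        loss = loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0))
        loss.backward()
        opt.step()  # update immediately, then discard the sample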

A weakly supervised approach for recycling code recognition

References

Continual Learning on the Edge with TensorFlow Lite

This paper shows that, although transfer learning is a good first step for on-device model training, it suffers from catastrophic forgetting in more realistic scenarios, and it extends the TensorFlow Lite library with continual learning capabilities.

Latent Replay for Real-Time Continual Learning

This paper introduces an original technique named Latent Replay: instead of storing a portion of past data in the input space, activation volumes are stored at some intermediate layer, which can significantly reduce the computation and storage required by native rehearsal.
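
A hedged sketch of that idea (the names, including the latent_buffer object with sample()/add() methods, are illustrative, not the authors' code): split the network at an intermediate layer, store activations of past data there, and mix them with fresh activations when training only the upper layers.

# Latent replay training step sketch.
import torch

def train_step(lower, upper, opt, loss_fn, x_new, y_new, latent_buffer):
    with torch.no_grad():
        z_new = lower(x_new)               # lower layers frozen (or slowed)
    z_old, y_old = latent_buffer.sample()  # stored activation volumes
    z = torch.cat([z_new, z_old])
    y = torch.cat([y_new, y_old])
    opt.zero_grad()
    loss_fn(upper(z), y).backward()        # only upper layers are updated
    opt.step()
    latent_buffer.add(z_new, y_new)        # cheaper than storing raw inputs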

Avalanche: an End-to-End Library for Continual Learning

Avalanche, an open-source end-to-end library for continual learning research based on PyTorch, is designed to provide a shared and collaborative codebase for fast prototyping, training, and reproducible evaluation of continual learning algorithms.
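
A usage sketch in the spirit of Avalanche's official examples (exact import paths and signatures vary across library versions, so treat this as indicative):

import torch
from avalanche.benchmarks.classic import SplitMNIST
from avalanche.models import SimpleMLP
from avalanche.training.supervised import Naive

# Benchmark: MNIST split into 5 incremental experiences.
benchmark = SplitMNIST(n_experiences=5)
model = SimpleMLP(num_classes=benchmark.n_classes)
strategy = Naive(
    model,
    torch.optim.SGD(model.parameters(), lr=0.001),
    torch.nn.CrossEntropyLoss(),
    train_mb_size=32,
    train_epochs=1,
)
# Train on each experience in sequence, evaluating on the full test stream.
for experience in benchmark.train_stream:
    strategy.train(experience)
    strategy.eval(benchmark.test_stream)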

CORe50: a New Dataset and Benchmark for Continuous Object Recognition

This work proposes CORe50, a new dataset and benchmark specifically designed for continuous object recognition, and introduces baseline approaches for different continuous learning scenarios.

Rehearsal-Free Continual Learning over Small Non-I.I.D. Batches

A novel continual learning protocol based on the CORe50 benchmark is introduced, and two rehearsal-free continual learning techniques, CWR* and AR1*, are proposed that can learn effectively even in the challenging case of nearly 400 small non-i.i.d. incremental batches.
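
A hedged sketch of the consolidation step at the heart of CWR*, as I understand it from the literature (not the authors' code; the weighting details are an approximation): the output layer keeps consolidated weights cw, temporary weights tw are trained on each batch, and the two are merged in proportion to how often each class has been seen before.

# CWR*-style output-layer consolidation sketch.
def cwr_consolidate(cw, tw, class_counts, batch_counts):
    # cw, tw: tensors whose rows are per-class output weights.
    # class_counts: samples seen per class so far; batch_counts: in this batch.
    for c, now in batch_counts.items():
        past = class_counts.get(c, 0)
        w_past = (past / now) ** 0.5 if now > 0 else 0.0
        # Weighted average of consolidated and freshly trained class weights.
        cw[c] = (cw[c] * w_past + tw[c]) / (w_past + 1)
        class_counts[c] = past + now
    return cw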

Replay in Deep Learning: Current Approaches and Missing Biological Elements

This letter provides the first comprehensive comparison between replay in the mammalian brain and replay in artificial neural networks, identifies multiple aspects of biological replay that are missing in deep learning systems, and hypothesizes how they could be used to improve artificial neural networks.

MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications

This work introduces two simple global hyper-parameters that efficiently trade off between latency and accuracy, and demonstrates the effectiveness of MobileNets across a wide range of applications and use cases including object detection, fine-grained classification, face attributes, and large-scale geo-localization.
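
The building block behind those trade-offs is the depthwise separable convolution, with a width multiplier shrinking channel counts (the second hyper-parameter, the resolution multiplier, shrinks the input size analogously). A minimal PyTorch sketch of one such block, assuming the standard conv/BN/ReLU layout:

import torch.nn as nn

def depthwise_separable(in_ch, out_ch, alpha=1.0, stride=1):
    # Width multiplier alpha uniformly thins every layer's channels.
    in_ch, out_ch = int(in_ch * alpha), int(out_ch * alpha)
    return nn.Sequential(
        # Depthwise 3x3: one filter per input channel (groups=in_ch).
        nn.Conv2d(in_ch, in_ch, 3, stride=stride, padding=1, groups=in_ch),
        nn.BatchNorm2d(in_ch),
        nn.ReLU(inplace=True),
        # Pointwise 1x1: mixes channels, replacing a full 3x3 convolution.
        nn.Conv2d(in_ch, out_ch, 1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )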