A Multi-label Continual Learning Framework to Scale Deep Learning Approaches for Packaging Equipment Monitoring

@article{Pezze2022AMC,
  title={A Multi-label Continual Learning Framework to Scale Deep Learning Approaches for Packaging Equipment Monitoring},
  author={Davide Dalle Pezze and Denis Deronjic and Chiara Masiero and Diego Tosato and Alessandro Beghi and Gian Antonio Susto},
  journal={arXiv preprint arXiv:2208.04227},
  year={2022}
}
Continual Learning aims to learn from a stream of tasks while retaining knowledge of both new and old tasks. While many approaches have been proposed for single-label classification, multi-label classification in the continual scenario remains a challenging problem. For the first time, we study multi-label classification in the Domain Incremental Learning scenario. Moreover, we propose an efficient approach that has logarithmic complexity with regard to the number of tasks, and can be… 


References

Showing 1–10 of 42 references

FORMULA: A Deep Learning Approach for Rare Alarms Predictions in Industrial Equipment

A novel deep learning-based approach called FORMULA (alarm FORecasting in MUlti-LAbel setting), which outperforms not only classic multi-label techniques but also models based on recurrent neural networks, and exhibits a lower computational burden.

Imbalanced Continual Learning with Partitioning Reservoir Sampling

This work jointly addresses two independently studied problems, catastrophic forgetting and long-tailed label distributions, by first empirically demonstrating a new challenge: destructive forgetting of minority concepts on the tail. It then proposes a new sampling strategy for replay-based approaches, named Partitioning Reservoir Sampling (PRS), which allows the model to maintain balanced knowledge of both head and tail classes.
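The PRS entry above describes a replay buffer whose memory is partitioned by label, with a reservoir-sampling update inside each partition. A minimal sketch of that idea, assuming an equal per-label split for simplicity (the paper's actual allocation rule favours rare, tail labels and is more involved):

```python
import random

class PartitionedReservoir:
    """Replay memory split into per-label reservoirs, in the spirit of
    Partitioning Reservoir Sampling (PRS). Simplifying assumption: the
    memory is divided equally across labels, whereas the paper allocates
    space adaptively to protect minority (tail) labels."""

    def __init__(self, capacity, labels):
        self.cap = capacity // len(labels)      # equal share per label
        self.buffers = {l: [] for l in labels}  # one reservoir per label
        self.seen = {l: 0 for l in labels}      # items seen per label

    def add(self, x, label):
        """Standard reservoir update within the label's partition:
        every item of that label seen so far has equal probability
        of residing in the buffer."""
        self.seen[label] += 1
        buf = self.buffers[label]
        if len(buf) < self.cap:
            buf.append(x)
        else:
            j = random.randrange(self.seen[label])
            if j < self.cap:
                buf[j] = x
```

Because each label keeps its own reservoir, a label that appears rarely in the stream is never crowded out of memory by frequent (head) labels, which is the balancing effect the summary describes.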

A Review on Multi-Label Learning Algorithms

This paper aims to provide a timely review on this area with emphasis on state-of-the-art multi-label learning algorithms with relevant analyses and discussions.

A Deep Learning Model for Smart Manufacturing Using Convolutional LSTM Neural Network Autoencoders

An end-to-end model for multistep machine speed prediction that comprises a deep convolutional LSTM encoder–decoder architecture is proposed and extensive empirical analyses demonstrate the value of the proposed method when compared with the state-of-the-art predictive models.

A Deep Convolutional Autoencoder-Based Approach for Anomaly Detection With Industrial, Non-Images, 2-Dimensional Data: A Semiconductor Manufacturing Case Study

This paper investigates unsupervised anomaly detection for 2-dimensional data in a manufacturing environment and provides an approach that exploits a Deep Learning-based architecture for handling the data at hand.

Three scenarios for continual learning

Three continual learning scenarios are described based on whether task identity is provided at test time and, if it is not, whether it must be inferred. It is found that regularization-based approaches fail in the most demanding scenario and that replaying representations of previous experiences seems required for solving it.

CNN-RNN: A Unified Framework for Multi-label Image Classification

The proposed CNN-RNN framework learns a joint image-label embedding to characterize the semantic label dependency as well as the image-label relevance, and it can be trained end-to-end from scratch to integrate both kinds of information in a unified framework.

Classifier chains for multi-label classification

This paper presents a novel classifier chains method that can model label correlations while maintaining acceptable computational complexity, and illustrates the competitiveness of the chaining method against related and state-of-the-art methods, both in terms of predictive performance and time complexity.
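The chaining idea above is simple to sketch: train one binary classifier per label, giving each later classifier the earlier labels as extra features (true labels while training, predicted labels at inference), so label correlations can be modelled at roughly the cost of one binary model per label. A minimal pure-Python sketch with a tiny logistic-regression base learner (an illustrative stand-in, not the paper's experimental setup):

```python
import math

class TinyLogReg:
    """Minimal logistic regression trained with per-sample gradient
    descent; a stand-in for any binary base learner."""
    def fit(self, X, y, lr=0.5, epochs=200):
        self.w = [0.0] * len(X[0])
        self.b = 0.0
        for _ in range(epochs):
            for xi, yi in zip(X, y):
                z = sum(w * x for w, x in zip(self.w, xi)) + self.b
                g = 1.0 / (1.0 + math.exp(-z)) - yi  # dLoss/dz
                self.w = [w - lr * g * x for w, x in zip(self.w, xi)]
                self.b -= lr * g
        return self

    def predict(self, X):
        return [1 if sum(w * x for w, x in zip(self.w, xi)) + self.b > 0
                else 0 for xi in X]

class ClassifierChain:
    """One binary learner per label; learner j sees the original features
    plus labels 0..j-1 (true labels during training, predictions at
    inference), so it can exploit label correlations."""
    def fit(self, X, Y):
        self.models = []
        Xa = [list(x) for x in X]
        for j in range(len(Y[0])):
            self.models.append(TinyLogReg().fit(Xa, [row[j] for row in Y]))
            for xi, row in zip(Xa, Y):
                xi.append(row[j])  # augment features with label j
        return self

    def predict(self, X):
        Xa = [list(x) for x in X]
        preds = [[] for _ in X]
        for m in self.models:
            for xi, p, out in zip(Xa, m.predict(Xa), preds):
                xi.append(p)  # feed prediction to later learners
                out.append(p)
        return preds
```

Training remains linear in the number of labels; the quality of later labels depends on how accurately earlier ones in the chain are predicted, which is why label ordering matters in practice.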

Lifelong Learning with Dynamically Expandable Networks

The network obtained by fine-tuning on all tasks performs significantly better than the batch models, which shows that the method can be used to estimate the optimal network structure even when all tasks are available from the start.