Gated recurrent units and temporal convolutional network for multilabel classification
@article{Nanni2021GatedRU,
  title   = {Gated recurrent units and temporal convolutional network for multilabel classification},
  author  = {Loris Nanni and Alessandra Lumini and Alessandro Manfe and Sheryl Brahnam and Giorgio Venturin},
  journal = {ArXiv},
  year    = {2021},
  volume  = {abs/2110.04414}
}
Multilabel learning tackles the problem of associating a sample with multiple class labels. This work proposes a new ensemble method for multilabel classification: the core of the proposed approach combines a set of gated recurrent units and temporal convolutional neural networks trained with variants of the Adam optimization approach. Multiple Adam variants, including a novel one proposed here, are compared and tested; these variants are based on the difference between present and past…
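The Adam variants described above adjust the per-parameter step size using the difference between the present and past gradients. As a rough illustration, here is a minimal NumPy sketch of one published optimizer in this family, a diffGrad-style update, in which a sigmoid "friction" term derived from the gradient difference damps the step; the exact variants compared in the paper may differ.

```python
import numpy as np

def diffgrad_step(param, grad, prev_grad, m, v, t,
                  lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One diffGrad-style update step (illustrative sketch).

    The effective step size is damped by a friction coefficient
    computed from |g_t - g_{t-1}|: a small change between the
    present and past gradients yields a smaller step.
    """
    m = beta1 * m + (1 - beta1) * grad        # first moment, as in Adam
    v = beta2 * v + (1 - beta2) * grad ** 2   # second moment, as in Adam
    m_hat = m / (1 - beta1 ** t)              # bias correction
    v_hat = v / (1 - beta2 ** t)
    # friction coefficient: sigmoid of the absolute gradient difference
    xi = 1.0 / (1.0 + np.exp(-np.abs(grad - prev_grad)))
    param = param - lr * xi * m_hat / (np.sqrt(v_hat) + eps)
    return param, m, v
```

The only change relative to plain Adam is the multiplicative factor `xi`; setting `xi = 1` everywhere recovers the standard Adam update.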
One Citation
Identifying the kind behind SMILES - anatomical therapeutic chemical classification using structure-only representations
- Computer Science · Briefings Bioinform.
- 2022
A lightweight prediction model is proposed that consists of a straightforward tokenization step, which extracts and embeds statistically and physicochemically meaningful tokens, and a deep network backed by a set of pyramid kernels to capture multi-resolution chemical structural characteristics.
References
Showing 1-10 of 75 references
Orderless Recurrent Models for Multi-Label Classification
- Computer Science · 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
- 2020
This paper proposes ways to dynamically align the ground-truth labels with the predicted label sequence, allowing faster training of more optimal LSTM models for multi-label classification that outperform other CNN-RNN models.
Exploiting Adam-like Optimization Algorithms to Improve the Performance of Convolutional Neural Networks
- Computer Science · ArXiv
- 2021
Adam-based variants are proposed in which the step size is adjusted for each parameter based on the difference between the present and past gradients; the proposed ensemble obtains very high performance, with accuracy comparable to or better than the current state of the art.
CNN-RNN: A Unified Framework for Multi-label Image Classification
- Computer Science · 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
- 2016
The proposed CNN-RNN framework learns a joint image-label embedding to characterize the semantic label dependency as well as the image-label relevance, and it can be trained end-to-end from scratch to integrate both kinds of information in a unified framework.
Multilabel Neural Networks with Applications to Functional Genomics and Text Categorization
- Computer Science · IEEE Transactions on Knowledge and Data Engineering
- 2006
Applications to two real-world multilabel learning problems, i.e., functional genomics and text categorization, show that the performance of BP-MLL is superior to that of some well-established multilabel learning algorithms.
Joint Feature Selection and Classification for Multilabel Learning
- Computer Science · IEEE Transactions on Cybernetics
- 2018
This paper proposes JFSC, a novel method that performs joint feature selection and classification for multilabel learning; it learns both shared features and label-specific features by considering pairwise label correlations, and simultaneously builds the multilabel classifier on the learned low-dimensional data representations.
Incorporating Multiple Cluster Centers for Multi-Label Learning
- Computer Science · Inf. Sci.
- 2022
An evolutionary approach to build ensembles of multi-label classifiers
- Computer Science · Inf. Fusion
- 2019
High performing ensemble of convolutional neural networks for insect pest image detection
- Computer Science · Ecol. Informatics
- 2022
Multi-label classification with weighted classifier selection and stacked ensemble
- Computer Science · Inf. Sci.
- 2021
Deep Learning for Multilabel Land Cover Scene Categorization Using Data Augmentation
- Computer Science · IEEE Geoscience and Remote Sensing Letters
- 2019
A data augmentation technique is proposed that can drastically increase the size of a small data set; the resulting model outperforms the current state of the art by more than 6% in terms of the F-score metric.