CascadeML: An Automatic Neural Network Architecture Evolution and Training Algorithm for Multi-label Classification

@article{Pakrashi2019CascadeMLAA,
  title={CascadeML: An Automatic Neural Network Architecture Evolution and Training Algorithm for Multi-label Classification},
  author={Arjun Pakrashi and Brian Mac Namee},
  journal={ArXiv},
  year={2019},
  volume={abs/1904.10551}
}
Multi-label classification is an approach that allows a datapoint to be labelled with more than one class at the same time. A common but trivial approach is to train an individual binary classifier per label, but performance can be improved by considering associations among the labels. As with any machine learning algorithm, hyperparameter tuning is important for training a good multi-label classifier model. The task of selecting the best hyperparameter settings for an algorithm is an…
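
As an illustration of the per-label binary-classifier baseline mentioned in the abstract (binary relevance), the sketch below trains one independent classifier per label, assuming scikit-learn and a synthetic multi-label dataset; the estimator choice and data are placeholders for illustration, not part of CascadeML.

from sklearn.datasets import make_multilabel_classification
from sklearn.linear_model import LogisticRegression
from sklearn.multioutput import MultiOutputClassifier

# Toy data: each row of Y is a binary indicator vector over 5 labels.
X, Y = make_multilabel_classification(n_samples=200, n_features=20,
                                      n_classes=5, random_state=0)

# Binary relevance: one independent binary classifier per label,
# ignoring any associations among the labels.
br = MultiOutputClassifier(LogisticRegression(max_iter=1000)).fit(X, Y)
Y_pred = br.predict(X)   # shape (200, 5), one 0/1 column per label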

FrugalMCT: Efficient Online ML API Selection for Multi-Label Classification Tasks

FrugalMCT is a principled framework that adaptively selects which ML APIs to use for different data points in an online fashion while respecting the user’s budget; it can combine the predictions of multiple ML APIs for any single data point and selects the best combination based on an accuracy estimator.

Hyperparameter optimization in deep multi-target prediction

This work can be seen as the first attempt at a single AutoML framework for most problem settings that fall under the umbrella of multi-target prediction, which includes popular ML settings such as multi-label classification, multivariate regression, multi-task learning, dyadic prediction, matrix completion, and zero-shot learning.

A Framework for Exploring and Modelling Neural Architecture Search Methods

This paper aims to close this knowledge gap by summarising search decisions and strategies and proposing a schematic framework that applies quantitative and qualitative metrics for prototyping, comparing, and benchmarking the NAS methods.

Improving Misfire Fault Diagnosis with Cascading Architectures via Acoustic Vehicle Characterization

A multi-task convolutional neural network that predicts and cascades vehicle attributes to enhance misfire fault detection is presented, along with novel deep learning cascading architectures: conditional, multi-level networks that process raw audio to extract highly granular insights for vehicle understanding.

The AI Mechanic: Acoustic Vehicle Characterization Neural Networks

The AI mechanic, an acoustic vehicle characterization deep learning system, is introduced as an integrated approach using sound captured from mobile devices to enhance transparency and understanding of vehicles and their condition for non-expert users.

AutoML for Multi-Label Classification: Overview and Empirical Evaluation

A benchmarking framework that supports a fair and systematic comparison of AutoML methods for MLC is proposed, and a grammar-based best-first search is found to compare favorably to other optimizers.

References

Multi-instance multi-label image classification: A neural approach

Towards a method for automatically selecting and configuring multi-label classification algorithms

The first method (an evolutionary algorithm) for solving the Auto-ML task in MLC, i.e., automatically selecting and configuring the best MLC algorithm for a given input dataset, is proposed.

Ml-rbf: RBF Neural Networks for Multi-Label Learning

A neural-network-based multi-label learning algorithm named Ml-rbf is proposed, which is derived from traditional radial basis function (RBF) methods and achieves highly competitive performance compared to other well-established multi-label learning algorithms.
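
The sketch below is a heavily simplified, hypothetical RBF-style multi-label learner in the spirit of Ml-rbf, not the published algorithm: it clusters the positive instances of each label to obtain prototypes, builds an RBF hidden layer, and fits a linear output layer by least squares. All constants (cluster fraction, kernel width, decision threshold) are illustrative assumptions.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_multilabel_classification

X, Y = make_multilabel_classification(n_samples=200, n_features=20,
                                      n_classes=5, random_state=0)

# Prototype construction (simplified): cluster the positive instances of each
# label and pool the resulting centres as RBF prototypes.
centres = []
for j in range(Y.shape[1]):
    pos = X[Y[:, j] == 1]
    if len(pos) == 0:
        continue                                 # skip labels with no positive examples
    k = max(1, int(0.1 * len(pos)))              # illustrative cluster fraction
    centres.append(KMeans(n_clusters=k, n_init=10, random_state=0).fit(pos).cluster_centers_)
centres = np.vstack(centres)

# RBF hidden layer followed by a linear output layer fitted by least squares.
dists = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2)
sigma = dists.mean()                             # illustrative kernel width
Phi = np.exp(-dists ** 2 / (2 * sigma ** 2))
W, *_ = np.linalg.lstsq(Phi, Y, rcond=None)
Y_pred = (Phi @ W > 0.5).astype(int)             # threshold real-valued outputs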

Large-Scale Multi-label Text Classification - Revisiting Neural Networks

It is shown that BP-MLL's ranking loss minimization can be efficiently and effectively replaced with the commonly used cross entropy error function, and that several advances in neural network training that have been developed in the realm of deep learning can be effectively employed in this setting.
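
To make the cross-entropy idea concrete, here is a minimal sketch of a multi-label feed-forward network trained with per-label sigmoid outputs and binary cross-entropy (BCEWithLogitsLoss in PyTorch) on toy data; the architecture and hyperparameters are arbitrary assumptions, not those of the cited paper.

import torch
import torch.nn as nn

n_features, n_labels, n_hidden = 20, 5, 64      # illustrative sizes
model = nn.Sequential(
    nn.Linear(n_features, n_hidden),
    nn.ReLU(),
    nn.Linear(n_hidden, n_labels),              # one logit per label
)
criterion = nn.BCEWithLogitsLoss()              # cross-entropy over independent sigmoid outputs
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

X = torch.randn(128, n_features)                # toy inputs
Y = (torch.rand(128, n_labels) > 0.7).float()   # toy {0, 1} label matrix

for _ in range(50):
    optimizer.zero_grad()
    loss = criterion(model(X), Y)
    loss.backward()
    optimizer.step()

Y_pred = (torch.sigmoid(model(X)) > 0.5).int()  # threshold per-label probabilities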

Deep Learning for Multi-label Classification

It is shown that a proper development of the feature space can make labels less interdependent and easier to model and predict at inference time, and a deep network is presented that outperforms a number of competitive methods from the literature.

Multilabel Neural Networks with Applications to Functional Genomics and Text Categorization

Applications to two real-world multilabel learning problems, i.e., functional genomics and text categorization, show that the performance of BP-MLL is superior to that of some well-established multilabel learning algorithms.

Classifier chains for multi-label classification

This paper presents a novel classifier chains method that can model label correlations while maintaining acceptable computational complexity, and illustrates the competitiveness of the chaining method against related and state-of-the-art methods, both in terms of predictive performance and time complexity.
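
A minimal sketch of the chaining idea, assuming scikit-learn's ClassifierChain with a synthetic dataset: each link is trained on the original features plus the values of the earlier labels in the chain order, so label correlations are modelled at roughly the cost of one binary classifier per label.

from sklearn.datasets import make_multilabel_classification
from sklearn.linear_model import LogisticRegression
from sklearn.multioutput import ClassifierChain

X, Y = make_multilabel_classification(n_samples=200, n_features=20,
                                      n_classes=5, random_state=0)

# Each classifier in the chain sees the features augmented with the labels
# that come earlier in the (here randomly chosen) chain order.
chain = ClassifierChain(LogisticRegression(max_iter=1000),
                        order='random', random_state=0).fit(X, Y)
Y_pred = chain.predict(X)   # at prediction time, earlier predictions are cascaded forward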

Automated Selection and Configuration of Multi-Label Classification Algorithms with Grammar-Based Genetic Programming

The proposed Grammar-based Genetic Programming (GGP) method, an Automated Machine Learning (Auto-ML) method for Multi-Label Classification (MLC) based on the MEKA tool, achieves the best predictive accuracy among all five evaluated methods.