Corpus ID: 239998424

Cascaded Classifier for Pareto-Optimal Accuracy-Cost Trade-Off Using off-the-Shelf ANNs

@article{Latotzke2021CascadedCF,
  title={Cascaded Classifier for Pareto-Optimal Accuracy-Cost Trade-Off Using off-the-Shelf ANNs},
  author={Cecilia Latotzke and Johnson Loh and Tobias Gemmeke},
  journal={ArXiv},
  year={2021},
  volume={abs/2110.14256}
}
Machine-learning classifiers provide high quality of service in classification tasks. Research now targets cost reduction measured in terms of average processing time or energy per solution. Revisiting the concept of cascaded classifiers, we present a first of its kind analysis of optimal pass-on criteria between the classifier stages. Based on this analysis, we derive a methodology to maximize accuracy and efficiency of cascaded classifiers. On the one hand, our methodology allows cost… 
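The abstract's core idea, a cascade in which cheap classifiers handle easy inputs and pass uncertain ones to costlier stages, can be illustrated with a minimal sketch. This is not the paper's methodology; it assumes a simple max-probability threshold as the pass-on criterion, and the toy `cheap`/`costly` stage functions are hypothetical stand-ins for real models.

```python
import numpy as np

def cascade_predict(x, stages, thresholds):
    """Run an input through classifier stages ordered by increasing cost.

    Each stage returns a class-probability vector. If the top-class
    confidence meets that stage's threshold, its prediction is accepted;
    otherwise the input is passed on. The final stage always decides.
    """
    for stage, tau in zip(stages[:-1], thresholds):
        probs = stage(x)
        if probs.max() >= tau:
            return int(probs.argmax())
    return int(stages[-1](x).argmax())

# Toy stand-ins: a cheap model that is only confident on "easy" inputs,
# and a costly model that is always confident.
cheap = lambda x: np.array([0.55, 0.45]) if x < 0 else np.array([0.95, 0.05])
costly = lambda x: np.array([0.10, 0.90])

print(cascade_predict(1.0, [cheap, costly], thresholds=[0.9]))   # cheap stage decides: 0
print(cascade_predict(-1.0, [cheap, costly], thresholds=[0.9]))  # passed on to costly stage: 1
```

The threshold trades accuracy against cost: a lower `tau` lets the cheap stage decide more often (lower average cost, potentially lower accuracy), which is the trade-off the paper's pass-on analysis optimizes.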


References

Showing 1-10 of 27 references
CoCoST: A Computational Cost Efficient Classifier
CoCoST is presented, a novel and effective approach for building classifiers which achieve state-of-the-art classification accuracy, while keeping the expected computational cost of classification low, even without feature selection.
Classifier cascades and trees for minimizing feature evaluation cost
Two algorithms are developed to efficiently balance the performance with the test-time cost of a classifier in real-world settings, and find their trained classifiers lead to high accuracies at a small fraction of the computational cost.
Cost-Sensitive Tree of Classifiers
This paper addresses the challenge of balancing the test-time cost and the classifier accuracy in a principled fashion by constructing a tree of classifiers, through which test inputs traverse along individual paths.
Scalable-effort classifiers for energy-efficient machine learning
This paper proposes scalable-effort classifiers, a new approach to optimizing the energy efficiency of supervised machine-learning classifiers that dynamically adjust their computational effort depending on the difficulty of the input data, while maintaining the same level of accuracy.
Energy-conscious fuzzy rule-based classifiers for battery operated embedded devices
A fuzzy rule-based classifier is proposed in this paper where the number of rules in the knowledge base that are fired when an object is classified is anti-monotone with respect to the prior
A framework for the automated generation of power-efficient classifiers for embedded sensor nodes
Both simulation and real-time operation of the classifiers demonstrate that the multi-tiered classifier determines states as accurately as a single-trigger (binary) wakeup system while drawing as little as half as much power and with only a negligible increase in latency.
Optimized Hierarchical Cascaded Processing
A roofline model for cascaded systems is proposed, system-level trade-offs are derived, and the approach's validity is proven through a visual classification case study.
Multi-class active learning for image classification
An uncertainty measure is proposed that generalizes margin-based uncertainty to the multi-class case and is easy to compute, so that active learning can handle a large number of classes and large data sizes efficiently.
Cascade^CNN: Pushing the Performance Limits of Quantisation in Convolutional Neural Networks
An automated toolflow is presented that pushes the quantisation limits of any given CNN model, aiming to perform high-throughput inference without the need for retraining the model or accessing the training data.
An Analysis of Deep Neural Network Models for Practical Applications
This work presents a comprehensive analysis of important metrics in practical applications — accuracy, memory footprint, parameter count, operation count, inference time, and power consumption — and argues that it provides a compelling set of information for designing and engineering efficient DNNs.