Large-scale Robust Deep AUC Maximization: A New Surrogate Loss and Empirical Studies on Medical Image Classification

@article{Yuan2021LargescaleRD,
  title={Large-scale Robust Deep AUC Maximization: A New Surrogate Loss and Empirical Studies on Medical Image Classification},
  author={Zhuoning Yuan and Yan Yan and Milan Sonka and Tianbao Yang},
  journal={2021 IEEE/CVF International Conference on Computer Vision (ICCV)},
  year={2021},
  pages={3020-3029}
}
Deep AUC Maximization (DAM) is a new paradigm for learning a deep neural network by maximizing the AUC score of the model on a dataset. Most previous works of AUC maximization focus on the perspective of optimization by designing efficient stochastic algorithms, and studies on generalization performance of large-scale DAM on difficult tasks are missing. In this work, we aim to make DAM more practical for interesting real-world applications (e.g., medical image classification). First, we propose… 
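
As a rough illustration of what a margin-based min-max AUC surrogate can look like (a sketch in the spirit of squared-loss AUC maximization, not necessarily the exact loss proposed in this paper), the PyTorch snippet below forms a saddle-point objective over a mini-batch of scores; the margin m, the learnable offsets a and b, and the dual variable alpha are assumptions of this sketch.

    import torch

    class MinMaxAUCMarginSketch(torch.nn.Module):
        """Hedged sketch of a margin-based min-max AUC surrogate.

        a and b track the mean scores of the positive / negative class and
        alpha is a dual variable; the paper's actual loss may differ (e.g.,
        class-prior weighting, constraints on alpha, update rules).
        """

        def __init__(self, margin: float = 1.0):
            super().__init__()
            self.margin = margin
            self.a = torch.nn.Parameter(torch.zeros(1))
            self.b = torch.nn.Parameter(torch.zeros(1))
            self.alpha = torch.nn.Parameter(torch.zeros(1))  # to be maximized

        def forward(self, scores: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
            # Assumes the mini-batch contains at least one positive and one negative.
            pos, neg = scores[labels == 1], scores[labels == 0]
            loss = ((pos - self.a) ** 2).mean() + ((neg - self.b) ** 2).mean()
            loss = loss + 2 * self.alpha * (self.margin - pos.mean() + neg.mean())
            return loss - self.alpha ** 2

In a saddle-point objective like this, the network weights together with a and b are updated by gradient descent while alpha is updated by gradient ascent (for example, by negating its gradient before the optimizer step); mini-batches missing one of the two classes must also be guarded against.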

End-to-End Semi-Supervised Ordinal Regression AUC Maximization with Convolutional Kernel Networks

This paper decomposes ordinal regression into a series of binary classification subproblems and proposes an unbiased non-convex objective function to optimize AUC, such that both labeled and unlabeled data can be used to enhance model performance.
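
As a small generic illustration of the ordinal-to-binary decomposition mentioned above (not the paper's semi-supervised objective), an ordinal label y in {0, ..., K-1} can be expanded into K-1 binary targets of the form "y > k":

    import numpy as np

    def ordinal_to_binary(y: np.ndarray, num_classes: int) -> np.ndarray:
        """Decompose ordinal labels into K-1 binary 'y > k' targets.

        y: integer labels in {0, ..., num_classes - 1}, shape (n,)
        returns: array of shape (n, num_classes - 1) with entries in {0, 1}
        """
        thresholds = np.arange(num_classes - 1)  # k = 0, ..., K-2
        return (y[:, None] > thresholds[None, :]).astype(int)

    # Example with K = 4 ordinal levels
    print(ordinal_to_binary(np.array([0, 2, 3]), num_classes=4))
    # [[0 0 0]
    #  [1 1 0]
    #  [1 1 1]]

Each of the K-1 columns can then be treated as a binary scoring problem whose AUC is optimized.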

Large-scale Optimization of Partial AUC in a Range of False Positive Rates

An efficient approximated gradient descent method based on the Moreau envelope smoothing technique, inspired by recent advances in non-smooth DC optimization, is developed, and the effectiveness of the proposed algorithms in training both linear models and deep neural networks for partial AUC maximization and sum-of-ranked-range loss minimization is demonstrated numerically.
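
For context on the smoothing tool mentioned here: the Moreau envelope of a function f with parameter mu > 0 is M_mu f(x) = min_y { f(y) + (x - y)^2 / (2 mu) }, a smooth approximation of f. The snippet below illustrates this standard fact on f(y) = |y|, whose Moreau envelope is the Huber function; it is background only, not the paper's algorithm.

    import numpy as np

    def moreau_envelope_abs(x: float, mu: float) -> float:
        """Closed-form Moreau envelope of f(y) = |y|: the Huber function."""
        if abs(x) <= mu:
            return x ** 2 / (2 * mu)
        return abs(x) - mu / 2

    def moreau_envelope_numeric(x: float, mu: float) -> float:
        """Brute-force min over a grid of |y| + (x - y)^2 / (2 * mu), for checking."""
        ys = np.linspace(x - 5.0, x + 5.0, 20001)
        return float(np.min(np.abs(ys) + (x - ys) ** 2 / (2 * mu)))

    for x in (-2.0, 0.3, 1.5):
        print(x, moreau_envelope_abs(x, mu=0.5), moreau_envelope_numeric(x, mu=0.5))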

Performance or Trust? Why Not Both. Deep AUC Maximization with Self-Supervised Learning for COVID-19 Chest X-ray Classifications

Comparisons show that leveraging the new surrogate loss on self-supervised models can produce label-efficient networks that are both high-performing and trustworthy; a new quantification score is adopted to measure a model's trustworthiness.

Asymptotically Unbiased Instance-wise Regularized Partial AUC Optimization: Theory and Algorithm

This work presents a simpler reformulation of the PAUC optimization problem in an asymptotically unbiased and instance-wise manner, along with new error bounds that are much easier to prove and can handle hypotheses with real-valued outputs.

A Unified Framework against Topology and Class Imbalance

This work develops a unified topology-aware AUC optimization (TOPOAUC) framework, which can simultaneously deal with the topology and class-imbalance problems in graph learning, and proposes a Topology-Aware Importance Learning mechanism (TAIL), which considers the topology of pairwise nodes and the different contributions of topology information to pairwise node neighbors.

DORA: Exploring outlier representations in Deep Neural Networks

DORA (Data-agnOstic Representation Analysis), the first automatic data-agnostic method for detecting potentially infected representations in Deep Neural Networks, is introduced, and it is shown that contaminated representations found by DORA can be used to detect infected samples in any given dataset.

MedShift: identifying shift data for medical dataset curation

Experiments show the proposed shift data detection pipeline, called MedShift, can be beneficial for medical centers to curate high-quality datasets more efficiently.

EXACT: How to Train Your Accuracy

A new optimization framework is proposed by introducing stochasticity to a model's output and optimizing the expected accuracy, i.e., the accuracy of the stochastic model, which is a powerful alternative to widely used classification losses.
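
As a toy illustration of optimizing the accuracy of a stochastic model (not this paper's exact construction), consider a binary classifier whose two scores are perturbed by independent Gaussian noise with standard deviation sigma; the expected accuracy on one example is then a Gaussian CDF of the score margin, which is differentiable in the scores:

    import math

    def expected_accuracy_binary(s_true: float, s_other: float, sigma: float) -> float:
        """P(s_true + e1 > s_other + e0) with e0, e1 ~ N(0, sigma^2) i.i.d.

        e0 - e1 is N(0, 2 * sigma^2), so the probability equals the standard
        normal CDF evaluated at the margin divided by sigma * sqrt(2).
        """
        z = (s_true - s_other) / (sigma * math.sqrt(2.0))
        return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

    print(expected_accuracy_binary(2.0, 1.0, sigma=1.0))  # ~0.76
    print(expected_accuracy_binary(2.0, 1.0, sigma=0.1))  # ~1.0, near-deterministic model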

Enhance Chest X-ray Classification with Multi-image Fusion and Pseudo-3D Reconstruction

A novel dual-view deep learning framework is presented to enhance classification; it uses an intermediate bi-directional fusion architecture to exploit the intrinsic spatial correlation registered between the two views.

FlowGNN: A Dataflow Architecture for Universal Graph Neural Network Inference via Multi-Queue Streaming

A novel and scalable dataflow architecture for GNN acceleration, named FlowGNN, is proposed; it can support a wide range of GNN models with a message-passing mechanism and delivers ultra-fast real-time GNN inference without any graph pre-processing, making it agnostic to dynamically changing graph structures.

References

Showing 1-10 of 41 references

Stochastic AUC Maximization with Deep Neural Networks

The stochastic AUC maximization problem with a deep neural network as the predictive model is considered, and the Polyak-Łojasiewicz (PL) condition is explored, which enables the development of new stochastic algorithms with an even faster convergence rate and a more practical step-size scheme.
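
For reference, the Polyak-Łojasiewicz (PL) condition is the standard inequality below (for some mu > 0 and minimum value f*); it is weaker than strong convexity yet still yields fast convergence rates for gradient-type methods. How the min-max AUC objective fits such a condition is the paper's contribution; the inequality itself is only the textbook definition.

    \frac{1}{2}\,\|\nabla f(w)\|^{2} \;\ge\; \mu \bigl( f(w) - f^{*} \bigr) \quad \text{for all } w.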

Maximizing AUC with Deep Learning for Classification of Imbalanced Mammogram Datasets

A new deep learning approach for the classification of mammograms is presented that requires only a global binary label and directly maximizes the Area Under the ROC Curve (AUC), providing an unbiased loss.

On the Consistency of AUC Pairwise Optimization

A sufficient condition for the asymptotic consistency of learning approaches based on surrogate loss functions is provided, and it is proved that exponential loss and logistic loss are consistent with AUC, but hinge loss is inconsistent.
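
The pairwise surrogates in this result replace the non-differentiable indicator over positive-negative pairs with a convex function of the score difference t = f(x_pos) - f(x_neg). A minimal sketch of the three losses discussed (exponential and logistic, which are AUC-consistent per the paper, and hinge, which is not):

    import math

    def exp_pair_loss(t: float) -> float:       # exponential surrogate
        return math.exp(-t)

    def logistic_pair_loss(t: float) -> float:  # logistic surrogate
        return math.log(1.0 + math.exp(-t))

    def hinge_pair_loss(t: float) -> float:     # hinge surrogate
        return max(0.0, 1.0 - t)

    # t = f(x_pos) - f(x_neg): larger margins give smaller pairwise loss
    for t in (-1.0, 0.0, 2.0):
        print(t, exp_pair_loss(t), logistic_pair_loss(t), hinge_pair_loss(t))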

Communication-Efficient Distributed Stochastic AUC Maximization with Deep Neural Networks

This work proposes and analyzes a communication-efficient distributed optimization algorithm based on a non-convex concave reformulation of the AUC maximization, in which the communication of both the primal variable and the dual variable between each worker and the parameter server only occurs after multiple steps of gradient-based updates in each worker.
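
A deliberately simplified sketch of the communication pattern described above (several local gradient steps per worker, then one synchronization), not of the paper's actual primal-dual algorithm; the function names, the plain averaging step, and the grad_fn interface are assumptions of this sketch.

    import numpy as np

    def periodic_averaging_sgd(workers_data, w0, grad_fn, lr=0.1,
                               rounds=10, local_steps=5):
        """Toy local-update loop: each worker takes several gradient steps,
        then all workers average their parameters (one communication round).

        workers_data: list of per-worker datasets
        grad_fn(w, data): stochastic gradient of the local objective
        """
        params = [w0.copy() for _ in workers_data]
        for _ in range(rounds):
            for i, data in enumerate(workers_data):
                for _ in range(local_steps):      # local, communication-free updates
                    params[i] = params[i] - lr * grad_fn(params[i], data)
            avg = np.mean(params, axis=0)         # the single communication step
            params = [avg.copy() for _ in workers_data]
        return params[0]

In the algorithm summarized above, the dual variable of the min-max AUC reformulation is maintained and communicated on the same periodic schedule as the primal variable; the sketch omits that detail.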

Online AUC Maximization

This work develops an online learning algorithm for maximizing the Area Under the ROC Curve (AUC), a metric widely used for measuring classification performance on imbalanced data distributions, and presents two algorithms for online AUC maximization with theoretical performance guarantees.
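
A hedged sketch of the buffer-based idea behind online AUC maximization: each incoming example is paired against stored examples of the opposite class, and a gradient step is taken on a pairwise hinge loss. The bounded-buffer policy (e.g., reservoir sampling) and the paper's exact update rules are omitted; the linear scorer and the naive unbounded buffers are assumptions of this sketch.

    import numpy as np

    def online_auc_step(w, x, y, pos_buf, neg_buf, lr=0.01, margin=1.0):
        """One online update for a linear scorer f(x) = w @ x.

        x, y: incoming example with y in {+1, -1}; pos_buf / neg_buf are lists
        of stored positive / negative examples (buffer policy not shown here).
        """
        opposite = neg_buf if y == 1 else pos_buf
        grad = np.zeros_like(w)
        for x_opp in opposite:
            diff = x - x_opp if y == 1 else x_opp - x  # positive minus negative
            if margin - w @ diff > 0:                  # pairwise hinge loss is active
                grad -= diff
        if opposite:
            w = w - lr * grad / len(opposite)
        (pos_buf if y == 1 else neg_buf).append(x)     # naive unbounded buffer
        return w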

Deep Neural Networks Improve Radiologists’ Performance in Breast Cancer Screening

A deep convolutional neural network for breast cancer screening exam classification is trained and evaluated on over 200,000 exams, and it is shown that a hybrid model, averaging the probability of malignancy predicted by a radiologist with a prediction of the neural network, is more accurate than either of the two separately.

Fast Stochastic AUC Maximization with O(1/n)-Convergence Rate

To the best of the authors' knowledge, this is the first stochastic algorithm for AUC maximization with a statistical convergence rate as fast as O(1/n) up to a logarithmic factor.

Stochastic Proximal Algorithms for AUC Maximization

This paper develops a novel stochastic proximal algorithm for AUC maximization, referred to as SPAM, which achieves a convergence rate of O(log t / t) for strongly convex functions while both the space and per-iteration costs are of one datum.
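
A generic sketch of the stochastic proximal step pattern (a gradient step on the data term followed by the proximal mapping of the regularizer), shown here with an l1 regularizer whose proximal operator is soft-thresholding. This only illustrates the structure of such methods, not SPAM's specific updates for the AUC saddle-point objective.

    import numpy as np

    def soft_threshold(v: np.ndarray, tau: float) -> np.ndarray:
        """Proximal operator of tau * ||.||_1 (soft-thresholding)."""
        return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

    def stochastic_proximal_step(w, grad_single, eta, lam):
        """w <- prox_{eta * lam * ||.||_1}(w - eta * grad_single).

        grad_single: a stochastic gradient computed from one datum, matching
        the per-iteration cost of one datum mentioned above.
        """
        return soft_threshold(w - eta * grad_single, eta * lam)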

An Analysis of Single-Layer Networks in Unsupervised Feature Learning

The results show that large numbers of hidden nodes and dense feature extraction are critical to achieving high performance—so critical, in fact, that when these parameters are pushed to their limits, they achieve state-of-the-art performance on both CIFAR-10 and NORB using only a single layer of features.