Domain Invariant Model with Graph Convolutional Network for Mammogram Classification

@article{Wang2022DomainIM,
  title={Domain Invariant Model with Graph Convolutional Network for Mammogram Classification},
  author={Chu-ran Wang and Jing Li and Xinwei Sun and Fandong Zhang and Yizhou Yu and Yizhou Wang},
  journal={ArXiv},
  year={2022},
  volume={abs/2204.09954}
}
Due to its safety-critical nature, image-based diagnosis should be robust on out-of-distribution (OOD) samples. A natural way towards this goal is to capture only clinically disease-related features, which are composed of macroscopic attributes (e.g., margins, shapes) and microscopic image-based features (e.g., textures) of lesion-related areas. However, such disease-related features are often interweaved with data-dependent (but disease-irrelevant) biases during…

References


DIVA: Domain Invariant Variational Autoencoders

The Domain Invariant Variational Autoencoder (DIVA) is proposed, a generative model that tackles the problem of domain generalization by learning three independent latent subspaces: one for the domain, one for the class, and one for any residual variations.
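The core structural idea can be illustrated with a minimal sketch (not the DIVA implementation): a single latent code is partitioned into three independent subspaces, one per factor of variation. The subspace names and dimensions below are assumptions for illustration only.

```python
import numpy as np

# Illustrative subspace sizes (assumptions, not DIVA's actual dimensions):
# z_d encodes the domain, z_y the class, z_x residual variation.
Z_D, Z_Y, Z_X = 8, 8, 16

def split_latent(z):
    # Partition one latent vector into three DIVA-style subspaces.
    z_d = z[:Z_D]
    z_y = z[Z_D:Z_D + Z_Y]
    z_x = z[Z_D + Z_Y:]
    return z_d, z_y, z_x

z = np.arange(Z_D + Z_Y + Z_X, dtype=float)
z_d, z_y, z_x = split_latent(z)
```

In DIVA each subspace has its own prior and is encouraged to be independent of the others, so that, for example, a classifier reading only z_y is insulated from domain shift.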

3D Deep Learning from CT Scans Predicts Tumor Invasiveness of Subcentimeter Pulmonary Adenocarcinomas.

A deep learning system based on 3D convolutional neural networks and multitask learning is developed that automatically predicts tumor invasiveness together with 3D nodule segmentation masks; it could help doctors work efficiently and facilitate the application of precision medicine.

ICADx: interpretable computer aided diagnosis of breast masses

Experimental results showed that the proposed ICADx framework could provide interpretability of masses as well as mass classification, implying that it could be a promising approach to developing CADx systems.

Signed Laplacian Deep Learning with Adversarial Augmentation for Improved Mammography Diagnosis

A signed graph regularized deep neural network with adversarial augmentation, named DiagNet, which uses adversarial learning to generate positive and negative mass-containing mammograms for each mass class; a deep convolutional neural network is trained by jointly optimizing the signed graph regularization and the classification loss.

Why do deep convolutional networks generalize so poorly to small image transformations?

The results indicate that the problem of ensuring invariance to small image transformations in neural networks while preserving high accuracy remains unsolved.

Domain-Adversarial Training of Neural Networks

A new representation learning approach for domain adaptation, in which data at training and test time come from similar but different distributions; it can be applied to almost any feed-forward model by augmenting it with a few standard layers and a new gradient reversal layer.
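The gradient reversal layer at the heart of this approach is simple enough to sketch directly. The following minimal numpy sketch (names like `GradientReversal` and `lam` are illustrative, not from the paper's code) shows the defining behavior: the forward pass is the identity, while the backward pass negates and scales the gradient, so the feature extractor is trained to *confuse* the domain classifier.

```python
import numpy as np

class GradientReversal:
    """Illustrative gradient reversal layer (GRL) with manual backprop."""

    def __init__(self, lam=1.0):
        self.lam = lam  # trade-off weight for the domain-adversarial signal

    def forward(self, x):
        # Identity in the forward direction: features pass through unchanged.
        return x

    def backward(self, grad_output):
        # Reverse (and scale) the gradient flowing back from the
        # domain classifier into the feature extractor.
        return -self.lam * grad_output

grl = GradientReversal(lam=0.5)
x = np.array([1.0, -2.0, 3.0])
out = grl.forward(x)             # unchanged: [1.0, -2.0, 3.0]
grad = grl.backward(np.ones(3))  # reversed:  [-0.5, -0.5, -0.5]
```

Because the reversal happens only in the backward pass, the same layer leaves inference untouched while turning the domain classifier's training signal into an adversarial one for the features below it.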

Learning to Balance Specificity and Invariance for In and Out of Domain Generalization

This work introduces Domain-specific Masks for Generalization, a model for improving both in-domain and out-of-domain generalization performance; it encourages the masks to learn a balance of domain-invariant and domain-specific features, enabling a model that can benefit from the predictive power of specialized features while retaining the universal applicability of domain-invariant features.

Guided Variational Autoencoder for Disentanglement Learning

An algorithm, guided variational autoencoder (Guided-VAE), that learns a controllable generative model by performing latent representation disentanglement, providing a signal to the latent encoding/embedding in the VAE without changing its main backbone architecture.

Deep Residual Learning for Image Recognition

This work presents a residual learning framework to ease the training of networks that are substantially deeper than those used previously, and provides comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth.
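The residual learning idea is that each block fits a residual function F(x) on top of an identity skip connection, so the block computes F(x) + x. A minimal numpy sketch (weights and dimensions are illustrative) makes the key property visible: when the residual branch's weights are zero, the block reduces exactly to the identity, which is part of why very deep residual networks remain easy to optimize.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def residual_block(x, W1, W2):
    # y = F(x; W1, W2) + x : a two-layer residual branch plus
    # an identity skip connection, as in a basic ResNet block
    # (normalization layers omitted for brevity).
    return relu(x @ W1) @ W2 + x

d = 4
x = np.array([1.0, 2.0, 3.0, 4.0])

# With zero weights the residual branch vanishes and the block
# is exactly the identity mapping: y == x.
W_zero = np.zeros((d, d))
y = residual_block(x, W_zero, W_zero)
```

Stacking such blocks means the network only has to learn perturbations around the identity rather than entire unreferenced mappings, which the paper shows empirically to ease optimization at considerable depth.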