Corpus ID: 104292006

Context Encoding Chest X-rays

@article{Belli2018ContextEC,
  title={Context Encoding Chest X-rays},
  author={Davide Belli and Shi Hu and Ecem Sogancioglu and Bram van Ginneken},
  journal={arXiv: Computer Vision and Pattern Recognition},
  year={2018}
}
Chest X-rays are among the most commonly used imaging modalities for medical diagnosis. Many deep learning models have been proposed to improve and automate abnormality detection on this type of data. In this paper, we propose a different approach based on image inpainting under adversarial training, as first introduced by Goodfellow et al. We configure the context encoder model for this task and train it on 1.1M 128x128 images from healthy X-rays. The goal of our model is to reconstruct the… 
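
As a rough illustration of the setup described in the abstract, the sketch below implements a context-encoder-style inpainting model with adversarial training in PyTorch. The layer sizes, the 64x64 central mask, the loss weighting, and the class names ContextEncoder and PatchDiscriminator are illustrative assumptions, not the authors' exact configuration; only the 128x128 input size and the reconstruction-plus-adversarial objective follow the description above.

```python
# Minimal sketch of context-encoder-style inpainting with adversarial training.
# Layer sizes, the 64x64 central mask, and loss weights are illustrative
# assumptions, not the authors' exact configuration.
import torch
import torch.nn as nn

class ContextEncoder(nn.Module):
    """Encoder-decoder that predicts the hidden central patch from its surroundings."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(                           # 1x128x128 -> 512x4x4
            nn.Conv2d(1, 64, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(128, 256, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(256, 512, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(512, 512, 4, 2, 1), nn.LeakyReLU(0.2),
        )
        self.decoder = nn.Sequential(                           # 512x4x4 -> 1x64x64 patch
            nn.ConvTranspose2d(512, 256, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose2d(64, 1, 4, 2, 1), nn.Tanh(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

class PatchDiscriminator(nn.Module):
    """Scores whether a 64x64 central patch looks like a real healthy X-ray patch."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 64, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(128, 256, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(256, 1, 8),                               # 8x8 -> single real/fake logit
        )

    def forward(self, patch):
        return self.net(patch).view(-1)

# One illustrative training step on a random stand-in batch of "healthy" images.
G, D = ContextEncoder(), PatchDiscriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real = torch.rand(8, 1, 128, 128) * 2 - 1            # images scaled to [-1, 1]
true_patch = real[:, :, 32:96, 32:96].clone()        # central 64x64 region to hide
masked = real.clone()
masked[:, :, 32:96, 32:96] = 0                       # region the model must inpaint

# Discriminator: real patches vs. reconstructed patches.
loss_d = bce(D(true_patch), torch.ones(8)) + bce(D(G(masked).detach()), torch.zeros(8))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# Generator: L2 reconstruction of the hidden patch plus a small adversarial term.
fake_patch = G(masked)
loss_g = 0.999 * ((fake_patch - true_patch) ** 2).mean() \
       + 0.001 * bce(D(fake_patch), torch.ones(8))
opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```

In this kind of setup the reconstruction term dominates to keep the inpainted patch faithful to the surrounding anatomy, while the adversarial term discourages the blurry outputs a pure L2 loss tends to produce.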
1 Citation

Deep learning‐based X‐ray inpainting for improving spinal 2D‐3D registration

TLDR
This work investigates the use of deep-learning-based inpainting to remove implant projections from X-rays and improve 2D-3D registration performance for intraoperative images.

References

SHOWING 1-10 OF 20 REFERENCES

Chest X-ray Inpainting with Deep Generative Models

TLDR
This paper investigates the performance of three recently published deep-learning-based inpainting models: context encoders, semantic image inpainting, and the contextual attention model, applied to chest X-rays, as the chest exam is the most commonly performed radiological procedure.

Context Encoders: Feature Learning by Inpainting

TLDR
It is found that a context encoder learns a representation that captures not just appearance but also the semantics of visual structures, and can be used for semantic inpainting tasks, either stand-alone or as initialization for non-parametric methods.

Semantic Image Inpainting with Deep Generative Models

TLDR
A novel method for semantic image inpainting, which generates the missing content by conditioning on the available data, and successfully predicts information in large missing regions and achieves pixel-level photorealism, significantly outperforming the state-of-the-art methods.

Generative Image Inpainting with Contextual Attention

TLDR
This work proposes a new deep generative model-based approach which can not only synthesize novel image structures but also explicitly utilize surrounding image features as references during network training to make better predictions.

Deep multi-scale video prediction beyond mean square error

TLDR
This work trains a convolutional network to generate future frames given an input sequence and proposes three different and complementary feature learning strategies: a multi-scale architecture, an adversarial training method, and an image gradient difference loss function.
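
For context, the image gradient difference loss this reference combines with adversarial training can be sketched in a few lines of PyTorch; the exponent alpha=1 and the tensor shapes below are illustrative assumptions.

```python
# Sketch of a gradient difference loss (GDL): penalize mismatches between the
# spatial gradients of the prediction and the target, which sharpens edges
# that a plain L2 loss tends to blur. alpha=1 and the shapes are illustrative.
import torch

def gradient_difference_loss(pred, target, alpha=1.0):
    # Horizontal and vertical finite differences for both images (N, C, H, W).
    dx_p, dx_t = pred[..., :, 1:] - pred[..., :, :-1], target[..., :, 1:] - target[..., :, :-1]
    dy_p, dy_t = pred[..., 1:, :] - pred[..., :-1, :], target[..., 1:, :] - target[..., :-1, :]
    # Compare gradient magnitudes in both directions.
    return ((dx_p.abs() - dx_t.abs()).abs() ** alpha).mean() + \
           ((dy_p.abs() - dy_t.abs()).abs() ** alpha).mean()

pred, target = torch.rand(4, 1, 64, 64), torch.rand(4, 1, 64, 64)
loss = gradient_difference_loss(pred, target)
```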

Image Inpainting for Irregular Holes Using Partial Convolutions

TLDR
This work proposes the use of partial convolutions, where the convolution is masked and renormalized to be conditioned on only valid pixels, and outperforms other methods for irregular masks.
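
To illustrate the "masked and renormalized" convolution this reference describes, here is a single-layer sketch in PyTorch. The function name partial_conv2d, the single-channel mask, and the square hole are simplifying assumptions; the published layer also carries a bias term and propagates the updated mask through the whole network.

```python
# Single-layer sketch of a partial convolution: convolve only valid pixels and
# renormalize by how many valid pixels fell under each sliding window.
import torch
import torch.nn.functional as F

def partial_conv2d(x, mask, weight, eps=1e-8):
    """x: image (N, C, H, W); mask: 1 for valid pixels, 0 for holes (N, 1, H, W)."""
    # Convolve with the holes zeroed out so they contribute nothing.
    out = F.conv2d(x * mask, weight, padding=1)
    # Count valid pixels under each window ...
    ones = torch.ones_like(weight[:1, :1])               # 1x1xkhxkw kernel of ones
    valid = F.conv2d(mask, ones, padding=1)
    # ... and renormalize so the response is conditioned only on valid pixels.
    scale = weight.shape[2] * weight.shape[3] / (valid + eps)
    out = out * scale * (valid > 0).float()
    # The hole shrinks: a location becomes valid if any input pixel under it was valid.
    new_mask = (valid > 0).float()
    return out, new_mask

x = torch.randn(1, 1, 64, 64)
mask = torch.ones(1, 1, 64, 64)
mask[:, :, 16:48, 16:48] = 0          # hole stand-in (square here; the paper targets irregular holes)
w = torch.randn(8, 1, 3, 3)
y, m = partial_conv2d(x, mask, w)
```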

Image-to-Image Translation with Conditional Adversarial Networks

TLDR
Conditional adversarial networks are investigated as a general-purpose solution to image-to-image translation problems and it is demonstrated that this approach is effective at synthesizing photos from label maps, reconstructing objects from edge maps, and colorizing images, among other tasks.

Deep Residual Learning for Image Recognition

TLDR
This work presents a residual learning framework to ease the training of networks that are substantially deeper than those used previously, and provides comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth.
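
As a quick reminder of the building block involved, a residual block can be sketched as follows; the channel count and batch-norm placement are the standard choices rather than anything specific to this paper's experiments.

```python
# Minimal residual block sketch: the block learns a residual F(x) and adds the
# identity shortcut x, which is what makes very deep stacks easier to optimize.
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, channels=64):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)
        self.bn2 = nn.BatchNorm2d(channels)

    def forward(self, x):
        out = torch.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return torch.relu(out + x)     # identity shortcut added before the final activation

x = torch.randn(2, 64, 32, 32)
y = ResidualBlock()(x)                 # output shape matches the input: (2, 64, 32, 32)
```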

Globally and locally consistent image completion

We present a novel approach for image completion that results in images that are both locally and globally consistent. With a fully-convolutional neural network, we can complete images of arbitrary…

Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks

TLDR
This work introduces a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals, and further merges RPN and Fast R-CNN into a single network by sharing their convolutional features.