GANs for Biological Image Synthesis

@article{Osokin2017GANsFB,
  title={GANs for Biological Image Synthesis},
  author={Anton Osokin and Anatole Chessel and Rafael Edgardo Carazo-Salas and Federico Vaggi},
  journal={2017 IEEE International Conference on Computer Vision (ICCV)},
  year={2017},
  pages={2252-2261}
}
  • A. Osokin, A. Chessel, R. E. Carazo-Salas, F. Vaggi
  • Published 15 August 2017
  • Computer Science
  • 2017 IEEE International Conference on Computer Vision (ICCV)
In this paper, we propose a novel application of Generative Adversarial Networks (GANs) to the synthesis of cells imaged by fluorescence microscopy. Compared to natural images, cells tend to have a simpler and more geometric global structure that facilitates image generation. However, the correlation between the spatial patterns of different fluorescent proteins reflects important biological functions, and synthesized images have to capture these relationships to be relevant for biological… 
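As a rough illustration of the setting only (my assumptions: DCGAN-style transposed-convolution layers and a 64×64 output size, not necessarily the paper's exact architecture), the sketch below shows a generator that emits a two-channel image, one channel per fluorescent marker, so the network has to model the joint spatial pattern of both proteins.

# Minimal sketch (not the paper's exact architecture): a DCGAN-style generator
# that outputs a 2-channel fluorescence-like image.
import torch
import torch.nn as nn

class TwoChannelGenerator(nn.Module):
    def __init__(self, latent_dim=100, base=64, channels=2):
        super().__init__()
        self.net = nn.Sequential(
            # latent vector -> 4x4 feature map
            nn.ConvTranspose2d(latent_dim, base * 8, 4, 1, 0, bias=False),
            nn.BatchNorm2d(base * 8), nn.ReLU(True),
            nn.ConvTranspose2d(base * 8, base * 4, 4, 2, 1, bias=False),   # 8x8
            nn.BatchNorm2d(base * 4), nn.ReLU(True),
            nn.ConvTranspose2d(base * 4, base * 2, 4, 2, 1, bias=False),   # 16x16
            nn.BatchNorm2d(base * 2), nn.ReLU(True),
            nn.ConvTranspose2d(base * 2, base, 4, 2, 1, bias=False),       # 32x32
            nn.BatchNorm2d(base), nn.ReLU(True),
            # final upsample to 64x64; tanh keeps intensities in [-1, 1]
            nn.ConvTranspose2d(base, channels, 4, 2, 1, bias=False),
            nn.Tanh(),
        )

    def forward(self, z):
        return self.net(z.view(z.size(0), -1, 1, 1))

# usage: sample a batch of synthetic two-channel cell images
g = TwoChannelGenerator()
fake = g(torch.randn(8, 100))   # -> (8, 2, 64, 64)
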
Generative Adversarial Networks for Augmenting Training Data of Microscopic Cell Images
TLDR
It is shown that it is possible to directly generate synthetic 3D cell images using GANs, but limitations include excessive training times, dependence on high-quality segmentations of 3D images, and the fact that the number of z-slices cannot be freely adjusted without retraining the network.
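A hedged sketch of why the z-resolution is tied to the architecture (generic 3D DCGAN-style layers, not the cited paper's network): the number of output slices is fixed by the transposed-convolution kernels and strides, so changing it means changing, and retraining, the model.

# Hedged sketch: a 3D generator built from ConvTranspose3d layers. The output
# depth (number of z-slices) is baked into the layer configuration.
import torch
import torch.nn as nn

class Generator3D(nn.Module):
    def __init__(self, latent_dim=128, base=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose3d(latent_dim, base * 4, 4, 1, 0, bias=False),  # 4x4x4
            nn.BatchNorm3d(base * 4), nn.ReLU(True),
            nn.ConvTranspose3d(base * 4, base * 2, 4, 2, 1, bias=False),    # 8x8x8
            nn.BatchNorm3d(base * 2), nn.ReLU(True),
            nn.ConvTranspose3d(base * 2, base, 4, 2, 1, bias=False),        # 16x16x16
            nn.BatchNorm3d(base), nn.ReLU(True),
            nn.ConvTranspose3d(base, 1, 4, 2, 1, bias=False),               # 32x32x32
            nn.Tanh(),
        )

    def forward(self, z):
        return self.net(z.view(z.size(0), -1, 1, 1, 1))

volumes = Generator3D()(torch.randn(2, 128))  # -> (2, 1, 32, 32, 32)
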
Learning Generative Models of Tissue Organization with Supervised GANs
TLDR
This paper focuses on building generative models of electron microscope (EM) images in which the positions of cell membranes and mitochondria have been densely annotated, and proposes a two-stage procedure that produces realistic images using Generative Adversarial Networks (or GANs) in a supervised way.
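A heavily simplified sketch of the two-stage idea (toy networks of my own, not the paper's): stage 1 samples a semantic label map (e.g. background / membrane / mitochondrion) and stage 2 renders an EM-like image conditioned on that map. In the actual method each stage would also be trained adversarially against its own discriminator.

import torch
import torch.nn as nn

class LabelGenerator(nn.Module):
    """Stage 1: latent code -> per-pixel class probabilities."""
    def __init__(self, latent_dim=64, n_classes=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(latent_dim, 128, 4, 1, 0), nn.ReLU(True),   # 4x4
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.ReLU(True),           # 8x8
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.ReLU(True),            # 16x16
            nn.ConvTranspose2d(32, n_classes, 4, 2, 1),                    # 32x32
        )

    def forward(self, z):
        return self.net(z.view(z.size(0), -1, 1, 1)).softmax(dim=1)

class LabelToImage(nn.Module):
    """Stage 2: label map -> grayscale EM-like image."""
    def __init__(self, n_classes=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(n_classes, 64, 3, padding=1), nn.ReLU(True),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(True),
            nn.Conv2d(64, 1, 3, padding=1), nn.Tanh(),
        )

    def forward(self, labels):
        return self.net(labels)

z = torch.randn(4, 64)
labels = LabelGenerator()(z)        # (4, 3, 32, 32) soft label map
em_image = LabelToImage()(labels)   # (4, 1, 32, 32) rendered image
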
Mol2Image: Improved Conditional Flow Models for Molecule to Image Synthesis
TLDR
This paper proposes Mol2Image: a flow-based generative model for molecule to cell image synthesis, which shows quantitatively that the method learns a meaningful embedding of the molecular intervention, which is translated into an image representation reflecting the biological effects of the intervention.
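Flow models of this kind are usually built from invertible coupling layers. The sketch below is a generic RealNVP-style conditional affine coupling, where the conditioning vector stands in for a molecular embedding; it is not Mol2Image's architecture, only an illustration of the building block.

import torch
import torch.nn as nn

class ConditionalAffineCoupling(nn.Module):
    def __init__(self, dim, cond_dim, hidden=256):
        super().__init__()
        self.half = dim // 2
        self.net = nn.Sequential(
            nn.Linear(self.half + cond_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * (dim - self.half)),
        )

    def forward(self, x, cond):
        """x -> z; returns the transformed variable and log|det Jacobian|."""
        x1, x2 = x[:, :self.half], x[:, self.half:]
        s, t = self.net(torch.cat([x1, cond], dim=1)).chunk(2, dim=1)
        s = torch.tanh(s)                       # keep scales well-behaved
        z2 = x2 * torch.exp(s) + t
        return torch.cat([x1, z2], dim=1), s.sum(dim=1)

    def inverse(self, z, cond):
        z1, z2 = z[:, :self.half], z[:, self.half:]
        s, t = self.net(torch.cat([z1, cond], dim=1)).chunk(2, dim=1)
        s = torch.tanh(s)
        x2 = (z2 - t) * torch.exp(-s)
        return torch.cat([z1, x2], dim=1)

layer = ConditionalAffineCoupling(dim=16, cond_dim=8)
z, logdet = layer(torch.randn(4, 16), torch.randn(4, 8))
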
When Deep Learning Meets Cell Image Synthesis
  • M. Kozubek
  • Computer Science
  • Cytometry Part A: the journal of the International Society for Analytical Cytology
  • 2019
TLDR
Deep learning methods developed by the computer vision community are, with some delay, being successfully adapted for biomedical image analysis and synthesis; in cell image synthesis, significant improvements can be obtained by splitting the task into learning and generating object shapes based on image segmentation.
Quality Assessment of Synthetic Fluorescence Microscopy Images for Image Segmentation
TLDR
This work proposes three quality metrics that quantify the fidelity of the foreground signal, the background noise, and blurring, respectively, in fluorescence microscopy images of mitochondria synthesized by two representative GANs.
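The exact metric definitions are the paper's own; purely as an illustration of the idea, the sketch below computes three simple proxies of my choosing for foreground signal, background noise, and sharpness on a single grayscale image.

# Rough illustrative proxies (my assumptions, not the paper's definitions).
import numpy as np
from scipy import ndimage

def simple_quality_proxies(img, fg_threshold=0.5):
    """img: 2-D float array scaled to [0, 1]."""
    fg_mask = img > fg_threshold
    foreground_signal = img[fg_mask].mean() if fg_mask.any() else 0.0
    background_noise = img[~fg_mask].std() if (~fg_mask).any() else 0.0
    # variance of the Laplacian is a common sharpness proxy (low = blurry)
    sharpness = ndimage.laplace(img).var()
    return {"foreground_signal": float(foreground_signal),
            "background_noise": float(background_noise),
            "sharpness": float(sharpness)}

scores = simple_quality_proxies(np.random.rand(64, 64))
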
MelanoGANs: High Resolution Skin Lesion Synthesis with GANs
TLDR
This work tries to generate realistic-looking, high-resolution images of skin lesions with GANs, using only a small training dataset of 2000 samples, and quantitatively and qualitatively compares state-of-the-art GAN architectures such as DCGAN and LAPGAN against a modification of the latter for the task of image generation at a resolution of 256 × 256 px.
GAN-based synthetic brain MR image generation
TLDR
This novel realistic medical image generation approach shows that GANs can generate 128 × 128 brain MR images avoiding artifacts, and even an expert physician was unable to accurately distinguish the synthetic images from the real samples in the Visual Turing Test.
Multi-StyleGAN: Towards Image-Based Simulation of Time-Lapse Live-Cell Microscopy
TLDR
Multi-StyleGAN is proposed as a descriptive approach to simulate time-lapse fluorescence microscopy imagery of living cells, based on a past experiment, and this novel generative adversarial network synthesises a multi-domain sequence of consecutive timesteps.
CytoGAN: Generative Modeling of Cell Images
TLDR
When evaluated for their ability to group cell images responding to treatment by chemicals of known classes, it is found that adversarially learned representations are superior to autoencoder-based approaches.
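The general recipe behind such adversarially learned representations can be sketched as follows (a stand-in discriminator, not CytoGAN's network): train a GAN on cell crops, then reuse an intermediate discriminator layer as an embedding and group images, e.g. by chemical treatment, in that feature space.

import torch
import torch.nn as nn

class SmallDiscriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, 1)  # real/fake score used during training

    def forward(self, x):
        return self.classifier(self.embed(x))

    def embed(self, x):
        return self.features(x).flatten(1)  # 64-d representation per image

d = SmallDiscriminator()
cells = torch.randn(16, 1, 64, 64)   # a batch of cell crops
reps = d.embed(cells)                # (16, 64) embeddings to cluster or compare
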
...

References

SHOWING 1-10 OF 61 REFERENCES
Conditional Image Synthesis with Auxiliary Classifier GANs
TLDR
A variant of GANs employing label conditioning that results in 128 × 128 resolution image samples exhibiting global coherence is constructed, and it is demonstrated that high-resolution samples provide class information not present in low-resolution samples.
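A hedged sketch of the AC-GAN-style objective (networks omitted; equal weighting is my simplification): the discriminator emits both a real/fake logit and class logits, and both players are additionally trained to make the class prediction correct.

import torch
import torch.nn.functional as F

def acgan_d_loss(d_real, d_fake, cls_real, cls_fake, labels):
    # adversarial term: real images scored as real, generated as fake
    adv = F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real)) + \
          F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake))
    # auxiliary classifier term on both real and generated images
    aux = F.cross_entropy(cls_real, labels) + F.cross_entropy(cls_fake, labels)
    return adv + aux

def acgan_g_loss(d_fake, cls_fake, labels):
    adv = F.binary_cross_entropy_with_logits(d_fake, torch.ones_like(d_fake))
    aux = F.cross_entropy(cls_fake, labels)
    return adv + aux
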
Image-to-Image Translation with Conditional Adversarial Networks
TLDR
Conditional adversarial networks are investigated as a general-purpose solution to image-to-image translation problems and it is demonstrated that this approach is effective at synthesizing photos from label maps, reconstructing objects from edge maps, and colorizing images, among other tasks.
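A minimal sketch of the generator objective used in this style of conditional translation, assuming a discriminator that scores (input, output) pairs: an adversarial term plus an L1 reconstruction term (the 100:1 weighting follows the paper's default; everything else is simplified).

import torch
import torch.nn.functional as F

def pix2pix_g_loss(d_fake_logits, fake, target, l1_weight=100.0):
    # fool the discriminator on the generated output...
    adv = F.binary_cross_entropy_with_logits(
        d_fake_logits, torch.ones_like(d_fake_logits))
    # ...while staying close to the ground-truth target image
    recon = F.l1_loss(fake, target)
    return adv + l1_weight * recon
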
Precomputed Real-Time Texture Synthesis with Markovian Generative Adversarial Networks
TLDR
Markovian Generative Adversarial Networks (MGANs) are proposed, a method for training generative networks for efficient texture synthesis that surpasses previous neural texture synthesizers by a significant margin and applies to texture synthesis, style transfer, and video stylization.
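The Markovian aspect can be sketched with a generic fully convolutional patch discriminator (not MGAN's exact network): it outputs a grid of real/fake logits, one per local receptive field, so only local texture statistics are judged.

import torch
import torch.nn as nn

# one logit per receptive-field patch instead of a single global score
patch_discriminator = nn.Sequential(
    nn.Conv2d(3, 64, 4, 2, 1), nn.LeakyReLU(0.2),
    nn.Conv2d(64, 128, 4, 2, 1), nn.LeakyReLU(0.2),
    nn.Conv2d(128, 1, 4, 1, 1),
)

scores = patch_discriminator(torch.randn(1, 3, 128, 128))  # -> (1, 1, 31, 31) grid
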
Improved Techniques for Training GANs
TLDR
This work focuses on two applications of GANs: semi-supervised learning, and the generation of images that humans find visually realistic, and presents ImageNet samples with unprecedented resolution and shows that the methods enable the model to learn recognizable features of ImageNet classes.
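One of the proposed techniques, feature matching, is compact enough to sketch (simplified to mean features of a single intermediate discriminator layer): the generator is trained to match first-order feature statistics of real and generated batches instead of directly maximizing its adversarial score.

import torch
import torch.nn.functional as F

def feature_matching_loss(real_feats, fake_feats):
    """real_feats, fake_feats: (batch, dim) activations from an intermediate
    discriminator layer for real and generated images."""
    return F.mse_loss(fake_feats.mean(dim=0), real_feats.mean(dim=0))
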
Generative Adversarial Text to Image Synthesis
TLDR
A novel deep architecture and GAN formulation is developed to effectively bridge advances in text and image modeling, translating visual concepts from characters to pixels.
Deep Generative Image Models using a Laplacian Pyramid of Adversarial Networks
TLDR
A generative parametric model capable of producing high quality samples of natural images using a cascade of convolutional networks within a Laplacian pyramid framework to generate images in a coarse-to-fine fashion.
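The coarse-to-fine decomposition rests on the Laplacian pyramid; below is a self-contained sketch (plain average-pool / bilinear resampling, not the paper's exact filters) of building and inverting such a pyramid, so a generator only has to model the residual detail added at each scale.

import torch
import torch.nn.functional as F

def laplacian_pyramid(img, levels=3):
    """img: (N, C, H, W); returns [residual_0, ..., residual_{L-2}, coarsest]."""
    pyramid = []
    current = img
    for _ in range(levels - 1):
        down = F.avg_pool2d(current, 2)
        up = F.interpolate(down, scale_factor=2, mode="bilinear",
                           align_corners=False)
        pyramid.append(current - up)   # high-frequency residual at this scale
        current = down
    pyramid.append(current)            # low-resolution base image
    return pyramid

def reconstruct(pyramid):
    img = pyramid[-1]
    for residual in reversed(pyramid[:-1]):
        img = F.interpolate(img, scale_factor=2, mode="bilinear",
                            align_corners=False) + residual
    return img

x = torch.randn(1, 1, 64, 64)
assert torch.allclose(reconstruct(laplacian_pyramid(x)), x, atol=1e-5)
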
Generative Image Modeling Using Style and Structure Adversarial Networks
TLDR
This paper factorizes the image generation process and proposes the Style and Structure Generative Adversarial Network, a model that is interpretable, generates more realistic images, and can be used to learn unsupervised RGBD representations.
Generative Visual Manipulation on the Natural Image Manifold
TLDR
This paper proposes to learn the natural image manifold directly from data using a generative adversarial neural network, defines a class of image editing operations, and constrains their output to lie on that learned manifold at all times.
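The core operation behind such editing tools can be sketched as latent-space projection: optimize a latent code so that the generator's output matches a given (possibly user-edited) image, keeping the result on the learned manifold. The generator argument below is any pretrained generator (e.g. the two-channel sketch above); the optimizer settings are arbitrary choices of mine.

import torch
import torch.nn.functional as F

def project_to_manifold(generator, target, latent_dim=100, steps=200, lr=0.05):
    """Find z such that generator(z) approximates the target image."""
    z = torch.zeros(1, latent_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = F.mse_loss(generator(z), target)
        loss.backward()
        opt.step()
    return z.detach(), generator(z).detach()
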
Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network
TLDR
SRGAN, a generative adversarial network (GAN) for image super-resolution (SR), is presented; to the authors' knowledge, it is the first framework capable of inferring photo-realistic natural images for 4× upscaling factors, together with a perceptual loss function which consists of an adversarial loss and a content loss.
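A hedged sketch of an SRGAN-style generator objective: a content loss measured in the feature space of a frozen VGG network plus a weakly weighted adversarial term (the 10^-3 weight follows the paper; the particular VGG layer chosen here is an assumption).

import torch
import torch.nn.functional as F
from torchvision.models import vgg19

# frozen VGG-19 feature extractor, truncated after an intermediate layer
vgg_features = vgg19(weights="DEFAULT").features[:36].eval()
for p in vgg_features.parameters():
    p.requires_grad_(False)

def srgan_g_loss(sr, hr, d_sr_logits, adv_weight=1e-3):
    """sr, hr: (N, 3, H, W) super-resolved and ground-truth high-res images."""
    content = F.mse_loss(vgg_features(sr), vgg_features(hr))
    adv = F.binary_cross_entropy_with_logits(
        d_sr_logits, torch.ones_like(d_sr_logits))
    return content + adv_weight * adv
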
Conditional Image Generation with PixelCNN Decoders
TLDR
The gated convolutional layers in the proposed model improve the log-likelihood of PixelCNN to match the state-of-the-art performance of PixelRNN on ImageNet, with greatly reduced computational cost.
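The gating itself is compact enough to sketch (a generic gated activation with optional conditioning, not the full masked-convolution stack): the incoming feature map is split in two and combined as tanh(a) · sigmoid(b), optionally shifted by a projection of a conditioning vector.

import torch
import torch.nn as nn

class GatedActivation(nn.Module):
    def __init__(self, channels, cond_dim=None):
        super().__init__()
        # optional conditioning (e.g. a class embedding) added before gating
        self.cond_proj = nn.Linear(cond_dim, 2 * channels) if cond_dim else None

    def forward(self, x, cond=None):
        """x: (N, 2*channels, H, W) pre-activation feature map."""
        if self.cond_proj is not None and cond is not None:
            x = x + self.cond_proj(cond)[:, :, None, None]
        a, b = x.chunk(2, dim=1)
        return torch.tanh(a) * torch.sigmoid(b)

gate = GatedActivation(channels=32, cond_dim=10)
out = gate(torch.randn(4, 64, 16, 16), torch.randn(4, 10))  # -> (4, 32, 16, 16)
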
...