Dr.VAE: Drug Response Variational Autoencoder
@article{Rampek2017DrVAEDR,
  title   = {Dr.VAE: Drug Response Variational Autoencoder},
  author  = {Ladislav Ramp{\'a}{\v{s}}ek and Daniel Hidru and Petr Smirnov and Benjamin Haibe-Kains and Anna Goldenberg},
  journal = {arXiv: Machine Learning},
  year    = {2017}
}
We present two deep generative models based on Variational Autoencoders to improve the accuracy of drug response prediction. Our models, the Perturbation Variational Autoencoder and its semi-supervised extension, the Drug Response Variational Autoencoder (Dr.VAE), learn latent representations of the underlying gene states before and after drug application that depend on: (i) the drug-induced biological change of each gene and (ii) the overall treatment response outcome. Our VAE-based models outperform the…
35 Citations
Dropout Feature Ranking for Deep Learning Models
- Computer Science · ArXiv
- 2017
This work proposes a new general feature-ranking method for deep learning that performs on par with or favorably against eight strawman, classical, and deep-learning feature-ranking methods in two simulations and five very different datasets, on tasks ranging from classification to regression, in both static and time-series scenarios.
MichiGAN: sampling from disentangled representations of single-cell data using generative adversarial networks
- Computer Science, Biology · Genome Biology
- 2021
MichiGAN is developed, a novel neural network that combines the strengths of VAEs and GANs to sample from disentangled representations without sacrificing data generation quality and allows us to manipulate semantically distinct aspects of cellular identity.
DeepProfile: Deep learning of cancer molecular profiles for precision medicine
- Computer Science · bioRxiv
- 2018
We present the DeepProfile framework, which learns a variational autoencoder (VAE) network from thousands of publicly available gene expression samples and uses this network to encode a…
Evaluating deep variational autoencoders trained on pan-cancer gene expression
- Computer Science, Biology
- 2017
This work trains and compares the three VAE architectures to other dimensionality reduction techniques, and compares performance in a supervised learning task predicting gene inactivation pan-cancer and in a latent space analysis of high grade serous ovarian cancer (HGSC) subtypes.
Sampling from Disentangled Representations of Single-Cell Data Using Generative Adversarial Networks
- Computer Science, Biology · bioRxiv
- 2021
MichiGAN, a novel neural network that combines the strengths of VAEs and GANs to sample from disentangled representations without sacrificing data generation quality, is developed and allows us to manipulate semantically distinct aspects of cellular identity and predict single-cell gene expression response to drug treatment.
Variational Autoencoder for Anti-Cancer Drug Response Prediction
- Biology, Computer Science · ArXiv
- 2020
This work seeks to predict the response of different anti-cancer drugs with variational autoencoders (VAE) and multi-layer perceptron (MLP) and shows that the model can generate unseen effective drug compounds for specific cancer cell lines.
Extracting a Biologically Relevant Latent Space from Cancer Transcriptomes with Variational Autoencoders
- Biology, Computer Science · bioRxiv
- 2017
The extent to which a variational autoencoder (VAE) can be trained to model cancer gene expression, and whether or not such a VAE would capture biologically relevant features, is determined.
Unsupervised deep learning with variational autoencoders applied to breast tumor genome-wide DNA methylation data with biologic feature extraction
- Computer Science
- 2018
An unsupervised deep learning framework with variational autoencoders (VAEs) is employed to learn latent representations of the DNA methylation landscape from three independent breast tumor datasets, demonstrating the feasibility of VAEs to track representative differential methylation patterns among clinical subtypes of tumors.
Direct Evolutionary Optimization of Variational Autoencoders With Binary Latents
- Computer Science · ECML/PKDD
- 2022
The studied approach shows that training of VAEs is indeed possible without sampling-based approximation and reparameterization, and makes VAEs competitive where they have previously been outperformed by non-generative approaches.
Use of Deep Learning in Personalized Medicine: Current Trends and the Future Perspective
- Medicine, Computer Science · Proceedings of the 2nd International Conference on ICT for Digital, Smart, and Sustainable Development, ICIDSSD 2020, 27-28 February 2020, Jamia Hamdard, New Delhi, India
- 2021
This paper canvasses the research studies conducted in the previous 2-3 years that employ ML and DL techniques to predict disorders, as well as responses to drugs, from scans, images, and other similar data.
References
Showing 1-10 of 16 references
The Variational Fair Autoencoder
- Computer Science
- 2016
This model is based on a variational autoencoding architecture with priors that encourage independence between sensitive and latent factors of variation with an additional penalty term based on the “Maximum Mean Discrepancy” (MMD) measure.
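For illustration, the MMD penalty mentioned above can be sketched with its standard biased estimator. This is a minimal NumPy sketch, not the paper's code; the RBF kernel and the `gamma` bandwidth are illustrative assumptions:

```python
import numpy as np

def rbf_kernel(a, b, gamma=1.0):
    # Pairwise RBF kernel: k(a_i, b_j) = exp(-gamma * ||a_i - b_j||^2)
    sq_dists = (np.sum(a ** 2, axis=1)[:, None]
                + np.sum(b ** 2, axis=1)[None, :]
                - 2.0 * a @ b.T)
    return np.exp(-gamma * sq_dists)

def mmd2(x, y, gamma=1.0):
    # Biased estimator of squared Maximum Mean Discrepancy:
    # MMD^2 = E[k(x,x')] + E[k(y,y')] - 2 E[k(x,y)]
    return (rbf_kernel(x, x, gamma).mean()
            + rbf_kernel(y, y, gamma).mean()
            - 2.0 * rbf_kernel(x, y, gamma).mean())
```

The estimator is near zero when the two samples come from the same distribution and grows as the distributions separate, which is what makes it usable as an independence-encouraging penalty.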
MADE: Masked Autoencoder for Distribution Estimation
- Computer Science · ICML
- 2015
This work introduces a simple modification for autoencoder neural networks that yields powerful generative models and proves that this approach is competitive with state-of-the-art tractable distribution estimators.
Auto-Encoding Variational Bayes
- Computer Science · ICLR
- 2014
A stochastic variational inference and learning algorithm that scales to large datasets and, under some mild differentiability conditions, even works in the intractable case is introduced.
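The core of that algorithm for the common Gaussian case can be sketched as follows. This is a hedged NumPy illustration of the reparameterization trick and the analytic KL term, not the authors' implementation:

```python
import numpy as np

def reparameterize(mu, log_var, rng):
    # Draw z ~ N(mu, diag(sigma^2)) as a deterministic function of
    # (mu, log_var) and auxiliary noise eps ~ N(0, I), so that gradients
    # of a Monte Carlo ELBO estimate can flow through mu and log_var.
    eps = rng.standard_normal(np.shape(mu))
    return mu + np.exp(0.5 * log_var) * eps

def gaussian_kl(mu, log_var):
    # Closed-form KL(N(mu, diag(sigma^2)) || N(0, I)), summed over dims;
    # the analytic regularization term in the Gaussian-VAE ELBO.
    return -0.5 * np.sum(1.0 + log_var - mu ** 2 - np.exp(log_var))
```

With `mu = 0` and `log_var = 0` the KL term vanishes, matching a posterior equal to the standard-normal prior.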
Semi-supervised Learning with Deep Generative Models
- Computer Science · NIPS
- 2014
It is shown that deep generative models and approximate Bayesian inference exploiting recent advances in variational methods can be used to provide significant improvements, making generative approaches highly competitive for semi-supervised learning.
Improved Variational Inference with Inverse Autoregressive Flow
- Computer Science · NIPS 2016
- 2017
A new type of normalizing flow, inverse autoregressive flow (IAF), is proposed that, in contrast to earlier published flows, scales well to high-dimensional latent spaces and significantly improves upon diagonal Gaussian approximate posteriors.
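As a sketch of the transformation this entry refers to: each IAF step maps the latent through autoregressive networks $\mu_t, \sigma_t$, so the Jacobian is triangular and its log-determinant reduces to a sum of log scales (up to the initial Gaussian term):

```latex
z_t = \mu_t(z_{t-1}) + \sigma_t(z_{t-1}) \odot z_{t-1},
\qquad
\ln q(z_T) = \ln q(z_0) - \sum_{t=1}^{T} \sum_{i} \ln \sigma_{t,i}
```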
Adam: A Method for Stochastic Optimization
- Computer Science · ICLR
- 2015
This work introduces Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments, and provides a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework.
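The Adam update described above is compact enough to sketch in full. A minimal NumPy version of one step, written from the published update rule rather than any reference implementation:

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    # Exponential moving averages of the gradient and its elementwise square
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad ** 2
    # Bias correction compensates for the zero initialization of m and v
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)
    # Per-parameter step scaled by the adaptive second-moment estimate
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v
```

Applied repeatedly to a toy quadratic, the iterates settle near the minimum, with the effective step size bounded by `lr` regardless of gradient scale.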
Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs)
- Computer Science · ICLR
- 2016
The "exponential linear unit" (ELU) speeds up learning in deep neural networks and leads to higher classification accuracy and significantly better generalization performance than ReLUs and LReLUs on networks with more than 5 layers.
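The ELU itself is a one-line function; a small NumPy sketch of the published definition:

```python
import numpy as np

def elu(x, alpha=1.0):
    # ELU(x) = x for x > 0, and alpha * (exp(x) - 1) for x <= 0.
    # np.minimum keeps expm1's argument non-positive, because np.where
    # evaluates both branches and a large x would otherwise overflow.
    return np.where(x > 0, x, alpha * np.expm1(np.minimum(x, 0.0)))
```

Unlike ReLU, the negative branch saturates at `-alpha` instead of zero, which pushes mean activations toward zero.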
Stochastic Backpropagation and Approximate Inference in Deep Generative Models
- Computer Science · ICML
- 2014
We marry ideas from deep neural networks and approximate Bayesian inference to derive a generalised class of deep, directed generative models, endowed with a new algorithm for scalable inference and…
Systematic Assessment of Analytical Methods for Drug Sensitivity Prediction from Cancer Cell Line Data
- Biology · Pacific Symposium on Biocomputing
- 2014
This work evaluated over 110,000 different models, based on a multifactorial experimental design testing systematic combinations of modeling factors within several categories of modeling choices, suggesting that model input data and choice of compound are the primary factors explaining model performance.
Variational Inference with Normalizing Flows
- Computer Science, Mathematics · ICML
- 2015
It is demonstrated that the theoretical advantages of having posteriors that better match the true posterior, combined with the scalability of amortized variational approaches, provides a clear improvement in performance and applicability of variational inference.
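The identity underlying normalizing flows is the change-of-variables formula: after passing a sample $z_0 \sim q_0$ through a chain of invertible maps $f_1, \dots, f_K$, the log-density of $z_K = f_K \circ \dots \circ f_1(z_0)$ is

```latex
\ln q_K(z_K) = \ln q_0(z_0) - \sum_{k=1}^{K} \ln \left| \det \frac{\partial f_k}{\partial z_{k-1}} \right|
```

so flexible posteriors remain tractable whenever each Jacobian determinant is cheap to compute.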