LogGENE: A smooth alternative to check loss for Deep Healthcare Inference Tasks

Aryaman Jeendgar, Aditya Pola, Soma S. Dhavala, Snehanshu Saha
High-throughput genomics is ushering in a new era of personalized health care and targeted drug design and delivery. Mining these large datasets and obtaining calibrated predictions is of immediate relevance and utility. In our work, we develop methods for gene expression inference based on deep neural networks. However, unlike typical deep learning methods, our inferential technique, while achieving state-of-the-art accuracy, can also provide explanations, and report…
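The "smooth alternative to check loss" in the title presumably refers to a log-cosh style smoothing of the pinball (check) loss used in quantile regression. A minimal sketch of one such smoothing, an assumption for illustration and not necessarily the paper's exact formulation, uses the identity rho_tau(u) = |u|/2 + (tau - 1/2)u and replaces |u| with the smooth surrogate log(cosh(u)):

```python
import numpy as np

def check_loss(u, tau):
    """Standard (non-smooth) check/pinball loss: u * (tau - 1{u < 0})."""
    return u * (tau - (u < 0).astype(float))

def smooth_check_loss(u, tau):
    """Log-cosh smoothing of the check loss (illustrative, not the paper's
    exact loss). Uses log(cosh(u)) = logaddexp(u, -u) - log(2), which tends
    to |u| for large |u|, so the smooth loss matches the check loss in the
    tails while being differentiable everywhere (including u = 0)."""
    return 0.5 * np.logaddexp(u, -u) + (tau - 0.5) * u
```

For large residuals the two losses agree (e.g. at u = 10, tau = 0.9 both give about 9.0); near zero the smooth version rounds off the kink, which is what makes gradient-based training better behaved.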

Deep learning suggests that gene expression is encoded in all parts of a co-evolving interacting gene regulatory structure

Deep learning is applied to over 20,000 mRNA datasets to examine the genetic regulatory code controlling mRNA abundance in seven model organisms, finding that motif interactions could explain the whole dynamic range of mRNA levels.

ADAHESSIAN: An Adaptive Second Order Optimizer for Machine Learning

ADAHESSIAN is a new stochastic optimization algorithm that directly incorporates approximate curvature information from the loss function, and it includes several novel performance-improving features, including a fast Hutchinson based method to approximate the curvature matrix with low computational overhead.
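The Hutchinson method mentioned in the ADAHESSIAN summary estimates the diagonal of the Hessian from Hessian-vector products alone. A sketch, with the Hessian given explicitly as a matrix purely for illustration (the optimizer itself obtains H @ z through backpropagation, never forming H):

```python
import numpy as np

def hutchinson_diag(H, num_samples=1000, seed=0):
    """Estimate diag(H) via Hutchinson's method: diag(H) ~= E[z * (H @ z)]
    with Rademacher vectors z (i.i.d. +/-1 entries). Only matrix-vector
    products with H are required, which keeps the overhead low."""
    rng = np.random.default_rng(seed)
    n = H.shape[0]
    est = np.zeros(n)
    for _ in range(num_samples):
        z = rng.choice([-1.0, 1.0], size=n)
        est += z * (H @ z)
    return est / num_samples
```

The estimator is unbiased; its per-entry variance is driven by the off-diagonal mass of H, so a modest number of samples suffices when the Hessian is diagonally dominant.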

Genetic Neural Networks: an artificial neural network architecture for capturing gene expression relationships

The Genetic Neural Network is presented, an artificial neural network for predicting genome-wide gene expression given gene knockouts and master regulator perturbations that was 40% more accurate on average than competing architectures (MLP, RNN, BiRNN) when compared on hundreds of curated and inferred transcription modules.

Gene expression inference with deep learning

A deep learning method (abbreviated as D-GEX) to infer the expression of target genes from the expression of landmark genes, which shows that deep learning achieves lower error than linear regression in 99.97% of the target genes.

LipGene: Lipschitz continuity guided adaptive learning rates for fast convergence on Microarray Expression Data Sets.

This work proposes the application of a novel adaptive learning rate paradigm, guided by the Lipschitz continuity of the loss function (LipGene), to the task of gene expression inference using shallow neural networks; it can reduce the compute infrastructure required for training by using smaller networks with minimal compromise on prediction error.

Learning Multiple Quantiles With Neural Networks

A neural network model for the estimation of multiple conditional quantiles that satisfies the noncrossing property is presented, and a new algorithm for fitting the proposed model using a first-order optimization method is developed.
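One standard way to enforce the noncrossing property mentioned above, assumed here for illustration and not necessarily that paper's own construction, is to predict the lowest quantile freely and obtain each higher quantile by adding a strictly positive softplus gap:

```python
import numpy as np

def noncrossing_quantiles(base, gaps):
    """Map an unconstrained base prediction (lowest quantile) and raw gap
    scores to monotonically increasing quantile estimates. softplus(gap) > 0
    guarantees the ordering q_1 < q_2 < ... < q_k by construction."""
    deltas = np.log1p(np.exp(gaps))  # softplus, always positive
    return base + np.concatenate([[0.0], np.cumsum(deltas)])
```

Because the ordering holds for any network output, the model can be trained with ordinary first-order methods without an explicit noncrossing constraint.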

Estimation and Applications of Quantiles in Deep Binary Classification

This work defines a new loss called the binary quantile regression loss, computes the Lipschitz constant of the proposed loss, shows that its curvature is bounded under some regularity conditions, and demonstrates that quantiles aid explainability, as they can be used to obtain several univariate summary statistics that can be directly applied to existing explanation tools.

A deep learning framework for high-throughput mechanism-driven phenotype compound screening and its application to COVID-19 drug repurposing

A mechanism-driven neural network-based method, DeepCE, is proposed, which utilizes graph neural networks and a multi-head attention mechanism to model chemical substructure–gene and gene–gene associations, for predicting the differential gene expression profile perturbed by de novo chemicals.

Biological interpretation of deep neural network for phenotype prediction based on gene expression

An original approach for biological interpretation of deep learning models for phenotype prediction from gene expression data is proposed, which produces interpretations more coherent with biology than the state-of-the-art based approaches.

Quantile Regression Neural Networks: A Bayesian Approach

It is shown that the posterior distribution for feedforward neural network quantile regression is asymptotically consistent under a misspecified ALD model. The consistency proof embeds the problem in the density estimation domain and uses bounds on the bracketing entropy to derive posterior consistency over Hellinger neighborhoods.