We introduce a novel algorithm for inference of causal gene interactions, termed CaSPIAN (Causal Subspace Pursuit for Inference and Analysis of Networks), which is based on coupling compressive sensing and Granger causality techniques. The core of the approach is to discover sparse linear dependencies between shifted time series of gene expressions using a …
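To make the core idea concrete, the sketch below regresses each gene's expression at time t on the lagged expressions of all genes at t-1 and keeps the nonzero coefficients as candidate causal parents. It is a minimal illustration on toy data under stated assumptions, not the authors' CaSPIAN implementation: scikit-learn's Lasso stands in for the subspace pursuit solver, and a single time lag is assumed.

```python
import numpy as np
from sklearn.linear_model import Lasso  # stand-in for a subspace pursuit solver

rng = np.random.default_rng(0)
n_genes, n_time = 5, 200

# Toy expression matrix X[t, g]; gene 0 is driven by lagged genes 1 and 2.
X = rng.normal(size=(n_time, n_genes))
for t in range(1, n_time):
    X[t, 0] = 0.8 * X[t - 1, 1] - 0.6 * X[t - 1, 2] + 0.1 * rng.normal()

def sparse_granger_parents(X, target, alpha=0.05):
    """Regress gene `target` at time t on all genes at t-1; nonzero
    coefficients mark candidate Granger-causal parents."""
    y = X[1:, target]   # shifted (future) values of the target gene
    Z = X[:-1, :]       # lagged values of every gene
    model = Lasso(alpha=alpha, fit_intercept=False).fit(Z, y)
    return {g: c for g, c in enumerate(model.coef_) if abs(c) > 1e-3}

print(sparse_granger_parents(X, target=0))  # expect genes 1 and 2 to appear
```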
We propose a novel group testing method, termed semi-quantitative group testing, motivated by a class of problems arising in genome screening experiments. Semi-quantitative group testing (SQGT) is a (possibly) non-binary pooling scheme that may be viewed as a concatenation of an adder channel and an integer-valued quantizer. In its full generality, SQGT can …
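As a toy illustration of the adder-channel-plus-quantizer view, the snippet below computes one SQGT test outcome: the adder channel yields the number of defectives in the pool, and the quantizer reports only which bin that count falls into. The bin edges here are illustrative assumptions, not thresholds taken from the paper; classical binary group testing corresponds to a single edge at 1.

```python
import numpy as np

def sqgt_outcome(pool, defective, bin_edges):
    """Semi-quantitative test outcome for one pool: count the defectives
    in the pool (adder channel), then return only the index of the
    quantization bin that the count falls into (integer-valued quantizer)."""
    count = int(sum(defective[i] for i in pool))        # adder channel
    return int(np.digitize(count, bins=bin_edges))      # quantizer

defective = np.array([0, 1, 0, 1, 1, 0, 0, 1])  # toy ground truth
bin_edges = [1, 3, 5]  # illustrative bins: {0}, {1,2}, {3,4}, {5+}
print(sqgt_outcome(pool=[0, 1, 3, 4], defective=defective, bin_edges=bin_edges))  # 3 defectives -> bin 2
```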
We consider the problem of noiseless and noisy low-rank tensor completion from a set of random linear measurements. In our derivations, we assume that the entries of the tensor belong to a finite field of arbitrary size and that reconstruction is based on a rank minimization framework. The derived results show that the smallest number of measurements needed …
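The measurement model can be sketched as follows, in the simplified matrix case over GF(2) rather than the full tensor setting: a low-rank object with finite-field entries is observed only through random linear measurements reduced modulo the field size. This shows how the measurements are formed, not the rank-minimization recovery or the bounds derived in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
q, n, r, m = 2, 8, 2, 30  # field size, side length, rank, number of measurements

# Rank-r matrix over GF(q), built from an outer-product factorization mod q.
U = rng.integers(0, q, size=(n, r))
V = rng.integers(0, q, size=(r, n))
T = (U @ V) % q

# Random linear measurements y_i = <A_i, T> evaluated over GF(q).
A = rng.integers(0, q, size=(m, n, n))
y = np.einsum('mij,ij->m', A, T) % q
```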
We propose a novel group testing method, termed semi-quantitative group testing (SQGT), motivated by a class of problems arising in genome screening experiments. SQGT is a (possibly) non-binary pooling scheme that may be viewed as a concatenation of an adder channel and an integer-valued quantizer. In its full generality, SQGT may be viewed as a unifying …
We describe a generalization of the group testing problem termed symmetric group testing. Unlike in classical binary group testing, the roles played by the input symbols zero and one are “symmetric” while the outputs are drawn from a ternary alphabet. Using an information-theoretic approach, we derive sufficient and necessary conditions for …
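One natural reading of the symmetric model, assumed here for illustration since the abstract is truncated, is that a test reports 0 when the pool contains only zero-labeled items, 1 when it contains only one-labeled items, and a third "mixed" symbol otherwise:

```python
def symmetric_test(pool, labels):
    """Ternary test outcome under an assumed symmetric model: 0 if every
    pooled item is labeled 0, 1 if every pooled item is labeled 1, and
    '?' if the pool mixes both labels."""
    values = {labels[i] for i in pool}
    if values == {0}:
        return 0
    if values == {1}:
        return 1
    return '?'

labels = [0, 1, 0, 1, 1, 0]
print(symmetric_test([0, 2, 5], labels))  # 0: all items labeled 0
print(symmetric_test([1, 3, 4], labels))  # 1: all items labeled 1
print(symmetric_test([0, 1], labels))     # '?': mixed pool
```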
We introduce a novel probabilistic group testing framework, termed Poisson group testing, in which the number of defectives follows a right-truncated Poisson distribution. The Poisson model applies to a number of biological testing scenarios, where the subjects are assumed to be ordered based on their arrival times and where the probability of being …
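A minimal sketch of the defective-generation step: the number of defectives is drawn from a Poisson distribution right-truncated at the population size, here via rejection sampling. Placing the defectives uniformly is an illustrative simplification of my own; the arrival-time ordering mentioned in the abstract is not modeled.

```python
import numpy as np

def truncated_poisson(lam, n_max, rng):
    """Sample from a Poisson(lam) distribution right-truncated at n_max
    by redrawing until the sample falls in {0, ..., n_max}."""
    while True:
        k = rng.poisson(lam)
        if k <= n_max:
            return k

rng = np.random.default_rng(0)
n_items, lam = 100, 5.0  # illustrative population size and Poisson rate
num_defectives = truncated_poisson(lam, n_items, rng)
defective = np.zeros(n_items, dtype=int)
defective[rng.choice(n_items, size=num_defectives, replace=False)] = 1
```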
There is a growing belief that in the face of high complexity, checklists and other simple scorecards or algorithms can significantly improve people’s performance on decision-making tasks [1]. An example of such a tool in medicine, the clinical prediction rule, is a simple decision-making rubric that helps physicians estimate the likelihood of a patient …
We introduce a parallel algorithmic architecture for metagenomic sequence assembly, termed MetaPar, which allows for significant reductions in assembly time and consequently enables the processing of large genomic datasets on low-memory computers. The gist of the approach is to iteratively perform read (re)classification based on phylogenetic …
MOTIVATION: Cancer genomes exhibit a large number of different alterations that affect many genes in a diverse manner. An improved understanding of the generative mechanisms behind the mutation rules and their influence on gene community behavior is of great importance for the study of cancer. RESULTS: To expand our capability to analyze combinatorial …
Metagenomics is an emerging field of molecular biology concerned with analyzing the genomes of environmental samples that comprise many diverse organisms. Given the nature of metagenomic data, one usually has to sequence the genomic material of all organisms in a batch, leading to a mix of reads coming from different DNA sequences. In deep …