Elisabeth Gassiat

In this paper, an approximate maximum likelihood method for blind source separation and deconvolution of noisy signals is proposed. This technique relies upon a data augmentation scheme in which the (unobserved) inputs are viewed as the missing data. In the technique described in this contribution, the input signal distribution is modeled by a mixture of …
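A minimal sketch of the data-augmentation idea, under simplifying assumptions not in the abstract: an instantaneous (convolution-free) scalar model y_t = a·s_t + ε_t, a two-component Gaussian-mixture source with known parameters and known noise variance, and an EM iteration in which the unobserved inputs are the missing data. The function and variable names are hypothetical.

```python
import numpy as np

def em_mixing_coefficient(y, pi, mu, sig2, noise_var, a0=1.0, n_iter=50):
    """EM for the mixing coefficient a in y = a*s + eps, where the
    unobserved source s follows a known Gaussian mixture (pi, mu, sig2)
    and eps ~ N(0, noise_var). The latent inputs s are the missing data."""
    a = a0
    for _ in range(n_iter):
        # E-step: component responsibilities, since y | z=k ~ N(a*mu_k, a^2*sig2_k + noise_var).
        var_k = a**2 * sig2 + noise_var
        log_w = (np.log(pi) - 0.5 * np.log(2 * np.pi * var_k)
                 - (y[:, None] - a * mu)**2 / (2 * var_k))
        gamma = np.exp(log_w - log_w.max(axis=1, keepdims=True))
        gamma /= gamma.sum(axis=1, keepdims=True)
        # Posterior of the missing input s given y and component k is Gaussian.
        post_var = 1.0 / (1.0 / sig2 + a**2 / noise_var)
        post_mean = post_var * (mu / sig2 + a * y[:, None] / noise_var)
        Es = (gamma * post_mean).sum(axis=1)                    # E[s_t | y_t]
        Es2 = (gamma * (post_var + post_mean**2)).sum(axis=1)   # E[s_t^2 | y_t]
        # M-step: closed-form update of a maximizing E[log p(y | s, a)].
        a = (y * Es).sum() / Es2.sum()
    return a

# Toy usage: recover a = 2 from simulated data.
rng = np.random.default_rng(0)
z = rng.random(5000) < 0.3
s = np.where(z, rng.normal(-2.0, 0.5, 5000), rng.normal(1.0, 1.0, 5000))
y = 2.0 * s + rng.normal(0.0, 0.3, 5000)
pi, mu, sig2 = np.array([0.3, 0.7]), np.array([-2.0, 1.0]), np.array([0.25, 1.0])
print(em_mixing_coefficient(y, pi, mu, sig2, noise_var=0.09))
```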
This paper describes universal lossless coding strategies for compressing sources on countably infinite alphabets. Classes of memoryless sources defined by an envelope condition on the marginal distribution provide benchmarks for coding techniques originating from the theory of universal coding over finite alphabets. We prove general upper bounds on minimax …
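As a concrete, self-contained baseline (not the coding strategy of the paper): the Elias gamma code, a classical universal code for the positive integers, illustrates what coding on a countably infinite alphabet means in practice.

```python
def elias_gamma(n: int) -> str:
    """Elias gamma code for a positive integer n: floor(log2 n) zeros,
    then the binary expansion of n. Code length is 2*floor(log2 n) + 1 bits."""
    if n < 1:
        raise ValueError("Elias gamma encodes positive integers only")
    b = bin(n)[2:]                       # binary expansion, leading bit is 1
    return "0" * (len(b) - 1) + b

# 1 -> '1', 2 -> '010', 9 -> '0001001'
print([elias_gamma(n) for n in (1, 2, 9)])
```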
We consider the estimation of the number of hidden states (the order) of a discrete-time finite-alphabet hidden Markov model (HMM). The estimators we investigate are related to code-based order estimators: penalized maximum-likelihood (ML) estimators and penalized versions of the mixture estimator introduced by Liu and Narayan. We prove strong consistency …
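A sketch of the generic shape of such a penalized-ML order estimator, assuming hmmlearn's CategoricalHMM for the inner likelihood maximization; the BIC-style penalty and parameter count below are illustrative stand-ins, not the penalties analyzed in the paper.

```python
import numpy as np
from hmmlearn.hmm import CategoricalHMM  # assumed dependency

def estimate_order(x, alphabet_size, max_order):
    """Penalized-ML order selection for a finite-alphabet HMM: fit models of
    order k = 1..max_order and keep the one minimizing -loglik + pen(n, k)."""
    n = len(x)
    X = np.asarray(x).reshape(-1, 1)
    best_k, best_crit = None, np.inf
    for k in range(1, max_order + 1):
        model = CategoricalHMM(n_components=k, n_iter=200, random_state=0)
        model.fit(X)
        loglik = model.score(X)
        # Free parameters: (k-1) initial probs, k(k-1) transitions, k(m-1) emissions.
        n_params = (k - 1) + k * (k - 1) + k * (alphabet_size - 1)
        crit = -loglik + 0.5 * n_params * np.log(n)   # BIC-style penalty (illustrative)
        if crit < best_crit:
            best_k, best_crit = k, crit
    return best_k
```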
This paper sheds light on adaptive coding with respect to classes of memoryless sources over a countable alphabet defined by an envelope function with finite and non-decreasing hazard rate (log-concave envelope distributions). We prove that the auto-censuring (AC) code is adaptive with respect to the collection of such classes. The analysis builds on the …
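For reference, the hazard rate of an envelope distribution f on the positive integers, the quantity whose finiteness and monotonicity define these classes, is

h(k) = \frac{f(k)}{\sum_{j \ge k} f(j)}, \qquad k \ge 1,

and the classes above consist of memoryless sources whose marginals are dominated by an envelope with non-decreasing hazard rate.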
We address the issue of order identification for HMMs with Poisson and Gaussian emissions. We prove information-theoretic BIC-like mixture inequalities in the spirit of (Finesso, 1991; Liu & Narayan, 1994; Gassiat & Boucheron, 2003). These inequalities lead to consistent penalized estimators that need no prior bound on the order nor on the parameters of the …
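Schematically (notation ours, not the paper's), such penalized estimators take the form

\hat{q}_n = \arg\min_{q \ge 1} \Big( -\sup_{\theta \in \Theta_q} \ell_n(\theta) + \mathrm{pen}(n, q) \Big),

where \ell_n is the log-likelihood of the order-q HMM; consistency means that \hat{q}_n eventually equals the true order, here with no prior bound on the order or on the parameter space.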
Motivated by applications in genetics, we propose to estimate the heritability in high-dimensional sparse linear mixed models. The heritability determines how the variance is shared between the different random components of a linear mixed model. The main novelty of our approach is to consider that the random effects can be sparse, that is …
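A minimal dense (non-sparse) sketch of the likelihood computation behind such estimators, to fix ideas: in the model y ~ N(0, σ²(ηK + (1−η)I)) with similarity matrix K = ZZᵀ/p, the heritability η can be profiled on a grid after one eigendecomposition. All names are hypothetical, and the sparsity of the random effects, the paper's main novelty, is deliberately omitted.

```python
import numpy as np

def heritability_ml(y, K, grid=np.linspace(0.0, 0.999, 500)):
    """ML estimate of heritability eta in y ~ N(0, sigma2*(eta*K + (1-eta)*I)),
    with K a normalized similarity matrix. Dense, non-sparse sketch."""
    lam, U = np.linalg.eigh(K)          # K = U diag(lam) U^T
    yt = U.T @ y                        # rotated coordinates are independent
    n = len(y)
    best_eta, best_nll = None, np.inf
    for eta in grid:
        w = eta * lam + (1.0 - eta)     # per-coordinate variance weights
        sigma2 = np.mean(yt**2 / w)     # scale profiled out in closed form
        nll = np.sum(np.log(w)) + n * np.log(sigma2)  # up to constants
        if nll < best_nll:
            best_eta, best_nll = eta, nll
    return best_eta

# Toy usage: simulate with true heritability 0.6.
rng = np.random.default_rng(1)
n, p = 300, 1000
Z = rng.normal(size=(n, p)) / np.sqrt(p)
K = Z @ Z.T
y = np.sqrt(0.6) * (Z @ rng.normal(size=p)) + np.sqrt(0.4) * rng.normal(size=n)
print(heritability_ml(y, K))
```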
We establish that for q ≥ 1, the class of convex combinations of q translates of a smooth probability density has local doubling dimension proportional to q. The key difficulty in the proof is to control the local geometric structure of mixture classes. Our local geometry theorem yields a bound on the (bracketing) metric entropy of a class of …
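For orientation, a standard definition (not specific to the paper): a class F with metric d has local doubling dimension at most D at f and scale r if every ball B(f, ε) with 0 < ε ≤ r can be covered by at most 2^D balls of radius ε/2, that is,

D(f, r) = \sup_{0 < \epsilon \le r} \log_2 N\big(B_d(f, \epsilon), \epsilon/2\big),

where N(B, δ) denotes the δ-covering number of B. The theorem bounds this quantity by a constant multiple of q for mixtures of q translates.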
Consider an i.i.d. sequence of random variables whose distribution f lies in one of a nested family of models Mq, q ≥ 1. We obtain a sharp characterization of the pathwise fluctuations of the generalized likelihood ratio statistic under entropy assumptions on the model classes Mq. Moreover, we develop a technique to obtain local entropy bounds from global …
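In this setting the generalized likelihood ratio statistic between M_q and the true density f can be written (notation ours) as

\Lambda_{n,q} = \sup_{g \in M_q} \sum_{i=1}^{n} \log \frac{g(X_i)}{f(X_i)},

and the pathwise results characterize the almost-sure order of \Lambda_{n,q} as n grows, with rates governed by the local entropy of M_q.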