Mark M. Christiansen

Lower bounds for the average probability of error of estimating a hidden variable X given an observation of a correlated random variable Y, and Fano's inequality in particular, play a central role in information theory. In this paper, we present a lower bound for the average estimation error based on the marginal distribution of X and the principal inertias…
We explore properties and applications of the principal inertia components (PICs) between two discrete random variables $X$ and $Y$. The PICs lie in the intersection of information and estimation theory, and…
How hard is it to guess a password? Massey showed that a simple function of the Shannon entropy of the distribution from which the password is selected is a lower bound on the expected number of guesses, but one which is not tight in general. In a series of subsequent papers under ever less restrictive stochastic assumptions, an asymptotic relationship as…
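As a hedged numeric illustration of Massey's result (the example distribution below is my own choice, not taken from the paper), the sketch compares the optimal guesser's expected number of guesses with the bound E[G] ≥ 2^H(X)/4 + 1, which holds whenever H(X) ≥ 2 bits:

```python
import math

def expected_guesses(p):
    """E[G] for the optimal strategy: query outcomes in
    decreasing-probability order; the i-th query costs i guesses."""
    q = sorted(p, reverse=True)
    return sum((i + 1) * qi for i, qi in enumerate(q))

def shannon_entropy(p):
    """Shannon entropy in bits."""
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

# Illustrative distribution with H(X) = 2.5 bits (not from the paper).
p = [0.25] * 2 + [0.125] * 4
H = shannon_entropy(p)

print(expected_guesses(p))   # → 3.0
print(2 ** H / 4 + 1)        # Massey's bound, ≈ 2.414: 3.0 ≥ 2.414 holds
```

The gap between 3.0 and 2.414 reflects the abstract's point that the entropy bound is not tight in general.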
We present a new information-theoretic definition and associated results, based on list decoding in a source coding setting. We begin by presenting list-source codes, which naturally map a key length (entropy) to list size. We then show that such codes can be analyzed in the context of a novel information-theoretic metric, ϵ-symbol secrecy, that…
Consider the situation where a word is chosen probabilistically from a finite list. If an attacker knows the list and can inquire about each word in turn, then selecting the word via the uniform distribution maximizes the attacker's difficulty, its Guesswork, in identifying the chosen word. It is tempting to use this property in cryptanalysis of…
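A minimal sketch of that property (the list size and probabilities are illustrative assumptions, not from the paper): an attacker who queries words in decreasing-probability order needs the most guesses on average exactly when the word is drawn uniformly.

```python
def expected_guesswork(p):
    """Attacker's expected number of queries under the optimal strategy:
    ask about words in decreasing-probability order."""
    q = sorted(p, reverse=True)
    return sum((i + 1) * qi for i, qi in enumerate(q))

N = 4
uniform = [1 / N] * N              # uniform choice over the list
skewed = [0.7, 0.1, 0.1, 0.1]      # any bias helps the attacker

print(expected_guesswork(uniform))  # → 2.5, i.e. (N + 1) / 2
print(expected_guesswork(skewed))   # ≈ 1.6, strictly smaller
```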
A string is sent over a noisy channel that erases some of its characters. Knowing the statistical properties of the string's source and which characters were erased, a listener that is equipped with an ability to test the veracity of a string, one string at a time, wishes to fill in the missing pieces. Here we characterize the influence of the stochastic…
The guesswork problem was originally motivated by a desire to quantify computational security for single-user systems. Leveraging recent results from its analysis, we extend the remit and utility of the framework to the quantification of the computational security of multi-user systems. In particular, assume that V users independently select strings…
We present information-theoretic definitions and results for analyzing symmetric-key encryption schemes beyond the perfect secrecy regime, i.e., when perfect secrecy is not attained. We adopt two lines of analysis, one based on lossless source coding, and another akin to rate-distortion theory. We start by presenting a new information-theoretic metric for…
Guesswork is the position at which a random string drawn from a given probability distribution appears in the list of strings ordered from the most likely to the least likely. We define the tilt operation on probability distributions and show that it parametrizes an exponential family of distributions, which we refer to as the tilted family of the source.
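Both definitions can be sketched directly (a minimal illustration; the example distribution, the parameter name `beta`, and the tie-breaking rule are my assumptions): guesswork is a position in the probability-ordered list, and tilting normalizes the β-th power of the source probabilities, with β = 1 recovering the source and β = 0 the uniform distribution.

```python
def guesswork(p, x):
    """1-indexed position of outcome x when outcomes are listed from
    most to least likely (ties broken by outcome index here)."""
    order = sorted(range(len(p)), key=lambda i: (-p[i], i))
    return order.index(x) + 1

def tilt(p, beta):
    """Tilted distribution p_beta(i) proportional to p(i)**beta; varying
    beta traces out the exponential (tilted) family of the source."""
    w = [pi ** beta for pi in p]
    z = sum(w)
    return [wi / z for wi in w]

p = [0.5, 0.3, 0.2]
print(guesswork(p, 0))   # → 1, the most likely outcome is guessed first
print(guesswork(p, 2))   # → 3, the least likely is guessed last
print(tilt(p, 1.0))      # beta = 1 recovers the source distribution
print(tilt(p, 0.0))      # beta = 0 gives the uniform distribution
```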