
We present an accelerated gradient method for non-convex optimization problems with Lipschitz continuous first and second derivatives. The method requires time O(ε^(-7/4) log(1/ε)) to find an ε-stationary point, meaning a point x such that ‖∇f(x)‖ ≤ ε. The method improves upon the O(ε^(-2)) complexity of gradient descent and provides the additional second-order…
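As a minimal sketch of the stopping criterion in the abstract above (the test function and plain gradient descent here are stand-ins for illustration, not the paper's accelerated method):

```python
import numpy as np

def descend_to_stationarity(grad, x0, eps=1e-6, step=0.1, max_iters=100_000):
    """Run plain gradient descent until an eps-stationary point is reached,
    i.e. a point x with ||grad f(x)|| <= eps. This only illustrates the
    termination criterion; the accelerated method in the abstract achieves
    the improved O(eps^(-7/4) log(1/eps)) rate, unlike this O(eps^(-2)) loop."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iters):
        g = grad(x)
        if np.linalg.norm(g) <= eps:  # eps-stationarity check
            return x
        x = x - step * g              # plain gradient step
    return x

# Hypothetical test function: f(x) = ||x||^2 / 2, so grad f(x) = x.
x_star = descend_to_stationarity(lambda x: x, x0=[1.0, -2.0], eps=1e-6)
```

Here the loop halts exactly when the gradient-norm condition ‖∇f(x)‖ ≤ ε from the abstract is met.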

We use smoothed analysis techniques to provide guarantees on the training loss of Multilayer Neural Networks (MNNs) at differentiable local minima. Specifically, we examine MNNs with piecewise linear activation functions, quadratic loss and a single output, under mild over-parametrization. We prove that for an MNN with one hidden layer, the training error is…

We compare the maximum achievable rates in single-carrier and OFDM modulation schemes, under the practical assumptions of i.i.d. finite alphabet inputs and linear ISI with additive Gaussian noise. We show that the Shamai-Laroia approximation serves as a bridge between the two rates: while it is well known that this approximation is often a lower bound on…

Partial matching of geometric structures is important in computer vision, pattern recognition and shape analysis applications. The problem consists of matching similar parts of shapes that may be dissimilar as a whole. Recently, it was proposed to consider partial similarity as a multi-criterion optimization problem trying to simultaneously maximize the…

We consider the discrete-time intersymbol interference (ISI) channel model, with additive Gaussian noise and fixed i.i.d. inputs. In this setting, we investigate the expression put forth by Shamai and Laroia as a conjectured lower bound for the input-output mutual information after application of a MMSE-DFE receiver. A low-SNR expansion is used to prove…

We investigate the existence of simple policies in finite discounted cost Markov Decision Processes when the discount factor is not constant. We introduce a class called "exponentially representable" discount functions. Within this class we prove the existence of optimal policies which are stationary from some time N onward, and provide an…

We consider mean squared estimation with lookahead of a continuous-time signal corrupted by additive white Gaussian noise. We investigate the connections between lookahead in estimation and information under this model. We show that the mutual information rate function, i.e., the mutual information rate as a function of the signal-to-noise ratio (SNR), does…

We generalize the geometric discount of finite discounted cost Markov Decision Processes to "exponentially representable" discount functions, prove existence of optimal policies which are stationary from some time N onward, and provide an algorithm for their computation. Outside this class, optimal "N-stationary" policies in general do not exist.
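A minimal sketch of the generalization described above, contrasting a geometric discount with a non-constant one. The mixture-of-geometrics form used here is only an assumed illustration of what "exponentially representable" might mean, not the paper's definition:

```python
def discounted_cost(costs, discount):
    """Total cost sum_t discount(t) * costs[t] for a general time-dependent
    discount function, generalizing the geometric case discount(t) = beta**t."""
    return sum(discount(t) * c for t, c in enumerate(costs))

# Classical geometric discount: d(t) = 0.9**t.
geometric = discounted_cost([1.0, 1.0, 1.0], lambda t: 0.9 ** t)

# A hypothetical non-constant discount, assumed here to be a finite mixture of
# geometric terms: d(t) = 0.5 * 0.9**t + 0.5 * 0.5**t.
mixture = discounted_cost([1.0, 1.0, 1.0], lambda t: 0.5 * 0.9 ** t + 0.5 * 0.5 ** t)
```

Policies optimal under the geometric discount are stationary; the result quoted above concerns when the generalized case still admits policies that become stationary after some finite time N.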

We consider mean squared estimation with lookahead of a continuous-time signal corrupted by additive white Gaussian noise. We show that the mutual information rate function, i.e., the mutual information rate as a function of the signal-to-noise ratio (SNR), does not, in general, determine the minimum mean squared error (MMSE) with fixed finite lookahead, in…

The synthesis of polypeptides with the properties of alpha and beta tropomyosin was investigated in differentiating cultures of a myogenic cell line and in a wheat germ cell-free system directed by purified RNA extracted at different stages of differentiation. The polypeptides co-migrate with tropomyosin in isoelectric focusing and SDS two-dimensional gel…