Farzad Yousefian

Traditionally, stochastic approximation (SA) schemes have been popular choices for solving stochastic optimization problems. However, the performance of standard SA implementations can vary significantly based on the choice of the steplength sequence, and in general, little guidance is provided about good choices. Motivated by this gap, in the first part of …
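As a rough illustration of that sensitivity, here is a minimal SA sketch in Python on a toy problem (the objective, the starting point, and the steplength constants are all illustrative assumptions, not the paper's setup); the tuning constant in the diminishing steplength gamma_k = gamma_0/k drives the quality of the final iterate:

import random

def sa(gamma0, iters=20000, seed=1):
    # Toy problem: minimize E[(x - xi)^2] with xi ~ N(0, 1), so x* = 0.
    # The steplength gamma0/k is square summable but not summable.
    rng = random.Random(seed)
    x = 5.0  # arbitrary starting point
    for k in range(1, iters + 1):
        xi = rng.gauss(0.0, 1.0)   # draw a sample
        grad = 2.0 * (x - xi)      # stochastic gradient of (x - xi)^2
        x -= (gamma0 / k) * grad
    return x

# Widely different accuracy for different tuning constants:
for gamma0 in (0.01, 0.5, 5.0):
    print(gamma0, sa(gamma0))

A too-small gamma_0 stalls far from the solution, while an aggressive one oscillates before settling; this tuning burden is exactly what the adaptive schemes below aim to remove.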
Motivated by problems arising in decentralized control and non-cooperative Nash games, we consider a class of strongly monotone Cartesian variational inequality (VI) problems, where the mappings either contain expectations or their evaluations are corrupted by error. Such complications are captured under the umbrella of Cartesian stochastic …
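Since the snippet is truncated, here is a guess at the standard formulation being referenced, in LaTeX (the symbols X_i, F, \Phi, \xi, and \mu are our notation, not necessarily the paper's):

\text{find } x^\ast \in X \triangleq \prod_{i=1}^{N} X_i
\quad \text{such that} \quad
(x - x^\ast)^\top F(x^\ast) \ge 0 \quad \forall x \in X,

where $F(x) = \mathbb{E}[\Phi(x,\xi)]$, and strong monotonicity means
$\langle F(x) - F(y),\, x - y \rangle \ge \mu \|x - y\|^2$ for some $\mu > 0$.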
We consider a distributed stochastic approximation (SA) scheme for computing an equilibrium of a stochastic Nash game. Standard SA schemes employ diminishing steplength sequences that are square summable but not summable. Such requirements provide little or no guidance on how to leverage Lipschitzian and monotonicity properties of the problem, and naive …
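The requirement mentioned here is the classical Robbins–Monro condition on the steplength sequence; in LaTeX:

\sum_{k=0}^{\infty} \gamma_k = \infty,
\qquad
\sum_{k=0}^{\infty} \gamma_k^2 < \infty,

satisfied, for instance, by $\gamma_k = \theta/(k+1)$ for any $\theta > 0$. Note that the conditions place no restriction on the constant $\theta$, which is precisely why they offer no guidance on exploiting the Lipschitz or monotonicity constants of the map.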
We have previously highlighted the ability of testosterone (T) to improve differentiation and myotube hypertrophy in fusion-impaired myoblasts that display reduced myotube hypertrophy after multiple population doublings (PD) versus their parental controls (CON); an observation which is abrogated via PI3K/Akt inhibition (Deane et al. 2013). However, whether …
We consider the solution of monotone stochastic variational inequalities and present an adaptive steplength stochastic approximation framework with possibly multivalued mappings. Traditional implementations of SA have been characterized by two challenges. First, convergence of standard SA schemes requires a strongly or strictly monotone single-valued mapping, …
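One concrete way to adapt the steplength to a problem constant is a recursive rule of the form gamma_{k+1} = gamma_k (1 - c gamma_k); the sketch below is a plausible instance of that idea, not necessarily the paper's exact rule (the constant c, standing in for a monotonicity modulus, and gamma_0 are assumptions):

def adaptive_steplengths(gamma0, c, iters):
    # Recursion gamma_{k+1} = gamma_k * (1 - c * gamma_k); for 0 < gamma0 < 1/c
    # the sequence stays positive and decreasing, and asymptotically behaves
    # like 1/(c*k), i.e., square summable but not summable, as SA requires.
    g = gamma0
    for _ in range(iters):
        yield g
        g *= 1.0 - c * g

print(list(adaptive_steplengths(gamma0=0.5, c=0.5, iters=5)))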
We consider a stochastic variational inequality (SVI) problem with a continuous and monotone mapping over a compact and convex set. Traditionally, stochastic approximation (SA) schemes for SVIs have relied on strong monotonicity and Lipschitzian properties of the underlying map. We present a regularized smoothed SA (RSSA) scheme wherein the stepsize, …
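A schematic form of such an update, in our guessed notation (projection \Pi_X, smoothed map F_{\varepsilon_k}, regularization parameter \eta_k; the precise coupling of the three sequences is the paper's contribution and is not reproduced here):

x_{k+1} = \Pi_X\!\left( x_k - \gamma_k \left( F_{\varepsilon_k}(x_k;\xi_k) + \eta_k x_k \right) \right),

where $\eta_k \to 0$ is a Tikhonov-style term that restores strong monotonicity along the way, and $\varepsilon_k \to 0$ controls the local smoothing that substitutes for Lipschitz continuity.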
We consider a class of stochastic nondifferentiable optimization problems where the objective function is an expectation of a random convex function that is not necessarily differentiable. We propose a local smoothing technique, based on random local perturbations of the objective function, that leads to a differentiable approximation of the function.
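A minimal one-dimensional Monte Carlo sketch of this kind of smoothing (the uniform perturbation, the radius eps, and the sample count are illustrative choices):

import random

def smoothed_value(f, x, eps=0.1, samples=100000, seed=0):
    # Estimate f_eps(x) = E_u[f(x + eps*u)] with u ~ Uniform[-1, 1];
    # the averaged function f_eps is differentiable even when f is not.
    rng = random.Random(seed)
    return sum(f(x + eps * rng.uniform(-1.0, 1.0))
               for _ in range(samples)) / samples

# f(x) = |x| has a kink at 0; the smoothed value there is eps/2 = 0.05:
print(smoothed_value(abs, 0.0))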
We consider stochastic variational inequality problems where the mapping is monotone over a compact convex set. We present two robust variants of stochastic extragradient algorithms for solving such problems. Of these, the first scheme employs an iterative averaging technique where we consider a generalized choice for the weights in the averaged sequence.
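A sketch of the first variant on a toy problem (the map F(x) = x^3, the interval constraint, the noise model, and the weights w_k = k are all assumptions made for illustration):

import random

def project(x, lo=-1.0, hi=1.0):
    return max(lo, min(hi, x))

def averaged_stochastic_extragradient(F, x0=0.9, gamma=0.05, iters=5000, seed=0):
    rng = random.Random(seed)
    x, num, den = x0, 0.0, 0.0
    for k in range(1, iters + 1):
        y = project(x - gamma * (F(x) + rng.gauss(0.0, 0.1)))  # extrapolation
        x = project(x - gamma * (F(y) + rng.gauss(0.0, 0.1)))  # main step
        w = float(k)              # one generalized weight choice: w_k = k
        num += w * x
        den += w
    return num / den              # weighted average of the iterates

# F(x) = x**3 is monotone but not strongly monotone; x* = 0 on [-1, 1]:
print(averaged_stochastic_extragradient(lambda x: x ** 3))

The weighted average damps the sampling noise; for merely monotone maps it is typically the averaged sequence, not the last iterate, that carries the convergence guarantee.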
We consider multiuser optimization problems and Nash games with stochastic convex objectives, instances of which arise in decentralized control problems. The associated equilibrium conditions of both problems can be cast as Cartesian stochastic variational inequality problems with mappings that are strongly monotone but not necessarily Lipschitz continuous.(More)