Farzad Yousefian

Traditionally, stochastic approximation (SA) schemes have been popular choices for solving stochastic optimization problems. However, the performance of standard SA implementations can vary significantly based on the choice of the steplength sequence, and in general, little guidance is provided about good choices. Motivated by this gap, in the first part of …
We consider a distributed stochastic approximation (SA) scheme for computing an equilibrium of a stochastic Nash game. Standard SA schemes employ diminishing steplength sequences that are square summable but not summable. Such requirements provide little or no guidance on how to leverage Lipschitzian and monotonicity properties of the problem, and naive …
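A minimal single-agent sketch of the steplength condition mentioned above (not the paper's distributed Nash scheme): projected SA with the classical choice gamma_k = 1/k, which is square summable but not summable. The map F(x) = Ax - b, the matrix A, and the box constraint set are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch: projected stochastic approximation on a strongly
# monotone affine map F(x) = A x - b observed with additive noise.  The
# steplength gamma_k = 1/k is square summable (sum 1/k^2 < inf) but not
# summable (sum 1/k = inf), the standard SA requirement.
rng = np.random.default_rng(0)
A = np.array([[2.0, 0.5], [0.5, 2.0]])   # symmetric positive definite => strongly monotone
b = np.array([1.0, 1.0])
x_star = np.linalg.solve(A, b)           # the unique solution of the VI on a large box

x = np.zeros(2)
lo, hi = -5.0, 5.0                       # compact box constraint set (assumed)
for k in range(1, 20001):
    noise = rng.normal(scale=0.1, size=2)
    # projection onto the box reduces to componentwise clipping
    x = np.clip(x - (1.0 / k) * (A @ x - b + noise), lo, hi)

print(np.linalg.norm(x - x_star))        # small: the iterate approaches x*
```

With strong monotonicity, this steplength choice gives the classical O(1/k) mean-squared error rate; the abstract's point is that the square-summability requirement alone does not say how to tune the constant.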
We consider a stochastic variational inequality (SVI) problem with a continuous and monotone mapping over a compact and convex set. Traditionally, stochastic approximation (SA) schemes for SVIs have relied on strong monotonicity and Lipschitzian properties of the underlying map. We present a regularized smoothed SA (RSSA) scheme wherein the stepsize, …
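A minimal sketch of the regularization idea for merely monotone maps (an iterative Tikhonov scheme, not the exact RSSA update of the paper; the map, exponents, and box set are illustrative assumptions). A skew-symmetric map is monotone but not strongly monotone, so plain SA can cycle; a vanishing regularizer eps_k * x restores strong monotonicity at each step, and letting eps_k decay more slowly than the stepsize drives the iterate to the solution x* = 0.

```python
import numpy as np

# Illustrative sketch: F(x) = S x with S skew-symmetric satisfies <Sx, x> = 0,
# hence is monotone but not strongly monotone.  We add a Tikhonov term
# eps_k * x with eps_k -> 0 slower than the stepsize gamma_k.
rng = np.random.default_rng(1)
S = np.array([[0.0, 1.0], [-1.0, 0.0]])  # skew-symmetric

x = np.array([3.0, -2.0])
for k in range(1, 100001):
    gamma = k ** -0.7        # stepsize: square summable (1.4 > 1) but not summable
    eps = k ** -0.2          # regularization parameter, decaying more slowly
    noise = rng.normal(scale=0.1, size=2)
    x = np.clip(x - gamma * (S @ x + eps * x + noise), -5.0, 5.0)

print(np.linalg.norm(x))     # small: the iterate is driven toward x* = 0
```

The coordinated decay of the two sequences is the point: eps_k must vanish slowly enough relative to gamma_k that the regularized subproblems are tracked before they drift.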
We consider the solution of monotone stochastic variational inequalities and present an adaptive steplength stochastic approximation framework with possibly multivalued mappings. Traditional implementations of SA have been characterized by two challenges. First, convergence of standard SA schemes requires a strongly or strictly monotone single-valued mapping, …
We consider stochastic variational inequality problems where the mapping is monotone over a compact convex set. We present two robust variants of stochastic extragradient algorithms for solving such problems. Of these, the first scheme employs an iterative averaging technique where we consider a generalized choice for the weights in the averaged sequence. …
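A minimal sketch of a stochastic extragradient step with weighted iterate averaging (illustrative; the paper's generalized weight choices are not reproduced here, and the linearly growing weights below are one common assumption). The map F(x) = Sx with S skew-symmetric models the saddle problem min_u max_v uv, whose solution is the origin.

```python
import numpy as np

# Illustrative sketch: noisy oracle for a monotone (skew-symmetric) map.
rng = np.random.default_rng(2)
S = np.array([[0.0, 1.0], [-1.0, 0.0]])

def F(x):
    return S @ x + rng.normal(scale=0.1, size=2)

x = np.array([2.0, 2.0])
avg, wsum = np.zeros(2), 0.0
gamma = 0.1
for k in range(1, 5001):
    x_half = np.clip(x - gamma * F(x), -4.0, 4.0)    # extrapolation step
    x = np.clip(x - gamma * F(x_half), -4.0, 4.0)    # correction step
    w = float(k)                                     # weights growing with k (assumed choice)
    wsum += w
    avg += (w / wsum) * (x - avg)                    # running weighted average

print(np.linalg.norm(avg))   # small: the averaged iterate nears the saddle point
```

Weights that grow with k emphasize the tail of the trajectory, which is one way such generalized averaging can improve on the classical uniform average.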
We consider a class of stochastic nondifferentiable optimization problems where the objective function is an expectation of a random convex function that is not necessarily differentiable. We propose a local smoothing technique, based on random local perturbations of the objective function, that leads to a differentiable approximation of the function. …
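A minimal one-dimensional sketch of randomized local smoothing (the smoothing distribution and parameter below are illustrative assumptions, not the paper's construction): for f(x) = |x|, the smoothed function f_eps(x) = E[f(x + eps*U)] with U uniform on [-1, 1] is differentiable at 0 even though f is not, and here it admits a closed form to check the Monte Carlo estimate against.

```python
import numpy as np

rng = np.random.default_rng(3)
eps = 0.5   # smoothing radius (assumed value)

def f_smooth_mc(x, n=200000):
    # Monte Carlo estimate of E|x + eps*U|, U ~ Uniform[-1, 1]
    u = rng.uniform(-1.0, 1.0, size=n)
    return np.mean(np.abs(x + eps * u))

def f_smooth_exact(x):
    # For |x| <= eps the expectation integrates to a smooth quadratic;
    # outside that band the perturbation never crosses the kink.
    return (x * x + eps * eps) / (2 * eps) if abs(x) <= eps else abs(x)

print(abs(f_smooth_mc(0.2) - f_smooth_exact(0.2)))  # small Monte Carlo error
```

The quadratic cap over the kink is what makes gradients of the smoothed surrogate well defined everywhere, which is the property the abstract's SA scheme exploits.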
We consider multiuser optimization problems and Nash games with stochastic convex objectives, instances of which arise in decentralized control problems. The associated equilibrium conditions of both problems can be cast as Cartesian stochastic variational inequality problems with mappings that are strongly monotone but not necessarily Lipschitz continuous. …