Samaneh Azadi

We study regularized stochastic convex optimization subject to linear equality constraints. This class of problems was recently also studied by Ouyang et al. (2013) and Suzuki (2013); both introduced similar stochastic alternating direction method of multipliers (SADMM) algorithms. However, the analyses in both papers led to suboptimal convergence rates. …
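For concreteness, the sketch below shows one common family of stochastic ADMM updates for a problem of the form min f(x) + h(y) subject to a linear equality constraint, here simplified to A x + y = b. The helper names (stoch_grad_f, prox_h), the step-size schedule, and the B = I simplification are illustrative assumptions, not the exact algorithms analyzed by Ouyang et al. (2013), Suzuki (2013), or this paper.

```python
import numpy as np

def sadmm(stoch_grad_f, prox_h, A, b, x0, y0, lam0, beta=1.0, n_iters=1000):
    """Minimal stochastic-ADMM sketch for min f(x) + h(y) s.t. A x + y = b.

    stoch_grad_f(x): unbiased estimate of a (sub)gradient of f at x.
    prox_h(v, t):    argmin_y h(y) + (1/(2*t)) * ||y - v||^2.
    Returns the averaged x iterate with the last y and dual variable.
    """
    x, y, lam = x0.copy(), y0.copy(), lam0.copy()
    x_sum = np.zeros_like(x0)
    for k in range(1, n_iters + 1):
        eta = 1.0 / np.sqrt(k)                 # decaying step size (illustrative)
        g = stoch_grad_f(x)                    # noisy gradient of f at x
        # Linearized x-update: one gradient step on the augmented Lagrangian in x
        x = x - eta * (g + A.T @ (lam + beta * (A @ x + y - b)))
        # Exact y-update via the proximal operator of h
        y = prox_h(b - A @ x - lam / beta, 1.0 / beta)
        # Dual ascent on the residual of the linear equality constraint
        lam = lam + beta * (A @ x + y - b)
        x_sum += x
    return x_sum / n_iters, y, lam
```

Convergence guarantees for schemes of this kind are typically stated for the averaged primal iterate; the appropriate step-size schedule depends on whether f is merely convex or µ-strongly convex.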
Precisely labeled data sets with a sufficient number of samples are critically important for training deep convolutional neural networks (CNNs). However, many available real-world data sets contain erroneously labeled samples, and these label errors make it a daunting task to learn a well-performing deep CNN model. In this work, we …
Investigations of biological ultrastructure, such as comprehensive mapping of connections within a nervous system, increasingly rely on large, high-resolution electron microscopy (EM) image volumes. However, discontinuities between the registered section images from which these volumes are assembled, due to variations in imaging conditions and section …
In this paper, a sliding mode control method is studied for controlling a DC motor because of its robustness against model uncertainties and external disturbances, and its ability to control nonlinear and MIMO systems. In this method, using a high control gain to overcome uncertainties causes the chattering phenomenon in the control law, which can excite …
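As a rough illustration of the chattering mechanism mentioned above, the following sketch shows only the switching term of a first-order sliding mode speed controller. The sliding surface, gains, and the boundary-layer smoothing used here are assumptions for illustration and not the specific design or tuning presented in the paper.

```python
import numpy as np

def smc_switching_term(w_ref, w, e_prev, dt, lam=5.0, K=2.0, phi=0.05):
    """Switching part of a first-order sliding-mode speed controller
    for a DC motor. Gains lam, K and boundary-layer width phi are
    illustrative assumptions, not the paper's tuning."""
    e = w_ref - w                          # speed tracking error
    e_dot = (e - e_prev) / dt              # finite-difference error derivative
    s = e_dot + lam * e                    # sliding surface s = de/dt + lam*e
    # A pure K*sign(s) term with a large gain K drives the state to the
    # surface but switches at high frequency (chattering); saturating s
    # inside a boundary layer of width phi is one common way to smooth it.
    u_switch = K * np.clip(s / phi, -1.0, 1.0)
    return u_switch, e
```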
1. The strongly convex case. 1.1. Proof of Lemma 1.
Lemma 1. Let $f$ be $\mu$-strongly convex, and let $x_{k+1}$, $y_{k+1}$, and $\lambda_{k+1}$ be computed as per Alg. 2. For all $x \in X$, $y \in Y$, and $w \in \Omega$, it holds for $k \ge 0$ that
$$
f(x_k) - f(x) + h(y_{k+1}) - h(y) + \langle w_{k+1} - w,\, F(w_{k+1})\rangle \;\le\; \frac{\eta_k}{2}\|g_k\|_2^2 - \frac{\mu}{2}\Delta_k + \frac{1}{2\eta_k}\bigl[\Delta_k - \Delta_{k+1}\bigr] + \frac{\beta}{2}\bigl[A_k - A_{k+1}\bigr] + \frac{1}{2\beta}\bigl[L_k - L_{k+1}\bigr] + \dots
$$
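For context, the $\mu$-strong convexity assumption on $f$ invoked in the lemma corresponds to the standard definition below (stated here for reference, not copied from the supplement); it is what produces the $-\frac{\mu}{2}\Delta_k$ term in the bound.

```latex
% \mu-strong convexity of f: for all x, x' in its domain and any
% subgradient g \in \partial f(x'),
\[
  f(x) \;\ge\; f(x') + \langle g,\, x - x' \rangle + \frac{\mu}{2}\,\|x - x'\|_2^2 .
\]
```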