Cyclic Stochastic Optimization: Generalizations, Convergence, and Applications in Multi-Agent Systems
Hernandez Cuevas, Karla
Stochastic approximation (SA) is a powerful class of iterative algorithms for nonlinear root-finding that, through the natural connection between root-finding and minimization, can be used to minimize a loss function L(θ) with respect to a parameter vector θ when only noisy observations of L(θ) or its gradient are available. SA algorithms can be thought of as stochastic line-search methods in which the entire parameter vector is updated at each iteration. The cyclic approach to SA is a variant of SA procedures in which θ is divided into multiple subvectors that are updated one at a time in a cyclic manner. This dissertation focuses on the asymptotic properties of cyclic SA and of the generalized cyclic SA (GCSA) algorithm, a variant of cyclic SA in which the subvector to update may be selected according to a random variable or a predetermined pattern, and in which the noisy update direction can be based on any SA algorithm (e.g., stochastic gradient, Kiefer–Wolfowitz, or simultaneous perturbation SA). The convergence of GCSA, the asymptotic normality of GCSA (related to its rate of convergence), and the efficiency of GCSA relative to its non-cyclic counterpart are investigated both analytically and numerically. Specifically, conditions are obtained for the convergence with probability one of the GCSA iterates and for the asymptotic normality of the normalized iterates of a special case of GCSA. Further, an analytic expression is given for the asymptotic relative efficiency (with efficiency defined in terms of mean squared error) between a special case of GCSA and its non-cyclic counterpart. Finally, an application of the cyclic SA scheme to a multi-agent stochastic optimization problem is investigated. This dissertation also contains two appendices.
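The cyclic idea described above can be illustrated with a minimal sketch (not the dissertation's actual algorithm): a stochastic-gradient SA procedure on a hypothetical quadratic loss L(θ) = ½ θᵀAθ, where the two coordinates of θ play the role of subvectors updated one at a time per iteration. The matrix A, the noise level, and the gain sequence are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical quadratic loss L(theta) = 0.5 * theta^T A theta,
# whose true minimizer is the origin. Only noisy gradients are observed.
A = np.array([[3.0, 0.5],
              [0.5, 1.0]])

def noisy_grad(theta):
    """Gradient A @ theta corrupted by Gaussian measurement noise."""
    return A @ theta + 0.1 * rng.standard_normal(theta.shape)

theta = np.array([5.0, -3.0])
blocks = [np.array([0]), np.array([1])]   # subvectors of theta, updated one at a time

for k in range(2000):
    a_k = 1.0 / (k + 10)                  # decaying SA gain sequence
    for idx in blocks:                    # one cyclic pass over the subvectors
        g = noisy_grad(theta)
        theta[idx] -= a_k * g[idx]        # update only the current subvector

print(theta)  # approaches the minimizer at the origin
```

In the GCSA generalization studied in the dissertation, the fixed cyclic pass over `blocks` would be replaced by a random or patterned selection rule, and `noisy_grad` could be any SA update direction (e.g., an SPSA gradient estimate).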
The first appendix generalizes Theorem 2.2 in Fabian (1968) (a seminal paper in the SA literature that derives general conditions for the asymptotic normality of SA procedures) to make the result more applicable to some modern applications of SA including (but not limited to) the GCSA algorithm, certain root-finding SA algorithms, and certain second-order SA algorithms. The second appendix considers the problem of determining the presence and location of a static object within an area of interest by combining information from multiple sensors using a maximum-likelihood-based approach.
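The sensor-fusion problem in the second appendix can be sketched in a deliberately simplified form (the true location, sensor noise levels, and Gaussian measurement model are all illustrative assumptions, not the appendix's actual setup): when each sensor reports the object's position corrupted by independent Gaussian noise of known variance, the maximum-likelihood location estimate reduces to an inverse-variance weighted mean of the readings.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: three sensors each report a noisy 2-D position
# of a static object, with known per-sensor noise standard deviations.
true_loc = np.array([2.0, -1.0])
sigmas = np.array([0.2, 0.5, 1.0])

readings = true_loc + sigmas[:, None] * rng.standard_normal((3, 2))

# Under independent Gaussian noise, the ML estimate is the
# inverse-variance weighted mean: precise sensors get more weight.
weights = 1.0 / sigmas**2
mle = (weights[:, None] * readings).sum(axis=0) / weights.sum()

print(mle)  # close to true_loc
```

This closed form is special to the Gaussian case; with more realistic sensor models (e.g., range-only or detection-probability measurements, as a multi-sensor appendix might consider), the likelihood would instead be maximized numerically.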
Showing items related by title, author, creator and subject.
STUDY OF STOCHASTIC POLYAK STEP-SIZES FOR STOCHASTIC OPTIMIZATION Wang, Yichuan; 0000-0002-2927-5601 (Johns Hopkins University, USA, 2022-12-07) Stochastic gradient descent (SGD) is commonly used in solving finite sum optimization problems. The main parameter one needs to choose for this algorithm is the step-size. Adaptive step sizes are particularly appealing as ...
Advances in System Identification and Stochastic Optimization Wang, Long; 0000-0001-6328-8222 (Johns Hopkins University, USA, 2021-03-30) This work studies the framework of systems with subsystems, which has numerous practical applications, including system reliability estimation, sensor networks, and object detection. Consider a stochastic system composed ...
Topology Optimization for Eigenvalue Problems with Applications to Phononic Crystals and Stochastic Dynamics Yang, Yang (Johns Hopkins University, USA, 2016-08-08) Topology optimization is the process of exploring the optimal layout of material within a design domain. It is a free-form technique as material can be added or removed from any location, making it more general than sizing ...