dc.contributor.advisor: Naiman, Daniel Q
dc.creator: Hernandez Cuevas, Karla
dc.date.accessioned: 2019-03-07T03:41:06Z
dc.date.available: 2019-03-07T03:41:06Z
dc.date.created: 2017-08
dc.date.issued: 2017-07-18
dc.date.submitted: August 2017
dc.identifier.uri: http://jhir.library.jhu.edu/handle/1774.2/60200
dc.description.abstract: Stochastic approximation (SA) is a powerful class of iterative algorithms for nonlinear root-finding that can be used for minimizing a loss function, L(θ), with respect to a parameter vector θ, when only noisy observations of L(θ) or its gradient are available (through the natural connection between root-finding and minimization). SA algorithms can be thought of as stochastic line-search methods in which the entire parameter vector is updated at each iteration. The cyclic approach to SA is a variant of SA procedures where θ is divided into multiple subvectors that are updated one at a time in a cyclic manner. This dissertation focuses on studying the asymptotic properties of cyclic SA and of the generalized cyclic SA (GCSA) algorithm, a variant of cyclic SA where the subvector to update may be selected according to a random variable or according to a predetermined pattern, and where the noisy update direction can be based on the updates of any SA algorithm (e.g., stochastic gradient, Kiefer–Wolfowitz, or simultaneous perturbation SA). The convergence of GCSA, the asymptotic normality of GCSA (related to rate of convergence), and the efficiency of GCSA relative to its non-cyclic counterpart are investigated both analytically and numerically. Specifically, conditions are obtained for the convergence with probability one of the GCSA iterates and for the asymptotic normality of the normalized iterates of a special case of GCSA. Further, an analytic expression is given for the asymptotic relative efficiency (when efficiency is defined in terms of mean squared error) between a special case of GCSA and its non-cyclic counterpart. Finally, an application of the cyclic SA scheme to a multi-agent stochastic optimization problem is investigated. This dissertation also contains two appendices.
The first appendix generalizes Theorem 2.2 in Fabian (1968) (a seminal paper in the SA literature that derives general conditions for the asymptotic normality of SA procedures) to make the result more applicable to some modern applications of SA including (but not limited to) the GCSA algorithm, certain root-finding SA algorithms, and certain second-order SA algorithms. The second appendix considers the problem of determining the presence and location of a static object within an area of interest by combining information from multiple sensors using a maximum-likelihood-based approach.
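The cyclic update scheme described in the abstract can be illustrated with a minimal sketch. This is not the dissertation's GCSA algorithm, only a toy cyclic stochastic gradient procedure on a simple quadratic loss with noisy gradient observations: θ is split into two subvectors that are updated one at a time in a fixed (deterministic) cyclic order. The loss, the noise model, the gain sequence, and all function names here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_grad(theta):
    # Noisy gradient of the toy loss L(theta) = 0.5 * ||theta||^2
    # (true gradient is theta), observed with additive Gaussian noise.
    return theta + rng.normal(scale=0.1, size=theta.shape)

def cyclic_sa(theta0, blocks, n_iters=2000):
    """Cyclic SA sketch: at iteration k, update only one subvector
    (block of indices), cycling through the blocks in a fixed order."""
    theta = theta0.astype(float).copy()
    for k in range(n_iters):
        block = blocks[k % len(blocks)]   # deterministic cyclic pattern
        a_k = 1.0 / (k + 10)              # decaying gain sequence (assumed)
        g = noisy_grad(theta)
        theta[block] -= a_k * g[block]    # update only this subvector
    return theta

theta0 = np.array([5.0, -3.0, 2.0, -4.0])
blocks = [np.array([0, 1]), np.array([2, 3])]  # theta split into two subvectors
theta_hat = cyclic_sa(theta0, blocks)
print(np.linalg.norm(theta_hat))  # distance to the minimizer at the origin
```

In the GCSA framework studied in the dissertation, the fixed `k % len(blocks)` pattern above could instead be a random block selection, and the noisy gradient could be replaced by any SA update direction (e.g., a simultaneous perturbation estimate).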
dc.format.mimetype: application/pdf
dc.language.iso: en_US
dc.publisher: Johns Hopkins University
dc.subject: stochastic optimization
dc.subject: stochastic approximation
dc.subject: optimization
dc.subject: multi-agent optimization
dc.subject: randomized block-coordinate descent
dc.subject: system identification
dc.subject: cyclic stochastic optimization
dc.subject: cyclic stochastic approximation
dc.subject: stochastic gradient
dc.subject: asymptotic normality
dc.subject: cyclic
dc.title: Cyclic Stochastic Optimization: Generalizations, Convergence, and Applications in Multi-Agent Systems
dc.type: Thesis
thesis.degree.discipline: Applied Mathematics & Statistics
thesis.degree.grantor: Johns Hopkins University
thesis.degree.grantor: Whiting School of Engineering
thesis.degree.level: Doctoral
thesis.degree.name: Ph.D.
dc.date.updated: 2019-03-07T03:41:06Z
dc.type.material: text
thesis.degree.department: Applied Mathematics and Statistics
dc.contributor.committeeMember: Spall, James C
dc.contributor.committeeMember: Arora, Raman
dc.contributor.committeeMember: Robinson, Daniel P
dc.publisher.country: USA

