Advances in System Identification and Stochastic Optimization
This work studies the framework of systems composed of subsystems, which has numerous practical applications, including system reliability estimation, sensor networks, and object detection. Consider a stochastic system composed of multiple subsystems whose outputs follow common distributions such as the Gaussian, exponential, and multinomial. In Chapter 1, we aim to identify the parameters of the system based on structural knowledge of the system and the integration of data collected independently from multiple sources. Using the principles of maximum likelihood estimation, we provide formal conditions for the convergence of the estimates to the true full-system and subsystem parameters. The asymptotic normality of the estimates and its connection to the Fisher information matrix are also established, which is useful for constructing asymptotic or finite-sample confidence bounds. The maximum likelihood approach is then connected to general stochastic optimization via recursive least-squares estimation in Chapter 2. For stochastic optimization, we consider minimizing a loss function given only noisy function measurements and propose two general-purpose algorithms. In Chapter 3, the mixed simultaneous perturbation stochastic approximation (MSPSA) algorithm is introduced, which is designed for mixed-variable problems (a mixture of continuous and discrete variables). MSPSA fills the gap of handling mixed variables within the SPSA family and unifies the simultaneous perturbation framework, as both standard SPSA and discrete SPSA can be viewed as special cases of MSPSA. The almost sure convergence and the rate of convergence of the MSPSA iterates are also derived.
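The simultaneous-perturbation gradient estimate at the core of the SPSA family can be sketched as follows for the purely continuous case; the function names, gain constants, and test problem below are illustrative only, and the sketch omits the discrete-variable handling that distinguishes MSPSA:

```python
import numpy as np

def spsa_gradient(loss, theta, c=0.1, rng=None):
    # One simultaneous-perturbation gradient estimate: only two noisy
    # loss measurements are needed, regardless of the dimension of theta.
    rng = np.random.default_rng(rng)
    # Rademacher (+/-1) perturbation applied to all coordinates at once
    delta = rng.choice([-1.0, 1.0], size=theta.shape)
    y_plus = loss(theta + c * delta)
    y_minus = loss(theta - c * delta)
    return (y_plus - y_minus) / (2 * c * delta)

# Illustrative use: minimize a quadratic observed with additive noise
rng = np.random.default_rng(0)
loss = lambda th: np.sum(th ** 2) + 0.01 * rng.standard_normal()
theta = np.array([2.0, -1.5])
for k in range(200):
    a_k = 0.1 / (k + 1) ** 0.602  # a commonly used decaying gain sequence
    theta = theta - a_k * spsa_gradient(loss, theta, c=0.1, rng=rng)
```

The key property is that the measurement cost per iteration is constant in the problem dimension, which is what makes the simultaneous-perturbation framework attractive for high-dimensional derivative-free problems.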
The convergence results reveal that the finite-sample bound of MSPSA is identical to that of discrete SPSA when the problem contains only discrete variables, and that the asymptotic bound of MSPSA has the same order of magnitude as that of SPSA when the problem contains only continuous variables. In Chapter 4, the complex-step SPSA (CS-SPSA) algorithm is introduced, which uses complex-valued perturbations to improve the efficiency of standard SPSA. We prove that the CS-SPSA iterates converge almost surely to the optimum and achieve an accelerated convergence rate, faster than the standard rate of derivative-free stochastic optimization algorithms.