## A Fast Reduced-Space Algorithmic Framework for Sparse Optimization

**Author:** Tianyi Chen
**Advisor:** James C. Spall
**Committee members:** Amitabh Basu, Frank E. Curtis, Daniel P. Robinson
**Date issued:** 2018-08-20 (degree date December 2018)

**Abstract:** Optimization is a crucial scientific tool used throughout applied mathematics. In optimization one typically seeks the lowest value of a chosen objective function over a set of allowable inputs, i.e., one computes a minimizer. Once an optimization problem is formulated for a particular task of interest, an algorithm for locating such a minimizer is employed. For many applications the optimization problem may possess multiple solutions, some of which may be undesirable from the perspective of the application. In such settings a popular approach is to augment the objective function through regularization, which should be chosen carefully so that solutions of the regularized problem remain useful to the application of interest. Perhaps the most popular type of regularization is l1-regularization, which has received special attention over the last two decades. The motivation for incorporating l1-regularization is its sparsity-inducing and shrinkage properties, which have proven useful, both in theory and in practice, for improving the interpretability and accuracy of estimated models. Many methods have been proposed for solving l1-regularized problems.
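The sparsity-inducing property mentioned above can be seen in the proximal operator of the l1 term (soft-thresholding), which is the building block of many first-order methods for these problems. The sketch below is a generic illustration of that operator, not an excerpt from the thesis:

```python
import numpy as np

def soft_threshold(x, lam):
    """Proximal operator of lam * ||x||_1.

    Shrinks every entry toward zero by lam, and sets entries with
    |x_i| <= lam exactly to zero -- this is the mechanism by which
    l1-regularization produces sparse solutions.
    """
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

x = np.array([3.0, -0.5, 0.2, -2.0])
print(soft_threshold(x, 1.0))  # entries with magnitude <= 1 become exactly 0
```

Note that small entries are zeroed out exactly (sparsity) while large entries are uniformly shrunk (the shrinkage effect).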
Roughly, these divide into first-order methods (i.e., those that use only first derivatives), which have low per-iteration computational cost but are often inefficient on realistic applications, and second-order methods (i.e., those that use first and second derivatives), which have high per-iteration cost but are robust and efficient in terms of the number of iterations typically required. In this thesis we present a new second-order framework that aims to combine the strengths of first- and second-order methods. Specifically, our framework uses a limited amount of second-derivative information by employing a mechanism for predicting the variables that will be zero at a solution; in this manner, the per-iteration computational cost can be controlled. Moreover, by using second-derivative information within certain computed subspaces, our framework is highly efficient and robust in terms of the overall number of iterations typically required. We present numerical comparisons with other state-of-the-art first- and second-order methods that validate our approach, and we further investigate an implementation of our approach that uses parallel computation.

**Identifier:** http://jhir.library.jhu.edu/handle/1774.2/60069
**Publisher:** Johns Hopkins University (Whiting School of Engineering), USA
**Subjects:** optimization; sparsity-inducing regularization
**Department:** Applied Mathematics and Statistics
**Degree:** Ph.D. (Doctoral), Mathematics
**Type:** Thesis
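To make the reduced-space idea concrete, the sketch below shows one plausible reading of it for a least-squares objective: a cheap first-order (proximal) step predicts which variables will be zero, and a Newton-type step using second derivatives is then taken only over the predicted nonzero variables. All names and details here are illustrative assumptions, not the thesis's actual algorithm:

```python
import numpy as np

def reduced_space_step(A, b, x, lam):
    """One illustrative reduced-space iteration for
        min_x 0.5 * ||A x - b||^2 + lam * ||x||_1.

    Step 1: predict the support (nonzero variables) from a proximal
            gradient step -- this uses only first-order information.
    Step 2: solve the problem exactly on that subspace using the
            reduced Hessian -- second-order information restricted to
            the predicted nonzeros, keeping the per-step cost small.
    """
    g = A.T @ (A @ x - b)  # gradient of the smooth part
    # Proximal gradient trial point (unit step); zeros predict the support.
    trial = np.sign(x - g) * np.maximum(np.abs(x - g) - lam, 0.0)
    free = trial != 0.0    # predicted nonzero (free) variables

    x_new = np.zeros_like(x)
    if free.any():
        Af = A[:, free]            # columns for the free variables
        H = Af.T @ Af              # reduced Hessian
        # Optimality on the subspace with fixed signs s = sign(trial):
        #   Af^T (Af x_f - b) + lam * s = 0
        rhs = Af.T @ b - lam * np.sign(trial[free])
        x_new[free] = np.linalg.solve(H, rhs)
    return x_new
```

With `A` the identity, a single step already recovers the soft-thresholded solution; on general problems a practical method would of course add safeguards (line search, support corrections) that this sketch omits.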
##### Original bundle
CHEN-DISSERTATION-2018.pdf (4.13 MB, PDF)