A Fast Reduced-Space Algorithmic Framework for Sparse Optimization

dc.contributor.advisor: Spall, James C
dc.contributor.committeeMember: Basu, Amitabh
dc.contributor.committeeMember: Curtis, Frank E
dc.contributor.committeeMember: Robinson, Daniel P
dc.creator: Chen, Tianyi
dc.date.accessioned: 2019-03-07T03:08:34Z
dc.date.available: 2019-03-07T03:08:34Z
dc.date.created: 2018-12
dc.date.issued: 2018-08-20
dc.date.submitted: December 2018
dc.date.updated: 2019-03-07T03:08:34Z
dc.description.abstract: Optimization is a crucial scientific tool used throughout applied mathematics. In optimization one typically seeks the lowest value of a chosen objective function over a set of allowable inputs, i.e., one computes a minimizer. Once an optimization problem is formulated for a task of interest, an algorithm for locating such a minimizer is employed. For many applications the optimization problem may possess multiple solutions, some of which may be undesirable from the perspective of the application. In such settings, a popular approach is to augment the objective function with a regularization term, chosen carefully so that solutions of the regularized problem remain useful to the application of interest.

Perhaps the most popular type of regularization is l1-regularization, which has received special attention over the last two decades. The motivation for incorporating l1-regularization is its sparsity-inducing and shrinkage properties, which improve both the interpretability and the accuracy of model estimation, in theory and in practice. Many methods have been proposed for solving l1-regularized problems. Roughly, there are first-order methods (those that use only first derivatives), which have low per-iteration computational cost but are often inefficient on realistic applications, and second-order methods (those that use first and second derivatives), which have high per-iteration computational cost but are robust and efficient in the number of iterations typically required.

In this thesis we present a new second-order framework that aims to combine the strengths of first-order and second-order methods. Specifically, our framework uses a limited amount of second-derivative information by employing a mechanism for predicting the variables that will be zero at a solution. In this manner, the per-iteration computational cost can be controlled. Moreover, by using second-derivative information within certain computed subspaces, our framework is highly efficient and robust in the overall number of iterations typically required. We present numerical comparisons with other state-of-the-art first- and second-order methods that validate our approach, and we further investigate an implementation that uses parallel computation.
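To make the sparsity-inducing effect of l1-regularization concrete, the following is an illustrative sketch only: a plain first-order proximal-gradient (ISTA) iteration on a tiny made-up l1-regularized least-squares problem, followed by a step in the spirit of the reduced-space idea (restricting attention to the predicted nonzero variables). This is not the thesis's algorithm; the problem data, step size, and all names here are invented for demonstration.

```python
# Sketch: ISTA for  min_x 0.5*||A x - b||^2 + lam*||x||_1  with diagonal A.
# The soft-threshold (proximal) step drives small coordinates exactly to zero,
# which is the sparsity-inducing property discussed in the abstract.

def soft_threshold(z, t):
    """Proximal operator of t*|.|: shrink z toward 0, zeroing small entries."""
    if z > t:
        return z - t
    if z < -t:
        return z + t
    return 0.0

# Made-up 2-variable example: A = diag(2, 1), b = (4, 0.4), lam = 1.
a = [2.0, 1.0]
b = [4.0, 0.4]
lam = 1.0
alpha = 0.25          # step size 1/L with L = max_i a_i^2 = 4

x = [0.0, 0.0]
for _ in range(50):
    grad = [a[i] * (a[i] * x[i] - b[i]) for i in range(2)]    # A^T(A x - b)
    x = [soft_threshold(x[i] - alpha * grad[i], alpha * lam) for i in range(2)]

print(x)   # second coordinate is exactly zero; first settles at 1.75

# Reduced-space flavor: once the zero variables are predicted, second-order
# (here: exact stationarity) information is only needed on the small support.
support = [i for i in range(2) if x[i] != 0.0]
for i in support:
    # On the support, solve  a_i^2 * x_i = a_i*b_i - lam*sign(x_i)  directly.
    x[i] = (a[i] * b[i] - lam * (1.0 if x[i] > 0 else -1.0)) / (a[i] ** 2)
print(x)   # agrees with the ISTA limit
```

The point of the second stage is only to convey the structure: a first-order mechanism identifies which variables are zero, after which curvature information need only be applied in the much smaller subspace of nonzero variables.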
dc.format.mimetype: application/pdf
dc.identifier.uri: http://jhir.library.jhu.edu/handle/1774.2/60069
dc.language.iso: en_US
dc.publisher: Johns Hopkins University
dc.publisher.country: USA
dc.subject: optimization
dc.subject: sparsity-inducing regularization
dc.title: A Fast Reduced-Space Algorithmic Framework for Sparse Optimization
dc.type: Thesis
dc.type.material: text
thesis.degree.department: Applied Mathematics and Statistics
thesis.degree.discipline: Mathematics
thesis.degree.grantor: Johns Hopkins University
thesis.degree.grantor: Whiting School of Engineering
thesis.degree.level: Doctoral
thesis.degree.name: Ph.D.
Files
Original bundle:
- CHEN-DISSERTATION-2018.pdf (4.13 MB, Adobe Portable Document Format)
License bundle:
- PROQUEST_LICENSE.txt (5.84 KB, Plain Text)
- LICENSE.txt (2.68 KB, Plain Text)