An online conjugate gradient algorithm for large-scale data analysis in machine learning.

Citation metadata

Date: Feb. 2021
From: AIMS Mathematics(Vol. 6, Issue 2)
Publisher: AIMS Press
Document Type: Article
Length: 4,631 words
Lexile Measure: 1650L


Abstract:

In recent years, the amount of available data has been growing exponentially, and large-scale data is becoming ubiquitous. Machine learning is key to deriving insight from this deluge of data. In this paper, we focus on large-scale data analysis, especially of classification data, and propose an online conjugate gradient (CG) descent algorithm. Our algorithm draws on a recent improved Fletcher-Reeves (IFR) CG method proposed by Jiang and Jian [13], as well as a recent variance-reduction approach for stochastic gradient descent by Johnson and Zhang [15]. In theory, we prove that the proposed online algorithm achieves a linear convergence rate under the strong Wolfe line search when the objective function is smooth and strongly convex. Comparison results on several benchmark classification datasets demonstrate that our approach is promising for solving large-scale machine learning problems, in terms of both area under the curve (AUC) value and convergence behavior.

Keywords: machine learning; online learning; stochastic optimization; conjugate gradient; variance reduction

Mathematics Subject Classification: 65K05, 68W27
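To make the abstract's description concrete, the sketch below combines the two named ingredients: SVRG-style variance-reduced gradient estimates in the spirit of Johnson and Zhang [15], and a classical Fletcher-Reeves conjugate direction. This is an illustrative sketch under assumptions, not the paper's IFR-CG algorithm: the names vr_cg, stoch_grad, and full_grad are hypothetical, the coefficient beta_k = ||g_k||^2 / ||g_{k-1}||^2 is the classical FR formula rather than the improved variant of Jiang and Jian [13], and a fixed step size stands in for the strong Wolfe line search used in the paper's convergence analysis.

    import numpy as np

    def vr_cg(full_grad, stoch_grad, w0, n_samples,
              step=0.05, epochs=10, inner=None, seed=0):
        """Sketch: SVRG-style variance reduction with classical
        Fletcher-Reeves conjugate directions (not the paper's IFR-CG)."""
        rng = np.random.default_rng(seed)
        w = w0.astype(float).copy()
        m = inner if inner is not None else n_samples  # inner-loop length
        for _ in range(epochs):
            w_snap = w.copy()            # snapshot point for this epoch
            mu = full_grad(w_snap)       # full gradient at the snapshot
            g_prev, d = None, None
            for _ in range(m):
                i = rng.integers(n_samples)
                # variance-reduced gradient estimate (Johnson & Zhang style)
                g = stoch_grad(w, i) - stoch_grad(w_snap, i) + mu
                if d is None:
                    d = -g               # first direction: steepest descent
                else:
                    # classical FR coefficient ||g_k||^2 / ||g_{k-1}||^2
                    beta = float(g @ g) / float(g_prev @ g_prev)
                    d = -g + beta * d
                g_prev = g
                w = w + step * d         # fixed step in place of strong Wolfe
        return w

A toy usage on l2-regularized logistic regression (strongly convex, matching the abstract's assumptions; the data and the regularization weight 0.1 are made up for illustration):

    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 5))
    y = np.sign(X @ np.ones(5) + 0.1 * rng.normal(size=200))

    def stoch_grad(w, i):
        # gradient of log(1 + exp(-y_i x_i.w)) + 0.05 * ||w||^2 at sample i
        return (-y[i] / (1.0 + np.exp(y[i] * (X[i] @ w)))) * X[i] + 0.1 * w

    def full_grad(w):
        return sum(stoch_grad(w, i) for i in range(len(y))) / len(y)

    w_est = vr_cg(full_grad, stoch_grad, np.zeros(5), len(y))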

Source Citation

Gale Document Number: GALE|A693494225