
- Haishan Ye, Luo Luo, Zhihua Zhang
- ArXiv
- 2017

Many machine learning models are reformulated as optimization problems, so solving large-scale optimization problems is important in big data applications. Recently, subsampled Newton methods have attracted much attention due to their efficiency at each iteration, rectifying a weakness of the ordinary Newton method of…
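The subsampled Newton idea the abstract describes — keep the exact gradient but estimate the Hessian from a random subsample of data points — can be sketched as follows. This is an illustrative implementation for L2-regularized logistic regression, not the paper's algorithm; the function name, sampling scheme, and parameters are our own assumptions.

```python
import numpy as np

def subsampled_newton(A, y, lam=1.0, sample_size=100, iters=20, seed=0):
    """Illustrative subsampled Newton method for L2-regularized
    logistic regression: the gradient uses all n rows, while the
    Hessian is estimated from a uniform subsample of rows."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    w = np.zeros(d)
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-(A @ w)))      # sigmoid predictions
        grad = A.T @ (p - y) / n + lam * w       # full gradient
        idx = rng.choice(n, size=sample_size, replace=False)
        S = A[idx]
        ws = p[idx] * (1 - p[idx])               # per-row curvature weights
        # Sampled Hessian: (1/s) * S^T diag(ws) S + lam * I
        H = (S * ws[:, None]).T @ S / sample_size + lam * np.eye(d)
        w -= np.linalg.solve(H, grad)            # Newton step, sampled Hessian
    return w

rng = np.random.default_rng(1)
A = rng.standard_normal((500, 5))
w_true = rng.standard_normal(5)
y = (A @ w_true > 0).astype(float)
w = subsampled_newton(A, y)
```

Each iteration solves a d-by-d system built from only `sample_size` of the n rows, which is where the per-iteration saving over the exact Newton method comes from.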

- Yujun Li, Kaichun Mo, Haishan Ye
- AAAI
- 2016

The Kaczmarz algorithm is an efficient iterative algorithm for solving overdetermined consistent systems of linear equations. During each updating step, Kaczmarz chooses a hyperplane based on an individual equation and projects the current estimate of the exact solution onto that space to get a new estimate. Many variants of the Kaczmarz algorithm have been proposed on…
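The projection step described above can be sketched as follows. This is the classic randomized Kaczmarz iteration (row i sampled with probability proportional to its squared norm), shown only to illustrate the update; it is not the variant proposed in the paper.

```python
import numpy as np

def kaczmarz(A, b, iters=2000, seed=0):
    """Randomized Kaczmarz for a consistent system Ax = b.
    Each step projects the current estimate onto the hyperplane
    {x : A[i] @ x = b[i]} defined by one sampled equation."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    x = np.zeros(n)
    # Sample rows with probability proportional to their squared norm.
    probs = np.sum(A**2, axis=1) / np.sum(A**2)
    for _ in range(iters):
        i = rng.choice(m, p=probs)
        ai = A[i]
        # Orthogonal projection onto the chosen hyperplane.
        x += (b[i] - ai @ x) / (ai @ ai) * ai
    return x

# Overdetermined but consistent system: b lies in the range of A.
A = np.array([[2.0, 1.0], [1.0, 3.0], [3.0, 4.0]])
x_true = np.array([1.0, -1.0])
b = A @ x_true
x = kaczmarz(A, b)
```

Because each projection touches a single row of A, the per-iteration cost is O(n) rather than O(mn), which is what makes the method attractive for very large systems.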

- Haishan Ye, Zhihua Zhang
- 2015

In this paper, we study the subspace embedding problem and obtain the following results: 1. We extend the results on approximate matrix multiplication from the Frobenius norm to the spectral norm. Assume matrices A and B have stable rank at most r and rank at most r̃, respectively. Let S be a subspace embedding matrix with l rows, where l depends on the stable rank; then…
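The approximate matrix multiplication setup can be illustrated as follows. This toy example uses a plain Gaussian sketch and measures error in the Frobenius norm (the classical guarantee); the paper's contribution is extending such bounds to the spectral norm, which this sketch does not demonstrate. The dimensions and sketch size are arbitrary choices of ours.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d1, d2, l = 2000, 10, 8, 400

A = rng.standard_normal((n, d1))
B = rng.standard_normal((n, d2))

# Gaussian sketching matrix with l rows, scaled so E[S.T @ S] = I.
S = rng.standard_normal((l, n)) / np.sqrt(l)

exact = A.T @ B                  # exact product, touches all n rows
approx = (S @ A).T @ (S @ B)     # sketched product in the l-dim space

# Classical bound: E-error is about ||A||_F ||B||_F / sqrt(l).
rel_err = np.linalg.norm(approx - exact) / (np.linalg.norm(A) * np.linalg.norm(B))
```

Once A and B are sketched, the product is computed on l-row matrices instead of n-row matrices, trading a controlled approximation error for a large reduction in cost when l ≪ n.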

- Haishan Ye, Qiaoming Ye, Zhihua Zhang
- ArXiv
- 2016

Generalized matrix approximation plays a fundamental role in many machine learning problems, such as CUR decomposition, kernel approximation, and low-rank matrix approximation. Especially as today's applications involve larger and larger datasets, more efficient generalized matrix approximation algorithms have become a crucially important research…

- Haishan Ye, Luo Luo, Zhihua Zhang
- ICML
- 2017

Many machine learning models are reformulated as optimization problems, so solving large-scale optimization problems is important in big data applications. Recently, stochastic second-order methods have attracted much attention due to their efficiency at each iteration, rectifying a weakness of the ordinary Newton method of…

- Haishan Ye, Yujun Li, Zhihua Zhang
- ArXiv
- 2015

Prior optimal CUR decomposition and near-optimal column reconstruction methods have been established by combining BSS sampling and adaptive sampling. In this paper, we propose a new approach to optimal CUR decomposition and near-optimal column reconstruction using only leverage score sampling. In our approach, both BSS sampling and adaptive…
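Leverage score sampling, the single ingredient the abstract highlights, can be sketched as follows. This illustrates only the standard building block — sampling columns in proportion to their rank-k leverage scores and measuring the reconstruction error of the resulting column subset — not the paper's full CUR construction; function name and sizes are our own choices.

```python
import numpy as np

def leverage_score_column_sample(A, k, c, seed=0):
    """Sample c columns of A with probabilities proportional to their
    rank-k leverage scores, computed from the top-k right singular
    vectors -- a standard ingredient of CUR-type decompositions."""
    rng = np.random.default_rng(seed)
    _, _, Vt = np.linalg.svd(A, full_matrices=False)
    scores = np.sum(Vt[:k] ** 2, axis=0)   # rank-k column leverage scores
    probs = scores / scores.sum()
    idx = rng.choice(A.shape[1], size=c, replace=True, p=probs)
    return A[:, idx]

rng = np.random.default_rng(2)
# Nearly rank-4 matrix plus small noise.
A = rng.standard_normal((100, 4)) @ rng.standard_normal((4, 60))
A += 0.01 * rng.standard_normal((100, 60))

C = leverage_score_column_sample(A, k=4, c=20)
# Project A onto the span of the sampled columns C.
residual = A - C @ np.linalg.pinv(C) @ A
rel_err = np.linalg.norm(residual) / np.linalg.norm(A)
```

For a matrix that is close to rank k, columns sampled by leverage scores span the dominant subspace with high probability, so projecting A onto their span leaves a residual on the order of the noise.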

- Haishan Ye, Zhihua Zhang
- ArXiv
- 2017

Optimization plays a key role in machine learning. Recently, stochastic second-order methods have attracted much attention due to their low computational cost per iteration. However, these algorithms may perform poorly, especially when the Hessian is hard to approximate well and efficiently. As far as we know, there is no effective way to handle this…

- Haishan Ye, Luo Luo, Zhihua Zhang
- ArXiv
- 2016

Many machine learning models depend on solving a large-scale optimization problem. Recently, subsampled Newton methods have attracted much attention due to their efficiency at each iteration, rectifying a weakness of the ordinary Newton method, which suffers a high cost at each iteration despite its fast convergence rate. In…
