
- Yujun Li, Kaichun Mo, Haishan Ye
- AAAI
- 2016

The Kaczmarz algorithm is an efficient iterative method for solving overdetermined consistent systems of linear equations. At each update step, Kaczmarz chooses the hyperplane defined by an individual equation and projects the current estimate of the exact solution onto that hyperplane to obtain a new estimate. Many variants of the Kaczmarz algorithm have been proposed on…
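The projection step the abstract describes can be sketched as below. This is a generic randomized Kaczmarz implementation, not code from the paper; sampling rows with probability proportional to squared row norms is one common variant (the paper studies others).

```python
import numpy as np

def kaczmarz(A, b, iters=1000, seed=0):
    """Randomized Kaczmarz for a consistent system A x = b.

    At each step, pick a row i and project the current iterate onto
    the hyperplane {x : A[i] @ x = b[i]}.
    """
    rng = np.random.default_rng(seed)
    m, n = A.shape
    x = np.zeros(n)
    # One common variant: sample rows with probability proportional
    # to their squared Euclidean norms.
    probs = np.linalg.norm(A, axis=1) ** 2
    probs /= probs.sum()
    for _ in range(iters):
        i = rng.choice(m, p=probs)
        a = A[i]
        # Orthogonal projection onto the chosen hyperplane.
        x += (b[i] - a @ x) / (a @ a) * a
    return x
```

For a consistent overdetermined system, the iterates converge linearly in expectation to the exact solution.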

- Haishan Ye, Zhihua Zhang
- 2015

In this paper, we study the subspace embedding problem and obtain the following results: 1. We extend the results on approximate matrix multiplication from the Frobenius norm to the spectral norm. Assume matrices A and B have stable rank at most r and r̃, respectively. Let S be a subspace embedding matrix whose number of rows l depends on the stable rank; then…
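The approximate matrix multiplication setup in the abstract can be illustrated with a plain Gaussian subspace embedding — one standard choice of S, not necessarily the construction analyzed in the paper:

```python
import numpy as np

def sketch_multiply(A, B, l, seed=0):
    """Approximate A.T @ B via a Gaussian subspace embedding S with l rows.

    S has i.i.d. N(0, 1/l) entries, so E[S.T @ S] = I and
    (S @ A).T @ (S @ B) approximates A.T @ B.
    """
    rng = np.random.default_rng(seed)
    S = rng.standard_normal((l, A.shape[0])) / np.sqrt(l)
    return (S @ A).T @ (S @ B)
```

With l growing like the (stable) ranks of A and B divided by the squared accuracy, the error is small relative to the product of the spectral norms of A and B.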

- Haishan Ye, Qiaoming Ye, Zhihua Zhang
- ArXiv
- 2016

Generalized matrix approximation plays a fundamental role in many machine learning problems, such as CUR decomposition, kernel approximation, and low-rank matrix approximation. Especially as today's applications involve larger and larger datasets, more efficient generalized matrix approximation algorithms have become a crucially important research…

- Haishan Ye, Yujun Li, Zhihua Zhang
- ArXiv
- 2015

Prior optimal CUR decomposition and near-optimal column reconstruction methods were established by combining BSS sampling and adaptive sampling. In this paper, we propose a new approach to optimal CUR decomposition and near-optimal column reconstruction that uses only leverage score sampling. In our approach, both BSS sampling and adaptive…
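Leverage score sampling, the core primitive the abstract names, can be sketched as follows. This is an illustrative column sampler, not the paper's full CUR algorithm; the paper's estimators and sample sizes differ.

```python
import numpy as np

def leverage_score_sample(A, k, c, seed=0):
    """Sample c columns of A by rank-k leverage scores.

    The rank-k leverage score of column j is the squared norm of the
    j-th column of the top-k right singular vectors; the scores sum to k.
    """
    rng = np.random.default_rng(seed)
    _, _, Vt = np.linalg.svd(A, full_matrices=False)
    scores = np.linalg.norm(Vt[:k], axis=0) ** 2
    probs = scores / scores.sum()
    idx = rng.choice(A.shape[1], size=c, replace=True, p=probs)
    return A[:, idx], idx
```

For an (approximately) rank-k matrix, projecting A onto the span of the sampled columns yields a near-optimal column-based reconstruction.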

- Haishan Ye, Luo Luo, Zhihua Zhang
- ArXiv
- 2016

Many machine learning models depend on solving a large-scale optimization problem. Recently, sub-sampled Newton methods have attracted much attention for their efficiency at each iteration, rectifying a weakness of the ordinary Newton method: a high cost at each iteration despite a high convergence rate. In…
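The sub-sampled Newton idea can be sketched as below for l2-regularized logistic regression — an illustrative choice of objective, not necessarily the paper's setting. The gradient is computed exactly, while the Hessian is estimated from a random subsample of rows, which is what makes each iteration cheap.

```python
import numpy as np

def subsampled_newton(X, y, iters=20, sample=100, lam=1e-1, seed=0):
    """Sub-sampled Newton for l2-regularized logistic regression.

    Exact gradient over all n rows; Hessian estimated from `sample`
    randomly chosen rows, then a Newton step w -= H^{-1} g.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        grad = X.T @ (p - y) / n + lam * w
        # Subsample rows to estimate the Hessian cheaply.
        idx = rng.choice(n, size=min(sample, n), replace=False)
        Xs, ps = X[idx], p[idx]
        weights = ps * (1.0 - ps)
        H = (Xs.T * weights) @ Xs / len(idx) + lam * np.eye(d)
        w -= np.linalg.solve(H, grad)
    return w
```

Because the gradient is exact, the minimizer remains a fixed point; the Hessian subsample only affects the convergence rate, not the limit.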

- Haishan Ye, Luo Luo, Zhihua Zhang
- ArXiv
- 2017

Many machine learning models are reformulated as optimization problems, so solving large-scale optimization problems is important in big data applications. Recently, sub-sampled Newton methods have attracted much attention for their efficiency at each iteration, rectifying a weakness of the ordinary Newton method of…

- Haishan Ye, Zhihua Zhang
- ArXiv
- 2015
