Abstract
We present two stochastic variance-reduced PCA algorithms and their convergence analyses. By deriving explicit forms of the step size, epoch length, and batch size required for the optimal runtime, we show that the proposed algorithms attain this runtime for any batch size. We also establish global convergence of the algorithms based on a novel approach, which studies the optimality gap as a ratio of two expectation terms. Our analysis framework is general and can be applied to other stochastic variance-reduced PCA algorithms to improve their analyses. Moreover, we introduce practical implementations of the algorithms that do not require hyper-parameters. The experimental results show that the proposed methods outperform other stochastic variance-reduced PCA algorithms regardless of the batch size.
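For context, the sketch below illustrates the general class of methods the abstract refers to: a VR-PCA-style stochastic variance-reduced power iteration (in the spirit of Shamir, 2015) for the leading eigenvector, with a mini-batch inner loop. It is not the paper's algorithm; the function name `vr_pca` and the default step size, epoch length, and batch size are illustrative assumptions rather than the values derived in the paper.

```python
# A minimal VR-PCA-style sketch for the leading eigenvector of the sample covariance.
# NOT the paper's method; eta, m, and b are placeholder heuristics, not the derived
# optimal choices.
import numpy as np

def vr_pca(X, epochs=20, m=None, eta=None, b=1, seed=0):
    """Stochastic variance-reduced power iteration on A = (1/n) X^T X."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    m = m or n                      # epoch length (assumption: one pass per epoch)
    eta = eta or 1.0 / np.sqrt(n)   # heuristic step size, not the paper's explicit form
    w_tilde = rng.standard_normal(d)
    w_tilde /= np.linalg.norm(w_tilde)
    for _ in range(epochs):
        # Full "anchor" product A @ w_tilde, computed once per epoch.
        u = X.T @ (X @ w_tilde) / n
        w = w_tilde.copy()
        for _ in range(m):
            idx = rng.integers(0, n, size=b)  # mini-batch of size b
            Xb = X[idx]
            # Variance-reduced stochastic update, then re-normalize to the unit sphere.
            g = (Xb.T @ (Xb @ w) - Xb.T @ (Xb @ w_tilde)) / b + u
            w = w + eta * g
            w /= np.linalg.norm(w)
        w_tilde = w
    return w_tilde

# Toy usage: the Rayleigh quotient of the output should approach the top eigenvalue.
if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.standard_normal((2000, 50)) @ np.diag(np.linspace(2.0, 0.1, 50))
    w = vr_pca(X)
    A = X.T @ X / X.shape[0]
    print("Rayleigh quotient:", w @ A @ w, " top eigenvalue:", np.linalg.eigvalsh(A)[-1])
```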
Original language | English (US) |
---|---|
Pages (from-to) | 4302-4312 |
Number of pages | 11 |
Journal | Proceedings of Machine Learning Research |
Volume | 108 |
State | Published - 2020 |
Event | 23rd International Conference on Artificial Intelligence and Statistics, AISTATS 2020 - Virtual, Online |
Duration | Aug 26 2020 → Aug 28 2020 |
ASJC Scopus subject areas
- Artificial Intelligence
- Software
- Control and Systems Engineering
- Statistics and Probability