2019-06-25-diakonikolas19c.md

File metadata and controls

49 lines (49 loc) · 2.1 KB
---
abstract: "We study the question of whether parallelization in the exploration of the feasible set can be used to speed up convex optimization, in the local oracle model of computation. We show that the answer is negative for both deterministic and randomized algorithms applied to essentially any of the interesting geometries and nonsmooth, weakly-smooth, or smooth objective functions. In particular, we show that it is not possible to obtain a polylogarithmic (in the sequential complexity of the problem) number of parallel rounds with a polynomial (in the dimension) number of queries per round. In the majority of these settings and when the dimension of the space is polynomial in the inverse target accuracy, our lower bounds match the oracle complexity of sequential convex optimization, up to at most a logarithmic factor in the dimension, which makes them (nearly) tight. Prior to our work, lower bounds for parallel convex optimization algorithms were only known in a small fraction of the settings considered in this paper, mainly applying to Euclidean ($\ell_2$) and $\ell_\infty$ spaces. Our work provides a more general and streamlined approach for proving lower bounds in the setting of parallel convex optimization."
section: contributed
title: Lower Bounds for Parallel and Randomized Convex Optimization
layout: inproceedings
series: Proceedings of Machine Learning Research
id: diakonikolas19c
month: 0
tex_title: Lower Bounds for Parallel and Randomized Convex Optimization
firstpage: 1132
lastpage: 1157
page: 1132-1157
order: 1132
cycles: false
bibtex_author: Diakonikolas, Jelena and Guzm\'{a}n, Crist\'{o}bal
author:
- given: Jelena
  family: Diakonikolas
- given: Cristóbal
  family: Guzmán
date: 2019-06-25
address:
publisher: PMLR
container-title: Proceedings of the Thirty-Second Conference on Learning Theory
volume: '99'
genre: inproceedings
issued:
  date-parts:
  - 2019
  - 6
  - 25
pdf:
extras:
---