2021-07-01-aviv21a.md

File metadata and controls

52 lines (52 loc) · 1.83 KB
---
title: "Asynchronous Distributed Learning : Adapting to Gradient Delays without Prior Knowledge"
abstract: "We consider stochastic convex optimization problems, where several machines act asynchronously in parallel while sharing a common memory. We propose a robust training method for the constrained setting and derive non-asymptotic convergence guarantees that do not depend on prior knowledge of update delays, objective smoothness, and gradient variance. In contrast, existing methods for this setting crucially rely on this prior knowledge, which renders them unsuitable for essentially all shared-resource computational environments, such as clouds and data centers. Concretely, existing approaches are unable to accommodate changes in the delays which result from dynamic allocation of the machines, while our method implicitly adapts to such changes."
layout: inproceedings
series: Proceedings of Machine Learning Research
publisher: PMLR
issn: 2640-3498
id: aviv21a
month: 0
tex_title: "Asynchronous Distributed Learning : Adapting to Gradient Delays without Prior Knowledge"
firstpage: 436
lastpage: 445
page: 436-445
order: 436
cycles: false
bibtex_author: Aviv, Rotem Zamir and Hakimi, Ido and Schuster, Assaf and Levy, Kfir Yehuda
author:
- given: Rotem Zamir
  family: Aviv
- given: Ido
  family: Hakimi
- given: Assaf
  family: Schuster
- given: Kfir Yehuda
  family: Levy
date: 2021-07-01
address:
container-title: Proceedings of the 38th International Conference on Machine Learning
volume: 139
genre: inproceedings
issued:
  date-parts:
  - 2021
  - 7
  - 1
pdf:
extras:
---