Improving scalability of parallel CNN training by adaptively adjusting parameter update frequency

Sunwoo Lee*, Qiao Kang, Reda Al-Bahrani, Ankit Agrawal, Alok Choudhary, Wei-keng Liao

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Synchronous SGD with data parallelism, the most popular parallelization strategy for CNN training, suffers from the expensive communication cost of averaging gradients among all workers. The iterative parameter updates of SGD cause frequent communications, which become the performance bottleneck. In this paper, we propose a lazy parameter update algorithm that adaptively adjusts the parameter update frequency to address this expensive communication cost. Our algorithm continues to accumulate gradients as long as the difference between the accumulated gradients and the latest gradients is sufficiently small. The less frequent parameter updates reduce the per-iteration communication cost while maintaining the model accuracy. Our experimental results demonstrate that the lazy update method remarkably improves scalability while maintaining the model accuracy. For ResNet50 training on ImageNet, the proposed algorithm achieves a significantly higher speedup (739.6 on 2048 Cori KNL nodes) than vanilla synchronous SGD (276.6), while the model accuracy is nearly unaffected (<0.2% difference).
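
The paper's full algorithm is not reproduced on this page, but the idea described in the abstract can be illustrated with a small sketch. The following is a hypothetical illustration, not the authors' implementation: it assumes flat NumPy parameter and gradient vectors, one worker per MPI rank via mpi4py, and a made-up threshold tau on the normalized difference between the accumulated gradients and the latest gradient.

```python
# Hypothetical sketch of a lazy parameter update step, based only on the
# abstract above; the names (lazy_update_step, tau) and the exact
# difference criterion are illustrative, not the authors' API.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD


def lazy_update_step(params, grad, accum, num_accum, lr, tau):
    """One iteration: accumulate gradients locally, and only allreduce and
    update when the accumulated gradients drift from the latest gradient."""
    accum = accum + grad
    num_accum += 1

    # One plausible criterion: normalized difference between the running
    # average of the accumulated gradients and the latest gradient.
    avg = accum / num_accum
    diff = np.linalg.norm(avg - grad) / (np.linalg.norm(grad) + 1e-12)

    # Keep the skip/update decision consistent across workers with a cheap
    # scalar allreduce, so no rank is left waiting on the full allreduce.
    diff = comm.allreduce(diff, op=MPI.MAX)

    if diff < tau:
        # Gradients are still similar: defer the expensive gradient
        # averaging and keep accumulating locally (no parameter update).
        return params, accum, num_accum

    # Otherwise synchronize: average the accumulated gradients over all
    # workers with a single allreduce, then apply the parameter update.
    global_sum = np.empty_like(accum)
    comm.Allreduce(accum, global_sum, op=MPI.SUM)
    global_avg = global_sum / (comm.Get_size() * num_accum)
    params = params - lr * global_avg
    return params, np.zeros_like(accum), 0
```

The point of the sketch is the control flow: the full-gradient allreduce, which dominates per-iteration communication at scale, is skipped whenever the accumulated gradients remain a good proxy for the latest gradient, which is how the per-iteration communication cost drops while accuracy is preserved.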

Original language: English (US)
Pages (from-to): 10-23
Number of pages: 14
Journal: Journal of Parallel and Distributed Computing
Volume: 159
DOIs
State: Published - Jan 2022

Keywords

  • Communication cost
  • Data parallelism
  • Deep learning
  • Parameter update frequency

ASJC Scopus subject areas

  • Software
  • Theoretical Computer Science
  • Hardware and Architecture
  • Computer Networks and Communications
  • Artificial Intelligence
