TY - GEN

T1 - Scheduling distributed clusters of parallel machines

T2 - 24th Annual European Symposium on Algorithms, ESA 2016

AU - Murray, Riley

AU - Chao, Megan

AU - Khuller, Samir

PY - 2016/8/1

Y1 - 2016/8/1

N2 - The Map-Reduce computing framework rose to prominence with datasets of such size that dozens of machines on a single cluster were needed for individual jobs. As datasets approach the exabyte scale, a single job may need distributed processing not only on multiple machines, but on multiple clusters. We consider a scheduling problem to minimize weighted average completion time of n jobs on m distributed clusters of parallel machines. In keeping with the scale of the problems motivating this work, we assume that (1) each job is divided into m "subjobs" and (2) distinct subjobs of a given job may be processed concurrently. When each cluster is a single machine, this is the NP-hard concurrent open shop problem. A clear limitation of such a model is that a serial processing assumption sidesteps the issue of how different tasks of a given subjob might be processed in parallel. Our algorithms explicitly model clusters as pools of resources and effectively overcome this issue. Under a variety of parameter settings, we develop two constant-factor approximation algorithms for this problem. The first algorithm uses an LP relaxation from prior work, tailored to this problem. This LP-based algorithm provides strong performance guarantees. Our second algorithm exploits a surprisingly simple mapping to the special case of one machine per cluster. This mapping-based algorithm is combinatorial and extremely fast. These are the first constant-factor approximations for this problem.

KW - Approximation algorithms

KW - Distributed computing

KW - LP relaxations

KW - Machine scheduling

KW - Primal-dual algorithms

UR - http://www.scopus.com/inward/record.url?scp=85013018573&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=85013018573&partnerID=8YFLogxK

U2 - 10.4230/LIPIcs.ESA.2016.68

DO - 10.4230/LIPIcs.ESA.2016.68

M3 - Conference contribution

AN - SCOPUS:85013018573

T3 - Leibniz International Proceedings in Informatics, LIPIcs

BT - 24th Annual European Symposium on Algorithms, ESA 2016

A2 - Zaroliagis, Christos

A2 - Sankowski, Piotr

PB - Schloss Dagstuhl - Leibniz-Zentrum für Informatik GmbH, Dagstuhl Publishing

Y2 - 22 August 2016 through 24 August 2016

ER -