Lessons learned from exploring the backtracking paradigm on the GPU

John Jenkins, Isha Arkatkar, John D. Owens, Alok Choudhary, Nagiza F. Samatova*

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

28 Scopus citations


We explore the backtracking paradigm under properties seen as sub-optimal for GPU architectures, using the maximal clique enumeration problem as a case study, and find that the presence of these properties limits GPU performance to approximately 1.4-2.25 times that of a single CPU core. The GPU performance "lessons" we find critical to achieving this performance include coarse- and fine-grained parallelization of the search space, a low-overhead load-balanced distribution of work, global memory latency hiding through coalescence, saturation, and shared memory utilization, and the use of GPU output buffering as a solution to irregular workloads and a large solution domain. We also find a strong reliance on an efficient global representation of the problem structure, which bounds any efficiency gained from these lessons, and discuss the implications of these results for backtracking problems in general.
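The paper itself gives no code, but the backtracking pattern it studies for maximal clique enumeration is classically captured by the Bron-Kerbosch recursion; a minimal CPU-side sketch (not the authors' GPU implementation, and without their parallelization or pruning refinements) illustrates the irregular, recursive search space the abstract refers to:

```python
# Bron-Kerbosch backtracking for maximal clique enumeration.
# A sequential sketch of the search pattern only; the paper's GPU
# version restructures this recursion for coarse/fine-grain parallelism.
def bron_kerbosch(R, P, X, adj, out):
    """R: current clique, P: candidate vertices, X: excluded vertices.

    adj maps each vertex to its set of neighbors; maximal cliques
    found are appended to `out`.
    """
    if not P and not X:
        out.append(sorted(R))  # R cannot be extended: it is maximal
        return
    for v in list(P):
        # Extend the clique with v; candidates/excluded shrink to v's neighbors.
        bron_kerbosch(R | {v}, P & adj[v], X & adj[v], adj, out)
        P.remove(v)  # backtrack: v is done as a candidate
        X.add(v)     # but must block non-maximal re-discovery

# Example graph: a triangle {0,1,2} with a pendant vertex 3 attached to 2.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
cliques = []
bron_kerbosch(set(), set(adj), set(), adj, cliques)
# cliques contains [0, 1, 2] and [2, 3] (discovery order may vary)
```

The per-branch variation in subproblem size visible here is exactly the workload irregularity that motivates the paper's load-balancing and output-buffering lessons.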

Original language: English (US)
Title of host publication: Euro-Par 2011 Parallel Processing - 17th International Conference, Proceedings
Number of pages: 13
Edition: PART 2
State: Published - 2011
Event: 17th International Conference on Parallel Processing, Euro-Par 2011 - Bordeaux, France
Duration: Aug 29, 2011 - Sep 2, 2011

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Number: PART 2
Volume: 6853 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349


Other: 17th International Conference on Parallel Processing, Euro-Par 2011

ASJC Scopus subject areas

  • Theoretical Computer Science
  • Computer Science (all)
