Edge Partitioning in Parallel Structured Duplicate Detection

Abstract

Heuristic-search planners that use A* and related graph search algorithms must be parallelized to harness advances in computing power that are based on increasing use of multi-core processors. Although a graph can always be converted to an equivalent tree that can be easily searched in parallel, such a conversion increases the size of the search space exponentially, and the resulting overhead is hard to justify in the context of parallel search for which the speedup ratio is bounded by the number of parallel processes, a polynomial resource in most practical settings. A more direct approach to parallelizing graph search is needed. The challenge in parallelizing graph search is duplicate detection, which requires checking newly generated nodes against the set of already visited nodes. If performed naively, duplicate detection may require excessive synchronization among concurrent search processes (e.g., to maintain the open and closed lists of A*). Here we show how edge partitioning, a technique that was developed originally for reducing the number of time-consuming disk I/O operations in external-memory search, can be used in a parallel setting to reduce the frequency with which search processes need to synchronize with one another, effectively reducing the primary source of overhead in parallel graph search.
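The central idea the abstract describes, reducing synchronization by localizing duplicate detection, can be illustrated with a minimal sketch. The code below is not the authors' implementation (which uses edge partitioning over a state-space abstraction); it is a simplified illustration in which a hypothetical `abstraction` function assigns states to blocks, each block of the closed list has its own lock, and concurrent search threads synchronize only when they touch the same block rather than contending on one global closed list.

```python
import threading
from collections import deque

NUM_BLOCKS = 4  # hypothetical number of abstract-state blocks

def abstraction(state):
    """Project a state onto its abstract block (here: a simple hash)."""
    return hash(state) % NUM_BLOCKS

class PartitionedClosedList:
    """Closed list split into blocks, each guarded by its own lock."""
    def __init__(self, num_blocks=NUM_BLOCKS):
        self.blocks = [set() for _ in range(num_blocks)]
        self.locks = [threading.Lock() for _ in range(num_blocks)]

    def add_if_new(self, state):
        """Return True if state was not a duplicate; thread-safe per block."""
        b = abstraction(state)
        with self.locks[b]:  # synchronize only within this block
            if state in self.blocks[b]:
                return False
            self.blocks[b].add(state)
            return True

def search(start, successors, closed):
    """Breadth-first graph search using the partitioned closed list."""
    frontier = deque([start])
    closed.add_if_new(start)
    visited = [start]
    while frontier:
        s = frontier.popleft()
        for t in successors(s):
            if closed.add_if_new(t):  # duplicate detection, block-local lock
                visited.append(t)
                frontier.append(t)
    return visited
```

In the paper's actual scheme, the partition is derived from a structured abstraction of the search graph and is applied to edges, so that a process expanding nodes in one block can detect duplicates without locking blocks being used by other processes; the sketch above only captures the lock-per-partition intuition.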

Cite this paper

@inproceedings{Zhou2010EdgePI,
  title     = {Edge Partitioning in Parallel Structured Duplicate Detection},
  author    = {Rong Zhou and Tim Schmidt and Eric A. Hansen and Minh Binh Do and Serdar Uckun},
  booktitle = {SOCS},
  year      = {2010}
}