Multi-Level Parallelism for Time- and Cost-Efficient Parallel Discrete Event Simulation on GPUs

```bibtex
@inproceedings{kunzMultilevelParallelismTime2012,
  author    = {Kunz, Georg and Schemmel, Daniel and Gross, James and Wehrle, Klaus},
  title     = {Multi-Level Parallelism for Time- and Cost-Efficient Parallel Discrete Event Simulation on {GPUs}},
  booktitle = {2012 {ACM/IEEE/SCS} 26th Workshop on Principles of Advanced and Distributed Simulation ({PADS} 2012)},
  location  = {Zhangjiajie, China},
  pages     = {23--32},
  year      = {2012},
  doi       = {10.1109/PADS.2012.27},
}
```

Developing complex technical systems requires a systematic exploration of the design space in order to identify optimal system configurations. However, studying the effects and interactions of even a small number of system parameters often requires a large number of simulation runs. The resulting runtime demands severely hamper thorough design space explorations.

In this paper, we present a parallel discrete event simulation scheme that enables cost- and time-efficient execution of large-scale parameter studies on GPUs. To efficiently accommodate the stream-processing paradigm of GPUs, our parallelization scheme exploits two orthogonal levels of parallelism: external parallelism among the inherently independent simulations of a parameter study, and internal parallelism among independent events within each individual simulation. Specifically, we design an event aggregation strategy based on external parallelism that generates workloads suitable for GPUs. In addition, we define a pipelined event execution mechanism based on internal parallelism to hide the transfer latencies between host and GPU memory. We analyze the performance characteristics of our parallelization scheme by means of a prototype implementation and show a 25-fold performance improvement over purely CPU-based execution.
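
To make the two levels of parallelism concrete, below is a minimal CUDA sketch (my own illustration, not the authors' implementation): a single kernel executes the same event handler for a batch of events aggregated from many independent simulation instances (external parallelism), while two CUDA streams double-buffer the event batches so that host-to-GPU transfers overlap with event execution on the GPU (internal parallelism used for pipelining). The `Event` and `SimState` structures, the `handle_events` kernel, and the trivial handler logic are all illustrative assumptions, not the paper's data model.

```cuda
// Illustrative sketch only: event aggregation across independent simulation
// instances plus double-buffered, stream-based pipelining of event batches.
#include <cuda_runtime.h>
#include <cstdio>

struct Event {
    int   sim_id;    // index of the independent simulation instance (assumed model)
    float payload;   // illustrative event data
};

struct SimState {
    float value;     // illustrative per-simulation state
};

// External parallelism: one thread per aggregated event, so the same event
// handler runs simultaneously for events drawn from many independent simulations.
__global__ void handle_events(const Event *events, SimState *states, int n_events)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n_events) {
        // atomicAdd because several in-flight events may target the same instance.
        atomicAdd(&states[events[i].sim_id].value, events[i].payload);
    }
}

int main()
{
    const int n_sims    = 1024;      // independent simulations of the parameter study
    const int batch     = 1 << 16;   // aggregated events per kernel launch
    const int n_batches = 8;

    SimState *d_states;
    cudaMalloc(&d_states, n_sims * sizeof(SimState));
    cudaMemset(d_states, 0, n_sims * sizeof(SimState));

    // Internal parallelism / pipelining: two streams with double-buffered event
    // batches so host-to-device copies overlap with kernel execution.
    Event *h_events[2], *d_events[2];
    cudaStream_t stream[2];
    for (int b = 0; b < 2; ++b) {
        cudaHostAlloc(&h_events[b], batch * sizeof(Event), cudaHostAllocDefault);
        cudaMalloc(&d_events[b], batch * sizeof(Event));
        cudaStreamCreate(&stream[b]);
    }

    for (int k = 0; k < n_batches; ++k) {
        int b = k % 2;                        // alternate between the two buffers
        cudaStreamSynchronize(stream[b]);     // wait until this buffer is free again
        for (int i = 0; i < batch; ++i) {     // fill the next aggregated batch on the host
            h_events[b][i].sim_id  = i % n_sims;
            h_events[b][i].payload = 1.0f;
        }
        cudaMemcpyAsync(d_events[b], h_events[b], batch * sizeof(Event),
                        cudaMemcpyHostToDevice, stream[b]);
        handle_events<<<(batch + 255) / 256, 256, 0, stream[b]>>>(
            d_events[b], d_states, batch);
    }
    cudaDeviceSynchronize();

    // Copy back one state value just to show the round trip.
    SimState s0;
    cudaMemcpy(&s0, d_states, sizeof(SimState), cudaMemcpyDeviceToHost);
    printf("sim 0 accumulated value: %f\n", s0.value);
    return 0;
}
```

Whether this simple double buffering matches the paper's pipelined event execution in detail is not something the abstract specifies; the sketch only shows that aggregation turns many small, independent event handlers into one GPU-sized workload, and that asynchronous copies in separate streams can hide the host-GPU transfer latency the abstract mentions.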
