Incremental flattening for nested data parallelism

Publication: Contribution to book/anthology/report › Conference contribution in proceedings › Research › peer-reviewed

Compilation techniques for nested-parallel applications that can adapt to hardware and dataset characteristics are vital for unlocking the power of modern hardware. This paper proposes such a technique, which builds on flattening and is applied in the context of a functional data-parallel language. Our solution uses the degree of utilized parallelism as the driver for generating a multitude of code versions, which together cover all possible mappings of the application's regular nested parallelism to the levels of parallelism supported by the hardware. These code versions are then combined into one program by guarding them with predicates, whose threshold values are automatically tuned to hardware and dataset characteristics. Our unsupervised method of statically clustering datasets to code versions differs from autotuning work, which typically searches for the combination of code transformations that produces a single version, best suited either for a specific dataset or on average for all datasets. By fully integrating our technique into the repertoire of a compiler for the Futhark programming language, we demonstrate significant performance gains on two GPUs for three real-world applications from the financial domain and for six Rodinia benchmarks.
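
As a rough illustration only (not taken from the paper or the Futhark compiler), the following Python/NumPy sketch conveys the flavour of the multi-versioned code described in the abstract: two versions of a nested reduction, one exploiting only the outer level of parallelism and one fully flattened, are combined behind a predicate on the degree of utilized parallelism whose threshold would be tuned to the hardware and dataset. All function names and the threshold value are hypothetical.

```python
# Illustrative sketch: mimics, in plain Python/NumPy, the kind of guarded
# multi-versioned code the paper describes the compiler generating.
# Names and the threshold constant are hypothetical placeholders.
import numpy as np

# In the real technique this threshold is tuned automatically per GPU and
# dataset class; here it is just a placeholder constant.
OUTER_PAR_THRESHOLD = 1024

def row_sums_outer_only(xss: np.ndarray) -> np.ndarray:
    """Version 1: exploit only the outer level of parallelism.

    Each conceptual thread handles one row and reduces it sequentially,
    which is preferable when the number of rows alone saturates the GPU."""
    return np.array([sum(row) for row in xss])

def row_sums_flattened(xss: np.ndarray) -> np.ndarray:
    """Version 2: exploit both levels of parallelism (flattened).

    Conceptually, all inner reductions are executed as one large parallel
    operation, which pays off when the outer dimension is small."""
    return xss.sum(axis=1)

def row_sums(xss: np.ndarray) -> np.ndarray:
    """Dispatcher: a predicate on the degree of utilized parallelism
    selects the code version, mirroring the guarded versions combined
    into one program as described in the abstract."""
    outer_parallelism = xss.shape[0]
    if outer_parallelism >= OUTER_PAR_THRESHOLD:
        return row_sums_outer_only(xss)
    return row_sums_flattened(xss)
```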

Original language: English
Title: PPoPP 2019 - Proceedings of the 24th Symposium on Principles and Practice of Parallel Programming
Publisher: Association for Computing Machinery
Publication date: 16 Feb 2019
Pages: 53-67
ISBN (Electronic): 9781450362252
DOI
Status: Published - 16 Feb 2019
Event: 24th ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming, PPoPP 2019 - Washington, USA
Duration: 16 Feb 2019 - 20 Feb 2019

Conference

Conference: 24th ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming, PPoPP 2019
Country: USA
City: Washington
Period: 16/02/2019 - 20/02/2019
Sponsor: ACM SIGHPC, ACM SIGPLAN

ID: 230447731