Dataset Sensitive Autotuning of Multi-versioned Code Based on Monotonic Properties: Autotuning in Futhark
Publication: Contribution to book/anthology/report › Article in proceedings › Research › peer-reviewed
Standard
Dataset Sensitive Autotuning of Multi-versioned Code Based on Monotonic Properties: Autotuning in Futhark. / Munksgaard, Philip; Breddam, Svend Lund; Henriksen, Troels; Gieseke, Fabian Cristian; Oancea, Cosmin.
Trends in Functional Programming - 22nd International Symposium, TFP 2021, Revised Selected Papers. ed. / Viktoria Zsok; John Hughes. Springer Science and Business Media Deutschland GmbH, 2021. pp. 3-23 (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), Vol. 12834 LNCS).
RIS
TY - GEN
T1 - Dataset Sensitive Autotuning of Multi-versioned Code Based on Monotonic Properties
T2 - 22nd International Symposium on Trends in Functional Programming, TFP 2021
AU - Munksgaard, Philip
AU - Breddam, Svend Lund
AU - Henriksen, Troels
AU - Gieseke, Fabian Cristian
AU - Oancea, Cosmin
PY - 2021
Y1 - 2021
N2 - Functional languages allow rewrite-rule systems that aggressively generate a multitude of semantically-equivalent but differently-optimized code versions. In the context of GPGPU execution, this paper addresses the important question of how to compose these code versions into a single program that (near-)optimally discriminates them across different datasets. Rather than aiming at a general autotuning framework reliant on stochastic search, we argue that in some cases, a more effective solution can be obtained by customizing the tuning strategy for the compiler transformation producing the code versions. We present a simple and highly-composable strategy which requires that the (dynamic) program property used to discriminate between code versions conforms to a certain monotonicity assumption. Assuming the monotonicity assumption holds, our strategy guarantees that if an optimal solution exists, it will be found. If an optimal solution does not exist, our strategy produces human-tractable and deterministic results that provide insights into what went wrong and how it can be fixed. We apply our tuning strategy to the incremental-flattening transformation supported by the publicly-available Futhark compiler and compare with a previous black-box tuning solution that uses the popular OpenTuner library. We demonstrate the feasibility of our solution on a set of standard datasets of real-world applications and public benchmark suites, such as Rodinia and FinPar. We show that our approach shortens the tuning time by a factor of 6× on average, and more importantly, in five out of eleven cases, it produces programs that are (as much as 10×) faster than the ones produced by the OpenTuner-based technique.
KW - Autotuning
KW - Compilers
KW - Flattening
KW - GPGPU
KW - Nested parallelism
KW - Performance
UR - http://www.scopus.com/inward/record.url?scp=85115197944&partnerID=8YFLogxK
U2 - 10.1007/978-3-030-83978-9_1
DO - 10.1007/978-3-030-83978-9_1
M3 - Article in proceedings
AN - SCOPUS:85115197944
SN - 9783030839772
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 3
EP - 23
BT - Trends in Functional Programming - 22nd International Symposium, TFP 2021, Revised Selected Papers
A2 - Zsok, Viktoria
A2 - Hughes, John
PB - Springer Science and Business Media Deutschland GmbH
Y2 - 17 February 2021 through 19 February 2021
ER -
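The abstract's core idea is that when the dynamic property discriminating code versions is monotonic (if one version wins at some property value, it keeps winning at all larger values), version selection reduces to finding a single crossover threshold deterministically, rather than searching stochastically. The following is a minimal illustrative sketch of that idea, not the Futhark tuner itself: the function name, data shapes, and two-version setup are all hypothetical simplifications.

```python
# Hypothetical sketch of threshold tuning under a monotonicity assumption
# (illustration only; this is NOT the actual Futhark autotuner).
# Assumption: if version B beats version A at property value p, it also
# beats A at every larger p, so the winner flips at most once.

def pick_threshold(samples):
    """samples: list of (property_value, runtime_A, runtime_B) tuples,
    one per training dataset. Returns a threshold t such that version A
    is selected when the property is below t, and version B otherwise."""
    samples = sorted(samples)  # order datasets by the dynamic property
    # Under monotonicity a single scan suffices: the first dataset where
    # B wins marks the crossover point.
    for p, t_a, t_b in samples:
        if t_b < t_a:
            return p  # switch to version B from this property value on
    return float("inf")  # A wins on every dataset: never switch

# Three hypothetical datasets: B only pays off at larger problem sizes.
datasets = [(100, 0.9, 2.0), (10_000, 1.1, 1.0), (1_000_000, 5.0, 1.2)]
print(pick_threshold(datasets))  # -> 10000
```

Because the scan is deterministic, a failure case (e.g., the winner flipping back and forth, violating monotonicity) is directly visible in the sorted sample list, which mirrors the paper's claim that the strategy yields human-tractable diagnostics when no optimal threshold exists.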