Deca: a Garbage Collection Optimizer for In-memory Data Processing

Research output: Contribution to journal › Journal article › Research › peer-review

Standard

Deca: a Garbage Collection Optimizer for In-memory Data Processing. / Shi, Xuanhua; Ke, Zhixiang; Zhou, Yongluan; Jin, Hai; Lu, Lu; Zhang, Xiong; He, Ligang; Hu, Zhenyu; Wang, Fei.

In: ACM Transactions on Computer Systems, Vol. 36, No. 1, 3, 2019.


Harvard

Shi, X, Ke, Z, Zhou, Y, Jin, H, Lu, L, Zhang, X, He, L, Hu, Z & Wang, F 2019, 'Deca: a Garbage Collection Optimizer for In-memory Data Processing', ACM Transactions on Computer Systems, vol. 36, no. 1, 3. https://doi.org/10.1145/3310361

APA

Shi, X., Ke, Z., Zhou, Y., Jin, H., Lu, L., Zhang, X., He, L., Hu, Z., & Wang, F. (2019). Deca: a Garbage Collection Optimizer for In-memory Data Processing. ACM Transactions on Computer Systems, 36(1), [3]. https://doi.org/10.1145/3310361

Vancouver

Shi X, Ke Z, Zhou Y, Jin H, Lu L, Zhang X et al. Deca: a Garbage Collection Optimizer for In-memory Data Processing. ACM Transactions on Computer Systems. 2019;36(1). 3. https://doi.org/10.1145/3310361

Author

Shi, Xuanhua ; Ke, Zhixiang ; Zhou, Yongluan ; Jin, Hai ; Lu, Lu ; Zhang, Xiong ; He, Ligang ; Hu, Zhenyu ; Wang, Fei. / Deca: a Garbage Collection Optimizer for In-memory Data Processing. In: ACM Transactions on Computer Systems. 2019 ; Vol. 36, No. 1.

Bibtex

@article{3852732091de4cd19ca76a36513cfee3,
title = "Deca: a Garbage Collection Optimizer for In-memory Data Processing",
abstract = "In-memory caching of intermediate data and active combining of data in shuffle buffers have been shown to be very effective in minimizing the re-computation and I/O cost in big data processing systems such as Spark and Flink. However, it has also been widely reported that these techniques create a large number of long-lived data objects in the heap. These objects may quickly saturate the garbage collector, especially when handling a large dataset, and hence limit the scalability of the system. To eliminate this problem, we propose a lifetime-based memory management framework, which, by automatically analyzing the user-defined functions and data types, obtains the expected lifetime of the data objects, and then allocates and releases memory space accordingly to minimize the garbage collection overhead. In particular, we present Deca, a concrete implementation of our proposal on top of Spark, which transparently decomposes and groups objects with similar lifetimes into byte arrays and releases their space altogether when their lifetimes come to an end. When systems are processing very large data, Deca also provides field-oriented memory pages to ensure high compression efficiency. Extensive experimental studies using both synthetic and real datasets show that, compared with Spark, Deca is able to 1) reduce the garbage collection time by up to 99.9%, 2) reduce the memory consumption by up to 46.6% and the storage space by 23.4%, 3) achieve 1.2x-22.7x speedup in terms of execution time in cases without data spilling and 16x-41.6x speedup in cases with data spilling, and 4) provide similar performance compared with domain-specific systems.",
author = "Xuanhua Shi and Zhixiang Ke and Yongluan Zhou and Hai Jin and Lu Lu and Xiong Zhang and Ligang He and Zhenyu Hu and Fei Wang",
year = "2019",
doi = "10.1145/3310361",
language = "English",
volume = "36",
journal = "ACM Transactions on Computer Systems",
issn = "0734-2071",
publisher = "Association for Computing Machinery, Inc.",
number = "1",
}

RIS

TY - JOUR

T1 - Deca

T2 - a Garbage Collection Optimizer for In-memory Data Processing

AU - Shi, Xuanhua

AU - Ke, Zhixiang

AU - Zhou, Yongluan

AU - Jin, Hai

AU - Lu, Lu

AU - Zhang, Xiong

AU - He, Ligang

AU - Hu, Zhenyu

AU - Wang, Fei

PY - 2019

Y1 - 2019

N2 - In-memory caching of intermediate data and active combining of data in shuffle buffers have been shown to be very effective in minimizing the re-computation and I/O cost in big data processing systems such as Spark and Flink. However, it has also been widely reported that these techniques create a large number of long-lived data objects in the heap. These objects may quickly saturate the garbage collector, especially when handling a large dataset, and hence limit the scalability of the system. To eliminate this problem, we propose a lifetime-based memory management framework, which, by automatically analyzing the user-defined functions and data types, obtains the expected lifetime of the data objects, and then allocates and releases memory space accordingly to minimize the garbage collection overhead. In particular, we present Deca, a concrete implementation of our proposal on top of Spark, which transparently decomposes and groups objects with similar lifetimes into byte arrays and releases their space altogether when their lifetimes come to an end. When systems are processing very large data, Deca also provides field-oriented memory pages to ensure high compression efficiency. Extensive experimental studies using both synthetic and real datasets show that, compared with Spark, Deca is able to 1) reduce the garbage collection time by up to 99.9%, 2) reduce the memory consumption by up to 46.6% and the storage space by 23.4%, 3) achieve 1.2x-22.7x speedup in terms of execution time in cases without data spilling and 16x-41.6x speedup in cases with data spilling, and 4) provide similar performance compared with domain-specific systems.

AB - In-memory caching of intermediate data and active combining of data in shuffle buffers have been shown to be very effective in minimizing the re-computation and I/O cost in big data processing systems such as Spark and Flink. However, it has also been widely reported that these techniques create a large number of long-lived data objects in the heap. These objects may quickly saturate the garbage collector, especially when handling a large dataset, and hence limit the scalability of the system. To eliminate this problem, we propose a lifetime-based memory management framework, which, by automatically analyzing the user-defined functions and data types, obtains the expected lifetime of the data objects, and then allocates and releases memory space accordingly to minimize the garbage collection overhead. In particular, we present Deca, a concrete implementation of our proposal on top of Spark, which transparently decomposes and groups objects with similar lifetimes into byte arrays and releases their space altogether when their lifetimes come to an end. When systems are processing very large data, Deca also provides field-oriented memory pages to ensure high compression efficiency. Extensive experimental studies using both synthetic and real datasets show that, compared with Spark, Deca is able to 1) reduce the garbage collection time by up to 99.9%, 2) reduce the memory consumption by up to 46.6% and the storage space by 23.4%, 3) achieve 1.2x-22.7x speedup in terms of execution time in cases without data spilling and 16x-41.6x speedup in cases with data spilling, and 4) provide similar performance compared with domain-specific systems.

U2 - 10.1145/3310361

DO - 10.1145/3310361

M3 - Journal article

VL - 36

JO - ACM Transactions on Computer Systems

JF - ACM Transactions on Computer Systems

SN - 0734-2071

IS - 1

M1 - 3

ER -
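
Illustrative note

The abstract above describes packing data objects that share a lifetime into byte arrays, so the garbage collector sees one long-lived array rather than millions of small objects, and the whole region is released at once when that lifetime ends. The following Scala sketch illustrates the general idea only; the names (LifetimeRegion, append, release) and the fixed 16-byte record layout are invented for illustration and are not part of Deca's actual API or layout.

import java.nio.ByteBuffer

// Hypothetical sketch (not Deca's API): records with the same lifetime are
// packed into a single byte buffer instead of living as individual heap
// objects, and the whole region is dropped in one step when the lifetime ends.
final class LifetimeRegion(capacityBytes: Int) {
  private var buffer: ByteBuffer = ByteBuffer.allocate(capacityBytes)

  // Append one (key, value) record of primitive fields; no per-record object survives.
  def append(key: Long, value: Double): Boolean = {
    if (buffer.remaining() < 16) false
    else { buffer.putLong(key); buffer.putDouble(value); true }
  }

  // Read a record by index without materializing an object per record.
  def key(i: Int): Long = buffer.getLong(i * 16)
  def value(i: Int): Double = buffer.getDouble(i * 16 + 8)

  def size: Int = buffer.position() / 16

  // When the lifetime of this group of records ends, release everything at once.
  def release(): Unit = { buffer = null }
}

object LifetimeRegionDemo extends App {
  val region = new LifetimeRegion(1 << 20) // 1 MiB region for same-lifetime records
  (0 until 1000).foreach(i => region.append(i.toLong, i * 0.5))
  println(s"records = ${region.size}, first value = ${region.value(0)}")
  region.release() // single release instead of GC tracing many small objects
}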
