Scaling Law for Time Series Forecasting

Research output: Working paper › Preprint › Research

Standard

Scaling Law for Time Series Forecasting. / Shi, Jingzhe; Ma, Qinwei; Ma, Huan; Li, Lei.

arxiv.org, 2024.

Research output: Working paper › Preprint › Research

Harvard

Shi, J, Ma, Q, Ma, H & Li, L 2024 'Scaling Law for Time Series Forecasting' arxiv.org. <https://arxiv.org/abs/2405.15124>

APA

Shi, J., Ma, Q., Ma, H., & Li, L. (2024). Scaling Law for Time Series Forecasting. arxiv.org. https://arxiv.org/abs/2405.15124

Vancouver

Shi J, Ma Q, Ma H, Li L. Scaling Law for Time Series Forecasting. arxiv.org. 2024.

Author

Shi, Jingzhe ; Ma, Qinwei ; Ma, Huan ; Li, Lei. / Scaling Law for Time Series Forecasting. arxiv.org, 2024.

Bibtex

@techreport{5e4bd81d705d4a52b311fefbc137c39c,
title = "Scaling Law for Time Series Forecasting",
abstract = " Scaling law that rewards large datasets, complex models and enhanced data granularity has been observed in various fields of deep learning. Yet, studies on time series forecasting have cast doubt on scaling behaviors of deep learning methods for time series forecasting: while more training data improves performance, more capable models do not always outperform less capable models, and longer input horizons may hurt performance for some models. We propose a theory for scaling law for time series forecasting that can explain these seemingly abnormal behaviors. We take into account the impact of dataset size and model complexity, as well as time series data granularity, particularly focusing on the look-back horizon, an aspect that has been unexplored in previous theories. Furthermore, we empirically evaluate various models using a diverse set of time series forecasting datasets, which (1) verifies the validity of scaling law on dataset size and model complexity within the realm of time series forecasting, and (2) validates our theoretical framework, particularly regarding the influence of look back horizon. We hope our findings may inspire new models targeting time series forecasting datasets of limited size, as well as large foundational datasets and models for time series forecasting in future works.\footnote{Codes for our experiments will be made public at: \url{https://github.com/JingzheShi/ScalingLawForTimeSeriesForecasting}. ",
keywords = "cs.LG, cs.AI",
author = "Jingzhe Shi and Qinwei Ma and Huan Ma and Lei Li",
note = "20 pages",
year = "2024",
language = "Udefineret/Ukendt",
publisher = "arxiv.org",
type = "WorkingPaper",
institution = "arxiv.org",

}

RIS

TY - UNPB

T1 - Scaling Law for Time Series Forecasting

AU - Shi, Jingzhe

AU - Ma, Qinwei

AU - Ma, Huan

AU - Li, Lei

N1 - 20 pages

PY - 2024

Y1 - 2024

N2 - A scaling law that rewards large datasets, complex models, and enhanced data granularity has been observed in various fields of deep learning. Yet, studies on time series forecasting have cast doubt on the scaling behavior of deep learning methods in this domain: while more training data improves performance, more capable models do not always outperform less capable ones, and longer input horizons may hurt performance for some models. We propose a theory of scaling laws for time series forecasting that can explain these seemingly abnormal behaviors. We take into account the impact of dataset size and model complexity, as well as time series data granularity, particularly focusing on the look-back horizon, an aspect that has gone unexplored in previous theories. Furthermore, we empirically evaluate various models on a diverse set of time series forecasting datasets, which (1) verifies the validity of the scaling law with respect to dataset size and model complexity within the realm of time series forecasting, and (2) validates our theoretical framework, particularly regarding the influence of the look-back horizon. We hope our findings may inspire new models targeting time series forecasting datasets of limited size, as well as large foundational datasets and models for time series forecasting, in future work.\footnote{Code for our experiments will be made public at: \url{https://github.com/JingzheShi/ScalingLawForTimeSeriesForecasting}.}

AB - A scaling law that rewards large datasets, complex models, and enhanced data granularity has been observed in various fields of deep learning. Yet, studies on time series forecasting have cast doubt on the scaling behavior of deep learning methods in this domain: while more training data improves performance, more capable models do not always outperform less capable ones, and longer input horizons may hurt performance for some models. We propose a theory of scaling laws for time series forecasting that can explain these seemingly abnormal behaviors. We take into account the impact of dataset size and model complexity, as well as time series data granularity, particularly focusing on the look-back horizon, an aspect that has gone unexplored in previous theories. Furthermore, we empirically evaluate various models on a diverse set of time series forecasting datasets, which (1) verifies the validity of the scaling law with respect to dataset size and model complexity within the realm of time series forecasting, and (2) validates our theoretical framework, particularly regarding the influence of the look-back horizon. We hope our findings may inspire new models targeting time series forecasting datasets of limited size, as well as large foundational datasets and models for time series forecasting, in future work.\footnote{Code for our experiments will be made public at: \url{https://github.com/JingzheShi/ScalingLawForTimeSeriesForecasting}.}

KW - cs.LG

KW - cs.AI

M3 - Preprint

BT - Scaling Law for Time Series Forecasting

PB - arxiv.org

ER -

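Note: the abstract refers to scaling behavior in dataset size, model complexity, and look-back horizon, but the record gives no functional form. As a hedged illustration only (not the authors' formulation; see the paper at the arXiv link for their actual theory), scaling-law studies of this kind commonly fit an additive power-law relation such as:

\[
\mathrm{Loss}(D, N, H) \;\approx\; L_{\infty} \;+\; \frac{A}{D^{\alpha}} \;+\; \frac{B}{N^{\beta}} \;+\; g(H)
\]
% D: training dataset size; N: model size/complexity; H: look-back horizon.
% L_{\infty} is the irreducible error; A, B and the exponents alpha, beta > 0 are fitted constants.
% g(H) is a horizon-dependent term allowed to be non-monotonic, reflecting the abstract's
% observation that longer input horizons can hurt performance for some models.
% All symbols here are hypothetical placeholders for illustration.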