
Note

Given the prevalence of large models, we release an open codebase, OpenLTM, to explore the design philosophy of large time-series models. It contains a simple pipeline to train and evaluate large time-series models :)

Timer (Large Time-Series Model)

This repo provides official code, datasets and checkpoints for Timer: Generative Pre-trained Transformers Are Large Time Series Models. [Poster], [Slides].

Updates

🚩 News (2024.12) We enhanced Timer with our follow-up work and pre-trained it on 307B time points. The checkpoint will be released on HuggingFace and will support zero-shot forecasting.

🚩 News (2024.10) We release UTSD in numpy format. An easier and more efficient dataloader can be found here.

🚩 News (2024.10) Timer is included in OpenLTM (Open-Source Large Time-Series Models).

🚩 News (2024.6) The pre-training dataset (UTSD) is available on HuggingFace. A dataloader is also included.

🚩 News (2024.5) Timer has been accepted by ICML 2024. A camera-ready version of 31 pages is available.

🚩 News (2024.2) Releasing model checkpoints and code for fine-tuning.

Introduction

Time Series Transformer (Timer) is a Generative Pre-trained Transformer for general time series analysis. You can visit our Homepage for a more detailed introduction.

Datasets

We propose Unified Time Series Datasets (UTSD), which encompass well-curated time series to facilitate research on large time-series models. The dataset is released on HuggingFace.

Usage

You can download and load UTSD in the style of TSLib as follows:

# huggingface-cli login
# export HF_ENDPOINT=https://hf-mirror.com 

python ./scripts/UTSD/download_dataset.py

# dataloader
python ./scripts/UTSD/utsdataset.py
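
For reference, here is a minimal Python sketch of loading UTSD directly with the HuggingFace datasets library. The dataset id "thuml/UTSD-1G" and the "target" field name are assumptions; check the dataset card on HuggingFace for the exact subset names and schema.

# Sketch only: stream UTSD from HuggingFace (dataset id and field name are assumptions).
from datasets import load_dataset

utsd = load_dataset("thuml/UTSD-1G", split="train", streaming=True)
for record in utsd:
    series = record["target"]   # one univariate series as a list of floats
    print(len(series), series[:5])
    break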

Tasks

Forecasting: We provide all scripts as well as datasets for few-shot forecasting in this repo.

Imputation: We propose segment-level imputation, which is more challenging than point-level imputation (see the sketch below).

Anomaly Detection: We provide new benchmarks of predictive anomaly detection on the UCR Anomaly Archive.

We provide detailed README files illustrating each task under the folder ./scripts/.
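
To make the segment-level setting concrete, here is a small numpy illustration (not the repo's implementation) contrasting point-level masks, where individual time points are missing at random, with segment-level masks, where whole contiguous segments must be generated from context.

# Illustration only: point-level vs. segment-level imputation masks.
import numpy as np

rng = np.random.default_rng(0)
T, seg_len, mask_ratio = 96, 24, 0.25       # series length, segment length, masked fraction

point_mask = rng.random(T) < mask_ratio     # point-level: random individual points are missing

n_segments = T // seg_len                   # segment-level: whole segments are missing
segment_mask = np.repeat(rng.random(n_segments) < mask_ratio, seg_len)

print("point-level missing:", int(point_mask.sum()), "of", T)
print("segment-level missing:", int(segment_mask.sum()), "of", T)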

Code for Fine-tuning

  1. Install PyTorch and necessary dependencies.

pip install -r requirements.txt

  2. Put the downstream datasets from Google Drive or Baidu Drive under the folder ./dataset/.

  3. Put the checkpoint from Google Drive or Baidu Drive under the folder ./checkpoints/.

  4. Train and evaluate the model. We provide scripts for the above tasks under the folder ./scripts/.

# forecasting
bash ./scripts/forecast/ECL.sh

# segment-level imputation
bash ./scripts/imputation/ECL.sh

# anomaly detection
bash ./scripts/anomaly_detection/UCR.sh
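
Before launching the scripts, it may help to sanity-check the downloaded checkpoint. A minimal sketch is below; the file name is hypothetical, use whichever checkpoint you placed under ./checkpoints/.

# Sketch: inspect a downloaded checkpoint (file name is hypothetical).
import torch

state = torch.load("./checkpoints/Timer_forecast.ckpt", map_location="cpu")
state_dict = state.get("state_dict", state)    # some checkpoints wrap the weights
for name, tensor in list(state_dict.items())[:5]:
    print(name, tuple(tensor.shape))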

Train on Custom Dataset

To fine-tune on your time series dataset, you can try out the following steps:

  1. The essence is to plug in your customized dataloader and load the pre-trained checkpoint (see the ./scripts/ folder); a minimal sketch is given after this list.
  2. CIDatasetBenchmark/CIAutoRegressionDatasetBenchmark in the data_provider folder can be used to train and evaluate models in direct or iterative multi-step mode.
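
For orientation, here is a minimal sketch of the idea (the class, file names, checkpoint path, and window lengths are hypothetical, not the repo's actual dataloader): wrap your series in a window-style torch Dataset and load the pre-trained weights before fine-tuning.

# Sketch only: a window-style dataset for fine-tuning on a custom series.
# File name, column choice, checkpoint path, and lengths are hypothetical.
import numpy as np
import pandas as pd
import torch
from torch.utils.data import Dataset, DataLoader

class CustomWindows(Dataset):
    def __init__(self, csv_path, lookback=672, horizon=96):
        self.values = pd.read_csv(csv_path).iloc[:, -1].to_numpy(dtype=np.float32)
        self.lookback, self.horizon = lookback, horizon

    def __len__(self):
        return max(len(self.values) - self.lookback - self.horizon + 1, 0)

    def __getitem__(self, i):
        x = self.values[i : i + self.lookback]                                   # lookback window
        y = self.values[i + self.lookback : i + self.lookback + self.horizon]    # target window
        return torch.from_numpy(x), torch.from_numpy(y)

loader = DataLoader(CustomWindows("./dataset/my_series.csv"), batch_size=32, shuffle=True)
# model.load_state_dict(torch.load("./checkpoints/Timer_forecast.ckpt", map_location="cpu"))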

Approach

Pre-training and Adaptation

To pre-train on heterogeneous time series, we propose the single-series sequence (S3) format, which preserves series variations within a unified context length. Further, we convert forecasting, imputation, and anomaly detection into a unified generative task.
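
A rough illustration of the idea (not the repo's preprocessing code; the context length here is a placeholder): every variate of a heterogeneous dataset is split into normalized, fixed-length univariate sequences that form a unified pre-training pool.

# Illustration: convert multivariate series into single-series sequences (S3).
import numpy as np

def to_s3(multivariate, context_len=672):
    """multivariate: array of shape (time, variates) -> list of normalized 1-D windows."""
    sequences = []
    for v in range(multivariate.shape[1]):
        series = multivariate[:, v]
        for start in range(0, len(series) - context_len + 1, context_len):
            window = series[start : start + context_len]
            window = (window - window.mean()) / (window.std() + 1e-8)   # per-window normalization
            sequences.append(window.astype(np.float32))
    return sequences

pool = to_s3(np.random.randn(2000, 3))
print(len(pool), pool[0].shape)   # 6 windows of length 672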

Model Architecture

Given the limited exploration of backbones for large time-series models, we extensively evaluate candidate backbones and adopt the decoder-only Transformer with autoregressive generation as the backbone for LTMs.
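
For intuition, here is a compact sketch of this formulation (illustrative only, with placeholder hyperparameters): consecutive series segments are embedded as tokens, a causally masked Transformer stack processes them, and each position is trained to predict the next segment.

# Illustration: decoder-only Transformer over series segments with a
# next-segment (next-token) prediction objective. Hyperparameters are placeholders.
import torch
import torch.nn as nn

class DecoderOnlySketch(nn.Module):
    def __init__(self, seg_len=96, d_model=256, n_heads=8, n_layers=4):
        super().__init__()
        self.embed = nn.Linear(seg_len, d_model)                  # segment -> token
        block = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.blocks = nn.TransformerEncoder(block, n_layers)      # applied with a causal mask
        self.head = nn.Linear(d_model, seg_len)                   # token -> next segment

    def forward(self, x):                                         # x: (batch, n_tokens, seg_len)
        n = x.size(1)
        causal = torch.triu(torch.full((n, n), float("-inf")), diagonal=1)
        h = self.blocks(self.embed(x), mask=causal)
        return self.head(h)                                       # per-position next-segment prediction

model = DecoderOnlySketch()
x = torch.randn(8, 7, 96)                                         # 7 segments of length 96
pred = model(x[:, :-1])                                           # each position predicts the following segment
loss = nn.functional.mse_loss(pred, x[:, 1:])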

Performance

Timer achieves state-of-the-art performance in each task, and we present the benefit of pre-training in few-shot scenarios.

Scalability

By increasing the parameter count and pre-training scale, Timer achieves a notable performance improvement: 0.231 $\to$ 0.138 (−40.3%), surpassing previous state-of-the-art deep forecasters.


Flexible Sequence Length

The decoder-only architecture provides the flexibility to accommodate time series of different lookback and forecast lengths.
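
Building on the sketch above (illustrative only), autoregressive generation is what makes the lookback and forecast lengths flexible: the model appends its own predictions to the context, one segment at a time, so any number of input and output segments can be used without retraining.

# Illustration: autoregressive multi-step generation with a decoder-only model.
# `model` maps (batch, n_tokens, seg_len) -> per-position next-segment predictions,
# as in the sketch above.
import torch

@torch.no_grad()
def autoregressive_forecast(model, lookback_tokens, n_future_tokens):
    tokens = lookback_tokens                        # (batch, n_lookback, seg_len), any n_lookback
    for _ in range(n_future_tokens):                # any forecast horizon, one segment per step
        next_segment = model(tokens)[:, -1:]        # prediction at the last position
        tokens = torch.cat([tokens, next_segment], dim=1)
    return tokens[:, lookback_tokens.size(1):]      # return only the generated segments

# e.g. forecast = autoregressive_forecast(model, torch.randn(4, 5, 96), n_future_tokens=3)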


Related Works

Given their significant value to researchers and practitioners, we provide a summary of several concurrent large time-series models:

  • MOMENT is trained at scale with masked modeling. It can be applied to zero-shot forecasting by concatenating the lookback series with a mask of the length to be predicted.
  • Chronos is a probabilistic point-level forecaster developed by Amazon. Chronos-S1 samples one prediction trajectory, and Chronos-S20 uses the mean of 20 sampled trajectories.
  • TimesFM from Google is trained on 100B time points. We use the official checkpoint from HuggingFace. It supports dynamic input and output prediction lengths.
  • Moirai is developed by Salesforce and excels at multivariate time series. It has three checkpoints, labeled Moirai-S, Moirai-M, and Moirai-L.


Citation

If you find this repo helpful, please cite our paper.

@inproceedings{liutimer,
  title={Timer: Generative Pre-trained Transformers Are Large Time Series Models},
  author={Liu, Yong and Zhang, Haoran and Li, Chenyu and Huang, Xiangdong and Wang, Jianmin and Long, Mingsheng},
  booktitle={Forty-first International Conference on Machine Learning}
}

@article{liu2024timer,
  title={Timer-XL: Long-Context Transformers for Unified Time Series Forecasting},
  author={Liu, Yong and Qin, Guo and Huang, Xiangdong and Wang, Jianmin and Long, Mingsheng},
  journal={arXiv preprint arXiv:2410.04803},
  year={2024}
}

Contributors

If you have any questions or want to use the code, feel free to contact: