Use absolute links to images in the README (#223)
*Issue #, if available:*

*Description of changes:* This PR uses absolute links to the images so that they render correctly in other places, such as PyPI.


By submitting this pull request, I confirm that you can use, modify,
copy, and redistribute this contribution, under the terms of your
choice.
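
For context, absolute `raw.githubusercontent.com` URLs work outside GitHub because they serve repository files directly, so renderers that don't know the repository layout (e.g., PyPI's project page) can still resolve them. Below is a minimal sketch of how such a rewrite could be scripted; the helper name, regex, and base URL are illustrative assumptions, not part of this PR, which edits the three links by hand.

```python
# Hypothetical helper (not part of this PR): rewrite relative image links
# in a README to absolute raw.githubusercontent.com URLs so they render
# on PyPI and other non-GitHub renderers.
import re
from pathlib import Path

# Assumed base URL pattern: https://raw.githubusercontent.com/<owner>/<repo>/<branch>/
RAW_BASE = "https://raw.githubusercontent.com/amazon-science/chronos-forecasting/main/"

def absolutize_images(markdown: str) -> str:
    # Only rewrite src attributes that are relative paths (no scheme),
    # matching the <img src="..."> form used in this README.
    return re.sub(
        r'src="(?!https?://)([^"]+)"',
        lambda m: f'src="{RAW_BASE}{m.group(1)}"',
        markdown,
    )

readme = Path("README.md")
readme.write_text(absolutize_images(readme.read_text()))
```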
abdulfatir authored Dec 2, 2024
1 parent 86f755c commit 60bcded
Showing 1 changed file with 3 additions and 4 deletions.

README.md (+3, −4)
@@ -1,8 +1,7 @@
 <div align="center">
-<img src="figures/chronos-logo.png" width="60%">
+<img src="https://raw.githubusercontent.com/amazon-science/chronos-forecasting/main/figures/chronos-logo.png" width="60%">
 </div>
 
-
 <div align="center">
 
 # Chronos: Learning the Language of Time Series
@@ -33,7 +32,7 @@ Chronos is a family of **pretrained time series forecasting models** based on la
 For details on Chronos models, training data and procedures, and experimental results, please refer to the paper [Chronos: Learning the Language of Time Series](https://arxiv.org/abs/2403.07815).
 
 <p align="center">
-<img src="figures/main-figure.png" width="100%">
+<img src="https://raw.githubusercontent.com/amazon-science/chronos-forecasting/main/figures/main-figure.png" width="100%">
 <br />
 <span>
 Fig. 1: High-level depiction of Chronos. (<b>Left</b>) The input time series is scaled and quantized to obtain a sequence of tokens. (<b>Center</b>) The tokens are fed into a language model which may either be an encoder-decoder or a decoder-only model. The model is trained using the cross-entropy loss. (<b>Right</b>) During inference, we autoregressively sample tokens from the model and map them back to numerical values. Multiple trajectories are sampled to obtain a predictive distribution.
@@ -65,7 +64,7 @@ The models in this repository are based on the [T5 architecture](https://arxiv.o
 The following figure showcases the remarkable **zero-shot** performance of Chronos and Chronos-Bolt models on 27 datasets against local models, task-specific models and other pretrained models. For details on the evaluation setup and other results, please refer to [the paper](https://arxiv.org/abs/2403.07815).
 
 <p align="center">
-<img src="figures/zero_shot-agg_scaled_score.svg" width="100%">
+<img src="https://raw.githubusercontent.com/amazon-science/chronos-forecasting/main/figures/zero_shot-agg_scaled_score.svg" width="100%">
 <br />
 <span>
 Fig. 2: Performance of different models on Benchmark II, comprising 27 datasets <b>not seen</b> by Chronos and Chronos-Bolt models during training. This benchmark provides insights into the zero-shot performance of Chronos and Chronos-Bolt models against local statistical models, which fit parameters individually for each time series, task-specific models <i>trained on each task</i>, and pretrained models trained on a large corpus of time series. Pretrained Models (Other) indicates that some (or all) of the datasets in Benchmark II may have been in the training corpus of these models. The probabilistic (WQL) and point (MASE) forecasting metrics were normalized using the scores of the Seasonal Naive baseline and aggregated through a geometric mean to obtain the Agg. Relative WQL and MASE, respectively.
