
Add Laplace distribution to torch distributions #2962

Merged

Conversation

melopeo
Contributor

@melopeo commented Aug 14, 2023

Description of changes:
Add Laplace distribution to torch/distributions
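For context, distribution heads in gluonts.torch.distributions follow the DistributionOutput pattern: declare the dimensionality of each projected parameter, point at the corresponding torch.distributions class, and map the raw network outputs into the parameter domain. Below is a minimal sketch of that pattern for Laplace; it is illustrative only, and the merged implementation may differ in detail.

from typing import Dict, Tuple

import torch
import torch.nn.functional as F
from torch.distributions import Laplace

from gluonts.torch.distributions import DistributionOutput


class LaplaceOutput(DistributionOutput):
    # One projected parameter each for location and scale.
    args_dim: Dict[str, int] = {"loc": 1, "scale": 1}
    distr_cls: type = Laplace

    @classmethod
    def domain_map(cls, loc: torch.Tensor, scale: torch.Tensor):
        # softplus keeps the scale strictly positive.
        scale = F.softplus(scale)
        return loc.squeeze(-1), scale.squeeze(-1)

    @property
    def event_shape(self) -> Tuple:
        return ()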

By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.

Please tag this PR with at least one of these labels to make our release process faster: BREAKING, new feature, bug fix, other change, dev setup

@melopeo added the enhancement (New feature or request) label Aug 14, 2023
@melopeo requested review from lostella and abdulfatir Aug 14, 2023 09:02
@melopeo merged commit 3014464 into awslabs:dev Aug 14, 2023
@melopeo deleted the add_laplace_distribution_to_torch_distributions branch Aug 14, 2023 11:27
@kashif
Contributor

kashif commented Aug 17, 2023

@melopeo is the loss with Laplace generally huge compared with, say, the Student-t head?

@melopeo
Contributor Author

melopeo commented Aug 17, 2023

Hello @kashif, thanks for reaching out.

I ran a small experiment based on your question. Using the "constant" synthetic dataset, I get the following:

  • student-t: Epoch 99: : 50it [00:01, 45.90it/s, loss=-5.83, v_num=205, train_loss=-5.68]
  • Laplace: Epoch 99: : 50it [00:00, 50.11it/s, loss=-4.71, v_num=204, train_loss=-4.58]

From this experiment it seems that the losses are on the same scale.

Here is the code I used for it.

from pprint import pprint

from gluonts.dataset.repository import get_dataset
from gluonts.evaluation import make_evaluation_predictions, Evaluator
from gluonts.torch import DeepAREstimator as DeepAREstimatorTorch
from gluonts.torch.distributions import (
    StudentTOutput,
    LaplaceOutput,
)

distr_output = StudentTOutput()
# distr_output = LaplaceOutput()

if __name__ == "__main__":
    dataset = get_dataset("constant", regenerate=False)

    estimator = DeepAREstimatorTorch(
        prediction_length=dataset.metadata.prediction_length,
        freq=dataset.metadata.freq,
        distr_output=distr_output,
    )

    predictor = estimator.train(dataset.train)

    forecast_it, ts_it = make_evaluation_predictions(
        dataset.test, predictor=predictor, num_samples=100
    )

    agg_metrics, item_metrics = Evaluator()(
        ts_it, forecast_it, num_series=len(dataset.test)
    )
    pprint(agg_metrics)
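
To reproduce the Laplace numbers, comment out the StudentTOutput line and uncomment distr_output = LaplaceOutput() instead; the rest of the script is unchanged.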

@kashif
Contributor

kashif commented Aug 17, 2023

ok cool! thanks!

@lostella added the new feature (one of pr required labels) and torch (This concerns the PyTorch side of GluonTS) labels Aug 22, 2023