# Improve documentation on forecast evaluation #3238
Labels: documentation
@7dy15 thanks for opening the issue. In fact, the forecasts can be evaluated as follows:

```python
import pandas as pd

from gluonts.dataset.pandas import PandasDataset
from gluonts.dataset.split import split
from gluonts.model.npts import NPTSPredictor
from gluonts.model.seasonal_naive import SeasonalNaivePredictor
from gluonts.model.evaluation import evaluate_forecasts
from gluonts.ev.metrics import MASE, RMSE, MeanWeightedSumQuantileLoss

# Load the Twitter volume dataset and resample it to hourly frequency
url = "https://raw.githubusercontent.com/numenta/NAB/master/data/realTweets/Twitter_volume_AMZN.csv"
df = pd.read_csv(url, header=0, index_col="timestamp", parse_dates=True).resample("1h").sum()
dataset = PandasDataset(df, target="value")

prediction_length = 24

# Split off the training data and generate 3 rolling test windows
training_dataset, test_template = split(
    dataset, date=pd.Period("2015-04-07 00:00:00", freq="1h")
)
test_data = test_template.generate_instances(
    prediction_length=prediction_length,
    windows=3,
)

# Two baseline predictors
seasonal_naive = SeasonalNaivePredictor(prediction_length=prediction_length, season_length=7 * 24)
npts = NPTSPredictor(prediction_length=prediction_length)

metrics = [MASE(), RMSE(), MeanWeightedSumQuantileLoss(quantile_levels=[0.1, 0.5, 0.9])]

# Evaluate each baseline's forecasts on the test windows
forecasts_seasonal_naive = list(seasonal_naive.predict(test_data.input))
eval_seasonal_naive = evaluate_forecasts(forecasts_seasonal_naive, test_data=test_data, metrics=metrics)
print(f"seasonal naive:\n {eval_seasonal_naive}")

forecasts_npts = list(npts.predict(test_data.input))
eval_npts = evaluate_forecasts(forecasts_npts, test_data=test_data, metrics=metrics)
print(f"npts:\n {eval_npts}")
```

which will output the aggregated metric values for the two baselines.
I'm turning this into a documentation issue, since it's not really a bug. Again, thanks for spotting this! Steps to solving this could include
## Description
When I finished training the model and got the prediction result, I wanted to evaluate the result with `forecast_it, ts_it = make_evaluation_predictions()` as the tutorial suggests. But when I tried to convert `forecast_it` and `ts_it` to lists with the `list()` function, an error was raised:
## To Reproduce
```python
from gluonts.evaluation import make_evaluation_predictions

forecast_it, ts_it = make_evaluation_predictions(
    dataset=test_data,  # test dataset
    predictor=predictor,  # predictor
    num_samples=100,  # number of sample paths we want for evaluation
)
forecasts_ev = list(forecast_it)
tss = list(ts_it)
```
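The evaluation step that the tutorial continues with would then look roughly like this; a minimal sketch, assuming `forecasts_ev` and `tss` can be built without the error above, using the `Evaluator` class from `gluonts.evaluation`:

```python
from gluonts.evaluation import Evaluator

# Compute aggregate and per-series accuracy metrics from the forecasts
evaluator = Evaluator(quantiles=[0.1, 0.5, 0.9])
agg_metrics, item_metrics = evaluator(tss, forecasts_ev)
print(agg_metrics)
```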