Decisions are only as good as the forecasts behind them. That’s why Pigment provides built-in accuracy measures, enabling you to:

  • Understand how well the model fits your historical data

  • Build trust in the forecasted values

  • Compare multiple models or configurations objectively

  • Choose between simple and advanced forecasting approaches

This article explains the accuracy measures Pigment calculates and how to interpret them.

 

 

Before you begin

 

How accuracy is calculated

 

To evaluate accuracy, Pigment uses a backtesting approach: we hide a portion of historical data (the backtest period) and ask the model to forecast it. We then compare the forecasted values to the actuals using the measures described below.

By default, Pigment uses the first 80% of the dataset to train the model and the last 20% as the backtest period.
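
To make the mechanics concrete, here is a minimal Python sketch of an 80/20 backtest split. The data and variable names are made up for illustration; this is not Pigment's internal implementation.

```python
# Illustrative 80/20 backtest split (hypothetical data, not Pigment's internal code).
history = [100, 120, 130, 110, 140, 150, 135, 160, 155, 170]  # actuals, ordered by time

split = int(len(history) * 0.8)  # first 80% trains the model
train, backtest = history[:split], history[split:]

print(train)     # [100, 120, 130, 110, 140, 150, 135, 160]
print(backtest)  # [155, 170] -- the model forecasts these periods, and the
                 # forecasts are scored against them with the measures below
```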

 

Implications of calculating accuracy measures

 

Calculating accuracy measures requires running the model twice:

  1. Once to calculate accuracy, as described above.

  2. A second time to generate the final forecast.

Because of this extra step, the number of forecasted time series, and therefore the compute time, is doubled.

That’s why accuracy calculation is optional. We recommend enabling it only when you’re evaluating model performance, not for every forecast run.

 

Which accuracy measures does Predictions calculate?

 

Once accuracy measure calculation is enabled in the settings, Predictions calculates three commonly used forecast accuracy measures:

 

Mean Absolute Error (MAE)

  • Formula:

    $\text{MAE} = \frac{1}{n} \sum_{i=1}^{n} \left| \text{actual}_i - \text{forecast}_i \right|$

  • Example:
    Let’s say your actual sales for three weeks are [100, 120, 130]
    and your forecasted sales were [90, 100, 130].

    The absolute errors are |100 − 90| = 10, |120 − 100| = 20, and |130 − 130| = 0, so:

    $\text{MAE} = \frac{10 + 20 + 0}{3} = 10$

  • Interpretation:
    “On average, my forecast is off by 10 units. That’s the typical error I can expect.”

  • Application in Pigment:

    1. First, Pigment calculates MAE by time series.

    2. Then, Pigment calculates the Metric MAE by summing MAE across all time series of the Metric.
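
Here is a minimal Python sketch of this two-step aggregation. The series names and the second series’ values are hypothetical; only the region_A numbers come from the example above.

```python
# MAE per time series, then the Metric MAE as the sum across series (illustrative only).
def mae(actuals, forecasts):
    return sum(abs(a - f) for a, f in zip(actuals, forecasts)) / len(actuals)

series = {
    "region_A": ([100, 120, 130], [90, 100, 130]),   # (actuals, forecasts) from the example
    "region_B": ([200, 210, 190], [205, 200, 195]),  # hypothetical second series
}

mae_by_series = {name: mae(a, f) for name, (a, f) in series.items()}
metric_mae = sum(mae_by_series.values())  # step 2: sum across all series of the Metric

print(mae_by_series)  # {'region_A': 10.0, 'region_B': 6.66...}
print(metric_mae)     # 16.66...
```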

 

 

 

Root Mean Squared Error (RMSE)

  • Formula:

    $\text{RMSE} = \sqrt{\frac{1}{n} \sum_{i=1}^{n} \left( \text{actual}_i - \text{forecast}_i \right)^2}$

  • Example:
    Let’s take the same actuals and forecasts as the first example.

    The squared errors are 10² = 100, 20² = 400, and 0² = 0, so:

    $\text{RMSE} = \sqrt{\frac{100 + 400 + 0}{3}} \approx 12.9$

  • Interpretation:
    “My forecast is off by about 12.9 units on average, but this measure penalizes larger errors more heavily.”

  • Application in Pigment:

    1. First, Pigment calculates RMSE by time series.

    2. Then, Pigment calculates the Metric RMSE by summing RMSE across all time series of the Metric.
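
The same aggregation sketch, adapted for RMSE (same hypothetical series as the MAE sketch above):

```python
import math

# RMSE per time series, then the Metric RMSE as the sum across series (illustrative only).
def rmse(actuals, forecasts):
    n = len(actuals)
    return math.sqrt(sum((a - f) ** 2 for a, f in zip(actuals, forecasts)) / n)

series = {
    "region_A": ([100, 120, 130], [90, 100, 130]),
    "region_B": ([200, 210, 190], [205, 200, 195]),
}

rmse_by_series = {name: rmse(a, f) for name, (a, f) in series.items()}
metric_rmse = sum(rmse_by_series.values())  # step 2: sum across all series of the Metric

print(rmse_by_series)  # {'region_A': 12.90..., 'region_B': 7.07...}
```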

 

 

 

Mean Absolute Scaled Error (MASE)

 

This measure uses the concept of the “naive” forecast. A “naive” forecast is a simple prediction method where the forecasted values are all assumed to be the same as the most recently observed value.

  • Formula:

    $\text{MASE} = \frac{\text{MAE}_{\text{model}}}{\text{MAE}_{\text{naive}}}$

  • Example:
    Using the above examples, your model has an MAE of 10. Pigment calculates the “naive” MAE based on the last actual observed before the backtest period. Imagine that this last actual is 100, so the naive forecast is [100, 100, 100]. Your naive MAE is:

    $\text{MAE}_{\text{naive}} = \frac{|100 - 100| + |120 - 100| + |130 - 100|}{3} = \frac{50}{3} \approx 16.7$

    And your MASE is:

    $\text{MASE} = \frac{10}{16.7} \approx 0.6$
 

 

 

  • Interpretation:

    “With a MASE of 0.6, my model performs 40% better than a naive forecast.”

    • MASE < 1 → Better than naive

    • MASE > 1 → Worse than naive

  • Application in Pigment:

    1. First, Pigment calculates MASE by time series.

    2. Then, Pigment calculates the Metric MASE by computing the weighted average of the MASE across all time series of the Metric.
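
A minimal Python sketch of the per-series MASE calculation, using the example values above and this article's definition of the naive forecast. The Metric-level weighted average is not shown because the weights are not specified here.

```python
# MASE per time series (illustrative only). The naive forecast repeats the
# last actual observed before the backtest period, as defined in this article.
def mae(actuals, forecasts):
    return sum(abs(a - f) for a, f in zip(actuals, forecasts)) / len(actuals)

def mase(actuals, forecasts, last_training_actual):
    naive = [last_training_actual] * len(actuals)  # naive forecast
    return mae(actuals, forecasts) / mae(actuals, naive)

# Model MAE = 10, naive MAE = 50/3 ≈ 16.7, so MASE ≈ 0.6.
print(mase([100, 120, 130], [90, 100, 130], last_training_actual=100))
```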

 

 

 
