TS Compare Tool
It provides a number of commonly used measures of model accuracy, comparing each model's point forecasts with the actual values of the forecast field over a holdout set of data. In addition, both a plot and a table of actual and forecast values are provided. The inputs to the macro are one or more time series models (unioned together) that are based on the same field and the same estimation dataset, and an Alteryx data stream containing the actual values for the holdout period, along with the values of any covariates that were used in creating the models. The actual values must be for the time periods immediately following the time periods used to create the models.
Chapter 2, Section 5 of Hyndman and Athanasopoulos's online book Forecasting: Principles and Practice provides a good discussion of the measures used to assess forecast model accuracy.
This tool uses the R tool. Go to Options > Download Predictive Tools and sign in to the Alteryx Downloads and Licenses portal to install R and the packages used by the R tool. See Download and Use Predictive Tools.
The TS Compare tool requires a Designer data stream that is either:
- A set of time series models that predict the same field, ideally estimating the same time periods, that have been unioned together.
- An Alteryx data stream that contains the same field as the one forecast by the time series ARIMA or ETS tools, but for time periods that immediately follow the time periods used to estimate the models. If one of the models to be compared is an ARIMA model with covariates, then any covariate fields used should also be included in this data stream.
The size of the holdout set should be at least as long as the number of periods into the future the model will be used to predict in production. If the total available sample is large, the holdout set is often larger than the number of periods to be forecast, typically between 10% and 20% of the available data.
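A minimal sketch of this sizing rule (the function name and the `frac` default are illustrative, not tool parameters): the holdout is never shorter than the production forecast horizon, and when the sample is large it grows to a fixed fraction of the data.

```python
def split_holdout(series, horizon, frac=0.15):
    """Split a time series into an estimation set and a holdout set.

    The holdout covers at least `horizon` periods (the number of periods
    the model will forecast in production) and, for large samples, `frac`
    of the data -- a value between 0.10 and 0.20 is typical.
    """
    n_holdout = max(horizon, round(frac * len(series)))
    # Holdout periods immediately follow the estimation periods.
    return series[:-n_holdout], series[-n_holdout:]
```

For a 100-period series and a 6-period production horizon, the 15% rule dominates and the last 15 periods become the holdout; for a 30-period horizon, the horizon dominates instead.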
Use the Graphics Options tab to set the controls for the output.
- Plot size: Select inches or centimeters for the size of the graph.
- Graph resolution: Select the resolution of the graph in dots per inch: 1x (96 dpi); 2x (192 dpi); or 3x (288 dpi). Lower resolution creates a smaller file and is best for viewing on a monitor. Higher resolution creates a larger file with better print quality.
- Base font size (points): Select the size of the font in the graph.
View the output
Connect a Browse tool to each output anchor to view results.
- O anchor: Contains a data stream with the name of each model examined and its accuracy statistics. The accuracy statistics are the mean forecast error (ME), the square root of the mean squared forecast error (RMSE), the mean absolute forecast error (MAE), the mean percentage forecast error (MPE), the mean absolute percentage forecast error (MAPE), and the mean absolute scaled error (MASE). The most commonly used of these is MAPE; however, MASE addresses some of MAPE's shortcomings. For all measures, models with smaller values are preferred to those with larger values.
- R anchor: Consists of report snippets: a table of the actual and forecast values, a table of the accuracy statistics for each model, and a plot showing all the values of the time series along with the forecast values of every model being compared.
- I anchor: An interactive HTML dashboard consisting of plots and metrics. You can interact with the visualizations by clicking different graphical elements to reveal additional information, values, and metrics.
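The accuracy statistics reported at the O anchor can be sketched as follows. This is an illustrative reimplementation following the definitions in Hyndman and Athanasopoulos (Section 2.5), not the tool's own R code; the function name is hypothetical. Note that MASE scales the holdout errors by the in-sample mean absolute error of a one-step naive forecast, so it needs the estimation-period data as well.

```python
import math

def accuracy_measures(actual, forecast, training):
    """Point-forecast accuracy measures for a holdout set.

    `actual` and `forecast` cover the holdout period; `training` is the
    estimation-period series, needed only for the MASE scaling term.
    """
    errors = [a - f for a, f in zip(actual, forecast)]
    # Percentage errors; assumes no actual value is zero.
    pct = [100.0 * e / a for e, a in zip(errors, actual)]
    n = len(errors)
    # MASE scale: in-sample MAE of the one-step (non-seasonal) naive forecast.
    naive_mae = sum(abs(y2 - y1) for y1, y2 in zip(training, training[1:])) / (len(training) - 1)
    return {
        "ME": sum(errors) / n,                                  # mean error
        "RMSE": math.sqrt(sum(e * e for e in errors) / n),      # root mean squared error
        "MAE": sum(abs(e) for e in errors) / n,                 # mean absolute error
        "MPE": sum(pct) / n,                                    # mean percentage error
        "MAPE": sum(abs(p) for p in pct) / n,                   # mean absolute percentage error
        "MASE": (sum(abs(e) for e in errors) / n) / naive_mae,  # mean absolute scaled error
    }
```

A MASE of 1 means the model's holdout errors are, on average, as large as the in-sample errors of a naive "repeat the last value" forecast; values below 1 indicate the model outperforms that benchmark.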