The Model Comparison tool compares the performance of one or more predictive models on a validation (test) data set. It generates a report, a table of basic error measurements, and a table of prediction results for each model. The tool supports binary classification (the target variable has exactly two levels, or classes, such as "Yes" and "No"), multinomial classification (the target variable has more than two levels, such as "car", "bus", "train", and "airplane"), and regression (continuous target variable) models.
For classification problems, the report contains the overall accuracy, the accuracy per class, the F1 score, and the confusion matrix for each model. For binary classification models, Performance Diagnostic Plots are also reported; these compare the models using a set of lift curve, gain chart, precision and recall curve, and ROC curve plots. For regression models, the report includes the correlation between predicted and actual values, the root mean square error (RMSE), the mean absolute error (MAE), the mean percentage error (MPE), and the mean absolute percentage error (MAPE) of each model's predictions. Note that MPE and MAPE are undefined if any value of the target variable equals zero, since both involve dividing by the actual value of each observation. In those cases, the weighted absolute percentage error (WAPE, the sum of the absolute errors divided by the sum of the actual values) is reported instead of MAPE, and MPE is replaced by the sum of the errors divided by the sum of the actual values. While it is easy to construct contrived examples where the sum of the target values equals zero, this is unlikely to happen in practice. A plot of actual versus predicted values for each model is also provided.
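The regression error measures above, including the fallback used when the target contains zeros, can be sketched as follows. This is an illustrative implementation of the definitions in the text, not Alteryx's actual code; the function name and report layout are assumptions.

```python
import numpy as np

def regression_report(actual, predicted):
    """Compute the regression error measures described above.

    When any actual value is zero, MPE and MAPE are undefined
    (they divide by each actual value), so ratio-of-sums
    substitutes are reported instead, as the text describes.
    """
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    error = actual - predicted

    report = {
        "correlation": np.corrcoef(actual, predicted)[0, 1],
        "RMSE": np.sqrt(np.mean(error ** 2)),
        "MAE": np.mean(np.abs(error)),
    }
    if np.any(actual == 0):
        # Sum of errors over sum of actuals replaces MPE;
        # WAPE (sum of absolute errors / sum of actuals) replaces MAPE.
        report["MPE_substitute"] = np.sum(error) / np.sum(actual)
        report["WAPE"] = np.sum(np.abs(error)) / np.sum(actual)
    else:
        report["MPE"] = np.mean(error / actual)
        report["MAPE"] = np.mean(np.abs(error / actual))
    return report
```

For example, `regression_report([2, 4], [1, 5])` yields an RMSE and MAE of 1.0 and a MAPE of 0.375, while `regression_report([0, 2], [1, 1])` switches to the WAPE substitutes because of the zero in the target.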
Note that although this tool supports comparison of multiple models, it can also be used with a single model to obtain a performance report similar to the multiple-model case. The difference between the model comparison report and the report output from the R anchor of a predictive tool (e.g., Boosted Model) is that the former uses a test data set different from the training data set that built the model, so it yields an out-of-sample performance evaluation of the model.
This tool is not automatically installed with Alteryx Designer. To use this tool, download it from the Alteryx Analytics Gallery.
The Model Comparison tool requires two input data streams.
The positive class in the target variable (binary classification only, optional):
For regression problems, since the target variable contains continuous numbers, the concept of "class" doesn't apply. For multinomial classification models, the report provides a full confusion matrix for each model, so the choice of a positive class does not affect the outputs. For binary classification models, the positive class should be the outcome on which the analysis is focused. For example, if the objective is to determine which customers are more likely to respond to a direct marketing campaign, and the response values are coded as "Yes" and "No", then the likely focus will be on the "Yes" responses, and this should be selected as the "positive class" in the model comparison.
This configuration is optional. When this value is left blank, the last value of an alphabetical sort of the class names is used as the "positive class." As an example, if the target variable takes on the values "False" and "True", then the positive class becomes "True" by default since it falls after "False" in an alphabetical sort.
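The default rule above (last value of an alphabetical sort of the class names) can be sketched in a few lines. The function name is hypothetical; this illustrates the documented behavior, not Alteryx's internal implementation.

```python
def default_positive_class(levels):
    """Return the default 'positive class': the last value of an
    alphabetical sort of the class names, as described above."""
    return sorted(levels)[-1]

# "True" falls after "False" alphabetically, so it is the default.
default_positive_class(["False", "True"])  # -> "True"
default_positive_class(["No", "Yes"])      # -> "Yes"
```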
©2018 Alteryx, Inc., all rights reserved. Allocate®, Alteryx®, Guzzler®, and Solocast® are registered trademarks of Alteryx, Inc.