
Model Comparison Tool

The Model Comparison tool compares the performance of one or more predictive models using a validation (test) dataset. It generates a report, a table of basic error measures, and a table of each model's predictions. The tool supports binary classification models (where the target variable has only two levels, such as "Yes" and "No"), multinomial classification models (where the target variable has more than two levels, such as "car", "bus", "train", and "airplane"), and regression models (where the target variable is continuous).

For classification problems, the report contains the overall accuracy, the accuracy per class, the F1 score, and the confusion matrix for each model. For binary classification models, Performance Diagnostic Plots are also reported; these compare the models using lift curve, gain chart, precision-recall curve, and ROC curve plots.

For regression models, the report includes the correlation between predicted and actual values, the root mean square error (RMSE), the mean absolute error (MAE), the mean percentage error (MPE), and the mean absolute percentage error (MAPE) of each model's predictions. Note that MPE and MAPE are not defined if any value of the target variable equals zero, since both involve dividing by the actual value of each observation. In these cases, the weighted absolute percentage error (WAPE, the sum of the absolute errors divided by the sum of the actual values) is reported instead of MAPE, and MPE is replaced by the sum of the errors divided by the sum of the actual values. While it is easy to construct contrived examples where the sum of the target values equals zero, this is unlikely to happen in practice. A plot of actual versus predicted values for each model is also provided.
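To illustrate how the regression error measures relate to one another, the following sketch computes them from vectors of actual and predicted values. This is not the tool's internal implementation; it is a minimal Python approximation of the definitions above, including the fallback to sum-based measures when any actual value equals zero.

```python
import numpy as np

def regression_error_measures(actual, predicted):
    """Compute the regression error measures described above (illustrative only)."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    errors = actual - predicted

    measures = {
        "correlation": np.corrcoef(actual, predicted)[0, 1],
        "RMSE": np.sqrt(np.mean(errors ** 2)),
        "MAE": np.mean(np.abs(errors)),
    }

    if np.any(actual == 0):
        # MPE and MAPE are undefined when any actual value is zero, so fall
        # back to the sum-based alternatives. (In the contrived case where the
        # actual values sum to zero, these fallbacks are undefined as well.)
        measures["sum of errors / sum of actuals"] = errors.sum() / actual.sum()
        measures["WAPE"] = np.abs(errors).sum() / actual.sum()
    else:
        measures["MPE"] = np.mean(errors / actual)
        measures["MAPE"] = np.mean(np.abs(errors) / actual)

    return measures
```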

Although this tool supports the comparison of multiple models, it can also be used with a single model to obtain a performance report similar to the multiple-model case. The difference between the report produced by the Model Comparison tool and the report output from the R anchor of a predictive tool (such as Boosted Model) is that the former uses a testing dataset that is different from the training dataset used to build the model; consequently, it yields an out-of-sample performance evaluation of the model.

Connect Inputs

The Model Comparison tool requires two input data streams:

  • M anchor: A union of different models generated by any Alteryx predictive tool's O output anchor. To compare more than one model, combine multiple model objects together in a single data stream.

  • D anchor: The testing dataset, which is usually different from the training dataset that was used to build the models.

Configure the Tool

The positive class in target variable (binary classification only, optional): When this value is left blank, the class that comes last in an alphabetical sort of the class names is used as the positive class.

If the target variable takes on the values "False" and "True", then the positive class becomes "True" by default since it falls after "False" in an alphabetical sort.
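The default selection rule can be approximated as follows; this is an illustrative sketch of the alphabetical-sort behavior described above, not the tool's actual code.

```python
def default_positive_class(class_names):
    """Pick the class that sorts last alphabetically (illustrative only)."""
    return sorted(class_names)[-1]

# Example: with "False" and "True", "True" sorts last and becomes the positive class.
assert default_positive_class(["True", "False"]) == "True"
```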

Configuration Option Constraints

For regression problems, since the target variable contains continuous numbers, the concept of class doesn't apply. For multinomial classification models, the report provides a full confusion matrix for each model, thus picking or not picking a positive class won't affect the outputs. For binary classification models, the positive class should be the outcome on which the analysis is focused. For example, if the objective is to determine which customers are more likely to respond to a direct marketing campaign, and the response values are coded as "Yes" and "No", then the likely focus will be on the "Yes" responses, and this should be selected as the "positive class" in the model comparison.

View the Output

Connect a Browse tool to each output anchor to view results.

  • E anchor: A table of error measures.

  • P anchor: The actual and the various predicted values.

  • R anchor: A report containing the error measures and a set of diagnostic plots.