Support Vector Machine Tool

Support Vector Machines (SVM), also known as Support Vector Networks (SVN), are a popular set of supervised learning algorithms originally developed for classification (categorical target) problems and later extended to regression (numerical target) problems. SVMs are popular because they are memory efficient, can handle a large number of predictor variables (although they can provide poor fits if the number of predictors exceeds the number of estimation records), and are versatile, supporting a large number of different "kernel" functions.

The basic idea behind the method is to use the predictor variables to find the equation of a line (one predictor), a plane (two predictors), or a hyperplane (three or more predictors) that maximally separates the estimation records into groups defined by the target variable, based on a measure of distance. A kernel function provides that measure of distance, determining whether records are placed in the same or different groups; it is a function of the predictor variables that defines the distance metric.

A short video that illustrates how this works can be found here, and a very approachable discussion of the topic can be found here. The extent to which the groups are separated, conditional on the kernel function used, is known as the maximal margin. Finally, the separation of the groups may not be perfect, so a cost parameter (the cost of placing an estimation record into the "wrong" group) can also be specified.

This tool uses the e1071 R package.
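
For context, the model the tool builds is the kind that the e1071 package's svm() function fits. The following is a minimal sketch of such a call using the built-in iris data set purely for illustration; the exact arguments the Alteryx macro passes are not documented here and are an assumption.

```r
# Minimal sketch of an e1071 svm() call of the kind the tool wraps (assumed;
# the Alteryx macro's exact arguments may differ).
library(e1071)

fit <- svm(Species ~ Sepal.Length + Sepal.Width + Petal.Length + Petal.Width,
           data = iris,
           type = "C-classification",  # categorical target
           kernel = "radial",          # the tool's default kernel
           cost = 1)                   # cost of mis-grouping a record

summary(fit)  # reports the kernel, cost, and number of support vectors
```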

This tool uses the R tool. Go to Options > Download Predictive Tools and sign in to the Alteryx Downloads and Licenses portal to install R and the packages used by the R Tool.

Connect an input

An Alteryx data stream that includes a target field of interest along with one or more possible predictor fields.

Configure the tool

Required Parameters

  • Model Name: Each model needs a name so it can later be identified. Model names must start with a letter and may contain letters, numbers, and the special characters period (".") and underscore ("_"). No other special characters are allowed, and R is case sensitive.
  • Select the Target Field: Select the field from the data stream you want to predict.
  • Select the Predictor Fields: Choose the fields from the data stream you believe "cause" changes in the value of the target variable.
  • Columns containing unique identifiers, such as surrogate primary keys and natural primary keys, should not be used in statistical analyses. They have no predictive value and can cause runtime exceptions.

  • Choose the Method, classification or regression, based on the target variable you want to predict. Generally, if the target variable is a string or Boolean type, it is probably a classification problem; if it is a numeric type, it is probably a regression problem. (A small R sketch of the corresponding options follows this list.)
    • Classification

      • C-classification: Optimizes the decision plane while allowing for some amount of error
      • nu-classification: Similar to C-classification, but enables the user to limit the amount of error by selecting the value of nu.
    • Regression

      • epsilon regression: Optimizes the regression function while ignoring errors that fall within a margin of epsilon of the target value.
      • nu regression: Similar to epsilon regression, but enables the user to limit the amount of error by selecting the value of nu.
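
In e1071 terms, these four methods correspond to values of svm()'s type argument. A small sketch (assumed, not the tool's exact code) using built-in data sets:

```r
# The four methods map onto e1071::svm()'s `type` argument (illustrative only).
library(e1071)

clf_c   <- svm(Species ~ ., data = iris,   type = "C-classification")
clf_nu  <- svm(Species ~ ., data = iris,   type = "nu-classification", nu = 0.2)
reg_eps <- svm(mpg ~ wt + hp, data = mtcars, type = "eps-regression", epsilon = 0.1)
reg_nu  <- svm(mpg ~ wt + hp, data = mtcars, type = "nu-regression",  nu = 0.5)
```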

Model Customization (Optional)

The model customization section is where the user chooses the kernel type and the related parameters for each kernel. Select Specify Model Parameters to customize the model.

User provides parameters: Select to directly set the needed parameters.

Kernel Type: Determines the metric used to measure the separation between groups (a small R sketch of these options follows the list below)

  • Linear: Useful when the relation between the classes and predictors is a simple line, plane, or hyperplane
    • cost: The cost associated with mis-grouping a record. Lower values of cost allow for a certain level of error in forming groups of records in order to avoid overfitting.
  • Polynomial: The distance is measured using a polynomial function of the predictor variables
    • cost: The cost associated with mis-grouping a record. Lower values of cost allow for a certain level of error in forming groups of records in order to avoid overfitting.
    • degree: Degree of the polynomial kernel. Increasing the degree allows the margin between groups to be more flexible, reducing error on the estimation sample, but at the cost of overfitting the model to the estimation sample.
    • gamma: Coefficient of the inner-product term in the polynomial kernel.
    • coef0: The constant term in the polynomial formulation.
  • Radial (default): Good for nonlinearly separable data.
    • cost: Allows a certain level of error in classification to avoid overfitting.
    • gamma: Coefficient of the power term in the radial basis function kernel. The larger gamma is, the richer the feature space and the lower the error on the training set; however, large values may also lead to severe overfitting.
  • Sigmoid: Mainly used as a proxy for neural networks.
    • gamma: Coefficient of the inner-product term in the sigmoid kernel.
    • coef0: The constant term in the sigmoid kernel.
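
For reference, here is a sketch (assumed, not the tool's exact code) of how these kernels and their parameters appear in an e1071 svm() call:

```r
# Kernel-specific parameters as they appear in e1071::svm() (illustrative only).
library(e1071)

m_lin  <- svm(Species ~ ., data = iris, kernel = "linear",     cost = 1)
m_poly <- svm(Species ~ ., data = iris, kernel = "polynomial", cost = 1,
              degree = 3, gamma = 0.25, coef0 = 0)
m_rbf  <- svm(Species ~ ., data = iris, kernel = "radial",     cost = 1, gamma = 0.25)
m_sig  <- svm(Species ~ ., data = iris, kernel = "sigmoid",    gamma = 0.25, coef0 = 0)
```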

Machine tunes parameters: Select to provide a range of parameter values and have the tool find the best combination by searching a grid of possible values. This is more computationally expensive and takes longer, since it carries out a 10-fold cross validation for each candidate set of parameter values, but it is likely to result in a model that better fits the data.

The parameters to be selected in this case are analogous to those in the “User provides parameters” section, with the following differences:

  • Number of candidates: The number of values of each parameter to test (default: 5)
  • Kernel Type (Grid Search): See the “User provides parameters” section. The user specifies the minimum and maximum values of the relevant parameters; the tool generates the number of candidate values set in Number of candidates and finds the best combination using a 10-fold cross validation (see the sketch below).
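
Under the hood, this kind of grid search can be expressed with e1071's tune() function, which uses 10-fold cross validation by default. A sketch follows; the macro's construction of the grid from the min/max values and Number of candidates is an assumption and is not reproduced here.

```r
# Grid search over cost and gamma with 10-fold cross validation (illustrative only).
library(e1071)

tuned <- tune(svm, Species ~ ., data = iris,
              kernel = "radial",
              ranges = list(cost  = c(0.1, 1, 10, 100),
                            gamma = c(0.01, 0.1, 0.5, 1)),
              tunecontrol = tune.control(sampling = "cross", cross = 10))

tuned$best.parameters   # cost/gamma pair with the lowest cross-validation error
best_fit <- tuned$best.model
```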

Graphics Options

  • Plot size: Set the width and height dimensions of the resulting plot, using either inches or centimeters.
  • Graph resolution: Select the resolution of the graph in dots per inch: 1x (96 dpi); 2x (192 dpi); or 3x (288 dpi). Lower resolution creates a smaller file and is best for viewing on a monitor. Higher resolution creates a larger file with better print quality.

  • Base font size: The size, in points, of the base font used in the plots produced by the macro

View the output

  • O anchor: The "O" output consists of a table of the serialized model with its model name. This output can be passed, along with a test data set, to a Score tool to generate predictions (a minimal R sketch follows this list).
  • R anchor: The "R" output consists of the report snippets generated by the Support Vector Machine tool. The report is different for classification and regression, since they have different performance evaluation methods.
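
In R terms, scoring the serialized model against new data amounts to a predict() call. A minimal sketch, where fit is a model created as in the earlier examples and new_data is a hypothetical data frame containing the same predictor fields:

```r
# Scoring sketch (illustrative only): `fit` and `new_data` are hypothetical names.
scores <- predict(fit, newdata = new_data)
head(scores)  # predicted classes (classification) or values (regression)
```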