Support Vector Machine Tool

One Tool Example

Support Vector Machine has a One Tool Example. Go to Sample Workflows to learn how to access this and many other examples directly in Alteryx Designer.

Support Vector Machines (SVM), or Support Vector Networks (SVN), are a popular set of supervised learning algorithms originally developed for classification (categorical target) problems and later extended to regression (numerical target) problems. SVMs are popular because they are memory efficient, can handle a large number of predictor variables (although they can provide poor fits if the number of predictors exceeds the number of estimation records), and are versatile since they support a large number of different "kernel" functions.

The basic idea behind the method is to find the equation of the line (1 predictor), plane (2 predictors), or hyperplane (3 or more predictors) that best separates the rows, based on a measure of distance, into the different groups defined by the target variable. A kernel function provides that measure of distance: it is a function of the predictor variables that determines whether records are placed in the same group or in different groups.

A short video that illustrates how this works, and a very approachable discussion of the topic, are available online. The extent to which the groups are separated, conditional on the kernel function used, is known as the maximal margin. Finally, the separation of the groups may not be perfect, but a cost parameter (the cost of placing an estimation record into the "wrong" group) can be specified to control how much error is tolerated.
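
The sketch below is a minimal, illustrative example of this idea using the e1071 R package that the tool is built on; the iris sample data and the parameter values are assumptions for illustration only, not the tool's defaults.

library(e1071)

# Fit a classifier that looks for the maximal-margin boundary between species,
# measuring separation with a radial kernel and tolerating some error via cost.
fit <- svm(Species ~ ., data = iris, kernel = "radial", cost = 1)

# Inspect the fitted model: kernel, cost, and the number of support vectors
# (the records that define the separating boundary).
summary(fit)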

This tool uses the e1071 R package.

This tool uses the R tool. Go to Options > Download Predictive Tools and sign in to the Alteryx Downloads and Licenses portal to install R and the packages used by the R tool. Visit Download and Use Predictive Tools.

Connect an Input

Connect an Alteryx data stream that includes a target field of interest along with 1 or more possible predictor fields.

Configure the Tool

Required Parameters

  • Model Name: Each model needs a name so it can later be identified. Model names must start with a letter and can contain letters, numbers, and the special characters period (.) and underscore (_). No other special characters are allowed, and R is case sensitive.

  • Select the Target Field: Select the field from the data stream you want to predict.

  • Select the Predictor Fields: Choose the fields from the data stream you believe "cause" changes in the value of the target variable. Columns that contain unique identifiers, like surrogate primary keys and natural primary keys, should not be used in statistical analyses. They have no predictive value and can cause runtime exceptions.

  • Choose the Method of Classification or Regression based on the target variable you want to predict. Generally, if the target variable is a string or Boolean type, it is probably a classification problem; if it is a numeric type, it is probably a regression problem. The sketch after this list shows how these choices map onto the underlying e1071 settings.

    • Classification:

      • A basic model summary: The function call in R, target, predictors, and related parameters.

      • Model performance:

        • A Confusion Matrix

        • The SVM Classification Plots

        • The report explains how to interpret each performance evaluation measure.

    • Classification options:

      • C-classification: Optimizes the decision plane while allowing for some amount of error.

      • nu-classification: Similar to C-classification, but enables the user to limit the amount of error by selecting the value of nu.

    • Regression:

      • A basic model summary: The function call in R, target, predictors, and related parameters.

      • Model performance:

        • Root Mean Squared Error

        • R-squared

        • Mean Absolute Error

        • Median Absolute Error

        • Residual Plot

        • Residual Distribution

        • The report explains how to interpret each performance evaluation measure.

    • Regression options:

      • epsilon regression: The standard SVM regression formulation, in which prediction errors smaller than epsilon are not penalized.

      • nu regression: Similar to epsilon regression, but enables the user to limit the amount of error by selecting the value of nu.
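
As a rough illustration of how these choices map onto the underlying e1071 R package, the sketch below fits each of the four types; the sample data, parameter values, and hand-computed performance measures are assumptions for illustration, not the tool's exact report.

library(e1071)

# Categorical target: C-classification vs. nu-classification.
c_fit  <- svm(Species ~ ., data = iris, type = "C-classification", cost = 1)
nu_fit <- svm(Species ~ ., data = iris, type = "nu-classification", nu = 0.2)

# A confusion matrix like the one in the tool's classification report.
table(predicted = predict(c_fit, iris), actual = iris$Species)

# Numerical target: epsilon regression vs. nu regression.
eps_fit <- svm(mpg ~ ., data = mtcars, type = "eps-regression", epsilon = 0.1)
nu_reg  <- svm(mpg ~ ., data = mtcars, type = "nu-regression",  nu = 0.5)

# Root mean squared error, one of the measures shown in the regression report.
sqrt(mean((mtcars$mpg - predict(eps_fit, mtcars))^2))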

Model Customization (Optional)

The model customization section is where you choose the kernel type and related parameters of each kernel. Select Specify Model Parameters to customize the model.

User provides parameters: Select to directly set the needed parameters.

Kernel Type: Determines the metric used to measure the separation between groups. The sketch after this list illustrates how each kernel and its parameters map onto the underlying e1071 call.

  • Linear: Useful when the relation between the classes and predictors is a simple line, plane, or hyperplane.

    • cost: The cost associated with mis-grouping a record. A lower value of cost allows for a certain level of error in forming groups of records in order to avoid overfitting.

  • Polynomial: The distance is measured using a polynomial function of the predictor variables.

    • cost: The cost associated with mis-grouping a record. A lower value of cost allows for a certain level of error in forming groups of records in order to avoid overfitting.

    • degree: Degree of the polynomial kernel. Increasing the degree allows the margin between groups to be more flexible, which reduces error on the estimation sample, but at the cost of potentially overfitting the model to that sample.

    • gamma: Coefficient of the inner-product term in the polynomial kernel.

    • coef0: The constant term in the polynomial formulation.

  • Radial (default): Good for nonlinearly separable data.

    • cost: Allows for a certain level of error in classification to avoid overfitting.

    • gamma: Coefficient of the power term in the radial basis function kernel. The larger gamma is, the richer the feature space and the smaller the error on the training set; however, a large gamma can also lead to severe overfitting.

  • Sigmoid: Mainly used as a proxy for neural networks.

    • gamma: Defines how far the influence of a single training example reaches.

    • coef0: The constant term in the sigmoid kernel.
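
The following sketch, referenced above, shows how each kernel type and its parameters correspond to arguments of e1071::svm; the data set and parameter values are illustrative assumptions only.

library(e1071)

# Linear kernel: only cost applies.
svm(Species ~ ., data = iris, kernel = "linear", cost = 1)

# Polynomial kernel: cost, degree, gamma, and coef0.
svm(Species ~ ., data = iris, kernel = "polynomial",
    cost = 1, degree = 3, gamma = 0.25, coef0 = 0)

# Radial kernel (the tool's default): cost and gamma.
svm(Species ~ ., data = iris, kernel = "radial", cost = 1, gamma = 0.25)

# Sigmoid kernel: gamma and coef0.
svm(Species ~ ., data = iris, kernel = "sigmoid", gamma = 0.25, coef0 = 0)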

Machine tunes parameters: Select to provide a range of parameters and computationally find the best parameters by searching a grid of possible values. This is more computationally expensive and hence takes longer because it carries out a 10-fold cross-validation to test the model on multiple parameter values. However, it is likely to result in a model that better fits the data.

The parameters to select in this case are analogous to those in the "User provides parameters" section, with these differences:

  • Number of candidates: How many values of the parameters the user wishes to test (the default is 5).

  • Kernel Type (Grid Search): Refer to the "User provides parameters" section. The user specifies the minimum and maximum values of certain parameters. The tool generates the number of candidates set in Number of candidates and finds the best one using 10-fold cross-validation. A rough equivalent using the underlying R functions is sketched below.
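
The sketch below approximates this grid search with e1071::tune.svm; the parameter ranges, the number of candidate values, and the data set are illustrative assumptions, not the tool's actual search grid.

library(e1071)

set.seed(1)
tuned <- tune.svm(Species ~ ., data = iris, kernel = "radial",
                  cost  = 10^seq(-1, 2, length.out = 5),  # 5 candidate costs
                  gamma = 10^seq(-2, 0, length.out = 5),  # 5 candidate gammas
                  tunecontrol = tune.control(cross = 10)) # 10-fold cross-validation

# The parameter combination with the lowest cross-validated error,
# and the model refit on the full estimation sample with those values.
tuned$best.parameters
tuned$best.model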

Graphics Options

  • Plot size: Set the width and height dimensions of the resulting plot, using either inches or centimeters.

  • Graph resolution: Select the resolution of the graph in dots per inch: 1x (96 dpi), 2x (192 dpi), or 3x (288 dpi).

    • Lower resolution creates a smaller file and is best for viewing on a monitor.

    • Higher resolution creates a larger file with better print quality.

  • Base font size: The point size of the base font used in the plots produced by the macro.
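
These options are not R code you write yourself, but the generic sketch below illustrates how plot size, resolution, and base font size typically map onto standard R graphics device settings; it is not the macro's actual implementation.

# Generic illustration only: size, resolution, and base font of a saved plot.
png("svm_plot.png",
    width = 6, height = 6, units = "in",  # plot size in inches
    res = 192,                            # 2x graph resolution (192 dpi)
    pointsize = 10)                       # base font size in points
plot(iris$Petal.Length, iris$Petal.Width, col = iris$Species)
dev.off()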

View the Output

  • O anchor: The "O" output consists of a table of the serialized model with its model name. The serialized model can be passed to a Score tool, together with a test dataset, to generate predictions (see the sketch after this list).

  • R anchor: The "R" output consists of the report snippets generated by the Support Vector Machine tool. The report is different for classification and regression since they have different performance evaluation methods.
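
Conceptually, scoring with the O-anchor model amounts to applying the fitted SVM to new records, as in the sketch below; this is an assumed illustration using R's predict() on an e1071 model, not the Score tool's actual implementation.

library(e1071)

fit <- svm(Species ~ ., data = iris, kernel = "radial", cost = 1)

# "Test dataset": here simply a few rows with the target column removed,
# for illustration only.
new_records <- iris[c(1, 51, 101), names(iris) != "Species"]
predict(fit, newdata = new_records)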