
Image Recognition

Use the Image Recognition tool to build a machine learning model that can classify images by group. You can use your own data and labels to train a new model, or you can use one of the pre-trained models we provide.

Alteryx Intelligence Suite Required

This tool is part of Alteryx Intelligence Suite. Intelligence Suite requires a separate license and an add-on installer for Designer. After you install Designer, install Intelligence Suite and start your free trial.

Tool Components

The Image Recognition tool has 5 anchors (2 inputs and 3 outputs):

  • T input anchor: Use the T input anchor to input the data you want to use for training.

  • V input anchor: Use the V input anchor to input the data you want to use for validation.

  • M output anchor: Use the M output anchor to pass the model you've built downstream.

  • E output anchor: Use the E output anchor to view model evaluation metrics. Metrics include information on the precision, recall, and accuracy of each classification label.

  • R output anchor: Connect the R output anchor to a Browse tool to view the model report. The report includes plots of accuracy and loss after each epoch. Use these plots to assess whether the tool sufficiently trained the model.

Important

The images you pass into Image Recognition must be in BLOB file format.

Configure the Tool

To use this tool, follow these steps:

  1. Drag the tool onto the canvas.

  2. Connect to upstream data with images you want to train your model to recognize. Note that the maximum image size is 512 × 512 pixels (see the resizing sketch after these steps).

  3. Input your Training Images by specifying the Image Field and Image Labels.

  4. Input your Validation Images by specifying the Image Field and Image Labels.

  5. Run the workflow.
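
If your source images might exceed the 512 × 512 pixel limit, one option is to downscale them on disk before they enter the workflow. The following is a minimal Python sketch using Pillow, run outside of Designer; the training_images folder, the *.png pattern, and the output folder are illustrative assumptions, not part of the tool.

    from pathlib import Path

    from PIL import Image

    MAX_SIZE = (512, 512)  # the tool's maximum image size

    src = Path("training_images")          # assumption: original images live here
    dst = Path("training_images_resized")  # assumption: downscaled copies go here
    dst.mkdir(exist_ok=True)

    for path in src.glob("*.png"):
        with Image.open(path) as img:
            img.thumbnail(MAX_SIZE)  # downscales only; preserves aspect ratio
            img.save(dst / path.name)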

Options

An epoch is a single pass (forward and backward) of all data in a training set through a neural network. Epochs are related to iterations, but they aren't the same thing. An iteration is a single pass of one batch of the training set through the network.

Increasing the number of epochs allows the model to learn from the training set for a longer time. But doing that also increases the computational expense.

You can increase the number of epochs to help reduce error in the model. But at some point, the additional error reduction might not be worth the added computational expense. Also, training for too many epochs can cause overfitting, while training for too few can cause underfitting.
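
To make epochs, iterations, and batches concrete, here is a minimal Keras sketch in Python. It is illustrative only, not the tool's internal code; the toy data, layer sizes, and class count are assumptions.

    import math

    import numpy as np
    from tensorflow import keras

    # Toy data: 1,000 RGB images at 32 x 32 pixels with 3 classes (illustrative only).
    x_train = np.random.rand(1000, 32, 32, 3).astype("float32")
    y_train = keras.utils.to_categorical(np.random.randint(0, 3, size=1000), num_classes=3)

    # A deliberately tiny network, just to show the training-loop terminology.
    model = keras.Sequential([
        keras.layers.Input(shape=(32, 32, 3)),
        keras.layers.Flatten(),
        keras.layers.Dense(64, activation="relu"),
        keras.layers.Dense(3, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])

    epochs = 10      # one epoch = every training sample passes through the network once
    batch_size = 32  # one iteration = one batch passes through the network

    iterations_per_epoch = math.ceil(len(x_train) / batch_size)  # 1000 / 32 -> 32
    print(f"{iterations_per_epoch} iterations per epoch, {epochs * iterations_per_epoch} total")

    model.fit(x_train, y_train, epochs=epochs, batch_size=batch_size)

Increasing the number of epochs multiplies the total number of iterations, which is where the extra computational expense comes from.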

Pre-trained models are models that contain feature-extraction methods with parameters that are already defined. Models with more parameters tend to be more accurate, but slower and more computationally expensive. Models with fewer parameters tend to be less accurate, but faster and computationally cheaper.

Here are simplified explanations of the pre-trained models included in the tool. Keep in mind that the performance of these models depends heavily on your data, so these summaries won't always hold.

  • VGG16 tends to be the most accurate, slowest, and most computationally expensive. Minimum image size: 32 × 32 pixels.

  • InceptionResNetV2 tends to balance accuracy, speed, and computational expense, with some bias toward accuracy. Minimum image size: 75 × 75 pixels.

  • ResNet50V2 tends to balance accuracy, speed, and computational expense, with some bias toward speed and lower computational expense. Minimum image size: 32 × 32 pixels.

  • InceptionV3 tends to be the least accurate (but still quite accurate), fastest, and least computationally expensive. Minimum image size: 75 × 75 pixels.

Each of those models was trained on a dataset that contained over 14 million images with more than 20,000 labels.
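
For comparison, all four architectures are also available in keras.applications with ImageNet weights. This Python sketch, which is not the tool's internal code, simply loads each one and prints its parameter count (the weights download on first run).

    from tensorflow.keras import applications

    # Load each architecture with its ImageNet weights.
    pretrained = {
        "VGG16": applications.VGG16(weights="imagenet"),
        "InceptionResNetV2": applications.InceptionResNetV2(weights="imagenet"),
        "ResNet50V2": applications.ResNet50V2(weights="imagenet"),
        "InceptionV3": applications.InceptionV3(weights="imagenet"),
    }

    # More parameters generally means higher accuracy but more computational expense.
    for name, model in pretrained.items():
        print(f"{name}: {model.count_params():,} parameters")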

Choosing a pre-trained model lets you skip training an entire neural network on your own images. When you choose a pre-trained model, you're effectively assuming that your input matches what the pre-trained model expects, so you don't need to rebuild a model that does roughly the same thing as the pre-trained one (and might even perform worse). Because the features in most images tend to resemble the ones these models saw during training, you can often safely assume that a pre-trained model will work with your input.

Use a pre-trained model when you have images with features that match what the pre-trained model expects and you want to avoid training your own model.
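
The general idea behind reusing a pre-trained model looks roughly like this in Keras: keep the trained feature-extraction layers fixed and fit only a small classification head on your own labels. This is a conceptual sketch, not what the Image Recognition tool runs internally; the input shape and class count are assumptions.

    from tensorflow import keras
    from tensorflow.keras import applications, layers

    num_classes = 3  # assumption: the number of labels in your training data

    # Load the pre-trained feature extractor without its original classification head.
    base = applications.ResNet50V2(
        weights="imagenet", include_top=False, pooling="avg", input_shape=(224, 224, 3)
    )
    base.trainable = False  # keep the pre-trained feature-extraction parameters fixed

    # Only this small classification head is trained on your own images and labels.
    model = keras.Sequential([base, layers.Dense(num_classes, activation="softmax")])
    model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
    model.summary()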

A batch is a subset of the entire training dataset.

Decreasing the batch size lets you stagger how much data passes through a neural network at any given time, so you can train models without using as much memory as you would if you passed all the data through the network at once. Sometimes batching can speed up training. But breaking your data into batches might also increase error in the model.

Separate your data into batches when your machine is unable to process all the data at once, or if you want to reduce training time.
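
As a rough illustration of the trade-off, this short Python sketch computes the iterations per epoch and the approximate input memory per batch for a hypothetical set of 10,000 RGB images at 128 × 128 pixels; all of the numbers are assumptions.

    import math

    n_images = 10_000
    bytes_per_image = 128 * 128 * 3 * 4  # 128 x 128 RGB pixels stored as float32

    for batch_size in (16, 64, 256, n_images):
        iterations = math.ceil(n_images / batch_size)        # iterations per epoch
        batch_mb = batch_size * bytes_per_image / 1024 ** 2  # input memory per batch
        print(f"batch_size={batch_size:>6}: {iterations:>4} iterations/epoch, "
              f"~{batch_mb:,.1f} MB of input per batch")

Smaller batches keep less data in memory at once but take more iterations to complete an epoch.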