
Alteryx Tools

Tools are organized in the Tool Palette into tool categories referenced and explained below. For more information on the Tool Palette and how to organize the tools and categories within it, see the Components page.

Each tool within Alteryx has a specific function. Configured tools make up workflows. Click and drag a tool from the Tool Palette onto the workflow canvas to begin building a workflow. Connect tools by clicking a tool's output anchor and dragging a connection to the next tool's input anchor. See Building Workflows for more information.

To learn more about the capabilities of each individual tool, click a link below.

In/Out

Spatial

Address

Preparation

Interface Tools

Demographic Analysis

Join

Data Investigation

Behavior Analysis

Parse

Predictive

Calgary

Transform

Time Series

Developer

Reporting

Predictive Grouping

Laboratory

Documentation

Connectors

Unknown

Social Media

In-Database

Deprecated

 

Favorites

The Favorites category includes the most common tools used in workflow creation. You can add a tool to the Favorites by clicking on the gray star in the top right of the tool icon on the Tool Palette. A Favorite tool is indicated by a yellow star.

Browse: The Browse tool offers a complete view of the underlying data within the Alteryx workflow. Connect a Browse tool anywhere in the workflow stream to view the resulting data at that point.

Filter: The Filter tool queries the records in your data against specified criteria. The tool creates two outputs, True and False: True contains the records that meet the criteria, and False contains the records that do not.
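
As a rough illustration of the True/False split, here is a minimal Python sketch (the field name "Sales" and the threshold are illustrative, not part of the tool):

records = [{"Store": 1, "Sales": 120}, {"Store": 2, "Sales": 80}]
true_stream = [r for r in records if r["Sales"] > 100]    # records meeting the criterion
false_stream = [r for r in records if r["Sales"] <= 100]  # records that do not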

Formula: The Formula tool is a powerful processor of data and formulas. Use it to add fields to an input table, to create new data fields based on an expression or a data relationship, or to update an existing field on the same basis.

Input Data: The Input Data tool can be the starting point for any project in Alteryx. Every project must have an input and an output. The Input Data tool opens the source data to be used in the analysis and reads information from the following file formats: CSV, MDB, DBF, XLS, MID/MIF, SHP, TAB, GEO, SZ, YXDB, SDF, FLAT, OleDB, Oracle Spatial.

Join: The Join tool combines two inputs based on common fields between the two tables. Its function is similar to a SQL join, but it offers three outputs from the join: the joined records plus the unjoined records from the left and right inputs.
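
For comparison, a loose pandas sketch of the three-output idea (the column names and data are illustrative, and pandas is not what Alteryx uses internally):

import pandas as pd

left = pd.DataFrame({"CustomerID": [1, 2, 3], "Name": ["Ann", "Bob", "Cy"]})
right = pd.DataFrame({"CustomerID": [2, 3, 4], "Spend": [50, 75, 20]})

merged = left.merge(right, on="CustomerID", how="outer", indicator=True)
joined = merged[merged["_merge"] == "both"]            # the joined records
left_only = merged[merged["_merge"] == "left_only"]    # left records with no match
right_only = merged[merged["_merge"] == "right_only"]  # right records with no match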

Output Data: The Output Data tool is used whenever results of the analysis need to be written to a file. Every project must have an input and an output. The tool writes the results of the analysis to the same variety of formats supported by the Input Data tool.

Summarize: The Summarize tool can conduct a host of summary processes within the input table, including: grouping, summing, counting, spatial object processing, string concatenation, and much more.

Sample: The Sample tool extracts a specified portion of the records in the data stream.

Select: The Select tool is a multi-function utility for choosing which fields to carry downstream, renaming fields, reordering field positions in the file, changing field types, and loading or saving field configurations.

Sort: The Sort tool arranges the records in a table in alphanumeric order, based on the values of the specified data fields.

Text Comment: The Text Comment tool adds annotation to the project workspace. It is useful for jotting down notes or explaining processes to share or reference later.

Text Input: The Text Input tool makes it possible for the user to manually type text to create small data files for input. It is useful for creating Lookup tables on the fly, for example.

Union: The Union tool appends multiple data streams into one unified stream. The tool accepts multiple inputs based on either field name or record position, creating a stacked output table. The user then has complete control over how these fields stack or match up.


 

In/Out Tools

Each workflow must contain inputs and outputs. Both the Input Data and Output Data tools have different configuration properties depending on the file type. The Browse tool offers a temporary view of what the data looks like in table, map, or report format. Click each tool to find out more.

Browse: The Browse tool offers a complete view of the underlying data within the Alteryx workflow. Connect a Browse tool anywhere in the workflow stream to view the resulting data at that point.

Date Time Now: This macro returns a single record, the date and time at workflow runtime, and converts the value into the string format of the user's choosing.

Directory Tool: The Directory tool returns all the files in a specified directory. Along with file names, other pertinent information about each file is returned, including file size, creation date, last modified date, and much more.

Input Data: The Input Data tool can be the starting point for any project in Alteryx. Every project must have an input and an output. The Input Data tool opens the source data to be used in the analysis and reads information from the following file formats: CSV, MDB, DBF, XLS, MID/MIF, SHP, TAB, GEO, SZ, YXDB, SDF, FLAT, OleDB, Oracle Spatial.

Map Input: Manually draw or select map objects (points, lines, and polygons) to be stored in the workflow.

Output Data: The Output Data tool is used whenever results of the analysis need to be written to a file. Every project must have an input and an output. The tool writes the results of the analysis to the same variety of formats supported by the Input Data tool.

Text Input: The Text Input tool makes it possible for the user to manually type text to create small data files for input. It is useful for creating Lookup tables on the fly, for example.

XDF Input: This tool enables access to an XDF format file (the format used by Revolution R Enterprise's RevoScaleR system to scale predictive analytics to millions of records) for either: (1) using the XDF file as input to a predictive analytics tool or (2) reading the file into an Alteryx data stream for further data hygiene or blending activities.

XDF Output: This tool writes an Alteryx data stream into an XDF format file, the file format used by Revolution R Enterprise's RevoScaleR system to scale predictive analytics to millions of records. By default, the new XDF file is stored as a temporary file, with the option of writing it to disk as a permanent file that can be accessed in Alteryx using the XDF Input tool.

 


Preparation

The Preparation category includes tools that prepare data for downstream analysis.

Auto Field: The Auto Field tool reads through an input file and sets each field's type to the smallest possible size relative to the data contained within the column.

Date Filter: The Date Filter macro allows a user to easily filter data based on date criteria using a calendar-based interface.

Filter: The Filter tool queries the records in your data against specified criteria. The tool creates two outputs, True and False: True contains the records that meet the criteria, and False contains the records that do not.

Formula: The Formula tool is a powerful processor of data and formulas. Use it to add fields to an input table, to create new data fields based on an expression or a data relationship, or to update an existing field on the same basis.

Generate Rows: The Generate Rows tool creates new rows of data at the record level. This tool is useful for creating a sequence of numbers, transactions, or dates.
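
For instance, generating one row per day over a date range - the kind of sequence this tool produces - looks like this in a minimal Python sketch (the dates are illustrative):

from datetime import date, timedelta

start, end = date(2024, 1, 1), date(2024, 1, 31)
rows = []
current = start
while current <= end:             # the tool's condition expression
    rows.append(current)
    current += timedelta(days=1)  # the tool's loop expression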

Multi Field Formula: The Multi Field Formula tool makes it easy to execute a single function on multiple fields.

Multi Row Formula: The Multi-Row Formula tool takes the concept of the Formula tool a step further, allowing the user to reference other rows' data as part of the formula. This tool is useful for parsing complex data and creating running totals, averages, percentages, and other mathematical calculations.
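
A running total is the classic case: each row's result depends on the previous row's result. A minimal Python sketch of the idea (the field name and values are illustrative):

sales = [100, 250, 75, 300]
running = []
for value in sales:
    prior = running[-1] if running else 0  # the previous row's running total
    running.append(prior + value)
# running == [100, 350, 425, 725]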

Random n[%] of Records: This macro returns an expected number of records, resulting in a random sample of the incoming data stream.

Record ID: The Record ID tool assigns a unique identifier to each data record. The generated ID is a numeric value, or optionally a string with leading zeros prepended to the ID.

Sample: The Sample tool extracts a specified portion of the records in the data stream.

Select: The Select tool is a multi-function utility for choosing which fields to carry downstream, renaming fields, reordering field positions in the file, changing field types, and loading or saving field configurations.

Sort: The Sort tool arranges the records in a table in alphanumeric order, based on the values of the specified data fields.

Tile: The Tile tool assigns a value (tile) based on ranges in the data.

Unique: The Unique Tool distinguishes whether a data record is unique or a duplicate by grouping on one or more specified fields, then sorting on those fields. The first record in each group is sent to the Unique output stream while the remaining records are sent to the Duplicate output stream.

 


Join

The Join category includes tools that join two or more streams of data by appending data to wide or long schemas.

Append Fields: The Append Fields tool appends the fields of one small input (Source) to every record of another, larger input (Target). The result is a Cartesian join, where every Source record is combined with every Target record.

Find and Replace: The Find and Replace tool searches for data in one field from the input table and replaces it with a specified field from a different data table.

Fuzzy Match: The Fuzzy Match tool can be used to identify non-identical duplicates within a database by specifying parameters to match on. Values need not be exact to find a match; they just need to fall within the user-specified or prefabricated parameters set forth in the configuration properties.
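
As a toy illustration of the concept - not the tool's actual match styles or scoring - two values can be treated as a match when a similarity ratio clears a threshold:

from difflib import SequenceMatcher

def is_fuzzy_match(a: str, b: str, threshold: float = 0.8) -> bool:
    # similarity ratio in [0, 1]; the 0.8 threshold is illustrative
    return SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold

print(is_fuzzy_match("Jonathan Smith", "Jonathon Smith"))  # True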

Join: The Join tool combines two inputs based on common fields between the two tables. Its function is similar to a SQL join, but it offers three outputs from the join: the joined records plus the unjoined records from the left and right inputs.

Join Multiple: The Join Multiple tool combines two or more inputs based on commonalities between the input tables. Only the joined records are output by the tool, resulting in a wide (columned) file.

Make Group: The Make Group tool takes data relationships and assembles the data into groups based on those relationships.

Union: The Union tool appends multiple data streams into one unified stream. The tool accepts multiple inputs based on either field name or record position, creating a stacked output table. The user then has complete control over how these fields stack or match up.


Parse

The Parse tools separate data values into a standard table schema.

Date Time: The Date Time tool standardizes and formats date/time data so that it can be used in expressions and functions from the Formula or Filter tools.

Regular Expression: The Regular Expression tool is a robust data parser. There are four types of output methods that determine the type of parsing the tool will do. These methods are explained in the Configuration Properties.
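
For a flavor of regex-based parsing - shown here in Python rather than in the tool itself, with an illustrative pattern - named groups split one value into several fields:

import re

phone = "(303) 555-0100"
m = re.match(r"\((?P<area>\d{3})\) (?P<prefix>\d{3})-(?P<line>\d{4})", phone)
fields = m.groupdict() if m else {}
# {'area': '303', 'prefix': '555', 'line': '0100'}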

XML Parse: The XML Parse tool reads a chunk of Extensible Markup Language (XML) and parses it into individual fields.

Text to Columns: The Text to Columns tool takes the text in one column and splits the string value into separate, multiple fields based on one or more delimiters.

 


Transform

Tools that summarize data are in the Transform category.

Running Total: The Running Total tool calculates a cumulative sum, per record, in a file.

Count Records: This Macro returns a count of how many records are going through the tool.

Arrange Tool: The Arrange tool allows you to manually transpose and rearrange your data fields for presentation purposes. Data is transformed so that each record is turned into multiple records, and columns can be created using field description data or created manually.

Transpose: The Transpose tool pivots the orientation of the data table. It transforms the data so you may view horizontal data fields on a vertical axis.

Cross Tab: The Cross Tab tool pivots the orientation of the data table. It transforms the data so vertical data fields can be viewed on a horizontal axis, summarizing data where specified.

Weighted Average: This Macro will calculate the weighted average of an incoming data field. A weighted average is similar to a common average, but instead of all records contributing equally to the average, the concept of weight means some records contribute more than others.
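
In formula terms, for values x_i with weights w_i:

\bar{x}_w = \frac{\sum_i w_i x_i}{\sum_i w_i}, \qquad \text{e.g.}\ \frac{1(10) + 3(20)}{1 + 3} = 17.5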

Summarize: The Summarize tool can conduct a host of Summary Processes, including: grouping, summing, count, spatial object processing, string concatenation, and much more.


 

In-Database Tools

The In-Database tool category consists of tools that function like many of the Favorites. This category includes tools for connecting to a database and blending and viewing data, as well as tools for bringing other data into an In-Database workflow and writing data directly to a database.

 

 

Browse In-DB: Review your data at any point in an In-DB workflow. Note: Each Browse In-DB triggers a database query and can impact performance.

Connect In-DB: Establish a database connection for an In-DB workflow.

Data Stream In: Bring data from a standard workflow into an In-DB workflow.

Data Stream Out: Stream data from an In-DB workflow to a standard workflow, with an option to sort the records.

Filter In-DB: Filter In-DB records with a Basic filter or with a Custom expression using the database's native language (e.g., SQL).

Formula In-DB: Create or update fields in an In-DB data stream with an expression using the database's native language (e.g., SQL).

Join In-DB: Combine two In-DB data streams based on common fields by performing an inner or outer join.

Sample In-DB: Limit the In-DB data stream to a number or percentage of records.

Select In-DB: Select, deselect, reorder, and rename fields in an In-DB workflow.

Summarize In-DB: Summarize In-DB data by grouping, summing, counting, counting distinct fields, and more. The output contains only the result of the calculation(s).

Union In-DB: Combine two or more In-DB data streams with similar structures based on field names or positions. In the output, each column contains the data from each input.

Write In-DB: Use an In-DB data stream to create or update a table directly in the database.
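
To make the idea concrete: the In-DB tools compose a query that runs inside the database rather than pulling records into Alteryx. A loose Python sketch of what a Filter In-DB contributes (the table, field, and query text are illustrative, not what Alteryx generates verbatim):

base_query = "SELECT * FROM sales"      # established by Connect In-DB
custom_expression = "region = 'West'"   # Custom expression given to Filter In-DB
filtered_query = f"{base_query} WHERE {custom_expression}"
# the database, not Alteryx, evaluates filtered_query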

Reporting

The Reporting category includes tools that aid in data presentation and organization.

Charting: The Charting tool allows the user to display data in various chart types.

E-mail: The E-mail tool allows you to send e-mails to recipients using values from input fields, instead of having to use a batch e-mail process. It automatically detects the SMTP address, and allows attachments or even e-mailing generated reports.

Image: The Image Tool allows the user to add graphics to reports.

Layout: The Layout tool enables the user to arrange Reporting Snippets.

Map: The Map Tool enables the user to create a map image from the Alteryx GUI. The tool accepts multiple spatial inputs, allows for layering these inputs, and supports thematic map creation. Other cartographic features can be included such as a legend, scale and reference layers.

Render: The Render tool transforms report Snippets into presentation-quality reports in PDF, HTML, XLSX, DOCX, RTF, and Portfolio Composer (*.pcxml) formats.

Table: The Table tool allows the user to create basic data tables and pivot tables from their input data.

Text: The Text tool allows the user to add text to reports and documents.

Legend Builder: This macro takes the component parts output from the Legend Splitter macro and builds them back into a legend table. If you add a Legend Builder tool immediately after a Legend Splitter tool, the resulting legend will be the same as the legend originally output from the Map tool. The purpose of the two macros is that you can change the data between them, thereby creating a custom legend.

Legend Splitter: This macro takes a legend from the Map tool and splits it into its component parts. Once split, the legend can be customized using other tools. Use the Legend Builder macro to easily rebuild the legend afterward.

Report Footer: This macro allows a user to easily set up and add a footer to a report.

Report Header: This macro allows a user to easily set up and add a header to a report.


Documentation

Documentation tools improve workflow presentation, annotation, and tool organization.

Explorer Box: The Explorer Box tool displays a web page or file location of the user's specification.

Text Comment: The Text Comment tool adds annotation to the project workspace. It is useful for jotting down notes or explaining processes to share or reference later.

Tool Container: The Tool Container allows the user to organize tools in a workflow. Tools can be placed inside the container to isolate a process. The container can then be collapsed, expanded or disabled.


 

Spatial

The tools contained within the Spatial category offer a large array of spatial data manipulation, processing, and object-editing capabilities. Click each tool to find out more.

Buffer: The Buffer tool takes any polygon or polyline spatial object and expands or contracts its extents by the user-specified value.

Create Points: The Create Points tool creates a point-type spatial object by specifying input fields containing the X coordinate (Longitude) and the Y coordinate (Latitude).

Distance: The Distance tool calculates the ellipsoidal direct point-to-point distance, point-to-edge distance, or drive distance between two sets of spatial objects.

Find Nearest: The Find Nearest tool identifies the shortest distance between points or polygons in one file and the points, polygons, or lines in a second file.

Generalize: The Generalize tool decreases the number of nodes that make up a polygon or polyline, producing a simpler rendition of the original spatial object.

Make Grid: The Make Grid tool takes a spatial object and creates a grid. The resulting grid is either a single grid bound to the extent of the input spatial objects, or individual grids that dissect each input polygon.

Non Overlapping Drivetime: This macro creates drivetime trade areas that do not overlap for a point file. It requires a licensed installation of Alteryx Drivetime to run successfully.

Poly Build: The PolyBuild tool takes a group of spatial point objects and draws a polygon or polyline in a specific order to represent that group of points.

Poly Split: The PolySplit tool takes polygon or polyline objects and splits them into their component point, line, or region objects.

Smooth: The Smooth tool takes a polygon or polyline object and adds nodes to smooth sharp angles into curves along the lines that make up the object.

Spatial Info: The Spatial Info tool extracts tabular information about the spatial object. Attributes such as area, spatial object type, number of parts, number of points, and centroid latitude/longitude coordinates can be appended.

Spatial Match: The Spatial Match tool establishes the spatial relationship (contains, intersects, touches, etc.) between two sets of spatial objects. The tool accepts a set of spatial objects from the Left Input (Targets) and a set of spatial objects from the Right Input (Universe). At least one input stream should contain polygon-type spatial objects.

Spatial Process: The Spatial Process tool performs high-level spatial object editing from a simple, single tool. You can combine multiple objects or cut the spatial objects of the input table.

Trade Area: The Trade Area tool creates regions around specified point objects in the input file. Trade areas are created in one of two ways: either by defining a radius around a point, or by a drivetime. Drivetime trade area creation is only an option if a licensed installation of Alteryx Drivetime is detected.


Interface Tools

Interface tools are used to author apps and macros. These tools make it easy to design user interface elements and update workflow tools at runtime based on user specifications.

Action: Update values of development tools with the values from the interface questions at runtime.

Check Box: Display a check box option in an app.

Condition: Test for the presence of user selections. The state is either true or false.

Control Parameter: Control Parameter input for a Batch Macro.

Date: Display a calendar in an app.

Drop Down: Display a single selection list in an app.

Error Message: Throw an Error message.

File Browse: Display a File Browse control in an app. This tool can be used to read an input or write an output.

Folder Browse: Display a Folder Browse control in an app. This Interface tool is not supported for apps run in the Alteryx Analytics Gallery.

List Box: Display a multi-selection check box list in an app.

Macro Input: Input for a Macro.

Macro Output: Output of a Macro.

Map: Display an interactive map for the user to draw or select map objects in an app.

Numeric Up Down: Display a numeric control in an app.

Radio Button: Display a mutually exclusive option in an app.

Text Box: Display a free form text box in an app.

Tree: Display an organized, hierarchical data structure in an app.


 

Data Investigation

The Predictive Analytics tools are macros that use the R tool. For these tools to function properly, you must have R installed, as well as the packages used by the R tool. Go to Help > Install Predictive Tools to launch the Alteryx R installer, which installs the R program and the predictive tools that use R.

Association Analysis: This tool allows a user to determine which fields in a database have a bivariate association with one another.

Contingency Table: Create a contingency table based on selected fields, to list all combinations of the field values with frequency and percent columns.

Create Samples: This tool allocates each record to one of two or three random samples within the data and creates a new field indicating the assignment.

Distribution Analysis: The Distribution Analysis macro allows you to fit one or more distributions to the input data and compare them based on a number of goodness-of-fit statistics. Based on the statistical significance (p-values) of the results of these tests, the user can determine which distribution best represents the data.

Field Summary Report: This tool provides a summary report of descriptive statistics for the selected data fields, giving the user a concise overview and a greater understanding of the data being analyzed. Also provided are "Remarks," which suggest best practices for managing each particular data field.

Frequency Table: Produce a frequency analysis for selected fields. The output includes a summary of the selected field(s) with frequency counts and percentages for each value in a field.

Heat Plot: Uses a heat plot color map to show the joint distribution of two variables that are either continuous numeric variables or ordered categories.

Histogram: Provides a histogram plot for a numeric field. Optionally, it provides a smoothed empirical density plot. Frequencies are displayed when a density plot is not selected, and probabilities when this option is selected. The number of breaks can be set by the user, or determined automatically using the method of Sturges.
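
Sturges' method picks the number of breaks k from the record count n:

k = \lceil \log_2 n \rceil + 1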

Oversample Field: This tool samples the incoming data so that data values are equally represented, allowing the data to be used effectively in a predictive model.

Pearson Correlation: The Pearson Correlation tool measures the linear dependence between two variables as well as the covariance. This tool replaces the now deprecated Pearson Correlation Coefficient macro.

Plot of Means: The Plot of Means tool takes a numeric or binary categorical field (with the binary categorical field converted into a set of zero and one values) as a response field along with a categorical field and plots the mean of the response field for each of the categories (levels) of the categorical field.

Scatterplot: This tool makes enhanced scatterplots, with options to include boxplots in the margins, a linear regression line, a smooth curve via non-parametric regression, a smoothed conditional spread, and outlier identification.

Spearman Rank Correlation Coefficient: Spearman's rank correlation coefficient assesses how well an arbitrary monotonic function could describe the relationship between two variables, without making any other assumptions about the particular nature of the relationship between the variables.
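
For ranks without ties, the coefficient reduces to the familiar form, where d_i is the difference between the two ranks of observation i and n is the number of observations:

\rho = 1 - \frac{6 \sum_i d_i^2}{n(n^2 - 1)}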

Violin Plot: A violin plot shows the distribution of a single numeric variable, and conveys the density of the distribution. In addition to concisely showing the nature of the distribution of a numeric variable, violin plots are an excellent way of visualizing the relationship between a numeric and categorical variable by creating a separate violin plot for each value of the categorical variable.


Predictive

AB Analysis: Determine which group is the best fit for AB testing.

AB Controls: Match one to ten control units (e.g., stores, customers) to each member of a set of previously selected test units, on the basis of seasonal patterns and growth trends for a key performance indicator, along with other user-specified criteria.

AB Treatment: Determine which group is the best fit for AB testing.

AB Trend: Create measures of trend and seasonal patterns that can be used to help match treatment to control units (e.g., stores or customers) for A/B testing. The trend measure is based on period-to-period percentage changes in the rolling average (taken over a one-year period) of a performance measure of interest. The same measure is used to assess seasonal effects; in particular, the percentage of the total level of the measure in each reporting period is used to assess seasonal patterns.

Boosted Model: This tool provides generalized boosted regression models based on the gradient boosting methods of Friedman. It works by serially adding simple decision tree models to a model ensemble so as to minimize an appropriate loss function.

Count Regression: Estimate regression models for count data (e.g., the number of store visits a customer makes in a year), using Poisson regression, quasi-Poisson regression, or negative binomial regression. The R functions used to accomplish this are glm() (from the R stats package) and glm.nb() (from the MASS package).

Decision Tree: A decision tree learning model is a class of statistical methods that predict a target variable using one or more variables that are expected to have an influence on the target variable, and are often called predictor variables.

Forest Model: A forest learning model is a class of machine learning methods that predict a target variable using one or more variables that are expected to have an influence on the target variable, and are often called predictor variables.

Gamma Regression: Relate a Gamma distributed, strictly positive variable of interest (target variable) to one or more variables (predictor variables) that are expected to have an influence on the target variable.

Lift Chart: This tool produces two commonly used charts for assessing a predictive model's performance: the cumulative captured response chart (also called a gains chart) and the incremental response rate chart.

Linear Regression: A linear regression (also called a linear model or a least-squares regression) is a statistical method that relates a variable of interest (a target variable) to one or more variables that are expected to have an influence on the target variable, and are often called predictor variables.

Logistic Regression: A logistic regression model is a class of statistical methods that relates a binary (e.g., yes/no) variable of interest (a target variable) to one or more variables that are expected to have an influence on the target variable, and are often called predictor variables.

MB Rules: Step 1 of a Market Basket Analysis: Take transaction data and create either a set of association rules or frequent itemsets. A summary report of both the transaction data and the rules/itemsets is produced, along with a model object that can be further investigated in an MB Inspect tool.

MB Inspect: Step 2 of a Market Basket Analysis: Take the output of the MB Rules tool, and provide a listing and analysis of those rules that can be filtered on several criteria in order to reduce the number of returned rules or itemsets to a manageable number.

Neural Network: This tool allows a user to create a feedforward perceptron neural network model with a single hidden layer.

Naive Bayes: The Naive Bayes Classifier tool creates a binomial or multinomial probabilistic classification model of the relationship between a set of predictor variables and a categorical target variable.

Nested Test: A nested hypothesis test is used to examine whether two models, one of which contains a subset of the variables contained in the other, are statistically equivalent in terms of their predictive capability.

Score: The Score macro takes two inputs: an R model object produced by the Logistic Regression, Decision Tree, Forest Model, or Linear Regression macro, and a data stream that is consistent with the model object (in terms of field names and field types). It outputs the data stream with one "Score" (fitted value) field appended for a model with a continuous target, or two or more for a model with a categorical target.

Spline Model: Predict a variable of interest (target variable) based on one or more predictor variables using the two-step approach of Friedman's multivariate adaptive regression splines (MARS) algorithm. Step 1 selects the most relevant variables for predicting the target variable and creates a piecewise linear function to approximate the relationship between the target and predictor variables. Step 2 smooths out the piecewise function, which minimizes the chance of overfitting the model to the estimation data. The Spline model is useful for a multitude of classification and regression problems and can automatically select the most appropriate model with minimal input from the user.

Stepwise: The Alteryx R-based stepwise regression tool makes use of both backward variable selection and mixed backward and forward variable selection.

Support Vector Machine: Support Vector Machines (SVM), or Support Vector Networks (SVN), are popular supervised learning algorithms used for classification problems, and are meant to accommodate instances where the data (i.e., observations) are considered linearly non-separable.

Test of Means: Compares the difference in mean values (using a Welch two sample t-test) for a numeric response field between a control group and one or more treatment groups.


Time Series

TS ARIMA: This tool estimates a univariate time series forecasting model using an autoregressive integrated moving average (or ARIMA) method.

TS Compare: This macro compares one or more univariate time series models created with either the ETS or ARIMA macros.

Time Series Filler: The Time Series Filler macro allows a user to take a data stream of time series data and "fill in" any gaps in the series.

TS Covariate Forecast: The TS Covariate Forecast tool provides forecasts from an ARIMA model estimated using covariates for a user-specified number of future periods. In addition, upper and lower confidence interval bounds are provided for two different (user-specified) percentage confidence levels. For each confidence level, the expected probability that the true value will fall within the provided bounds corresponds to the confidence level percentage. In addition to the model, the covariate values for the forecast horizon must also be provided.

TS ETS: This tool estimates a univariate time series forecasting model using an exponential smoothing method.

TS Forecast: The TS Forecast tool provides forecasts from either an ARIMA or ETS model for a user-specified number of future periods.

TS Plot: This tool provides a number of different univariate time series plots that are useful in both better understanding the time series data and determining how to proceed in developing a forecasting model.


Predictive Grouping

Append Cluster: The Append Cluster tool appends the cluster assignments from a K-Centroids Cluster Analysis tool to a data stream.

Find Nearest Neighbors: Find the selected number of nearest neighbors in the "data" stream that correspond to each record in the "query" stream, based on their Euclidean distance.

K-Centroids Cluster Analysis: K-Centroids represent a class of algorithms for doing what is known as partitioning cluster analysis. These methods work by taking the records in a database and dividing (partitioning) them into the “best” K groups based on some criteria.
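
A bare-bones K-Means pass - the simplest member of the family - alternates two steps: assign each point to its nearest centroid, then move each centroid to the mean of its assigned points. A minimal Python sketch on 1-D data (the data, K, and iteration count are illustrative):

import random

data = [1.0, 1.2, 0.8, 5.0, 5.3, 4.9]
centroids = random.sample(data, 2)  # K = 2 starting centroids
for _ in range(10):                 # fixed iteration count for brevity
    clusters = {i: [] for i in range(len(centroids))}
    for x in data:
        nearest = min(range(len(centroids)), key=lambda i: abs(x - centroids[i]))
        clusters[nearest].append(x)
    centroids = [sum(pts) / len(pts) if pts else centroids[i]
                 for i, pts in clusters.items()]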

K-Centroids Diagnostics: The K-Centroids Diagnostics tool is designed to allow the user to make an assessment of the appropriate number of clusters to specify given the data and the selected clustering algorithm (K-Means, K-Medians, or Neural Gas). The tool is graphical, and is based on calculating two different statistics over bootstrap replicate samples of the original data for a range of clustering solutions that differ in the number of clusters specified.

Principal Components: This tool allows the dimensions (the number of numeric fields) in a database to be reduced. It does this by transforming the original set of fields into a smaller set that accounts for most of the variance (i.e., information) in the data. The new fields are called factors, or principal components.


Connectors

Tools in the Connectors category are used to retrieve data or push data to the cloud or internet/intranet environment.

Amazon S3 Download: The Amazon S3 Download tool retrieves data stored in the cloud where it is hosted by Amazon Simple Storage Service.

Amazon S3 Upload: The Amazon S3 Upload tool transfers data from Alteryx to the cloud where it is hosted by Amazon Simple Storage Service.

Download Tool: The Download tool will retrieve data from a specified URL to be used in downstream processing or to be saved to a file.
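
At its core the operation is an HTTP request; a minimal Python sketch using only the standard library (the URL is illustrative):

from urllib.request import urlopen

with urlopen("https://example.com/data.csv") as response:
    payload = response.read()  # raw bytes for downstream use or saving to a file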

HDFS Input: The HDFS Input tool reads data from a Hadoop Distributed File System. It is able to retrieve *.csv and *.avro files.

HDFS Output: The HDFS Output tool writes data to a Hadoop Distributed File System.

Google Analytics: The Google Analytics ("GA") macro returns statistics derived from the data collected by a Google Analytics tracking code. You use the Core Reporting API to query dimensions and metrics in order to build customized reports.

Marketo Input: The Marketo Input Tool reads Marketo records for a specified date range.

Marketo Output: The Marketo Output tool writes data back to Marketo using an 'Upsert' operation.

Marketo Append: The Marketo Append tool retrieves Marketo records and appends them to the records of an incoming data stream.

Salesforce Input: The Salesforce Input tool allows you to read and query tables from Salesforce.com into Alteryx.

Salesforce Output: The Salesforce Output tool allows you to write to Salesforce.com tables from Alteryx.

SharePoint List Input: The SharePoint List Input tool reads lists from SharePoint to be used as a data input in a workflow.

SharePoint List Output: The SharePoint List Output tool writes the content of a data stream to a SharePoint list.

 


Address

The tools contained within the Address category include the ability to standardize mailing lists and geocode to the 9-digit ZIP Code level. These tools require a special license and are US Data-specific. Click each tool to find out more.

CASS: The CASS tool takes the input address file and checks it against the USPS Coding Accuracy Support System.

Parse Address: The Parse Address tool breaks down a street address into its component parts, such as street number, directional (S, NW, and so on), street name, and suffix (ST, RD, BLVD).

US Geocoder: This macro uses multiple methods to geocode a customer file. It requires licensed installations of the Alteryx Geocoder, CASS, and the ZIP + 4 Coder to run successfully.

Street Geocode: Geocoding associates geographic coordinates with input addresses, letting you pinpoint locations and carry out geography-based analyses.

US ZIP + 4 Coder: The ZIP + 4 Coder associates geographic coordinates with input ZIP9 (also known as ZIP + 4) codes in an address file, enabling the user to carry out geography-based analyses.


 

Demographic Analysis

The tools contained within the Demographic Analysis category offer the ability to extract data utilizing the Allocate Engine within Alteryx. You must have a license for an installed Allocate dataset to use these tools.

Allocate Input: The Allocate Input tool allows the user to pick geographies and data variables from any Allocate dataset installed on the user's system.

Allocate Append Data: The Allocate Append Data tool lets you append demographic fields from an existing Allocate installation.

Allocate MetaInfo Tool: The Allocate MetaInfo tool returns pertinent information about installed Allocate datasets.

Allocate Report: The Allocate Report tool allows the user to retrieve and run any pre-formatted or custom report associated with Allocate.


 

Behavior Analysis

The tools within the Behavior Analysis category offer the ability to extract data utilizing the Solocast Engine within Alteryx. In addition to the tools in this category, users can leverage the information generated by the Behavior Analysis tools using the Summarize and Browse Data tools.

Read Behavior Profile Set: The Read Behavior Profile Set tool allows you to select a specific type of dataset known as a Profile Set to use as an input in your workflow. Profile Sets are composed of Profiles. A Profile is an object - whether a geography, a customer database, or a product within a syndicated product file - that has been assigned segmentation codes. Segmentation codes are assigned based on the Block Group assignment of the object.

Compare Behavior: The Compare Behavior tool analyzes two Profile Sets, comparing one against the other. Think of it as building a sentence: "Analyze 'this/these' using 'this/these'."

Behavior Detail Fields: The Behavior Detail Fields tool returns detailed field information at the Cluster or Group level specific to the Profile.

Behavior MetaInfo: The Behavior MetaInfo tool returns pertinent information about installed Behavior Analysis data sets.

Cluster Code: The Cluster Code tool will append a Cluster Code field to a stream of records using a Cluster Level ID, such as a Block Group Key.

Create Behavior Profile: The Create Behavior Profile tool takes an incoming data stream and constructs a Behavior Profile from its contents. A Profile can be built via different modes including: Spatial Object, Known Geography Key, Combine Profiles, Cluster Code, and Cluster Level ID.

Write Behavior Profile Set: The Write Behavior Profile Set tool takes an incoming data stream containing a Profile or collection of Profiles and writes out a Profile Set (*.scd) file.

Profile Detail Report: This reporting macro accepts a Profile input and generates a detailed report.

Profile Comparison Report: This reporting macro accepts two Profile inputs and generates a comparison report.

Profile Rank Report: The Profile Rank Report macro takes two Profile inputs (a Geography and a Product Profile) and generates a rank report.

 


Calgary

Calgary is a list count data retrieval engine designed to perform analyses on large-scale databases containing millions of records.

Calgary Loader: The Calgary Loader tool enables users to create a Calgary database (*.cydb) from any type of input file. Each field contained in the input file can be indexed to maximize Calgary database performance.

Calgary Input: The Calgary Input tool enables users to query a Calgary database.

Calgary Join: The Calgary Join tool provides users with the ability to take an input file and perform joins against a Calgary database, where an input record matches a Calgary database record based on specific join criteria.

Calgary Cross Count: The Calgary Cross Count tool enables users to aggregate data across multiple Calgary database fields to return a count per record group.

Calgary Cross Count Append: The Calgary Cross Count Append tool provides users with the ability to take an input file and append counts to records that join to a Calgary database, where an input record matches a Calgary database record based on specific join criteria.


 

Developer

The Developer category includes specialized tools specific to Macro and Analytic App creation as well as running external programs.

API Output: This tool has no configuration. See the API help for more information.

Blob Input: The Blob Input tool reads a Binary Large Object (Blob), such as an image or media file, by browsing directly to a file or passing in a list of files to read.

Blob Output: The Blob Output tool writes out each record into its own file.

Blob Convert: The Blob Convert tool will take different data types and either convert them to a Binary Large Object (Blob) or take a Blob and convert it to a different data type.

Block Until Done: The Block Until Done tool stops downstream processing until all records come through. This tool makes it possible to overwrite an input file.

Detour: The Detour tool is useful in constructing Analytic App or macro workflows, where the developer can prompt a user to bypass a process in a workflow.

Detour End: The Detour End tool will unify the data processes from a resulting Detour upstream into a single stream for further analysis in Analytic App and Macro workflows.

Dynamic Input: The Dynamic Input tool allows the user to read from an input database at runtime and dynamically choose which records to read in. Alteryx does not read in the entire database table; instead, it filters the data, returns only records matching the user-specified criteria, and joins them to the data coming into the tool.

Dynamic Rename: The Dynamic Rename tool allows the user to quickly rename any or all fields within an input stream using any of several methods. Additionally, dynamic or unknown fields can be renamed at runtime.

Dynamic Replace: The Dynamic Replace tool allows the user to quickly replace data values across a series of fields. Say you have a hundred different income fields and, instead of the actual value in each field, you want to represent the number with a code of A, B, C, D, etc. that represents a range; the Dynamic Replace tool can easily perform this task.

Dynamic Select: The Dynamic Select tool allows fields to be selected either by field type or via a formula. Additionally, dynamic or unknown fields will also be selected by field type or via formula at runtime.

Field Info: The Field Info tool allows the user to see, in tabular form, the names of the fields within a data stream as well as the field order, type, and size.

JSON Build: The JSON Build tool takes the table schema of the JSON Parse tool and builds it back into properly formatted JavaScript Object Notation (JSON).

JSON Parse: The JSON Parse tool separates JavaScript Object Notation (JSON) text into a table schema for the purpose of downstream processing.
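
A loose Python sketch of the round trip - flattening JSON into name/value rows and rebuilding it - under the assumption of dot-separated names for nested keys (the document is illustrative):

import json

def flatten(obj, prefix=""):
    # walk nested objects, emitting (name, value) rows
    rows = []
    for key, value in obj.items():
        name = f"{prefix}.{key}" if prefix else key
        if isinstance(value, dict):
            rows.extend(flatten(value, name))
        else:
            rows.append((name, value))
    return rows

doc = json.loads('{"customer": {"id": 7, "name": "Ann"}}')
print(flatten(doc))  # [('customer.id', 7), ('customer.name', 'Ann')]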

Message Tool: The Message tool allows the user to report messages about the process to the Output Window.

Run Command: The Run Command tool allows the user to run external command programs within Alteryx. This tool can be used as an Input, Output or as a pass through, intermediary tool.

R: The R tool is a code editor for users of R, an open-source language used for statistical and predictive analysis.

Test: The Test tool is useful for testing assumptions about your data or processes.


Social Media Tools

DataSift Connector: The DataSift Connector Macro allows you to interact with DataSift's data platform to aggregate, filter, and extract insights from the billions of public conversations taking place on the world's leading social networks.

Foursquare Search: Search Foursquare Venues by location, with an option to filter by a search term.

Gnip Search: Access and search your Gnip API stream in real time.

Twitter Search: Search tweets from the last 7 days by given search terms, with location and user relationship as optional properties.


 

Laboratory Tools

The Laboratory tool category contains new tools that have documented, known issues. They have been tested for stability and will be optimized in subsequent program updates. Your feedback when using these tools is welcome; submit feedback to the Alteryx Community.

JSON Build: The JSON Build tool takes the table schema of the JSON Parse tool and builds it back into properly formatted JavaScript Object Notation (JSON).

Make Columns: The Make Columns tool takes rows of data and arranges them by wrapping records into multiple columns. The user can specify how many columns to create and whether records should lay out horizontally or vertically.

 


Unknown Tool

Generic Tool: The Generic, or Unknown, tool is not visible in the toolbox because it is not a tool you would normally use. Instead, it is a tool Alteryx uses to capture as much information as possible about a tool it does not recognize, to help you continue with the creation of your workflow. You will see the Generic tool on the workflow canvas when Alteryx is looking for a macro that it cannot find.


Deprecated Tools

As improvements to the Alteryx program are implemented, some tools become obsolete. These tools are assigned to a tool category called Deprecated Tools. Workflows created with these tools in previous versions will still function. Alteryx recommends updating these workflows to use the replacement tools, as no resources will be allocated to supporting the older tools.

Teradata Bulk Loader: The Teradata Bulk Loader was deprecated in version 8.5. This functionality was added to the Output Tool.

The following tools are no longer in the Alteryx program:

Map Server

Alteryx Reporting (Composer Tool)

Chart

