Output Data Tool
Use the Output Data tool to write results of a workflow to supported file types or data sources.
Use other tools to write to other supported data sources. For a complete list of data sources supported in Alteryx, see Supported Data Sources and File Formats.
- Click the Output Data tool in the tool palette and drag it to the workflow canvas area.
- In the Configuration window, click the Write to File or Database drop-down arrow.
Alteryx displays the Data connections window. Configure your data connection using one of the following: Recent, Saved, Files, Data Sources, or Gallery.
Select a recent connection. The Recent list contains recently configured files and data connections.
Click Clear list to delete all of your recent connections.
Select a Saved connection. To rename and edit your connections, use Manage Data Connections.
Click Select file to connect to a dataset.
| File Type | Extension |
|---|---|
| Alteryx Database | .yxdb |
| Alteryx Spatial Zip | .sz |
| Avro | .avro |
| Comma Separated Values | .csv |
| dBase | .dbf |
| ESRI Personal GeoDatabase | .mdb |
| ESRI Shapefile | .shp |
| Flat ASCII | .flat |
| Geography Markup Language | .gml |
| Google Earth/Google Maps | .kml |
| HyperText Markup Language | .htm |
| IBM SPSS | .sav |
| JSON | .json |
| MapInfo Professional Interchange | .mif |
| MapInfo Professional Table | .tab |
| Microsoft Access 2000-2003 | .mdb |
| Microsoft Access 2007, 2010, 2013, 2016 | .accdb |
| Microsoft Excel | .xlsx |
| Microsoft Excel 1997-2003 | .xls |
| Microsoft Excel Macro Enabled | .xlsm |
| QlikView | .qvx |
| SAS | .sas7bdat |
| SQLite | .sqlite |
| SRC Geography | .geo |
| Tableau Data Extract | .tde |
| Tableau Hyper Data Extract | .hyper |
Data sources displays supported and frequently used data sources.
- Tools - If you select Quick connect for a tool you have not installed, a browser opens to the Alteryx Gallery so you can download and install that tool. Read the instructions on the page carefully. Once the tool is installed, the Output Data tool on the canvas changes to the tool you selected from the Data sources tab.
- Data sources
  - ODBC launches the ODBC connection window, which displays a filtered list of DSNs on the system that use that particular driver (see the sketch after this list for what connecting through a DSN looks like outside Designer).
  - OleDB launches the native Windows OleDB manager.
  - OCI launches the native Oracle OCI connection manager. From here, select the Net Service Name defined in your tnsnames.ora file that you want to use for this connection, as well as the user name and password credentials.
  - Bulk opens a dialog that lets you set up a bulk connection for the selected connection type.
  - Quick connect: For SQL or Oracle Quick connect, you can either use a pre-existing saved connection or create a new saved connection.
  - All other Quick connect options make their connections through another tool.
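For context, connecting through one of these ODBC DSNs can also be exercised outside Designer, for example from Python with pyodbc. This is a minimal sketch only; the DSN name, credentials, and query are placeholders, not values from this article.

```python
# Minimal sketch of connecting through an ODBC DSN with pyodbc.
# "MyWarehouseDSN" and the credentials are placeholders; use a DSN that
# appears in the ODBC connection window on your system.
import pyodbc

conn = pyodbc.connect("DSN=MyWarehouseDSN;UID=analyst;PWD=secret")
cursor = conn.cursor()
cursor.execute("SELECT 1")   # trivial test query; exact syntax varies by database
print(cursor.fetchone())
conn.close()
```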
Click Quick connect under HDFS to create a new Hadoop database connection.
Alteryx connects to a Hadoop Distributed File System and reads .csv and .avro files. All Hadoop distributions implementing the HDFS standard are supported.
HDFS can be read using httpfs (port 14000), webhdfs (port 50070), or Knox Gateway (port 8443). Consult your Hadoop administrator to determine which to use. If you have a Hadoop High Availability (HA) cluster, your Hadoop admin must explicitly enable httpfs.
MapR may not support webhdfs.
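The three options differ mainly in the endpoint they expose, not in the API: httpfs and webhdfs both serve the WebHDFS REST interface, and Knox proxies it over HTTPS. As a rough, hedged illustration (the host name below is a placeholder, and the Knox topology segment varies by deployment):

```python
# Sketch of the base URLs behind the three server configurations.
# "namenode.example.com" is a placeholder host name.
HOST = "namenode.example.com"

HTTPFS_URL = f"http://{HOST}:14000/webhdfs/v1"    # httpfs service
WEBHDFS_URL = f"http://{HOST}:50070/webhdfs/v1"   # webhdfs on the NameNode
# Knox fronts the same API over HTTPS; the "default" topology segment is an
# assumption and depends on how your gateway is configured.
KNOX_URL = f"https://{HOST}:8443/gateway/default/webhdfs/v1"
```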
In the HDFS Connection window:
- Select a server configuration: HTTPFS, WebHDFS, or Knox Gateway.
- Host: Specify the installed instance of the Hadoop server. The entry must be a URL or IP address.
- Port: Displays the default port number for httpfs (14000), webhdfs (50070), or Knox Gateway (8443), or enter a specific port number.
- URL: The URL defaults based on the Host. The URL can be modified.
- User Name: Depending on the cluster setup, specify the user name and password for access.
- httpfs: A user name is needed, but it can be anything.
- webhdfs: The user name is not needed.
- Knox Gateway: A user name and password are needed.
- Kerberos: Select a Kerberos authentication option for reading and writing to HDFS. The option you choose depends on how your IT admin configured the HDFS server:
- None: No authentication is used.
- Kerberos MIT: Alteryx uses the default MIT ticket to authenticate with the server. You must first acquire a valid ticket using the MIT Kerberos Ticket Manager.
- Kerberos SSPI: Alteryx uses Windows Kerberos keys for authentication, which are obtained when logging in to Windows with your Windows credentials. The User Name and Password fields are therefore not available.
- (Recommended) Click Test to test the connection.
- Click OK.
- Specify the path of the file (for example, path/to/file.csv), or browse to the file and select it.
- Select the Avro or CSV file format and click OK.
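To make the path format concrete, the sketch below fetches a .csv file from HDFS over the WebHDFS REST API, which is what the httpfs and webhdfs configurations expose. The host, port, user name, and file path are placeholders; Designer issues the equivalent requests for you once the tool is configured.

```python
# Sketch: read a CSV from HDFS via the WebHDFS REST API (op=OPEN).
# Host, port, user name, and path are placeholders for your cluster's values.
import io

import pandas as pd
import requests

base = "http://namenode.example.com:14000/webhdfs/v1"  # httpfs endpoint
path = "/path/to/file.csv"                             # HDFS path as entered in the tool
params = {"op": "OPEN", "user.name": "alteryx_user"}   # simple (non-Kerberos) auth

resp = requests.get(f"{base}{path}", params=params)    # redirects are followed automatically
resp.raise_for_status()
df = pd.read_csv(io.StringIO(resp.text))
print(df.head())
```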
Self-signed certificates are not supported in Alteryx. Use a trusted certificate when configuring Knox authentication.
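One way to see why a trusted certificate matters for Knox: any HTTPS client validating the gateway on port 8443 must be able to chain the presented certificate to a CA it trusts. A hedged sketch, with a placeholder gateway URL, credentials, and CA bundle path:

```python
# Sketch: call the Knox gateway while verifying its certificate against a
# trusted CA bundle. URL, credentials, and bundle path are placeholders; a
# self-signed certificate would fail this verification step.
import requests

resp = requests.get(
    "https://knox.example.com:8443/gateway/default/webhdfs/v1/?op=LISTSTATUS",
    auth=("knox_user", "knox_password"),        # Knox requires a user name and password
    verify="/etc/ssl/certs/corporate-ca.pem",   # trusted CA bundle, not verify=False
)
resp.raise_for_status()
print(resp.json())
```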
To connect to HDFS for in-database processing, use the Connect In-DB Tool.
You can also make a Generic connection or a 32-bit connection to databases.
Before you connect to a database, consider the following:
- Both ODBC and OleDB connection types support spatial connections. Alteryx auto-detects if a database supports spatial functionality and displays the required configurations.
- To connect to a database for in-database processing, see In-Database Overview.
Point to an option and click a saved or shared data connection to connect to it, or click Manage to view and edit connections.
All Connections: Displays a list of connections saved to your computer plus connections shared with you from a Gallery.
My Computer: Displays a list of connections saved to your computer.
Gallery: Displays a list of connections shared with you from a Gallery.
Add a Gallery: Opens the Gallery Login screen. Use your user name and password to log in. After logging in, return to Saved Data Connections and point to the Gallery in the list to view connections shared from the Gallery.
See Manage data connections for more on managing saved and shared data connections and troubleshooting.
| Vendor | Data Source |
|---|---|
| Amazon | Amazon Athena |
| | Amazon Aurora |
| | Amazon Redshift |
| | Amazon S3 |
| Apache | Cassandra |
| | Hadoop Distributed File System (HDFS) |
| | Hive |
| | Spark |
| Cloudera | Impala |
| | Hadoop Distributed File System (HDFS) |
| | Hive |
| Databricks | Databricks |
| ESRI | ESRI GeoDatabase |
| Exasolution | EXASOL |
| Google | Google BigQuery |
| | Google Sheets |
| Hortonworks | Hadoop Distributed File System (HDFS) |
| | Hive |
| IBM | IBM DB2 |
| | IBM Netezza |
| Marketo | Marketo |
| MapR | Hadoop Distributed File System (HDFS) |
| | Hive |
| Microsoft | Microsoft Analytics Platform System |
| | Microsoft Azure Data Lake Store |
| | Microsoft Azure SQL Data Warehouse |
| | Microsoft Azure SQL Database |
| | Microsoft Cognitive Services |
| | Microsoft Dynamics CRM |
| | Microsoft OneDrive |
| | Microsoft SharePoint |
| | Microsoft Power BI |
| | Microsoft SQL Server |
| MongoDB | MongoDB |
| MySQL | MySQL |
| Oracle | Oracle |
| Pivotal | Pivotal Greenplum |
| PostgreSQL | PostgreSQL |
| Salesforce | Salesforce |
| SAP | SAP Hana |
| Snowflake | Snowflake |
| Teradata | Teradata |
| Vertica | Vertica |
Gallery displays each Gallery that has been added on the local computer, along with its URL. Below each Gallery name is a list of the saved connections stored on that server that you have access to.
Click +Gallery to add another gallery.
- Select file format options. Options vary based on the file or database to which you connect. See File Format Options.
- (Optional) Select Take File/Table Name From Field to write a separate file for each value in a selected field (a conceptual sketch of this behavior follows these steps). Click the drop-down and select an option:
  - Append Suffix to File/Table Name: Appends the selected field name to the end of the name of the selected table.
  - Prepend Prefix to File/Table Name: Prepends the selected field name to the beginning of the name of the selected table.
  - Change File/Table Name: Changes the file name to the selected field name.
  - Change Entire File Path: Changes the file name to the name of the selected field. The name must be a complete file path. This option can overwrite an existing file if a file already exists in the full path directory.
- Click Field Containing File Name or Part of File Name and select a field.
- (Optional) Select Keep Field in Output.
- After you run the workflow, select the Output Data tool.
- In the Results window, locate the output file and click the file link to open it.
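As a conceptual illustration only (not Alteryx's implementation), the sketch below mimics Take File/Table Name From Field by writing one file per distinct value of a chosen field; the field, values, and output paths are made up.

```python
# Conceptual sketch of "Take File/Table Name From Field": write one output
# file per distinct value in a chosen field. Names and paths are made up.
import pandas as pd

df = pd.DataFrame({
    "Region": ["East", "West", "East", "South"],
    "Sales": [100, 250, 175, 90],
})

for region, group in df.groupby("Region"):
    # "Change File/Table Name" behavior: the field value becomes the file name.
    # Dropping the column mimics leaving "Keep Field in Output" cleared.
    group.drop(columns=["Region"]).to_csv(f"{region}.csv", index=False)
```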
You can convert the Output Data tool to an Input Data tool. You can undo this change if you have enough undo levels set in your User Settings.
To convert the Output Data tool to an Input Data tool:
- Right-click the Output Data tool in your workflow.
- Select Convert To Input Data.
- Configure the tool.
You can now use the Output Data tool as an Input Data tool.
To use classic mode:
- Click Options > User Settings > Edit User Settings.
- On the Defaults tab, select the checkbox Use classic mode for the Input/Output tool menu options.
- Click OK.
- Click on the canvas, or press F5 to refresh.
You can now use the Output Data tool in classic mode to select your files and data sources.