Output Data Tool
The Output Data tool writes the results of a workflow to a file or database.
Use the Output Data tool to write results to the following supported data sources:
| File Format | Extension |
| --- | --- |
| Alteryx Database | .yxdb |
| Alteryx Spatial Zip | .sz |
| Avro | .avro |
| Comma Separated Values | .csv |
| dBase | .dbf |
| ESRI Personal GeoDatabase | .mdb |
| ESRI Shapefile | .shp |
| Flat ASCII | .flat |
| Geography Markup Language | .gml |
| Google Earth/Google Maps | .kml |
| HyperText Markup Language | .htm |
| IBM SPSS | .sav |
| JSON | .json |
| MapInfo Professional Interchange | .mif |
| MapInfo Professional Table | .tab |
| Microsoft Access 2000-2003 | .mdb |
| Microsoft Access 2007, 2010, 2013, 2016 | .accdb |
| Microsoft Excel Binary | .xlsb |
| Microsoft Excel | .xlsx |
| Microsoft Excel 1997-2003 | .xls |
| Microsoft Excel Macro Enabled | .xlsm |
| QlikView | .qvx |
| SAS | .sas7bdat |
| SQLite | .sqlite |
| SRC Geography | .geo |
| Tableau Data Extract | .tde |
| Tableau Hyper Data Extract | .hyper |
| Vendor | Data Source |
| --- | --- |
| Amazon | Amazon Athena |
| Amazon | Amazon Aurora |
| Amazon | Amazon Redshift |
| Amazon | Amazon S3 |
| Apache | Cassandra |
| Apache | Hadoop Distributed File System (HDFS) |
| Apache | Hive |
| Apache | Spark |
| Cloudera | Impala |
| Cloudera | Hadoop Distributed File System (HDFS) |
| Cloudera | Hive |
| Databricks | Databricks |
| ESRI | ESRI GeoDatabase |
| Exasolution | EXASOL |
| Google | Google BigQuery |
| Google | Google Sheets |
| Hortonworks | Hadoop Distributed File System (HDFS) |
| Hortonworks | Hive |
| IBM | IBM DB2 |
| IBM | IBM Netezza |
| Marketo | Marketo |
| MapR | Hadoop Distributed File System (HDFS) |
| MapR | Hive |
| Microsoft | Microsoft Analytics Platform System |
| Microsoft | Microsoft Azure Data Lake Store |
| Microsoft | Microsoft Azure SQL Data Warehouse |
| Microsoft | Microsoft Azure SQL Database |
| Microsoft | Microsoft Dynamics CRM |
| Microsoft | Microsoft OneDrive |
| Microsoft | Microsoft Power BI |
| Microsoft | Microsoft SharePoint |
| Microsoft | Microsoft SQL Server |
| MongoDB | MongoDB |
| MySQL | MySQL |
| Oracle | Oracle |
| Pivotal | Pivotal Greenplum |
| PostgreSQL | PostgreSQL |
| Salesforce | Salesforce |
| SAP | SAP Hana |
| Snowflake | Snowflake |
| Teradata | Teradata |
| Vertica | Vertica |
Use other tools to write to other supported data sources. For a complete list of data sources supported in Alteryx, see Supported Data Sources and File Formats.
- In the Configuration window, type a file path in Write to File or Database, or select an option in the drop-down:
Microsoft SQL Server: Click Microsoft SQL Server to create a new Microsoft SQL Server database connection.
Oracle: Click Oracle to create a new Oracle database connection.
Hadoop: Click Hadoop to create a new Hadoop database connection.
Alteryx connects to a Hadoop Distributed File System and writes .csv and .avro files. All Hadoop distributions that implement the HDFS standard are supported.
Configuring HDFS connections: HDFS can be accessed using httpfs (port 14000), webhdfs (port 50070), or Knox Gateway (port 8443). Consult your Hadoop administrator to determine which to use. If you have a Hadoop High Availability (HA) cluster, your Hadoop admin must explicitly enable httpfs.
MapR may not support webhdfs.
In the HDFS Connection window:
- Select a server configuration: HTTPFS, WebHDFS, or Knox Gateway.
- Host: Specify the installed instance of the Hadoop server. The entry must be a URL or IP address.
- Port: Displays the default port number for httpfs (14000), webhdfs (50070), or Knox Gateway (8443), or enter a specific port number.
- URL: The URL defaults to a value based on the Host and can be modified.
- User Name: Depending on the cluster setup, specify the user name and password for access.
- httpfs: A user name is needed, but it can be anything.
- webhdfs: The user name is not needed.
- Knox Gateway: A user name and password are needed.
- Kerberos: Select a Kerberos authentication option for reading and writing to HDFS. The option you choose depends on how your IT admin configured the HDFS server:
- None: No authentication is used.
- Kerberos MIT: Alteryx uses the default MIT ticket to authenticate with the server. You must first acquire a valid ticket using the MIT Kerberos Ticket Manager.
- Kerberos SSPI: Alteryx uses Windows Kerberos keys for authentication, which are obtained when logging in to Windows with your Windows credentials. The User Name and Password fields are therefore not available.
- (Recommended) Click Test to test the connection.
- Click OK.
- Specify the path of the file (for example, path/to/file.csv), or browse to the file and select it.
- Select the Avro or CSV file format and click OK.
Self-signed certificates are not supported in Alteryx. Use a trusted certificate when configuring Knox authentication.
To connect to HDFS for in-database processing, use the Connect In-DB Tool.
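The three server configurations differ mainly in the port and base path of the WebHDFS-style REST endpoint the connection uses. The sketch below is illustrative only; the host name, user name, Knox topology, and file path are placeholder assumptions, not values from this article.

```python
# Minimal sketch of how the three HDFS server configurations map to
# WebHDFS-style URLs. Host, user, topology, and path are placeholders.

def hdfs_create_url(config: str, host: str, path: str,
                    user: str = "hdfs_user", topology: str = "default") -> str:
    """Return a WebHDFS-style CREATE URL for the given server configuration."""
    path = path.lstrip("/")
    if config == "httpfs":     # default port 14000; a user name is required
        return f"http://{host}:14000/webhdfs/v1/{path}?op=CREATE&user.name={user}"
    if config == "webhdfs":    # default port 50070; a user name is not needed
        return f"http://{host}:50070/webhdfs/v1/{path}?op=CREATE&user.name={user}"
    if config == "knox":       # default port 8443; HTTPS with basic auth and a trusted certificate
        return f"https://{host}:8443/gateway/{topology}/webhdfs/v1/{path}?op=CREATE"
    raise ValueError(f"unknown configuration: {config}")


if __name__ == "__main__":
    for cfg in ("httpfs", "webhdfs", "knox"):
        print(cfg, "->", hdfs_create_url(cfg, "hadoop.example.com", "path/to/file.csv"))
```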
Other Databases: Point to Other Databases to create a new database connection to a database other than Microsoft SQL Server, Oracle, or Hadoop.
Select the database you want to connect to:
- ODBC
- OleDB
- Oracle OCI
- Oracle Bulk
- SQL Server Bulk (for Microsoft SQL Server and Microsoft Azure SQL Data Warehouse)
- Teradata Bulk
- Amazon Redshift Bulk
- Snowflake Bulk
- 32-Bit Database Connections
- Previous connections
Before you connect to a database, consider the following:
- Both ODBC and OleDB connection types support spatial connections. Alteryx auto-detects if a database supports spatial functionality and displays the required configurations.
- When connecting to any OleDB or ODBC database, be sure to use the native driver provided by the database vendor.
- The Choose Table or Specify Query window opens if the database contains multiple tables. You can then select tables and construct queries.
- To connect to a database for in-database processing, see In-Database Overview.
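For the generic ODBC option, the connection is conceptually similar to a DSN-based ODBC connection backed by the vendor's native driver. Here is a minimal Python sketch using the pyodbc package; the DSN name, credentials, and table are placeholder assumptions, not values from this article.

```python
# Minimal sketch of a DSN-based ODBC write, analogous to the generic ODBC option.
# "MyDatabaseDSN", the credentials, and the table are placeholder assumptions.
import pyodbc

conn = pyodbc.connect("DSN=MyDatabaseDSN;UID=db_user;PWD=db_password")
cursor = conn.cursor()

# Append a couple of rows to an existing table, similar in spirit to
# what the Output Data tool does when it appends to a table.
rows = [(1, "alpha"), (2, "beta")]
cursor.executemany("INSERT INTO demo_output (id, label) VALUES (?, ?)", rows)

conn.commit()
conn.close()
```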
Saved Data Connections: Point to an option and click a saved or shared data connection to connect to it, or click Manage to view and edit connections.
- All Connections: Displays a list of connections saved to your computer plus connections shared with you from a Gallery.
- My Computer: Displays a list of connections saved to your computer.
- Gallery: Displays a list of connections shared with you from a Gallery.
- Add a Gallery: Opens the Gallery Login screen. Use your user name and password to log in. After logging in, return to Saved Data Connections and point to the Gallery in the list to view connections shared from the Gallery.
See Manage Data Connections for more on managing saved and shared data connections and troubleshooting.
- Select file format options. Options vary based on the file or database to which you connect. See File Format Options.
- (Optional) Select Take File/Table Name From Field to write a separate file for each value in a selected field.
- Click the drop-down and select an option:
- Append Suffix to File/Table Name: Appends the value of the selected field to the end of the name of the selected table.
- Prepend Prefix to File/Table Name: Prepends the value of the selected field to the beginning of the name of the selected table.
- Change File/Table Name: Changes the file name to the value of the selected field.
- Change Entire File Path: Changes the file name to the value of the selected field. The value must be a complete file path. This option can overwrite an existing file if one already exists at that path.
- Click Field Containing File Name or Part of File Name and select a field.
- (Optional) Select Keep Field in Output.
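Conceptually, Take File/Table Name From Field splits the output into one file or table per value of the selected field and uses that value in the output name. The pandas sketch below illustrates the idea only; the data, the Region field, and the output paths are placeholder assumptions, and this is not how Designer implements the option internally.

```python
# Rough sketch of the "Take File/Table Name From Field" idea using pandas.
# The data, the "Region" field, and the output paths are placeholder assumptions.
import pandas as pd

df = pd.DataFrame({
    "Region": ["East", "West", "East", "North"],
    "Sales":  [100, 250, 175, 90],
})

# One file per field value, with the value appended to the base file name
# (the "Append Suffix to File/Table Name" behavior).
for region, chunk in df.groupby("Region"):
    # Drop the grouping field unless the equivalent of "Keep Field in Output" is wanted.
    chunk.drop(columns=["Region"]).to_csv(f"output_{region}.csv", index=False)
```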
For best performance and data integrity, close outputs before you run a workflow.
- After you run the workflow, select the Output Data tool.
- In the Results window, click .
- Locate the output file and click the file link to open it.