The Output Data tool writes the results of a workflow to a file or database.
Use the Output Data tool to write results to the following supported data sources:
File format | Extension
Alteryx Database | .yxdb
Alteryx Spatial Zip | .sz
Apache Hadoop Avro | .avro
ASCII Flat | .flat
Autodesk | .sdf
Comma Separated Values | .csv
dBase | .dbf
ESRI Personal GeoDatabase | .gdb
ESRI Shapefile | .shp
Google Earth/Maps | .kml
IBM SPSS | .sav
JSON | .json
MapInfo Professional Interchange | .mid, .mif
MapInfo Professional Table | .tab
Microsoft Access 2000-2003 | .mdb
Microsoft Excel 1997-2003 | .xls
Microsoft Excel 2007, 2010, 2013, 2016 | .xlsx
Microsoft Excel Macro Enabled | .xlsm
Microsoft Office Access 2007, 2010, 2013, 2016 | .accdb
OpenGIS | .gml
QlikView | .qvx
SAS | .sas7bdat
SQLite | .sqlite
SRC Geography | .geo
Tableau Data Extract | .tde
Vendor | Database
Amazon | Amazon Aurora
Amazon | Amazon Redshift
Apache Hadoop | Cassandra
Apache Hadoop | Hadoop Distributed File System (HDFS)
Apache Hadoop | Hive
Apache Hadoop | Spark
Cloudera | Impala
Cloudera | Hadoop Distributed File System (HDFS)
Cloudera | Hive
DataStax | DataStax Enterprise, DataStax Community
Exasolution | EXASOL
Hortonworks | Hadoop Distributed File System (HDFS)
Hortonworks | Hive
HP | Vertica
IBM | IBM DB2
IBM | IBM Netezza/Pure Data Systems
MapR | Hadoop Distributed File System (HDFS)
MapR | Hive
Microsoft | Microsoft Azure SQL Data Warehouse
Microsoft | Microsoft SQL Server 2008, 2012, 2014, 2016
MySQL | MySQL
Oracle | Oracle
Pivotal | Pivotal Greenplum
PostgreSQL | PostgreSQL
SAP | SAP HANA
Teradata | Teradata
Teradata | Teradata Aster
Use other tools to write to other supported data sources. For a complete list of data sources supported in Alteryx, see Supported Data Sources.
To connect to a file in a local or network directory, click File and browse to the file.
Click Microsoft SQL Server to create a new Microsoft SQL Server database connection.
Click Oracle to create a new Oracle database connection.
Click Hadoop to create a new Hadoop database connection.
Alteryx connects to a Hadoop Distributed File System and writes .csv and .avro files. All Hadoop distributions that implement the HDFS standard are supported.
HDFS can be accessed via httpfs (port 14000), webhdfs (port 50070), or the Knox Gateway (port 8443). Ask your Hadoop administrator which to use. If you have a Hadoop High Availability (HA) cluster, your Hadoop administrator must explicitly enable httpfs.
MapR may not support webhdfs.
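Before configuring the connection, it can help to confirm that the endpoint your Hadoop administrator gave you is reachable by querying the WebHDFS REST API directly. The sketch below is not part of Alteryx; the host, port, user name, and path are placeholders, and httpfs exposes the same API on port 14000.

```python
# Minimal reachability check for an HDFS endpoint over webhdfs/httpfs.
# Not part of Alteryx; host, port, user, and path below are placeholders.
import requests

HOST = "namenode.example.com"  # hypothetical name node or httpfs host
PORT = 50070                   # webhdfs; use 14000 for httpfs
USER = "hdfs_user"             # hypothetical HDFS user name

# webhdfs and httpfs expose the same REST API under /webhdfs/v1.
url = f"http://{HOST}:{PORT}/webhdfs/v1/tmp?op=GETFILESTATUS&user.name={USER}"

resp = requests.get(url, timeout=10)
resp.raise_for_status()        # raises if the endpoint is unreachable or rejects the request
print(resp.json())             # FileStatus JSON for /tmp on success
```

A successful response means the host, port, and protocol are correct. Knox Gateway URLs differ (HTTPS on port 8443 behind a gateway path), so confirm those values with your administrator.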
In the HDFS Connection window:
Self-signed certificates are not supported in Alteryx. Use a trusted certificate when configuring Knox authentication (a quick way to check the gateway's certificate is sketched below).
Enter the path of the file (for example, path/to/file.csv), or browse to the file and select it.
To connect to HDFS for in-database processing, use the Connect In-DB Tool.
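Because self-signed certificates are not supported, you can save time by verifying that the Knox Gateway presents a certificate your machine already trusts before configuring the connection. This is a minimal sketch, not an Alteryx feature; the host name is a placeholder for the value your Hadoop administrator provides.

```python
# Minimal sketch: verify the Knox Gateway's TLS certificate is trusted by this
# machine. Not part of Alteryx; the host name below is a placeholder.
import socket
import ssl

HOST = "knox.example.com"   # hypothetical Knox Gateway host
PORT = 8443                 # Knox Gateway port noted above

context = ssl.create_default_context()  # system trust store; rejects self-signed certificates
with socket.create_connection((HOST, PORT), timeout=10) as sock:
    with context.wrap_socket(sock, server_hostname=HOST) as tls:
        # Raises ssl.SSLCertVerificationError for self-signed or otherwise
        # untrusted certificates; prints the issuer when verification succeeds.
        print(tls.getpeercert()["issuer"])
```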
Point to Other Databases to create a new database connection to a database other than Microsoft, Oracle, or Hadoop.
Select the database you want to connect to:
Before you connect to a database, consider the following:
Point to an option and click a saved or shared data connection to connect to it, or click Manage to view and edit connections.
See Manage Data Connections Window for more on managing saved and shared data connections and troubleshooting.
For best performance and data integrity, close outputs before you run a workflow.