Input Data Tool (Classic Mode)
Warning
Classic Mode is not available when Data Connection Manager (DCM) is enabled.
The Input Data tool brings data into your workflow by connecting to a file or database. Use Input Data to connect to these supported data sources:
File Types
File Type | File Extension |
---|---|
Alteryx Calgary | .cydb |
Alteryx Database | .yxdb |
Alteryx Spatial Zip | .sz |
Avro | .avro |
Comma Separated Values | .csv |
dBase | .dbf |
ESRI Personal GeoDatabase | .mdb |
ESRI Shapefile | .shp |
Extensible Markup Language | .xml |
Flat ASCII | .flat |
GIS | .grc, .grd |
Google Earth/Google Maps | .kml |
Gzip | .gz, .tgz |
IBM SPSS | .sav |
JSON | .json |
MapInfo Professional Interchange Format | .mif |
MapInfo Professional Table | .tab |
Microsoft Access 2000-2003 | .mdb |
Microsoft Access 2007, 2010, 2013, 2016 | .accdb |
Microsoft Excel Binary | .xlsb |
Microsoft Excel 1997-2003 | .xls |
Microsoft Excel | .xlsx |
Microsoft Excel Macro-Enabled | .xlsm |
QlikView | .qvx |
SAS | .sas7bdat |
SQLite | .sqlite |
SRC Geography | .geo |
Text | .txt, .asc, .dat, .log |
Zip | .zip |
Data Sources
Category | Data Source |
---|---|
Adobe | Adobe Analytics |
Amazon | Amazon Athena |
 | Amazon Aurora |
 | Amazon Redshift |
 | Amazon S3 |
Apache | Cassandra |
 | Hadoop Distributed File System (HDFS) |
 | Hive |
 | Spark |
Cloudera | Impala |
 | Hadoop Distributed File System (HDFS) |
 | Hive |
Databricks | Databricks |
ESRI | ESRI GeoDatabase |
Exasolution | EXASOL |
Google | Google Analytics |
 | Google BigQuery |
 | Google Sheets |
Hortonworks | Hadoop Distributed File System (HDFS) |
 | Hive |
IBM | IBM DB2 |
 | IBM Netezza |
Marketo | Marketo |
MapR | Hadoop Distributed File System (HDFS) |
 | Hive |
Microsoft | Microsoft Analytics Platform System |
 | Microsoft Azure Data Lake Store |
 | Microsoft Azure SQL Data Warehouse |
 | Microsoft Azure SQL Database |
 | Microsoft Cognitive Services |
 | Microsoft Dynamics CRM |
 | Microsoft OneDrive |
 | Microsoft SharePoint |
 | Microsoft SQL Server |
MongoDB | MongoDB |
MySQL | MySQL |
NetSuite | NetSuite |
Oracle | Oracle |
Pivotal | Pivotal Greenplum |
PostgreSQL | PostgreSQL |
Salesforce | Salesforce |
SAP | SAP HANA |
Snowflake | Snowflake |
Teradata | Teradata |
 | Teradata Aster |
Vertica | Vertica |
Use other tools to connect to other supported data sources. For a complete list of data sources supported in Alteryx, go to Supported Data Sources and File Formats.
Configure the Tool
To configure the tool, in the Configuration window, enter a file path in Connect a File or Database, or select one of the options described below.
After you select an option:
Select file format options. Options vary based on the file or database to which you connect. Go to File Format Options.
Preview the data layout.
Select File to browse to and connect to a file in a local or network directory.
Double-click a file to select it.
You can also connect to a file by dragging it from your computer onto the Alteryx canvas, which adds an Input Data tool connected to the selected file.
Continue scrolling for instructions on how to select multiple files and Excel files.
Go to Gzip File Support and Zip File Support for more information.
Select file type to extract: Use the dropdown to select the type of files to display.
Select files: Check the check box next to the file you want to extract.
To see all files in the Gzip or Zip file, including files that are not supported by Alteryx, select Other Files under Select file type to extract. Then use Parse other files as to select a file type.
Important
Gzip and Zip files are not supported in Alteryx Server.
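For context, here is a minimal Python sketch of the same extract-then-parse idea outside Designer, using only the standard library. The archive and member file names are hypothetical:

```python
# Zip: list the members, then parse a chosen member as CSV
# (the rough equivalent of browsing "Select files" above).
import csv
import gzip
import zipfile

with zipfile.ZipFile("data/archive.zip") as zf:
    print(zf.namelist())  # all files inside the archive
    with zf.open("ABCD_1.csv") as member:
        rows = list(csv.reader(line.decode("utf-8") for line in member))

# Gzip: a .gz wraps a single file, so decompress and parse directly.
with gzip.open("data/archive.csv.gz", mode="rt", encoding="utf-8") as f:
    rows = list(csv.reader(f))
```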
Select Microsoft SQL Server to create a new Microsoft SQL Server database connection. Go to Connect to a SQL Server Database for more information.
Select Oracle to create a new Oracle database connection. Go to Connect to an Oracle Database for more information.
Important
Fixed length fields from Oracle are read in as variable length fields if they are longer than 40 characters.
Select Hadoop to create a new Hadoop database connection.
Alteryx connects to a Hadoop Distributed File System and reads .csv and .avro files. All Hadoop distributions implementing the HDFS standard are supported. Continue scrolling for instructions on how to configure HDFS connections.
Point to Other Databases to create a new database connection to a database other than Microsoft, Oracle, or Hadoop.
Select the database you want to connect to:
ODBC
OleDB
Oracle OCI
Teradata Bulk
ESRI
32-Bit Database Connections
Previous Connections
Before you connect to a database, consider the following:
Both ODBC and OleDB connection types support spatial connections. Alteryx auto-detects if a database supports spatial functionality and displays the required configurations.
When connecting to any OleDB or ODBC database, be sure to use the native driver provided by the database vendor.
The Choose Table or Specify Query window opens if the database has multiple tables. You can then select tables and construct queries.
To connect to a database for in-database processing, go to In-Database Overview.
Important
For all ODBC, OCI, and OleDB data sources, String and WString fields in Alteryx trim leading and trailing spaces when reading in data.
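As an illustration of what connecting through a vendor-provided native ODBC driver looks like outside Designer, here is a minimal pyodbc sketch. The driver name, server, database, and table are assumptions; substitute your vendor's driver and connection details:

```python
# Minimal ODBC sketch using pyodbc; assumes pyodbc is installed and
# "ODBC Driver 17 for SQL Server" is registered on the machine.
# Server, database, and table names are hypothetical.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=myserver;DATABASE=mydb;Trusted_Connection=yes;"
)
cursor = conn.cursor()
cursor.execute("SELECT TOP 10 * FROM my_table")  # TOP is SQL Server syntax
for row in cursor.fetchall():
    print(row)
conn.close()
```

Note that the space-trimming described above is Alteryx behavior on read; a raw ODBC client such as pyodbc returns values as the driver provides them.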
Point to an option and select a saved or shared data connection to connect to it, or select Manage to view and edit connections.
All Connections: Displays a list of connections saved to your computer plus connections shared with you from a Gallery.
My Computer: Displays a list of connections saved to your computer.
Server: Displays a list of connections shared with you from Alteryx Server.
Add a Server: Opens the Server Sign-in screen. Use your username and password to sign in. After signing in, return to Saved Data Connections and point to the Server in the list to view connections shared from the Server.
Go to Manage Data Connections for more on managing saved and shared data connections and troubleshooting.
Select Multiple Files
In the file browse window, enter a wildcard as part of the file path.
Consider a case where you have multiple data tables with both:
Identical table structures, meaning the tables contain the same columns, data types, and sheet names.
Similar names in the same directory.
Enter the file name they have in common and add an * to substitute all subsequent characters or a ? to substitute one character. Remember to include the file extension that is common to all files when specifying the file names.
This path brings in every .csv file contained within the data\datafiles directory with a file name that begins with ABCD.
data\datafiles\ABCD*.csv
For example, it brings in both ABCD_4.csv and ABCD_012.csv.
This path brings in every .csv file contained within the data\datafiles directory with a file name that begins with ABCD_ + 1 character.
data\datafiles\ABCD_?.csv
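Outside Designer, Python's glob module supports the same * and ? wildcards. This sketch stacks the matching files into one table; it assumes pandas is installed and that the files share an identical structure, as described above:

```python
# Read every data\datafiles\ABCD*.csv and stack them into one table.
import glob
import pandas as pd

paths = sorted(glob.glob(r"data\datafiles\ABCD*.csv"))  # * = any characters
combined = pd.concat((pd.read_csv(p) for p in paths), ignore_index=True)

# ? matches exactly one character, so ABCD_012.csv is not matched here.
single_char = glob.glob(r"data\datafiles\ABCD_?.csv")
```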
Excel
On Select Excel Input, select one of the Excel inputs:
Select a sheet to choose from the sheets available in the Excel file.
Select a named range to choose from the named ranges available in the Excel file.
Import only the list of sheet names to create output with a single column containing sheet names as values.
None of the related data is output when this option is selected.
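Outside Designer, the same sheet and named-range information can be listed with the openpyxl library (a minimal sketch, not how Designer itself reads the file; assumes openpyxl is installed, and the file name is hypothetical):

```python
# List sheet names and named ranges without reading any sheet data,
# similar to the "Import only the list of sheet names" option.
from openpyxl import load_workbook

wb = load_workbook("report.xlsx", read_only=True)
print(wb.sheetnames)      # sheet names only, no related data
print(wb.defined_names)   # named ranges (exact API varies by openpyxl version)
```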
Known Issues
The Access driver reads !!! as ###, and reads both ,,, and ... as ___. This can impact the sheet names and named ranges in an Excel file pulled into Designer.
Configure HDFS Connections
HDFS can be read using httpfs (port 14000), webhdfs (port 50070), or Knox Gateway (port 8443). Consult your Hadoop administrator about which to use. If you have a Hadoop High Availability (HA) cluster, your Hadoop admin must explicitly enable httpfs.
MapR may not support webhdfs.
In the HDFS Connection window:
Select a server configuration: HTTPFS, WebHDFS, or Knox Gateway.
Host: Specify the installed instance of the Hadoop server. The entry must be a URL or IP address.
Port: Displays the default port number for httpfs (14000), webhdfs (50070), or Knox Gateway (8443), or enter a specific port number.
URL: The URL defaults based on the Host. The URL can be modified.
User Name: Depending on the cluster setup, specify the user name and password for access.
httpfs: A user name is needed, but it can be anything.
webhdfs: The user name is not needed.
Knox Gateway: A user name and password are needed.
Kerberos: Select a Kerberos authentication option for reading and writing to HDFS. The option you choose depends on how your IT admin configured the HDFS server:
None: No authentication is used.
Kerberos MIT: Alteryx uses the default MIT ticket to authenticate with the server. You must first acquire a valid ticket using the MIT Kerberos Ticket Manager.
Kerberos SSPI: Alteryx uses Windows Kerberos keys for authentication, which are obtained when logging in to Windows with your Windows credentials. The User Name and Password fields are therefore not available.
(Recommended) Select Test to test the connection.
Select OK.
Specify the path of the file (for example, path/to/file.csv), or browse to the file and select it.
Select the Avro or CSV file format and select OK.
To connect to HDFS for in-database processing, use the Connect In-DB tool.
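For reference, here is a minimal Python sketch of the same kind of read over the WebHDFS REST API using the requests library. The host, port, path, and user name are hypothetical; httpfs exposes the same API on port 14000, and Kerberos-secured clusters need an authenticated session rather than a plain user name:

```python
# Open a CSV file over WebHDFS (op=OPEN) with simple authentication,
# where the user name is passed as a query parameter.
import csv
import io
import requests

url = "http://hadoop-host:50070/webhdfs/v1/path/to/file.csv"
resp = requests.get(url, params={"op": "OPEN", "user.name": "analyst"})
resp.raise_for_status()  # fail loudly on HTTP errors

rows = list(csv.reader(io.StringIO(resp.text)))
print(rows[:5])
```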
Preview Details
The data layout preview is limited to 100 records or fewer for wide files. If your data contains 1500 columns, for example, at least 1 record and up to 2 rows display. Use a Browse tool to view all of the data.
Note
For best performance and data integrity, close inputs before you build and run a workflow.