Snowflake

Driver Configuration Requirements

For in-database processing, and to avoid errors when you write data with the Output Data tool, specify a Warehouse, Schema, and Database in the driver.
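
For reference, a DSN-less connection string that supplies all three values might look like this (the server and object names are illustrative):

  • Connection string example: odbc:Driver={SnowflakeDSIIDriver};Server=customerinstance.snowflakecomputing.com;Database=TEST_DB;Schema=PUBLIC;Warehouse=TEST_WAREHOUSE;UID=user;PWD=__EncPwd1__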

Type of Support

Read & Write; In-Database.

Validated On

Database Version: 6.8.1

ODBC Client Version: 3.0.0.1001 64-bit

Alteryx Tools Used to Connect

In-Database Workflow Process

Connect In-DB Tool


Data Stream In Tool


Input Data Tool

  1. Select the Input Data tool and navigate to the Input Data (1) - Configuration pane > Select the dropdown icon under Connect a File or Database.

  2. In the Data connections window, navigate to All data sources > Snowflake > ODBC.

  3. In the Snowflake ODBC Connection pop-up window, enter your User name and Password.

  4. In the Choose Table or Specify Query window, select the Tables tab > Select a table > Select OK.
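
Once saved, the connection is stored in Designer as an ODBC connection string. With a DSN it takes roughly this form (the DSN and user names are illustrative):

  • Connection string example: odbc:DSN=Snowflake_DSN;UID=user;PWD=__EncPwd1__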

Classic Mode

If you want to work in Classic mode, navigate to Options > User Settings > Edit User Settings and select the Use classic mode for the Input/Output tool menu options checkbox.

  1. To specify a query, select the SQL Editor tab > Enter the query in the available space (a sample query follows this section) > Select the Test Query button > Select OK.

  2. Select the Output Data tool and navigate to the Options menu.

  3. Select User Settings > Edit User Settings > Select the Use classic mode for the Input/Output tool menu options checkbox.

  4. Navigate to the Output Data (1) - Configuration pane > Under Write to File or Database > Select the dropdown icon.

  5. Select Other Databases > Select Snowflake Bulk....

  6. You can also use the Input Data tool (Classic Mode) to select your files and data sources.

After you switch to Classic mode, replace your Output Data tool with a new tool, then select the canvas or press F5 to refresh.
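
For the SQL Editor step above, any valid Snowflake query can be entered. A minimal illustrative example (the table name is hypothetical):

  SELECT *
  FROM PUBLIC.MY_TABLE
  LIMIT 100;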

Output Data Tool

To fully support reading and writing tables and columns, set the Alteryx Designer Table/FieldName SQL Style option to Quoted. Quoted targets the exact table name in the database, while None folds identifiers to all caps.
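
To illustrate the difference: Snowflake folds unquoted identifiers to uppercase, while quoted identifiers keep their exact case. A minimal sketch (object names are hypothetical):

  CREATE TABLE MyTable (Id INT);        -- stored and matched as MYTABLE
  CREATE TABLE "MyTable" ("Id" INT);    -- stored and matched as MyTable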

Review the following before you configure a Snowflake Bulk connection:

  • You can only write data with the Snowflake Bulk loader.

  • After writing data to the new output, the Snowflake Bulk loader removes the written data from the S3 bucket.

  • Maximum allowed length for text fields is 16,777,216 bytes.

For more information, see the Snowflake documentation.

Note

As of version 2023.2, when you cancel a workflow locally, Designer also cancels all uncompleted in-database executions. This unblocks your data pipelines and improves overall database interaction.

Configure an ODBC Connection

  1. In the ODBC Data Source Administrator, select the Snowflake driver and select Configure.

  2. Enter your Connection Settings and credentials.

  3. Select OK to save the connection.
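
The available fields vary by driver version, but a typical Snowflake DSN supplies values like these (all values are illustrative):

  Data Source Name: Snowflake_DSN
  Server: customerinstance.snowflakecomputing.com
  Database: TEST_DB
  Schema: PUBLIC
  Warehouse: TEST_WAREHOUSE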

Note

To fully support the reading and writing of camel case tables and columns, you must set the Alteryx Designer Table/FieldName SQL Style option to Quoted.

Configure Snowflake JWT for Key-Pair Authentication

To configure Snowflake JWT through the ODBC driver:

  1. Create the key pair and assign it to your Snowflake user per the instructions from Snowflake: https://docs.snowflake.com/en/user-guide/key-pair-auth.html.

  2. Set the Authenticator in the ODBC DSN to SNOWFLAKE_JWT.

  3. In Alteryx, the connection string must point to the private key file location, for example: odbc:DSN=Simba_Snowflake_JWT;UID=user;PRIV_KEY_FILE=G:\AlteryxDataConnectorsTeam\OAuth project\PEMkey\rsa_key.p8;PRIV_KEY_FILE_PWD=__EncPwd1__;JWT_TIMEOUT=120

    For more instructions, see the Snowflake documentation.
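
As a sketch of the key-pair setup described in the linked Snowflake instructions (the user name and key value are placeholders): generate an encrypted private key and its public key with OpenSSL, then register the public key on your Snowflake user.

  openssl genrsa 2048 | openssl pkcs8 -topk8 -v2 des3 -inform PEM -out rsa_key.p8
  openssl rsa -in rsa_key.p8 -pubout -out rsa_key.pub

  -- In Snowflake, attach the public key to the user:
  ALTER USER my_user SET RSA_PUBLIC_KEY='MIIBIjANBgkqh...';

The PRIV_KEY_FILE parameter in the connection string then points at rsa_key.p8, and PRIV_KEY_FILE_PWD holds the passphrase you chose during key generation.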

Configure Snowflake Bulk Connection to Write Data

Bulk loading is supported for connections using a DSN as well as DSN-less connections.

To configure the bulk loader with a DSN-less connection string, construct the connection string manually. The string must contain both the parameters needed to write to Snowflake and those needed for staging. The Snowflake parameters are described in the Snowflake documentation; the staging parameters are described below.

  • Connection string example: snowbl:Driver={SnowflakeDSIIDriver};Server=customerinstance.snowflakecomputing.com;Database=TEST_DB;Warehouse=TEST_WAREHOUSE;schema=PUBLIC;UID=user;PWD=password;Bucket=S3Bucket;Access=IAMAccessKey;Secret=IAMSecretKey;URL=s3.amazonaws.com;Region=us-east-1
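
In this example, the staging-specific parameters are:

  • Bucket: the S3 bucket used for staging.

  • Access: the IAM access key.

  • Secret: the IAM secret key.

  • URL: the S3 endpoint, for example s3.amazonaws.com.

  • Region: the AWS region the bucket resides in, for example us-east-1.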

To configure the bulk loader with a DSN, use the UI and follow the steps below:

  1. Select the Output Data tool, and navigate to the Output Data (1) - Configuration pane.

  2. Under Write to File or Database, select the dropdown icon. In the Data connections window, select the Data sources tab > All data sources > Snowflake > Bulk.

  3. In the Snowflake Bulk Connection window, select the Local tab > Enter your credentials in the Username (optional) and Password (optional) fields > Select OK to open the Output Table pop-up window.

  4. Enter the table (or worksheet) name for the output, either as the table name alone or as the fully qualified table in the form db.schema.tablename (for example, TEST_DB.PUBLIC.MY_TABLE).

  5. Under Options, in the Output Data (1) - Configuration pane, select either Quoted or None from the Table/FieldName SQL Style dropdown.

  6. Decide whether to select the Take File/Table Name From Field checkbox. If you select it, choose one of the following actions from the dropdown:

    1. Append Suffix to File/Table Name.

    2. Prepend Prefix to File/Table Name.

    3. Change File/Table Name.

    4. Change Entire File Path.

  7. If you selected the Take File/Table Name From Field checkbox, under Field Containing File Name or Part of File Name, select one of the following fields:

    1. SEQUENCE_CATALOG

    2. SEQUENCE_SCHEMA

    3. SEQUENCE_NAME

    4. SEQUENCE_OWNER

    5. DATA_TYPE

    6. NUMERIC_PRECISION

    7. NUMERIC_PRECISION_RADIX

    8. NUMERIC_SCALE

    9. START_VALUE

    10. MINIMUM_VALUE

    11. MAXIMUM_VALUE

    12. NEXT_VALUE

    13. INCREMENT

    14. CYCLE_OPTION

    15. CREATED

    16. LAST_ALTERED

    17. COMMENT

  8. Decide whether to select the Keep Field in Output checkbox for the field you selected from the dropdown.

Methods to Stage Data with Snowflake Bulk Connection

Configure a Snowflake Bulk Connection to Write to Amazon S3

  1. From the Output Data - Configuration window, select Write to File or Database and select Other Databases > Snowflake Bulk to display the Snowflake Bulk Connection window.

  2. Select a Data Source Name, or select ODBC Admin to create one. See ODBC and OLEDB Database Connections.

  3. Enter a User Name and Password, as necessary.

  4. In Amazon S3, enter your AWS Access Key and your AWS Secret Key.

  5. From Secret Key Encryption, select an encryption option:

    • Hide: Hides the password using minimal encryption.

    • Encrypt for Machine: Any user on the computer has full access to the connection.

    • Encrypt for User: The logged in user has full access to the connection on any computer.

  6. From Endpoint, select one of the following options:

    • Default: Amazon determines the endpoint based on the selected bucket.

    • Specific endpoint: To specify an S3 region in which the bucket resides, specify a custom endpoint or select from previously entered endpoints.

      • The S3 bucket must be in the specified S3 region. Otherwise, the following error displays: The bucket you are attempting to access must be addressed using the specified endpoint. Please send all future requests to this endpoint.

      • Select Default to close the error window.

  7. (Optional) Select Use Signature V4 for Authentication to increase security beyond the default Signature Version 2. This option is automatically enabled for regions requiring Signature Version 4. Regions created after January 30, 2014 support only Signature Version 4. The following regions require Signature Version 4 authentication:

    • US East (Ohio) Region

    • Canada (Central) Region

    • Asia Pacific (Mumbai) Region

    • Asia Pacific (Seoul) Region

    • EU (Frankfurt) Region

    • EU (London) Region

    • China (Beijing) Region

  8. Select a Server-Side Encryption method for uploading to an encrypted Amazon S3 bucket. See Amazon Simple Storage Service Developer Guide.

    • None (Default): An encryption method is not used.

    • SSE-KMS: Use server-side encryption with AWS KMS-managed keys. You can also provide a KMS Key ID. When you select this method, Use Signature V4 for Authentication is enabled by default.

  9. Enter the Bucket Name of the AWS bucket in which the data objects are stored.

  10. Configure additional File Format Options in the Output Data (1) - Configuration pane. See File Format Options.

Configure Snowflake Bulk Connection to Write to Local Storage

Three staging options are available when you stage data on your local drive: User Stage, Table Stage, and Internal Named Stage.

Select the Output Data tool and navigate to the Output Data (1) - Configuration pane. Under Write to File or Database, open the Data connections window and select the Data sources tab > All data sources > Snowflake > Bulk. In the Snowflake Bulk Connection window, select the Local tab, then configure these options:

  1. User Stage: The internal stage provided by Snowflake, associated with the user.

    1. Select User > Select OK. In the Output Table pop-up window, enter the table (or worksheet) name for the output, either as the table name alone or as the fully qualified table in the form db.schema.tablename. Select OK. Under Options in the Output Data (1) - Configuration pane, select either Quoted or None from the Table/FieldName SQL Style dropdown.

  2. Table Stage: The internal stage provided by Snowflake, associated with the table.

    1. Select Table Stage > Select OK. In the Output Table pop-up window, enter the table (or worksheet) name for the output, either as the table name alone or as the fully qualified table in the form db.schema.tablename. Select OK. Under Options in the Output Data (1) - Configuration pane, select either Quoted or None from the Table/FieldName SQL Style dropdown.

  3. Internal Named Stage: In the Snowflake database, create and execute a CREATE STAGE command, and provide the stage name in the tool configuration (see the example after this list).

    Note

    The maximum field size is 16 MB. If the field size threshold is exceeded, an error is thrown and no data is written.

  4. Compression Type: Options are either “No Compression” or “Compress with GZip”.

    1. No Compression: files are staged and uploaded as CSV.

    2. Compress with GZip: CSV files are zipped with GZip.

  5. Chunk Size in MB (1-999): This option allows you to select the size of each CSV file that is staged locally.

    Note

    Actual file size may vary from selected Chunk Size due to underlying formatting and compression.

  6. Number of Threads (1-99): This option specifies the number of threads used to upload files to Snowflake. Performance may improve for larger files when you increase this value. If you enter 0, the Snowflake default (4) is used.
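
For the Internal Named Stage option above, create the stage in Snowflake before you configure the tool. A minimal illustrative example (the stage name is hypothetical):

  CREATE STAGE my_internal_stage;

You then enter my_internal_stage as the stage name in the tool configuration.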

When loading from a stage, each COPY INTO statement can include up to 1000 files. If there are more than 1000 files in the stage, you may see multiple COPY INTO statements. This is a Snowflake requirement for COPY INTO statements that use files. For more information, see the Snowflake portal.

More information on the Compression Type, Chunk Size in MB, and Number of Threads options can be found on the Snowflake portal.