Databricks

Version:
2022.1
Last modified: September 09, 2022
Connection Type

ODBC (64-bit)

Driver Configuration Requirements

The host must be a Databricks cluster JDBC/ODBC Server hostname.

For optimal performance, enable the Fast SQLPrepare option within the driver Advanced Options so that Alteryx can retrieve metadata without running a query.
Deselect the Enable Translation for CTAS check box in the DSN. It is selected by default.
To use Visual Query Builder, select the Get Tables With Query option within the driver Advanced Options.
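
The three driver settings above can also be set directly in the DSN entry. A sketch of a Linux odbc.ini entry for the Simba Apache Spark driver, assuming the FastSQLPrepare and GetTablesWithQuery property names from the Simba driver documentation (the driver path and host are placeholders):

```ini
[Databricks]
# Path to the Simba Apache Spark ODBC driver (install location may vary)
Driver=/opt/simba/spark/lib/64/libsparkodbc_sb64.so
Host=abc-abc123-123a.cloud.databricks.com
Port=443
SSL=1
# Retrieve metadata without running a query (Fast SQLPrepare)
FastSQLPrepare=1
# Required for Visual Query Builder (Get Tables With Query)
GetTablesWithQuery=1
```

On Windows, the same options appear under Advanced Options in the driver's DSN setup dialog rather than in a file.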

Supported for both AWS and Azure.

Type of Support

Read & Write, In-Database

Validated On

Databricks Interactive and SQL Endpoint cluster, Simba Apache Spark Driver 2.06.23.

Alteryx Tools Used to Connect

Standard Workflow Processing


Input Data Tool

In-Database Workflow Processing


Connect In-DB Tool


Data Stream In Tool

If you have issues with reading or writing Unicode® characters, open the Simba Apache Spark ODBC driver configuration. Under Advanced Options, select Use SQL Unicode Types.

Read Support

Install and configure the Apache Spark ODBC driver:

  • Spark Server Type: Select the appropriate server type for the version of Apache Spark that you are running. If you are running Apache Spark 1.1 or later, select Apache SparkThriftServer.
  • Authentication Mechanism: See the installation guide downloaded with the Simba Apache Spark driver to configure this setting based on your setup.

To set up the driver Advanced Options, see the installation guide downloaded with the Simba Apache Spark driver.

Write Support

For both standard and in-database workflows, use the Data Stream In tool to write to Databricks. Write support is provided via the Databricks Bulk Loader. To configure it, go to Manage In-DB Connections - Write.

Configure the Write tab

  1. Select Databricks Bulk Loader (Avro) or Databricks Bulk Loader (CSV). To write a table with field names that total more than 4000 characters, use CSV instead of Avro. The delimiter used for CSV is the start of heading (SOH) character.
  2. Select the Connection String dropdown, and then select New Databricks connection.
  3. Select an existing ODBC data source, or select ODBC Admin to create one.
  4. Enter a username and password. These fields cannot be blank.
  5. Enter the Databricks URL, for example:
    https://abc-abc123-123a.cloud.databricks.com
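
The values collected in the steps above combine into a standard ODBC connection string. A minimal Python sketch of how the DSN, username, and password fit together (the DSN name and credential values are placeholders, not values from this document):

```python
def databricks_conn_str(dsn: str, user: str, password: str) -> str:
    """Assemble an ODBC connection string for a Databricks DSN.

    With a personal access token (supported by the Delta Lake bulk
    loader), the username is the literal string "token" and the
    password is the token itself.
    """
    return f"DSN={dsn};UID={user};PWD={password}"

# Placeholder DSN name and token for illustration:
conn = databricks_conn_str("Databricks", "token", "dapi0123456789abcdef")
# -> "DSN=Databricks;UID=token;PWD=dapi0123456789abcdef"
```

A client such as pyodbc would accept this string via `pyodbc.connect(conn)`; Designer builds the equivalent string from the fields in the dialog.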

Databricks Delta Lake Bulk Connection

Follow the steps below to configure a Databricks Delta Lake bulk connection.

Databricks Delta Lake Bulk Connection is only available in Designer version 2022.1 and higher.

  1. Select Databricks Delta Lake Bulk Loader (Avro) or Databricks Delta Lake Bulk Loader (CSV). To write a table with field names that total more than 4000 characters, use CSV instead of Avro.

  2. Select the Connection String dropdown, and then select New database connection.

  3. Select an existing ODBC data source, or select ODBC Admin to create one.

  4. Enter a username and password. These fields cannot be blank. Alteryx supports personal access tokens. The username is “token”. The password is the personal access token.

  5. Select a Staging Method (supported for both AWS and Azure):

    1. For Amazon S3

      1. Enter the AWS Access Key and Secret Key to authenticate.

      2. Select an Endpoint or leave as Default.

      3. Select Use Signature V4 for Authentication.

      4. Select the level of Server-Side Encryption needed. None is the default.

      5. Select a Bucket Name to use as the staging location.

    2. For Azure ADLS

      For bulk loading to Azure, only ADLS Gen 2 is supported.

      1. Select the ADLS Container.

      2. Enter the Shared Key.

      3. Enter the Storage Account.

      4. Enter an optional Temp Directory. When entering the Temp Directory, don’t repeat the Container name.

        Example

        If the folder structure is Container/MyTempFolder/TempTables, only enter “MyTempFolder/TempTables”.

        If the directory entered here does not already exist, Alteryx will create one.
        Alteryx will create one sub-folder with the table name for each table that is staged.

    3. Select OK to apply.
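
The Temp Directory rules above (don't repeat the container name; one sub-folder per staged table) can be made concrete with a small helper. This is an illustration of the naming convention only; the function is hypothetical, not an Alteryx API:

```python
def staging_path(container: str, temp_dir: str, table: str) -> str:
    """Compose an ADLS staging path following the rules above.

    The Temp Directory must not repeat the container name, and a
    sub-folder named after the staged table is appended.
    (Hypothetical helper illustrating the documented layout.)
    """
    temp_dir = temp_dir.strip("/")
    if temp_dir.split("/", 1)[0] == container:
        raise ValueError("Temp Directory must not repeat the Container name")
    parts = [container] + ([temp_dir] if temp_dir else []) + [table]
    return "/".join(parts)

# Following the example above:
path = staging_path("Container", "MyTempFolder/TempTables", "my_table")
# -> "Container/MyTempFolder/TempTables/my_table"
```

If the Temp Directory is left blank, staging happens directly under the container, one sub-folder per table.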
