Databricks
| | |
| --- | --- |
| Connection Type | ODBC (64-bit) |
| Driver Configuration Requirements | The host must be a Databricks cluster JDBC/ODBC Server hostname. For optimal performance, enable the Fast SQLPrepare option in the driver Advanced Options so that Alteryx can retrieve metadata without running a query. Deselect the Enable Translation for CTAS check box in the DSN; it is selected by default. To use Visual Query Builder, select the Get Tables With Query option in the driver Advanced Options. Supported on both AWS and Azure. |
| Type of Support | Read & Write; In-Database |
| Validated On | Databricks Interactive and SQL Endpoint clusters, Simba Apache Spark Driver 2.06.23 |
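If you need to verify these options outside of Designer, the same settings can be passed as connection-string properties. A minimal sketch using pyodbc, assuming a DSN named "Databricks" already exists and that FastSQLPrepare and GetTablesWithQuery are the driver's underlying property names for the Advanced Options above (confirm them against your driver version):

```python
import pyodbc

# Sketch only: "Databricks" is a placeholder DSN name.
conn = pyodbc.connect(
    "DSN=Databricks;"
    "FastSQLPrepare=1;"       # retrieve metadata without running a query
    "GetTablesWithQuery=1;",  # required for Visual Query Builder
    autocommit=True,
)
print(conn.cursor().execute("SELECT 1").fetchone())
```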
Alteryx Tools Used to Connect

Standard Workflow Processing

In-Database Workflow Processing

- Connect In-DB Tool
- Data Stream In Tool
If you have issues reading or writing Unicode® characters, open the Simba Apache Spark ODBC driver configuration. Under Advanced Options, select Use SQL Unicode Types.
The string length is controlled by the driver. You can change it in the Advanced Options for the ODBC DSN, or through the Advanced Options for the Driver Configuration, found in the driver install folder.
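These two settings can also be supplied as connection-string overrides when testing. A hedged pyodbc sketch; UseSQLUnicodeTypes and DefaultStringColumnLength are the Simba property names as I understand them, so verify them against the installation guide for your driver version:

```python
import pyodbc

# Sketch only: assumes a DSN named "Databricks" already exists.
conn = pyodbc.connect(
    "DSN=Databricks;"
    "UseSQLUnicodeTypes=1;"             # report Unicode SQL types for strings
    "DefaultStringColumnLength=4000;",  # widen the default string length
    autocommit=True,
)
```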
Read Support
Install and configure the Apache Spark ODBC driver:
- Spark Server Type: Select the server type that matches the version of Apache Spark you are running. For Apache Spark 1.1 and later, select SparkThriftServer.
- Authentication Mechanism: See the installation guide downloaded with the Simba Apache Spark driver to configure this setting for your setup.
To set up the driver Advanced Options, see the installation guide downloaded with the Simba Apache Spark driver.
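As a point of reference, the read-side settings above combine into a DSN-less connection string like the following pyodbc sketch. The host, HTTP path, and token are placeholders, and the property values (for example, SparkServerType=3 for SparkThriftServer) should be checked against the installation guide:

```python
import pyodbc

conn = pyodbc.connect(
    "Driver=Simba Spark ODBC Driver;"
    "SparkServerType=3;"  # SparkThriftServer, per the Simba documentation
    "Host=abc-abc123-123a.cloud.databricks.com;"       # placeholder host
    "Port=443;SSL=1;"
    "ThriftTransport=2;"  # HTTP transport
    "HTTPPath=sql/protocolv1/o/0/0000-000000-abc123;"  # placeholder path
    "AuthMech=3;"         # username/password mechanism
    "UID=token;"
    "PWD=dapiXXXXXXXXXXXXXXXX;",  # placeholder personal access token
    autocommit=True,
)
print(conn.cursor().execute("SELECT 1").fetchone())
```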
Write Support
For both standard and in-database workflows, use the Data Stream In tool to write to Databricks. Write support is provided through the Databricks Bulk Loader. To configure it, go to Manage In-DB Connections > Write.
Configure the Write tab
1. Select Databricks Bulk Loader (Avro) or Databricks Bulk Loader (CSV). To write a table with field names that total more than 4000 characters, use CSV instead of Avro. The delimiter used for CSV is the start-of-heading (SOH) character.
2. Select the Connection String dropdown, then select New Databricks connection.
3. Select an existing ODBC data source, or select ODBC Admin to create one.
4. Enter a username and password. These fields can't be blank.
5. Enter the Databricks URL, for example: https://abc-abc123-123a.cloud.databricks.com
Warning
Including a trailing "/" in the URL (for example, https://abc-abc123-123a.cloud.databricks.com/) results in an error.
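Because a trailing slash is easy to introduce when pasting the workspace URL, a small pre-flight check can catch it before the connection is saved. This is a hypothetical helper, not part of Designer:

```python
from urllib.parse import urlparse

def validate_databricks_url(url: str) -> str:
    """Raise if the URL has a scheme problem or any trailing path ("/")."""
    parsed = urlparse(url)
    if parsed.scheme != "https" or not parsed.netloc:
        raise ValueError(f"expected https://<workspace-host>, got {url!r}")
    if parsed.path:  # catches a bare trailing "/" as well as longer paths
        raise ValueError(f"remove the trailing '/' or path from {url!r}")
    return url

validate_databricks_url("https://abc-abc123-123a.cloud.databricks.com")     # OK
# validate_databricks_url("https://abc-abc123-123a.cloud.databricks.com/")  # raises
```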
Databricks Delta Lake Bulk Connection
Follow these steps to configure a Databricks Delta Lake bulk connection.
Important
Databricks Delta Lake Bulk Connection is only available in Designer version 2022.1 and higher.
1. Select Databricks Delta Lake Bulk Loader (Avro) or Databricks Delta Lake Bulk Loader (CSV). To write a table with field names that total more than 4000 characters, use CSV instead of Avro.
2. Select the Connection String dropdown, then select New database connection.
3. Select an existing ODBC data source, or select ODBC Admin to create one.
4. Enter a username and password. These fields can't be blank. Alteryx supports personal access tokens: the username is "token" and the password is your personal access token.
5. Select a Staging Method (supported for both AWS and Azure):
For Amazon S3:
- Enter the AWS Access Key and Secret Key to authenticate.
- Select an Endpoint or leave as Default.
- Select Use Signature V4 for Authentication.
- Select the level of Server-Side Encryption needed. None is the default.
- Select a Bucket Name to use as the staging location.
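Conceptually, the S3 staging step amounts to uploading the staged file to the chosen bucket with the selected encryption and signature settings before Databricks loads it. Alteryx does this internally; the boto3 sketch below only illustrates how the dialog fields map to the upload (bucket, keys, and file name are placeholders):

```python
import boto3
from botocore.config import Config

s3 = boto3.client(
    "s3",
    aws_access_key_id="AKIA...",              # AWS Access Key (placeholder)
    aws_secret_access_key="...",              # Secret Key (placeholder)
    config=Config(signature_version="s3v4"),  # Use Signature V4 for Authentication
)
with open("staged_table.csv", "rb") as f:
    s3.put_object(
        Bucket="my-staging-bucket",     # Bucket Name used as the staging location
        Key="TempTables/staged_table.csv",
        Body=f,
        ServerSideEncryption="AES256",  # Server-Side Encryption level; omit for None
    )
```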
For Azure ADLS:
Important
For bulk loading to Azure, only ADLS Gen 2 is supported.
- Select the ADLS Container.
- Enter the Shared Key.
- Enter the Storage Account.
- Enter an optional Temp Directory. When entering the Temp Directory, don't repeat the Container name.
Example
If the folder structure is Container/MyTempFolder/TempTables, enter only "MyTempFolder/TempTables". If the directory entered here doesn't already exist, Alteryx creates one. Alteryx creates one sub-folder with the table name for each table that is staged (see the sketch after these steps).
Select OK to apply.
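To see how the Storage Account, Shared Key, Container, and Temp Directory fit together, here is a hedged sketch using the azure-storage-file-datalake package (all names are placeholders; Alteryx performs the equivalent staging internally). Note that the temp path does not repeat the container name, matching the example above:

```python
from azure.storage.filedatalake import DataLakeServiceClient

account = "mystorageaccount"          # Storage Account (placeholder)
shared_key = "<shared-key>"           # Shared Key (placeholder)
container = "Container"               # ADLS Container (placeholder)
temp_dir = "MyTempFolder/TempTables"  # Temp Directory, without the container name

service = DataLakeServiceClient(
    account_url=f"https://{account}.dfs.core.windows.net",
    credential=shared_key,
)
fs = service.get_file_system_client(container)
# One sub-folder per staged table, created if it does not already exist.
fs.create_directory(f"{temp_dir}/my_staged_table")
```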