Databricks
Type of Support: Read & Write; In-Database
Validated On: Databricks version 2.18; Simba Apache Spark Driver 1.00.09
Connection Type: ODBC (64-bit)
Driver Details: The ODBC driver can be downloaded from the Databricks website. In-Database processing requires 64-bit database drivers.
Driver Configuration Requirements: The host must be a Databricks cluster JDBC/ODBC Server hostname. For optimal performance, you must enable the Fast SQLPrepare option within the driver Advanced Options to allow Alteryx to retrieve metadata without running a query. To use Visual Query Builder, select the Get Tables With Query option within the driver Advanced Options.
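For reference, the sketch below shows how those Advanced Options map onto a DSN-less connection string when connecting outside of Alteryx with Python and pyodbc. The host, HTTP path, and token are placeholders, and the key names (FastSQLPrepare, GetTablesWithQuery, and the rest) are based on the Simba Apache Spark driver's documented connection properties; verify them against the installation guide for your driver version.

```python
import pyodbc

# Sketch only: a DSN-less connection string for the Simba Apache Spark ODBC driver.
# Host, HTTPPath, and the token are placeholders for your own cluster details.
conn_str = (
    "Driver=Simba Spark ODBC Driver;"
    "Host=abc-abc123-123a.cloud.databricks.com;"         # Databricks cluster JDBC/ODBC Server hostname
    "Port=443;"
    "SparkServerType=3;"                                  # SparkThriftServer
    "ThriftTransport=2;"                                  # HTTP transport
    "SSL=1;"
    "HTTPPath=sql/protocolv1/o/0/0000-000000-abcd123;"    # placeholder cluster HTTP path
    "AuthMech=3;"                                         # user name and password
    "UID=token;"
    "PWD=<personal-access-token>;"
    "FastSQLPrepare=1;"                                   # retrieve metadata without running a query
    "GetTablesWithQuery=1;"                               # required for Visual Query Builder
)
conn = pyodbc.connect(conn_str, autocommit=True)
```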
Alteryx tools used to connect
- Input Data Tool (for standard workflow processing)
- Connect In-DB Tool and Data Stream In Tool (for In-Database workflow processing)
Additional Details
If you have issues with reading or writing Unicode® characters, open the Simba Apache Spark ODBC driver configuration. Under Advanced Options, select the “Use SQL Unicode Types” option.
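If you script the connection instead of using the driver dialog, the same checkbox is typically exposed as a connection-string property. The key name below is an assumption based on the Simba Spark driver's documented properties; confirm it in the driver's installation guide.

```python
# Assumed connection-string key for the "Use SQL Unicode Types" checkbox;
# add this pair to the DSN or connection string from the earlier sketch if
# Unicode characters are mangled on read or write.
conn_str = conn_str + "UseUnicodeSqlCharacterTypes=1;"   # conn_str from the earlier sketch
```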
Read Support
Install and configure the Apache Spark ODBC driver:
- Spark Server Type: Select the server type that matches the version of Apache Spark you are running. For Apache Spark 1.1 and later, select SparkThriftServer.
- Authentication Mechanism: See the installation guide downloaded with the Simba Apache Spark driver to configure this setting based on your setup.
To set up the driver Advanced Options, see the installation guide downloaded with the Simba Apache Spark driver.
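Before pointing the Input Data Tool at the new data source, a quick test query outside of Alteryx can confirm that the driver and DSN are working. The DSN name and credentials below are placeholders for whatever you configured in ODBC Administrator.

```python
import pyodbc

# Sketch: verify the configured DSN returns rows before using it in a workflow.
# "Databricks" is a placeholder DSN name; use the one created in ODBC Admin.
conn = pyodbc.connect("DSN=Databricks;UID=token;PWD=<personal-access-token>", autocommit=True)
cursor = conn.cursor()
cursor.execute("SHOW TABLES")      # any lightweight query confirms connectivity
for row in cursor.fetchmany(10):
    print(row)
conn.close()
```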
Write Support
For both standard and in-database workflows, use the Data Stream In Tool to write to Databricks. Write support is provided through the Databricks Bulk Loader.
In the Manage In-DB Connections > Write tab:
- Select Databricks Bulk Loader (Avro) or Databricks Bulk Loader (CSV).
- Select the Connection String drop-down, and then select New Databricks connection.
- Select an existing ODBC data source, or click ODBC Admin to create one.
- Specify a user name and password. These fields cannot be blank.
- Specify the Databricks URL, for example: https://abc-abc123-123a.cloud.databricks.com
To write a table with field names that total more than 4000 characters, use CSV instead of Avro.
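The 4000-character limit applies to the combined length of the destination table's field names. The hypothetical helper below illustrates the check; the function name and sample field names are invented for the example.

```python
# Hypothetical helper illustrating the 4000-character rule above: if the field
# names of the table being written total more than 4000 characters, choose the
# CSV bulk loader instead of Avro.
def pick_bulk_loader(field_names):
    total_length = sum(len(name) for name in field_names)
    if total_length > 4000:
        return "Databricks Bulk Loader (CSV)"
    return "Databricks Bulk Loader (Avro)"

print(pick_bulk_loader(["customer_id", "order_date", "total_amount"]))  # -> Avro
```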