Databricks Volumes
Connection Type | ODBC (64-bit)
Driver Configuration Requirements | The host must be a Databricks Unity Catalog cluster JDBC/ODBC Server hostname.
Type of Support | In-Database Write
Validated On | Databricks Cluster and SQL Warehouse; Simba Apache Spark Driver 2.6.23.1039
Driver Details
In-Database processing requires 64-bit database drivers.
Alteryx Tools Used to Connect
In-Database Workflow Processing
Caution
Databricks Volumes is only supported using DCM.
Databricks Volumes is only supported using DSN-less connections.
Databricks Volumes is only supported for Unity Catalog.
Writing to Databricks Unity Catalog is only supported using the In-DB tools.
Alteryx supports MergeInDB for Databricks Unity Catalog. For more information, go to the Write Data In-DB tool.
Configure In-DB Connection
Open the Manage In-DB Connections window.
Select Databricks Unity Catalog in the Data Source dropdown.
Select New to create a new connection.
Enter a Connection Name.
On the Read tab, select Setup Connection to open the DCM connection manager for Databricks Unity Catalog. The DCM Connection Manager is pre-filtered to show only Apache Spark ODBC DSN-less with Simba Databricks Unity Catalog connections.
Select an existing connection or select +New to create a new connection. Go to Databricks Unity Catalog for instructions on configuring a new connection with DCM.
On the Write tab, select Databricks UC Volumes Bulk Loader (Avro) in the dropdown.
Select Setup Connection to open the DCM Connection Manager for the Databricks Connection. The DCM Connection Manager is pre-filtered to show only Apache Spark ODBC Bulk DSN-less with Databricks UC Volumes connections.
Select an existing connection or select +New to create a new connection. See below for configuring a new connection using DCM.
Select Apply and OK to save the connection and close the window.
If you opened the Manage In-DB Connections window through the Connect In-DB tool, the Choose Table or Specify Query window loads so you can select a table.
Configure Apache Spark ODBC Bulk DSN-less with Databricks UC Volumes in DCM
This connection is used for writing data to Databricks Unity Catalog using Volumes staging.
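Conceptually, Volumes staging means that data is first written as Avro files to a folder in the Unity Catalog Volume and then loaded from there into the target table. The sketch below only illustrates that pattern; it is not the exact logic the tool runs, and the catalog (main), schema (sales), volume (staging), table (orders), and connection string are hypothetical placeholders.

```python
# Illustrative sketch of the Volumes staging pattern, not the tool's implementation.
# Placeholder names throughout: catalog "main", schema "sales", volume "staging", table "orders".
import pyodbc

# DSN-less ODBC connection string for Databricks; see the sketch at the end of this
# article for the individual attributes.
conn_str = "<DSN-less Databricks ODBC connection string>"

conn = pyodbc.connect(conn_str, autocommit=True)
cursor = conn.cursor()

# Step 1 (staging): Avro files are written to a folder inside the Volume,
# for example /Volumes/main/sales/staging/alteryx_bulk_load/.

# Step 2 (load): the staged Avro files are loaded into the target table,
# for example with the Databricks COPY INTO command.
cursor.execute("""
    COPY INTO main.sales.orders
    FROM '/Volumes/main/sales/staging/alteryx_bulk_load/'
    FILEFORMAT = AVRO
""")
```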
Open Data Connection Manager and navigate to Apache Spark ODBC Bulk DSN-less with Databricks UC Volumes.
- From an Input tool or the In-DB Connection Manager, DCM is pre-filtered.
- From the File Menu, go to File > Manage Connections > +New > Apache Spark > Apache Spark ODBC Bulk DSN-less with Databricks UC Volumes.
Enter a Data Source Name.
Enter the Databricks Unity Catalog Host name.
The Port is set to 443 by default. Change as needed.
Enter the HTTP Path. The HTTP Path is the URL path of the Databricks compute resource (SQL warehouse or cluster).
Enter the Catalog. This sets the catalog that is used for writing data and creating tables.
Enter the Schema. This sets the schema that is used for writing data and creating tables.
Enter the full path for the Databricks Volume in the format /Volumes/<catalog>/<schema>/<volume>/<path/to/folder>.
Select Save to save the Data Source.
Select +Connect Credential to add a Credential.
Select an Authentication Method.
To use a Personal Access Token, select Username and password as the authentication method and enter token as the username (see the connection string sketch after these steps).
To use Azure AD, go to Databricks Azure OAuth Authentication.
Select an Existing Credential or select Create New Credential to create a new credential and enter the Personal Access Token or the information for Azure AD.
Select Link to link the credential to the Data Source.
Select Connect.
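For reference, the DCM fields above map roughly onto the attributes of a DSN-less ODBC connection string for the Simba Apache Spark (Databricks) driver. The sketch below is an approximation, not the string DCM actually generates: the driver name, hostname, HTTP path, catalog, schema, Volume path, and token are placeholders, and the attribute names should be verified against your installed driver version.

```python
# Sketch of how the DCM fields map onto a DSN-less ODBC connection string.
# All values are placeholders; attribute names follow the Simba Apache Spark / Databricks
# ODBC driver conventions, but verify them against the driver version you installed.
import pyodbc

conn_str = (
    "Driver=Simba Spark ODBC Driver;"                    # 64-bit driver name
    "Host=adb-1234567890123456.7.azuredatabricks.net;"   # Databricks Unity Catalog hostname
    "Port=443;"                                          # default port
    "HTTPPath=/sql/1.0/warehouses/abcdef1234567890;"     # HTTP Path of the compute resource
    "SSL=1;ThriftTransport=2;"                           # HTTPS transport
    "AuthMech=3;"                                        # username/password authentication
    "UID=token;"                                         # literal username 'token' for a PAT
    "PWD=<personal-access-token>;"
    "Catalog=my_catalog;"                                # catalog used for writing data
    "Schema=my_schema;"                                  # schema used for writing data
)

# The Volumes staging folder configured in DCM would take a form such as:
# /Volumes/my_catalog/my_schema/my_volume/alteryx/staging

with pyodbc.connect(conn_str, autocommit=True) as conn:
    cursor = conn.cursor()
    # Quick sanity check that the connection targets the expected catalog and schema.
    cursor.execute("SELECT current_catalog(), current_schema()")
    print(cursor.fetchone())
```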