Configure for AWS Databricks
This section provides high-level information on how to configure the Designer Cloud Powered by Trifacta platform to integrate with Databricks hosted on AWS.
AWS Databricks is a unified data analytics platform that has been optimized for use on the AWS infrastructure.
For more information, see https://databricks.com/aws.
For documentation on AWS Databricks, see https://databricks.com/documentation.
Additional Databricks features supported by the platform:
Credential passthrough (AWS Databricks only): https://docs.databricks.com/security/credential-passthrough/iam-passthrough.html
Table access control: https://docs.databricks.com/security/access-control/table-acls/object-privileges.html
Prerequisites
The Designer Cloud Powered by Trifacta platform must be installed in a customer-managed AWS environment.
The base storage layer must be set to S3. For more information, see Set Base Storage Layer.
AWS Secrets Manager is required for AWS Databricks use. For more information, see Configure for AWS Secrets Manager.
Limitations
Importing datasets created from nested folders is not supported when running jobs on AWS Databricks.
If the job is submitted using the User cluster mode and no cluster is available, the launch times for a new cluster are as follows:
Without instance pools: up to 5 minutes to launch
With instance pools: up to 30 seconds to launch
If the job is canceled during cluster startup:
The cluster startup continues. After the cluster is running, the job is terminated, and the cluster remains.
As a result, there is a delay in reporting the job cancellation in the Job Details page. The job is reported as canceled, not failed.
AWS Databricks integration works with Spark 2.4.x, Spark 3.0.1, Spark 3.2.0, and Spark 3.2.1.
Note
The version of Spark for AWS Databricks must be applied to the platform configuration through the databricks.sparkVersion property. Details are provided later.
Supported versions of Databricks
AWS Databricks 10.x
AWS Databricks 9.1 LTS (Recommended)
AWS Databricks 7.3 LTS
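For example, the Spark version might be pinned in platform configuration as follows. This is a sketch only: the runtime key shown (in the format Databricks uses for a 9.1 LTS runtime) is an assumption and should be verified against the runtimes supported in your Databricks workspace.
"databricks.sparkVersion": "9.1.x-scala2.12",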
Job Limits
By default, the number of jobs permitted on an AWS Databricks workspace is set to 1000.
The number of jobs that can be created per workspace in an hour is limited to 1000.
The number of jobs a workspace can create in an hour using the run-submit API is limited to 5000. This limit also affects jobs created by the REST API and notebook tasks. For more information, see "Configure Databricks job management" below.
The number of concurrently active job runs in a workspace is limited to 150.
These limits apply to any jobs that use workspace data on the cluster.
Managing Limits
To enable retrieval and auditing of job information after a job has been completed, the Designer Cloud Powered by Trifacta platform does not delete jobs from the cluster. As a result, jobs can accumulate over time and exceed the number of jobs permitted on the cluster. If you reach these limits, you may receive a "Quota for number of jobs has been reached" error. For more information, see https://docs.databricks.com/user-guide/jobs.html.
Optionally, you can allow the Designer Cloud Powered by Trifacta platform to manage your jobs to avoid these limitations. For more information, see "Configure Databricks job management" below.
Enable
To enable AWS Databricks, perform the following configuration changes:
Steps:
You apply this change through the Workspace Settings Page. For more information, see Platform Configuration Methods.
Locate the following parameter, which enables Trifacta Photon for smaller job execution, and set it to Enabled:
Photon execution
You do not need to save to enable the above configuration change.
You can apply this change through the Admin Settings Page (recommended) or trifacta-conf.json. For more information, see Platform Configuration Methods.
Locate the following parameters and set them to the values listed below, which enable the AWS Databricks (small to extra-large jobs) running environment:
"webapp.runInDatabricks": true,
"webapp.runWithSparkSubmit": false,
"webapp.runInDataflow": false,
Do not save your changes until you have completed the following configuration section.
Configure
Configure cluster mode
When a user submits a job, Designer Cloud Powered by Trifacta Enterprise Edition provides all of the cluster specifications through the Databricks API, and a cluster is created per user or per job; once the job is complete, the cluster is terminated. Cluster creation may take less than 30 seconds if instance pools are used; without instance pools, it may take 10-15 minutes.
For more information on job clusters, see https://docs.databricks.com/clusters/configure.html.
Job clusters automatically terminate after the job is completed. A new cluster is automatically created the next time the user requests access to AWS Databricks.
Cluster Mode | Description |
---|---|
USER | When a user submits a job, Designer Cloud Powered by Trifacta Enterprise Edition creates a new cluster and persists the cluster ID in Designer Cloud Powered by Trifacta Enterprise Edition metadata for the user if the cluster does not exist or is invalid. If the user already has a valid existing interactive cluster, that cluster is reused when the job is submitted. Reset to JOB mode to run jobs in AWS Databricks. |
JOB | When a user submits a job, Designer Cloud Powered by Trifacta Enterprise Edition provides all of the cluster specifications in the Databricks API. Databricks creates a cluster only for this job and terminates it as soon as the job completes. This is the default cluster mode for running jobs in AWS Databricks. |
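For example, to run with the default JOB cluster mode, the platform configuration (a minimal sketch, using the databricks.clusterMode property described under Configure Platform below) would include:
"databricks.clusterMode": "JOB",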
Configure use of cluster policies
Optionally, you can configure the Designer Cloud Powered by Trifacta platform to use the Databricks cluster policies that have been specified by your Databricks administrator for creating and using clusters. These policies are effectively templates for the creation and use of Databricks clusters and govern aspects of clusters such as the type and count of nodes, the resources that can be accessed via the cluster, and other settings. For more information on Databricks cluster policies, see https://docs.databricks.com/administration-guide/clusters/policies.html.
Prerequisites
Note
Your Databricks administrator must create and deploy the Databricks cluster policies from which Alteryx users can select for their personal use.
Notes:
When this feature is enabled, each user may select the appropriate Databricks cluster policy to use for jobs. If a user does not select one, that user's jobs are launched without a cluster policy, using the Databricks properties set in platform configuration.
Note
Except for Spark version and cluster policy identifier in job-level overrides, other Databricks cluster configuration in the Designer Cloud Powered by Trifacta platform is ignored when this feature is in use. Other job-level overrides are also ignored.
If a cluster policy is modified and existing clusters are using it, then subsequent job executions using that policy attempt to use the same cluster. This can cause issues in performance and even job failures.
Tip
Avoid editing cluster policies that are in use, as these changed policies may be applied to clusters generated under the old policies. Instead, you should create a new policy and assign it for use.
If the cluster policy references a Databricks instance pool that does not exist, the job fails.
Steps:
You apply this change through the Workspace Settings Page. For more information, see Platform Configuration Methods.
Locate the following parameter and set it to
Enabled
:Databricks Cluster Policies
Save your changes and restart the platform.
Note
Each user must select a cluster policy to use. For more information, see Databricks Settings Page.
Job overrides:
A user's cluster policy can be overridden when a job is executed via API. Set the clusterPolicyId attribute in the request.
Note
If a Databricks cluster policy is used, all job-level overrides except clusterPolicyId are ignored.
For more information, see API Task - Run Job.
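For illustration, a hedged sketch of a run-job request body that overrides the cluster policy. The clusterPolicyId attribute and the databricksSpark execution value appear in this documentation; the wrangledDataset identifier and the overrides wrapper shown here are assumptions to be confirmed against API Task - Run Job:
{
  "wrangledDataset": { "id": 12345 },
  "overrides": {
    "execution": "databricksSpark",
    "clusterPolicyId": "D1234567890AB"
  }
}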
Policy template for AWS - without instance pools:
The following example cluster policy can provide a basis for creating your own AWS cluster policies when instance pools are not in use:
{ "autoscale.max_workers": { "type": "fixed", "value": 3, "hidden": true }, "autoscale.min_workers": { "type": "fixed", "value": 1, "hidden": true }, "autotermination_minutes": { "type": "fixed", "value": 10, "hidden": true }, "aws_attributes.availability": { "type": "fixed", "value": "SPOT_WITH_FALLBACK", "hidden": false }, "aws_attributes.ebs_volume_count": { "type": "fixed", "value": 0, "hidden": false }, "aws_attributes.ebs_volume_size": { "type": "fixed", "value": 0, "hidden": false }, "aws_attributes.first_on_demand": { "type": "fixed", "value": 1, "hidden": false }, "aws_attributes.spot_bid_price_percent": { "type": "fixed", "value": 100, "hidden": false }, "aws_attributes.instance_profile_arn": { "type": "fixed", "value": "arn:aws:iam::9999999999999:instance-profile/SOME_Role_ARN", "hidden": false }, "driver_node_type_id": { "type": "fixed", "value": "i3.xlarge", "hidden": true }, "enable_local_disk_encryption": { "type": "fixed", "value": false }, "node_type_id": { "type": "fixed", "value": "i3.xlarge", "hidden": true } }
Policy template for AWS - with instance pools:
The following example cluster policy can provide a basis for creating your own AWS cluster policies when instance pools are in use:
{ "autoscale.max_workers": { "type": "fixed", "value": 3, "hidden": true }, "autoscale.min_workers": { "type": "fixed", "value": 1, "hidden": true }, "aws_attributes.instance_profile_arn": { "type": "fixed", "value": "arn:aws:iam::9999999999:instance-profile/SOME_POLICY", "hidden": false }, "enable_local_disk_encryption": { "type": "fixed", "value": false }, "instance_pool_id": { "type": "fixed", "value": "SOME_POOL", "hidden": true }, "driver_instance_pool_id": { "type": "fixed", "value": "SOME_POOL", "hidden": true }, "autotermination_minutes": { "type": "fixed", "value": 10, "hidden": true }, }
Configure Instance Profiles in AWS Databricks
Designer Cloud Powered by Trifacta platform EC2 instances can be configured with permissions to access AWS resources like S3 by attaching an IAM instance profile. Similarly, instance profiles can be attached to EC2 instances for use with AWS Databricks clusters.
Note
You must register the instance profiles in the Databricks workspace, or your Databricks clusters reject the instance profile ARNs and display an error. For more information, see https://docs.databricks.com/administration-guide/cloud-configurations/aws/instance-profiles.html#step-5-add-the-instance-profile-to-databricks.
To configure the instance profile for AWS Databricks, you must provide an IAM instance profile ARN in the databricks.awsAttributes.instanceProfileArn parameter.
Note
For AWS Databricks, instance profiles are supported when aws.credentialProvider is set to instance or temporary.
aws.credentialProvider | AWS Databricks permissions |
---|---|
instance | The Designer Cloud Powered by Trifacta platform or Databricks jobs get all permissions directly from the instance profile. |
temporary | The Designer Cloud Powered by Trifacta platform or Databricks jobs use temporary credentials that are issued based on system or user IAM roles. Note The instance profile must have policies that allow the Designer Cloud Powered by Trifacta platform or Databricks to assume those roles. |
default | n/a |
Note
If aws.credentialProvider is set to temporary or instance while using AWS Databricks:
databricks.awsAttributes.instanceProfileArn must be set to a valid value for Databricks jobs to run successfully.
The aws.ec2InstanceRoleForAssumeRole flag is ignored for Databricks jobs.
For more information, see Configure for AWS Authentication.
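Taken together, a minimal sketch of the relevant settings in trifacta-conf.json; the ARN is a hypothetical placeholder for an instance profile that has been registered in your Databricks workspace:
"aws.credentialProvider": "temporary",
"databricks.awsAttributes.instanceProfileArn": "arn:aws:iam::123456789012:instance-profile/my-databricks-profile",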
Configure instance pooling
Instance pooling reduces cluster node spin-up time by maintaining a set of idle and ready instances. The Designer Cloud Powered by Trifacta platform can be configured to leverage instance pooling on the AWS Databricks cluster for both worker and driver nodes.
Note
When instance pooling is enabled, the following parameters are not used:
databricks.driverNodeType
databricks.workerNodeType
For more information, see https://docs.databricks.com/clusters/instance-pools/configure.html.
Instance pooling for worker nodes
Prerequisites:
All cluster nodes used by the Designer Cloud Powered by Trifacta platform are taken from the pool. If the pool has an insufficient number of nodes, cluster creation fails.
Each user must have access to the pool and must have at least the ATTACH_TO permission.
Each user must have a personal access token from the same AWS Databricks workspace. See "Configure personal access token" below.
To enable:
Acquire your pool identifier or pool name from AWS Databricks.
Note
You can use either the Databricks pool identifier or pool name. If both poolId and poolName are specified, poolId is used first. If that fails to find a matching identifier, then the poolName value is checked.
Tip
If you specify a poolName value only, then you can run your Databricks jobs against the available clusters across multiple Alteryx workspaces. This mechanism allows for better resource allocation and broader execution options.
You can apply this change through the Admin Settings Page (recommended) or trifacta-conf.json. For more information, see Platform Configuration Methods.
Set either of the following parameters:
Set the following parameter to the AWS Databricks pool identifier:
"databricks.poolId": "<my_pool_id>",
Or, you can set the following parameter to the AWS Databricks pool name:
"databricks.poolName": "<my_pool_name>",
Save your changes and restart the platform.
Instance pooling for driver nodes
The Designer Cloud Powered by Trifacta platform can be configured to use Databricks instance pooling for driver pools.
To enable:
Acquire your driver pool identifier or driver pool name from Databricks.
Note
You can use either the Databricks driver pool identifier or driver pool name. If both driverPoolId and driverPoolName are specified, driverPoolId is used first. If that fails to find a matching identifier, then the driverPoolName value is checked.
Tip
If you specify a driverPoolName value only, then you can run your Databricks jobs against the available clusters across multiple Alteryx workspaces. This mechanism allows for better resource allocation and broader execution options.
You can apply this change through the Admin Settings Page (recommended) or trifacta-conf.json. For more information, see Platform Configuration Methods.
Set either of the following parameters:
Set the following parameter to the Databricks driver pool identifier:
"databricks.driverPoolId": "<my_pool_id>",
Or, you can set the following parameter to the Databricks driver pool name:
"databricks.driverPoolName": "<my_pool_name>",
Save your changes and restart the platform.
Configure Platform
Review and modify the following configuration settings, as required:
Note
Restart the platform after you modify the configuration settings for the changes to take effect.
Following is the list of parameters that must be set to integrate AWS Databricks with the Designer Cloud Powered by Trifacta platform:
Required Parameters
Parameter | Description | Value |
---|---|---|
databricks.serviceUrl | URL to the AWS Databricks Service where Spark jobs will be run. | - |
metadata.cloud | Must be set to | Default: |
Following is the list of parameters that can be reviewed or modified based on your requirements:
Optional Parameters
Parameter | Description | Value |
---|---|---|
databricks.awsAttributes.firstOnDemandInstances | Number of initial cluster nodes to be placed on on-demand instances. The remainder are placed on instances of the configured availability type. | Default: 1 |
databricks.awsAttributes.availability | Availability type used for all subsequent nodes past the firstOnDemandInstances. | Default: SPOT_WITH_FALLBACK |
databricks.awsAttributes.availabilityZone | Identifier for the availability zone/datacenter in which the cluster resides. The provided availability zone must be in the same region as the Databricks deployment. | |
databricks.awsAttributes.spotBidPricePercent | The max price for AWS spot instances, as a percentage of the corresponding instance type's on-demand price. When spot instances are requested for this cluster, only spot instances whose max price percentage matches this field will be considered. | Default: 100 |
databricks.awsAttributes.ebsVolume | The type of EBS volumes that will be launched with this cluster. | Default: None |
databricks.awsAttributes.instanceProfileArn | EC2 instance profile ARN for the cluster nodes. This is only used when AWS credential provider is set to temporary/instance. The instance profile must have previously been added to the Databricks environment by an account administrator. | For more information, see Configure for AWS Authentication. |
databricks.clusterMode | Determines the cluster mode for running a Databricks job. | Default: JOB |
feature.parameterization.matchLimitOnSampling.databricksSpark | Maximum number of parameterized source files that are permitted for matching in a single dataset with parameters. | Default: 0 |
databricks.workerNodeType | Type of node to use for the AWS Databricks Workers/Executors. There are 1 or more Worker nodes per cluster. | Default: |
databricks.sparkVersion | AWS Databricks runtime version, which also references the appropriate version of Spark. | Set this property according to your version of AWS Databricks. Do not use other values. |
databricks.minWorkers | Initial number of Worker nodes in the cluster, and also the minimum number of Worker nodes that the cluster can scale down to during auto-scale-down. | Minimum value: Increasing this value can increase compute costs. |
databricks.maxWorkers | Maximum number of Worker nodes the cluster can create during auto scaling. | Minimum value: Not less than Increasing this value can increase compute costs. |
databricks.poolId | If you have enabled instance pooling in AWS Databricks, you can specify the pool identifier here. | Note If both poolId and poolName are specified, poolId is used first. If that fails to find a matching identifier, then the poolName value is checked. |
databricks.poolName | If you have enabled instance pooling in AWS Databricks, you can specify the pool name here. | See previous. Tip If you specify a poolName value only, then you can use the instance pools with the same poolName available across multiple Databricks workspaces when you create a new cluster. |
databricks.driverNodeType | Type of node to use for the AWS Databricks Driver. There is only one Driver node per cluster. | Default: For more information, see the sizing guide for Databricks. Note This property is unused when instance pooling is enabled. For more information, see Configure instance pooling below. |
databricks.driverPoolId | If you have enabledinstance poolingin AWS Databricks, you can specify the driver node pool identifier here. For more information, see Configure instance pooling below. | Note If both driverPoolId and driverPoolName are specified, driverPoolId is used first. If that fails to find a matching identifier, then the driverPoolName value is checked. |
databricks.driverPoolName | If you have enabled instance pooling in AWS Databricks, you can specify the driver node pool name here. For more information, see Configure instance pooling below. | See previous. Tip If you specify a driverPoolName value only, then you can use the instance pools with the same driverPoolName available across multiple Databricks workspaces when you create a new cluster. |
databricks.logsDestination | DBFS location that cluster logs will be sent to every 5 minutes | Leave this value as |
databricks.enableAutotermination | Set to true to enable auto-termination of a user cluster after N minutes of idle time, where N is the value of the autoterminationMinutes property. | Unless otherwise required, leave this value as |
databricks.clusterStatePollerDelayInSeconds | Number of seconds to wait between polls for AWS Databricks cluster status when a cluster is starting up | |
databricks.clusterStartupWaitTimeInMinutes | Maximum time in minutes to wait for a Cluster to get to Running state before aborting and failing an AWS Databricks job. | Default: 60 |
databricks.clusterLogSyncWaitTimeInMinutes | Maximum time in minutes to wait for a Cluster to complete syncing its logs to DBFS before giving up on pulling the cluster logs to the Trifacta node. | Set this to |
databricks.clusterLogSyncPollerDelayInSeconds | Number of seconds to wait between polls for a Databricks cluster to sync its logs to DBFS after job completion. | Default: 20 |
databricks.autoterminationMinutes | Idle time in minutes before a user cluster will auto-terminate. | Do not set this value to less than the cluster startup wait time value. |
databricks.maxAPICallRetries | Maximum number of retries to perform in case of 429 error code response | Default: 5. For more information, see Configure Maximum Retries for REST API section below. |
databricks.enableLocalDiskEncryption | Enables encryption of data like shuffle data that is temporarily stored on cluster's local disk. | - |
databricks.patCacheTTLInMinutes | Lifespan in minutes for the Databricks personal access token in-memory cache | Default: 10 |
spark.useVendorSparkLibraries | When | Note This setting is ignored. The vendor Spark libraries are always used for AWS Databricks. |
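As a reference point, a sketch of how a minimal set of these properties might look in trifacta-conf.json. All values are illustrative examples (consistent with the policy templates and defaults above), not recommendations:
"databricks.serviceUrl": "https://<deployment-name>.cloud.databricks.com",
"databricks.sparkVersion": "9.1.x-scala2.12",
"databricks.clusterMode": "JOB",
"databricks.minWorkers": 1,
"databricks.maxWorkers": 3,
"databricks.enableAutotermination": true,
"databricks.autoterminationMinutes": 10,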
Configure Databricks Job Management
AWS Databricks enforces a hard limit of 1000 created jobs per workspace, and by default cluster jobs are not deleted. To support more than 1000 jobs per workspace, you can enable job management for AWS Databricks.
Note
This feature covers the deletion of the job definition on the cluster, which counts toward the enforced limits. The Designer Cloud Powered by Trifacta platform never deletes the outputs of a job or the job definition stored in the platform. When cluster job definitions are removed, the jobs remain listed in the Job History page, and job metadata is still available. There is no record of the job on the AWS Databricks cluster. Jobs continue to run, but users on the cluster may not be aware of them.
Tip
Regardless of your job management option, when you hit the limit for the number of job definitions that can be created in the Databricks workspace, the platform falls back to using the runs/submit API, provided the Databricks Job Runs Submit Fallback setting has been enabled.
Steps:
You apply this change through the Workspace Settings Page. For more information, see Platform Configuration Methods.
Locate the following property and set it to one of the values listed below:
Databricks Job Management
Property Value | Description |
---|---|
Never Delete | (default) Job definitions are never deleted from the AWS Databricks cluster. |
Always Delete | The AWS Databricks job definition is deleted during the clean-up phase, which occurs after a job completes. |
Delete Successful Only | When a job completes successfully, the AWS Databricks job definition is deleted during the clean-up phase. Failed or canceled jobs are not deleted, which allows you to debug as needed. |
Skip Job Creation | For jobs that are to be executed only one time, the Designer Cloud Powered by Trifacta platform can be configured to use a different mechanism for submitting the job. When this option is enabled, the platform submits jobs using the run-submit API instead of the run-now API. The run-submit API does not create an AWS Databricks job definition, so the submitted job does not count toward the enforced job limit. |
Default | Inherits the default system-wide setting. |
To allow the platform to fall back to the runs/submit API when the job limit for the Databricks workspace has been reached, set the following property to Enabled:
Databricks Job Runs Submit Fallback
Save your changes and restart the platform.
Configure for Databricks Secrets Management
Optionally, you can leverage Databricks Secrets Management to store sensitive Databricks configuration properties. When this feature is enabled and a set of properties are specified, those properties and their values are stored in masked form. For more information on Databricks Secrets Management, see https://docs.databricks.com/security/secrets/index.html.
Steps:
You can apply this change through the Admin Settings Page (recommended) or trifacta-conf.json. For more information, see Platform Configuration Methods.
Locate the following properties and set them accordingly:
Setting | Description |
---|---|
databricks.secretNamespace | If multiple instances of Designer Cloud Powered by Trifacta Enterprise Edition are using the same Databricks cluster, you can specify the Databricks namespace to which these properties apply. |
databricks.secrets | An array of strings representing the properties that you wish to store in Databricks Secrets Management. You can add or remove properties from this array as needed. |
For example, the default value of databricks.secrets stores a recommended set of Spark and Databricks properties:
["spark.hadoop.dfs.adls.oauth2.client.id", "spark.hadoop.dfs.adls.oauth2.credential", "dfs.adls.oauth2.client.id", "dfs.adls.oauth2.credential", "fs.azure.account.oauth2.client.id", "fs.azure.account.oauth2.client.secret"]
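A minimal sketch of these two properties in trifacta-conf.json; the namespace value is a hypothetical example, and the array entries are drawn from the default set above:
"databricks.secretNamespace": "trifacta",
"databricks.secrets": [
  "fs.azure.account.oauth2.client.id",
  "fs.azure.account.oauth2.client.secret"
],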
Save your changes and restart the platform.
Configure for Secrets Manager
The AWS Secrets Manager is a secure vault for storing access credentials to AWS resources.
Note
AWS Secrets Manager is required when using AWS Databricks.
For more information, see Configure for AWS Secrets Manager.
Configure for Users
Configure AWS Databricks workspace overrides
A single AWS Databricks account can have access to multiple Databricks workspaces. You can create more than one workspace by using the Account API if your account is on the E2 version of the platform or on a custom plan that allows multiple workspaces per account.
For more information, see https://docs.databricks.com/administration-guide/account-api/new-workspace.html
Each workspace has a unique deployment name associated with it that defines the workspace URL. For example: https://<deployment-name>.cloud.databricks.com
.
Note
The existing property databricks.serviceUrl is used to configure the URL to the Databricks Service where Spark jobs are run.
The databricks.serviceUrl property defines the default Databricks workspace for all users in the Designer Cloud Powered by Trifacta Enterprise Edition workspace.
Individual users can override this setting in User Preferences, in the Databricks Personal Access Token page.
For more information, see Databricks Settings Page.
For more information, see Configure Platform section above.
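For example, the default workspace for all users might be set as follows, where <deployment-name> is the deployment name of the target workspace:
"databricks.serviceUrl": "https://<deployment-name>.cloud.databricks.com",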
Configure Databricks job throttling
By default, Databricks workspaces apply limits on the number of jobs that can be submitted before the cluster begins to fail jobs. These limits are the following:
Maximum number of concurrent jobs per cluster
Maximum number of concurrent jobs per workspace
Maximum number of concurrent clusters per workspace
Depending on how your clusters are configured, these limits can vary. For example, if the maximum number of concurrent jobs per cluster is set to 20, then the 21st concurrent job submitted to the cluster fails.
To prevent unnecessary job failures, the Designer Cloud Powered by Trifacta platform supports throttling of the jobs it submits to Databricks. When job throttling is enabled and the 21st concurrent job is submitted, the Designer Cloud Powered by Trifacta platform holds that job internally until the first of the following events happens:
An active job on the cluster completes, and space is available for submitting a new job. The job is then submitted.
The user chooses to cancel the job.
One of the timeout limits described below is reached.
Note
The Designer Cloud Powered by Trifacta platform supports throttling of jobs based on the maximum number of concurrent jobs per cluster. Throttling against the other limits listed above is not supported at this time.
Steps:
Please complete the following steps to enable job throttling.
You can apply this change through the Admin Settings Page (recommended) or trifacta-conf.json. For more information, see Platform Configuration Methods.
In the Trifacta Application, select User menu > Admin console > Admin settings.
Locate the following settings and set their values accordingly:
Setting | Description |
---|---|
databricks.userClusterThrottling.enabled | When set to true, job throttling per Databricks cluster is enabled. Specify the following settings. |
databricks.userClusterthrottling.maxTokensAllottedPerUserCluster | Set this value to the maximum number of concurrent jobs that can run on one user cluster. Default value is 20. |
databricks.userClusterthrottling.tokenExpiryInMinutes | The time in minutes after which tokens reserved by a job are revoked, irrespective of the job status. If a job is in progress and this limit is reached, then the Databricks token is expired, and the token is revoked under the assumption that it is stale. Default value is 120 (2 hours). Tip: Set this value to 0 to prevent token expiration. However, this setting is not recommended, as jobs can remain in the queue indefinitely. |
jobMonitoring.queuedJobTimeoutMinutes | The maximum time in minutes that a job is permitted to remain in the queue for a slot on the Databricks cluster. If this limit is reached, the job is marked as failed. |
batch-job-runner.cleanup.enabled | When set to true, the Batch Job Runner service is permitted to clean up throttling tokens and job-level personal access tokens. Tip: Unless you have reason to do otherwise, leave this setting set to true. |
Save your changes and restart the platform.
Configure personal access token
Each user must insert a Databricks Personal Access Token to access Databricks resources. For more information, see Databricks Settings Page.
Specify Databricks Tables cluster name
Individual users can specify the name of the cluster to which they are permissioned to access Databricks Tables. This cluster can also be shared among users. For more information, see Databricks Settings Page.
Configure maximum retries for REST API
There is a limit of 30 requests per second per workspace on the Databricks REST APIs. If this limit is reached, an HTTP 429 status code error is returned, indicating that rate limiting is being applied by the server. By default, the Designer Cloud Powered by Trifacta platform re-attempts to submit a request 5 times and then fails the job if the request is not accepted.
If you want to change the number of retries, change the value of the databricks.maxAPICallRetries flag.
Value | Description |
---|---|
5 | (default) When a request is submitted through the AWS Databricks REST APIs, up to 5 retries can be performed if the request fails. |
0 | When an API call fails, the request fails. As the number of concurrent jobs increases, more jobs may fail. Note This setting is not recommended. |
5+ | Increasing this setting above the default value may result in more requests eventually getting processed. However, increasing the value may consume additional system resources in a high concurrency environment and jobs might take longer to run due to exponentially increasing waiting time. |
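For example, to raise the retry ceiling above the default (an illustrative value, not a recommendation):
"databricks.maxAPICallRetries": 8,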
Use
Run Job From Application
When the above configuration has been completed, you can select the running environment through the application.
Note
When a Databricks job fails, the failure is reported immediately in the Trifacta Application. In the background, the job logs are collected from Databricks and may not be immediately available.
See Run Job Page.
Run Job via API
You can use API calls to execute jobs.
Make sure that the request body contains the following:
"execution": "databricksSpark",
For more information, see https://api.trifacta.com/ee/9.7/index.html#operation/runJobGroup
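A minimal sketch of such a request; the endpoint path, the wrangledDataset identifier, and the overrides wrapper are assumptions based on the linked runJobGroup documentation:
POST /v4/jobGroups
{
  "wrangledDataset": { "id": 12345 },
  "overrides": {
    "execution": "databricksSpark"
  }
}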
Troubleshooting
Spark job on AWS Databricks fails with "Invalid spark version" error
When running a job using Spark on AWS Databricks, the job may fail with the above invalid version error. In this case, the Databricks version of Spark has been deprecated.
Solution:
Since an AWS Databricks cluster is created for each user, the solution is to identify the cluster version to use, configure the platform to use it, and then restart the platform.
You can apply this change through the Admin Settings Page (recommended) or trifacta-conf.json. For more information, see Platform Configuration Methods.
Acquire the value for databricks.sparkVersion.
In AWS Databricks, compare your value to the list of supported AWS Databricks versions. If your version is unsupported, identify a new version to use.
Note
Be sure to note the version of Spark supported for the version of AWS Databricks that you have chosen.
In the Designer Cloud Powered by Trifacta platform configuration, set databricks.sparkVersion to the new version to use.
Note
The value for spark.version does not apply to Databricks.
Restart the Designer Cloud Powered by Trifacta platform.
After the restart, a new AWS Databricks cluster is created for each user using the specified values when the user next runs a job.
Spark job fails with "spark scheduler cannot be cast" error
When you run a job on Databricks, the job may fail with the following error:
java.lang.ClassCastException: org.apache.spark.scheduler.ResultTask cannot be cast to org.apache.spark.scheduler.Task
  at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:616)
  at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
  at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
  at java.lang.Thread.run(Thread.java:748)
The job.log
file may contain something similar to the following:
2022-07-19T15:41:24.832Z - [sid=0cf0cff5-2729-4742-a7b9-4607ca287a98] - [rid=83eb9826-fc3b-4359-8e8f-7fbf77300878] - [Async-Task-9] INFO com.trifacta.databricks.spark.JobHelper - Got error org.apache.spark.SparkException: Stage 0 failed. Error: Job aborted due to stage failure: Task 0 in stage 0.0 failed 4 times, most recent failure: Lost task 0.3 in stage 0.0 (TID 3, ip-10-243-149-238.eu-west-1.compute.internal, executor driver): java.lang.ClassCastException: org.apache.spark.scheduler.ResultTask cannot be cast to org.apache.spark.scheduler.Task at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:616) ...
This error is due to a class mismatch between the Designer Cloud Powered by Trifacta platform and Databricks.
Solution:
The solution is to disable the precedence of the Spark JARs provided by the Designer Cloud Powered by Trifacta platform over the Databricks Spark JARs. Perform the following steps:
To apply this configuration change, log in as an administrator to the Trifacta node. Then, edit trifacta-conf.json. For more information, see Platform Configuration Methods.
Locate the spark.props section and add the following configuration elements:
"spark": {
  ...
  "props": {
    "spark.driver.userClassPathFirst": false,
    "spark.executor.userClassPathFirst": false,
    ...
  }
},
Save your changes and restart the platform.