This section describes the configuration options for JDBC (relational) ingestion, which support faster execution of JDBC-based jobs.
Data ingestion works by streaming a JDBC source into a temporary storage space in the base storage layer to stage the data for job execution. The job can then be run on Photon or Spark. When the job is complete, the temporary data is removed from base storage or, if caching is enabled, retained in the cache.
Data ingestion happens for Spark and Trifacta Photon jobs.
Data ingestion applies only to JDBC sources that are not native to the running environment. For example, JDBC ingestion is not supported for Hive.
Schema information is retained from the schematized source and is applied during publication of the generated results.
Ingestion is supported for HDFS and other large-scale backend datastores.
Data caching refers to the process of ingesting and storing data sources on the Trifacta node for a period of time for faster access if they are needed for additional platform operations.
Tip
Data ingestion and data caching can work together. For more information on data caching, see Configure Data Source Caching.
Job Type | JDBC Ingestion Enabled Only | JDBC Ingestion and Caching Enabled |
---|---|---|
transformation job | Data is retrieved from the source and stored in a temporary backend location for use in sampling. | Data is retrieved from the source for the job and refreshes the cache where applicable. |
sampling job | Same as for transformation jobs. | Cache is first checked for valid data objects; outdated objects are retrieved from the data source, and retrieved data refreshes the cache. Note: Caching applies only to full-scan sampling jobs; quick-scan sampling is performed in the Trifacta Photon running environment. As needed, you can force an override of the cache when executing the sample, in which case data is collected directly from the source. See Samples Panel. |
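The cache interaction described in this table follows a read-through pattern: check the cache, fall back to the source, then refresh the cache. The following Python sketch illustrates that flow only; the names and the freshness check are hypothetical and are not part of the platform.

```python
from datetime import datetime, timedelta

CACHE_TTL = timedelta(hours=24)  # hypothetical freshness window

def fetch_source(source_id, cache, ingest_from_jdbc):
    """Illustrates the read-through flow described above.

    cache maps source_id -> (data, fetched_at); ingest_from_jdbc is a
    callable that pulls the source over JDBC. Both are hypothetical.
    """
    entry = cache.get(source_id)
    if entry is not None:
        data, fetched_at = entry
        if datetime.now() - fetched_at < CACHE_TTL:
            return data  # valid cached object: no new ingest required
    # Cache miss or outdated object: retrieve from the data source...
    data = ingest_from_jdbc(source_id)
    # ...and refresh the cache with the newly retrieved data.
    cache[source_id] = (data, datetime.now())
    return data
```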
Although there is no absolute limit, you should avoid executing jobs on tables larger than a few hundred gigabytes. Larger data sources can significantly impact end-to-end performance.
Note
This recommendation applies to all JDBC-based jobs.
Rule of thumb:
For a single job with 16 ingest jobs running in parallel, the maximum expected transfer rate is approximately 1 GB per minute.
Scalability:
One ingest job is created per source; for example, a dataset with three sources spawns three ingest jobs.
Rule of thumb for the maximum number of concurrent jobs on an edge node of similar size:
max concurrent sources = max cores - cores used for services
This formula holds until the network becomes a bottleneck; in internal testing, throughput maxed out at about 15 concurrent sources.
Defaults: 16 concurrent jobs, a connection pool size of 10, and a two-minute timeout on the pool. These limits prevent overloading your database.
Adding more concurrent jobs after the network becomes a bottleneck slows down all transfer jobs simultaneously.
If processing is fully saturated (the number of workers is maxed out):
max transfer rate can drop to 1/3 GB/minute.
Ingest waits for two minutes to acquire a connection. If after two minutes a connection cannot be acquired, the job fails.
When a job is queued for processing:
The job is silently queued and appears to be in progress.
The service waits until other jobs complete.
Currently, there is no queueing timeout based on the maximum number of concurrent ingest jobs.
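As a quick sanity check of the rules of thumb above, the following Python sketch estimates concurrency and transfer time for a hypothetical edge node; the core counts and table size are made-up numbers, not platform defaults.

```python
# Hypothetical edge node sizing (not platform defaults).
max_cores = 32
cores_for_services = 8

# Rule of thumb from above: max concurrent sources = max cores - cores used for services.
max_concurrent_sources = max_cores - cores_for_services  # 24
# Internal testing maxed out at about 15 concurrent sources, so cap there.
effective_concurrency = min(max_concurrent_sources, 15)

# With 16 parallel ingest jobs, expect at most ~1 GB/minute end to end;
# under full worker saturation, the rate can drop to ~1/3 GB/minute.
best_rate_gb_per_min = 1.0
saturated_rate_gb_per_min = 1.0 / 3.0

table_size_gb = 50  # hypothetical source size
print(f"effective concurrency: {effective_concurrency} sources")
print(f"best case: ~{table_size_gb / best_rate_gb_per_min:.0f} minutes")
print(f"fully saturated: ~{table_size_gb / saturated_rate_gb_per_min:.0f} minutes")
```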
To enable JDBC ingestion and performance caching, the first two parameters listed below (webapp.connectivity.ingest.enabled and feature.jdbcIngestionCaching.enabled) must be enabled.
You can apply this change through the Admin Settings Page (recommended) or trifacta-conf.json. For more information, see Platform Configuration Methods.
Parameter Name | Description |
---|---|
webapp.connectivity.ingest.enabled | Enables JDBC ingestion. Default is |
feature.jdbcIngestionCaching.enabled | Enables caching of ingested JDBC data. Note: When disabled, no caching of JDBC data sources is performed. For more information on caching, see Configure Data Source Caching. |
feature.enableLongLoading | When enabled, you can monitor the ingestion of long-loading JDBC datasets through the Import Data page. Tip: After a long-loading dataset has been ingested, importing the data and loading it in the Transformer page should perform faster. Default is |
feature.enableParquetLongLoading | When enabled, you can monitor the ingestion of long-loading Parquet datasets. Default is |
longloading.addToFlow | When long-loading is enabled, set this value to |
longloading.addToLibrary | When long-loading is enabled, this feature enables monitoring of the ingest process when large relational sources are added to the library. Default is |
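If you edit trifacta-conf.json directly, the dotted parameter names above correspond to nested JSON keys. The following is a minimal Python sketch of such an edit; the file path and the dotted-to-nested key mapping are assumptions about a typical installation, so verify both against Platform Configuration Methods before applying anything.

```python
import json

CONF_PATH = "/opt/trifacta/conf/trifacta-conf.json"  # assumed install path

def set_conf(conf, dotted_key, value):
    """Write a dotted parameter name as nested JSON keys (assumed layout)."""
    keys = dotted_key.split(".")
    node = conf
    for key in keys[:-1]:
        node = node.setdefault(key, {})
    node[keys[-1]] = value

with open(CONF_PATH) as f:
    conf = json.load(f)

# Enable JDBC ingestion and caching of ingested JDBC data.
set_conf(conf, "webapp.connectivity.ingest.enabled", True)
set_conf(conf, "feature.jdbcIngestionCaching.enabled", True)

with open(CONF_PATH, "w") as f:
    json.dump(conf, f, indent=2)
```

Depending on your installation, a platform restart may be required for the change to take effect.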
In the following sections, you can review the available configuration parameters for JDBC ingest.
You can apply this change through the Admin Settings Page (recommended) or trifacta-conf.json. For more information, see Platform Configuration Methods.
Parameter Name | Description |
---|---|
batchserver.workers.ingest.max | Maximum number of ingester threads that can run on the Designer Cloud Powered by Trifacta platform at the same time. |
batchserver.workers.ingest.bufferSizeBytes | Memory buffer size while copying to backend storage. A larger size for the buffer yields fewer network calls, which in rare cases may speed up ingest. |
batch-job-runner.cleanup.enabled | Clean up after the job, which deletes the ingested data from backend storage. Note: If this setting is disabled, relational source data is not removed from platform backend storage; the setting can be disabled for debugging and should be re-enabled afterward. Note: This setting rarely applies if JDBC ingest caching has been enabled. Default is |
Parameter Name | Description |
---|---|
data-service.systemProperties.logging.level | When the logging level is set to Note: Use this setting for debug purposes only, as the log files can grow quite large. Lower the setting after the issue has been debugged. See Logging below. |
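The tuning and logging parameters above can be set the same way. The snippet below continues the earlier trifacta-conf.json sketch (conf loaded and the hypothetical set_conf helper defined as before); the values shown are illustrative placeholders, not recommendations, and the debug logging level is an assumption.

```python
# Continuing the earlier sketch: conf is loaded and set_conf is defined above.
set_conf(conf, "batchserver.workers.ingest.max", 16)  # placeholder value
set_conf(conf, "batchserver.workers.ingest.bufferSizeBytes", 8 * 1024 * 1024)  # placeholder: 8 MB buffer
set_conf(conf, "batch-job-runner.cleanup.enabled", True)  # keep cleanup on outside of debugging
# Assumed value: a verbose level such as "debug", for troubleshooting only.
set_conf(conf, "data-service.systemProperties.logging.level", "debug")
```

Remember to lower the logging level again after debugging, as noted above.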
You can use the following methods to track progress of ingestion jobs.
Through the application: In the Job History page, you can track the progress of all jobs, including ingestion. Where there are errors, you can download logs for further review.
See Job History Page.
See Logging below.
Through APIs: You can track the status of jobType=ingest jobs through the API endpoints. From the getJobGroup endpoint, retrieve the jobId of the ingest job to track its progress.
See https://api.trifacta.com/ee/9.7/index.html#operation/getJobGroup
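A polling loop against the getJobGroup endpoint might look like the sketch below. The base URL, authentication header, embed parameter, and response field names (jobs, jobType, status) are assumptions; confirm them against the API reference linked above.

```python
import time
import requests

BASE_URL = "https://example.com"  # assumed: your instance's base URL
HEADERS = {"Authorization": "Bearer <token>"}  # assumed: API token auth

def wait_for_ingest(job_group_id, poll_seconds=30):
    """Poll a job group and report its ingest jobs (field names assumed)."""
    while True:
        resp = requests.get(
            f"{BASE_URL}/v4/jobGroups/{job_group_id}",
            headers=HEADERS,
            params={"embed": "jobs"},  # assumed way to include child jobs
        )
        resp.raise_for_status()
        body = resp.json()
        for job in body.get("jobs", {}).get("data", []):
            if job.get("jobType") == "ingest":
                print(f"ingest job {job.get('id')}: {job.get('status')}")
        if body.get("status") in ("Complete", "Failed", "Canceled"):  # assumed terminal statuses
            return body
        time.sleep(poll_seconds)
```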
During and after an ingest job, you can download the job logs through the Job History page. Logs include:
All details including errors
Progress on ingest transfer
Record ingestion
See Job History Page.