Flow Optimization Settings Dialog
In the Flow Optimization Settings dialog, you can configure settings that give you finer-grained control over the performance of your flow and its job executions. From the Flow View menu, select Optimization settings.
This feature must be enabled at the workspace level. When enabled, the settings in this dialog are applied to the current flow.
These optimizations improve performance by pre-filtering the volume of data, reducing the columns and rows to those that are actually used.
Tip
In general, all of these optimizations should be enabled for each flow. As needed, you can selectively disable optimizations if you are troubleshooting execution issues.
When these optimizations are enabled, the number of optimizations successfully applied to a job execution is listed in the Optimization summary on the Job Details page. See Job Details Page.
When enabled, the Trifacta Application attempts to apply each of the enabled optimizations listed below to jobs executed for this flow.
Note
When this option is disabled, no optimization settings are available.
The following optimizations can be enabled or disabled in general. For individual data sources, you may be able to enable or disable these settings based on your environment and its requirements.
Tip
These optimizations are applied at the recipe level. They can be applied to any flow and may improve performance within the Transformer page.
When enabled, job execution performance is improved by removing any unused or redundant columns based on the recipe that is selected.
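As a rough illustration (not the Trifacta Application's actual implementation), column pruning amounts to scanning the recipe for the columns its steps reference and dropping everything else before execution. The recipe structure, step fields, and column names below are hypothetical.

```python
# Minimal sketch of recipe-level column pruning (illustrative assumption, not
# the product's actual code). A "recipe" here is a list of steps, each
# declaring the columns it reads ("inputs") or creates ("outputs").

def referenced_columns(recipe):
    """Collect every column that any step in the recipe touches."""
    used = set()
    for step in recipe:
        used.update(step.get("inputs", []))
        used.update(step.get("outputs", []))
    return used

def prune_columns(rows, recipe):
    """Drop columns that no recipe step references."""
    keep = referenced_columns(recipe)
    return [{col: val for col, val in row.items() if col in keep} for row in rows]

# Hypothetical example: only "state" and "sales" are used, so "notes" is pruned.
recipe = [
    {"op": "filter", "inputs": ["state"]},
    {"op": "sum", "inputs": ["sales"], "outputs": ["total_sales"]},
]
rows = [{"state": "CA", "sales": 100, "notes": "unused"}]
print(prune_columns(rows, recipe))  # [{'state': 'CA', 'sales': 100}]
```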
When this setting is enabled, the Trifacta Application optimizes job performance on this flow by pushing data filters to recipes.
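The idea can be pictured with the hedged sketch below: a filter step that does not depend on columns produced by earlier steps can safely run first, so every later step processes fewer rows. The step structure is the same hypothetical one as above, and this reordering rule is a simplification, not the product's actual planner.

```python
# Illustrative sketch of pushing filter steps ahead of other recipe steps when
# safe (a filter must not read a column that an earlier step creates).

def push_filters_first(recipe):
    filters, others = [], []
    produced = set()  # columns created by steps kept in place
    for step in recipe:
        reads_derived = any(c in produced for c in step.get("inputs", []))
        if step["op"] == "filter" and not reads_derived:
            filters.append(step)  # safe to evaluate against the raw source
        else:
            others.append(step)
            produced.update(step.get("outputs", []))
    return filters + others

recipe = [
    {"op": "derive", "inputs": ["sales"], "outputs": ["sales_usd"]},
    {"op": "filter", "inputs": ["state"]},      # independent: movable
    {"op": "filter", "inputs": ["sales_usd"]},  # reads a derived column: stays
]
print([s["op"] for s in push_filters_first(recipe)])
# ['filter', 'derive', 'filter'] -- only the independent filter moved up
```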
Additional optimizations can be enabled or disabled for specific types of transformations or jobs.
When enabled, jobs for this flow that are sourced from files stored in S3 can be executed in Snowflake.
Note
For execution of S3 jobs in Snowflake, AWS credentials are passed in encrypted format as part of the SQL that is executed within Snowflake.
Note
Additional limitations and requirements may apply for file-based job execution.
For more information, see Snowflake Running Environment.
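To picture the mechanics that the credentials note describes, the sketch below builds standard Snowflake SQL that reads S3 files through a temporary external stage, with the AWS credentials embedded in the statement. The stage name, file format, and encrypt() helper are hypothetical, and the actual statements the product issues may differ.

```python
# Illustrative sketch only. CREATE TEMPORARY STAGE ... CREDENTIALS=(...) is
# standard Snowflake SQL for reading S3 files; the names and the encrypt()
# placeholder are assumptions, not the Trifacta Application's actual behavior.

def encrypt(value: str) -> str:
    """Placeholder for whatever encryption wraps credentials in the real SQL."""
    return "<encrypted:" + value[:4] + "...>"

def snowflake_s3_statements(bucket_path, key_id, secret_key):
    return [
        f"""CREATE TEMPORARY STAGE job_stage
  URL = 's3://{bucket_path}'
  CREDENTIALS = (AWS_KEY_ID = '{encrypt(key_id)}'
                 AWS_SECRET_KEY = '{encrypt(secret_key)}')
  FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1);""",
        "SELECT $1, $2 FROM @job_stage;",  # stage columns addressed positionally
    ]

for stmt in snowflake_s3_statements("my-bucket/exports/", "AKIAEXAMPLE", "secret"):
    print(stmt)
```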
Individual types of databases may support one or more of the following pushdowns. Additional restrictions may apply for your specific database.
Tip
These optimizations are applied to queries of your relational data sources that support pushdown. They are applied within the source, which limits the volume of data that is transferred during job execution.
Note
For each relational connection, you can enable optimization capabilities to improve the flow and its job execution performance. The available optimization settings may vary based on the type of relational connection.
When enabled, job execution performance is improved by removing any unused or redundant columns from the source database.
Limitations:
Column pruning optimizations cannot be applied to imported datasets generated with custom SQL.
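In effect, source-side column pruning means querying only the needed columns instead of the whole table. A minimal sketch, assuming a hypothetical dataset structure; per the limitation above, a dataset defined by custom SQL is left untouched:

```python
# Illustrative sketch of column pruning pushed to the source database: select
# only the columns the recipe uses instead of SELECT *. Datasets created with
# custom SQL are passed through unchanged, per the limitation above.

def pruned_source_query(dataset, used_columns):
    if dataset.get("custom_sql"):
        return dataset["custom_sql"]  # pruning not applied
    cols = ", ".join(sorted(used_columns))
    return f"SELECT {cols} FROM {dataset['table']}"

print(pruned_source_query({"table": "orders"}, {"order_id", "state", "sales"}))
# SELECT order_id, sales, state FROM orders
```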
When this setting is enabled, the Trifacta Application optimizes job performance on this flow by pushing data filters directly to the source database.
Limitations:
Filter pushdown optimizations cannot be applied to imported datasets generated with custom SQL.
Pushdown filters cannot be applied to date columns in your relational sources.
Note
SQL-based filtering is performed on a best-effort basis. When these optimizations are enabled for your flow, there is no guarantee that they will be applied during job execution.
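The best-effort behavior can be pictured as follows: a filter is translated into the source query only when the column is eligible (per the limitation above, date columns are not), and any filter that cannot be pushed still runs later in the job. The names and the type check are illustrative assumptions.

```python
# Illustrative sketch of best-effort filter pushdown. Eligible filters become
# a SQL WHERE clause; ineligible ones (here, date columns) are deferred and
# applied during job execution after the data is transferred.

def build_source_query(table, filters, column_types):
    pushed, deferred = [], []
    for col, op, value in filters:
        if column_types.get(col) == "date":
            deferred.append((col, op, value))  # cannot be pushed down
        else:
            pushed.append(f"{col} {op} '{value}'")
    where = f" WHERE {' AND '.join(pushed)}" if pushed else ""
    return f"SELECT * FROM {table}{where}", deferred

query, deferred = build_source_query(
    "orders",
    [("state", "=", "CA"), ("order_date", ">", "2021-01-01")],
    {"state": "string", "order_date": "date"},
)
print(query)     # SELECT * FROM orders WHERE state = 'CA'
print(deferred)  # [('order_date', '>', '2021-01-01')] -- filtered in the job
```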
Note
Some connection types may not be available in your product edition. For more information, see Connection Types.
When this setting is enabled, the Trifacta Application optimizes job performance by executing sampling jobs directly on the source database.
Note
All pushdowns must be enabled to ensure that sampling jobs run in the database.
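A hedged sketch of what this gating means: the sample is produced by the database itself only when every pushdown is enabled; otherwise rows are transferred and sampled outside the database. The setting names and the RANDOM()-based query are assumptions for illustration.

```python
# Illustrative sketch of sample pushdown. Per the note above, the sampling
# query runs in the database only when all pushdown optimizations are enabled.

def sample_query(table, size, settings):
    required = ("column_pruning", "filter_pushdown", "sample_pushdown")
    if all(settings.get(opt) for opt in required):
        # ORDER BY RANDOM() works on e.g. PostgreSQL; other databases differ.
        return f"SELECT * FROM {table} ORDER BY RANDOM() LIMIT {size}"
    return None  # caller falls back to sampling after transfer

settings = {"column_pruning": True, "filter_pushdown": True, "sample_pushdown": True}
print(sample_query("orders", 1000, settings))
```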
Databases that do not support pushdown may support the following optimization settings.
When enabled, job execution performance is improved by removing any unused or redundant columns from the source database.