Use Write Data In-DB to create or update a table directly in the database.
In-Database enables blending and analysis against large sets of data without moving the data out of a database and can provide significant performance improvements over traditional analysis methods. For more about the In-Database tool category, go to In-Database Overview.
Review the tool configuration options below.
Use the Output Mode dropdown to select the appropriate option for writing the data. Choices include...
Append Existing: Append all the data to an existing table. The output consists of Records Before + Records After.
Delete Data & Append: Delete all the original records from the table, then append the data to the existing table. Note that this mode behaves differently depending on the database you write to (see the SQL sketch after this list):
Oracle Databases: Uses a DELETE statement.
SQL Server Databases: Uses a TRUNCATE TABLE statement because it is a more efficient method. You need either ALTER TABLE or TRUNCATE permissions on the table.
Overwrite Table (Drop): Completely drop the existing table and create a new one.
Create New Table: Create a new table. It does not overwrite an existing table.
Create Temporary Table: Uses a CREATE TABLE statement to create a table that the Alteryx engine removes when the workflow completes. This requires CREATE TABLE permissions for all data sources except SQL Server, where a temporary table is created instead. A regular table is used for the other data sources because the data has to remain usable downstream, and most databases don't allow temporary tables to persist beyond the end of the session. If this option is selected, the Table Name field is disabled and displays “[a unique temporary table name is generated on each run]".
Update Rows: Update existing rows in a table based on the incoming records.
Delete Rows: Delete existing rows in a table based on the incoming records.
Merge Tables: Merge the incoming data with the table specified in the tool. It supports Delete and Update based on selected Merge Fields.
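To make these modes concrete, the sketch below shows roughly the kind of statements involved for Delete Data & Append and Overwrite Table (Drop). The table and column names (sales_target, col1, col2) are hypothetical, upstream stands for the incoming In-DB data stream (as in the sample queries later in this article), and the exact SQL Designer generates may differ.

-- Delete Data & Append, Oracle: existing rows are removed with DELETE before the incoming records are inserted.
DELETE FROM sales_target;
INSERT INTO sales_target (col1, col2) SELECT col1, col2 FROM upstream;

-- Delete Data & Append, SQL Server: TRUNCATE TABLE is used instead (requires ALTER TABLE or TRUNCATE permission).
TRUNCATE TABLE sales_target;
INSERT INTO sales_target (col1, col2) SELECT col1, col2 FROM upstream;

-- Overwrite Table (Drop): the existing table is dropped and a new one is created before the insert.
DROP TABLE sales_target;
CREATE TABLE sales_target (col1 INT, col2 VARCHAR(50));
INSERT INTO sales_target (col1, col2) SELECT col1, col2 FROM upstream;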
Table Name: Enter the name of the database table to create or update.
Append Fields Mapping (or SET Field Map when Update Rows is the selected Output Mode) becomes active when Append Existing or Delete Data & Append is chosen above.
Choose a configuration mode:
Auto Config by Name: Aligns fields by field name.
Auto Config by Position: Aligns fields by their field order in the stream (see the sketch after this list).
Select how to handle nonconforming data fields from the options in the dropdown menu.
Error - Stop Processing: Throws an error in the Results window and stops processing.
Output Applicable Fields: Applicable fields are included. Null values populate empty fields.
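As a rough illustration of the difference between the two configuration modes (not the literal SQL Designer generates; the table sales_target and the fields region and amount are hypothetical), mapping by name matches incoming fields to target columns by their names, while mapping by position pairs the first incoming field with the first target column, the second with the second, and so on.

-- Target table: sales_target(region, amount). Incoming stream fields arrive in the order: amount, region.

-- Auto Config by Name: fields are matched by name, so the order in the stream doesn't matter.
INSERT INTO sales_target (region, amount)
SELECT region, amount FROM upstream;

-- Auto Config by Position: field 1 maps to column 1 and field 2 to column 2,
-- so here amount would land in region and region in amount.
INSERT INTO sales_target (region, amount)
SELECT amount, region FROM upstream;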
This area becomes active when Update Rows or Delete Rows is chosen as the Output Mode.
WHERE Field Map: This forms the conditional (WHERE) statement for the Update Rows and Delete Rows Output Modes.
Use this section to define Pre- and Post-SQL statements to execute. This option is compatible with all of the above output modes. Use the 3-dot menu next to either the Pre- or Post-SQL Statement field to launch the Configure Pre/Post-SQL Statements window, which includes the SQL Editor. Note that Stored Procedures are not currently supported.
Pre- and Post-SQL statements are compatible with the following technologies: Bulk Loaders, OCI, ODBC, and OleDB.
Pre-SQL Statement: Define a SQL statement to execute before running the in-database statement/query and before appending to or creating the output table.
Post-SQL Statement: Define a SQL statement to execute after running the in-database statement/query and after appending to or creating the output table.
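For example, a Pre-SQL statement might clear a staging table before the write, and a Post-SQL statement might log the load afterward. The tables, columns, and statements below are hypothetical sketches (any SQL your database accepts can be used, but stored procedures are not supported):

-- Hypothetical Pre-SQL Statement: remove older staging rows before the tool writes its output (SQL Server syntax).
DELETE FROM staging_sales WHERE load_date < CAST(GETDATE() AS DATE);

-- Hypothetical Post-SQL Statement: record that the load finished after the output table is written.
INSERT INTO etl_audit (table_name, loaded_at) VALUES ('sales_target', GETDATE());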
Most Pre/Post-SQL use cases involve the Write Data In-DB tool. If your workflow doesn't include a Data Stream In tool, the Write Data In-DB tool is the next best place to set session variables.
Note
When the workflow runs, the Results window displays messages to show that the tool is executing SQL statements, along with a label (PreSQL, PostSQL).
All SQL statements are sent directly to the database. Pre- and Post-SQL statements do not return data. While SQL can return data, Designer doesn't do anything with that data.
If there are multiple streams of In-DB tools, there is no guaranteed order of execution between different streams. The Results window displays the order.
Review sample queries for the Update Rows and Delete Rows Output Mode options:
Update Rows:
WITH upstream AS ( ... )
UPDATE t
SET t.col2 = (SELECT col2 FROM upstream u WHERE u.col1 = t.col1 AND ...), ...
FROM target_ t
WHERE EXISTS ( SELECT * FROM upstream u WHERE u.col1 = t.col1 AND ... )

Delete Rows:
WITH upstream AS ( ... )
DELETE t FROM target_ t
WHERE EXISTS ( SELECT * FROM upstream u WHERE u.col1 = t.col1 AND ... )
Update/Delete Output Options currently do not support column aliases.
Update Output Option currently requires all field names to match the target table's field names, for the Auto Config by Name Append Fields Mapping option, and the number of fields to match the target table's number of fields, for the Auto Config by Position Append Fields Mapping option.
Update/Delete is currently only supported for SQL Server ODBC connections.
Update/Delete Output options currently do not support updating or deleting Null values. The comparison operators (such as = or !=) used for Alteryx update and delete queries return “unknown” for null values. For more information, see the Microsoft SQL Server documentation.
When appending data using the Write Data In-DB tool, all columns in the target table have to be included in the INSERT statement in order for the INSERT to be valid. Missing columns are added with a NULL value.
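For instance, if a hypothetical target table sales_target has columns region, amount, and notes, but the incoming stream supplies only region and amount, the INSERT still lists all three columns and fills the missing one with NULL (a sketch, not the literal SQL Designer generates):

INSERT INTO sales_target (region, amount, notes)
SELECT region, amount, NULL AS notes
FROM upstream;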
Merge In-DB is only supported for Databricks Unity Catalog.
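As a rough sketch of what such a merge looks like in Databricks SQL, with a hypothetical Unity Catalog table and order_id as the selected Merge Field (not the literal statement Designer generates):

-- Merge the incoming rows into the target: update matches on the Merge Field, insert new rows.
MERGE INTO main.sales.sales_target AS t
USING upstream AS u
  ON t.order_id = u.order_id
WHEN MATCHED THEN
  UPDATE SET t.amount = u.amount
WHEN NOT MATCHED THEN
  INSERT (order_id, amount) VALUES (u.order_id, u.amount);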