Amazon Redshift
|  | Redshift | Redshift Spectrum |
|---|---|---|
| Type of Support | Read & Write; In-Database | Read & Write |
| Verified On | Client version 1.3.7.1000 | Client version 1.3.7.1000 |
| Connection Type | ODBC (64-bit) | ODBC (64-bit) |
| Driver Details | The ODBC driver can be downloaded from Amazon Redshift. An AWS account must be created. In-Database processing requires 64-bit database drivers. | The ODBC driver can be downloaded from Amazon Redshift Spectrum. An AWS account must be created. |
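Outside of Alteryx, the same 64-bit ODBC driver can be used from any ODBC client. The following is a minimal sketch in Python (pyodbc); the driver name, cluster endpoint, port, database, and credentials are placeholders that depend on your installation and cluster.

```python
import pyodbc

# Minimal sketch: DSN-less connection using the 64-bit Amazon Redshift ODBC driver.
# The driver name, server, database, and credentials below are placeholders.
conn = pyodbc.connect(
    "Driver={Amazon Redshift (x64)};"
    "Server=examplecluster.abc123xyz789.us-east-2.redshift.amazonaws.com;"
    "Port=5439;"
    "Database=dev;"
    "UID=myuser;"
    "PWD=mypassword;"
)
cur = conn.cursor()
cur.execute("SELECT version();")
print(cur.fetchone())
conn.close()
```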
Alteryx tools used to connect
- Input Data Tool and Output Data Tool (Standard workflow processing)
- Connect In-DB Tool and Data Stream In Tool (In-database workflow processing)
Additional Details
In the ODBC Data Source Administrator:
- Select the Redshift driver and click Configure.
- Type in your Connection Settings and credentials.
- In the Additional Options area, select the Retrieve Entire Results Into Memory option.
- Save the connection by clicking OK.
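Once the connection is saved, the DSN is available to any ODBC client. A minimal sketch, assuming a DSN named Redshift and the pyodbc package:

```python
import pyodbc

# Connect through the DSN saved in the ODBC Data Source Administrator.
# "Redshift" is a placeholder DSN name; UID/PWD can be omitted if they
# were stored with the DSN.
conn = pyodbc.connect("DSN=Redshift;UID=myuser;PWD=mypassword")
cur = conn.cursor()
cur.execute("SELECT current_database();")
print(cur.fetchone())
conn.close()
```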
To use the bulk connection via the Output Data tool:
- Click the Write to File or Database drop-down and select Other Databases > Amazon Redshift Bulk.
- Select a Data Source Name (or click ODBC Admin to create one). See ODBC and OLEDB Database Connections.
- (Optional) Type a User Name and Password.
- In the Amazon S3 section, type or paste your AWS Access Key and AWS Secret Key to access the data for upload (a sketch of the equivalent S3 upload and COPY appears after these steps).
- In the Secret Key Encryption drop-down, select an encryption option:
- Hide: Hide the password using minimal encryption.
- Encrypt for Machine: Any user on the computer will be able to fully use the connection.
- Encrypt for User: Only the logged-in user can use the connection, on any computer.
- In the Endpoint drop-down, select Default to let Amazon determine the endpoint automatically based on the bucket you select. To specify an endpoint for private S3 deployments, or if you know the bucket's region, you can alternatively select an endpoint (S3 region), enter a custom endpoint, or select one of ten previously entered custom endpoints.
- (Optional) Select Use Signature V4 for Authentication to use Signature Version 4 instead of the default Signature Version 2. This increases security, but connection speeds may be slower. This option is automatically enabled for regions requiring Signature Version 4. Regions created after January 30, 2014 support only Signature Version 4. The following regions require Signature Version 4 authentication:
- US East (Ohio) Region
- Canada (Central) Region
- Asia Pacific (Mumbai) Region
- Asia Pacific (Seoul) Region
- EU (Frankfurt) Region
- EU (London) Region
- China (Beijing) Region
- Select a Server-Side Encryption method for uploading to an encrypted Amazon S3 bucket. For more information on Amazon S3 encryption methods, see the Amazon Simple Storage Service Developer Guide.
- None (Default): No encryption method is used.
- SSE-KMS: Use server-side encryption with AWS KMS-managed keys. Optionally provide a KMS Key ID. When you select this method, Use Signature V4 for Authentication is enabled by default.
- In Bucket Name, type the name of the AWS bucket in which your data objects are stored.
If the Bucket you select is not in the region of the endpoint you specify, the following error occurs: “The bucket you are attempting to access must be addressed using the specified endpoint. Please send all future requests to this endpoint.” Select Default to clear the error.
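Under the hood, the bulk connection stages data in Amazon S3 and then issues a Redshift COPY. The following is a minimal sketch of that sequence, assuming the boto3 and pyodbc packages; the bucket, object keys, table, DSN, credentials, KMS key, and region are all placeholders.

```python
import boto3
import pyodbc
from botocore.config import Config

# 1. Stage a bulk-load chunk in S3 (Signature Version 4, SSE-KMS encryption).
s3 = boto3.client(
    "s3",
    aws_access_key_id="AKIA...",            # AWS Access Key (placeholder)
    aws_secret_access_key="...",            # AWS Secret Key (placeholder)
    region_name="us-east-2",                # endpoint / S3 region (placeholder)
    config=Config(signature_version="s3v4"),
)
s3.upload_file(
    "chunk_0001.txt.gz",                    # one bulk-load chunk
    "my-bucket",                            # Bucket Name (placeholder)
    "staging/chunk_0001.txt.gz",
    ExtraArgs={
        "ServerSideEncryption": "aws:kms",  # SSE-KMS
        "SSEKMSKeyId": "alias/my-key",      # optional KMS Key ID (placeholder)
    },
)

# 2. Load the staged files into Redshift with COPY, then run the optional
#    VACUUM and ANALYZE maintenance commands.
conn = pyodbc.connect("DSN=Redshift;UID=myuser;PWD=mypassword", autocommit=True)
cur = conn.cursor()
cur.execute("""
    COPY public.sales
    FROM 's3://my-bucket/staging/'
    ACCESS_KEY_ID 'AKIA...'
    SECRET_ACCESS_KEY '...'
    REGION 'us-east-2'
    GZIP
    ESCAPE;
""")  # ESCAPE corresponds to the backslash escape option described below
cur.execute("VACUUM public.sales;")
cur.execute("ANALYZE public.sales;")
conn.close()
```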
Optionally select Use Redshift Spectrum to connect to Spectrum tables. To create Spectrum tables with the Output Data tool, specify both the schema and the table name:
spectrum_schema.tablename
You can optionally specify or adjust the following Redshift options (sketches of the corresponding table DDL and Spectrum setup appear at the end of this section). For more information, see the Amazon Redshift Database Developer Guide.
- Primary Key: Select column(s) for the Primary Key and adjust the order of columns.
- Distribution Style: Select Even, Key, or All.
- Distribution Key: Select a column for the Distribution Key.
- Sort Style: Select None, Compound, or Interleaved.
- Sort Key: Select column(s) for the Sort Key and adjust the order of columns.
- Enable Vacuum and Analyze Operations: (Bulk connections only) Enabled by default. When enabled, VACUUM and ANALYZE maintenance commands are executed after a bulk load APPEND to the Redshift database.
- Size of Bulk Load Chunks (1 MB to 102400 MB): To improve upload performance, large files are split into smaller chunks of the specified integer size, in megabytes. The default value is 128.
- Enable backslash (\) as escape character: (Bulk connections only) Enabled by default. When enabled, a character that immediately follows a backslash is loaded as column data, even if that character is normally used for a special purpose (for example, a delimiter, quotation mark, embedded newline, or escape character).
Distribution Key is ignored if 'Key' is not selected for Distribution Style. Sort Key is ignored if 'None' is selected for Sort Style.
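The Primary Key, Distribution, and Sort options above map onto ordinary Redshift DDL. A minimal sketch of the resulting CREATE TABLE (the table and column names are placeholders):

```python
import pyodbc

conn = pyodbc.connect("DSN=Redshift", autocommit=True)
cur = conn.cursor()

# DISTSTYLE KEY with a Distribution Key column and a compound Sort Key;
# the DISTKEY clause only takes effect because DISTSTYLE KEY is chosen.
cur.execute("""
    CREATE TABLE public.sales (
        sale_id     BIGINT,
        customer_id BIGINT,
        sale_date   DATE,
        amount      DECIMAL(12, 2),
        PRIMARY KEY (sale_id)
    )
    DISTSTYLE KEY
    DISTKEY (customer_id)
    COMPOUND SORTKEY (sale_date, customer_id);
""")
conn.close()
```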
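For the Use Redshift Spectrum option, the spectrum_schema part of spectrum_schema.tablename must be an external schema that already exists in the cluster. A minimal sketch of creating and querying one; the IAM role, external database, and table name are placeholders.

```python
import pyodbc

conn = pyodbc.connect("DSN=Redshift", autocommit=True)
cur = conn.cursor()

# Create the external (Spectrum) schema that the Output Data tool will
# write into as spectrum_schema.tablename.
cur.execute("""
    CREATE EXTERNAL SCHEMA IF NOT EXISTS spectrum_schema
    FROM DATA CATALOG
    DATABASE 'spectrumdb'
    IAM_ROLE 'arn:aws:iam::123456789012:role/MySpectrumRole'
    CREATE EXTERNAL DATABASE IF NOT EXISTS;
""")

# Query a Spectrum table through the schema-qualified name.
cur.execute("SELECT COUNT(*) FROM spectrum_schema.tablename;")
print(cur.fetchone())
conn.close()
```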