
Prompt Tool

User Role Requirements

User Role*      Tool/Feature Access
Full User       ✓
Basic User      X

*Applies to Alteryx One Professional and Enterprise Edition customers on Designer versions 2025.1+.

Use the Prompt tool to send prompts to a Large Language Model (LLM) and then receive the model’s response as an output. You can also configure LLM parameters to tune the model’s output.

Tool Components

The Prompt tool has 3 anchors (2 input and 1 output):

  • M input anchor: (Optional) Use the M input anchor to connect the model connection settings from the LLM Override Tool. Alternatively, set up a connection within this tool.

  • D input anchor: (Optional) Use the D input anchor to connect data you want to add to your prompt. The Prompt tool accepts standard data types (for example, String, Numeric, and DateTime) in addition to Blob data from the Blob Input Tool.

  • Output anchor: Use the output anchor to pass the model’s response downstream.

Configure the Tool

Connect to the AI Model Service in Alteryx One

The tool automatically connects to the Alteryx One workspace that you used to sign in to the Alteryx One app. If you want to use an LLM connection from a different workspace, you can make this change through the Alteryx One app or Designer Desktop.

Select a Workspace in the Alteryx One App

  1. Go to the Alteryx One app.

  2. Select the Profile menu to view a list of available workspaces.

  3. Select the workspace from the list that includes the LLM connection you want to use.

  4. Return to Designer Desktop.

LLM Connection

Important

Before you can select an LLM, a Workspace Admin must create an LLM connection in Alteryx One. Go to Create LLM Connections for details.

  1. Use the LLM Provider dropdown to select the provider you want to use in your workflow. If you connected the LLM Override tool, this option is disabled.

  2. Use the Select Model dropdown to select an available model from your LLM Provider. If you connected the LLM Override tool and selected a specific model, this option is disabled.

Prompt Settings

Use the Prompt Settings section to compose your prompt and configure the data columns associated with the prompt and response.

  1. Enter your prompt in the Prompt Template field. For an estimate of the token count of your prompt, refer to the Token Count at the bottom of the Prompt Template field. The Token Count doesn’t account for additional tokens that might come from inserted data columns.

  2. Include upstream data in your prompt for more advanced analysis. The Prompt tool creates a new prompt for each row of your incoming data and then sends an LLM request for each of these rows (see the sketch after this list).

    1. To insert an input data column, you can either…

      1. Enter an opening bracket ([) in the text field to bring up the column selection menu. You can also type out the column name within brackets ([]).

      2. Select a column from the Insert Field dropdown.

    2. To attach unstructured data, like image or PDF files, select the column that contains the unstructured data from the Attach Non-Text Columns dropdown. Use a Blob Input Tool to bring your images and PDFs into your workflow.

      Note

      Support for unstructured data, such as images and PDFs, depends on your LLM provider and the model you select. Go to your LLM provider’s documentation for details about supported unstructured or multimodal data types.

  3. Enter the Response Column Name. This column contains the LLM response to your prompt.

  4. Run the workflow. 
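
To illustrate steps 1 and 2 above, here is a minimal Python sketch of how a [Column] template expands into one prompt (and one LLM request) per input row. The Product and Review column names, the regex substitution, and the word-based token estimate are illustrative assumptions, not the tool's actual implementation.

    import re

    rows = [
        {"Product": "Trail Bike", "Review": "Smooth ride, great brakes."},
        {"Product": "Helmet", "Review": "Runs small and fits tight."},
    ]
    template = "Summarize this review of [Product] in one sentence: [Review]"

    def render(template, row):
        # Swap each [Column] placeholder for that row's value.
        return re.sub(r"\[([^\]]+)\]", lambda m: str(row[m.group(1)]), template)

    for row in rows:
        prompt = render(template, row)  # one prompt per incoming row
        est_tokens = round(len(prompt.split()) / 0.75)  # each token is ~3/4 of a word
        print(est_tokens, prompt)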

Prompt Builder

Use Prompt Builder to quickly test different prompts and model configurations. You can then compare the results with prompt history. To experiment with different models and model configurations, run the workflow with your initial prompt and then select Refine and Test in Prompt Builder to open the Prompt Builder window.

Important

Prompt Builder requires data connected to the D input anchor.

Prompt Workspace Tab

Use the Prompt Workspace tab to enter your prompts and update model configurations:

  1. Use the Select Model dropdown to select an available model from your LLM Provider.

  2. Enter the Number of Records to Test. If you have a large dataset, use this setting to limit the number of records tested with your prompt.

  3. Configure the model’s parameters for Temperature, Max Output Tokens, and TopP. Refer to the Advanced Model Configuration Settings section for parameter descriptions.

  4. Enter or update your prompt in the Prompt Template text field.

    Tip

    To get help creating a ready-to-use prompt, select Generate a Prompt for Me and then describe your task. To use this feature, your Alteryx One account must have access to Ask Alteryx.

  5. Select Test and Run to view the sample response for each row.

  6. If you like the responses of the new prompt, select Save Prompt to Canvas to update the Prompt tool with the new prompt and model configuration.

History Tab

Use the History tab to view your past prompts, model parameters, and a sample response.

For each past prompt, you can…

  • Add to Canvas: Update the Prompt tool with the selected prompt and associated model parameters.

  • Add to Favorites: Save the selected prompt to a list of your favorite prompts. Select the 3-dot menu next to Add to Canvas to find this option. To view your favorite prompts, select the Only Show Favorites checkbox.

  • Edit Prompt: Return to the Prompt Workspace tab with the selected prompt and associated model parameters. Select the 3-dot menu next to Add to Canvas to find this option.

  • Delete Prompt: Delete the selected prompt from the History tab. Select the 3-dot menu next to Add to Canvas to find this option.

  • Download Results: Save a CSV file containing the prompt and model parameters for the current row in the History tab.

Warning

Make sure to save your prompt before you leave the Prompt Builder window. You will lose your prompt history when you select Close.

Structured Output

Use Structured Output to define a JSON schema for the LLM response and enforce a consistent, machine-readable format without describing the structure in your prompt. When enabled, the Prompt tool sends both the prompt and the schema to the model, which returns a response that conforms to the defined structure. The tool validates each response against the schema to ensure required fields are present, data types match, and only allowed fields are included.

  • JSON Schema Column: Select the column that contains your JSON schema. The schema defines the expected response structure, including field names, data types, required fields, and constraints. To easily create a JSON schema, use the JSON Schema Tool. If you select the default None (No structured output) option, structured output is disabled and the model returns free-form text. For an example schema, see the sketch after this list.

  • On Schema Validation Failure: Select how the tool handles responses that do not conform to the schema:

    • Set LLM Output to Null: Replace the invalid response with a null value.

    • Write Invalid JSON: Output the response even if it does not conform to the schema.

    • Fail Tool on First Record: Stop tool execution when the first validation failure occurs.
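
To make the validation rules concrete, here is a minimal Python sketch using the third-party jsonschema package. The schema, field names, and response values are hypothetical, and the tool's internal validation may differ in detail; the sketch only shows what "required fields are present, data types match, and only allowed fields are included" means in practice.

    import json
    from jsonschema import ValidationError, validate  # pip install jsonschema

    # Hypothetical schema: two required fields, extra fields rejected.
    schema = {
        "type": "object",
        "properties": {
            "sentiment": {"type": "string", "enum": ["positive", "negative", "neutral"]},
            "confidence": {"type": "number", "minimum": 0, "maximum": 1},
        },
        "required": ["sentiment", "confidence"],
        "additionalProperties": False,
    }

    response = '{"sentiment": "positive", "confidence": 0.92}'
    try:
        validate(json.loads(response), schema)
        print("response conforms to the schema")
    except ValidationError as err:
        print("validation failure:", err.message)  # e.g., set the LLM output to Null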

Error Handling

When an error occurs, choose your error-handling option from the On Error dropdown (see the sketch after this list):

  • Error - Stop Processing Records: Throw an error in the Results window and stop processing records.

  • Warning - Continue Processing Records: Throw a warning in the Results window, but continue processing records.

  • Ignore - Continue Processing Records: Ignore the error and continue processing records.
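
The three options behave roughly like this Python sketch. The send_prompt function is a hypothetical stand-in for the per-row LLM request, not part of the tool's API.

    import warnings

    def send_prompt(prompt):
        # Hypothetical stand-in for the real LLM request.
        if not prompt:
            raise ValueError("empty prompt")
        return f"response to: {prompt}"

    def process(prompts, on_error="warning"):
        results = []
        for prompt in prompts:
            try:
                results.append(send_prompt(prompt))
            except ValueError as err:
                if on_error == "error":    # Error - Stop Processing Records
                    raise
                if on_error == "warning":  # Warning - Continue Processing Records
                    warnings.warn(str(err))
                results.append(None)       # Warning and Ignore both keep going
        return results

    print(process(["Summarize A", "", "Summarize B"]))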

Advanced Model Configuration Settings

Use the Advanced Model Configuration section to configure the model’s parameters:

  • Temperature: Controls the randomness of the model's output as a number between 0 and 2. The default value is 1.

    • Lower values provide more reliable and consistent responses.

    • Higher values provide more creative and random responses, but can also become illogical.

  • Max Output Tokens: The maximum number of tokens that the LLM can include in a response. Tokens are the basic input and output units of an LLM: chunks of text that can be words, character sets, or combinations of words and punctuation. Each token is about 3/4 of a word. Refer to your LLM provider and specific model for the maximum available output tokens.

  • TopP: Controls which output tokens the model samples from and ranges from 0 to 1. The model selects from the most to least probable tokens until the sum of their probabilities equals the TopP value. For example, if the TopP value is 0.8 and you have 3 tokens with probabilities of 0.5, 0.3, and 0.2, the model only selects from the tokens with 0.5 and 0.3 probability (sum of 0.8). Lower values result in more consistent responses, while higher values result in more random responses (see the sketch after this list).
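
The worked TopP example above can be written as a short Python sketch of this sampling cutoff. The token strings and probabilities are hypothetical, chosen to match the example.

    # Tokens sorted from most to least probable (hypothetical values).
    token_probs = [("good", 0.5), ("great", 0.3), ("fine", 0.2)]
    top_p = 0.8

    kept, total = [], 0.0
    for token, prob in token_probs:
        if total >= top_p:
            break  # cumulative probability has reached the TopP value
        kept.append(token)
        total += prob

    print(kept)  # ['good', 'great'] -- their probabilities sum to 0.8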

Output

The tool outputs 2 string data columns.

  • LLM Prompt Column: Contains your prompt.

  • LLM Response Column: Contains the response from your LLM Provider and Model.