In System Settings, on the Controller screens, configure the controller component. The controller is available for configuration if the local machine is configured to act as a controller.
The Alteryx Service Controller is responsible for the management of the service settings and the delegation of work to the Alteryx Service Workers. Only one machine may be enabled as a controller in a deployment.
The General screen includes configuration options such as where temporary files and log files should be stored and what information should be logged.
If Designer runs on a machine separate from the controller and you want to schedule a workflow to run in the future, add the controller token on the Schedule Workflow screen in Designer. Designer then connects to the controller and the job runs from there.
You will also need the controller token if you want one machine to act as the controller and other machines to act as workers. Set up the first machine as a controller, then copy the controller token and enter it when configuring each worker machine (in System Settings, Controller > Remote) so that the machines can communicate with each other.
The controller token is auto-generated for you. If you want to change your token, click Regenerate. You will get the following message stating the service will be stopped: "Are you sure you want to regenerate the token? If the service is running it will be stopped, and any remote workers or clients connected to this computer will be disconnected."
Regenerate the token only if absolutely necessary, such as when the token has been compromised. After regenerating, you must update the token on any Gallery or Worker nodes in the deployment.
- Controller Token: A secret key that is used to establish communications between the controller machine and the machine using Designer, and between the controller machine and the worker machine.
- Workspace: The Controller Workspace is the location where the controller stores temporary or cache files. By default, the folder is located within the global workspace folder. Use a path to a location that is safe to store large amounts of files.
- Logging: The controller logs events such as services starting, shutting down, and execution requests, which can be helpful for troubleshooting issues. This information is stored in files on the file system. See Log Files.
- Level: Allows you to choose the types of messages that should be captured. (None = no logging; Low = log only Emergency, Alert, Critical, and Error messages; Normal = log everything in Low, plus Warnings and Notices; High = log all message types.) A level of "None" or "Low" is typically sufficient for production environments where little logging is needed, while "High" logs more messages to help with troubleshooting.
- File size: Allows you to specify the maximum size of a log file.
- Enable log file rotation: Log files can become quite large depending on how the system is running and the level of the logging. Enabling log file rotation ensures that when the current log file reaches its maximum size it is placed in an archive file and logs are written to a new file. This helps prevent creating large log files that are difficult to consume in standard log readers.
- Enable Scheduler auto-connect: Allows users on this machine to auto-connect to the Scheduler. Enable this if you have difficulties connecting to the Scheduler.
- Enable Insights: Configuring the machine to enable insights allows it to handle requests for rendering insights in the Gallery. Insights are interactive dashboards created in Alteryx Designer.
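The Level mapping described above amounts to a severity cutoff. The sketch below is a hypothetical illustration of that filtering rule, not the Service's actual implementation; the Info and Debug types captured only at High are assumed examples.

```python
# Hypothetical sketch of the logging-level filter described above.
# Severities are ordered most to least severe; Info and Debug are
# assumed examples of the extra types captured at High.
SEVERITIES = ["Emergency", "Alert", "Critical", "Error",
              "Warning", "Notice", "Info", "Debug"]

# Each level captures a prefix of the severity list.
LEVEL_CUTOFF = {"None": 0, "Low": 4, "Normal": 6, "High": len(SEVERITIES)}

def is_logged(level: str, severity: str) -> bool:
    """True if a message of this severity is captured at this level."""
    return severity in SEVERITIES[:LEVEL_CUTOFF[level]]
```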
The Alteryx Service includes a persistence layer that it uses to store information critical to the functioning of the service, such as Alteryx application files, the job queue, and result data. The Service supports two persistence mechanisms: SQLite and MongoDB. For lightweight, local deployments, SQLite is adequate for most scheduling needs. For heavier usage, or if the Alteryx Gallery is deployed, MongoDB must be used.
Since the controller acts as an orchestrator of workflow executions and various other operations it needs a location where it can maintain the workflows that are available, a queue of execution requests, and other information. These settings can be defined on the Persistence screen.
When switching between SQLite and MongoDB database types, previously scheduled jobs are not automatically migrated. These jobs must be manually re-scheduled.
It is highly recommended that you provide an automated backup system for whatever persistence mechanism you choose. For information on backing up MongoDB, see MongoDB Management. To back up SQLite, you can zip up or copy the Persistence folder found in \ProgramData\Alteryx\Service\.
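Zipping the Persistence folder can be scripted. A minimal sketch; the path in the comment is the default location quoted above and may differ on your machine:

```python
import shutil

def backup_persistence(persistence_dir: str,
                       archive_name: str = "persistence-backup") -> str:
    """Zip the Service persistence folder and return the archive path.
    Stop the AlteryxService first so database files are not mid-write."""
    return shutil.make_archive(archive_name, "zip", persistence_dir)

# Default location from above (adjust if you relocated the workspace):
# backup_persistence(r"C:\ProgramData\Alteryx\Service\Persistence")
```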
For User-managed MongoDB, complete this information based on the configuration of your MongoDB instance.
- Database Type: The controller maintains data in either SQLite or MongoDB databases. Alteryx Server offers embedded SQLite or MongoDB options as well as a user-managed MongoDB option. If you are configuring the machine for a Gallery, you must use MongoDB.
- SQLite: Creates an instance of the SQLite database for you to use. For lightweight and local deployments that use the Scheduler, SQLite is sufficient.
- MongoDB: Creates an instance of the MongoDB database for you to use. For heavier usage, or if the Alteryx Gallery is deployed, MongoDB must be used.
- User-managed MongoDB: Allows you to connect the Service to your own implementation of MongoDB.
- Data Folder: This is the location where either the SQLite or embedded MongoDB database files should be stored. If you select User-managed MongoDB this option is disabled because it is configured directly in your own MongoDB instance.
- Database: For embedded MongoDB, the automatically generated host, username, and password are displayed so you can access and interrogate the data. The Admin Password is for MongoDB admins to set up backups and replica sets. The user password is the one all of the components use to communicate with MongoDB and can be used for creating usage reports that connect to the database.
- Persistence Options: The controller maintains a queue of Alteryx jobs and caches uploaded files for use in executing those jobs. Workflow queues and results can quickly take up space if left unattended. You can specify whether or not job results and files should be deleted and, if so, how many days they should remain. These settings may help to reduce the amount of drive space necessary as the system is used.
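For usage reports against the embedded MongoDB (see Database above), a reporting client needs a connection URI built from the host and the user credentials shown on that screen. This sketch only assembles the URI; the host, port, and database name below are illustrative assumptions, not documented defaults, so copy the real values from your own Database fields:

```python
from urllib.parse import quote_plus

def mongo_uri(host: str, username: str, password: str, database: str) -> str:
    """Build a MongoDB connection URI. Credentials are percent-encoded
    so generated passwords containing special characters work unchanged."""
    return f"mongodb://{quote_plus(username)}:{quote_plus(password)}@{host}/{database}"

# Illustrative values only; take yours from the Controller > Persistence screen.
uri = mongo_uri("localhost:27018", "user", "p@ss/word", "AlteryxService")
```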
The controller can also be enabled to handle requests for mapping operations, such as orchestrating map tile generation and caching. On the Mapping screen, configure whether the machine should act as a Map Controller and define the thresholds for tile and layer caching. The map tiles and the reference maps needed to render them can be cached to increase performance. A larger cache and a longer time to live yield faster responses for tiles that have been requested before, but use more memory and disk space; a smaller cache has the opposite effect.
- Enable map tile controller: Configuring the machine to enable a map tile controller allows it to serve up map tiles that are rendered by Workers. These tiles are used for rendering maps in the Map Question and Map Input tools.
- Memory cache: This is the maximum number of map tiles that are stored in memory. 1,000 tiles requires roughly 450 MB of memory. A higher memory cache results in more tiles being stored to increase performance, but requires more system resources.
- Disk cache: This is the maximum amount of space to allocate for caching map tile images on the hard drive. A higher disk cache results in greater consumption of drive space but may increase performance of map tile requests.
- Reference layer time to live: Reference layers are created by Map Questions and Map Input Tools and are driven by a .yxdb file. The controller can maintain a reference to this .yxdb file to help speed up rendering. This setting allows you to define the amount of time to persist reference layer information. Increasing this number may help optimize performance of frequently requested layers. If a reference layer expires, it is generated again the next time it is requested.
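The time-to-live behavior described for reference layers follows standard TTL-cache semantics: an expired entry is rebuilt on the next request. A minimal, hypothetical sketch of that idea, not the controller's actual code:

```python
import time

class TTLCache:
    """Tiny TTL cache: entries expire after ttl_seconds and are
    regenerated on the next request, as reference layers are."""

    def __init__(self, ttl_seconds: float):
        self.ttl = ttl_seconds
        self.entries = {}  # key -> (value, stored_at)

    def get(self, key, regenerate):
        value, stored_at = self.entries.get(key, (None, None))
        if stored_at is None or time.monotonic() - stored_at > self.ttl:
            value = regenerate()  # expired or missing: rebuild the layer
            self.entries[key] = (value, time.monotonic())
        return value
```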
If you are configuring the machine to act as a worker, then you will only see the Remote screen under Controller. Since the machine is not configured to be a controller, it must connect to the controller machine. The host location and the controller token are required to connect to the controller machine.
- Host: Type the host location of the controller machine.
- Token: Enter the controller machine token. This information is found on the controller machine in System Settings on the Controller > General screen. See General section in this article.
- View: This displays the controller token characters.
- Hide: This hides the controller token characters.