The data model must be reprocessed regularly to ensure that it is updated with new data from the data source. This can be done manually, by opening the model definition file and processing the model, or on a scheduled basis.
Model processing schedules automate the processing of data flows and data models: the data flows and models in the model definition file are processed automatically according to the schedule that was created. Scheduled processing can be set up from Model or from the Content Explorer.
- Click here to learn more about model processing.
- Click here to learn about scheduling model processing from the Content Explorer.
Important: This feature is not available in the Community Edition.
Set Model Scheduling
To create a model schedule, click the Schedule button from the Data Flow, Data Model, or Security ribbon (this button is only enabled once the model file has been saved):
The Schedule panel will appear on the right of the interface. You can select:
New Schedule: configure a new schedule for the current model.
Go to schedule listing: go to the schedule listing for the current model in the content manager.
When creating a new schedule, you need to configure the Job Details and the Schedule.
Name: the default schedule name is the model name and the schedule creation date and time. You can change the name as required.
Description: add an optional description.
ETL Execution Part: execute the entire Master Flow (including data flows and models), or execute the models only.
Override Security: override metadata security set from the Admin console or the Materialized Manager. Disable if metadata security should not be affected by processing the data model. Click here to learn more.
Sync Model Columns: select how the tables in the model should be synchronized. Click here to learn more.
- Click here to learn more about syncing column settings
Under Schedule, set the schedule to 'Once' or 'Recurring'. If you create a one-off schedule, choose whether to run the job immediately ('Now') or at a later date and time ('Delayed'). Lastly, set the time zone.
If you create a recurring schedule, then choose the frequency of the job (hourly, daily, weekly, or monthly). Next, enter the start date and time (you can also enter an end date and time if relevant).
Finally, choose the time zone.
Save and Run the Schedule
To save and run the new schedule, click the Save & Run button (red highlight below):
If saved successfully, you'll see a confirmation message:
On Demand Schedules
On Demand Schedules can be enabled from the Master Flow. They are used to trigger the rendering of specified publications, alerts, and subscriptions only when their underlying data model(s) are reprocessed. If the specified data models have been reprocessed when the Data Flow is executed, the schedule will be triggered for any corresponding publications, alerts, and subscriptions set to run on demand.
- Click here to learn how to enable On Demand Schedules from the Master Flow.
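The on-demand rule above can be sketched as a simple filter: after a Master Flow execution, only content whose underlying models were actually reprocessed is triggered. The function and data shapes below are hypothetical illustrations, and this sketch assumes that any one reprocessed underlying model is enough to trigger the content; the product's exact rule may differ.

```python
def triggered_content(reprocessed_models: set[str],
                      on_demand_content: dict[str, set[str]]) -> list[str]:
    """Return the on-demand content items (publications, alerts,
    subscriptions) whose underlying models overlap the set of models
    reprocessed in this Master Flow execution."""
    return [name for name, models in on_demand_content.items()
            if models & reprocessed_models]

# Example (illustrative names): only content tied to the reprocessed
# model fires; the rest stays idle until its own models are reprocessed.
content = {"sales_report": {"sales_model"}, "hr_alert": {"hr_model"}}
print(triggered_content({"sales_model"}, content))
```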
Edit a Scheduled Job
You can also click on a schedule that has run to view its Job Executions.
- Click here to learn more about managing the schedule.
Schedule Handling allows users to determine how a task will behave if there are issues when it runs.
Schedule Timeout: the maximum length of time (in hours) a task will run before it is aborted. 'None' means that the job will run without aborting.
Disable Schedule after Consecutive Failure: the number of consecutive failures after which the schedule is disabled. Once a task has failed this many times in a row, it will be canceled and will only run again if it is manually restarted. 'Never disable' means that the task will keep its schedule no matter how many times it fails.
Note: If the task is aborted after the configured number of consecutive failures, an email message will be sent to the admins and model owner.
The system default timeout value is 4 hours, and the system default number of consecutive failures before aborting is 3.
Currently these defaults cannot be changed but can be overridden using the Schedule Handling options.
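The failure-handling behavior described above can be illustrated with a small state machine using the documented defaults (a 4-hour timeout and disabling after 3 consecutive failures). The class and method names are hypothetical, not part of the product, and this sketch assumes the schedule is disabled once the failure count reaches the configured limit.

```python
DEFAULT_TIMEOUT_HOURS = 4
DEFAULT_MAX_CONSECUTIVE_FAILURES = 3

class ScheduleState:
    """Illustrative tracker for the Schedule Handling rules."""

    def __init__(self, timeout_hours=DEFAULT_TIMEOUT_HOURS,
                 max_failures=DEFAULT_MAX_CONSECUTIVE_FAILURES):
        self.timeout_hours = timeout_hours   # None -> job runs without aborting
        self.max_failures = max_failures     # None -> never disable
        self.consecutive_failures = 0
        self.disabled = False

    def record_run(self, succeeded: bool) -> None:
        """A success resets the failure streak; enough consecutive
        failures disable the schedule until manually restarted."""
        if succeeded:
            self.consecutive_failures = 0
            return
        self.consecutive_failures += 1
        if (self.max_failures is not None
                and self.consecutive_failures >= self.max_failures):
            self.disabled = True
```

With the defaults, two failures followed by a success leave the schedule active, while three failures in a row disable it.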
Automatic Column Handling
Automatic Column Handling allows users to update chosen tables in the semantic model each time the model is processed, by automatically adding to the semantic model any new columns that have been physically added to the underlying tables.
- Click here to learn more about automatic column handling