Job Execution Status
These metrics provide information about your job, enabling you to monitor performance and check for issues.
Metric
See All Events (SQL Syntax)
Metric type | Informational |
About this metric | The number of currently running job executions |
Timeframe | Now |
A job is scheduled to run every defined time interval (i.e. every minute, hour, day, etc.). A job execution is defined as the execution of a single time interval. As long as the job is up-to-date, a single job execution will be running according to the expected schedule.
If there is a backlog, multiple job executions can run concurrently, each handling a different time interval. This can happen, for example, when the job is replaying historical data, or when a spike in data volume causes the job execution duration to exceed the time between intervals.
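To see at a glance whether a backlog is building for a job, you can query the running and queued counters together. The following is a minimal sketch that uses only the system.monitoring.jobs columns documented on this page; replace <job_id> with the Id for your job:
-- Compare in-flight executions with the backlog for a single job.
SELECT running_executions,
       queued_executions
FROM system.monitoring.jobs
WHERE job_id = '<job_id>';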
Run the following SQL command in a query window in Upsolver, replacing <job_id> with the Id for your job. For additional columns, alter this statement and use SELECT *.
SELECT STRING_FORMAT('{0,number,#,###}', running_executions) AS running_executions,
       'OK' AS running_executions_severity
FROM system.monitoring.jobs
WHERE job_id = '<job_id>';
Metric
See All Events (SQL Syntax)
Troubleshooting
Metric type | Warning |
About this metric | The number of queued job executions pending |
Limits | Warn when > 0 |
Timeframe | Now |
This metric is the count of runnable job executions that are not currently running. The value for an up-to-date job should be 0. If you are performing a replay, this number could be very high, potentially in the thousands; however, as the replay runs, the number should steadily decrease at the rate at which work is being done. If you are not running a replay, a high value can mean that the cluster is not big enough to handle the workload. A value below 10 is generally acceptable, but an increasing value indicates that the cluster is not keeping up with the workload.
Be aware that a growing queue will increase the latency of your data. If the number is constant over time and the latency is acceptable, this may not be an issue for you; if the queue keeps increasing, it should be investigated.
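When several jobs share a cluster, it can help to see which jobs are contributing most to the queue. The sketch below ranks jobs by their queued executions and uses only columns referenced on this page:
-- List jobs with a non-empty queue, largest backlog first.
SELECT job_id,
       queued_executions
FROM system.monitoring.jobs
WHERE queued_executions > 0
ORDER BY queued_executions DESC;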
Run the following SQL command in a query window in Upsolver, replacing <job_id> with the Id for your job. For additional columns, alter this statement and use SELECT *.
SELECT STRING_FORMAT('{0,number,#,###}', queued_executions) AS queued_executions,
       IF_ELSE(queued_executions > 0, 'WARNING', 'OK') AS queued_executions_severity
FROM system.monitoring.jobs
WHERE job_id = '<job_id>';
Consider increasing the cluster's server limit so that more executions can run in parallel. This helps the job remain up-to-date and close any pending backlog.
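As an illustration, the server limit can typically be raised with an ALTER CLUSTER statement. The property name used below (MAX_INSTANCES) is an assumption and may differ between Upsolver versions, so verify it against the cluster reference before running:
-- Hypothetical example: raise the cluster's server limit so more executions can run in parallel.
-- MAX_INSTANCES is assumed here; check the exact property name for your version.
ALTER CLUSTER my_cluster
    SET MAX_INSTANCES = 8;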
Metric
See All Events (SQL Syntax)
Metric type | Informational |
About this metric | The number of job executions completed today |
Timeframe | Today (midnight UTC to now) |
The total number of job executions completed today. A job is scheduled to run every defined time interval (i.e. every minute, hour, day, etc.). A job execution is defined as the execution of a single time interval.
Run the following SQL command in a query window in Upsolver, replacing <job_id> with the Id for your job. For additional columns, alter this statement and use SELECT *.
SELECT STRING_FORMAT('{0,number,#,###}', completed_executions_today) AS completed_executions_today,
       'OK' AS completed_executions_today_severity
FROM system.monitoring.jobs
WHERE job_id = '<job_id>';
Metric
See All Events (SQL Syntax)
Metric type | Informational |
About this metric | The number of job executions completed over the lifetime of the job |
Timeframe | Job lifetime |
The total number of job executions completed over the lifetime of the job. A job is scheduled to run every defined time interval (i.e. every minute, hour, day, etc.). A job execution is defined as the execution of a single time interval.
Run the following SQL command in a query window in Upsolver, replacing <job_id> with the Id for your job. For additional columns, alter this statement and use SELECT *.
SELECT STRING_FORMAT('{0,number,#,###}', total_completed_executions) AS total_completed_executions,
       'OK' AS total_completed_executions_severity
FROM system.monitoring.jobs
WHERE job_id = '<job_id>';
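To compare today's progress with the job's lifetime totals, you can select both counters in a single statement. This is a minimal sketch using only columns shown on this page; replace <job_id> with the Id for your job:
-- Today's completed executions alongside the lifetime total for one job.
SELECT completed_executions_today,
       total_completed_executions
FROM system.monitoring.jobs
WHERE job_id = '<job_id>';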
Metric
See All Events (SQL Syntax)
Run the following SQL command in a query window in Upsolver, replacing <job_id> with the Id for your job. For additional columns, alter this statement and use SELECT *.
SELECT STRING_FORMAT('{0,number,#,###}', executions_waiting_for_dependencies) AS executions_waiting_for_dependencies,
       'OK' AS executions_waiting_for_dependencies_severity
FROM system.monitoring.jobs
WHERE job_id = '<job_id>';
Metric
See All Events (SQL Syntax)
Troubleshooting
Metric type | Warning |
About this metric | The number of job executions that encountered an error and are currently retrying |
Limits | Error when > 0 |
The number of job executions that encountered an error and are currently retrying. Ideally, this should be 0. If a job encounters a transient error, the count returns to 0 after a successful retry; otherwise, investigation is required to fix the underlying issue. Retries continue for as long as the error persists and stop only once it is resolved.
Run the following SQL command in a query window in Upsolver, replacing <job_id> with the Id for your job. For additional columns, alter this statement and use SELECT *.
SELECT STRING_FORMAT('{0,number,#,###}', executions_retrying_after_failure) AS executions_retrying_after_failure,
       IF_ELSE(executions_retrying_after_failure > 0, 'ERROR', 'OK') AS executions_retrying_after_failure_severity
FROM system.monitoring.jobs
WHERE job_id = '<job_id>';
See the logs.tasks.task_executions system table for details and error messages. Run the following query to return tasks that have encountered an error:
SELECT job_id, job_name, stage_name, job_type,
task_name, task_start_time, task_end_time, task_error_message
FROM logs.tasks.task_executions
WHERE task_error_message IS NOT NULL
ORDER BY job_id;
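To narrow the results to the job you are investigating, you can filter the same system table by job Id, for example:
-- Errors for a single job, most recent first; replace <job_id> with your job's Id.
SELECT job_id, job_name, stage_name, job_type,
       task_name, task_start_time, task_end_time, task_error_message
FROM logs.tasks.task_executions
WHERE task_error_message IS NOT NULL
  AND job_id = '<job_id>'
ORDER BY task_start_time DESC;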
Metric
See All Events (SQL Syntax)
Metric type | Warning |
About this metric | The error message detailing why the job failed. |
Timeframe | Now |
The error message detailing why the job failed, which will be unique to your job. You can adapt the query under the See All Events (SQL Syntax) tab to view the full error message for your job.
Run the following SQL command in a query window in Upsolver, replacing <job_id> with the Id for your job. For additional columns, alter this statement and use SELECT *.
SELECT COALESCE(execution_failure_reason,
                IF_ELSE(tasks_failing_to_load > 0, 'Job failed to load on the cluster', NULL::STRING)) AS execution_failure_reason,
       'ERROR' AS execution_failure_reason_severity
FROM system.monitoring.jobs
WHERE job_id = '<job_id>';
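To view the full set of columns for the failing job, including the complete failure reason, you can run the broader query that this page suggests:
-- Return every monitoring column for the job; replace <job_id> with your job's Id.
SELECT *
FROM system.monitoring.jobs
WHERE job_id = '<job_id>';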