Job Status

This section explains how to monitor your Upsolver jobs.

When you create a job in Upsolver, metrics are continuously gathered to monitor its performance, issues, and errors. These metrics provide extensive information about how the job is running, the data it processes - whether it is an ingestion or transformation job - and your cluster. You can use these metrics to ensure your jobs run as expected and that data is flowing efficiently.

Each metric is useful on its own, but some metrics, considered alongside related metrics, give a deeper understanding of how your job is working. Refer to this guide to help you diagnose issues quickly and effectively.

Jobs

To view metrics for a job, click the Jobs link on the main menu. The Jobs page provides an overview of all your jobs, and you can filter this view as follows:

  • Status: The checklist shows one or more job statuses that you can select to filter the view. This list only displays the status(es) that your jobs are currently in.

  • Job Name: Use the Search box to filter based on the name of your job(s).

  • Filters: Click Filters to open the options and filter by Job, Status, Backlog, Created At, Cluster Name, Source, and Target. You can use any combination of these filters.

The following information is displayed for each job:

  • Job: The name of the job.

  • Status: Indicates whether the job is running or in another phase.

  • Backlog: The backlog of events being processed, with the delay measured in time, e.g. 2 Minutes, Up to date.

  • Events Over Time: A graph of events processed since the job started. Hover your mouse over the graph to see the exact number of events processed at a point in time.

  • Created At: A time indicator showing how long since the job was created.

  • Cluster: The name of the cluster that processes the job, and the cluster status. Click on the cluster to view more details.

  • Source: The icon and name of the data source where the data is read from, e.g. Kafka, PostgreSQL CDC.

  • Target: The icon and name of the data target where the data is loaded, e.g. Redshift, Snowflake.
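
If you capture this overview in your own monitoring or reporting scripts, the rows map directly onto the columns described above. The following Python sketch is purely illustrative: JobRow and filter_jobs are hypothetical names, the data is assumed to come from wherever your tooling collects it, and none of this is an Upsolver API. It simply mirrors the Status, Job Name, and Backlog filters available on the Jobs page.

```python
from dataclasses import dataclass

# Hypothetical record mirroring the columns shown on the Jobs page.
@dataclass
class JobRow:
    job: str                 # job name
    status: str              # e.g. "Running", "Failed (Retrying)"
    backlog_minutes: float   # 0 means "Up to date"
    cluster: str             # cluster that processes the job
    source: str              # e.g. "Kafka", "PostgreSQL CDC"
    target: str              # e.g. "Redshift", "Snowflake"

def filter_jobs(rows, status=None, name_contains=None, max_backlog_minutes=None):
    """Apply the same kind of filters the Jobs page offers: Status, Job Name, Backlog."""
    result = []
    for row in rows:
        if status is not None and row.status != status:
            continue
        if name_contains is not None and name_contains.lower() not in row.job.lower():
            continue
        if max_backlog_minutes is not None and row.backlog_minutes > max_backlog_minutes:
            continue
        result.append(row)
    return result

# Example: find running jobs whose backlog is within 5 minutes.
rows = [
    JobRow("orders_ingest", "Running", 2, "default", "Kafka", "Snowflake"),
    JobRow("users_cdc", "Failed (Retrying)", 30, "default", "PostgreSQL CDC", "Redshift"),
]
print(filter_jobs(rows, status="Running", max_backlog_minutes=5))
```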

Status

Each job will be in one of the following statuses, each shown with an icon color:

  • Running - Green: The job is running.

  • Writing Paused - Grey: Writing data to the target is paused, but data continues to be ingested from the source to ensure no data is lost.

  • Deleting - Grey: The job is deleting the intermediate data.

  • Deleted - Grey: The target data has been deleted and the job has been dropped.

  • Completed - Grey: The job has reached its end date and all work is complete.

  • Clean-up - Grey: Following the job's user-defined completion date, the data has surpassed its retention period and is being deleted from the target.

  • Failed (Retrying) - Red: The job encountered fatal errors that are currently preventing, or will prevent, the job from proceeding.
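
If you build alerting on top of these statuses, note that Running is the only green status and Failed (Retrying) the only red one; the grey statuses describe normal lifecycle stages. The sketch below shows one way to triage them; the triage function and its categories are our own illustration, not part of Upsolver.

```python
# Hypothetical triage helper based on the status list above.
HEALTHY = {"Running"}
LIFECYCLE = {"Writing Paused", "Deleting", "Deleted", "Completed", "Clean-up"}
NEEDS_ATTENTION = {"Failed (Retrying)"}

def triage(status: str) -> str:
    """Map a job status to a coarse alerting category."""
    if status in NEEDS_ATTENTION:
        return "alert"    # fatal errors are preventing, or will prevent, progress
    if status in HEALTHY:
        return "ok"
    if status in LIFECYCLE:
        return "info"     # expected lifecycle stage, no action required
    return "unknown"      # status not covered in this guide; investigate manually

print(triage("Failed (Retrying)"))  # -> alert
print(triage("Writing Paused"))     # -> info
```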
