Job Status

This section explains how to monitor your Upsolver jobs.

When you create a job in Upsolver, metrics are continuously gathered to monitor performance, issues, and errors. These metrics provide extensive information on how the job is running, the data it processes - whether it is an ingestion or transformation job - and your cluster. You can use these metrics to ensure your jobs run as expected and that data flows efficiently.

A metric's value is useful on its own, but some metrics, considered alongside related metrics, give a fuller picture of how your job is working. Refer to this guide to help you diagnose issues quickly and effectively.

Jobs

To view metrics for a job, click the Jobs link on the main menu. The Jobs page provides an overview of all your jobs, and you can filter this view as follows:

  • Status: Select one or more job statuses from the checklist to filter the view. The list displays only the statuses your jobs are currently in.

  • Job Name: Use the Search box to filter based on the name of your job(s).

  • Filters: Click Filters to open the options and filter by Job, Status, Backlog, Created At, Cluster Name, Source, and Target. You can use any combination of these filters.

The following information is displayed for each job:

  • Job: The name of the job.

  • Status: Indicates whether the job is running or in another phase.

  • Backlog: The backlog of events awaiting processing, with the delay measured in time, e.g. 2 Minutes, or Up to date.

  • Events Over Time: A graph of events processed since the job started. Hover over the graph to see the exact number of events processed at a point in time.

  • Created At: How long ago the job was created.

  • Cluster: The name and status of the cluster that processes the job. Click the cluster to view more details.

  • Source: The icon and name of the data source the data is read from, e.g. Kafka, PostgreSQL CDC.

  • Target: The icon and name of the target the data is loaded into, e.g. Redshift, Snowflake.
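As an illustration of how the Backlog value can drive monitoring, the sketch below parses the display strings shown above (e.g. 2 Minutes, Up to date) into a delay in seconds and flags jobs that fall behind a freshness threshold. The string format is an assumption based on the examples in this guide, not a documented Upsolver API.

```python
import re

# Seconds per display unit. The display strings ("Up to date",
# "2 Minutes", "1 Hour") are assumed from the examples in this guide.
UNIT_SECONDS = {"second": 1, "minute": 60, "hour": 3600, "day": 86400}

def backlog_seconds(backlog: str) -> int:
    """Return the backlog delay in seconds (0 when up to date)."""
    text = backlog.strip().lower()
    if text == "up to date":
        return 0
    match = re.match(r"(\d+)\s+(second|minute|hour|day)s?", text)
    if not match:
        raise ValueError(f"Unrecognized backlog value: {backlog!r}")
    amount, unit = match.groups()
    return int(amount) * UNIT_SECONDS[unit]

def is_delayed(backlog: str, threshold_seconds: int = 300) -> bool:
    """Flag jobs whose backlog exceeds a freshness threshold (default 5 minutes)."""
    return backlog_seconds(backlog) > threshold_seconds
```

For example, a job showing a backlog of 2 Minutes is within a 5-minute threshold, while a job showing 1 Hour would be flagged as delayed.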

Status

Each job will be in one of the following statuses:

  • Running: The job is up and running, with no errors or warnings detected.

  • Failed (Retrying): The job is experiencing errors that prevent it from progressing. Upsolver's built-in retry mechanism automatically retries the job from the exact point where the error occurred.

  • Warnings: The job is progressing and loading data, but warnings were detected, indicating issues such as row rejections or problems with the job's cleanup tasks.

  • Completed: The job has finished processing data up to the specified END_AT parameter, which sets the cutoff time for data ingestion. Files with timestamps after this cutoff are ignored. When this point is reached, the job status changes to Completed.

  • Paused (Cluster Stopped): The cluster the job runs on has been stopped, so the job can perform no reading or writing.

  • Writing Paused: Writing to the target has been paused. To prevent data loss, the job continues to read and process data from the source, but does not load it to the target until the job is resumed.

  • Deleting: The job has been dropped; it may take a few minutes to delete the associated metadata, during which the status displays as Deleting.

  • Clean up: The job is in the cleanup phase, removing any remaining data or metadata associated with it.
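The statuses above can be grouped for alerting purposes. The sketch below maps the status strings from this guide to a monitoring action; the groupings and actions are a hypothetical policy for illustration, not Upsolver behavior.

```python
# Status strings as listed in this guide; the grouping into actions
# is a hypothetical alerting policy, not part of Upsolver.
HEALTHY = {"Running", "Completed"}
NEEDS_ATTENTION = {"Failed (Retrying)", "Warnings"}
PAUSED = {"Paused (Cluster Stopped)", "Writing Paused"}
TRANSIENT = {"Deleting", "Clean up"}

def triage(status: str) -> str:
    """Suggest a monitoring action for a job status."""
    if status in HEALTHY:
        return "ok"
    if status in NEEDS_ATTENTION:
        return "alert"
    if status in PAUSED:
        return "check cluster / resume job"
    if status in TRANSIENT:
        return "wait"
    return "unknown status"
```

For example, a job in Failed (Retrying) or Warnings would trigger an alert, while Deleting and Clean up are transient states that resolve on their own.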
