Job Status

This section explains how to monitor your Upsolver jobs.

When you create a job in Upsolver, metrics are gathered continuously so you can monitor performance, issues, and errors. These metrics provide extensive information on the running of the job, the data it processes - whether it is an ingestion or transformation job - and your cluster. Use these metrics to confirm that your jobs run as expected and that data is flowing efficiently.

The value given by a metric is useful on its own, though some metrics, considered alongside related metrics, give a fuller picture of how your job is working. Refer to this guide to diagnose issues quickly and effectively.


To view metrics for a job, click the Jobs link on the main menu. The Jobs page provides an overview of all your jobs, and you can filter this view as follows:

  • Status: The checklist shows one or more job statuses that you can select to filter the view. This list only displays the status(es) that your jobs are currently in.

  • Job Name: Use the Search box to filter based on the name of your job(s).

  • Filters: Click Filters to open the options and filter by Job, Status, Backlog, Created At, Cluster Name, Source, and Target. You can use any combination of these filters, as illustrated in the sketch below.
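
If you export job metadata or retrieve it through an API client, the same filters can be reproduced in code. The following Python sketch is illustrative only: the record shape and field names (name, status, cluster) are assumptions made for demonstration, not Upsolver's actual API.

```python
# Illustrative sketch only: combining Jobs page filters (status, job name,
# cluster) client-side. The record shape below is an assumption for
# demonstration, not Upsolver's actual API.

jobs = [
    {"name": "load-orders", "status": "Running", "cluster": "prod-cluster"},
    {"name": "load-users", "status": "Failed (Retrying)", "cluster": "prod-cluster"},
    {"name": "backfill-orders", "status": "Completed", "cluster": "dev-cluster"},
]

def filter_jobs(jobs, statuses=None, name_contains=None, cluster=None):
    """Apply any combination of filters, mirroring the Jobs page options."""
    result = jobs
    if statuses is not None:
        result = [j for j in result if j["status"] in statuses]
    if name_contains is not None:
        result = [j for j in result if name_contains.lower() in j["name"].lower()]
    if cluster is not None:
        result = [j for j in result if j["cluster"] == cluster]
    return result

# Example: running or retrying jobs on the production cluster whose name
# mentions "orders".
for job in filter_jobs(jobs, statuses={"Running", "Failed (Retrying)"},
                       name_contains="orders", cluster="prod-cluster"):
    print(job["name"], "-", job["status"])
```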

The following information is displayed for each job:

Name

The name of the job.

Status

Indicates whether the job is running or in another phase.

Backlog

The backlog of events waiting to be processed, shown as a time delay, e.g. 2 Minutes or Up to date; a sketch for converting these display values into durations follows this list.

Events Over Time

A graph of the events processed since the job started. Hover over the graph to see the exact number of events processed at a point in time.

Created At

A time indicator showing how long ago the job was created.

Cluster

The name of the cluster that processes the job, and the cluster status. Click the cluster to view more details.

Source

The icon and name of the data source the data is read from, e.g. Kafka, PostgreSQL CDC.

Target

The icon and name of the data target the data is loaded into, e.g. Redshift, Snowflake.
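
Because Backlog is displayed as a human-readable delay, alerting on it requires converting the string into a duration first. The helper below is a hedged sketch: the only display values taken from this page are "Up to date" and "2 Minutes", and the wider set of formats it handles is an assumption.

```python
# Illustrative sketch: turning a displayed backlog value such as "2 Minutes"
# or "Up to date" into a timedelta so it can be compared against a threshold.
# The set of display formats handled here is an assumption based on the
# examples shown above, not an exhaustive specification.
from datetime import timedelta

_UNITS = {
    "second": timedelta(seconds=1),
    "minute": timedelta(minutes=1),
    "hour": timedelta(hours=1),
    "day": timedelta(days=1),
}

def parse_backlog(text: str) -> timedelta:
    """Parse a backlog display string into a timedelta ("Up to date" -> 0)."""
    if text.strip().lower() == "up to date":
        return timedelta(0)
    amount, unit = text.split()
    return int(amount) * _UNITS[unit.lower().rstrip("s")]

# Example: flag any job whose backlog exceeds five minutes.
threshold = timedelta(minutes=5)
for shown in ("Up to date", "2 Minutes", "3 Hours"):
    delay = parse_backlog(shown)
    print(shown, "-> alert" if delay > threshold else "-> ok")
```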


Each job will be in one of the following statuses:

Running

The job is running.

Writing Paused

Writing data to the target is paused, but data continues to be ingested from the source to ensure no data is lost.

Deleting

The job is deleting the intermediate data.

Deleted

The target data has been deleted and the job has been dropped.

Completed

The job has reached its end date and all work is complete.

Completed (Deleting)

Following the job's user-defined completion date, the data has surpassed its retention period and is being deleted from the target.

Failed (Retrying)

The job encountered fatal errors that are currently preventing, or will prevent, the job from proceeding.
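
If you monitor jobs from a script rather than the UI, these statuses are the values to key alerts on. The sketch below polls a fetch_jobs() function that is purely a hypothetical stand-in (substitute however you actually retrieve job metadata) and reports any job that transitions into a failure state.

```python
# Illustrative sketch: alerting on job status transitions. fetch_jobs() is a
# hypothetical placeholder for however you retrieve job metadata; it is not
# an Upsolver API.
import time

ALERT_STATUSES = {"Failed (Retrying)"}

def fetch_jobs():
    """Hypothetical placeholder: return the current {job name: status} map."""
    return {"load-orders": "Running", "load-users": "Failed (Retrying)"}

def watch(poll_seconds=60, cycles=1):
    previous = {}
    for _ in range(cycles):
        current = fetch_jobs()
        for name, status in current.items():
            # Report only transitions into an alert status, not every poll.
            if status in ALERT_STATUSES and previous.get(name) != status:
                print(f"ALERT: job {name!r} entered status {status!r}")
        previous = current
        if cycles > 1:
            time.sleep(poll_seconds)

watch()
```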