Data Scanned

These metrics provide insight into the data processed by tasks within your job.
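
These metrics can be queried per job, as shown in each section below, or across every job at once. As a minimal sketch that uses only columns documented on this page, the following query lists today's scan and parse-error counts for all jobs:

SELECT job_id,
       rows_scanned_by_completed_tasks_today,
       parse_errors_today
FROM system.monitoring.jobs;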

Rows read (completed executions)

Metric type: Informational

About this metric

The total number of rows scanned by completed executions today. This is a measure of rows that were processed successfully.

Timeframe: Today (midnight UTC to now)

More information

This informative metric shows cumulative progress. If the value is 0, then the job has not yet processed anything or has not started.

If Job executions completed - today is greater than 0, but the rows scanned in completed executions is 0, then your source doesn't contain any data. This value should increase in line with the number of completed executions.

Run the following SQL command in a query window in Upsolver, replacing <job_id> with the Id for your job. The Id for your job can be found in the Details section under the Settings tab.

For additional columns, alter this statement and use SELECT *.

SELECT STRING_FORMAT('{0,number,#,###}', rows_scanned_by_completed_tasks_today) AS rows_scanned_by_completed_tasks_today,
       'OK' AS rows_scanned_by_completed_tasks_today_severity
FROM system.monitoring.jobs
WHERE job_id = '<job_id>';
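
If you prefer to inspect every available column rather than formatting specific metrics, the SELECT * variant mentioned above is simply:

SELECT *
FROM system.monitoring.jobs
WHERE job_id = '<job_id>';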

Rows filtered by WHERE clause

Metric type: Warning

About this metric

The number of rows that were filtered out because they didn't pass the WHERE clause predicate defined in the job.

Limits: Error when > 0 AND equal to or greater than the number of rows scanned

Timeframe: Today (midnight UTC to now)

More information

Rows that don't match the WHERE clause predicate are filtered out and not written to the target. A value greater than 0 is expected when the job intentionally filters data; an error is raised when every scanned row is filtered out.

Run the following SQL command in a query window in Upsolver, replacing <job_id> with the Id for your job. The Id for your job can be found in the Details section under the Settings tab.

For additional columns, alter this statement and use SELECT *.

SELECT STRING_FORMAT('{0,number,#,###}', rows_filtered_by_where_clause_today) AS rows_filtered_by_where_clause_today,
       IF_ELSE(rows_filtered_by_where_clause_today > 0 AND rows_filtered_by_where_clause_today >= rows_scanned_by_completed_tasks_today, 'ERROR', 'OK') AS rows_filtered_by_where_clause_today_severity
FROM system.monitoring.jobs
WHERE job_id = '<job_id>';

All rows were filtered out by the WHERE clause. Please evaluate the WHERE clause of your job's SELECT statement to confirm it isn't filtering out too many events.

Click on Job Details to view the query used to create your job. Review the WHERE clause and rewrite the predicates to adjust which rows are explicitly filtered out.

Average rows scanned per execution

Metric type: Informational

About this metric

The average number of rows scanned per job execution.

Timeframe: Today (midnight UTC to now)

More information

This informational metric shows how much work takes place within each execution. A low value means each execution carries a lot of overhead relative to the work it does; a very high value may indicate high latency.

There is no target value for this metric; however, it should be viewed against your expectations of how much work should be done in each execution.

Run the following SQL command in a query window in Upsolver, replacing <job_id> with the Id for your job. The Id for your job can be found in the Details section under the Settings tab.

For additional columns, alter this statement and use SELECT *.

SELECT STRING_FORMAT('{0,number,#,###.##}', avg_rows_scanned_per_execution_today) AS avg_rows_scanned_per_execution_today,
       'OK' AS avg_rows_scanned_per_execution_today_severity
FROM system.monitoring.jobs
WHERE job_id = '<job_id>';

Maximum rows scanned in an execution

Metric type: Warning

About this metric

The maximum number of rows scanned in a single job execution today.

Limits: Warn when > 1,000,000 AND > 10 * the average rows scanned per execution today

Timeframe: Today (midnight UTC to now)

More information

In streaming systems, data should arrive at a steady cadence, so you should not see a cycle of data spikes followed by executions with no work. This value should be similar to the average rows scanned per execution, ensuring spikes and dips are not happening and that some executions are not working much harder than others. A big difference between the two may indicate performance and latency issues.

Run the following SQL command in a query window in Upsolver, replacing <job_id> with the Id for your job. The Id for your job can be found in the Details section under the Settings tab.

For additional columns, alter this statement and use SELECT *.

SELECT STRING_FORMAT('{0,number,#,###}', max_rows_scanned_in_execution_today) AS max_rows_scanned_in_execution_today,
       IF_ELSE(max_rows_scanned_in_execution_today > 1000000 AND max_rows_scanned_in_execution_today > avg_rows_scanned_per_execution_today * 10, 'WARNING', 'OK') AS max_rows_scanned_in_execution_today_severity
FROM system.monitoring.jobs
WHERE job_id = '<job_id>';

The amount of data scanned by a single execution exceeds the historical norm. Check that the extra data being processed is intentional. If the new data volume is expected, ensure the cluster is sized appropriately.
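
To see how far a spike deviates from the norm, you can query the maximum and average side by side. This is an illustrative sketch (spike_ratio is a computed alias, not a built-in column) and assumes avg_rows_scanned_per_execution_today is greater than zero:

SELECT max_rows_scanned_in_execution_today,
       avg_rows_scanned_per_execution_today,
       max_rows_scanned_in_execution_today / avg_rows_scanned_per_execution_today AS spike_ratio
FROM system.monitoring.jobs
WHERE job_id = '<job_id>';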

Rows Pending Processing

Metric type: Informational

About this metric

The number of rows in the source table that have not been processed yet. Only rows that have been committed to the source table are included.

Run the following SQL command in a query window in Upsolver, replacing <job_id> with the Id for your job. The Id for your job can be found in the Details section under the Settings tab.

For additional columns, alter this statement and use SELECT *.

SELECT STRING_FORMAT('{0,number,#,###}', rows_pending_processing) AS rows_pending_processing,
       'OK' AS rows_pending_processing_severity
FROM system.monitoring.jobs
WHERE job_id = '<job_id>';
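
The built-in severity for this metric is always 'OK'. If you want to alert on a growing backlog, you can wrap the column in your own threshold check; the 1,000,000-row cutoff below is purely illustrative:

SELECT rows_pending_processing,
       IF_ELSE(rows_pending_processing > 1000000, 'WARNING', 'OK') AS backlog_severity
FROM system.monitoring.jobs
WHERE job_id = '<job_id>';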

Discovered Files

Metric type: Warning

About this metric

The number of files to load discovered by the job.

Limits: Error when = 0

Timeframe: Today (midnight UTC to now)

More information

This metric applies to ingestion jobs copying data from Amazon S3, and counts the number of discovered files that match the job but have not yet been parsed.

If your job didn't find any files, the pattern you used to discover the files likely needs correcting. This value can legitimately be 0 at the very start of the job; otherwise, you need to recreate the job with the correct file pattern.

Run the following SQL command in a query window in Upsolver, replacing <job_id> with the Id for your job. The Id for your job can be found in the Details section under the Settings tab.

For additional columns, alter this statement and use SELECT *.

SELECT STRING_FORMAT('{0,number,#,###}', discovered_files_today) AS discovered_files_today,
       IF_ELSE(discovered_files_today = 0, 'ERROR', 'OK') AS discovered_files_today_severity
FROM system.monitoring.jobs
WHERE job_id = '<job_id>';

No files to load were detected. Ensure the job is reading from the correct location and that files exist in that location. If using a date pattern, make sure the pattern matches the file paths.

Click on Job Details to view the query used to create your job. You can check if the file pattern is correct. If not, you will need to create a new job and drop the old one.

Discovered Bytes

Metric type: Informational

About this metric

The number of bytes to load discovered in the source stream.

More information

This provides a general indication of the amount of work to be done, enabling you to understand the size of your data stream.

Run the following SQL command in a query window in Upsolver, replacing <job_id> with the Id for your job. The Id for your job can be found in the Details section under the Settings tab.

For additional columns, alter this statement and use SELECT *.

SELECT CASE
           WHEN discovered_bytes_today::BIGINT < POWER(1024, 1) THEN CAST(discovered_bytes_today::BIGINT AS STRING) || ' Bytes'
           WHEN discovered_bytes_today::BIGINT < POWER(1024, 2) THEN CAST(ROUND(discovered_bytes_today::BIGINT / POWER(1024, 1), 2) AS STRING) || ' KB'
           WHEN discovered_bytes_today::BIGINT < POWER(1024, 3) THEN CAST(ROUND(discovered_bytes_today::BIGINT / POWER(1024, 2), 2) AS STRING) || ' MB'
           WHEN discovered_bytes_today::BIGINT < POWER(1024, 4) THEN CAST(ROUND(discovered_bytes_today::BIGINT / POWER(1024, 3), 2) AS STRING) || ' GB'
           ELSE CAST(ROUND(discovered_bytes_today::BIGINT / POWER(1024, 4), 2) AS STRING) || ' TB'
       END AS discovered_bytes_today,
       IF_ELSE(discovered_bytes_today::BIGINT = 0, 'ERROR', 'OK') AS discovered_bytes_today_severity
FROM system.monitoring.jobs
WHERE job_id = '<job_id>';

No items detected in the source stream. Please ensure the source stream exists and contains items to be ingested.

Parse Errors (for ingestion jobs)

Metric type: Informational

About this metric

The number of items that failed to parse. This value represents a lower bound, as malformed items may also corrupt subsequent items in the same file.

Limits: Error when > 0

Timeframe: Today (midnight UTC to now)

More information

This metric only applies to ingestion jobs and counts the number of errors raised when a file or row could not be parsed. Generally, the value should be 0. If it is above 0, you should investigate why the parse errors exist, e.g. the file is in the wrong format, malformed, or corrupted.

Run the following SQL command in a query window in Upsolver, replacing <job_id> with the Id for your job. The Id for your job can be found in the Details section under the Settings tab.

For additional columns, alter this statement and use SELECT *.

SELECT STRING_FORMAT('{0,number,#,###}', parse_errors_today) AS parse_errors_today,
       IF_ELSE(parse_errors_today > 0, 'ERROR', 'OK') AS parse_errors_today_severity
FROM system.monitoring.jobs
WHERE job_id = '<job_id>';

Failed to parse some of the events in the source location. See the job monitoring page for details and error messages.
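
To check every job for parse errors at once, a small variant of the same query (using only the columns shown above) is:

SELECT job_id,
       parse_errors_today
FROM system.monitoring.jobs
WHERE parse_errors_today > 0;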

Rows written (completed executions)

Metric type: Informational

About this metric

The number of rows written to the target by the job.

Timeframe: Today (midnight UTC to now)

More information

Written rows relate to the rows read in completed executions. A scanned row will result in a written row unless it was filtered out, or an aggregation reduced the number of rows written. For example, a job may scan 1,000,000 rows, perform an aggregation, and write the result as a single row. Conversely, a flattening operation that unnests data can result in more rows written than scanned.

If you expect scanned and written rows to match and they don't, you should investigate the cause. Similarly, if you have a flattening operation that you expect to increase the number of written rows and this doesn't happen, investigation is required.

Run the following SQL command in a query window in Upsolver, replacing <job_id> with the Id for your job. The Id for your job can be found in the Details section under the Settings tab.

For additional columns, alter this statement and use SELECT *.

SELECT STRING_FORMAT('{0,number,#,###}', rows_written_today) AS rows_written_today,
       'OK' AS rows_written_today_severity
FROM system.monitoring.jobs
WHERE job_id = '<job_id>';
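
To compare input and output volumes for the job directly, you can select both counters together. This is an illustrative sketch that assumes the rows_written_today column naming shown above:

SELECT rows_scanned_by_completed_tasks_today,
       rows_written_today
FROM system.monitoring.jobs
WHERE job_id = '<job_id>';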

Rows filtered by HAVING clause

Metric type: Informational

About this metric

The number of rows that were filtered out because they didn't pass the HAVING clause predicate defined in the job.

Timeframe: Today (midnight UTC to now)

Run the following SQL command in a query window in Upsolver, replacing <job_id> with the Id for your job. The Id for your job can be found in the Details section under the Settings tab.

For additional columns, alter this statement and use SELECT *.

SELECT STRING_FORMAT('{0,number,#,###}', rows_filtered_by_having_clause_today) AS rows_filtered_by_having_clause_today,
       'OK' AS rows_filtered_by_having_clause_today_severity
FROM system.monitoring.jobs
WHERE job_id = '<job_id>';

Rows filtered due to missing partition

Metric type: Warning

About this metric

The number of rows that were filtered out because some or all of the partition columns were NULL or an empty string.

Limits: Error when > 0

Timeframe: Today (midnight UTC to now)

More information

If you are writing to a partitioned table and one of the partition columns has a NULL value or an empty string, the row will be filtered out. This is not usually intended behavior and typically indicates a user error requiring investigation.

If this behavior is intended, the rows can be filtered out explicitly in the WHERE clause, as shown in the sketch below.

Run the following SQL command in a query window in Upsolver, replacing <job_id> with the Id for your job. The Id for your job can be found in the Details section under the Settings tab.

For additional columns, alter this statement and use SELECT *.

SELECT STRING_FORMAT('{0,number,#,###}', rows_filtered_by_missing_partition_today) AS rows_filtered_by_missing_partition_today,
       IF_ELSE(rows_filtered_by_missing_partition_today > 0, 'ERROR', 'OK') AS rows_filtered_by_missing_partition_today_severity
FROM system.monitoring.jobs
WHERE job_id = '<job_id>';
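
If rows without a partition value are expected, filtering them out deliberately keeps this metric at 0. A minimal sketch, assuming a hypothetical partition column named partition_date in your job's SELECT statement:

SELECT *
FROM your_source_table
WHERE partition_date IS NOT NULL
  AND partition_date <> '';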

Rows filtered due to missing Primary Key

Metric type: Warning

About this metric

The number of rows that were filtered out because some or all of the primary key columns were NULL.

Limits: Error when > 0

Timeframe: Today (midnight UTC to now)

More information

Rows are filtered out when a primary key column is NULL. If this behavior is intended, the rows can be filtered out explicitly in the WHERE clause, as in the previous section's sketch.

Run the following SQL command in a query window in Upsolver, replacing <job_id> with the Id for your job. The Id for your job can be found in the Details section under the Settings tab.

For additional columns, alter this statement and use SELECT *.

SELECT STRING_FORMAT('{0,number,#,###}', rows_filtered_by_missing_primary_key_today) AS rows_filtered_by_missing_primary_key_today,
       IF_ELSE(rows_filtered_by_missing_primary_key_today > 0, 'ERROR', 'OK') AS rows_filtered_by_missing_primary_key_today_severity
FROM system.monitoring.jobs
WHERE job_id = '<job_id>';

Bytes written (completed executions)

Metric type: Informational

About this metric

The size of the data written by the job.

Timeframe: Today (midnight UTC to now)

More information

This informative metric provides a sense of the scale of the data and how much work is being done. If this value is higher or lower than you expect, there is most likely a mistake in the configuration of the job.

Run the following SQL command in a query window in Upsolver, replacing <job_id> with the Id for your job. The Id for your job can be found in the Details section under the Settings tab.

For additional columns, alter this statement and use SELECT *.

SELECT CASE
           WHEN bytes_written_today::BIGINT < POWER(1024, 1) THEN CAST(bytes_written_today::BIGINT AS STRING) || ' Bytes'
           WHEN bytes_written_today::BIGINT < POWER(1024, 2) THEN CAST(ROUND(bytes_written_today::BIGINT / POWER(1024, 1), 2) AS STRING) || ' KB'
           WHEN bytes_written_today::BIGINT < POWER(1024, 3) THEN CAST(ROUND(bytes_written_today::BIGINT / POWER(1024, 2), 2) AS STRING) || ' MB'
           WHEN bytes_written_today::BIGINT < POWER(1024, 4) THEN CAST(ROUND(bytes_written_today::BIGINT / POWER(1024, 3), 2) AS STRING) || ' GB'
           ELSE CAST(ROUND(bytes_written_today::BIGINT / POWER(1024, 4), 2) AS STRING) || ' TB'
       END AS bytes_written_today,
       'OK' AS bytes_written_today_severity
FROM system.monitoring.jobs
WHERE job_id = '<job_id>';

Columns written

Metric type: Warning

About this metric

The number of columns written to by the job. This value can change over time if the query uses * in the SELECT clause.

Limits: Warn when > 500

Timeframe: Today (midnight UTC to now)

More information

This is a fixed number if you're not using a SELECT * statement. You can have as many columns as you want in Upsolver, but a large number of columns can cause problems downstream in query engines such as Athena or Glue. Furthermore, this may not be what you intended, as a very wide table can be difficult to work with.

It is best practice to keep your tables to a maximum of a few hundred columns for downstream support and performance.

Run the following SQL command in a query window in Upsolver, replacing <job_id> with the Id for your job. The Id for your job can be found in the Details section under the Settings tab.

For additional columns, alter this statement and use SELECT *.

SELECT STRING_FORMAT('{0,number,#,###}', columns_written_to_today) AS columns_written_to_today,
       IF_ELSE(columns_written_to_today > 500, 'WARNING', 'OK') AS columns_written_to_today_severity
FROM system.monitoring.jobs
WHERE job_id = '<job_id>';

The job is writing a large number of columns. Consider transforming this table into a new table with a specific list of required columns, or select the required columns explicitly in the job.
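
As a hedged illustration of selecting columns explicitly (all job, table, and column names here are hypothetical, and a real job will need its own job options and time filter), a transformation job can pin the target schema to a fixed column list instead of SELECT *:

CREATE SYNC JOB load_narrow_orders
AS INSERT INTO default_glue_catalog.analytics.orders_narrow MAP_COLUMNS_BY_NAME
    SELECT order_id,
           customer_id,
           order_total
    FROM default_glue_catalog.raw.orders_wide
    WHERE $event_time BETWEEN run_start_time() AND run_end_time();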

Columns written - sparse

Metric type: Warning

About this metric

The number of sparse columns written to today. A sparse column is a column that appears in less than 0.01% of all rows.

Limits: Warn when > 50% of the number of columns written

Timeframe: Today (midnight UTC to now)

More information

Sparse columns often appear when the job writes a high number of columns, but each of those columns shows up in only one or two events. A large number of sparse columns is often the result of malformed data or unexpected results. Sparse columns make the data hard to work with downstream, so it is best to transform the data to reduce the number of columns.

Run the following SQL command in a query window in Upsolver, replacing <job_id> with the Id for your job. The Id for your job can be found in the Details section under the Settings tab.

For additional columns, alter this statement and use SELECT *.

SELECT STRING_FORMAT('{0,number,#,###}', sparse_columns_written_to_today) AS sparse_columns_written_to_today,
       IF_ELSE(sparse_columns_written_to_today > columns_written_to_today * 0.5, 'WARNING', 'OK') AS sparse_columns_written_to_today_severity
FROM system.monitoring.jobs
WHERE job_id = '<job_id>';

A large number of sparse columns was detected. Consider changing the data structure to use static column names and/or using arrays and structs where appropriate.
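
To see what share of the written columns are sparse relative to the 50% warning threshold, you can compute the ratio directly. This is an illustrative sketch; sparse_ratio is a computed alias, not a built-in column, and the query assumes columns_written_to_today is greater than zero:

SELECT sparse_columns_written_to_today,
       columns_written_to_today,
       sparse_columns_written_to_today / CAST(columns_written_to_today AS DOUBLE) AS sparse_ratio
FROM system.monitoring.jobs
WHERE job_id = '<job_id>';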
