System Columns
This article explains the system columns that are automatically appended to your tables.
When you ingest your data into Upsolver, your table is enriched with various system columns that are generated by Upsolver based on your data.
The specific columns and what they describe are detailed below:
General
These columns are applicable to all staging tables:
Column | Data type | Description |
---|---|---|
$commit_time | timestamp | The time at which the record was committed to the current table. When reading historical data, this is the time at which the data is being read; when reading the latest data written to the source, it is the same as the source time. |
$compression | string | The compression type, if applicable. |
$event_date | date | The date at which the event was fired or recognized in the data source. This is typically used as a date partition column for your staging table. |
$event_timestamp | instant | The time at which the event was fired or recognized in the data source. |
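System columns can be queried like any other column. As an illustrative sketch (the catalog, database, and table names below are placeholders, not part of this article), a query over the general system columns might look like:

```sql
-- Placeholder names: substitute your own catalog, database, and staging table.
SELECT $event_date,
       $event_timestamp,
       $commit_time
FROM default_glue_catalog.my_db.my_staging_table
WHERE $event_date = DATE '2023-01-01'
LIMIT 10;
```

Because $event_date is typically the date partition column, filtering on it prunes partitions and keeps the scan small.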
Storage
These columns are applicable to staging tables with data ingested from storage connections, for example Amazon S3:
Column | Data type | Description |
---|---|---|
$compressed_file_name | string | The name of the file that was read, if it was compressed. |
$column_count | long | The number of columns read for CSV files. |
$file_last_modified | long | The time the file was last modified. |
$file_name | string | The name of the file that was read. |
$file_size | long | The size of the file that was read in bytes. |
$row_number | long | The index within the file corresponding to each record. |
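The storage columns are useful for auditing an ingestion job. As a sketch (table names are placeholders), you might count records per source file to verify that every S3 object was read:

```sql
-- Placeholder names: substitute your own catalog, database, and staging table.
SELECT $file_name,
       MAX($file_size) AS file_size_bytes,
       COUNT(*)        AS record_count
FROM default_glue_catalog.my_db.s3_staging_table
GROUP BY $file_name;
```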
Streaming
These columns are applicable to staging tables with data ingested from streaming sources:
Column | Data type | Description |
---|---|---|
$partition | long | The partition this data was read from. |
Amazon Kinesis
These columns are applicable to staging tables with data ingested from Kinesis:
Column | Data type | Description |
---|---|---|
$kinesis_timestamp | long | The time at which the event was fired or recognized in Kinesis. |
$offset | long | The offset of the stream that the data is read from. |
$shard | string | The ID of the shard the data was read from. |
$stream | string | The name of the stream the data was read from. |
Apache Kafka
These columns are applicable to staging tables with data ingested from Kafka:
Column | Data type | Description |
---|---|---|
$kafka_timestamp | long | The time at which the event was fired or recognized in Kafka. |
$row_number | long | The index corresponding to each record. |
$topic | string | The name of the topic the data was read from. |
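For streaming sources, the topic and partition columns help check how load is distributed. As a sketch (table names are placeholders), a per-partition record count for a Kafka-backed staging table could be written as:

```sql
-- Placeholder names: substitute your own catalog, database, and staging table.
SELECT $topic,
       $partition,
       COUNT(*) AS records
FROM default_glue_catalog.my_db.kafka_staging_table
GROUP BY $topic, $partition;
```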
Databases (CDC)
These columns apply to staging tables with data ingested from database (CDC) sources:
Column | Data type | Description |
---|---|---|
$binlog_timestamp | timestamp | Timestamp for when the event was processed by the connector. |
$database_name | string | The name of the source database. |
$debezium_version | string | The version number and name of the Debezium engine. |
$from_snapshot | Boolean | Whether the row was created from a snapshot (True) or from the latest change event (False). |
$is_delete | Boolean | Whether the change event deleted the row. |
$old_row | string | Optional. Represents the row prior to the captured change. |
$operation | string | Row operation. Values are: create, update, delete, or snapshot. TRUNCATE and MESSAGE operations are not currently supported and will result in an error. |
$primary_key | string | If a source table includes one or more primary keys, this field holds a concatenation of their values. |
$table_name | string | The name of the source table. |
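A common pattern with CDC sources is to filter on these columns when materializing the latest state. As a sketch (table names are placeholders), the query below keeps only rows produced by live change events and excludes deletions:

```sql
-- Placeholder names: substitute your own catalog, database, and staging table.
SELECT *
FROM default_glue_catalog.my_db.cdc_staging_table
WHERE $from_snapshot = FALSE
  AND $is_delete = FALSE;
```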
Microsoft SQL Server
These columns apply to staging tables with data ingested from Microsoft SQL Server:
Column | Data type | Description |
---|---|---|
$change_lsn | ||
$commit_lsn | ||
$commit_time | ||
$event_serial_time | ||
$full_table_name | ||
$item_index | ||
$row_number | ||
$schema_name | ||
$shard_number | ||
$source_id | ||
$source_name |
MongoDB
These columns apply to staging tables with data ingested from MongoDB:
Column | Data type | Description |
---|---|---|
$collection_name | string | |
$commit_time | string | |
$full_collection_name | string | |
$replica_set | string | |
$row_number | long | |
$shard_number | long | |
$source_id | string | |
$source_time | string | |
$transaction_number | long |
MySQL
These columns apply to staging tables with data ingested from MySQL:
Column | Data type | Description |
---|---|---|
$binlog_file_name | string | Binlog name where the event was recorded. |
$binlog_file_position | string | Position within the binlog. |
$binlog_row | string | Row within the change event. |
$full_table_name | string | Concatenation of the database and table names. |
$gtid | string | If GTID logging is enabled on the source, holds the Global Transaction ID of the event. |
$query | string | Optional. The original SQL query that produced the change event; requires query logging to be enabled on the source database. |
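These columns let you trace a record back to its position in the MySQL binlog. As a sketch (table names are placeholders), you could inspect the binlog provenance of recent change events:

```sql
-- Placeholder names: substitute your own catalog, database, and staging table.
SELECT $binlog_file_name,
       $binlog_file_position,
       $gtid,
       $operation
FROM default_glue_catalog.my_db.mysql_staging_table
WHERE $event_date = CURRENT_DATE;
```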
PostgreSQL
These columns apply to staging tables with data ingested from PostgreSQL:
Column | Data type | Description |
---|---|---|
$full_table_name | string | Concatenation of the schema and table names. |
$is_heartbeat | Boolean | True if the event originates from a heartbeat table. False if it originates from a tracked table. |
$lsn | string | Log Sequence Number (LSN) representing the location in the transaction log where the specific change event was recorded. |
$schema_name | string | The name of the source schema. |
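For PostgreSQL sources, heartbeat events can be excluded when querying the staged changes. As a sketch (table names are placeholders), ordering by LSN reconstructs the order in which changes were recorded:

```sql
-- Placeholder names: substitute your own catalog, database, and staging table.
SELECT $schema_name,
       $full_table_name,
       $lsn
FROM default_glue_catalog.my_db.pg_staging_table
ORDER BY $lsn;
```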