CREATE TABLE
Create a data lake table, defining the schema and related metadata in the data lake catalog.
Unlike in most databases, a table can be created without any columns. Upsolver tables support dynamic column lists, meaning the list of columns can expand based on the data being loaded.
Tables are defined and managed in a metastore, while the physical data is stored as files in a corresponding storage location. Currently, Upsolver supports writing and managing metadata in the AWS Glue Data Catalog and storing data lake files in Amazon S3.
By default, the CREATE TABLE command creates a base table in your specified catalog, along with a view that is used to query the data.
Syntax
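The original syntax block is not reproduced here; the following is a condensed sketch assembled from the clauses documented on this page (column list, partition clause, primary key clause, and table options), so the exact grammar may differ:

    CREATE TABLE <catalog_name>.<schema_name>.<table_name>
        [( <column_name> <column_type> [, ...] )]
        [PARTITIONED BY <column_name> [, ...]]
        [PRIMARY KEY <column_name> [, ...]]
        [<table_option> = <value> [ ... ]];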
Table identifier
Table identifiers are provided in the following format:
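The original format block is not shown here; in essence, the identifier is a three-part, dot-separated name, where the catalog is a metastore connection and the schema corresponds to a database in that metastore:

    <catalog_name>.<schema_name>.<table_name>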
Note that only metastore connection types are accepted for the catalog name.
Valid table names match the following identifier format:
Column types
Partition clause
For each distinct value of the partition columns you provide, a partition is created in the table's underlying metastore.
These partitions allow you to query only the files belonging to an individual partition rather than all of the existing files. This can improve query efficiency when you filter based on a specific partition value.
A table's partitions can only be defined when the table is first created. The accepted column types are string, bigint, boolean, and date.
Note that if this is a staging table that you intend to use to ingest external data with the COPY FROM command, the column names you partition by must exactly match the names of the fields as they arrive. Additionally, these columns must be at the root of the record, not nested within a subfield.
If this is a target table that you intend to write an aggregated output to, the partition column should be one of the aggregation key columns, not an aggregated column.
For more information, see Working with partition indexes.
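For instance, a minimal sketch of an aggregated target table partitioned on a column that is also an aggregation key (the catalog, schema, table, and column names below are illustrative):

    CREATE TABLE default_glue_catalog.analytics.daily_orders (
        order_date date
    )
    PARTITIONED BY order_date;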
Primary key clause
A table's primary key column(s) contains the values that uniquely identify each row.
When using an INSERT job to write to a table, values are inserted or updated based on the primary keys. This means that when writing to a table with no primary keys defined, the job only ever appends the new data that arrives; no existing values are updated.
In order to use a table in a MERGE job, at least one primary key column must be defined.
Additionally, if this is a staging table that you intend to use to ingest external data with COPY FROM, you should not define any primary key columns.
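As a sketch, the following defines a table keyed on a single column so that jobs writing to it upsert rows by that key (identifiers are illustrative):

    CREATE TABLE default_glue_catalog.analytics.customers (
        customer_id string
    )
    PRIMARY KEY customer_id;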
Table options
GLOBALLY_UNIQUE_KEYS
Type: Boolean
Default: false
(Optional) By default, partition keys are implicitly part of the primary key. This means that when upserting, only rows with the same primary key and partition are replaced. This is the more performant and therefore recommended option.
However, some use cases may require that the primary keys be globally unique (in other words, unique across partitions). This means that when upserting, rows with the same primary key should be replaced, even if they belong to different partitions. Note that this also means that rows can "switch" partitions.
If that is the case, set this option to true.
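As a sketch, the example below extends a keyed, partitioned table so that upserts match on the primary key across partitions (identifiers are illustrative, and the option placement follows the syntax sketch above):

    CREATE TABLE default_glue_catalog.analytics.customers (
        customer_id string,
        signup_date date
    )
    PARTITIONED BY signup_date
    PRIMARY KEY customer_id
    GLOBALLY_UNIQUE_KEYS = true;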
STORAGE_CONNECTION
Type: identifier
Default: Default storage connection configured for the metastore connection this table is created under
(Optional) The storage connection associated with the STORAGE_LOCATION for the table's underlying files.
Only a storage type connection can be used here (e.g. S3, Blob storage, GCS, Oracle object storage), and it should match the catalog's metastore. For example, if Glue is used as the metastore, only S3 is allowed as a storage connection.
When set, STORAGE_LOCATION must be configured as well to provide a path to store the data.
STORAGE_LOCATION
Type: text
Default: Default storage location configured for the metastore connection this table is created under
(Optional) The storage location for the table's underlying files.
For S3, it should be provided in the format s3://bucket_name/path_to_data.
This option is required when STORAGE_CONNECTION is set.
When set, STORAGE_CONNECTION must be configured as well to provide a connection with access to write to the specified storage location.
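For example, a sketch that pins the table's files to a specific bucket and prefix (the connection name and path are illustrative):

    CREATE TABLE default_glue_catalog.analytics.events ()
        STORAGE_CONNECTION = my_s3_connection
        STORAGE_LOCATION = 's3://my-bucket/events/';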
COMPUTE_CLUSTER — editable
Type: identifier
Default: The sole cluster in your environment
(Optional) The compute cluster that processes the table.
This option can only be omitted when there is just one cluster in your environment. You must specify which one to use when you have more than one compute cluster.
COMPRESSION — editable
Values: { SNAPPY | GZIP }
Default: SNAPPY
(Optional) Type of compression for the table data.
COMPACTION_PROCESSES — editable
Type: integer
Default: 1
(Optional) Determines the number of compaction processes the table can run in parallel when it periodically compacts your data.
DISABLE_COMPACTION — editable
Type: Boolean
Default: false
(Optional) When true, disables the compaction process.
RETENTION_DATE_PARTITION
Type: identifier
Default: The only partition column of type date
(Optional) This configures the partition column to be used to determine whether the retention period has passed for a given record.
This option is required if you have more than one date partition column.
TABLE_DATA_RETENTION — editable
Value: <integer> DAYS
(Optional) When set, data in partitions that have passed the retention period is deleted from the table. The number of days can range between 1 and 9999.
This option is not a deterministic mechanism that deletes data the moment it surpasses the defined threshold. It is closer to the lifecycle policies of common blob storage services, such as Amazon S3, and is designed to save storage costs rather than to delete data at a specific time. Therefore, when data passes the retention period, it will be deleted at some point in the future and can no longer be relied on to exist, though Upsolver aims to delete it within a reasonable timeframe.
Be aware that a transformation job that reads from a table with a defined data retention may or may not read data that has surpassed the retention threshold.
For example, if the current time is 2023-02-23 12:30:00 UTC and you have defined TABLE_DATA_RETENTION = 2 DAYS, you can expect data written during 2023-02-23, 2023-02-22, and 2023-02-21 to exist in the table. The retention threshold truncates to the nearest day, so when the time changes to 2023-02-24 00:00:00 UTC, you can no longer expect data from 2023-02-21 to be present in the table, although it may remain for a while.
Note that you need at least one date partition column for this option to work.
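Continuing the example above as a sketch (the identifiers and partition column name are illustrative):

    CREATE TABLE default_glue_catalog.analytics.events (
        event_date date
    )
    PARTITIONED BY event_date
    TABLE_DATA_RETENTION = 2 DAYS;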
COLUMN_DATA_RETENTION — editable
Type: list of (<column_name>, <integer> DAYS) pairs
(Optional) When set, once a column's retention period has elapsed for a partition, the data in that partition is rewritten without the contents of that column. The number of days can range between 1 and 9999.
Note that you need at least one date partition column for this to work.
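As a sketch of the pair syntax (the column names and durations are illustrative, and the exact punctuation of the list may differ from what is shown here):

    CREATE TABLE default_glue_catalog.analytics.events (
        event_date date,
        user_email string,
        client_ip string
    )
    PARTITIONED BY event_date
    COLUMN_DATA_RETENTION = (user_email, 30 DAYS), (client_ip, 7 DAYS);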
COMMENT — editable
Type: text
(Optional) A description or comment regarding this table.
EXPOSE_IN_CATALOG — editable
Type: Boolean
Default: true
(Optional) When false, this option disables the creation of the catalog entities (tables and views). This means the data will be stored in S3 without exposed metadata or schema, preventing it from being queried.
Examples
Minimum example
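The original example block is not reproduced here; as a sketch, assuming a Glue catalog connection named default_glue_catalog and a schema named demo:

    CREATE TABLE default_glue_catalog.demo.my_first_table();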
Note that the parentheses at the end can be omitted when creating a table without any columns.
Basic example
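As a sketch, a table with explicit columns, a date partition, and a primary key (identifiers are illustrative):

    CREATE TABLE default_glue_catalog.demo.orders (
        order_id string,
        order_date date
    )
    PARTITIONED BY order_date
    PRIMARY KEY order_id;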
Full example
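As a sketch exercising several of the options documented above (all identifiers, paths, and values are illustrative, and the option placement follows the syntax sketch above):

    CREATE TABLE default_glue_catalog.demo.orders_wide (
        order_id string,
        customer_email string,
        order_date date
    )
    PARTITIONED BY order_date
    PRIMARY KEY order_id
    STORAGE_CONNECTION = my_s3_connection
    STORAGE_LOCATION = 's3://my-bucket/orders_wide/'
    COMPUTE_CLUSTER = my_cluster
    COMPRESSION = SNAPPY
    COMPACTION_PROCESSES = 2
    TABLE_DATA_RETENTION = 30 DAYS
    COMMENT = 'Orders table with retention and custom storage';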
Partitioned staging table example
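As a sketch of a staging table intended for COPY FROM: no primary keys, a mostly dynamic column list, and partitioning on a root-level date field of the incoming data (identifiers are illustrative, and order_date is assumed to arrive under exactly that name):

    CREATE TABLE default_glue_catalog.demo.orders_raw_staging (
        order_date date
    )
    PARTITIONED BY order_date;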