Amazon S3

This page describes how to ingest your data from Amazon S3.

Prerequisites

Ensure that you have an Amazon S3 connection with the correct permissions to read from your intended bucket.

See: Connect to your Amazon S3 bucket

Additionally, if you are ingesting to the data lake, you need a metastore connection for creating the staging table, along with a corresponding storage connection for storing the table's underlying files.
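
For orientation, a minimal sketch of these prerequisite connections might look like the following. The role ARNs, bucket paths, and connection names are placeholders, and the exact options supported are described on the relevant connection pages:

-- Source connection with read permissions on the bucket to ingest from.
CREATE S3 CONNECTION upsolver_s3_samples
    AWS_ROLE = 'arn:aws:iam::111111111111:role/upsolver-s3-read-role'
    READ_ONLY = TRUE;

-- Storage connection used to store the staging table's underlying files.
CREATE S3 CONNECTION my_s3_storage
    AWS_ROLE = 'arn:aws:iam::111111111111:role/upsolver-s3-write-role';

-- Metastore connection used to create and manage the staging table.
CREATE GLUE_CATALOG CONNECTION default_glue_catalog
    AWS_ROLE = 'arn:aws:iam::111111111111:role/upsolver-glue-role'
    DEFAULT_STORAGE_CONNECTION = my_s3_storage
    DEFAULT_STORAGE_LOCATION = 's3://my-data-lake-bucket/staging/';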

To learn more about how tables work in Upsolver, see SQL tables.

Create a job that reads from Amazon S3

You can create a job to ingest your data from S3 into a staging table in the data lake or ingest directly into your target.

Ingest to the data lake

After completing the prerequisites, you can create your staging tables. The example below creates a table without defining columns or data types, as these will be inferred automatically by Upsolver, though you can define columns if required:

CREATE TABLE default_glue_catalog.upsolver_samples.orders_raw_data()
    PARTITIONED BY $event_date;

Create a staging table to store the data ingested from Amazon S3.

Upsolver recommends partitioning by the system column $event_date, or another date column within the data, to optimize your query performance.
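
If you do want to define columns up front, a minimal sketch might look like the following. The table name, column names, and types here are illustrative and should be adjusted to match your own data:

CREATE TABLE default_glue_catalog.upsolver_samples.orders_raw_data_defined (
    orderid string,
    ordertype string,
    orderdate date
)
PARTITIONED BY orderdate;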

Next, you can create an ingestion job as follows:

CREATE SYNC JOB load_orders_raw_data_from_s3
   CONTENT_TYPE = JSON
   AS COPY FROM S3 upsolver_s3_samples LOCATION = 's3://upsolver-samples/orders/' 
   INTO default_glue_catalog.upsolver_samples.orders_raw_data;

Create an ingestion job to copy data from Amazon S3 into a staging table in the data lake.

Note that multiple ingestion jobs can write to the same table, resulting in a final table that contains a UNION ALL of all data copied into it. This means that duplicate rows are not removed, and the column list may expand if new columns are detected.

This may not be your intended behavior, so ensure you are writing to the correct table before running your job.
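
For illustration only, a second job reading from another (hypothetical) folder and writing into the same staging table would simply append its rows alongside the first job's data rather than merging or deduplicating them:

CREATE SYNC JOB load_orders_raw_data_from_s3_backfill
   CONTENT_TYPE = JSON
   AS COPY FROM S3 upsolver_s3_samples LOCATION = 's3://upsolver-samples/orders-backfill/' 
   INTO default_glue_catalog.upsolver_samples.orders_raw_data;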

The example above uses only a small subset of the job options available when reading from Amazon S3. Depending on your use case, you may want to configure a different set of options. For instance, if you're reading from a folder partitioned by date, you may want to use the DATE_PATTERN option.
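
For example, if the objects were written under date-based folders such as s3://upsolver-samples/orders/2024/01/15/ (a hypothetical layout), a sketch of a job using DATE_PATTERN might look like this; check the job options reference for the exact pattern format:

CREATE SYNC JOB load_orders_raw_data_by_date
   CONTENT_TYPE = JSON
   DATE_PATTERN = 'yyyy/MM/dd'
   AS COPY FROM S3 upsolver_s3_samples LOCATION = 's3://upsolver-samples/orders/' 
   INTO default_glue_catalog.upsolver_samples.orders_raw_data;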

For the full list of job options, including syntax examples, see Amazon S3.

After your data has been ingested into your staging table, you are ready to build your data pipeline. Follow the instructions here for transforming your data and writing it to your target locations.
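
As a rough sketch of what a downstream transformation job could look like, assuming a hypothetical target table orders_transformed has already been created and using illustrative columns (see the transformation documentation linked above for the full syntax):

CREATE SYNC JOB transform_orders_and_insert_into_target
    START_FROM = BEGINNING
    ADD_MISSING_COLUMNS = TRUE
    RUN_INTERVAL = 1 MINUTE
    AS INSERT INTO default_glue_catalog.upsolver_samples.orders_transformed
    MAP_COLUMNS_BY_NAME
    -- The selected columns below are illustrative; adjust them to your data.
    SELECT orderid,
           customer.email AS customer_email,
           $event_time AS ingestion_time
    FROM default_glue_catalog.upsolver_samples.orders_raw_data
    WHERE $event_time BETWEEN run_start_time() AND run_end_time();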

Ingest directly to the target

Directly ingesting your data enables you to copy your data straight into the target system, bypassing the need for a staging table. The syntax and job options are identical to ingesting into a staging table; however, the target connector differs:

CREATE SYNC JOB ingest_s3_to_snowflake
   CONTENT_TYPE = JSON
   AS COPY FROM S3 upsolver_s3_samples LOCATION = 's3://upsolver-samples/orders/' 
   INTO SNOWFLAKE my_snowflake_connection.demo.orders_transformed;

Create a job to ingest from Amazon S3 directly into a target Snowflake database.

Job options

Transformations can be applied to your ingestion job to correct issues, exclude columns, or mask data before it lands in the target. Furthermore, you can use expectations to define data quality rules on your data stream and take appropriate action.
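
For instance, a hedged sketch of an ingestion job that excludes a column, masks an email address, and warns on rows with a missing order ID might look like the following. The column names are illustrative, and the exact option and expectation syntax is documented on the Ingestion jobs page:

CREATE SYNC JOB load_orders_with_rules
   CONTENT_TYPE = JSON
   EXCLUDE_COLUMNS = ('customer.password')
   COLUMN_TRANSFORMATIONS = (customer.email = MD5(customer.email))
   AS COPY FROM S3 upsolver_s3_samples LOCATION = 's3://upsolver-samples/orders/' 
   INTO default_glue_catalog.upsolver_samples.orders_raw_data
   WITH EXPECTATION exp_orderid_not_null EXPECT orderid IS NOT NULL ON VIOLATION WARN;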

For more information, see the Ingestion jobs page, which describes the available job options and includes examples.

Alter a job that reads from Amazon S3

Some job options are considered mutable, enabling you to run a SQL command to alter an existing ingestion job rather than create a new one. These options apply equally to jobs that ingest into the data lake and jobs that ingest directly into the target, and the syntax to alter a job is identical.

For example, take the job we created earlier:

CREATE SYNC JOB load_orders_raw_data_from_s3
   CONTENT_TYPE = JSON
   AS COPY FROM S3 upsolver_s3_samples LOCATION = 's3://upsolver-samples/orders/' 
   INTO default_glue_catalog.upsolver_samples.orders_raw_data;

If you want to keep the job as is but change the cluster that runs it, execute the following command:

ALTER JOB load_orders_raw_data_from_s3 
    SET COMPUTE_CLUSTER = my_new_cluster;

Note that some options, such as COMPRESSION, cannot be altered once the job has been created.

To check which job options are mutable, see Amazon S3.

Drop a job that reads from Amazon S3

If you no longer need a job, you can easily drop it using the following SQL command. This applies to jobs that ingest into the data lake and directly into the target:

DROP JOB load_orders_raw_data_from_s3;
