Connect to your S3 bucket
S3 connections have a wide variety of uses in SQLake.
Like other connection types, they can be used to read your data and/or write transformed data to a specified location. However, unlike other types, S3 connections also serve as the storage location for the underlying files of your Upsolver-managed tables, as well as for the intermediate files used while running a job.
This means that even if you don't intend to write to an S3 bucket as a target location, you should still have an S3 connection with write permissions to a bucket.
Note that an S3 connection is created by default when you deploy Upsolver on your AWS account.
An S3 connection can be created as simply as follows:
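A minimal sketch, assuming the simplest form of the statement (the connection name matches the one used in the later examples):
-- Relies on the default credentials from Upsolver's integration with your AWS account
CREATE S3 CONNECTION my_s3_connection;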
The connection in this example is created based on the default credentials derived from Upsolver's integration with your AWS account.
The following example also creates an S3 connection but explicitly configures the credentials by providing a specific role:
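A sketch of what this could look like; the connection name, role ARN, and external ID below are placeholders:
-- Credentials are provided explicitly via an IAM role instead of the default integration
CREATE S3 CONNECTION my_role_based_s3_connection
    AWS_ROLE = 'arn:aws:iam::123456789012:role/sqlake-role'
    EXTERNAL_ID = 'my-external-id';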
To establish a connection with specific permissions, you can configure the AWS_ROLE and EXTERNAL_ID options as in the example above, or you can configure the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY options to provide the credentials to read from your bucket.
Additionally, you can limit the list of buckets displayed within your catalog by providing the list of paths to display using PATH_DISPLAY_FILTER[S].
All connections have read and write permissions by default, but you can easily create a connection with only read access by setting READ_ONLY to true.
The ENCRYPTION_KMS_KEY or ENCRYPTION_CUSTOMER_MANAGED_KEY options can be used to configure your bucket's encryption.
Finally, using the COMMENT option, you can add a description for your connection.
For a detailed guide on how to configure permissions to access your S3 data in SQLake, see: Configure access to S3
For the full list of connection options with syntax and detailed descriptions, see: S3 connection with SQL
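As an illustrative sketch combining several of the options described above (the values, and the exact form of the path filter option, are assumptions rather than recommendations):
CREATE S3 CONNECTION my_read_only_s3_connection
    AWS_ROLE = 'arn:aws:iam::123456789012:role/sqlake-read-role'
    EXTERNAL_ID = 'my-external-id'
    -- Show only this path in the catalog (assumed single-path form of PATH_DISPLAY_FILTER[S])
    PATH_DISPLAY_FILTER = 's3://my-bucket/raw-data/'
    READ_ONLY = true
    COMMENT = 'Read-only connection to the raw data bucket';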
Once you've created your connection, you are ready to move on to the next step of building your data pipeline: reading your data into SQLake with an ingestion job.
Certain connection options are considered mutable, meaning that in some cases you may only need to run a SQL command to alter an existing S3 connection rather than create an entirely new one.
For example, take the S3 connection we created previously based on default credentials:
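Repeating the assumed minimal sketch from above:
-- Created with the default credentials from Upsolver's AWS integration
CREATE S3 CONNECTION my_s3_connection;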
If you only wish to change the connection's permissions, you can run the following command:
ALTER S3 CONNECTION my_s3_connection
SET AWS_ROLE = 'arn:aws:iam::123456789012:role/new-sqlake-role';
Note that some options, such as READ_ONLY, cannot be altered once the connection has been created.
If you no longer need a certain connection, you can easily drop it with the following SQL command:
DROP CONNECTION my_s3_connection;
However, note that a connection cannot be deleted if existing tables or jobs depend on it.