Output to Snowflake
Last updated
Before you begin, ensure that you have:

- A connection with the correct permissions to write to your target table.
- The target table already created in Snowflake before writing to it using Upsolver.
- A storage connection with access to the bucket the job will use to store intermediate files while it runs.
- A staging table, created previously, that contains your ingested data.
After you have fulfilled the prerequisites, you can create an INSERT job as follows:
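The original code sample is not shown here, but a minimal sketch of such a job might look like the following. All names (the job, connection, schema, and table identifiers) are placeholders, not values from your environment:

```sql
-- Sketch of an INSERT job that aggregates staged data and writes it to Snowflake.
-- Placeholder names: replace them with your own connection, database, and tables.
CREATE JOB load_orders_to_snowflake
    RUN_INTERVAL = 1 MINUTE
AS INSERT INTO SNOWFLAKE my_snowflake_connection.DEMO.ORDER_COUNTS
    MAP_COLUMNS_BY_NAME
SELECT
    customer_id,
    COUNT(*) AS order_count
FROM default_glue_catalog.demo.orders_staging
GROUP BY customer_id;
```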
This example uses only a subset of the job options available when writing to Snowflake. Depending on your use case, you may want to configure a different set of options. For instance, because this example contains an aggregation, you may want to configure the AGGREGATION_PARALLELISM option.
Certain job options are considered mutable, meaning that in some cases, you can run a SQL command to alter an existing transformation job rather than having to create a new one.
For example, take the job created earlier. If you want to keep the job as is but change the cluster that runs it, you can run the following command:
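The original command is not shown here; a sketch of an ALTER statement that switches the compute cluster might look like this, assuming the placeholder job and cluster names from the earlier example:

```sql
-- Point an existing job at a different compute cluster.
-- Both names below are placeholders.
ALTER JOB load_orders_to_snowflake
    SET COMPUTE_CLUSTER = "my_other_cluster";
```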
Note that some options, such as RUN_INTERVAL, cannot be altered once the job has been created.
If you no longer need a certain job, you can easily drop it with the following SQL command:
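The original statement is not shown here; dropping a job is a single command, shown below with the placeholder job name used throughout:

```sql
-- Remove the job when it is no longer needed. The name is a placeholder.
DROP JOB load_orders_to_snowflake;
```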
Learn More
For the full list of job options with syntax and detailed descriptions, see the transformation job options for Snowflake.
See the SQL command reference for more details and examples.