This quickstart guide shows you how to create a connection to Snowflake.

Connect to your Snowflake EDW and create an output table for refined data

Upsolver uses a JDBC connection to write data to Snowflake. While your data is continuously streamed, this connection creates temporary tables in Snowflake and merges the resulting deltas into your target Snowflake tables. Upsolver manages the merging process automatically, so you no longer need to maintain delta files or tables yourself. This eliminates the need for large-scale table scans on Snowflake, which can be slow and expensive.

Your connection is persistent, so you won't need to re-create it for every job. The connection is also shared with other users in your organization.

Here’s the code for creating a connection to Snowflake:

// Syntax
CREATE SNOWFLAKE CONNECTION <connection_name>
       CONNECTION_STRING = 'jdbc:snowflake://<account>.snowflakecomputing.com/?db=<database>'
       USER_NAME = '<username>'
       PASSWORD = '<password>';

// Example
CREATE SNOWFLAKE CONNECTION my_snowflake_connection
       CONNECTION_STRING = 'jdbc:snowflake://demo.snowflakecomputing.com/?db=DEMO_DB'
       USER_NAME = 'demouser'
       PASSWORD = 'demopass';

Now you can start transforming your data. There are three ways to do this using a transformation job.

To take full advantage of these jobs, we recommend that you explicitly create your output table, which enables you to define partitions, primary keys, and more.

You don’t need to define all your output columns, as Upsolver automatically adds missing columns. If you do want to strictly control which columns are added, you can define them using the CREATE TABLE statement and set the ADD_MISSING_COLUMNS job property to FALSE.
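For example, a transformation job that disables automatic column creation might look like the following sketch. The job, connection, schema, table, and column names here are all hypothetical, and the exact job options available may vary by version; check the Upsolver job reference before using this pattern.

```sql
-- Hypothetical example: a job that writes only the columns
-- you defined explicitly in the target table.
CREATE JOB load_orders_to_snowflake
    ADD_MISSING_COLUMNS = FALSE
AS INSERT INTO SNOWFLAKE my_snowflake_connection.DEMO.ORDERS MAP_COLUMNS_BY_NAME
    SELECT order_id, customer_id, order_total
    FROM my_staging_table;
```

With ADD_MISSING_COLUMNS set to FALSE, any fields in the source that don't match a column in DEMO.ORDERS are simply not written, giving you strict control over the target schema.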

Here's the code to create an output table in Snowflake:

CREATE OR REPLACE TABLE <snowflake schema>.<snowflake table> (
    <column_name> <data_type>,
    <column_name> <data_type>,
    <column_name> <data_type>
);
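For instance, a concrete version of this statement for a hypothetical orders table (all names and types here are illustrative) might be:

```sql
CREATE OR REPLACE TABLE DEMO.ORDERS (
    order_id     VARCHAR NOT NULL PRIMARY KEY,
    customer_id  VARCHAR,
    order_total  NUMBER(10,2)
);
```

Note that Snowflake treats the PRIMARY KEY constraint as informational rather than enforced, but declaring it still documents the merge key for downstream jobs.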

Learn More

Please see the SQL command reference for Snowflake for the full list of connection options and examples.
