
Snowflake

To write transformed data into a Snowflake table using Upsolver, you must first create a connection with the appropriate credentials. Upsolver supports two authentication methods for Snowflake:

  1. Username and Password: Requires a username and the corresponding password.

  2. Key Pair Authentication: Requires a username, a private key file, and an optional private key passphrase.

Syntax

CREATE SNOWFLAKE CONNECTION <connection_identifier>
    CONNECTION_STRING = '<connection_string>'
    USER_NAME = '<user_name>'
    { PASSWORD = '<password>'
        | PRIVATE_KEY_FILE = '<file_path>' [ PRIVATE_KEY_FILE_PWD = '<file_passphrase>' ] }
    [ MAX_CONCURRENT_CONNECTIONS = <integer> ]
    [ COMMENT = '<comment>' ]

Connection options

CONNECTION_STRING — editable

Type: text

The connection string to use when connecting to the database.

Format:

jdbc:snowflake://
    <ACCOUNT_WITH_REGION>.snowflakecomputing.com?
    db=<DB_NAME>&warehouse=<WAREHOUSE_NAME>&role=<ROLE_NAME>

Where:

  • ACCOUNT_WITH_REGION.snowflakecomputing.com

    • Your Snowflake account URL, including the region and, where applicable, the cloud provider.

    • Example: snowflakedemo.us-east-2.aws.snowflakecomputing.com

  • DB_NAME

    • The name of the database to connect to.

  • WAREHOUSE_NAME

    • (Optional) The warehouse name. If not provided, the default warehouse is used. If no default warehouse exists, the CREATE CONNECTION command fails.

  • ROLE_NAME

    • (Optional) The name of the role to use when connecting. If not provided, the default role is used. If no default role exists, the CREATE CONNECTION command fails.

      To ensure proper functionality and access for the connecting user, the following permissions must be granted in Snowflake (a consolidated sketch follows the list):

      • Grant usage on the specified database:

        GRANT USAGE ON DATABASE <database_name> TO ROLE <role_name>;
      • Grant usage on the specified schema within the database:

        GRANT USAGE ON SCHEMA <schema_name> TO ROLE <role_name>;
      • Grant permissions to create tables and stages within the specified schema:

        GRANT CREATE TABLE, CREATE STAGE ON SCHEMA <schema_name> TO ROLE <role_name>;
      • Grant permission to create schemas within the specified database:

        GRANT CREATE SCHEMA ON DATABASE <database_name> TO ROLE <role_name>;

Read more about connection string arguments in Snowflake.
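
Putting these grants together, a minimal role setup might look like the following sketch. It is illustrative only: the role name upsolver_role, database demo_db, schema demo_schema, and user your_user are placeholders, and the GRANT ROLE and DEFAULT_ROLE statements are standard Snowflake SQL for attaching the role to the connecting user.

-- Placeholder names throughout; adapt to your environment.
CREATE ROLE IF NOT EXISTS upsolver_role;
GRANT USAGE ON DATABASE demo_db TO ROLE upsolver_role;
GRANT USAGE ON SCHEMA demo_db.demo_schema TO ROLE upsolver_role;
GRANT CREATE TABLE, CREATE STAGE ON SCHEMA demo_db.demo_schema TO ROLE upsolver_role;
GRANT CREATE SCHEMA ON DATABASE demo_db TO ROLE upsolver_role;
-- Allow the connecting user to assume the role, optionally by default:
GRANT ROLE upsolver_role TO USER your_user;
ALTER USER your_user SET DEFAULT_ROLE = upsolver_role;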

USER_NAME

Type: text

The user to authenticate to the database with.

PASSWORD — editable with USER_NAME

Type: text

The password for the user.

PRIVATE_KEY_FILE — editable with USER_NAME

Type: text

Local path to the private key on Upsolver's server.

Key-Pair Authentication Setup

Step 1 - Generate keys

Configure your Snowflake account for key-pair authentication following the Snowflake Documentation.
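
A minimal sketch of what this step typically involves, following Snowflake's standard key-pair procedure; the OpenSSL commands, file names, and user name are illustrative, and the public key value is a truncated placeholder:

-- Generate an encrypted RSA key pair outside Snowflake, e.g. with OpenSSL:
--   openssl genrsa 2048 | openssl pkcs8 -topk8 -v2 des3 -inform PEM -out rsa_key_encrypted.p8
--   openssl rsa -in rsa_key_encrypted.p8 -pubout -out rsa_key.pub
-- Then register the public key (the PEM body, without its header and
-- footer lines) on the Snowflake user that Upsolver connects as:
ALTER USER your_user SET RSA_PUBLIC_KEY = 'MIIBIjANBgkqh...';

If the private key is encrypted, supply its passphrase through PRIVATE_KEY_FILE_PWD when creating the connection.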

Step 2 - Upload the Key

Upload the private key to Upsolver's server. Use the resulting file path on the server as the PRIVATE_KEY_FILE parameter for the connection setup. For detailed instructions on uploading the key, refer to this guide.

PRIVATE_KEY_FILE_PWD — editable with PRIVATE_KEY_FILE

(Optional) Specifies the passphrase used to decrypt the private key file if it is encrypted. This parameter must be used alongside PRIVATE_KEY_FILE.

MAX_CONCURRENT_CONNECTIONS — editable

Type: integer

(Optional) The maximum number of concurrent connections to the database.

Limiting this value may reduce the load on the database but can increase data latency.

COMMENT — editable

Type: text

(Optional) A description or comment regarding this connection.

Minimum example

CREATE SNOWFLAKE CONNECTION my_snowflake_connection
    CONNECTION_STRING = 'jdbc:snowflake://snowflakedemo.us-east-1.snowflakecomputing.com?db=DEMO_DB&warehouse=DEMO_WH&role=ADMIN'
    USER_NAME = 'your_user'
    PASSWORD = 'your_pass';

Full example

User-Password Authentication Example

CREATE SNOWFLAKE CONNECTION my_snowflake_connection
    CONNECTION_STRING = 'jdbc:snowflake://snowflakedemo.us-east-1.snowflakecomputing.com?db=DEMO_DB&warehouse=DEMO_WH&role=ADMIN'
    USER_NAME = 'your_user'
    PASSWORD = 'your_pass'
    MAX_CONCURRENT_CONNECTIONS = 23
    COMMENT = 'Snowflake connection example';

Key-Pair Authentication Example

CREATE SNOWFLAKE CONNECTION my_snowflake_connection
    CONNECTION_STRING = 'jdbc:snowflake://snowflakedemo.us-east-1.snowflakecomputing.com?db=DEMO_DB&warehouse=DEMO_WH&role=ADMIN'
    USER_NAME = 'your_user'
    PRIVATE_KEY_FILE = 'path/to/rsa_key_encrypted.p8'
    PRIVATE_KEY_FILE_PWD = 'private_key_passphrase'
    MAX_CONCURRENT_CONNECTIONS = 23
    COMMENT = 'Snowflake connection example';
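
Once the connection exists, transformation jobs can reference it as a target. The following is a minimal sketch, not a verbatim template: the job, catalog, schema, and table names are placeholders, and the full syntax is covered on the INSERT and Snowflake job options reference pages.

CREATE SYNC JOB load_orders_to_snowflake
    START_FROM = BEGINNING
AS INSERT INTO SNOWFLAKE my_snowflake_connection.DEMO_SCHEMA.ORDERS
    MAP_COLUMNS_BY_NAME
    SELECT *
    FROM default_glue_catalog.demo.orders_raw;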
