
Amazon Redshift

This page describes how to create and maintain connections to your Amazon Redshift database.


Before you can write your transformed data to a table in Amazon Redshift, you should first establish a connection to your Amazon Redshift database.

Create a Redshift connection

Simple example

A Redshift connection can be created as simply as follows:

CREATE REDSHIFT CONNECTION redshift_connection
    CONNECTION_STRING = 'jdbc:redshift://<host_name>:<port>/<database_name>'
    USER_NAME = 'your username'
    PASSWORD = 'your password';

where:

  • host_name: The endpoint of the Amazon Redshift cluster.

  • port: The port number that you specified when you launched the cluster. The default port for Amazon Redshift is 5439.

  • database_name: The name of the target database.
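For example, with a hypothetical cluster endpoint in us-west-2 and a database named dev, the completed statement would look like this:

-- the endpoint and database name below are hypothetical placeholders
CREATE REDSHIFT CONNECTION redshift_connection
    CONNECTION_STRING = 'jdbc:redshift://examplecluster.abc123xyz789.us-west-2.redshift.amazonaws.com:5439/dev'
    USER_NAME = 'your username'
    PASSWORD = 'your password';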

Read more:

  • Connection string arguments in Redshift
  • Finding your cluster connection string

Note that a Redshift connection must specify the database it is connecting to within the connection string. This means that in order to connect to multiple databases within your account, you need to create at least one connection per database.
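For example, to write to two databases on the same cluster, you would create one connection per database. A minimal sketch, using the hypothetical database names sales_db and marketing_db:

-- one connection per database; sales_db and marketing_db are hypothetical names
CREATE REDSHIFT CONNECTION redshift_sales
    CONNECTION_STRING = 'jdbc:redshift://<host_name>:5439/sales_db'
    USER_NAME = 'your username'
    PASSWORD = 'your password';

CREATE REDSHIFT CONNECTION redshift_marketing
    CONNECTION_STRING = 'jdbc:redshift://<host_name>:5439/marketing_db'
    USER_NAME = 'your username'
    PASSWORD = 'your password';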

Full example

The following example also creates a Redshift connection, but additionally limits the maximum number of concurrent connections to your database using the MAX_CONCURRENT_CONNECTIONS option, and attaches a COMMENT describing the connection:

CREATE REDSHIFT CONNECTION redshift_connection
    CONNECTION_STRING = 'jdbc:redshift://<host_name>:5439/<database_name>'
    USER_NAME = 'your username'
    PASSWORD = 'your password'
    MAX_CONCURRENT_CONNECTIONS = 10
    COMMENT = 'My new Redshift connection';

Alter a Redshift connection

Some connection options are mutable, meaning that you can run a SQL command to alter an existing Redshift connection rather than create a new one.

For example, to point the connection at a different database while keeping everything else the same, run the following command:

ALTER REDSHIFT CONNECTION my_redshift_connection
    SET CONNECTION_STRING = 'jdbc:redshift://<host_name>:5439/<database_name>';
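Other mutable options follow the same pattern. For instance, assuming MAX_CONCURRENT_CONNECTIONS is among the mutable options (the SQL command reference linked under Learn More lists which options can be altered), you could raise the limit like so:

-- assumes MAX_CONCURRENT_CONNECTIONS is mutable; check the command reference
ALTER REDSHIFT CONNECTION my_redshift_connection
    SET MAX_CONCURRENT_CONNECTIONS = 20;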

Drop a Redshift connection

If you no longer need a connection, you can easily drop it with the following SQL command:

DROP CONNECTION my_redshift_connection; 

Note, however, that a connection cannot be deleted if existing tables or jobs depend on it. Attempting to do so will raise the following error:

Cannot delete connection due to usages: <tables> <jobs>
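In that case, drop the dependent objects first, then the connection. A sketch, assuming a single dependent job named my_redshift_job (a hypothetical name):

-- my_redshift_job is a hypothetical dependent job
DROP JOB my_redshift_job;
DROP CONNECTION my_redshift_connection;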

Learn More

To discover which connection options are mutable, and to learn more about each option, please see the SQL command reference for Amazon Redshift.