Amazon Redshift

Follow these steps to use Amazon Redshift as your target.

Last updated 11 months ago

Step 1 - Connect to Amazon Redshift

Create a new connection

Click Create a new connection if it is not already selected.

In the Connection String field, enter your connection in the following format:

jdbc:redshift://<ENDPOINT>:<PORT>/<DB_NAME>

where:

  • ENDPOINT: The endpoint of the Amazon Redshift cluster.

  • PORT: The port number that you specified when you launched the cluster. The default port for Amazon Redshift is 5439.

  • DB_NAME: The name of the target database.
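As a sketch, assembling the connection string from these three values looks like the following. The endpoint, port, and database name below are hypothetical placeholders, not real cluster details:

```python
# Hypothetical values for illustration -- substitute your own cluster details.
endpoint = "examplecluster.abc123xyz789.us-west-2.redshift.amazonaws.com"
port = 5439  # the default Amazon Redshift port
db_name = "dev"

# Combine the parts into the JDBC format shown above.
connection_string = f"jdbc:redshift://{endpoint}:{port}/{db_name}"
print(connection_string)
```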

Read more:

  • Connection string arguments in Redshift

  • Finding your cluster connection string

Provide the Username and Password that will be used to authenticate to the database.

In the Name your connection field, enter a name for this connection. Note that this connection will be available to other users in your organization.

Use an existing connection

By default, if you have already created a connection, Upsolver selects Use an existing connection, and your Redshift connection is populated in the list.

If your organization has multiple connections, select the target connection you want to use.

Step 2 - Select where to ingest the data

Select an existing schema for the ingested data in the Select a target schema list.

When ingesting multiple source schemas into Redshift, you have the following options:

  1. Ingest all tables into a single Redshift schema and add the source schema name to every new table created in Redshift.

  2. Map every source schema into a target schema in Redshift.

If you are ingesting into a single table, provide a name for the new table. If the source is a database (Microsoft SQL Server, MongoDB, MySQL, or PostgreSQL), Upsolver will create new tables in the selected schema.
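The two schema-mapping options above can be sketched as follows. The schema and table names here are hypothetical examples, not part of the product:

```python
# Hypothetical source tables as (schema, table) pairs.
source_tables = [("sales", "orders"), ("sales", "customers"), ("hr", "employees")]

# Option 1: ingest everything into a single Redshift schema, adding the
# source schema name to each table created in Redshift.
single_schema = [("analytics", f"{schema}_{table}") for schema, table in source_tables]

# Option 2: map every source schema to its own target schema in Redshift.
schema_map = {"sales": "rs_sales", "hr": "rs_hr"}
per_schema = [(schema_map[schema], table) for schema, table in source_tables]

print(single_schema)
print(per_schema)
```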
