Output to Amazon Redshift

Prerequisites

Ensure that you have an Amazon Redshift connection with the correct permissions to write to your target table. Additionally, the target table and its columns should already exist in Redshift before you write to it using Upsolver.

You also need a storage connection with access to the bucket where the job stores the intermediate files it uses while running.

Finally, you should have previously created a staging table that contains the data you intend to write to Redshift.
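
If you have not yet created these connections, the sketch below shows roughly what they can look like, assuming a JDBC-based Redshift connection and an IAM-role-based S3 connection. The connection names, endpoint, credentials, and role ARN are placeholders rather than values from this guide, so check the connectors reference for the exact options:

CREATE REDSHIFT CONNECTION my_redshift_connection
    CONNECTION_STRING = 'jdbc:redshift://<cluster_endpoint>:5439/<database_name>'
    USER_NAME = '<username>'
    PASSWORD = '<password>';

CREATE S3 CONNECTION my_s3_storage
    AWS_ROLE = 'arn:aws:iam::<account_id>:role/<role_name>';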

Create a job writing to Redshift

After you have fulfilled the prerequisites, you can create an INSERT job as follows:

CREATE JOB load_data_to_redshift
    START_FROM = BEGINNING         -- process all existing data in the staging table
    SKIP_FAILED_FILES = TRUE       -- skip files that fail to process rather than failing the job
    FAIL_ON_WRITE_ERROR = FALSE    -- keep the job running if individual writes to Redshift fail
AS INSERT INTO REDSHIFT <redshift_connection>.<schema_name>.<target_table_name> 
    MAP_COLUMNS_BY_NAME            -- map SELECT expressions to target columns by name
    SELECT orderid AS app_name
    FROM <glue_catalog_name>.<database_name>.<table_name>
    WHERE time_filter();

This example uses only a subset of the job options available when writing to Redshift. Depending on your use case, you may want to configure a different set of options.
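
As a rough sketch of a more fully configured job, the example below adds a run interval and a description. The RUN_INTERVAL and COMMENT options shown here are assumptions based on other transformation jobs, so verify them against the Redshift job options reference before relying on them:

CREATE JOB load_data_to_redshift
    RUN_INTERVAL = 5 MINUTES       -- assumed option: how often the job executes
    START_FROM = BEGINNING
    SKIP_FAILED_FILES = TRUE
    FAIL_ON_WRITE_ERROR = FALSE
    COMMENT = 'Load staged orders into Redshift'  -- assumed option: free-text description
AS INSERT INTO REDSHIFT <redshift_connection>.<schema_name>.<target_table_name> 
    MAP_COLUMNS_BY_NAME
    SELECT orderid AS app_name
    FROM <glue_catalog_name>.<database_name>.<table_name>
    WHERE time_filter();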


Learn More

For the full list of job options with syntax and detailed descriptions, see the transformation job options for Amazon Redshift.

See the INSERT SQL command reference for more details and examples.
