Transformation

Transformation jobs copy data from your source to your target and are written in familiar SQL. These Quickstart guides give you the essential skills for writing a transformation job; beyond the basics, Upsolver includes an extensive library of functions and operators you can use to build more advanced solutions for your requirements.

Before writing a transformation job, ensure you have a connection to your target and that your ingestion job is running.
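
Here is a minimal sketch of what a transformation job looks like; the catalog, schema, and table names are placeholders for illustration, and the options shown are only a small subset of what jobs support:

```sql
-- A minimal transformation job (illustrative names throughout).
-- Reads staged events and copies selected columns to a target table.
CREATE JOB transform_orders
  START_FROM = BEGINNING
  RUN_INTERVAL = 1 MINUTE
AS INSERT INTO default_glue_catalog.demo.orders_transformed
SELECT
  orderid,
  customer_email,
  nettotal AS order_total
FROM default_glue_catalog.demo.orders_staging
WHERE $event_time BETWEEN run_start_time() AND run_end_time();
```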

Managing Data

Upsert Data to the Target Table: Perform inserts and updates in your target data using the INSERT and MERGE statements.
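
As a preview, an upsert job might follow this sketch, with illustrative table and column names; check the reference for the exact MERGE options your target supports:

```sql
-- Hypothetical upsert job: update matching rows, insert new ones.
CREATE JOB upsert_orders
  START_FROM = BEGINNING
  RUN_INTERVAL = 1 MINUTE
AS MERGE INTO default_glue_catalog.demo.orders_target AS target
USING (
  SELECT orderid, status, nettotal
  FROM default_glue_catalog.demo.orders_staging
  WHERE $event_time BETWEEN run_start_time() AND run_end_time()
) AS source
ON target.orderid = source.orderid
WHEN MATCHED THEN REPLACE                          -- overwrite existing rows
WHEN NOT MATCHED THEN INSERT MAP_COLUMNS_BY_NAME;  -- add new rows by column name
```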

Delete Data from the Target Table: Find out how to use the MERGE statement to delete rows from the target that have been deleted in the source.
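
For instance, if the source marks removed rows with a flag column, a delete job could look like this sketch (the is_deleted column and all names are illustrative):

```sql
-- Hypothetical delete job: remove target rows the source flags as deleted.
CREATE JOB delete_orders
  START_FROM = BEGINNING
  RUN_INTERVAL = 1 MINUTE
AS MERGE INTO default_glue_catalog.demo.orders_target AS target
USING (
  SELECT orderid, is_deleted
  FROM default_glue_catalog.demo.orders_staging
  WHERE $event_time BETWEEN run_start_time() AND run_end_time()
) AS source
ON target.orderid = source.orderid
WHEN MATCHED AND source.is_deleted THEN DELETE;
```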

Aggregate and Output Data: Understand how to aggregate data from the staging zone as part of a transformation job.
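
A sketch of an aggregating job, again with placeholder names, which rolls up staged orders per customer over each execution interval:

```sql
-- Hypothetical aggregation job: one row per customer per interval.
CREATE JOB aggregate_orders
  START_FROM = BEGINNING
  RUN_INTERVAL = 1 MINUTE
AS INSERT INTO default_glue_catalog.demo.orders_by_customer
SELECT
  customer_id,
  COUNT(*)      AS order_count,
  SUM(nettotal) AS total_spent
FROM default_glue_catalog.demo.orders_staging
WHERE $event_time BETWEEN run_start_time() AND run_end_time()
GROUP BY customer_id;
```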

Join Two Data Streams: Learn how to join multiple data streams into one table using a MATERIALIZED VIEW.
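
The pattern, sketched here with illustrative names, is to build a materialized view over one stream and then join it from a job that reads the other stream:

```sql
-- Hypothetical lookup view: the latest known name per customer.
CREATE MATERIALIZED VIEW default_glue_catalog.demo.customer_lookup AS
SELECT
  customer_id,
  LAST(fullname) AS fullname
FROM default_glue_catalog.demo.customers_staging
GROUP BY customer_id;

-- Hypothetical join job: enrich orders with the customer lookup.
CREATE JOB join_orders_customers
  START_FROM = BEGINNING
  RUN_INTERVAL = 1 MINUTE
AS INSERT INTO default_glue_catalog.demo.orders_enriched
SELECT o.orderid, o.nettotal, c.fullname
FROM default_glue_catalog.demo.orders_staging AS o
LEFT JOIN default_glue_catalog.demo.customer_lookup AS c
  ON c.customer_id = o.customer_id
WHERE $event_time BETWEEN run_start_time() AND run_end_time();
```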

Data Targets

Output to Amazon Athena: Learn how to create a job that writes to Amazon Athena.

Output to Amazon Redshift: Find out how to write a transformation job that writes to Amazon Redshift.

Output to Amazon S3: Discover how you can create a job that copies your data to Amazon S3.

Output to Elasticsearch: Learn how to write a transformation job to copy your data to Elasticsearch.

Output to Snowflake: Write a job that copies your data to a Snowflake table.
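
All of these targets share the same basic shape: create a connection to the target, then write a job that inserts query results into it. Below is a sketch using Snowflake as the example, with an assumed connection named my_snowflake_connection and placeholder table names; the exact table reference syntax varies by connector, so consult the relevant guide:

```sql
-- Hypothetical output job: copy transformed rows to a Snowflake table.
-- Assumes a Snowflake connection named my_snowflake_connection exists;
-- the table reference format depends on the target connector.
CREATE JOB output_to_snowflake
  START_FROM = BEGINNING
  RUN_INTERVAL = 1 MINUTE
AS INSERT INTO my_snowflake_connection.demo.orders
SELECT orderid, customer_email, order_total
FROM default_glue_catalog.demo.orders_transformed
WHERE $event_time BETWEEN run_start_time() AND run_end_time();
```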