
Ingestion

Using familiar SQL syntax, you can create an ingestion job to read your data and write it into a staging table, or directly into a supported target. Upsolver ingestion jobs can automatically infer the schema and populate the column names and types in the table.
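
For illustration, a minimal pair of commands that creates a schema-less staging table and an ingestion job to copy data into it might look like the following sketch (the catalog, database, table, connection, and bucket names are placeholders):

```sql
-- Create a staging table with no explicit columns; Upsolver infers the
-- schema and populates column names and types as data arrives.
CREATE TABLE default_glue_catalog.staging.orders_raw()
    PARTITIONED BY $event_date;

-- Copy JSON files from the source bucket into the staging table.
CREATE JOB load_orders_raw
    CONTENT_TYPE = JSON
AS COPY FROM S3 my_s3_connection
    LOCATION = 's3://my-bucket/orders/'
INTO default_glue_catalog.staging.orders_raw;
```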

Before ingesting your data, ensure that you have a connection to read from your data source. You will also need a metastore connection and a corresponding cloud storage location for your staging table, or a connection to your target system.
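
As a sketch, the prerequisite connections for a data lake staging table could be created as follows (the role ARNs, external ID, bucket, and connection names are placeholders; see each connector's reference page for the exact options):

```sql
-- Connection used to read source files from Amazon S3.
CREATE S3 CONNECTION my_s3_connection
    AWS_ROLE = 'arn:aws:iam::111111111111:role/upsolver-role'
    EXTERNAL_ID = 'my-external-id';

-- Metastore connection with a default storage location for staging tables.
CREATE GLUE_CATALOG CONNECTION my_glue_catalog
    AWS_ROLE = 'arn:aws:iam::111111111111:role/upsolver-role'
    DEFAULT_STORAGE_CONNECTION = my_s3_connection
    DEFAULT_STORAGE_LOCATION = 's3://my-bucket/staging/'
    REGION = 'us-east-1';
```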

Ingestion Job Basics

Learn how to copy data from Amazon S3 into a staging table in the data lake.

Discover how to create a transformation job to copy data from a staging table to a target table in the data lake.
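
A transformation job of this kind might look like the following sketch, which inserts selected columns from the staging table into a target table (the table and column names are placeholders):

```sql
-- Read events from the staging table each interval and insert them into
-- the target table, mapping SELECT aliases to target columns by name.
CREATE JOB transform_orders
    RUN_INTERVAL = 1 MINUTE
AS INSERT INTO default_glue_catalog.prod.orders MAP_COLUMNS_BY_NAME
    SELECT orderid AS order_id,
           customer.email AS customer_email,
           nettotal AS order_total
    FROM default_glue_catalog.staging.orders_raw
    WHERE $event_time BETWEEN run_start_time() AND run_end_time();
```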

Stream and File Sources

Find out how to ingest data from an Amazon Kinesis stream into a staging table in the data lake or directly to the target.
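
A Kinesis ingestion job follows the same COPY FROM pattern as the S3 example above; a sketch, with placeholder stream, connection, and table names:

```sql
-- Copy JSON records from a Kinesis stream into a staging table.
CREATE JOB load_clicks_from_kinesis
    CONTENT_TYPE = JSON
AS COPY FROM KINESIS my_kinesis_connection
    STREAM = 'click-events'
INTO default_glue_catalog.staging.clicks_raw;
```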

Learn how to ingest your data from Amazon S3 into a staging table in the data lake or directly to the target.

Discover how to ingest your data from Apache Kafka into a staging table in the data lake or directly to the target.

Learn how to ingest data from your Confluent Kafka source into a staging table in the data lake or directly to the target.
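
Both Kafka-based sources use the same job shape and differ mainly in the connection they reference; a sketch for Apache Kafka, with placeholder topic, connection, and table names:

```sql
-- Copy JSON messages from a Kafka topic into a staging table.
CREATE JOB load_events_from_kafka
    CONTENT_TYPE = JSON
AS COPY FROM KAFKA my_kafka_connection
    TOPIC = 'user-events'
INTO default_glue_catalog.staging.user_events_raw;
```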

CDC Sources

Discover how to ingest data from Microsoft SQL Server into a staging table in the data lake.

Learn how to ingest data from MongoDB into a staging table in the data lake.

Find out how to ingest from MySQL into a staging table in the data lake.

Learn how to copy data from PostgreSQL into a staging table in the data lake.
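
The CDC sources above all follow a similar COPY FROM pattern, with engine-specific options such as table include lists; a sketch for MySQL (the database, table, and connection names are placeholders, and the option names should be checked against the connector reference):

```sql
-- Replicate the orders table from MySQL into a staging table,
-- excluding a sensitive column from the snapshot and change stream.
CREATE JOB load_orders_from_mysql
AS COPY FROM MYSQL my_mysql_connection
    TABLE_INCLUDE_LIST = ('demo.orders')
    COLUMN_EXCLUDE_LIST = ('demo.orders.credit_card')
INTO default_glue_catalog.staging.mysql_orders;
```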
