Ingestion
Using familiar SQL syntax, you can create an ingestion job that reads your data and writes it into a staging table, or directly into a supported target. Upsolver ingestion jobs can automatically infer the schema and populate the column names and types in the table.
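For example, the following sketch creates a staging table with no explicit columns, then an ingestion job that copies raw events into it; the connection name (my_s3_connection), bucket, prefix, and table names are illustrative placeholders.

```sql
-- Staging table with no explicit columns: the ingestion job
-- infers the schema and populates column names and types.
CREATE TABLE default_glue_catalog.staging.orders_raw_data()
    PARTITIONED BY $event_date;

-- Ingestion job reading JSON objects from Amazon S3 into the staging table.
CREATE SYNC JOB load_orders_raw_data
    CONTENT_TYPE = JSON
AS COPY FROM S3 my_s3_connection
    BUCKET = 'my-sample-bucket'
    PREFIX = 'orders/'
INTO default_glue_catalog.staging.orders_raw_data;
```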
Before ingesting your data, ensure that you have a connection to read from your data source. You will also need a metastore connection and a corresponding cloud storage location for your staging table, or a connection to your target system.
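As a rough sketch, assuming Amazon S3 as the source and the AWS Glue Data Catalog as the metastore, the prerequisite connections might be created as follows; the role ARN, external ID, and storage location are placeholders.

```sql
-- Connection for reading from the data source (Amazon S3 here).
CREATE S3 CONNECTION my_s3_connection
    AWS_ROLE = 'arn:aws:iam::111111111111:role/upsolver-role'
    EXTERNAL_ID = 'my-external-id'
    READ_ONLY = TRUE;

-- Metastore connection with a default cloud storage location for staging tables.
CREATE GLUE_CATALOG CONNECTION my_glue_catalog
    AWS_ROLE = 'arn:aws:iam::111111111111:role/upsolver-role'
    DEFAULT_STORAGE_CONNECTION = my_s3_connection
    DEFAULT_STORAGE_LOCATION = 's3://my-sample-bucket/staging/';
```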
Ingestion Job Basics

| Page | Description |
| --- | --- |
| Ingest to a Staging Table | Learn how to copy data from Amazon S3 into a staging table in the data lake. |
| Output to a Target Table | Discover how to create a transformation job that copies data from a staging table to a target table in the data lake (see the sketch below). |
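To move data on from staging, a transformation job reads from the staging table and inserts the results into a target table. The following sketch assumes the staging table above and a hypothetical target table; the selected columns are illustrative.

```sql
-- Transformation job copying selected fields from staging to a target table.
CREATE SYNC JOB transform_orders
    START_FROM = BEGINNING
    ADD_MISSING_COLUMNS = TRUE
    RUN_INTERVAL = 1 MINUTE
AS INSERT INTO default_glue_catalog.analytics.orders_transformed
    MAP_COLUMNS_BY_NAME
    SELECT orderid AS order_id,
           nettotal AS total,
           $event_time AS processing_time
    FROM default_glue_catalog.staging.orders_raw_data
    WHERE $event_time BETWEEN run_start_time() AND run_end_time();
```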
Stream and File Sources

| Page | Description |
| --- | --- |
| Amazon Kinesis | Find out how to ingest data from an Amazon Kinesis stream into a staging table in the data lake or directly to the target. |
| Amazon S3 | Learn how to ingest your data from Amazon S3 into a staging table in the data lake or directly to the target. |
| Apache Kafka | Discover how to ingest your data from Apache Kafka into a staging table in the data lake or directly to the target (see the example below). |
| Confluent Kafka | Learn how to ingest data from your Confluent Kafka source into the data lake or directly to the target. |
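Streaming ingestion follows the same COPY FROM pattern as file sources. For instance, this sketch reads JSON events from an assumed Apache Kafka connection and topic into a staging table; all names are placeholders.

```sql
-- Ingestion job streaming JSON events from a Kafka topic into staging.
CREATE SYNC JOB ingest_orders_from_kafka
    CONTENT_TYPE = JSON
AS COPY FROM KAFKA my_kafka_connection
    TOPIC = 'orders'
INTO default_glue_catalog.staging.kafka_orders_raw;
```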
CDC Sources

| Page | Description |
| --- | --- |
| Microsoft SQL Server | Discover how to ingest data from Microsoft SQL Server into a staging table in the data lake. |
| MongoDB | Learn how to ingest data from MongoDB into a staging table in the data lake. |
| MySQL | Find out how to ingest data from MySQL into a staging table in the data lake. |
| PostgreSQL | Learn how to copy data from PostgreSQL into a staging table in the data lake (see the example below). |
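CDC ingestion also uses the COPY FROM pattern. As a sketch only, assuming a PostgreSQL source with logical replication enabled and a publication covering the source tables; the connection, publication, and table names below are hypothetical.

```sql
-- CDC job replicating a PostgreSQL table into a staging table.
CREATE SYNC JOB load_orders_from_postgres
    PUBLICATION_NAME = 'orders_publication'
AS COPY FROM POSTGRES my_postgres_connection
    TABLE_INCLUDE_LIST = ('public.orders')
INTO default_glue_catalog.staging.orders_raw;
```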