Learning Paths

Learn how to ingest your data from a stream, database, or file into your data lake, lakehouse, or warehouse.
Upsolver offers a low-code solution for ingesting data from your source into a target warehouse or lake. Using familiar SQL syntax, you can easily create the entities you need. While connection and job options differ between data sources and targets, the code for each path shares common features.
Each learning path starts with the source data you want to ingest. After you have configured AWS, follow the learning path for your data source, which walks you through granting access and creating the relevant connections and jobs for your intended target.
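As an illustration of the pattern each path follows, an ingestion pipeline typically comprises a connection to the source and a job that copies data into the target. The sketch below is in Upsolver-style SQL; the connection name, role ARN, bucket path, target table, and option names are hypothetical placeholders, and the exact syntax and options vary by source and target, so refer to the learning path for your source for the precise commands:

```sql
-- Illustrative only: names, options, and exact syntax vary by source and target.

-- Create a connection to the source (the role ARN is a placeholder).
CREATE S3 CONNECTION my_s3_connection
  AWS_ROLE = 'arn:aws:iam::123456789012:role/upsolver-role';

-- Create an ingestion job that copies data from the source into a target table.
CREATE JOB ingest_orders
  AS COPY FROM S3 my_s3_connection
    LOCATION = 's3://my-bucket/orders/'
  INTO my_catalog.my_schema.orders;
```

Whatever your source, the shape is the same: one statement establishes access, and one statement defines the ongoing copy into the target.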

Configure AWS

Regardless of your source and target, you must configure AWS prior to ingesting data. Please see the following guide for instructions. Our technical support team is on hand if you need help.

Learning Paths

Select your source to begin your data ingestion journey:

Suggested Reading

Upsolver makes coding pipelines easy. While you concentrate on writing bespoke jobs to move your data, we take care of schema evolution, data type changes, duplicates, and bad data, while ensuring your data arrives fresh and in order.
We recommend understanding how these features work under the hood, so that you get the results you expect:

Further Learning

Enhance your jobs by exploring Upsolver's special features, which will help you deliver high-quality data to your destination:

Data Quality

Observability
