
Create Connections

This section describes the source and target connections that Upsolver uses to ingest and output your data.

Before creating your data pipeline with Upsolver, ensure that you have the necessary connectors in place to support your intended use case.

For data ingestion, you first need a connection to your data source. To load your data into a staging table, you also need a connection to a metadata store, along with a corresponding cloud storage location.

After your data has been ingested and transformed, you need a connection to the target location to which your data is written.
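As a rough illustration, the sketch below creates one possible set of connections for a Kafka-to-Snowflake pipeline: a source connection, a metadata store connection with its cloud storage location for staging, and a target connection. All names, hosts, ARNs, and credentials are placeholders, and the exact options vary by connector, so refer to the individual connector guides linked below for the authoritative syntax.

```sql
-- Source: a connection for reading events from Apache Kafka
-- (broker host and consumer properties are placeholders).
CREATE KAFKA CONNECTION my_kafka_connection
  HOSTS = ('broker-1.example.com:9092')
  CONSUMER_PROPERTIES = 'security.protocol = PLAINTEXT';

-- Metadata store and cloud storage for the staging table:
-- AWS Glue Data Catalog backed by an Amazon S3 location.
CREATE S3 CONNECTION my_s3_connection
  AWS_ROLE = 'arn:aws:iam::111111111111:role/upsolver-role';

CREATE GLUE_CATALOG CONNECTION my_catalog_connection
  AWS_ROLE = 'arn:aws:iam::111111111111:role/upsolver-role'
  DEFAULT_STORAGE_CONNECTION = my_s3_connection
  DEFAULT_STORAGE_LOCATION = 's3://my-bucket/staging/'
  REGION = 'us-east-1';

-- Target: a connection for writing the transformed data to Snowflake.
CREATE SNOWFLAKE CONNECTION my_snowflake_connection
  CONNECTION_STRING = 'jdbc:snowflake://acme.snowflakecomputing.com/?db=MY_DB'
  USER_NAME = 'my_user'
  PASSWORD = 'my_password';
```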

If you have deployed Upsolver on AWS, an Amazon S3 connection and an AWS Glue Data Catalog connection are created for you by default.

See the Deploy Upsolver on AWS guide for more information.

Connections determine the credentials that Upsolver uses to read and/or write your data, so you may want to explicitly create your own Amazon S3 or AWS Glue Data Catalog connections to configure specific permissions.

The use of static IAM credentials for cross-account access is deprecated; Upsolver recommends that you use an IAM role instead.
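For example, the sketch below creates an Amazon S3 connection that assumes an IAM role instead of embedding static access keys. The role ARN and external ID are placeholders; see the Role-Based AWS Credentials and Amazon S3 guides for the exact options.

```sql
-- Authenticate with an IAM role (placeholder ARN and external ID)
-- rather than static AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY credentials.
CREATE S3 CONNECTION my_role_based_s3
  AWS_ROLE = 'arn:aws:iam::111111111111:role/upsolver-role'
  EXTERNAL_ID = 'my-external-id'
  READ_ONLY = TRUE;
```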

Learn more about each connection type:

Amazon Kinesis
Amazon Redshift
Amazon S3
Apache Kafka
AWS Glue Data Catalog
ClickHouse
Confluent Kafka
Elasticsearch
Microsoft SQL Server
MongoDB
MySQL
PostgreSQL
Snowflake
Tabular