
Confluent Cloud

Follow these steps to use Kafka hosted on Confluent Cloud as your source.


Step 1 - Connect to Confluent Cloud

Create a new connection

Click Create a new connection if it is not already selected. You can find instructions for how to create your Confluent Kafka cluster in this quick start guide.

In Bootstrap server[s], enter a single bootstrap server host in the format hostname:port, or a comma-separated list of hosts in the format hostname1:port, hostname2:port.

Ensure the host address is accessible to Upsolver.
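For reference, a Confluent Cloud bootstrap endpoint typically looks like the examples below. The cluster ID and region shown here are placeholders; copy the exact value from the Cluster settings page in your Confluent Cloud console.

pkc-12345.us-east-1.aws.confluent.cloud:9092
pkc-12345.us-east-1.aws.confluent.cloud:9092, pkc-67890.us-east-1.aws.confluent.cloud:9092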

Configure the appropriate consumer properties for your Kafka cluster. You can find all of the available properties in the Kafka Consumer Configuration reference.

To securely access your Confluent Kafka cluster, you must configure your Confluent resource Access Key and Secret Key in the username and password properties, respectively. You can learn more about resource keys in Using API Keys to Control Access in Confluent Cloud.

For a standard connection, use the following format:

security.protocol = SASL_SSL
sasl.jaas.config = org.apache.kafka.common.security.plain.PlainLoginModule required username="API_KEY" password="SECRET";
ssl.endpoint.identification.algorithm = https
sasl.mechanism = PLAIN

In order for Upsolver to connect to your Confluent cluster using SSL, follow these steps to Encrypt and Authenticate with TLS.

In the Name your connection field, enter a name for this connection. Note that this connection will be available to other users in your organization.

Use an existing connection

By default, if you have already created a connection, Upsolver selects Use an existing connection, and your Confluent Cloud connection is populated in the list.

For organizations with multiple connections, select the source connection you want to use.

Step 2 - Select a topic to ingest

After the connection has been successfully established, Upsolver automatically discovers the available topics and displays them in the Select a topic to ingest list. Choose the topic you'd like to ingest. By default, Upsolver automatically detects the format of your data and parses events into the appropriate structure.

When using Avro format, Upsolver allows you to configure Confluent Schema Registry to decode the field names and data types. If you use the Confluent Schema Registry, select the Avro Schema Registry option from the content type select list. In the Schema Registry Url text box, enter the full URL to your Confluent Schema Registry using the following format:

https://access_key:secret_key@sr-aws.confluent.cloud/schemas/ids/{id}

Note that the API access and secret keys must be passed as part of the URL. Additionally, to support schema evolution, include {id}, as shown in the example. When it's included, Upsolver will automatically insert the schema ID from the Avro header of the incoming event to fetch the appropriate schema from the Schema Registry. Optionally, enter your Username and Password.
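If you want to confirm that your Schema Registry URL and keys are valid before entering them in the wizard, the minimal sketch below fetches a schema by ID over the Schema Registry REST API. This check is not part of the wizard itself, and the host, schema ID, and keys shown are placeholders.

import requests

SR_HOST = "sr-aws.confluent.cloud"   # placeholder Schema Registry host
SCHEMA_ID = 1                        # placeholder schema ID

# GET /schemas/ids/{id} returns the schema registered under that ID.
resp = requests.get(
    f"https://{SR_HOST}/schemas/ids/{SCHEMA_ID}",
    auth=("access_key", "secret_key"),   # Schema Registry API key and secret
    timeout=10,
)
resp.raise_for_status()
print(resp.json()["schema"])             # the Avro schema definition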

Step 3 - Check that events are read successfully

As soon as you select a topic, Upsolver will attempt to load a sample of the events.

If Upsolver did not load any sample events, try the following:

  1. Verify that your topic has recent events. Upsolver attempts to grab the most recently available events from the topic and will not seek back in time to fetch old events.

  2. Verify the content type you selected to ensure it matches the events in your topic. A connectivity-check sketch covering both points follows this list.
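To check both points outside the wizard, the sketch below uses the confluent-kafka Python client to read events from the topic with the same SASL_SSL settings described above. The endpoint, topic name, credentials, and group ID are placeholders; substitute your own values.

from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "pkc-12345.us-east-1.aws.confluent.cloud:9092",  # placeholder endpoint
    "security.protocol": "SASL_SSL",
    "sasl.mechanisms": "PLAIN",
    "sasl.username": "API_KEY",           # Confluent resource access key
    "sasl.password": "SECRET",            # Confluent resource secret key
    "group.id": "connectivity-check",     # throwaway consumer group
    "auto.offset.reset": "earliest",      # start from the beginning so any existing events appear
})
consumer.subscribe(["my_topic"])          # placeholder topic name

try:
    # Poll for up to ~30 seconds; if nothing arrives, the topic has no events,
    # or the credentials or network path are wrong.
    for _ in range(30):
        msg = consumer.poll(1.0)
        if msg is None:
            continue
        if msg.error():
            print("Error:", msg.error())
            break
        print("Sample event:", msg.value()[:200])
        break
finally:
    consumer.close()

Note that this sketch reads from the beginning of the topic, whereas Upsolver only samples recent events. If events print here but the wizard still shows no samples, stale data or a mismatched content type are the most likely causes.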
