Confluent Cloud

This quickstart shows you how to create a Confluent Cloud connection.

Create a connection to Confluent Kafka

To ingest data from your Confluent topic into a table within Upsolver, you must first create a connection that provides the appropriate credentials to access your topic.

Your connection is persistent, so you won't need to re-create it for every job. The connection is also shared with other users in your organization.

Here’s the code for creating a connection to Confluent Kafka:

-- Syntax
CREATE KAFKA CONFLUENT CONNECTION <connection_identifier>
 HOSTS = ('<bootstrap_server_1>:<port_number>', '<bootstrap_server_2>:<port_number>')
 CONSUMER_PROPERTIES = '
    security.protocol=SASL_SSL
    sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="<kafka_username>" password="<kafka_password>";
    ssl.endpoint.identification.algorithm=https
    sasl.mechanism=PLAIN';

-- Example
CREATE KAFKA CONFLUENT CONNECTION my_confluent_connection
 HOSTS = ('foo:9092', 'bar:9092')
 CONSUMER_PROPERTIES = '
    security.protocol=SASL_SSL
    sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="API_KEY" password="SECRET";
    ssl.endpoint.identification.algorithm=https
    sasl.mechanism=PLAIN';

After you complete this step, you should see the my_confluent_connection connection in your navigation tree. Note that for Confluent Cloud, the SASL username and password in CONSUMER_PROPERTIES are your cluster API key and API secret.
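
Once the connection exists, you can reference it in an ingestion job. The following is a minimal sketch rather than authoritative syntax: it assumes the generic Kafka COPY FROM form also applies to Confluent connections, and that a topic named orders and a staging table default_glue_catalog.upsolver_samples.orders_raw_data already exist. See the Apache Kafka and Confluent Kafka ingestion guides for the exact options.

-- Hypothetical example: ingest the 'orders' topic through the connection
-- created above into an existing staging table.
CREATE SYNC JOB ingest_orders_from_confluent
    START_FROM = BEGINNING
    CONTENT_TYPE = JSON
AS COPY FROM KAFKA my_confluent_connection
    TOPIC = 'orders'
INTO default_glue_catalog.upsolver_samples.orders_raw_data;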


Learn More

Please see the SQL command reference for the full list of connection options and examples.
