Confluent Cloud

This page describes how to create and maintain connections to your Confluent Cloud cluster.


To consume events in Upsolver from Kafka topics hosted in Confluent Cloud, you first need to create a connection. The connection defines how Upsolver connects and authenticates to your Confluent Kafka cluster.

Create a Confluent connection

Simple example

A Confluent connection can be created as simply as follows:

CREATE CONFLUENT CONNECTION my_confluent_connection
    HOSTS = ('foo:9092', 'bar:9092')
    SASL_USERNAME = 'API_KEY'
    SASL_PASSWORD = 'SECRET'
    COMMENT = 'My new Confluent Cloud connection';

You'll need to provide your Confluent API credentials to read from your Confluent cluster.

Full example

The following example creates a Confluent connection and sets additional options, such as CONSUMER_PROPERTIES:

CREATE CONFLUENT CONNECTION my_confluent_connection
    HOSTS = ('foo:9092', 'bar:9092')
    CONSUMER_PROPERTIES = 
      'bootstrap.servers = HOST:PORT
      security.protocol = SASL_SSL
      sasl.jaas.config = org.apache.kafka.common.security.plain.PlainLoginModule required username="API_KEY" password="SECRET";
      ssl.endpoint.identification.algorithm = https
      sasl.mechanism = PLAIN'
    VERSION = CURRENT
    REQUIRE_STATIC_IP = true
    SSL = false
    TOPIC_DISPLAY_FILTERS = ('topic1', 'topic2')
    COMMENT = 'My new Confluent connection';
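
Once the connection has been created, ingestion jobs reference it by name when reading from your topics. The snippet below is a minimal, illustrative sketch only: the job name, topic, and target table are placeholders, and the COPY FROM CONFLUENT syntax is assumed to follow the same pattern as Upsolver's other streaming sources. Check the ingestion job reference for the exact options your pipeline needs.

-- Illustrative only: the job name, topic, and target table are placeholders
CREATE SYNC JOB ingest_orders_from_confluent
    CONTENT_TYPE = JSON
AS COPY FROM CONFLUENT my_confluent_connection
    TOPIC = 'orders'
INTO default_glue_catalog.demo_db.orders_raw;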

Alter a Confluent connection

A number of connection options are considered mutable, meaning that in some cases, you can run a SQL command to alter an existing Confluent connection rather than creating a new one.

For example, suppose you created the following minimal Confluent connection:

CREATE CONFLUENT CONNECTION my_confluent_connection
    HOSTS = ('foo:9092', 'bar:9092');

If you need to update the consumer properties, rather than create a new connection, you can run the following command:

ALTER CONFLUENT CONNECTION my_confluent_connection
    SET CONSUMER_PROPERTIES = 
       'bootstrap.servers = HOST:PORT
       security.protocol = SASL_SSL
       sasl.jaas.config = org.apache.kafka.common.security.plain.PlainLoginModule required username="API_KEY" password="SECRET";
       ssl.endpoint.identification.algorithm = https
       sasl.mechanism = PLAIN'; 

Note that some options such as VERSION cannot be altered once the connection has been created.
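
Other options can be changed with the same ALTER ... SET syntax. For instance, assuming COMMENT is listed as a mutable option in the reference, you could update the connection's description like this:

-- Assumes COMMENT is a mutable option; check the SQL command reference before relying on this
ALTER CONFLUENT CONNECTION my_confluent_connection
    SET COMMENT = 'Connection for the orders topics';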

Drop a Confluent connection

If you no longer need a connection, you can easily drop it with the following SQL command:

DROP CONNECTION my_confluent_connection; 

However, note that the connection cannot be dropped while existing tables or jobs depend upon it.
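
If the drop is blocked by such dependencies, remove or re-point them first and then drop the connection. As an illustrative sketch (the job name below is hypothetical):

-- Hypothetical dependent job; drop it before dropping the connection
DROP JOB ingest_orders_from_confluent;
DROP CONNECTION my_confluent_connection;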

Learn More

To discover which connection options are mutable, and to learn more about the options, please see the SQL command reference for Confluent Cloud.
