Apache Kafka

This quickstart shows you how to create an Apache Kafka connection.

Connecting to Apache Kafka in the Raw Zone

To consume data from your cluster, you must first create a connection to Apache Kafka. This connection enables you to configure the bootstrap servers and consumer properties, including the authentication credentials, that Upsolver needs to access your topics. You can create a connection using familiar SQL syntax.

Your connection is persistent, so you won't need to re-create it for every job. The connection is also shared with other users in your organization.

Here's the code to create an Apache Kafka connection:

-- Syntax
CREATE KAFKA CONNECTION <connection_identifier>
  HOSTS = ('<bootstrap_server_1>:<port_number>', '<bootstrap_server_2>:<port_number>')
  CONSUMER_PROPERTIES = 'bootstrap.servers=<bootstrap_server_1>:<port_number>
    security.protocol=SASL_SSL
    sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="<kafka_username>" password="<kafka_password>";
    ssl.endpoint.identification.algorithm=https
    sasl.mechanism=PLAIN';

-- Example
CREATE KAFKA CONNECTION my_kafka_connection
  HOSTS = ('pkc-2396y.us-east-1.aws.confluent.cloud:9092')
  CONSUMER_PROPERTIES = 'bootstrap.servers=pkc-2396y.us-east-1.aws.confluent.cloud:9092
    security.protocol=SASL_SSL
    sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="XXXXXXXX" password="-----------";
    ssl.endpoint.identification.algorithm=https
    sasl.mechanism=PLAIN';

After you complete this step, you should see the my_kafka_connection connection in your navigation tree.
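Once the connection exists, you can reference it from an ingestion job. The following is a minimal sketch that copies events from a topic into a staging table; the job name, the orders topic, and the default_glue_catalog.upsolver_samples.orders_raw_data table are assumptions for illustration, and the full set of job options is covered in the Jobs quickstarts.

-- A minimal sketch: ingest a Kafka topic through the new connection.
-- The topic and target table names below are hypothetical.
CREATE SYNC JOB load_orders_from_kafka
  START_FROM = BEGINNING
  CONTENT_TYPE = JSON
AS COPY FROM KAFKA my_kafka_connection
  TOPIC = 'orders'
INTO default_glue_catalog.upsolver_samples.orders_raw_data;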


Learn More

Please see the SQL command reference for the full list of connection options and examples.
