Apache Kafka
This page describes how to create and maintain connections to your Apache Kafka cluster.
To read and work with data from your Kafka topic in SQLake, you must first create a connection to your Kafka cluster that provides Upsolver with the credentials needed to access your data.
Create a Kafka connection
Simple example
A Kafka connection can be created as simply as follows:
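A minimal sketch of such a statement is shown below. The connection name and broker addresses are illustrative, and the exact option names should be checked against the Kafka connection reference:

```sql
-- Create a basic connection to a Kafka cluster
-- (hosts shown are placeholders for your broker addresses)
CREATE KAFKA CONNECTION my_kafka_connection
    HOSTS = ('broker1.example.com:9092', 'broker2.example.com:9092');
```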
However, additional connection options may need to be configured in order to provide the proper credentials to read from your Kafka cluster.
Full example
The following example also creates a Kafka connection, but sets additional options such as CONSUMER_PROPERTIES to supply the configurations necessary to ingest data from this example cluster:
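A hedged sketch of such a statement follows. The host values and the specific consumer properties (standard Kafka client settings for a SASL_SSL-secured cluster) are illustrative; consult the Kafka connection reference for the full option list:

```sql
-- Create a Kafka connection with consumer properties for a
-- SASL_SSL-secured cluster (all values are placeholders)
CREATE KAFKA CONNECTION my_kafka_connection
    HOSTS = ('broker1.example.com:9092', 'broker2.example.com:9092')
    CONSUMER_PROPERTIES = '
      security.protocol=SASL_SSL
      sasl.mechanism=PLAIN
      sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="API_KEY" password="SECRET";';
```

The string passed to CONSUMER_PROPERTIES uses standard Kafka consumer configuration keys, so any property your cluster requires (for example, SSL truststore settings) can be supplied the same way.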
For the full list of connection options with syntax and detailed descriptions, see: Kafka connection with SQL
Once you've created your connection, you are ready to move on to the next step of building your data pipeline: reading your data into SQLake with an ingestion job.
Alter a Kafka connection
Certain connection options are considered mutable, meaning that in some cases, you can run a SQL command to alter an existing Kafka connection rather than creating a new one.
For example, take the simple Kafka connection we created previously:
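As a reminder, that connection might look like the following sketch (connection name and hosts are illustrative):

```sql
-- The simple connection from earlier, with no consumer properties set
CREATE KAFKA CONNECTION my_kafka_connection
    HOSTS = ('broker1.example.com:9092', 'broker2.example.com:9092');
```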
If it turns out that certain consumer properties need to be configured in order to read from your cluster, instead of having to create an entirely new connection, you can run the following command:
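A sketch of such an ALTER statement is below. The exact syntax and the property values are illustrative; check the Kafka connection reference for the options that are actually mutable:

```sql
-- Update the consumer properties on the existing connection
-- rather than recreating it (values are placeholders)
ALTER KAFKA CONNECTION my_kafka_connection
    SET CONSUMER_PROPERTIES = '
      security.protocol=SASL_SSL
      sasl.mechanism=PLAIN';
```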
Note that some options, such as VERSION, cannot be altered once the connection has been created.
To check which specific connection options are mutable, see: Kafka connection with SQL
Drop a Kafka connection
If you no longer need a certain connection, you can easily drop it with the following SQL command:
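For example (connection name illustrative):

```sql
-- Remove a connection that is no longer needed
DROP CONNECTION my_kafka_connection;
```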
Note, however, that a connection cannot be deleted while existing tables or jobs depend on it.
For more details, see: DROP CONNECTION