Apache Kafka

This article explains how to configure SSL for your Apache Kafka connection in Upsolver, detailing the necessary steps to upload the certificates to the Upsolver servers and configure the connection.

The following guide also applies to configuring your Confluent Cloud connection.

Configure SSL for Apache Kafka

To enable Upsolver to successfully connect to your Apache Kafka cluster using SSL, the key and certificate files generated when SSL was deployed must be provided as part of your Upsolver cluster configuration. This involves uploading the required certificates to the Upsolver servers by running a PATCH HTTP request for each cluster you wish to use with your connection.


Before starting, ensure you have SSL authentication configured within your Kafka cluster.

See Encryption and authentication with SSL for more information.

Note that the instructions below were written for a Linux-based system.

If you are working on Windows, you can use a program that provides a Linux environment, or file a ticket via the Upsolver support portal and send us the certificates so we can upload them for you.

Configure your Upsolver cluster

Step 1: Upload your files

There are two request types you can use to upload the files:

  1. ModifyServerFiles - Uploads multiple files at once. Replaces all existing files: only the files in the latest upload are kept, and all previously uploaded files are removed.

  2. ModifyServerFile - Uploads a single file per request. Does not remove previously uploaded files.

The name must be unique: uploading a new file with an existing name replaces the previously uploaded file. This applies to both request types.
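The two request types differ only in the shape of the request body. As a sketch (the file names, paths, and content values below are hypothetical placeholders), the JSON payloads built by jq look like this:

```shell
# ModifyServerFiles: serverFiles is an array of file objects; this request
# replaces everything previously uploaded. (Placeholder names/paths.)
jq -n '{ clazz: "ModifyServerFiles",
         serverFiles: [ { name: "a.jks", path: "/opt/a.jks", content: "BASE64..." } ] }'

# ModifyServerFile: serverFile is a single file object; other uploads are kept.
jq -n '{ clazz: "ModifyServerFile",
         serverFile: { name: "b.pem", path: "/opt/b.pem", content: "BASE64..." } }'
```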

For Multiple Files

First, run the following request for the API server:

echo '{}' | jq '{ clazz: "ModifyServerFiles", serverFiles: [ { name: "kafka.client.keystore.jks", path: "/opt/kafka.client.keystore.jks", content: $file1 }, { name: "kafka.client.truststore.jks", path: "/opt/kafka.client.truststore.jks", content: $file2 } ] }' --arg file1 "$(cat /<FILE_PATH>/kafka.client.keystore.jks | base64)" --arg file2 "$(cat /<FILE_PATH>/kafka.client.truststore.jks | base64)" |
http PATCH "https://api.upsolver.com/environments/<API_SERVER_ID>/" "Authorization: <API_TOKEN>" "x-user-organization: <ORG_ID>"

Then run this request for the cluster you wish to upload the files to:

echo '{}' | jq '{ clazz: "ModifyServerFiles", serverFiles: [ { name: "kafka.client.keystore.jks", path: "/opt/kafka.client.keystore.jks", content: $file1 }, { name: "kafka.client.truststore.jks", path: "/opt/kafka.client.truststore.jks", content: $file2 } ] }' --arg file1 "$(cat /<FILE_PATH>/kafka.client.keystore.jks | base64)" --arg file2 "$(cat /<FILE_PATH>/kafka.client.truststore.jks | base64)" |
http PATCH "https://api.upsolver.com/environments/<CLUSTER_ID>/" "Authorization: <API_TOKEN>" "x-user-organization: <ORG_ID>"

Note that the two requests differ only in the ID provided within the URL of each request.

The jq command in the first line builds the request body: a JSON object whose serverFiles array contains the name, path, and content of each file you are uploading.

The path referenced within the array itself is the path the file is written to within the server; it is also the path that should be provided when using this file to establish a connection.

The content of the file is passed through as an argument with --arg. Here, <FILE_PATH> represents the path to the file you are uploading on your local computer.

This example uploads two files to the server, but you can adjust the serverFiles array to upload one file or several. Note that running this request removes any files that were uploaded previously.
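Because the file content is sent base64-encoded, you can sanity-check the encoding locally before sending the request. A minimal sketch for a Linux system (the sample path below is hypothetical; GNU base64's -w 0 disables line wrapping so the value passes cleanly as a single jq argument):

```shell
# Create a throwaway file standing in for a keystore (hypothetical path).
printf 'dummy-keystore-bytes' > /tmp/kafka.client.keystore.jks

# Encode without line wrapping (-w 0, GNU coreutils) so the result is a
# single token that can be passed safely via jq --arg.
encoded=$(base64 -w 0 < /tmp/kafka.client.keystore.jks)

# Round-trip check: decoding must reproduce the original bytes exactly.
echo "$encoded" | base64 -d | cmp - /tmp/kafka.client.keystore.jks \
  && echo "round-trip OK"
```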

For a Single File

To upload a single file without overriding any existing files, run these requests instead:

echo '{}' | jq '{ clazz: "ModifyServerFile", serverFile: { name: "cert.pem", path: "/opt/cert.pem", content: $file1 } }' --arg file1 "$(cat /<FILE_PATH>/cert.pem | base64)" |
http PATCH "https://api.upsolver.com/environments/<API_SERVER_ID>/" "Authorization: <API_TOKEN>" "x-user-organization: <ORG_ID>"
echo '{}' | jq '{ clazz: "ModifyServerFile", serverFile: { name: "cert.pem", path: "/opt/cert.pem", content: $file1 } }' --arg file1 "$(cat /<FILE_PATH>/cert.pem | base64)" |
http PATCH "https://api.upsolver.com/environments/<CLUSTER_ID>/" "Authorization: <API_TOKEN>" "x-user-organization: <ORG_ID>"

The name parameter must be unique across different files: using the same name will replace the existing file with that name.

Make sure to replace the placeholder values <API_SERVER_ID> and <CLUSTER_ID>, as well as your <API_TOKEN> and <ORG_ID>.

To learn how to generate an API token, see Enable API Integration.

How to find your <API_SERVER_ID> and <CLUSTER_ID>
  1. Open a new worksheet.

  2. Run the query select * from system.information_schema.clusters;

  3. Look up the relevant clusters and cluster IDs in the results pane.

How to find your <ORG_ID>
  1. Navigate to the Security Information page by clicking Settings > Security Information.

  2. Your org ID appears under the Trusted entity as the sts:ExternalId value.

Step 2: Roll your cluster

Once the certificates have been uploaded, roll the modified cluster to apply the changes.

How to roll a cluster
  1. Open a worksheet.

  2. Run the following statement: roll cluster "cluster_name"

Create your connection

To use your key and certificate files to connect Upsolver to your Kafka cluster, you should provide the paths to your uploaded files as part of the properties in the CREATE KAFKA CONNECTION command.

To allow the connection to be used for reading data, the key store and trust store locations should be configured as CONSUMER_PROPERTIES.


CREATE KAFKA CONNECTION my_kafka_connection
  HOSTS = ('<bootstrap_server_1>:<port_number>','<bootstrap_server_2>:<port_number>')
  CONSUMER_PROPERTIES = 'security.protocol=SSL
    ssl.keystore.location=/opt/kafka.client.keystore.jks
    ssl.keystore.password=<KEYSTORE_PASSWORD>
    ssl.truststore.location=/opt/kafka.client.truststore.jks
    ssl.truststore.password=<TRUSTSTORE_PASSWORD>';

The ssl.keystore.location and ssl.truststore.location values are the paths the files were uploaded to in the previous step. Replace <KEYSTORE_PASSWORD> and <TRUSTSTORE_PASSWORD> with the passwords set when the key store and trust store were generated.
