Apache Kafka

This article explains how to configure SSL for your Apache Kafka connection in Upsolver, detailing the necessary steps to upload the certificates to the Upsolver servers and configure the connection.

The following guide also applies to configuring your Confluent Cloud connection.

Configure SSL for Apache Kafka

To enable Upsolver to connect to your Apache Kafka cluster using SSL, the key and certificate files generated when SSL was deployed must be provided as part of your Upsolver cluster configuration. This involves uploading the required certificates to the Upsolver servers by running a PATCH HTTP request for each cluster you wish to use with your connection.

Prerequisites

Before starting, ensure you have SSL authentication configured within your Kafka cluster.

See Encryption and authentication with SSL in the Apache Kafka documentation for more information.
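If you have not yet generated the client-side key and certificate files, the following is a minimal sketch based on the standard Apache Kafka SSL setup; the file names (ca-cert, client-cert-request, client-cert-signed), aliases, and validity period are illustrative, not values Upsolver requires.

# Import the CA certificate into a client truststore
keytool -keystore kafka.client.truststore.jks -alias CARoot -importcert -file ca-cert

# Create a client key pair and a certificate signing request
keytool -keystore kafka.client.keystore.jks -alias client -validity 365 -genkeypair -keyalg RSA
keytool -keystore kafka.client.keystore.jks -alias client -certreq -file client-cert-request
# ... sign client-cert-request with your CA to produce client-cert-signed, then:
keytool -keystore kafka.client.keystore.jks -alias CARoot -importcert -file ca-cert
keytool -keystore kafka.client.keystore.jks -alias client -importcert -file client-cert-signed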

Note that the instructions below were written for a Linux-based system.

If you are working on Windows, you can use a Linux-compatible environment, or file a ticket via the Upsolver support portal and send us the certificates so we can upload them for you.

Configure your Upsolver cluster

Step 1: Upload your files

There are two request types you can use to upload the files:

  1. ModifyServerFiles - Uploads multiple files in a single request. This request replaces all existing files: only the files included in the latest upload are kept, and any previously uploaded files are removed.

  2. ModifyServerFile - Uploads a single file per request, without removing previously uploaded files.

In both request types, the name must be unique: uploading a new file with an existing name replaces the previously uploaded file.

For Multiple Files

First, run the following request for the API server:

echo '{}' | jq '{ clazz: "ModifyServerFiles", serverFiles: [ { name: "kafka.client.keystore.jks", path: "/opt/kafka.client.keystore.jks", content: $file1 }, { name: "kafka.client.truststore.jks", path: "/opt/kafka.client.truststore.jks", content: $file2 } ] }' --arg file1 "$(base64 -w 0 /<FILE_PATH>/kafka.client.keystore.jks)" --arg file2 "$(base64 -w 0 /<FILE_PATH>/kafka.client.truststore.jks)" |
http PATCH "https://api.upsolver.com/environments/<API_SERVER_ID>/" "Authorization: <API_TOKEN>" "x-user-organization: <ORG_ID>"

Then run this request for the cluster you wish to upload the files to:

echo '{}' | jq '{ clazz: "ModifyServerFiles", serverFiles: [ { name: "kafka.client.keystore.jks", path: "/opt/kafka.client.keystore.jks", content: $file1 }, { name: "kafka.client.truststore.jks", path: "/opt/kafka.client.truststore.jks", content: $file2 } ] }' --arg file1 "$(base64 -w 0 /<FILE_PATH>/kafka.client.keystore.jks)" --arg file2 "$(base64 -w 0 /<FILE_PATH>/kafka.client.truststore.jks)" |
http PATCH "https://api.upsolver.com/environments/<CLUSTER_ID>/" "Authorization: <API_TOKEN>" "x-user-organization: <ORG_ID>"

Note that the two requests differ only in the ID provided in the URL.

The first line of the request builds the JSON body, in which the serverFiles array holds the name, path, and content of each file you are uploading.

The path referenced within the array is the path the file is written to on the server; it is also the path you should provide when using this file to establish a connection.

The content of each file is passed in as an argument with --arg. Here, <FILE_PATH> represents the path to the file you are uploading on your local computer; encoding it with base64 -w 0 produces a single unwrapped line that passes safely through the shell.

This example uploads two files to the server, but the serverFiles array can be adjusted to upload any number of files. Note that running this request removes any previously uploaded files.
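If you want to check the encoded content before sending it, a quick round-trip like the sketch below can help. It assumes GNU coreutils, where base64 -w 0 disables line wrapping (on macOS, plain base64 already emits a single line); the .b64 file names are illustrative.

# Encode each file once, without line wrapping (GNU coreutils)
base64 -w 0 /<FILE_PATH>/kafka.client.keystore.jks > keystore.b64
base64 -w 0 /<FILE_PATH>/kafka.client.truststore.jks > truststore.b64

# Confirm the encoded content decodes back to the original bytes
base64 -d keystore.b64 | cmp - /<FILE_PATH>/kafka.client.keystore.jks && echo "keystore OK"
base64 -d truststore.b64 | cmp - /<FILE_PATH>/kafka.client.truststore.jks && echo "truststore OK"

The encoded files can then be passed to the requests above with, for example, --arg file1 "$(cat keystore.b64)".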

For a Single File

To upload a single file without overriding any existing files, run these requests instead:

echo '{}' | jq '{ clazz: "ModifyServerFile", serverFile: { name: "cert.pem", path: "/opt/cert.pem", content: $file1 } }' --arg file1 "$(base64 -w 0 ~/Downloads/cert.pem)" |
http PATCH "https://api.upsolver.com/environments/<API_SERVER_ID>/" "Authorization: <API_TOKEN>" "x-user-organization: <ORG_ID>"

echo '{}' | jq '{ clazz: "ModifyServerFile", serverFile: { name: "cert.pem", path: "/opt/cert.pem", content: $file1 } }' --arg file1 "$(base64 -w 0 ~/Downloads/cert.pem)" |
http PATCH "https://api.upsolver.com/environments/<CLUSTER_ID>/" "Authorization: <API_TOKEN>" "x-user-organization: <ORG_ID>"

The name parameter must be unique across different files: reusing a name replaces the existing file with that name.

Make sure to replace the placeholder values <API_SERVER_ID> and <CLUSTER_ID>, as well as your <API_TOKEN> and <ORG_ID>. To learn how to generate an API token, see Enable API Integration.

How to find your <API_SERVER_ID> and <CLUSTER_ID>
  1. Open a new worksheet.

  2. Run the query select * from system.information_schema.clusters;

  3. Look up the relevant clusters and cluster IDs in the results pane.

How to find your <ORG_ID>
  1. Navigate to the Security Information page by clicking Settings > Security Information.

  2. Your organization ID can be found under the Trusted entity as the sts:ExternalId value.

Step 2: Roll your cluster

Once the certificates have been uploaded, roll the modified cluster to apply the changes.

How to roll a cluster
  1. Open a worksheet.

  2. Run the following statement, replacing cluster_name with the name of your cluster: roll cluster "cluster_name"

Create your connection

Example

CREATE KAFKA CONNECTION my_kafka_connection
  HOSTS = ('<bootstrap_server_1>:<port_number>','<bootstrap_server_2>:<port_number>')
  CONSUMER_PROPERTIES = 'security.protocol=SSL
      ssl.truststore.location=/opt/kafka.client.truststore.jks
      ssl.keystore.location=/opt/kafka.client.keystore.jks
      ssl.keystore.password=<PASSWORD>
      ssl.key.password=<PASSWORD>';

To use your key and certificate files to connect Upsolver to your Kafka cluster, provide the paths to your uploaded files in the CONSUMER_PROPERTIES of the CREATE KAFKA CONNECTION command.

To allow the connection to be used for reading data, the key store and trust store locations should be configured as CONSUMER_PROPERTIES.
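Once the connection is created, it can be used as the source of an ingestion job. Below is a minimal sketch; the job name, topic, and target table are hypothetical, and the available job options depend on your environment, so see the CREATE JOB reference for the authoritative syntax.

CREATE SYNC JOB ingest_from_kafka
    CONTENT_TYPE = JSON
AS COPY FROM KAFKA my_kafka_connection
    TOPIC = 'my_topic'
INTO default_glue_catalog.my_database.my_table;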
