Uploading user-provided certificates


To connect to Kafka or other third parties, you may need to upload the required certificates to the Upsolver servers by running a PATCH HTTP request for each cluster you wish to use with your connection.

Note that the instructions below were written for a Linux-based system with HTTPie installed.

If you are working on Windows, you can use a Linux-compatibility environment to run the examples below, adjust the examples to your current system, or reach out to Upsolver support and send us the certificates so we can upload them for you.
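
If HTTPie and jq (both used in the examples below) are not already installed, they are available from most package managers. A rough sketch for a Debian/Ubuntu-style system (adjust for your distribution):

# Install the tools used by the examples below
sudo apt-get install httpie jq
# HTTPie can also be installed via pip
python3 -m pip install httpie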

How to upload files

The following is an example of how to upload your keystore and truststore files to connect to Kafka, but it can easily be adjusted to upload other files as well. The files you require and the Kafka properties you need to use may differ from the ones shown in this example.

The files need to be uploaded to both the API cluster and the data updates cluster that the data source will be using; it is recommended to upload the certificates to all clusters. The API cluster uses the certificates to test the connection when a new connection is created, while the data updates cluster uses them to run the reading tasks.

There are two request types for uploading the files:

  1. ModifyServerFiles - Uploads multiple files at once. It replaces any existing files: only the files in the latest upload are kept, and all previously uploaded files are removed.

  2. ModifyServerFile - Uploads a single file per request. It does not remove previously uploaded files.

The name must be unique; with both request types, uploading a new file with an existing name replaces the previously uploaded file of that name.
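
As a rough sketch, the two payload shapes differ only in that ModifyServerFiles takes a serverFiles array while ModifyServerFile takes a single serverFile object (the values below are placeholders; the full upload commands are shown in Step 1):

# ModifyServerFiles: serverFiles is an array; uploading replaces all previously uploaded files
jq -n '{ clazz: "ModifyServerFiles", serverFiles: [ { name: "file-a", path: "/opt/file-a", content: "<base64 content>" } ] }'

# ModifyServerFile: serverFile is a single object; other previously uploaded files are kept
jq -n '{ clazz: "ModifyServerFile", serverFile: { name: "file-b", path: "/opt/file-b", content: "<base64 content>" } }'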

Step 1

First, run one of the following requests for the API and Data Updates servers:

echo {} | jq '{ clazz: "ModifyServerFiles", serverFiles: [ { name: "<Unique Name>", "path": "<Unique path>", "content": $file1 }, { name: "<Unique Name>", "path": "<Unique path>", "content": $file2 } ]  }' --arg file1 "$(cat <Local file path>/<file name> | base64 -w 0)" --arg file2 "$(cat <Local file path>/<file name> | base64 -w 0)" |
http PATCH "https://api.upsolver.com/environments/<SERVER_ID>/" "Authorization: <API_TOKEN>" "x-user-organization: <ORG_ID>"

The first line of the request creates a JSON array serverFiles which contains the name, path, and content of the files you are uploading.

The name and path referenced are the name of the certificate and the path the file is written to on the server; the path is also the one that should be provided when using this file to establish a connection. Replace <Unique Name> with a unique name for the certificate and <Unique path> with a unique path for the file on the server.

The content of the file is passed in as an argument with --arg. <Local file path> represents the path to the file you are uploading on your local computer, and <file name> represents the name of the file you'd like to upload; be sure to include the file's extension.

This example uploads two files to the server, but the serverFiles array can be adjusted to upload one or more files.

Finally, provide the <SERVER_ID>, as well as the <API_TOKEN> and the <ORG_ID>.

An example of a final request:

echo {} | jq '{ clazz: "ModifyServerFiles", serverFiles: [ { name: "cert.pem", "path": "/opt/cert.pem", "content": $file1 } ]  }' --arg file1 "$(cat ~/Downloads/cert.pem | base64 -w 0)" |
http PATCH "https://api.upsolver.com/environments/12345678-1234-1234-1234-12345678/" "Authorization: $(cat ~/.upsolver/token)" "X-Api-Impersonate-Organization: 12345678-1234-1234-1234-12345678"
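
If HTTPie is not available, the same request can be sent with curl. The sketch below mirrors the example above and assumes the endpoint accepts the identical JSON body and headers:

echo {} | jq '{ clazz: "ModifyServerFiles", serverFiles: [ { name: "cert.pem", path: "/opt/cert.pem", content: $file1 } ] }' --arg file1 "$(cat ~/Downloads/cert.pem | base64 -w 0)" |
curl -X PATCH "https://api.upsolver.com/environments/12345678-1234-1234-1234-12345678/" \
  -H "Authorization: $(cat ~/.upsolver/token)" \
  -H "X-Api-Impersonate-Organization: 12345678-1234-1234-1234-12345678" \
  -H "Content-Type: application/json" \
  --data-binary @-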
How to find your <API_SERVER_ID>
  1. Go to the Clusters page and click on the PrivateAPI cluster.

  2. Click on Copy API endpoint in the upper right corner.

  3. The API server ID can be found within the endpoint as follows: https://api-<API_SERVER_ID>.upsolver.com/

How to find your <CLUSTER_ID>
  1. Go to the Clusters page and click on the cluster you wish to upload your files to.

  2. Once you are on that specific cluster's page, the cluster ID can be found within the page's URL as follows: https://app.upsolver.com/environments/view/<CLUSTER_ID>

How to find your <ORG_ID>
  1. Navigate to the SAML Integration page by clicking More > SAML.

  2. Your org id can be found at the end of the Audience URI as upsolver://organization/<ORG_ID>

Note that running the ModifyServerFiles request removes any files that were previously uploaded to the server.

To upload a single file without removing any existing ones, run the following request instead:

echo {} | jq '{ clazz: "ModifyServerFile", serverFile: { name: "<Unique Name>", "path": "<Unique path>", "content": $file1 }  }' --arg file1 "$(cat <Local file path>/<File name> | base64 -w 0)" |
http PATCH "https://api.upsolver.com/environments/<SERVER_ID>/" "Authorization: <API_TOKEN>" "X-Api-Impersonate-Organization: <ORG_ID>"

The first line of the request creates a JSON object serverFile, which contains the name, path, and content of the file you are uploading.

The name and path referenced are the name of the certificate and the path the file is written to on the server; the path is also the one that should be provided when using this file to establish a connection. Replace <Unique Name> with a unique name for the certificate and <Unique path> with a unique path for the file on the server.

The content of the file is passed in as an argument with --arg. <Local file path> represents the path to the file you are uploading on your local computer, and <File name> represents the name of the file you'd like to upload; be sure to include the file's extension.

Note: The name parameter must be unique across different files. Using the same name will replace the existing file with that name.

Finally, provide the <SERVER_ID>, as well as the <API_TOKEN> and the <ORG_ID>.
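
As with the ModifyServerFiles example above, a filled-in single-file request might look like the following (the file name, path, and IDs are placeholders):

echo {} | jq '{ clazz: "ModifyServerFile", serverFile: { name: "kafka.truststore.jks", "path": "/opt/kafka.truststore.jks", "content": $file1 }  }' --arg file1 "$(cat ~/Downloads/kafka.truststore.jks | base64 -w 0)" |
http PATCH "https://api.upsolver.com/environments/12345678-1234-1234-1234-12345678/" "Authorization: $(cat ~/.upsolver/token)" "X-Api-Impersonate-Organization: 12345678-1234-1234-1234-12345678"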

Step 2

Once the certificates have been uploaded, roll the modified cluster to apply the changes.

How to roll a cluster
  1. Go to the Clusters page and select the cluster you would like to roll.

  2. In the upper right-hand corner, click the three dots next to Stop.

  3. Select Roll from the list of options that appears.

Step 3

If you followed this example using Kafka, you should now set the following as your consumer properties for your Kafka data sources:

security.protocol=SSL
ssl.truststore.location=<Unique path>
ssl.keystore.location=<Unique path>
ssl.keystore.password=<PASSWORD>
ssl.truststore.password=<PASSWORD>
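
Here, <Unique path> refers to the path each file was uploaded to in Step 1. For example, if the truststore and keystore were uploaded to /opt/kafka.truststore.jks and /opt/kafka.keystore.jks (hypothetical paths chosen for this sketch), the properties would be:

security.protocol=SSL
ssl.truststore.location=/opt/kafka.truststore.jks
ssl.keystore.location=/opt/kafka.keystore.jks
ssl.keystore.password=<PASSWORD>
ssl.truststore.password=<PASSWORD>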

Note: the Kafka properties above are only an example; the final configuration depends on the security settings you are using in Kafka. More information can be found in the Kafka connection documentation.

For existing Kafka data sources, you can update the consumer properties by going to Properties > Advanced > Kafka Consumer Properties.

To learn how to generate an API token, see the Upsolver REST API documentation.

For information on creating a Kafka connection, see the Kafka connection documentation.