
Amazon Kinesis

This page describes how to create and maintain connections to your Amazon Kinesis stream.

To read and work with your Amazon Kinesis data in Upsolver, you should first create a connection to your Kinesis stream that provides Upsolver with the necessary credentials to access your data. This guide shows you how.

Create a Kinesis connection

Simple example

A Kinesis connection can be created as simply as follows:

CREATE KINESIS CONNECTION my_kinesis_connection
    REGION = 'us-east-1';

Note that the connection in this example is created based on the default credentials derived from Upsolver's integration with your AWS account.

Full example

The following example also creates a Kinesis connection, but it additionally configures credentials with a specific IAM role, establishes the connection as read-only, and sets a few optional properties:

CREATE KINESIS CONNECTION my_kinesis_connection
    AWS_ROLE = 'arn:aws:iam::123456789012:role/upsolver-sqlake-role'
    REGION = 'us-east-1'
    READ_ONLY = true
    MAX_WRITERS = 22
    STREAM_DISPLAY_FILTERS = ('stream1', 'stream2')
    COMMENT = 'kinesis connection example';

To establish a connection with specific permissions, you can configure the AWS_ROLE option (optionally together with EXTERNAL_ID) as in the example above, or you can provide the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY options with credentials that can read from your stream.
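
For instance, a key-based connection might look like the following sketch. This assumes you have an access key pair with permission to read the stream; the key values shown here are placeholders, not real credentials:

CREATE KINESIS CONNECTION my_kinesis_connection_keys
    AWS_ACCESS_KEY_ID = 'AKIAIOSFODNN7EXAMPLE'        -- placeholder access key
    AWS_SECRET_ACCESS_KEY = 'wJalrXUtnFEMI/K7MDENG'   -- placeholder secret key
    REGION = 'us-east-1';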

Additionally, all connections have both read and write permissions by default, but you can create a read-only connection by setting READ_ONLY to true.

You can also limit which streams are displayed within your catalog by providing a list of streams with the STREAM_DISPLAY_FILTER[S] option.

After you've created your connection, you are ready to move on to the next step of building your data pipeline: reading your data into Upsolver with an ingestion job.
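
As a brief illustration, an ingestion job that copies data from a Kinesis stream into a table might look like the following sketch; the stream name, target table, and job options here are hypothetical, so refer to the ingestion job documentation for the exact syntax your pipeline needs:

CREATE SYNC JOB ingest_from_kinesis
    START_FROM = BEGINNING
    CONTENT_TYPE = JSON
AS COPY FROM KINESIS my_kinesis_connection
    STREAM = 'sample_stream'                          -- hypothetical stream name
INTO default_glue_catalog.my_database.raw_events;     -- hypothetical target table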

Alter a Kinesis connection

Some connection options are mutable, meaning you can run a SQL command to alter an existing Kinesis connection rather than create a new one.

For example, take the Kinesis connection we created previously based on default credentials:

CREATE KINESIS CONNECTION my_kinesis_connection
    REGION = 'us-east-1';

To change the connection's permissions while keeping everything else the same, without creating an entirely new connection, you can run the following command:

ALTER KINESIS CONNECTION my_kinesis_connection
    SET AWS_ROLE = 'arn:aws:iam::123456789012:role/new-sqlake-role'; 

Note that some options such as READ_ONLY and REGION cannot be altered once the connection has been created.

Drop a Kinesis connection

If you no longer need a connection, you can easily drop it with the following SQL command:

DROP CONNECTION my_kinesis_connection; 

However, note that a connection cannot be dropped if existing tables or jobs depend on it.


Learn more

For the full list of connection options with syntax and detailed descriptions, please see the SQL command reference for Amazon Kinesis.