Amazon Kinesis Stream data source

This article explains how Upsolver works with Amazon Kinesis and walks through creating an Amazon Kinesis Stream data source in Upsolver.

How Upsolver works with Amazon Kinesis

Reading from a stream

Upsolver reads events from the Amazon Kinesis stream that you specify.

While setting up the data source, Upsolver scans the last 30 minutes of data in the stream in order to find 100 records to preview and to auto-detect the content format.

If there is no data from the last 30 minutes, you will need to select a content format.

When the data source is created, it is populated with the 100 sample records from the preview. Once data is captured from the stream, this preview data is replaced with the actual data.

This means that if you do not choose to read from the start, the data source initially contains 100 historical records from the past 30 minutes, and these are replaced as data is ingested from the stream.

If you create an output based on this data source, the preview data is not included.
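To make this scan concrete, here is a minimal boto3 sketch of reading the last 30 minutes of a stream to collect a 100-record sample. It illustrates the Kinesis API involved, not Upsolver's actual implementation; the stream name and region are placeholders:

```python
import boto3
from datetime import datetime, timedelta, timezone

kinesis = boto3.client("kinesis", region_name="us-east-1")  # region is a placeholder
STREAM = "my-stream"  # hypothetical stream name

# Ask each shard for records starting 30 minutes ago.
start = datetime.now(timezone.utc) - timedelta(minutes=30)

sample = []
for shard in kinesis.list_shards(StreamName=STREAM)["Shards"]:
    iterator = kinesis.get_shard_iterator(
        StreamName=STREAM,
        ShardId=shard["ShardId"],
        ShardIteratorType="AT_TIMESTAMP",
        Timestamp=start,
    )["ShardIterator"]
    response = kinesis.get_records(ShardIterator=iterator, Limit=100)
    sample.extend(record["Data"] for record in response["Records"])
    if len(sample) >= 100:
        break

sample = sample[:100]  # the 100-record preview; format detection would inspect these payloads
```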

Data retention

By default, the data in a data source is kept forever. If you specify a retention period (e.g. 30 days), it applies only to the data source.

If you create an output based on this data source, the output has its own retention setting, and the two do not need to match (e.g. the output may be configured to retain the data for a year).

Create an Amazon Kinesis data source

If you wish to customize the data source by selecting:

  • a specific compute cluster

  • a target storage option

  • retention options

see Create an Amazon Kinesis data source (Advanced) below.

1. From the Data Sources page, click New.

2. Select Amazon Kinesis Stream.

3. Select your AWS region.

4. Select the Kinesis stream to read from.

5. Name this data source.

6. Click Continue. The S3 Bucket Integration window will appear.

If you have connected to this stream previously, and therefore have the required permissions, you can now use your Amazon Kinesis Stream data source.

7. If you have not connected to this stream previously, click Launch Integration to launch the AWS CloudFormation page in a new tab.

8. Check the I acknowledge statement and click Create Stack.

Once the stack is created with the required resources, the message in Upsolver disappears automatically.

You can now use your Amazon Kinesis Stream data source.
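If you want to confirm the stack outside the Upsolver UI, here is a hedged boto3 sketch. The stack name and region below are placeholders; use the name shown on the CloudFormation page that Launch Integration opened:

```python
import boto3

cfn = boto3.client("cloudformation", region_name="us-east-1")  # region is a placeholder

# "upsolver-kinesis-integration" is a hypothetical stack name; substitute your own.
stack = cfn.describe_stacks(StackName="upsolver-kinesis-integration")["Stacks"][0]
print(stack["StackStatus"])  # "CREATE_COMPLETE" once the required resources exist
```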

Create an Amazon Kinesis data source (Advanced)

1. From the Data Sources page, click New.

2. Select Amazon Kinesis Stream and then click Advanced.

3. Name this data source.

4. Select the content format. This is typically auto-detected, but you can manually select a format. If the content format is not identified automatically, you will be prompted to select it and to configure any related options; see Data formats.

5. Select the desired Kinesis connection (or create a new one).

6. Select the name of the Kinesis stream to read from.

7. Decide whether or not to read the stream from the start. How far back the start reaches depends on the stream's Amazon Kinesis retention setup; this could be a day or a week.

If you do not select Read From Start, the stream is read from the current time onwards and there will be no historical data in the data source. (For how this choice maps onto the Kinesis API, see the shard iterator sketch after these steps.)

8. From the dropdown, select a compute cluster (or create a new one) to run the calculation on.

9. Select a target storage connection (or create a new one) to store the data read from the stream (output storage).

10. If you choose to check Enabled, specify a retention period in Upsolver for the data in minutes, hours, or days.

The data will be deleted permanently after this period elapses. By default, the data is kept forever.

Click Show Advanced Options to configure:

  • Real Time Statistics

  • Shards

  • Execution Parallelism

  • End Read At

  • Compression

  • Store Raw Data

Otherwise, skip to step 17.

11. Check Real Time Statistics to calculate the data source statistics in real time directly from the input stream.

This is relevant for lookup tables, where answers are required very fast and in real time.

12. Specify the number of shards, or parallel readers, from Amazon Kinesis (the more shards, the quicker the data processing).

Each shard in Upsolver can read from one or more shards in Amazon Kinesis; the number of shards in Upsolver must be less than or equal to the number of shards in Amazon Kinesis.

Typically you need one Upsolver shard per 10-20 MBps of data (see the sizing sketch after these steps). Contact Upsolver Professional Services if you want to configure this option.

13. Contact Upsolver Professional Services if you wish to configure the execution parallelism. This determines how many files will be created in the data source per minute and must be set to a value that is less than the number of shards.

14. (Optional) Under End Read At, select a time to stop reading from the stream. This is useful if you wish to stop processing a stream.

15. Select the compression in the stream from the dropdown options.

If you set the compression to Auto Detect, the ingested data can be compressed in multiple formats.

16. Check Store Raw Data to store an additional copy of the data in its original format without any transformations.

17. Click Continue. A preview of the data will appear.

18. For CSV, select a Header.

19. Click Continue again.

Parsed data samples will appear, and you can review the Sample Size, Parsed Successfully, and # of Errors fields.

20. (Optional) If there are any errors, click Back to change the settings as required.

21. Click Create.

You can now use your Amazon Kinesis Stream data source.
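For reference, here is how the Read From Start choice in step 7 maps onto the Kinesis shard iterator types, as a minimal boto3 sketch (the stream name, shard ID, and region are placeholders):

```python
import boto3

kinesis = boto3.client("kinesis", region_name="us-east-1")  # region is a placeholder

# Read From Start corresponds to TRIM_HORIZON: start at the oldest record the stream
# still retains (24 hours by default; longer if extended retention is enabled).
from_start = kinesis.get_shard_iterator(
    StreamName="my-stream",          # placeholder
    ShardId="shardId-000000000000",  # placeholder
    ShardIteratorType="TRIM_HORIZON",
)["ShardIterator"]

# Leaving Read From Start unchecked corresponds to LATEST: only records that arrive
# after reading begins, so the data source holds no historical data.
from_now = kinesis.get_shard_iterator(
    StreamName="my-stream",
    ShardId="shardId-000000000000",
    ShardIteratorType="LATEST",
)["ShardIterator"]
```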

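And here is a small worked example of the shard sizing rule of thumb from step 12. upsolver_shards_needed is a hypothetical helper written for illustration, not an Upsolver API:

```python
import math

def upsolver_shards_needed(throughput_mbps: float, kinesis_shards: int,
                           mbps_per_upsolver_shard: float = 15.0) -> int:
    """Rule of thumb: one Upsolver shard per 10-20 MBps of data (15 used here),
    capped at the Kinesis shard count, since Upsolver must not have more shards
    than the stream."""
    needed = math.ceil(throughput_mbps / mbps_per_upsolver_shard)
    return max(1, min(needed, kinesis_shards))

# A stream carrying ~60 MBps across 8 Kinesis shards -> 4 Upsolver shards.
print(upsolver_shards_needed(throughput_mbps=60, kinesis_shards=8))  # 4
```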