Data outputs

This page provides an overview of the different data outputs that can be created in Upsolver.

Upsolver supports sending data to a variety of data outputs, including Amazon Athena, Amazon Redshift Spectrum, Upsolver itself, and more.

Storage

These data outputs enable you to write data to blob storage and continue processing from there (e.g. send data to Amazon S3, then trigger notifications and send emails).

Storage output options:

  • Amazon S3
  • HDFS
  • Google Storage
  • Microsoft Azure Storage

Amazon Athena

Athena is a managed query engine over data stored as files in Amazon S3. Using table definitions in the AWS Glue Data Catalog (a metastore that records the location and format of those files), Athena provides SQL access to the data.

Amazon Athena is a core data output that enables you to get your data into a data lake and query it in a cost-effective manner. After the data is organized, compacted, and stored in an Athena table on Amazon S3, you can run fast, cheap queries on the data lake.

A queryable data lake is a useful step in the process, as it lets you inspect your data and run ad hoc SQL queries to understand it before writing it to a database (which is more expensive to run and maintain). Data is stored in Apache Parquet format; because Parquet is columnar and the data is partitioned, you scan only the data you need.

There are also significant cost and speed benefits to using Athena, as it only charges for the data scanned (e.g. $X per TB).

For example, a 100 MB compressed Parquet file may hold a GB of raw data. If you query only one of the ten columns in the file, you are charged for scanning roughly 10 MB of data.

Athena also allows you to partition the data by date (e.g. by year). The date appears as a column that can be added to your WHERE clause; since Athena resolves this condition first and scans only the relevant partitions, partitioning further reduces the cost.
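
For illustration, a partition-pruned query submitted through boto3 might look like the following (a minimal sketch; the table, database, and bucket names are hypothetical):

```python
import boto3

athena = boto3.client("athena", region_name="us-east-1")

# The WHERE clause filters on the partition column, so Athena only
# scans the partitions covering the requested date range.
response = athena.start_query_execution(
    QueryString="""
        SELECT user_id, event_type
        FROM web_events                             -- hypothetical table
        WHERE partition_date >= DATE '2021-01-01'   -- partition column
    """,
    QueryExecutionContext={"Database": "my_data_lake"},                # hypothetical
    ResultConfiguration={"OutputLocation": "s3://my-query-results/"},  # hypothetical
)
print(response["QueryExecutionId"])
```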

Consider a case where you have stored two years of data at a rate of 50 GB a week, roughly 5 TB in total:

  • If you only need to query the past 5 weeks, you will be querying only 250 GB of data.

  • If you also need only a tenth of the columns, you pay to scan only 25 GB.

As a result, you can pay just cents for a query instead of tens of dollars. This reduction in cost makes data that would previously have been too expensive to query usable.
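
The arithmetic is easy to check (a quick sketch; the per-TB price is a placeholder, so check current Athena pricing):

```python
# Back-of-the-envelope Athena scan-cost estimate for the example above.
PRICE_PER_TB = 5.0                   # placeholder USD per TB scanned

weekly_gb = 50
total_gb = weekly_gb * 52 * 2        # two years of data: ~5,200 GB
recent_gb = weekly_gb * 5            # only the last 5 weeks: 250 GB
scanned_gb = recent_gb * (1 / 10)    # 1 of 10 columns: 25 GB actually scanned

full_scan_cost = total_gb / 1024 * PRICE_PER_TB   # ~ $25
pruned_cost = scanned_gb / 1024 * PRICE_PER_TB    # ~ $0.12
print(f"full scan: ${full_scan_cost:.2f}, partitioned + columnar: ${pruned_cost:.2f}")
```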

Athena is serverless. If you require control over the underlying servers, consider using Qubole instead.

Amazon Redshift Spectrum

Redshift Spectrum provides similar functionality to Amazon Athena at a similar cost. The data is stored in an Amazon S3 bucket connected to a Redshift cluster, and since the data is partitioned, each query scans only the relevant partitions.

A Redshift Spectrum data output provides speedy, cost-effective access to your data, and is also useful when you need to query and join data across S3 and Redshift.

Streams

These data outputs enable you to stream your events to your Amazon Kinesis stream or Apache Kafka topic.
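
For illustration, writing a single event to a Kinesis stream with boto3 looks like this (a minimal sketch; Upsolver performs these writes for you, and the stream name is hypothetical):

```python
import json
import boto3

kinesis = boto3.client("kinesis", region_name="us-east-1")

# Send one JSON-encoded event to the stream; the partition key
# determines which shard receives the record.
event = {"user_id": "u-123", "event_type": "click"}
kinesis.put_record(
    StreamName="upsolver-output-stream",   # hypothetical stream name
    Data=json.dumps(event).encode("utf-8"),
    PartitionKey=event["user_id"],
)
```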

Database

Database data outputs are used when a project requires the data to land in a specific database.

One use case is to write all your data into staging tables, use periodic queries to aggregate the data, and then write the results to the final tables (an Extract, Load, Transform (ELT) methodology), as sketched below. You can then clear your staging tables periodically, preserving only the transformed data, so that excessive storage is not required and costs stay under control.
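
A minimal sketch of that staging cycle, assuming a PostgreSQL target (table names, columns, and connection details are hypothetical):

```python
import psycopg2  # assumed PostgreSQL target; other databases work similarly

# Sketch of the staging -> aggregate -> cleanup cycle described above.
conn = psycopg2.connect("dbname=analytics user=etl")  # hypothetical DSN
with conn, conn.cursor() as cur:
    # 1. Aggregate the raw events loaded into the staging table
    #    into the final, much smaller table...
    cur.execute("""
        INSERT INTO daily_clicks (day, user_id, clicks)
        SELECT date_trunc('day', event_time), user_id, COUNT(*)
        FROM staging_events
        GROUP BY 1, 2
    """)
    # 2. ...then clear the staging table so it does not grow unbounded.
    cur.execute("TRUNCATE staging_events")
conn.close()
```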

Given that a database can cost hundreds of times more than storing the same data in S3, it is important to plan your database data outputs carefully.

A common scenario for customers moving to Upsolver is to keep the raw data in one of the more cost-effective storage outputs, pre-aggregate it, and write only the aggregated output to smaller tables in the database.

Another use case is storing log analytics in Elasticsearch, which tends to produce a fast-growing index with many duplicates. By deduplicating the data before output, you can achieve significant savings without losing any fidelity.

Database output options:

  • Amazon Redshift
  • Snowflake
  • MySQL
  • PostgreSQL
  • Microsoft SQL Server
  • Elasticsearch

Upsolver

The Upsolver data output is for plain ETL: it is useful when you need to transform data and create a new data source from the transformed result. Typically this includes cleaning and sanitizing data, ordering it, aggregating it, and/or splitting it into multiple streams.

This sort of data output can be created as part of a data-preparation step. Once the data source is created, the output can be duplicated and its output type switched.

Qubole

Upsolver supports Qubole, a cloud-native data platform that provides managed clusters running various open-source systems such as Presto (the engine underlying Athena).

With Qubole, you have control over the servers and can spin up additional ones if required. Qubole pricing is based on servers rather than data scanned, making it more cost-effective when you have huge volumes of data to query.

Note: The outputs for Athena, Redshift Spectrum, and Qubole are all equivalent (the files are stored and managed in the same way).

Lookup tables

Lookup tables are defined as a transformation like any other output, except that the result is a key-value store you can query for fast lookups (e.g. to get all users who clicked on a specific ad in real time).

Lookup tables are useful for:

  • Stream enrichment: by querying a lookup table, you can enrich one stream with data from another stream.
  • Flattening relational data: since storing data in a data lake is comparatively cheap, it can pay to flatten relational data so that joins are no longer needed at query time, making it easier to extract the required data.

An Upsolver lookup table replaces both ETL code and a serving database such as Redis or Cassandra.

When a lookup table is defined as real time, instead of waiting until the data is written to S3 and read back from disk, the event's details (the delta) are applied directly in memory and only then persisted to S3. This is useful in cases such as serving data about a user in real time while that user is browsing a website.
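
Conceptually, enriching a stream from a lookup table works like the following (a toy in-memory sketch, not Upsolver's actual API; all field names are invented):

```python
# Toy illustration of key-value stream enrichment; in Upsolver the
# lookup table is a managed key-value store, not a Python dict.
user_segments = {"u-123": "premium", "u-456": "trial"}  # the "lookup table"

click_stream = [
    {"user_id": "u-123", "ad_id": "ad-9"},
    {"user_id": "u-456", "ad_id": "ad-9"},
]

for event in click_stream:
    # Enrich each click with the user's segment from the lookup table.
    event["segment"] = user_segments.get(event["user_id"], "unknown")
    print(event)
```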
