Pre-aggregate data for efficiency and performance

Increase storage and processing efficiency by pre-aggregating data before loading it into data warehouses or log processors.

Last updated 4 years ago

Raw data can be voluminous, yet analysts often only need summaries that are relevant to the business. By pre-aggregating the data as it is ingested, you improve query performance and minimize storage and compute costs.
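The saving is easy to see in miniature. The sketch below (plain Python with made-up events; the field names are illustrative, not Upsolver's) collapses raw order events into one row per item per day:

```python
# Hypothetical raw order events; in this module the real stream is "Orders".
raw_events = [
    {"itemId": "A", "date": "2020-10-12"},
    {"itemId": "A", "date": "2020-10-12"},
    {"itemId": "B", "date": "2020-10-12"},
]

# Pre-aggregation keeps only one summary row per (item, day) key.
summary_keys = {(e["itemId"], e["date"]) for e in raw_events}

rows_saved = len(raw_events) - len(summary_keys)  # 3 raw rows -> 2 summary rows
```

In real streams the reduction is far larger, since each (item, day) pair typically covers thousands of events.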

The objective of this module is to create a summary table in Athena that contains the totals for each item sold for a given day.

The desired output is an item_summary table with the following schema: one row per item per day, containing the item ID (items_id), the number of items sold (count), and the total (sum_items_price) and average (avg_items_price) item price.

1. Create an Amazon Athena data output

Click on DATA OUTPUT > NEW and SELECT the Amazon Athena output. Give the output a NAME > item_summary, choose DATA SOURCES > Orders, and click on NEXT to continue.

2. Add aggregation to the data stream

From the Upsolver transformation screen, add the itemId field (data.items[].itemId) to the output by clicking the + sign next to the field, then click Make Aggregated. Click OK if a warning pops up.

Click on Add Calculated Field, then find and SELECT the TO_DATE function. Enter DATETIME > data.orderDate and NAME > root.date. Check the data by clicking on PREVIEW; if everything looks OK, click on SAVE.

Click on Add Aggregation, then find and SELECT COUNT(*) and click on SAVE. This counts the number of items within a given time window in the stream. Dismiss any warnings that appear.

Add another aggregation by clicking on Add Aggregation again. This time select SUM from the list, choose data.items[].price, and click on SAVE. Dismiss any warnings that you may encounter.

Add the last aggregation by clicking on Add Aggregation. This time select AVG, again choosing data.items[].price, and click on SAVE. Dismiss any warnings that you may encounter.
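The three aggregations added above correspond to COUNT(*), SUM, and AVG over data.items[].price, grouped per item. A minimal Python sketch of the same computation, using hypothetical sample events shaped like the Orders stream:

```python
from collections import defaultdict

# Hypothetical events shaped like the Orders stream used in this module.
orders = [
    {"orderDate": "2020-10-12", "items": [{"itemId": "A", "price": 10.0},
                                          {"itemId": "B", "price": 5.0}]},
    {"orderDate": "2020-10-12", "items": [{"itemId": "A", "price": 12.0}]},
]

# Group by (itemId, day) and compute COUNT(*), SUM(price), and AVG(price),
# mirroring the three aggregations added in the UI.
summary = defaultdict(lambda: {"count": 0, "sum_items_price": 0.0})
for order in orders:
    for item in order["items"]:
        agg = summary[(item["itemId"], order["orderDate"])]
        agg["count"] += 1
        agg["sum_items_price"] += item["price"]

for agg in summary.values():
    agg["avg_items_price"] = agg["sum_items_price"] / agg["count"]
```

Upsolver performs the same rollup incrementally over the stream within each time window.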

3. Rename the fields to match the output schema

Change the output column names: data_items_itemid > items_id, sum_data_items_price > sum_items_price, and avg_data_items_price > avg_items_price. Be careful when renaming the fields; they are used later.

4. Define the partition strategy for writing data to Amazon Athena

A proper partitioning strategy can significantly improve query performance with Amazon Athena. Upsolver manages partitions and compactions automatically.

Click on Manage Partitions, choose Time Partition Field > date and Time Partition Size > Daily, then click on CLOSE. You will see Partitioned by event time (Daily partitions) on the screen instead of processing time.
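Daily event-time partitions pay off at query time because Athena only scans the partitions matching a date filter. A rough sketch of the idea (the S3 layout shown is illustrative, not Upsolver's exact scheme):

```python
# Hypothetical partition prefixes for a daily-partitioned output.
partitions = [
    "s3://bucket/item_summary/date=2020-10-10/",
    "s3://bucket/item_summary/date=2020-10-11/",
    "s3://bucket/item_summary/date=2020-10-12/",
]

def prune(partitions, day):
    """Return only the partitions a query filtered on `day` must scan."""
    return [p for p in partitions if f"date={day}" in p]

scanned = prune(partitions, "2020-10-12")  # one partition instead of three
```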

5. Verify your SQL and data and define run parameter

Switch to the SQL view. Your SQL should look similar to the following; if it differs, copy and paste the SQL below into your environment.

SET "date" = TO_DATE(data.orderDate);
// GENERATED @ 2020-10-12T04:13:03.089790Z
SELECT PARTITION_TIME("date") AS _date_1:TIMESTAMP,
       AGGREGATION_TIME() AS processing_time:TIMESTAMP,
       data.items[].itemId AS items_id:STRING,
       "date" AS _date:TIMESTAMP,
       COUNT(*) AS "count":DOUBLE,
       SUM(data.items[].price) AS sum_items_price:DOUBLE,
       AVG(data.items[].price) AS avg_items_price:DOUBLE
  FROM "Orders"  
    GROUP BY data.items[].itemId,
          "date",
          PARTITION_TIME("date")
    APPEND ON DUPLICATE

Click on PREVIEW to verify your data, then click on RUN to define the Amazon Athena output. Choose DATABASE NAME > upsolver and TABLE NAME > item_summary, then click on NEXT.

6. Start writing data to Amazon Athena and verify data

Leave everything on the Run page at its default and click on DEPLOY. Leaving ENDING AT as Never makes the output a continuous stream.

Upsolver starts writing data to Amazon Athena. You can click on PROGRESS to monitor output progress. This may take a minute.

7. Query item_summary table for aggregated data

When Upsolver creates an upserting output to Athena, it creates two objects: a table with the suffix _underlying_data and a view with the output name. When querying, query the VIEW, not the underlying table.

-- The number of each item sold
SELECT items_id,
       SUM("count")
FROM upsolver.item_summary
GROUP BY items_id
LIMIT 10;

-- The average price of the items
SELECT "_date",
       SUM("sum_items_price") / SUM("count") AS avg_price
FROM upsolver.item_summary
GROUP BY "_date"
LIMIT 10;
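Note that the second query derives the average as SUM("sum_items_price") / SUM("count") rather than averaging the stored avg_items_price values. A short sketch with made-up numbers shows why: an average of per-group averages mis-weights groups of different sizes.

```python
# Two hypothetical partial aggregates of different sizes.
groups = [
    {"count": 2, "sum": 22.0},  # per-group average 11.0
    {"count": 8, "sum": 40.0},  # per-group average 5.0
]

# Correct overall average: total price / total item count.
weighted_avg = sum(g["sum"] for g in groups) / sum(g["count"] for g in groups)

# Naive average of averages over-weights the small group.
avg_of_avgs = sum(g["sum"] / g["count"] for g in groups) / len(groups)
```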

Using Athena, you can now query the data set created by Upsolver (it may take a few minutes before the data becomes available). Log in to the Athena console to run the sample queries above.
