Transform and write data to Amazon Athena

Build a real-time pipeline that continuously streams data from a cloud object store to Amazon Athena, using Upsolver's built-in functions and SQL to transform the streaming data.


This tutorial uses pre-populated retail data and streams it continuously to an Amazon Athena table. Upsolver supports many other data sources and data outputs out-of-the-box.

1. Create a data output

Click on DATA OUTPUT > NEW and SELECT Amazon Athena.

Give the output a NAME > shopping, choose DATA SOURCES > Orders, and click on NEXT.

Note: The fields are automatically parsed and shown on the left-hand side of the UI, along with statistics and data distribution information for each field.

From the Upsolver Transformation Screen, we can add columns to the output, create calculated fields, manage aggregations, and perform UPSERTs. Upsolver provides both a UI for building the transformations and a SQL interface, which makes Upsolver approachable to less technical users who may not be familiar with big data technologies.

A raw sample record from Amazon S3 is shown below:

{
  "orderId": "abc",
  "buyerEmail": "ben@upsolver.com",
  "orderDate": "2020/07/07T01:00:00Z",
  "items": [{"itemId": "widget", "price": 19.99},
            {"itemId": "trinket", "price": 9.99}],
  "netTotal": 29.98,
  "salesTax": 1.80
}

The desired output is an Amazon Athena table that flattens the nested items into individual rows. In addition, the buyerEmail should be hashed to protect PII, and an order_total should be added by summing netTotal and salesTax.
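For illustration, the sample record above flattens into two output rows along these lines (processing_time and order_date are omitted for brevity, and the MD5 value is shown as a placeholder):

item_id | item_price | order_id | order_net_total | order_sales_tax | order_total | buyer_email_hash
widget  | 19.99      | abc      | 29.98           | 1.80            | 31.78       | <md5 of buyerEmail>
trinket | 9.99       | abc      | 29.98           | 1.80            | 31.78       | <md5 of buyerEmail>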

2. Map fields from the source to Amazon Athena

Map source fields to output

The first transformation is to add column mappings for the items. Expand the items array and click the + symbol next to itemId and price on the left-hand side.

You will see a warning regarding the mapping of NUMERIC fields to DOUBLE. This warning can be closed, as we want to keep the item price as a DOUBLE.

Add fields and edit the output field name

Click in the OUTPUT COLUMN NAME field and rename the columns: data_items_itemid > item_id and data_items_price > item_price.

Click on the PREVIEW button, and verify that the output contains a row for each ordered item.

Add 3 more columns to the output and rename them orderId > order_id, netTotal > order_net_total, and salesTax > order_sales_tax. Dismiss the warnings.
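In SQL terms, these column mappings correspond one-to-one with projection lines in the generated query (shown in full in step 4):

SELECT data.items[].itemId AS item_id:STRING,
       data.items[].price AS item_price:DOUBLE,
       data.orderId AS order_id:STRING,
       data.netTotal AS order_net_total:DOUBLE,
       data.salesTax AS order_sales_tax:DOUBLE
  FROM "Orders"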

3. Transform data by creating calculated field

Upsolver provides over 200 built-in functions for easy data transformations. Additionally, users can extend this capability using Python UDFs.

Transform DATE format

Click on Add Calculated Field in the upper left corner. Find the TO_DATE function and click on SELECT. Under DATETIME, enter data.orderDate and set NAME > root.order_date. Click on PREVIEW to make sure the transformation looks correct, then click on SAVE.
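This UI step is equivalent to the following SET statement, which appears in the generated SQL in step 4:

SET order_date = TO_DATE(data.orderDate);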

Mask field with MD5 hashing function

To protect PII, we're going to hash buyers' email addresses using the MD5 algorithm. Click on Add Calculated Field in the upper left corner. Find the MD5 function and click on SELECT. Enter INPUT > data.buyerEmail and NAME > root.buyer_email_hash. Click on PREVIEW to make sure everything looks correct, then click on SAVE.

Note: Upsolver displays a syntax sample for each function on the right to help you get started.
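As with the date transformation, this UI step maps to a SET statement in the generated SQL:

SET buyer_email_hash = MD5(data.buyerEmail);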

4. Use SQL interface for aggregation

Switch to the SQL view by clicking SQL in the upper right corner.

All of the mappings and transformations from the previous steps are expressed there as a SQL query. Any changes you make in the SQL view are also reflected in the UI view, and vice versa.

Add a simple calculated field, data.netTotal + data.salesTax AS order_total:DOUBLE, directly in the SQL. Notice that the added field is automatically reflected in the UI. Click on PREVIEW to make sure everything looks correct.

Checkpoint: your SQL should look like the code below. Copy and paste the SQL into your SQL tab if anything is missing.

SET order_date = TO_DATE(data.orderDate);
SET buyer_email_hash = MD5(data.buyerEmail);
// GENERATED @ 2020-10-11T20:45:11.427116Z
SELECT time AS processing_time:TIMESTAMP,
       data.items[].itemId AS item_id:STRING,
       data.items[].price AS item_price:DOUBLE,
       data.orderId AS order_id:STRING,
       data.netTotal AS order_net_total:DOUBLE,
       data.salesTax AS order_sales_tax:DOUBLE,
       order_date AS order_date:TIMESTAMP,
       buyer_email_hash AS buyer_email_hash:STRING,
       data.netTotal + data.salesTax AS order_total:DOUBLE
  FROM "Orders"  

5. Connect to Amazon Athena

Configure Amazon Athena run parameter

Click on RUN in the upper right corner and choose S3 STORAGE > S3 Default Output Storage.

Select CONNECTION > Create your first Amazon Athena Connection, provide a NAME > athena, leave everything else at its default value, and click on CREATE.

Go back to the RUN PARAMETER screen by clicking your browser's back button twice; you should see athena populated under CONNECTION. Provide the DATABASE NAME that you want to output to (this example uses upsolver) and TABLE NAME > orders. Click on NEXT.

On the RUN shopping screen, leave everything as default and click on DEPLOY.

Because ENDING AT is left as Never, the output runs as a continuous stream.

6. Verify data in Amazon Athena

Log in to the Athena console; you can now query the data set created by Upsolver (it may take a few minutes before the data becomes available). Some sample queries are below:

-- get a list of orders
SELECT order_date,
       order_id,
       buyer_email_hash,
       order_net_total,
       order_sales_tax,
       order_total
FROM upsolver.orders
GROUP BY 1, 2, 3, 4, 5, 6
LIMIT 10
-- how many orders per day?
SELECT COUNT(DISTINCT order_id) AS orders_per_day,
       DATE_TRUNC('DAY', order_date) AS order_day
FROM upsolver.orders
GROUP BY DATE_TRUNC('DAY', order_date)
ORDER BY DATE_TRUNC('DAY', order_date) DESC

In addition to transforming data to prepare it for querying, Upsolver performs other behind-the-scenes optimizations that improve the performance of Athena queries. Not only does Upsolver follow best practices such as using the Parquet format and partitioning data, it also runs compaction to improve the underlying file sizes. You can read more about compaction in the Upsolver documentation.
