Amazon Athena data output

Process streaming data from end-to-end within 10 minutes. The data source is an Amazon S3 bucket (updated continuously). Upsolver processes the data and loads to Amazon Athena table.


Last updated 4 years ago


To help you get started with Upsolver, you can try it out for free. You can choose between Community and Upsolver Dedicated Compute subscriptions. Both options give you free Upsolver units (UUs): units of processing capability per hour, based on VM instance type.

Prerequisites

  1. An existing AWS environment to output your data to

  2. Appropriate permissions to output to an Amazon Athena environment

  3. Have an Athena environment set up. If not, follow these instructions.

Create a data source

1. Click on DATA SOURCES > NEW to connect to various data sources. Upsolver works with both streaming and static data sources. We're using sample mobile gaming data for this tutorial.

2. Click on SELECT next to Amazon S3 data source. There are many data sources to choose from. The Quickstart uses pre-populated mobile data.

3. Choose BUCKET > upsolver-tutorials-mobile-users by clicking on the dropdown box. Fill in FOLDER > data and DATE PATTERN > yyyy/MM/dd/HH/, then click on CONTINUE. Upsolver automatically verifies the files in the bucket and displays blue checkmarks next to the file names.

4. Upsolver automatically parses the data and displays a sample of the data that you’re working with. You can expand each record to view each event in a formatted view.

5. Click on CREATE to complete the Amazon S3 data source. Notice that the schema is automatically generated and data statistics are displayed.

Create a data output

1. Click on NEW OUTPUT on the upper right corner. (You can also get to the output screen by clicking on OUTPUTS > NEW.)

2. Choose the Amazon Athena output by clicking on SELECT. This tutorial uses Amazon Athena, but you can write to any of Upsolver's supported data outputs.

3. Enter gaming as the name of the output. Leave everything as default and click on NEXT.

4. Expand payload on the left-hand side. Add a field to your output by clicking on the + sign next to it. The field we're adding is data.payload.eventName.

Data transformation

1. Rename the field by clicking on it under OUTPUT COLUMN NAME, and rename payload.eventname to event_name.

2. Transform data.clientEventTime from unix epoch to human-readable UTC format. Click on Add Calculated Field on the upper left.

3. Find the TO_DATE function then click on SELECT. Note that each function has usage examples displayed on the right.

4. Enter data.clientEventTime in the DATETIME box and enter event_time as the field's NAME.

5. Click on PREVIEW to make sure the date looks right, then click on SAVE.

6. We're going to do a simple aggregation: counting events for a given time. Click over to the SQL tab on the upper right-hand corner. Keep in mind that any change made in the SQL view is also reflected in the UI view, and vice versa.

7. Copy and paste two lines of SQL into the generated statement: the count(*) as event_count:DOUBLE projection and the GROUP BY clause (lines 8 and 10 of the statement below). Then click on PREVIEW to ensure the data looks correct.

SET partition_date = UNIX_EPOCH_TO_DATE(time);
SET event_time = UNIX_EPOCH_TO_DATE(data.clientEventTime);
// GENERATED @ 2020-11-16T01:32:29.510858Z
SELECT PARTITION_TIME(partition_date) AS partition_date:TIMESTAMP,
       time AS processing_time:TIMESTAMP,
       data.payload.eventName AS event_name:STRING,
       event_time AS event_time:TIMESTAMP,
       count(*) as event_count:DOUBLE
  FROM "upsolver-tutorials-mobile-users"
  GROUP BY event_time, partition_time(partition_date), time, data.payload.eventName

You may change the way data is partitioned by clicking on Manage Partitions > Partition by time. Select Partition Field > event_time and choose how you want the data to be partitioned.
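For readers more comfortable with standard SQL, the Upsolver statement above is roughly equivalent to the following sketch. This is illustrative only: the table name events is hypothetical, from_unixtime is Presto/Athena syntax rather than Upsolver's, and the division by 1000 assumes clientEventTime is in epoch milliseconds.

```sql
-- Rough Presto-style equivalent of the Upsolver transformation above;
-- Upsolver's SET assignments and ":TYPE" output annotations have no direct analog here.
SELECT from_unixtime(clientEventTime / 1000) AS event_time,
       eventName                             AS event_name,
       COUNT(*)                              AS event_count
  FROM events  -- hypothetical table standing in for the S3 data source
 GROUP BY from_unixtime(clientEventTime / 1000), eventName;
```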

Connect to Amazon Athena and start streaming

1. Click on RUN on the upper right corner. Create a new connection to Amazon Athena by choosing CONNECTION > Create your first Amazon Athena Connection.

2. Make sure you're logged in to your Amazon AWS account for the following steps.

Click on INTEGRATE and make sure you're logged in to your AWS account. For your privacy, Upsolver does not host your data.

3. Choose the Multi-Tenant Deployment option. Leave everything as default and click on CONTINUE.

4. If you haven't logged in to your AWS account, do so now. You will be brought to the CloudFormation stacks page for the integration. Check the acknowledgement box and click on Create stack.

The integration should take about 1 minute, after which you will see CREATE_COMPLETE.

5. Switch back to your Upsolver environment and click on DONE.

6. Click on DATA OUTPUT > gaming > RUN.

7. Select S3 STORAGE > S3 Default Output Storage and CONNECTION > Athena. Choose the DATABASE NAME > default where you want the table to reside, and give the table a name: TABLE NAME > gaming. Click on NEXT.

8. Use the slide bar to select the data from the last day. Keep ENDING AT as Never to ensure that data is continuously streamed to the table. Click on DEPLOY.

9. Upsolver starts outputting the data to Amazon Athena. Wait for the data to be written to the Amazon Athena table; this might take several minutes. You can keep track of progress under the PROGRESS tab and should be able to start querying your data within a few minutes.

Verify your data in Amazon Athena
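The verification query itself is not shown on this page. As a minimal sketch, assuming the output was deployed to the default database with the table name gaming (as configured above), you can run something like this in the Athena console:

```sql
-- Confirm rows are arriving and show the most frequent events
SELECT event_name,
       SUM(event_count) AS total_events
  FROM "default".gaming
 GROUP BY event_name
 ORDER BY total_events DESC
 LIMIT 10;
```

If the query returns rows, the end-to-end pipeline is working.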

Outputting to Amazon Athena requires Upsolver to integrate with your AWS account. If you don't have an environment available, you can try the MySQL output and write to your own local environment.

Upsolver has many deployment models. For this quick exercise, we deploy it as a fully managed service. You can also easily deploy Upsolver in your private VPC; for more information, click here. Make sure you have the right permissions; if you don't, you can click on SEND THIS PAGE to provide the integration information to your cloud administrators.

The free version offers limited processing power. If you want to output faster or process a larger amount of data, please contact Upsolver to increase your Upsolver Units.
