Upsolver Quickstart in 5 minutes

This guide provides a quick tour of Upsolver for first-time users.


Welcome to Upsolver!

When you first log into Upsolver's free Community Edition, you will see a link to this guide. It provides you with a quick tour of Upsolver.

About this guide

The sample environment provides you with a pre-created Data Source that continuously parses data from an Amazon S3 bucket. Upsolver transforms the data, and users can query the transformed data with SQL.

The free Community Edition offers limited compute. Contact Upsolver for more compute resources.

Upsolver Quickstart

Welcome to Upsolver! After signing up and logging into the Community Edition, you will see the Quickstart's welcome screen. Choose Sandbox for this Quickstart.

Create a Data Source

1. Welcome screen: start your Upsolver journey by creating a Data Source. We have set up sample streaming data in an Amazon S3 bucket, with new files constantly being written to the bucket. Note: Upsolver provides many built-in Data Sources. Click Create Data Source to connect Upsolver to your data source.

2. Define data source bucket: select the Amazon S3 bucket where your data is located. Leave this option as the default value of upsolver-tutorials-orders. Note: Upsolver supports all data formats. Click on NEXT.

3. Define data source format: Upsolver provides many options for parsing your data; the Quickstart exposes a subset of them. Leave all values as default: GLOBAL FILE PATTERN is set to *, meaning Upsolver will parse everything in the S3 bucket. DATE FORMAT is yyyy/MM/dd/HH; this is how objects are organized into folders. For example: s3://upsolver-tutorials-orders/2021/01/16/15/<file_name>

You may see a sample of the files from the defined bucket displayed on the right side of the screen. Optionally, you can choose the time you want to start ingesting from. We will leave all values as default and click on NEXT.

4. Preview sample data: Upsolver will display a sample of the data being parsed. You may click on each individual event to see a formatted record. Click on CREATE.

5. You have successfully created your first data source! You will find a list of parsed fields on the left, along with data demographics and statistics. Click on each field to view the field's statistics.

Create a queryable data output

1. Start creating the Upsolver Query Output: now that we have a data source defined, click on NEW TABLE OUTPUT in the upper right-hand corner to start transforming your data and outputting it to an Upsolver Query Output.

2. Define the Queryable Output: give the data output a NAME and define the DATA SOURCE(s), where the data comes from. We will use the data source created in the previous section, upsolver-tutorials-orders. For this Quickstart, we will write to a new table. Leave all values as default and click on NEXT.

Add fields to your output

1. Add the following fields to your output by clicking on the + sign next to each field. These fields were parsed automatically when the Data Source was created. Leave data.netTotal and data.salesTax as DOUBLE when you map these fields to the output.

data.buyerEmail
data.orderId
data.netTotal (DOUBLE)
data.salesTax (DOUBLE)

This step also allows you to change the name and data type of your fields when you output to your target system.

2. Click on Add Calculated Field to perform a simple data transformation.

Perform simple data transformations

Upsolver offers 200+ built-in transformation functions. You may use the UI or SQL to transform your data; changes are automatically synced between the two interfaces. Let's start by transforming data.orderDate to a TIMESTAMP format.

1. Transform data.orderDate from a STRING to TIMESTAMP.

  • Locate the TO_DATE function and click on SELECT.

  • Under DATETIME, choose the data.orderDate field and set its NAME to order_date.

  • Click on SAVE. Notice that the calculated field is automatically added to your listed output fields as data.order_date with the TIMESTAMP data type; the SQL excerpt below shows its equivalent.
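As a point of reference, the calculated field you just created in the UI corresponds to a SET statement in the SQL view. The line below is taken from the full statement shown later in this step:

SET order_date = TO_DATE('data.orderDate');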

2. We can use the SQL UI to add a simple calculation directly in SQL instead of using the UI. Click over to the SQL tab. Note: changes in the UI are automatically translated into the SQL statement, and changes in the SQL statement are automatically reflected in the UI.

3. This step uses the SQL UI to directly calculate the total for each order. Add the following SQL to your pre-generated SQL statement: data.netTotal + data.salesTax as order_total, on line 10, and WHERE data.eventType = 'ORDER' at the end of the statement (note that this calculation can also be easily performed in the UI instead of SQL). The SQL will look like the following after adding the calculated field and filter.

Below is the full SQL statement for your reference.

SET partition_date = UNIX_EPOCH_TO_DATE(time);
SET order_date = TO_DATE('data.orderDate');
// GENERATED @ 2021-01-25T18:28:15.707009Z
SELECT PARTITION_TIME(partition_date) AS partition_date:TIMESTAMP,
       time AS processing_time:TIMESTAMP,
       data.buyerEmail AS buyeremail:STRING,
       data.orderId AS orderid:STRING,
       data.netTotal AS nettotal:DOUBLE,
       data.salesTax AS salestax:DOUBLE,
       data.netTotal + data.salesTax as order_total, //add this line
       order_date AS order_date:TIMESTAMP
  FROM "upsolver-tutorials-orders"  
  WHERE data.eventType = 'ORDER' //add this line

4. Click on PREVIEW to make sure the data is as expected.

5. Click back to your UI view. Note: everything you've changed in SQL is automatically reflected in the UI. We're only scratching the surface of Upsolver's data processing capabilities.
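For instance, if you wanted a true aggregation rather than the per-event calculation above, a sketch along these lines (hypothetical, not part of this Quickstart; field names come from the same data source) would sum totals per buyer:

SELECT data.buyerEmail AS buyeremail:STRING,
       SUM(data.netTotal) AS total_net:DOUBLE,   // sum of net totals per buyer
       SUM(data.salesTax) AS total_tax:DOUBLE    // sum of sales tax per buyer
  FROM "upsolver-tutorials-orders"
  WHERE data.eventType = 'ORDER'
  GROUP BY data.buyerEmail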

Output processed data to a table

1. Click on RUN in the upper right corner. Provide a TABLE NAME and leave everything else at its default values. Click on NEXT.

2. Leave all the values as default. Note: you may use the slider to choose the time window you want to output your data from. Optionally, you can leave ENDING AT as Never to continuously stream new data into your Open Lake table. Click on DEPLOY.

3. Click on the PROGRESS tab to monitor the Data Output status. The output will take about 1-2 minutes to catch up to the current state. Wait for OUTPUT PROGRESS to start turning green.

4. After the data is caught up under PROGRESS, click on ERRORS to make sure everything completed successfully.

Explore transformed data in Open Lake worksheets

1. Click the CREATE WORKSHEET button in the upper right-hand corner to start exploring the data that you've transformed and written to a table.

2. Expand the upsolver catalog and choose the sample_data schema. Click on the <table name> you've written to (from step 1 of the previous section). You will see a sample of your transformed data!
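For example, a query along the following lines returns a handful of transformed rows (a sketch: substitute the table name you chose, and note that the fully qualified name is an assumption based on the catalog and schema above):

SELECT buyeremail,
       orderid,
       order_total,
       order_date
  FROM upsolver.sample_data.<table name>
  LIMIT 10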

Integrate with your cloud account

You have now explored Upsolver by using sample data. You may integrate with your own cloud environment and start transforming your own data!

Run CloudFormation stack to integrate

1. Navigate to INTEGRATE NOW from your Sandbox environment.

2. Choose the cloud provider of your preference. In this example, we're going to use AWS.

Make sure you're already logged into your AWS account. If applicable, disable the popup blocker.

3. You will arrive at the integration page. Leave everything as default and click on CONTINUE.

4. Scroll down to your Create Stack page. Check the I acknowledge box and click on Create stack.

5. The stack creation process might take a minute or two. Keep refreshing until you see CREATE_COMPLETE.

6. Navigate back to Upsolver. Now you can start creating data sources and transforming your own data!

Upsolver offers much more than this Quickstart. Contact Upsolver for a demo or a free POC with more compute power.

Congratulations! You have taken a quick tour of Upsolver. 🎉 Contact us to start your Upsolver journey. Happy Upsolving!