Lookup table data output

This article provides an introduction to lookup tables, along with a guide on how to create a lookup table output using Upsolver.

What are lookup tables?

Lookup tables are a key-value store for faster lookups by key (e.g. use a lookup table to get all users who clicked on a specific ad in a specific timeframe).

Lookup tables are useful for:

  • Replacing ETL code and a serving database such as Redis or Cassandra.

  • Updating results in real time: when a lookup table is defined as real time, instead of waiting until the data is written to S3 and to disk, the event's details (the deltas) are updated directly in memory and only then stored in S3.

  • Enriching streams: by querying a lookup table, you can enrich one stream with data from another stream.

You can easily create a lookup table based on any of your existing data sources using one or more keys and one or more aggregations.

Aggregations are functions that group multiple events together to form a more significant result. An aggregation function can return a single value or a hash table. For example, MAX stores the maximum value of the selected stream data for each key value within the selected window period.

Unlike databases, Upsolver runs continuous queries rather than ad-hoc queries. Therefore, aggregation results are incrementally updated with every incoming event.
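To make this concrete, the ad-click example above might look roughly like the following when expressed as SQL. This is a minimal sketch only: the data source name and field names (ad_clicks, data.ad_id, data.user_id, data.country) are assumptions, the aggregation functions named here appear in this guide's Aggregation Functions reference but their exact signatures may differ, and the precise dialect is covered in the SQL syntax reference.

```sql
-- Minimal sketch of a lookup table output; ad_clicks and the data.* field
-- names are hypothetical, not taken from this article.
SELECT data.ad_id                AS ad_id,             -- lookup key
       COUNT(*)                  AS clicks,            -- single-value aggregation
       COLLECT_SET(data.user_id) AS users_clicked,     -- e.g. all users who clicked this ad
       COUNT_EACH(data.country)  AS clicks_by_country  -- EACH aggregation: returns a hash table of value -> count
FROM ad_clicks
GROUP BY data.ad_id
```

Because queries are continuous, each of these values is updated in place as events arrive rather than being recomputed on demand.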

You can also:

  • Apply filters to your lookup table.

    • These are equivalent to a SQL WHERE clause (see the sketch following this list).

  • Enrich your lookup table using calculated fields.

    • This enables you to transform your data using a variety of built-in formulas such as:

      • Running a regular expression.

      • Performing a mathematical operation.

      • Extracting structured information from your raw User-Agent data.

  • Add lookups to further enrich your lookup table.
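For illustration, here is how a filter and a calculated field might combine with an aggregation in SQL. Again, the source and field names are assumptions rather than names from this article.

```sql
-- Illustrative sketch; ad_clicks and the data.* fields are hypothetical.
SELECT data.campaign_id                    AS campaign_id,  -- lookup key
       SUM(data.bid_price * data.quantity) AS total_spend,  -- aggregation over a calculated value
       COUNT(*)                            AS clicks
FROM ad_clicks
WHERE data.event_type = 'click'  -- filter: the equivalent of a SQL WHERE clause
GROUP BY data.campaign_id
```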

Create a lookup table data output

1. Go to the Outputs page and click New.

2. Select Lookup Table as your output type.

3. Name your output and select your Data Sources, then click Next.

4. Click the information icon in the fields tree to view information about a field. The following is displayed:

  • How many of the events in this data source include this field, expressed as a percentage (e.g. 20.81%).

  • The density in the hierarchy (how many of the events in this branch of the data hierarchy include this field), expressed as a percentage.

  • How many unique values appear in this field.

  • The total number of values ingested for this field.

  • The first time this field included a value (for example, a year ago).

  • The last time this field included a value (for example, 2 minutes ago).

  • The percentage distribution of the field values. These distribution values can be exported by clicking Export.

  • A time-series graph of the total number of events that include the selected field.

  • The most recent data values for the selected field and columns. You can change the columns that appear by clicking Choose Columns.

5. Click the information icon next to a hierarchy element (such as the overall data) to review the following metrics:

  • The number of fields in the selected hierarchy.

  • The number of keys in the selected hierarchy.

  • The number of arrays in the selected hierarchy.

  • A stacked bar chart (by data type) of the number of fields versus the density/distinct values.

  • A list of the fields in the hierarchy element, including Type, Density, Top Values, Key, Distinct Values, Array, First Seen, and Last Seen.

6. Click the plus icon in the fields tree to add a field from the data source to your output. The Data Source Field is added to the Schema tab. If required, modify the Output Column Name.

You can toggle from UI to SQL at any point to view the corresponding SQL code for your selected output, and you can also edit your output directly in SQL. See: Transform with SQL. To review this output's properties, click Properties. See: Output properties.

7. Add any required calculated fields and review them in the Calculated Fields tab. See: Adding Calculated Fields.

8. Add any required lookups (from data sources, from lookup tables, or from reference data) and review them under the Calculated Fields tab.

9. In the Filters tab, add a filter (the equivalent of WHERE in SQL) to the data source. See: Adding Filters.

10. Click Make Aggregated to turn the output into an aggregated output. Read the warning before clicking OK, then add the required aggregation; the aggregation field is then added to the Schema tab. See: Aggregation Functions.

11. In the Aggregation Calculated Fields area under the Calculated Fields tab, add any required calculated fields on aggregations. See: Functions, Aggregation Functions.

Click Preview at any time to view a preview of your current output.

12. Click Run and fill out the following fields (see: Running an Output):

  • Query Cluster: the query cluster that serves this lookup table's queries (see the warning below).

  • Cloud Storage: where Upsolver stores the intermediate bulk files before loading.

  • Retention: a retention period for the data in Upsolver; after this amount of time elapses, the data is deleted forever.

The default query cluster runs internal operations but does not allow external querying from the API server; only a sample of the data is loaded, and if the data exceeds 16 MB, some keys will not be returned. To query the full data, create a query cluster and connect it to the lookup table. See: How to create a query cluster.

13. Select the compute cluster to run the calculation on. Alternatively, click the drop-down and create a new compute cluster.

14. Click Next and set the range of data to process. The range can start from the beginning of the data source, from now, or from a custom date and time; it can never end, end now, or end at a custom date and time.

15. Finally, click Deploy to run the output. It shows as Running in the output panel; from this point it is live in production and consumes compute resources.

Note: You can edit the contents of a new lookup table only if it has not yet run. Once a lookup table has run, it cannot be edited; instead, duplicate the lookup table and edit the copy. See: Duplicating an output.

