Lookup table data output
This article introduces lookup tables and provides a guide to creating a lookup table output using Upsolver.
Lookup tables are useful for joining data between streams.
An Upsolver lookup table replaces ETL code and a serving DB like Redis or Cassandra.
When a lookup table is defined as real time, the event’s details (the delta) are updated directly in memory rather than waiting for the data to be written to S3 and to disk; the data is then stored in S3.
By querying a lookup table, it is possible to enrich one stream with data from another stream.
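Conceptually, a lookup table behaves like a continuously updated key-value store that one stream writes to and another reads from. The following is a minimal Python sketch of that idea, using a plain dictionary as a stand-in for the lookup table; the event shapes and field names (user_id, country) are illustrative assumptions, not Upsolver's API.

```python
# Sketch of stream enrichment via a key-value lookup. A plain dict
# stands in for the lookup table; in Upsolver this state is maintained
# for you and kept continuously up to date.

lookup = {}  # key -> latest known attributes for that key

def update_lookup(event):
    """Consume the reference stream (e.g. user-profile updates)."""
    lookup[event["user_id"]] = {"country": event["country"]}

def enrich(event):
    """Enrich the main stream by joining on user_id."""
    return {**event, **lookup.get(event["user_id"], {})}

update_lookup({"user_id": "u1", "country": "DE"})
print(enrich({"user_id": "u1", "action": "click"}))
# {'user_id': 'u1', 'action': 'click', 'country': 'DE'}
```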
You can easily create a lookup table based on any of your existing data sources using one or more keys and one or more aggregations.
Aggregations are functions that group multiple events together to form a more significant result. An aggregation function can return a single value or a hash table. For example, MAX stores the maximum value of the selected stream data for each key value over the selected window period.
Unlike databases, Upsolver runs continuous queries and not ad-hoc queries. Therefore, aggregation results are incrementally updated with every incoming event.
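To make the incremental model concrete, here is a small Python sketch of a per-key MAX aggregation that updates its stored result with every incoming event instead of re-scanning the data on each query; the keys and values are illustrative.

```python
# Sketch of an incrementally maintained per-key MAX aggregation.
# Each event updates the stored result directly, so the answer is
# always current without re-running a query over historical data.

max_by_key = {}  # key value -> running maximum

def on_event(key, value):
    current = max_by_key.get(key)
    if current is None or value > current:
        max_by_key[key] = value

for key, value in [("a", 3), ("b", 7), ("a", 9), ("a", 2)]:
    on_event(key, value)

print(max_by_key)  # {'a': 9, 'b': 7}
```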
You can also:
- Apply filters to your lookup table. These are equivalent to SQL WHERE clauses.
- Enrich your lookup table using calculated fields. These enable you to transform your data using a variety of built-in formulas, such as running a regular expression, performing a mathematical operation, or extracting structured information from your raw User-Agent data (see the sketch after this list).
- Add lookups to further enrich your lookup table.
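As a rough illustration of the filter and calculated-field ideas above, here is a minimal Python sketch; the regular expression, field names (user_agent, quantity, unit_price), and arithmetic are illustrative assumptions, not Upsolver's built-in formulas.

```python
import re

# Sketch of a filter (analogous to a SQL WHERE clause) plus a
# calculated field (a value computed from other fields).

def keep(event):
    """Filter: keep only events whose User-Agent looks mobile."""
    return re.search(r"Android|iPhone", event.get("user_agent", "")) is not None

def add_calculated_field(event):
    """Calculated field: total price via a mathematical operation."""
    event["total"] = event["quantity"] * event["unit_price"]
    return event

events = [
    {"user_agent": "Mozilla/5.0 (iPhone)", "quantity": 2, "unit_price": 4.5},
    {"user_agent": "curl/8.0", "quantity": 1, "unit_price": 9.0},
]
print([add_calculated_field(e) for e in events if keep(e)])
# [{'user_agent': 'Mozilla/5.0 (iPhone)', 'quantity': 2, 'unit_price': 4.5, 'total': 9.0}]
```

To create a lookup table output: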
1. Go to the Outputs page and click New.
2. Select Lookup Table as your output type.
3. Name your output and select your Data Sources, then click Next.
4. Click the information icon in the fields tree to view information about a field. The following will be displayed:
| Metric | Description |
| --- | --- |
| Density in Events | How many of the events in this data source include this field, expressed as a percentage (e.g. 20.81%). |
| Density in Data | How many of the events in this branch of the data hierarchy include this field, expressed as a percentage. |
| Distinct Values | How many unique values appear in this field. |
| Total Values | The total number of values ingested for this field. |
| First Seen | The first time this field included a value, for example, a year ago. |
| Last Seen | The last time this field included a value, for example, 2 minutes ago. |
| Field Content Samples Over Time | A time-series graph of the total number of events that include the selected field. |

Also displayed are the percentage distribution of the field values, which can be exported by clicking Export, and the most recent data values for the selected field and columns; you can change the columns that appear by clicking Choose Columns.
5. Click the information icon next to a hierarchy element (such as the overall data) to review the following metrics:
| Metric | Description |
| --- | --- |
| # of Fields | The number of fields in the selected hierarchy. |
| # of Keys | The number of keys in the selected hierarchy. |
| # of Arrays | The number of arrays in the selected hierarchy. |

Also displayed are a stacked bar chart, grouped by data type, of the number of fields versus their density or distinct values (or simply the number of fields by data type), and a list of the fields in the hierarchy element, including Type, Density, Top Values, Key, Distinct Values, Array, First Seen, and Last Seen.
6. Click the plus icon in the fields tree to add a field from the data source to your output. The Data Source Field will be added to the Schema tab. If required, modify the Output Column Name.
7. Add any required lookups and review them under the Calculated Fields tab.
8. Click Make Aggregated to turn the output into an aggregated output. Read the warning, click OK, and then add the required aggregation. The aggregation field will be added to the Schema tab. See: Aggregation Functions.
9. Click Run and fill out the following fields:
- Query Cluster: The compute cluster that serves queries against this lookup table
- Cloud Storage: Where Upsolver stores the intermediate bulk files before loading
- Retention: A retention period for the data in Upsolver; after this amount of time elapses, the data is permanently deleted
10. Click Next and complete the following:
- Processing Time Range: The range of data to process. It can start from the beginning of the data source, from now, or from a custom date and time, and it can end never, now, or at a custom date and time.
11. Finally, click Deploy to run the output. It will show as Running in the output panel; it is now live in production and consuming compute resources.