Work with Arrays

Upsolver supports arrays natively: data transformations are defined directly on fields, so it is important to understand how arrays are handled during transformation. Understanding the following concepts will allow you to control the output structure precisely.

Transformations on array fields

This section explains what happens in Upsolver when a calculation is performed on inputs that are arrays. In Upsolver, transformations are defined on field names, and the values of those fields are passed into the calculation at runtime.

Example 1: calculation without an array

The following simple calculation is defined in Upsolver:

data.value1 + data.value2

This is the simplest possible calculation, and it doesn't involve arrays; it serves as our starting point.

If the input looks like this:

{"value1": 1, "value2": 2}

Upsolver will calculate: 1 + 2 = 3

Example 2: calculation with an array

The input data:

{"value1": [1,2], "value2": 2}

The calculation definition in Upsolver:

data.value1[] + data.value2

The calculation will look like this: [1,2] + 2

This will result in Upsolver performing the add operation between each value in the first input and the value 2 from the second input.

The result of the calculation:

[1+2, 2+2] = [3,4]

Example 3: calculation with two arrays

The input data:

{"value1": [1,2], "value2": [20,30]}

In this case the following calculation is performed:

[1,2] + [20,30]

The two arrays need to be added together. In this case, Upsolver will perform a Cartesian Product on the two arrays and then perform the sum operation on each resulting pair. Since the Cartesian Product is:

[1,2] X [20,30] = [(1,20),(1,30),(2,20),(2,30)]

We will get the following result:

[1+20, 1+30, 2+20, 2+30] = [21,31,22,32]

As you can see, we ended up with 2 X 2 = 4 elements; in general, a Cartesian Product calculation results in length(value1[]) * length(value2[]) elements.

Usually Cartesian Product calculations are undesirable and we will see how to avoid them in the next section.
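To make these semantics concrete, here is a minimal Python sketch that models the array behavior described above (illustrative only, not Upsolver code):

from itertools import product

def add(value1, value2):
    # Model of '+' on possibly-array inputs: treat scalars as
    # single-element arrays, then combine every pair of values.
    v1 = value1 if isinstance(value1, list) else [value1]
    v2 = value2 if isinstance(value2, list) else [value2]
    result = [a + b for a, b in product(v1, v2)]
    # A scalar-scalar calculation collapses back to a scalar.
    return result[0] if len(result) == 1 else result

print(add(1, 2))              # 3
print(add([1, 2], 2))         # [3, 4]
print(add([1, 2], [20, 30]))  # [21, 31, 22, 32]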

Array Context: Define Transformations

We will be using a Hierarchical Data Output to demonstrate how Upsolver handles transformations. With Hierarchical Outputs, objects are written without structural changes, which makes it easy to see what's happening internally. Later on, we will discuss flattening this data into tabular formats.

Example 1

Assume we have the following input event:

{
    "salaries": [{
        "employee": "Jhon",
        "baseRate": 100000,
        "bonus": 23000
    },
    {
        "employee": "Jacob",
        "baseRate": 78000,
        "bonus": 12000
    }
    ]
}

In this sample data, we have an array of salaries, where each record includes a base rate and a bonus value. We want to calculate the final amount to be paid to every employee, producing an output that looks like this:

{
    "toPay": [
    {
        "employee": "Jhon",
        "salary": 123000
    },
    {
        "employee": "Jacob",
        "salary": 100000
    }
    ]
}

Note: we will be using SQL syntax to demonstrate the transformations; the same applies to transformations defined using UI Mode.

In our Hierarchical Data Output we will define the following select:

SET salary = data.salaries[].baseRate + data.salaries[].bonus;
SELECT data.salaries[].employee as toPay.employee,
       salary[] as toPay.salary 
    FROM my_data_source

If you preview the result of the above query, you will get something other than the desired result:

{
    "toPay": {
        "employee": [
            "Jhon",
            "Jacob"
        ],
        "salary": [
            123000,
            112000,
            101000,
            90000
        ]
    }
}

As you can see, we ended up with two independent arrays: one for the employees and one for the salaries. Moreover, the salaries array has 4 items instead of the expected 2.

We can see some clues about what is happening if we look at the query more carefully. The first thing to notice is that when we select the salary field, we are selecting it as an array: salary[]. The same is visible when looking at the calculated field in the field list.

The reason this calculated field is an array is that it took 2 fields from within an array as inputs, but we chose to write the result outside the context of the input array: we wrote it to a field named salary in the root of our object.

In layman's terms, the context of a field is the field's location within the nested structure.

Essentially, what we have done is take our input event and add the calculation like this (note that the field is placed outside of the array):

{
    "salaries": [{
        "employee": "Jhon",
        "baseRate": 100000,
        "bonus": 23000
    },
    {
        "employee": "Jacob",
        "baseRate": 78000,
        "bonus": 12000
    }
    ],
     "salary": salaries[].baseRate + salaries[].bonus
}

Since the calculated field is placed outside of the array, it has no local context when performing the calculation. Upsolver therefore takes all of the available values and passes them to the SUM function, so we end up with the result of the following calculation:

[100000, 78000] + [23000, 12000]

As we mentioned at the start of this guide, this leads to the calculation being performed on pairs of values that are the result of the Cartesian Product:

[100000, 78000] X [23000, 12000]

So we will get:

[100000 + 23000, 100000 + 12000, 78000 + 23000, 78000 + 12000] =

[123000, 112000, 101000, 90000]

So we got an array back from our calculation, which explains why salary is an array field. Worse, we lost the context of the calculation: we don't know which employee belongs to which salary.

Solution

Now let's see how to fix this. The solution is simple: all we need to do is change the context of the calculation. We will solve it with two small changes, shown one step at a time so you can see why both are needed.

Step 1: change where we save the calculated field:

SET data.salaries[].salary = data.salaries[].baseRate + data.salaries[].bonus;
SELECT data.salaries[].employee as toPay.employee,
       data.salaries[].salary as toPay.salary 
    FROM salaries

In this case, we changed the target field name from salary to data.salaries[].salary. If you preview this, you will see we have made some progress, but we haven't completed the calculation:

{
    "toPay": {
        "employee": [
            "Jhon",
            "Jacob"
        ],
        "salary": [
            123000,
            90000
        ]
    }
}
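Conceptually, moving the target field into data.salaries[] makes the calculation run once per element of that array, pairing each baseRate only with the bonus from the same record. A minimal Python sketch of this per-element behavior (illustrative only, not Upsolver code):

salaries = [
    {"employee": "John", "baseRate": 100000, "bonus": 23000},
    {"employee": "Jacob", "baseRate": 78000, "bonus": 12000},
]

# The calculated field lives inside the array, so the calculation is
# evaluated per record rather than across all records.
for record in salaries:
    record["salary"] = record["baseRate"] + record["bonus"]

print([record["salary"] for record in salaries])  # [123000, 90000]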

Step 2: add context to calculated values:

We have solved part of our problem: we now have only two salaries, and they are the correct values! However, we still don't know which salary belongs to which employee. This is because we sent our results to the fields toPay.employee and toPay.salary, so fields from within a single array, salaries[], were sent to fields that are not within the same array record. We can solve this by placing the resulting fields within a shared array:

SET data.salaries[].salary = data.salaries[].baseRate + data.salaries[].bonus;
SELECT data.salaries[].employee as toPay[].employee,
       data.salaries[].salary as toPay[].salary 
    FROM salaries

We have changed the result fields so that they are written within a shared array. The result is as expected:

{
    "toPay": [
        {
            "employee": "Jhon",
            "salary": 123000
        },
        {
            "employee": "Jacob",
            "salary": 90000
        }
    ]
}

Using the ZIP function

In some cases the data can arrive in a slightly different format:

{
    "employee": ["Jhon", "Jacob"],
    "baseRate": [100000, 78000],
    "bonus": [23000, 12000]
}

In this case, we want the first element of each array to be related to the first element of every other array, and so on.

Unlike in the example above, we can't achieve the desired calculated salary simply by placing the calculation in the correct context, since there is no shared context array. However, we can create the required context by zipping the arrays together using the ZIP function:

SET salaries = ZIP('employee,baseRate,bonus', 
               data.employee[], 
               data.baseRate[], 
               data.bonus[]);

The ZIP function takes an optional comma-separated list of field names, followed by a list of array inputs. If the field names aren't provided, they default to field1, field2, and so on. ZIP stitches the arrays together element by element: the first element with the first element, the second with the second, and so on.

This gives us access to a calculated field with the exact same structure as in the first example:


{
    "salaries": [
        {
            "employee": "Jhon",
            "baseRate": 100000,
            "bonus": 23000
        },
        {
        	"employee": "Jacob",
            "baseRate": 78000,
            "bonus": 12000
        }
    ]
}
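This element-wise stitching is essentially what Python's built-in zip does, as the following sketch shows (illustrative only, not Upsolver code):

employee = ["John", "Jacob"]
base_rate = [100000, 78000]
bonus = [23000, 12000]

# Pair the i-th element of every input array into a single record,
# keyed by the supplied field names.
field_names = ["employee", "baseRate", "bonus"]
salaries = [dict(zip(field_names, values))
            for values in zip(employee, base_rate, bonus)]

print(salaries)
# [{'employee': 'John', 'baseRate': 100000, 'bonus': 23000},
#  {'employee': 'Jacob', 'baseRate': 78000, 'bonus': 12000}]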

You may use the same output query that was used initially to define the output:

SET salaries = ZIP('employee,baseRate,bonus', data.employee[], data.baseRate[], data.bonus[]);
SET salaries[].salary = salaries[].baseRate + salaries[].bonus;
SELECT 
       salaries[].employee AS toPay[].employee,
       salaries[].salary AS toPay[].salary
  FROM "salaries"  

Flattening Data

Now that we understand how transformations on arrays work and how the context and position of fields matter, we can discuss flattening arrays. Flattening is useful when we want to convert nested objects containing arrays into flat tables. Outputs that send data to tabular systems use flattening by default.

When using a tabular output, if you select array fields, Upsolver will flatten your data by the array. For example, if we have the following event:

{
    "values": [1,2,3],
    "name": "Oleg",
    "id": 123
}

And the following tabular output defined in Upsolver:

SELECT data.values[] as value,
       data.name as name,
       data.id as id
   FROM my_data_source

We will get the resulting rows:

value | name | id
------+------+----
1     | Oleg | 123
2     | Oleg | 123
3     | Oleg | 123

The result shows three rows from one source event. This is because the data was flattened based on the values[] array that contained three values.
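The flattening behaves like a per-element expansion of the source event into rows, as this minimal Python sketch shows (illustrative only, not Upsolver code):

event = {"values": [1, 2, 3], "name": "Oleg", "id": 123}

# Each element of the selected array produces one output row;
# scalar fields are repeated on every row.
rows = [{"value": v, "name": event["name"], "id": event["id"]}
        for v in event["values"]]

for row in rows:
    print(row)
# {'value': 1, 'name': 'Oleg', 'id': 123}
# {'value': 2, 'name': 'Oleg', 'id': 123}
# {'value': 3, 'name': 'Oleg', 'id': 123}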

How do you flatten multiple arrays?

Example:

{
    "values": ["apple", "NY"],
    "type": ["fruit", "city"]
}

In this case, if we simply SELECT the arrays:

SELECT data.values[] as value,
       data.type[] as type
   FROM my_data_source

It will result in a Cartesian Product:

value | type
------+------
apple | fruit
NY    | fruit
apple | city
NY    | city

If independent arrays are selected without a shared context, you will end up with a Cartesian Product in the output. To avoid this, use the ZIP function to combine the arrays into a single context:

SET zipped = ZIP('type,value', data.type[], data.values[]);
SELECT zipped[].value as value,
       zipped[].type as type
   FROM my_data_source

The result is as expected:

value | type
------+------
apple | fruit
NY    | city
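The difference between the two queries comes down to how row combinations are generated, as sketched below (illustrative only, not Upsolver code):

from itertools import product

values = ["apple", "NY"]
types = ["fruit", "city"]

# Independent arrays: every combination becomes a row
# (the same four pairs as the Cartesian Product table above).
print(list(product(values, types)))
# [('apple', 'fruit'), ('apple', 'city'), ('NY', 'fruit'), ('NY', 'city')]

# Zipping first gives the arrays a shared context: one row per index.
print(list(zip(values, types)))
# [('apple', 'fruit'), ('NY', 'city')]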

Try it yourself

Given this input data:

{
    "cars": [
        {
            "units": ["mph", "mpg"],
            "data": [
                {"vehicle": "Ford Mustang", "values": [180, 18]},
                {"vehicle": "Chevrolet Corvette", "values": [212, 19]}
            ]
        },
        {
            "units": ["kph", "kmpl"],
            "data": [
                {"vehicle": "Porsche 911", "values": [320, 8]},
                {"vehicle": "BMW Z4", "values": [249, 12]}
            ]
        }
    ]
}

Try to produce the following tabular output:

unit | value | vehicle
-----+-------+-------------------
mph  | 180   | Ford Mustang
mpg  | 18    | Ford Mustang
mph  | 212   | Chevrolet Corvette
mpg  | 19    | Chevrolet Corvette
kph  | 320   | Porsche 911
kmpl | 8     | Porsche 911
kph  | 249   | BMW Z4
kmpl | 12    | BMW Z4
