Structural functions

This page goes over the structural functions in Upsolver.

FROM_KEY_VALUE

Maps a list of key-value pairs to a record with a field for each key.

Inputs

  • key

  • value

Properties

  • Keys

Example

Input

{
  "data": [
    { "key": "a", "value": 1 },
    { "key": "b", "value": 2 }
  ]
}

SQL

SET result = FROM_KEY_VALUE('a,b', data[].key, data[].value)

Result

{
  "result": {
    "a": 1,
    "b": 2
  }
}

GET_RANGE

Returns the range of numbers between first and last (inclusive).

Inputs

  • first

  • last
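
Example

A minimal sketch of a call with literal first and last values; the exact invocation and the shape of the result follow the conventions of the other functions on this page and are illustrative assumptions:

SQL

SET range = GET_RANGE(1, 4) -- illustrative: first = 1, last = 4

Result

{
  "range": [1, 2, 3, 4]
}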

ITEM_INDEX

Returns the index of the current item within its containing array.

Properties

  • Global Index - Use the global index in the event instead of the index in the containing array

  • Count Nulls
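
Example

A minimal usage sketch, assuming an input record with an array field named items; the field name and the exact invocation are illustrative assumptions:

Input

{
  "items": ["a", "b", "c"]
}

SQL

SET items[].index = ITEM_INDEX(items[]) -- items is a hypothetical array field

Result

Each element receives its zero-based position within the containing array: 0, 1, and 2.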

JSON_PATH

Extracts data from JSON objects.

Inputs

  • JSON - A string containing a JSON document

Properties

  • Path - A JSONPath expression

| JSON | Path | Result |
| --- | --- | --- |
| "[{ "name": "The Great Gatsby", "author": { "name": "F. Scott Fitzgerald" } }, { "name": "Nineteen Eighty-Four", "author": { "name": "George Orwell" } }]" | "$[*].author.name" | "F. Scott Fitzgerald", "George Orwell" |
| "{ "net_id": 41 }" | "net_id" | "41" |
| "{ "net_id": [41, 42] }" | "net_id" | "[41, 42]" |
| "{ "net_id": [41, 42] }" | "net_id[*]" | "41", "42" |
| "{ "net_id": [41, 42] }" | "net_id.parent" | null |
| "[1,2,3]" | "$[*]" | "1", "2", "3" |
| "[{ "name": "The Great Gatsby", "author": { "name": "F. Scott Fitzgerald" } }, { "name": "Nineteen Eighty-Four", "author": { "name": "George Orwell" } }]" | "$[*].author" | "{"name":"F. Scott Fitzgerald"}", "{"name":"George Orwell"}" |

JSON_TO_RECORD

Extracts data from JSON objects. This applies only to JSON documents that arrive from the data source; the function does not operate on, or include in its result, any calculated fields.

This function expects a string formatted according to standard JSON requirements.

It is recommended that you validate your string format before applying this function, as additional pre-processing may be required to get your data into the correct format.

See the data below for an example.

Properties

  • Mappings - JSON to field mappings

  • Output Array - If the string contains multiple JSON records, use this option to output all of them

| Value | Mappings | Output Array | Result |
| --- | --- | --- | --- |
| "{ "a": "Hello" }" | "a,a,string" | true | {"a": "Hello"} |
| "{ "a": { "value": "Hello" }, "b" : { "value": "World" } }" | "a.value,a.value,string b.value,b.value,string" | true | {"a.value": "Hello", "b.value": "World"} |

Example

Sample data
[{'packageid': 7, 
  'percent_savings_text': ' ', 
  'percent_savings': 0, 
  'option_text': 'Counter-Strike: Condition Zero - 8,19€', 
  'option_description': '', 
  'can_get_free_license': '0', 
  'is_free_license': False, 
  'price_in_cents_with_discount': 819}, 
 {'packageid': 574941, 
  'percent_savings_text': ' ', 
  'percent_savings': 0, 
  'option_text': 'Counter-Strike - Commercial License - 8,19€', 
  'option_description': '', 
  'can_get_free_license': '0', 
  'is_free_license': False, 
  'price_in_cents_with_discount': 819}]

Notice that this data is not in standard JSON format as it uses single quotes ' instead of double quotes ".

Additionally, the boolean values are formatted as True and False; however, this function only parses lowercase boolean values.

As a result, in order to correctly parse the original string, we first convert it to proper JSON format before using the JSON_TO_RECORD function to parse it.

SET partition_date = UNIX_EPOCH_TO_DATE(time);
SET clean_subs = TRIM(' [{''packageid'': 7, ''percent_savings_text'': '' '', ''percent_savings'': 0, ''option_text'': ''Counter-Strike: Condition Zero - 8,19€'', ''option_description'': '''', ''can_get_free_license'': ''0'', ''is_free_license'': False, ''price_in_cents_with_discount'': 819}, {''packageid'': 574941, ''percent_savings_text'': '' '', ''percent_savings'': 0, ''option_text'': ''Counter-Strike - Commercial License - 8,19€'', ''option_description'': '''', ''can_get_free_license'': ''0'', ''is_free_license'': False, ''price_in_cents_with_discount'': 819}]');
SET replacesubs = REGEXP_REPLACE(REGEXP_REPLACE(REPLACE('''', '"', clean_subs), ':\s*False', ':false'), ':\s*True', ':true');
SET jsonrecord = JSON_TO_RECORD('packageid,packageid,number
percent_savings_text, percent_savings_text,string
percent_savings,percent_savings,number
option_text,option_text,string
option_description,option_description,string
can_get_free_license,can_get_free_license,number
is_free_license,is_free_license,string
price_in_cents_with_discount, price_in_cents_with_discount, number',
 true,replacesubs);

SELECT jsonrecord[].packageid:number AS jsonrecord[].packageid:number,
       jsonrecord[].percent_savings_text:STRING AS jsonrecord[].percent_savings_text:STRING,
       jsonrecord[].percent_savings:number AS jsonrecord[].percent_savings:NUMBER,
       jsonrecord[].option_text:string AS jsonrecord[].option_text:STRING,
       jsonrecord[].option_description:string AS jsonrecord[].option_description:STRING,
       jsonrecord[].can_get_free_license:number AS jsonrecord[].can_get_free_license:number,
       jsonrecord[].is_free_license:string AS jsonrecord[].is_free_license:string
  FROM xyxz

If you have multiple records concatenated into one string as shown in the sample data, it is recommended to use Upsolver's JSON output format rather than the tabular output format. This allows the output to have an array column structure.

MAP_WITH_INDEX

Outputs an index and a value field: index contains a zero-based index, and value contains the corresponding value from the input field.

Inputs

  • value - The value to convert to a record

| value | result |
| --- | --- |
| "a", "b", "c" | {"index": 0, "value": "a"}, {"index": 1, "value": "b"}, {"index": 2, "value": "c"} |

The example below illustrates the usage of the function and also how to reference the array of values created as a result:

LET dlc_parsed = MAP_WITH_INDEX(SPLIT(TRIM_CHARS('[] ', dlc), ','))
  , dlc_parsed[].game_hier_key = MD5(STRING_FORMAT('{0}.{1}.{2}'
                                   , appid
                                   , dlc_parsed[].value, $event_date))

QUERY_STRING_TO_RECORD

Extracts data from a query string.

Properties

  • Mappings - Field names
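
Example

A minimal sketch, assuming a string field named query and the property-first calling convention used by the other functions on this page; the exact invocation is an assumption:

Input

{
  "query": "a=1&b=2"
}

SQL

SET record = QUERY_STRING_TO_RECORD('a,b', query) -- query is a hypothetical string field; 'a,b' names the output fields

Result

{
  "record": {
    "a": "1",
    "b": "2"
  }
}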

RECORD_TO_JSON

Converts the record containing the given field to a JSON string. This applies only to JSON documents that arrive from the data source; the function does not operate on, or include in its result, any calculated fields.

Note that the preview functionality only shows fields in the JSON that were output directly in other fields. However, when you run the output, it will include the full JSON.

Example

Input

{ "a": [{"b": 1, "c": 1}, {"b": 2, "c": 2}] }

SQL

SET jsons = RECORD_TO_JSON(a[].b);

Result

[{"b": 1, "c": 1}, {"b": 2, "c": 2}]

TO_ARRAY

Outputs the values from the inputs as an array.

| inputs | result |
| --- | --- |
| ["a", "c"], ["b"] | "a", "c", "b" |

ZIP

See the dedicated ZIP page for this function's reference.