API content formats

This page describes how to configure the content formats Upsolver supports when working with data sources through the REST API. Each format is specified as a JSON object whose clazz field selects the format; any format-specific options listed below are added as sibling keys.
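
As a rough sketch of where these objects are used, the example below assumes the content-format object is passed as a contentType field in the body of a create-data-source request (see Create a data source). JSON does not allow comments, so note here that the surrounding field names are illustrative assumptions rather than the authoritative request schema.

{
	"displayData" : { "name" : "my-json-source" },
	"contentType" : {
		"clazz" : "JsonContentType"
	}
}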

Auto-Detect

Example

{
	"clazz" : "AutoDetectContentType"
}

Avro

Example

{
	"clazz" : "AvroContentType"
}

Parquet

Example

{
	"clazz" : "ParquetContentType"
}

ORC

Example

{
	"clazz" : "OrcContentType"
}

JSON

JSON data. Multiple JSON objects can be read from a single file or record by concatenating them, with optional whitespace in between.

Fields

| Field | Name | Type | Description | Optional |
| --- | --- | --- | --- | --- |
| nestedJsonPaths | Nested Json Paths | [][] | Paths to string fields that contain a "stringified" JSON that should be parsed into the schema. Each such path is represented as an array of the path parts. | + |
| nestedJsons | Nested Jsons | NestedPath[] | Paths to string fields that contain a "stringified" JSON that should be parsed into the schema. Each such path is represented as an array of the path parts. | + |
| splitRootArray | Split Root Array | Boolean | If the root object is an array, it can either be parsed as separate events, or as a single event which contains only an array. | + |
| keepOriginalNestedJsonString | Keep Original Nested Json String | Boolean | When using the nestedJsonPaths parameter, the original JSON string can optionally be kept in addition to the parsed value. If this is selected, the parsed data is stored in a record with the suffix _parsed. | + |
| storeJsonAsString | Store Json As String | Boolean | Whether to store the JSON in native format in a separate field. | + |

Example

{
	"clazz" : "JsonContentType"
}
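
The minimal example above sets only clazz. Below is a sketch with the optional fields from the table populated; the nested path ["data", "payload"] and the boolean values are illustrative choices, not required settings.

{
	"clazz" : "JsonContentType",
	"splitRootArray" : true,
	"keepOriginalNestedJsonString" : true,
	"storeJsonAsString" : false,
	"nestedJsonPaths" : [ [ "data", "payload" ] ]
}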

CSV

Fields

| Field | Name | Type | Description | Optional |
| --- | --- | --- | --- | --- |
| inferTypes | Infer Types | Boolean | Whether or not to infer types. If not selected, Upsolver will read all fields as strings. | |
| header | Header | String | If applicable, the header of the file. If you only add details for one column, additional columns will be labeled as overflow columns. | |
| delimiter | Delimiter | Char | The delimiter between columns of data. | + |
| nullValue | Null Value | String | If applicable, the default null value in the data. | + |
| nestedJsons | Nested Jsons | NestedPath[] | Paths to string fields that contain a "stringified" JSON that should be parsed into the schema. Each such path is represented as an array of the path parts. | + |
| keepOriginalNestedJsonString | Keep Original Nested Json String | Boolean | When using the nestedJsonPaths parameter, the original JSON string can optionally be kept in addition to the parsed value. If this is selected, the parsed data will be stored in a record with the suffix _parsed. | + |

Example

{
	"clazz" : "CsvContentType",
	"inferTypes" : true,
	"header" : "header1,header2,header2"
}
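
A sketch with the optional delimiter and nullValue fields also set; the semicolon delimiter and the literal NULL marker are illustrative values, not defaults.

{
	"clazz" : "CsvContentType",
	"inferTypes" : true,
	"header" : "id,name,created_at",
	"delimiter" : ";",
	"nullValue" : "NULL"
}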

TSV

Fields

| Field | Name | Type | Description | Optional |
| --- | --- | --- | --- | --- |
| inferTypes | Infer Types | Boolean | Whether or not to infer types. If not selected, Upsolver will read all fields as strings. | |
| header | Header | String | If applicable, the header of the file. If you only add details for one column, additional columns will be labeled as overflow columns. | |
| nestedJsons | Nested Jsons | NestedPath[] | Paths to string fields that contain a "stringified" JSON that should be parsed into the schema. Each such path is represented as an array of the path parts. | + |
| keepOriginalNestedJsonString | Keep Original Nested Json String | Boolean | When using the nestedJsonPaths parameter, the original JSON string can optionally be kept in addition to the parsed value. If this is selected, the parsed data will be stored in a record with the suffix _parsed. | + |

Example

{
	"clazz" : "TsvContentType",
	"inferTypes" : true,
	"header" : "header1,header2,header2"
}

x-www-form-urlencoded

Fields

| Field | Name | Type | Description | Optional |
| --- | --- | --- | --- | --- |
| inferTypes | Infer Types | Boolean | Whether or not to infer types. If not selected, Upsolver will read all fields as strings. | |

Example

{
	"clazz" : "WWWFormUrlEncodedType",
	"inferTypes" : true,
}

Protobuf

Fields

| Field | Name | Type | Description | Optional |
| --- | --- | --- | --- | --- |
| schemaFiles | Schema Files | SchemaFile[] | | |
| mainFile | Main File | String | The main file from the list of selected schema files. | |
| messageType | Message Type | String | The message type. | |
| bytesParsers | Bytes Parsers | BytesParser[] | BytesParsers can be used to define special parsing behavior for 'bytes' fields in the schema. | |
| bytesParsers.path | Path | String | The path to the field that should use this parser. | |
| bytesParsers.parserSchema | Parser Schema | String | The schema that the parser will use. In CSV and TSV formats, it should be a comma-delimited list of the field names in the CSV/TSV rows. In the Avro format, it should be the Avro schema used to decode the bytes of the inner field. | + |
| bytesParsers.schemaType | Schema Type | String | The type of parser to use. Supported types are JSON, CSV, TSV, or Avro. | + |

Example

{
	"clazz" : "ProtobufContentType",
	"schemaFiles" : "schemaFiles",
	"mainFile" : "mainFile",
	"messageType" : "messageType",
	"bytesParsers" : "bytesParsers"
}
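
The placeholder example above leaves schemaFiles, mainFile, and bytesParsers as opaque strings. The sketch below expands bytesParsers into an array of objects using the sub-fields listed in the table (path, parserSchema, schemaType); the exact nesting and the sample values are assumptions derived from those field descriptions, and the schema file entries are left as placeholders.

{
	"clazz" : "ProtobufContentType",
	"schemaFiles" : "schemaFiles",
	"mainFile" : "mainFile",
	"messageType" : "com.example.Event",
	"bytesParsers" : [ {
		"path" : "data.payload",
		"parserSchema" : "field1,field2,field3",
		"schemaType" : "CSV"
	} ]
}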

Avro-record

Individual Avro records without the framing or schema.

Fields

| Field | Name | Type | Description | Optional |
| --- | --- | --- | --- | --- |
| schema | Schema | String | The Avro schema used to decode the messages. Note that the behavior is undefined if the schema does not match the data. | |
| bytesParsers | Bytes Parsers | BytesParser[] | BytesParsers can be used to define special parsing behavior for 'bytes' fields in the schema. | + |
| bytesParsers.path | Path | String | The path to the field that should use this parser. | |
| bytesParsers.parserSchema | Parser Schema | String | The schema that the parser will use. In CSV and TSV formats, it should be a comma-delimited list of the field names in the CSV/TSV rows. In the Avro format, it should be the Avro schema used to decode the bytes of the inner field. | + |
| bytesParsers.schemaType | Schema Type | String | The type of parser to use. Supported types are JSON, CSV, TSV, or Avro. | + |

Example

{
	"clazz" : "AvroRecordContentType",
	"schema" : "{ \"type\": \"record\", \"name\": \"root\", \"fields\": [ {\"name\": \"value\", \"type\": \"string\" } ] }"
}

Avro w/ Schema Registry

Individual Avro records with schema provided by Schema Registry.

Fields

| Field | Name | Type | Description | Optional |
| --- | --- | --- | --- | --- |
| schemaRegistryUrl | Schema Registry Url | String | The URL used to fetch the schema, where {id} is replaced with the ID of the schema. | |
| schemaRegistryFormat | Schema Registry Format | SchemaRegistryFormat | The strategy used to encode the schema version into every message. | + |
| bytesParsers | Bytes Parsers | BytesParser[] | BytesParsers can be used to define special parsing behavior for 'bytes' fields in the schema. | |
| bytesParsers.path | Path | String | The path to the field that should use this parser. | |
| bytesParsers.parserSchema | Parser Schema | String | The schema that the parser will use. In CSV and TSV formats, it should be a comma-delimited list of the field names in the CSV/TSV rows. In the Avro format, it should be the Avro schema used to decode the bytes of the inner field. | + |
| bytesParsers.schemaType | Schema Type | String | The type of parser to use. Supported types are JSON, CSV, TSV, or Avro. | + |

Example

{
	"clazz" : "AvroSchemaRegistryContentType",
	"schemaRegistryUrl" : "schemaRegistryUrl"
}
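
In practice, schemaRegistryUrl points at the registry endpoint and keeps the {id} placeholder described in the table. The sketch below assumes a Confluent-style registry path; only the {id} placeholder comes from the field description, the host and path are illustrative.

{
	"clazz" : "AvroSchemaRegistryContentType",
	"schemaRegistryUrl" : "https://schema-registry.example.com/schemas/ids/{id}"
}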

XML

XML data. Multiple XML documents can be read from a single file or record by concatenating them, with optional whitespace in between.

Fields

| Field | Name | Type | Description | Optional |
| --- | --- | --- | --- | --- |
| storeRootAsString | Store Root As String | Boolean | Whether to store the XML root in native format in a separate field. | + |

Example

{
	"clazz" : "XmlContentType",
}
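
A sketch with the optional field from the table set; the value shown is illustrative.

{
	"clazz" : "XmlContentType",
	"storeRootAsString" : true
}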