Field Name Encoding

Learn how Upsolver handles field name encoding in Avro and Parquet files.

Upsolver ensures compatibility with target systems that impose restrictions on field names, such as those based on Avro. This document outlines how Upsolver encodes field names in Avro and Parquet files, ensuring that every field name is properly represented.

Avro Field Name Restrictions

Avro imposes specific restrictions on field names, which must:

  • Start with characters [A-Za-z_]

  • Subsequently contain only [A-Za-z0-9_]
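These two rules correspond to the pattern `[A-Za-z_][A-Za-z0-9_]*`. A minimal validity check (an illustrative helper, not part of Upsolver's API) can be written as:

```python
import re

# First character: letter or underscore; remaining characters: letters, digits, underscore.
AVRO_NAME = re.compile(r"[A-Za-z_][A-Za-z0-9_]*")

def is_valid_avro_name(name: str) -> bool:
    """Return True if `name` satisfies Avro's field-name restrictions."""
    return bool(AVRO_NAME.fullmatch(name))

# is_valid_avro_name("user_id")  -> True
# is_valid_avro_name("_private") -> True
# is_valid_avro_name("a.b")      -> False  (contains '.')
# is_valid_avro_name("1col")     -> False  (starts with a digit)
```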

Upsolver's Encoding Strategy

To support any Unicode character in field names, Upsolver encodes these names. In addition, for targets with case-insensitive column names, such as Amazon Athena, Upsolver also encodes uppercase characters.

Users generally do not see these encoded names in the target system, but may encounter them in intermediate files or when inspecting the underlying source files directly.

Only fields containing unsupported characters are encoded.

Encoding Pattern

An encoded field name starts with x_, and every encoded character is represented as _&lt;char-code&gt;_, where &lt;char-code&gt; is the character's hexadecimal code point (for example, . becomes _2e_ and _ becomes _5f_).

Example

Consider the following original data:

{
    "l": 1,
    "b": true,
    "s": "str",
    "d": 3.2,
    "a.b": "a.b",
    "C.d": "C.d",
    "x_": "x_",
    "x_a": "x_a"
}

When converted to a Parquet file for use in Athena (which is case-insensitive), it will be encoded as follows:

{
  "l": 1,
  "b": true,
  "s": "str",
  "d": 3.2,
  "x_a_2e_b": "a.b",
  "x_c_2e_d": "C.d",
  "x_x_5f_": "x_",
  "x_x_5f_a": "x_a"
}

This encoding ensures that the field names comply with the restrictions while preserving the original field names within the data content, thereby maintaining consistency and compatibility across different systems. In Athena, the table/view definition will name the columns by their original unencoded name.
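The pattern and example above can be sketched in Python. This is an illustrative reconstruction, not Upsolver's implementation: the exact trigger conditions (encoding names that start with a literal x_, escaping underscores inside encoded names, and lowercasing uppercase characters for case-insensitive targets) are inferred from the example output.

```python
def encode_field_name(name: str, case_insensitive_target: bool = True) -> str:
    """Sketch of the field-name encoding described above (assumed behavior)."""

    def allowed(c: str, first: bool) -> bool:
        # Avro rules: first char in [A-Za-z_], the rest in [A-Za-z0-9_].
        if first:
            return c.isascii() and (c.isalpha() or c == "_")
        return c.isascii() and (c.isalnum() or c == "_")

    needs_encoding = (
        name.startswith("x_")  # a literal "x_" prefix would collide with encoded names
        or not all(allowed(c, i == 0) for i, c in enumerate(name))
        or (case_insensitive_target and any(c.isupper() for c in name))
    )
    if not needs_encoding:
        return name  # only fields containing unsupported characters are encoded

    parts = ["x_"]
    for c in name:
        if case_insensitive_target and c.isupper():
            c = c.lower()  # assumption: uppercase is folded rather than char-coded
        if allowed(c, False) and c != "_":
            parts.append(c)
        else:
            # Encode the character as _<hex code point>_, e.g. "." -> "_2e_".
            # Underscores are also escaped ("_" -> "_5f_") to keep decoding unambiguous.
            parts.append(f"_{ord(c):x}_")
    return "".join(parts)

# Matches the example above:
# encode_field_name("a.b") -> "x_a_2e_b"
# encode_field_name("C.d") -> "x_c_2e_d"
# encode_field_name("x_")  -> "x_x_5f_"
# encode_field_name("x_a") -> "x_x_5f_a"
```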
