
April 2024

Upsolver new features, enhancements, and bug fixes for April 2024.

Release Notes Blog

For more detailed information on these updates, check out the Upsolver May 2024 Feature Summary post on the blog.

2024.04.25-12.36

Enhancements

  • Iceberg:

    • Added support for writing to hidden partitions

    • Enabled changing the partition specification of existing tables even while they are actively being written to by a job

    • Support writing to External Iceberg tables

    • Support altering Iceberg table properties via SQL (see the sketch after this list)
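
The Iceberg items above map onto standard Iceberg SQL concepts. As a rough, generic illustration only (Iceberg-style SQL with made-up table and property names; Upsolver's own syntax may differ, so check the reference for the exact form):

```sql
-- Generic Iceberg-style SQL for illustration; not necessarily Upsolver's exact syntax.

-- Hidden partitioning: partition by a transform of a column rather than an explicit
-- partition column that writers must populate.
CREATE TABLE my_catalog.analytics.events (
    event_id   STRING,
    event_time TIMESTAMP
)
PARTITIONED BY (days(event_time));

-- Alter a table property on an existing Iceberg table.
ALTER TABLE my_catalog.analytics.events
SET TBLPROPERTIES ('commit.retry.num-retries' = '10');

-- Change the partition specification; existing data keeps its old spec and new writes
-- use the new one, so jobs can keep writing while the spec changes.
ALTER TABLE my_catalog.analytics.events
ADD PARTITION FIELD hours(event_time);
```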

Bug Fixes

  • Worksheet tree: replication jobs are now shown under tables that were created dynamically

  • MongoDB CDC:

    • Corrected the parsing of Decimal types to Double

    • Resolved errors encountered when replicating collections containing fields with types Regex, Min Key, and Max Key


2024.04.16-12.06

  • Introduced the PARSE_DEBEZIUM_JSON_TYPE property to the Avro Schema Registry content format. It controls whether JSON columns from Debezium sources are dynamically parsed into Upsolver records or kept as JSON strings; for Snowflake outputs with schema evolution, these fields are written to columns of type Variant (see the sketch after this list)

  • Upgraded the Snowflake driver to 3.15.0

  • Fixed a bug preventing the pausing of ingestion jobs to Snowflake

  • Iceberg schema evolution fixes:

    • Nested fields were previously added without the field docs that are later used to determine which field evolved from which. Affected tables may need to be recreated if the jobs writing to them produce errors

    • Fixed the handling of fields that can have multiple types (e.g., a field that can be a record and can also be an array of strings)
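
As a sketch of where the new property sits, the ingestion job below reads Debezium events using the Avro Schema Registry content format and asks for JSON columns to be parsed into records. Only the PARSE_DEBEZIUM_JSON_TYPE name comes from the note above; the job shape, option placement, and all connection, topic, and table names are assumptions, so consult the Upsolver reference for the exact syntax.

```sql
-- Hypothetical sketch: everything except PARSE_DEBEZIUM_JSON_TYPE is assumed syntax.
CREATE SYNC JOB load_debezium_orders
    CONTENT_TYPE = (
        TYPE = AVRO_SCHEMA_REGISTRY,
        SCHEMA_REGISTRY_URL = 'http://schema-registry:8081',
        -- TRUE: parse Debezium JSON columns dynamically into Upsolver records;
        -- FALSE: keep them as JSON strings. With Snowflake outputs and schema
        -- evolution, these fields land in columns of type Variant.
        PARSE_DEBEZIUM_JSON_TYPE = TRUE
    )
AS COPY FROM KAFKA my_kafka_connection
    TOPIC = 'dbserver1.public.orders'
INTO SNOWFLAKE my_snowflake_connection.ANALYTICS.ORDERS;
```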


2024.04.04-09.33

New Features

  • Ingestion wizard: ClickHouse is now supported as a target

  • The data lineage diagram is now accessible from the Job Status, Datasets, and materialized view pages. Users can easily view real-time job status and dependencies (CDC sources are not supported at this point)

Enhancements

  • For new entities, you can now use the updated Parquet list structure (parquet.avro.write-old-list-structure = false) when writing Parquet files to S3 and Upsolver tables

  • Support casting strings to JSON in jobs writing to Iceberg tables

  • Previewing Classic Data Sources is now supported (SELECT * FROM "classic data source name")

  • COLUMN_TRANSFORMATIONS are now supported by replication jobs (see the sketch after this list)

  • Added support for Iceberg table retention using the TABLE_DATA_RETENTION property

  • The OPTIMIZE option for external Iceberg tables now supports optimizing tables that are not partitioned

  • The cluster system table (system.monitoring.clusters) now shows data that is aligned with the Cluster page

  • Cost reduction:

    • Reduced S3 API costs of Iceberg tables

    • Reduced S3 API costs of Hive tables

    • Reduced S3 API costs of replication jobs and single entity jobs

  • The CDC event log is now deleted right after the log events are parsed

  • Improved the performance of the VPC integration experience

  • UI: cosmetic changes
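
A rough sketch of how the COLUMN_TRANSFORMATIONS and TABLE_DATA_RETENTION options above might be used together. The connection, table, and job names are made up, and the surrounding syntax is an assumption rather than the documented form; see the Upsolver reference for the authoritative options.

```sql
-- Hypothetical sketch: only the COLUMN_TRANSFORMATIONS and TABLE_DATA_RETENTION
-- option names come from the notes above; the rest is assumed.

-- A target table that only keeps data for a bounded period.
CREATE TABLE default_glue_catalog.analytics.orders (
    order_id STRING,
    email    STRING
)
    TABLE_DATA_RETENTION = 30 DAYS;

-- A replication job that hashes a sensitive column as the data is ingested.
CREATE SYNC JOB replicate_orders
    COLUMN_TRANSFORMATIONS = (email = MD5(email))
AS COPY FROM POSTGRES my_postgres_connection
    TABLE_INCLUDE_LIST = ('public.orders')
INTO default_glue_catalog.analytics.orders;
```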

Bug Fixes

  • Fixed a bug that could skip data when reading from CDC sources

  • Fixed a bug where the Events Written graph wouldn't show for single entity jobs that contain a lot of sub jobs, or when the job list page contains a lot of jobs

  • Fixed a bug where replication and single entity jobs wouldn't work when trying to create a table with a name that existed before

  • Fixed a rare bug where showing "Lifetime" statistics on the Datasets page wouldn't show the lifetime statistics

  • Fixed a bug where jobs that read data from system.information_schema.columns would time out when there were tables with a large number of columns

  • Fixed a bug where it was possible to drop a table that a replication or single entity job was writing into. The new behavior requires that the job is dropped first

  • Fixed a bug where a single entity job that reads data from a table partitioned by time wouldn't read from the start of the table

  • Fixed a bug where the first point in the graph would have a timestamp earlier than the start time of the first job that writes to the table
