A Deeper Dive into Apache Iceberg V3: How New Designs Are Solving Core Data Lake Challenges
The Next Chapter for Apache Iceberg: Welcoming the Iceberg V3 Spec
The data community has long grappled with how to bring database-like agility to petabyte-scale datasets stored in open cloud storage. The trade-off has often been between the scalability of data lakes and the performance and ease of use of traditional data warehouses. Executing fine-grained updates or evolving table schemas on massive tables often required slow, expensive, and disruptive operations.
The Apache Iceberg project is taking on this challenge. Early versions introduced a revolutionary metadata layer that brought reliability and ACID transactions to data lakes. However, certain operations still presented performance bottlenecks at scale.
With the ratification of the V3 specification, the Apache Iceberg community has introduced new designs that directly address these core issues. These advancements represent a significant leap forward in the mission to build an open and high-performance data lakehouse architecture. Let's explore the technical details of these solutions.
More Efficient Row-Level Transactions with Deletion Vectors
A primary challenge for data lakes has been handling row-level deletes efficiently. Previous approaches, like positional delete files, were a clever solution but could lead to performance degradation at query time when a reader had to reconcile many small delete files against large data files.
The Iceberg V3 spec introduces binary deletion vectors, a more performant and scalable architecture. The core idea is to attach a bitmap to each data file, where each bit corresponds to a row, marking it as deleted or not.
When a query engine reads a data file, it also reads its corresponding deletion vector. As it scans rows, it can check the bitmap with minimal overhead and skip rows marked for deletion. This design is made exceptionally efficient through the use of Roaring bitmaps. This data structure is ideal for this task because it can compress sparse sets of integers—like the positions of deleted rows—into a tiny footprint.
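To make the read path concrete, here is a minimal sketch in Python using the pyroaring library (an assumption for illustration; Iceberg engines typically rely on the Java RoaringBitmap implementation, and the deletion vector itself would be deserialized from a Puffin file rather than built inline):

```python
from pyroaring import BitMap  # pip install pyroaring

# Hypothetical deletion vector for one data file: the set of deleted row positions.
# In Iceberg V3 this bitmap lives in a Puffin sidecar file, not built inline like this.
deletion_vector = BitMap([3, 7, 1_000_000])

def scan_live_rows(rows, deleted):
    """Yield only live rows, skipping positions marked in the deletion vector."""
    for position, row in enumerate(rows):
        if position in deleted:  # cheap membership test on the compressed bitmap
            continue
        yield row

data_file_rows = [f"row-{i}" for i in range(10)]
print(list(scan_live_rows(data_file_rows, deletion_vector)))  # rows 3 and 7 are skipped
```

Because Roaring bitmaps compress runs and sparse sets of integers aggressively, even a vector covering millions of rows typically occupies only a few kilobytes.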
The practical difference is profound:
- Previous Model (Positional Deletes): A query might involve reading a central log of deletes, like `deletes.avro`, containing tuples of `(file_path, row_position)`.
- V3 Model (Deletion Vectors): Each data file (e.g., `file_A.parquet`) is paired with a small, efficient sidecar file (e.g., `file_A.puffin`) containing a Roaring bitmap of its deleted rows.
This change localizes delete information, streamlines the read path, and dramatically improves the performance of workloads that rely on frequent Change Data Capture (CDC) or row-level updates.
Simplified Schema Evolution with Default Column Values
Another common operational hurdle in managing large tables has been schema evolution. Adding a column to a table with billions of rows traditionally required a "backfill"—a costly and time-consuming job to rewrite all existing data files to add the new column.
Iceberg V3 eliminates this friction with default column values. This feature allows a default value to be specified directly in the table's metadata when a column is added.
```sql
ALTER TABLE events ADD COLUMN version INT DEFAULT 1;
```
This operation is instantaneous because it only modifies metadata. No data files are touched. When a query engine encounters an older data file without the `version` column, it consults the table schema, finds the default value, and seamlessly populates it in the query results on the fly. This simple but powerful mechanism makes schema evolution a fast, non-disruptive operation, allowing data models to evolve quickly.
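Here is a minimal sketch of those read-time mechanics; the `Field` class and its `read_default` attribute are hypothetical stand-ins for an engine's schema representation, not Iceberg's actual API:

```python
from dataclasses import dataclass

@dataclass
class Field:
    name: str
    read_default: object = None  # default applied to files written before the column existed

# Table schema after the ALTER TABLE above: `version` carries a default of 1.
schema = [Field("event_id"), Field("version", read_default=1)]

def project_row(raw_row: dict, fields: list) -> dict:
    # Older data files simply lack the new column; the default is filled in during
    # projection, so no existing file ever needs to be rewritten.
    return {f.name: raw_row.get(f.name, f.read_default) for f in fields}

old_file_row = {"event_id": 42}           # written before `version` was added
print(project_row(old_file_row, schema))  # {'event_id': 42, 'version': 1}
```

The key point is that defaults are resolved at query time, which is why adding a column stays a pure metadata operation.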
Improved Query Engine Compatibility with Enhanced Data Types and Lineage
Beyond these headline features, V3 broadens the capabilities of Iceberg to support more advanced use cases:
- Row-Level Lineage: For robust auditing and reliable CDC pipelines, V3 formalizes the tracking of row history. By embedding metadata about when a row was added or last modified, Iceberg tables can now provide a clear lineage, simplifying data governance and enabling more efficient downstream data replication (see the first sketch after this list).
- Rich Data Types: V3 closes the gap with traditional databases by introducing a more expressive type system. This includes a `VARIANT` type for handling semi-structured data like JSON, native `GEOMETRY` and `GEOGRAPHY` types for advanced geospatial analysis, and nanosecond-precision timestamps via the new `timestamp_ns` and `timestamptz_ns` data types, a significant increase from the previous microsecond limit (see the second sketch after this list).
Building the Future of the Open Data Lakehouse
These V3 features—deletion vectors, default values, row lineage, and richer types—are more than just individual improvements. Together, they represent a cohesive step toward a new paradigm where the lines between the data lake and the data warehouse are erased. They enable faster, more efficient, and more flexible data operations than previously thought possible.
This progress is a testament to the collaborative spirit of the Apache Iceberg community. At Google, we are proud to contribute to and support open-source projects like Iceberg that are defining the future of data architecture. We are excited to see the innovative applications the community will build on this powerful new foundation.
Want to get started with Iceberg? Check out this blog post to learn more about how Google Cloud's managed Iceberg offering, BigLake tables for Apache Iceberg in BigQuery, makes building Iceberg-native lakehouses easier by maximizing performance without sacrificing governance.