UniForm and Compatibility Mode
Skill: databricks-iceberg
What You Can Build
You have Delta tables that work well inside Databricks. Now an external team needs to read them from Snowflake, Trino, or PyIceberg. UniForm generates Iceberg metadata automatically so external engines read your Delta tables as Iceberg — no data migration, no format conversion. Compatibility Mode extends this to streaming tables and materialized views in SDP pipelines.
In Action
“Using SQL, enable UniForm on an existing Delta table so external engines can read it as Iceberg via the IRC endpoint.”
```sql
ALTER TABLE analytics.gold.customers
SET TBLPROPERTIES (
  'delta.columnMapping.mode' = 'name',
  'delta.enableIcebergCompatV2' = 'true',
  'delta.universalFormat.enabledFormats' = 'iceberg'
);
```

Key decisions:
- The table stays Delta internally — you keep CDF, streaming, and all Delta features
- Iceberg metadata generates asynchronously after each Delta commit (typically seconds, occasionally minutes for large transactions)
- External engines get read-only access via the Iceberg REST Catalog; writes still go through Delta
- Column mapping mode `name` is required — if your table uses `id` mode, migrate to `name` first
- Deletion vectors must be disabled before enabling UniForm
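The prerequisites above can be sketched as a small checker. This is illustrative only — `uniform_prerequisites` is a hypothetical helper operating on a plain dict of table properties, not a Databricks API; the property keys match the `TBLPROPERTIES` shown above.

```python
# Hypothetical helper: given a Delta table's properties as a plain dict,
# list what must change before UniForm can be enabled. Illustrative only.
REQUIRED = {
    "delta.columnMapping.mode": "name",
    "delta.enableIcebergCompatV2": "true",
    "delta.universalFormat.enabledFormats": "iceberg",
}

def uniform_prerequisites(props: dict) -> list:
    steps = []
    # 'id' column mapping blocks UniForm; it must be migrated to 'name'.
    if props.get("delta.columnMapping.mode") == "id":
        steps.append("migrate column mapping from 'id' to 'name'")
    # Deletion vectors must be off, and existing DVs purged via REORG.
    if props.get("delta.enableDeletionVectors") == "true":
        steps.append("disable deletion vectors, then REORG ... APPLY (PURGE)")
    # Any required property not yet set correctly needs an ALTER TABLE.
    missing = sorted(k for k, v in REQUIRED.items() if props.get(k) != v)
    if missing:
        steps.append("SET TBLPROPERTIES: " + ", ".join(missing))
    return steps
```

A table that already carries all three properties and has no deletion vectors returns an empty list, meaning it is ready.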
More Patterns
Enable UniForm on a new table
“Using SQL, create a new Delta table with UniForm enabled from the start.”
```sql
CREATE TABLE analytics.gold.products (
  product_id BIGINT,
  name STRING,
  category STRING,
  price DECIMAL(10,2),
  updated_at TIMESTAMP
)
TBLPROPERTIES (
  'delta.columnMapping.mode' = 'name',
  'delta.enableIcebergCompatV2' = 'true',
  'delta.universalFormat.enabledFormats' = 'iceberg'
);
```

Setting these properties at creation time avoids the deletion vector cleanup step you’d need on an existing table.
Fix deletion vectors before enabling UniForm
“Using SQL, disable deletion vectors on an existing table and then enable UniForm.”
```sql
-- Step 1: Disable deletion vectors
ALTER TABLE analytics.gold.customers
SET TBLPROPERTIES ('delta.enableDeletionVectors' = 'false');

-- Step 2: Rewrite files to remove existing DVs
REORG TABLE analytics.gold.customers APPLY (PURGE);

-- Step 3: Enable UniForm
ALTER TABLE analytics.gold.customers
SET TBLPROPERTIES (
  'delta.columnMapping.mode' = 'name',
  'delta.enableIcebergCompatV2' = 'true',
  'delta.universalFormat.enabledFormats' = 'iceberg'
);
```

The `REORG ... PURGE` step rewrites data files that contain deletion vectors. Without it, external engines hit read errors on those files.
Compatibility Mode for streaming tables
“Using SQL, create a streaming table in an SDP pipeline with Compatibility Mode so external engines can read it as Iceberg.”
```sql
CREATE OR REFRESH STREAMING TABLE analytics.bronze.events
TBLPROPERTIES (
  'delta.universalFormat.enabledFormats' = 'compatibility',
  'delta.universalFormat.compatibility.location' = 's3://my-bucket/iceberg-compat/events/'
)
AS SELECT * FROM STREAM read_files('/Volumes/analytics/raw/landing/events/');
```

Compatibility Mode writes a separate, read-only copy to the external location in Iceberg-compatible format. After the initial full copy, updates are incremental. The trade-off: you pay extra storage costs proportional to the table size.
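To reason about that storage trade-off, here is a back-of-the-envelope estimate of the duplicated storage cost. The per-GB price is an illustrative assumption, not a quoted cloud rate, and the helper is hypothetical.

```python
# Rough sketch of the extra monthly storage cost Compatibility Mode incurs:
# it maintains a full Iceberg-format copy alongside the Delta table.
# $0.023/GB-month is an illustrative object-storage figure, not a quoted price.
def compat_mode_monthly_cost(table_size_gb: float, price_per_gb_month: float = 0.023) -> float:
    # The Iceberg copy is roughly the size of the source table.
    return table_size_gb * price_per_gb_month

# A 2 TB streaming table duplicates roughly 2048 GB of storage.
extra = compat_mode_monthly_cost(2048)  # about $47/month at the assumed rate
```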
Compatibility Mode in Python SDP
“Using Python, define a streaming table with Compatibility Mode in an SDP pipeline.”
```python
from pyspark import pipelines as dp

@dp.table(
    name="events",
    table_properties={
        "delta.universalFormat.enabledFormats": "compatibility",
        "delta.universalFormat.compatibility.location": "s3://my-bucket/iceberg-compat/events/",
    },
)
def events():
    return (
        spark.readStream.format("cloudFiles")
        .option("cloudFiles.format", "json")
        .load("/Volumes/analytics/raw/landing/events/")
    )
```

This produces the same result as the SQL version. The `compatibility.location` must point to a configured external location.
Watch Out For
- UniForm does not work on streaming tables or materialized views — use Compatibility Mode instead. Regular UniForm only applies to standard Delta tables.
- Async metadata has a visibility delay — external engines won’t see the latest data immediately after a Delta commit. For most workloads this is seconds; large transactions can take minutes.
- Compatibility Mode duplicates data — it writes a full copy to the external location. Factor in the extra cloud storage costs, especially for large tables.
- Choosing the wrong approach — use Managed Iceberg (`USING ICEBERG`) when you need external read/write and don’t need Delta features. Use UniForm when you want to keep Delta features and only need external reads. Use Compatibility Mode when the source is a streaming table or materialized view.
- Initial Compatibility Mode generation can take up to 1 hour — plan for this on large tables. Subsequent refreshes are incremental and faster.
- Disabling UniForm — run `ALTER TABLE ... UNSET TBLPROPERTIES ('delta.universalFormat.enabledFormats')` to stop Iceberg metadata generation. Existing metadata remains until vacuumed.
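The "choosing the wrong approach" guidance above can be captured as a small decision helper. This is a sketch of the rule of thumb, not an official Databricks API.

```python
# Illustrative decision helper encoding the guidance above: which approach
# fits, given three yes/no requirements. Hypothetical, not a Databricks API.
def choose_approach(source_is_streaming_or_mv: bool,
                    need_external_writes: bool,
                    need_delta_features: bool) -> str:
    # Streaming tables and materialized views rule out plain UniForm.
    if source_is_streaming_or_mv:
        return "Compatibility Mode"
    # External read/write without Delta features points at Managed Iceberg.
    if need_external_writes and not need_delta_features:
        return "Managed Iceberg (USING ICEBERG)"
    # Keep Delta features, external reads only: UniForm.
    return "UniForm"
```

For example, a standard Delta table that keeps CDF and streaming but only needs external reads maps to UniForm.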