Delta Lake table format interoperability

In Microsoft Fabric, the Delta Lake table format is the standard for analytics. Delta Lake is an open-source storage layer that brings ACID (Atomicity, Consistency, Isolation, Durability) transactions to big data and analytics workloads.

All Fabric experiences natively generate and consume Delta Lake tables, providing a unified product experience. Delta Lake tables produced by one compute engine, such as Fabric Data Warehouse or Synapse Spark, can be consumed by any other engine, such as Power BI. When you ingest data into Fabric, Fabric stores it as Delta tables by default. You can easily integrate external data containing Delta Lake tables by using OneLake shortcuts.
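
For example, a table written from a Fabric notebook lands in OneLake as a Delta table that other engines can read. The following sketch assumes a notebook with a default lakehouse attached; the table name `sales_orders` and the sample rows are illustrative.

```python
from pyspark.sql import Row

# Illustrative sample data; Fabric notebooks provide the `spark` session.
df = spark.createDataFrame([
    Row(order_id=1, region="West", amount=250.0),
    Row(order_id=2, region="East", amount=125.5),
])

# Saving as a managed table in the default lakehouse produces Delta format;
# format("delta") is the default in Fabric but is spelled out here.
df.write.format("delta").mode("overwrite").saveAsTable("sales_orders")

# Any Delta-aware engine (SQL analytics endpoint, Power BI Direct Lake,
# another Spark session) can now read the same table from OneLake.
spark.sql("SELECT region, SUM(amount) AS total FROM sales_orders GROUP BY region").show()
```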

Delta Lake features and Fabric experiences

To achieve interoperability, all Fabric experiences align on the same Delta Lake features and capabilities. Some experiences can only write to Delta Lake tables, while others can only read from them.

  • Writers: Data warehouses, eventstreams, and Power BI semantic models exported into OneLake
  • Readers: SQL analytics endpoint and Power BI direct lake semantic models
  • Writers and readers: Fabric Spark runtime, dataflows, data pipelines, and Kusto Query Language (KQL) databases

The following matrix shows key Delta Lake features and their availability in each Fabric experience.

| Fabric capability | Column mappings | Deletion vectors | V-order writing | Table optimization and maintenance | Partitions | Liquid Clustering | TIMESTAMP_NTZ | Delta reader/writer version and default table features |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Data warehouse Delta Lake export | Name: Yes; ID: No | Yes | Yes | Yes | Read: N/A; Write: No | No | No | Reader: 3, Writer: 7; Deletion Vectors, Column Mappings (name) |
| SQL analytics endpoint | Name: Yes; ID: No | Yes | N/A | N/A | Read: Yes; Write: N/A | Yes | No | N/A |
| Lakehouse explorer and preview | Name: Yes; ID: No | Yes | N/A | Yes | Read: Yes; Write: N/A | Yes | Yes | N/A |
| Fabric Spark Runtime 1.3 | Name: Yes; ID: Yes | Yes | Yes | Yes | Read: Yes; Write: Yes | Yes | Yes | Reader: 1, Writer: 2 |
| Fabric Spark Runtime 1.2 | Name: Yes; ID: Yes | Yes | Yes | Yes | Read: Yes; Write: Yes | Yes, read only | Yes | Reader: 1, Writer: 2 |
| Fabric Spark Runtime 1.1 | Name: Yes; ID: Yes | No | Yes | Yes | Read: Yes; Write: Yes | Yes, read only | No | Reader: 1, Writer: 2 |
| Dataflows Gen2 | Name: Yes; ID: No | Yes | Yes | No | Read: Yes; Write: Yes | Yes, read only | No | Reader: 1, Writer: 2 |
| Data pipelines | Name: No; ID: No | No | Yes | No | Read: Yes; Write: Yes, overwrite only | Yes, read only | No | Reader: 1, Writer: 2 |
| Power BI direct lake semantic models | Name: Yes; ID: No | Yes | N/A | N/A | Read: Yes; Write: N/A | Yes | No | N/A |
| Export Power BI semantic models into OneLake | Name: Yes; ID: No | N/A | Yes | No | Read: N/A; Write: No | No | No | Reader: 2, Writer: 5; Column Mappings (name) |
| KQL databases | Name: Yes; ID: No | Yes | No | No* | Read: Yes; Write: Yes | No | No | Reader: 1, Writer: 1 |
| Eventstreams | Name: No; ID: No | No | No | No | Read: N/A; Write: Yes | No | No | Reader: 1, Writer: 2 |

N/A = not applicable.

* KQL databases provide certain table maintenance capabilities, such as retention; data is removed from OneLake at the end of the retention period. For more information, see One logical copy.
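
To verify which protocol versions a given table actually carries, you can inspect it from a Fabric notebook. A minimal sketch, reusing the illustrative `sales_orders` table from earlier; `DESCRIBE DETAIL` returns the `minReaderVersion` and `minWriterVersion` columns along with the table properties.

```python
# Inspect the Delta protocol versions and properties recorded for a table.
detail = spark.sql("DESCRIBE DETAIL sales_orders").select(
    "minReaderVersion", "minWriterVersion", "properties"
)
detail.show(truncate=False)
```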

Note

  • Fabric doesn't write column mappings by default, except where noted. The default Fabric experience generates tables that are compatible across the service. Delta Lake tables produced by third-party services may have incompatible table features.
  • Some Fabric experiences don't offer table optimization and maintenance capabilities, such as bin-compaction, V-order, deletion vector merge (PURGE), and cleanup of old, unreferenced files (VACUUM). To keep Delta Lake tables optimal for analytics, follow the techniques in Use table maintenance feature to manage delta tables in Fabric for tables ingested using those experiences; a sketch of the equivalent Spark commands follows this note.
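
For tables ingested through experiences without built-in maintenance, the same operations can be run from Fabric Spark. A minimal sketch, again assuming the illustrative `sales_orders` table; the `VORDER` clause is Fabric-specific Spark SQL, and 168 hours matches the default 7-day Delta retention window.

```python
# Bin-compaction of small files, rewriting them with V-order applied.
spark.sql("OPTIMIZE sales_orders VORDER")

# Clean up unreferenced files older than the 7-day retention window.
spark.sql("VACUUM sales_orders RETAIN 168 HOURS")
```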

Current limitations

Currently, Fabric doesn't support these Delta Lake features:

  • V2 Checkpoints aren't uniformly available in all experiences. Only Spark notebooks and Spark jobs can read and write to tables with V2 Checkpoints. Lakehouse and the SQL analytics endpoint don't correctly list tables containing V2 Checkpoint files in the _delta_log folder. One way to detect such tables is sketched after this list.
  • Delta Lake 3.x UniForm. This feature is supported only in the Data Engineering Spark compute (notebooks, Spark jobs).
  • Identity columns writing (proprietary Databricks feature)
  • Delta Live Tables (proprietary Databricks feature)
  • Delta Lake 4.x features: Type widening, collations, variant type, coordinated commits.
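
One way to tell whether a table relies on a feature such as V2 Checkpoints is to read the protocol actions from its transaction log. A hedged sketch: it assumes a notebook with a lakehouse attached, the illustrative `sales_orders` table under `Tables/`, and the standard Delta protocol action layout in the log's JSON commit files.

```python
# Read the JSON commit files from the table's transaction log and keep
# the protocol actions, which list the reader/writer table features.
log = spark.read.json("Tables/sales_orders/_delta_log/*.json")
protocol = log.where("protocol IS NOT NULL").select("protocol.*")
protocol.show(truncate=False)

# A reader feature named "v2Checkpoint" indicates that non-Spark Fabric
# experiences might not list or read this table correctly.
```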

Special characters in table names

Microsoft Fabric supports special characters in table names, allowing Unicode characters to be used when composing table names across Fabric experiences.

The following special characters are either reserved or not compatible with at least one of Microsoft Fabric technologies and must not be used as part of a table name: " (double quotes), ' (single quote), #, %, +, :, ?, ` (backtick).
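
If you generate table names programmatically, it can be worth screening them against these reserved characters before creating a table. A small illustrative helper; the function name and policy are assumptions for this sketch, not a Fabric API.

```python
# Characters reserved by, or incompatible with, at least one Fabric technology.
RESERVED_CHARS = set('"\'#%+:?`')

def is_valid_fabric_table_name(name: str) -> bool:
    """Illustrative check: Unicode characters are allowed, but none of
    the reserved characters may appear anywhere in the name."""
    return bool(name) and not (set(name) & RESERVED_CHARS)

print(is_valid_fabric_table_name("Ventas_México"))  # True: Unicode is fine
print(is_valid_fabric_table_name("sales#2024"))     # False: '#' is reserved
```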