Azure Databricks is a fast, easy, and collaborative Apache Spark-based analytics platform. It's designed to help you build and deploy data engineering, machine learning, and analytics workloads at scale, providing a unified platform for data teams to collaborate efficiently. Across these use cases for Databricks, there are some common considerations and best practices for using Azure Databricks. This article addresses these considerations and gives architectural recommendations that are mapped to the principles of the Well-Architected Framework pillars.
It's assumed that, as an architect, you've reviewed Choose an analytical data store and chosen Azure Databricks as the analytics platform for your workload.
Technology scope
This review focuses on the interrelated decisions for the following Azure-hosted Databricks features:
- Azure Databricks
- Apache Spark
- Delta Lake
- Unity Catalog
- MLflow
Reliability
The purpose of the Reliability pillar is to provide continued functionality by building enough resilience and the ability to recover fast from failures.
Reliability design principles provide a high-level design strategy applied for individual components, system flows, and the system as a whole.
Workload design checklist
Start your design strategy based on the design review checklist for Reliability. Determine its relevance to your business requirements while keeping in mind the nature of your application and the criticality of its components. Extend the strategy to include more approaches as needed.
Understand service limits and quotas: Azure Databricks service limits directly constrain workload reliability through compute clusters, workspace capacity, storage throughput, and network bandwidth restrictions. Architecture design must proactively incorporate these quotas to prevent unexpected service disruptions, including the 1000-node cluster limit, workspace cluster maximums, and regional capacity constraints that can halt scaling operations during peak demand.
Anticipate potential failures through failure mode analysis: Systematic failure mode analysis identifies potential system failures and establishes corresponding mitigation strategies to maintain distributed computing resilience.
Common failure scenarios and their proven mitigation approaches include:
Failure | Mitigation |
---|---|
Cluster driver node failure | Use cluster auto-restart policies and implement checkpointing for Spark applications. Use structured streaming with fault-tolerant state management. |
Job execution failures | Implement retry policies with exponential backoff. Use Databricks job orchestration with error handling. Configure proper timeout settings. |
Data corruption or inconsistency | Use Delta Lake ACID transactions, time travel capabilities, and data expectations in Lakeflow Declarative Pipelines. Implement data validation checks and monitoring. |
Workspace or region unavailability | Implement multi-region deployment strategies. Use workspace backup and restore procedures. Configure cross-region data replication. |
These mitigation strategies leverage native Azure Databricks capabilities, including auto-restart, automatic scaling, Delta Lake consistency guarantees, and Unity Catalog security features, for fault tolerance.
Design to support redundancy across the critical layers: Redundancy is a key strategy that must be applied to the critical architectural layers to maintain workload continuity.
For example, distribute clusters across availability zones, use diverse instance types, leverage cluster pools, and implement automatic node replacement policies. Similarly, reliable network design protects against connectivity failures that could disrupt control plane reachability, data access, and communication with dependencies. Redundant network paths, diverse private endpoint configurations, DNS failover mechanisms, and VNet injection can help achieve network resilience. Even metadata resilience is important for maintaining compliance and data accessibility during service disruptions, because governance failures can halt data access and compromise compliance requirements.
For higher availability, consider multi-region Azure Databricks deployments for geographic redundancy that protects against regional outages and ensures business continuity during extended service disruptions. Multi-region setup is also a viable solution for disaster recovery.
Implement scaling strategies: Use autoscaling to handle demand fluctuations while keeping performance steady. Plan for delays in adding resources, regional capacity limits, and cluster startup latency during peak demand periods.
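For instance, autoscaling bounds and auto-termination can be set when a cluster is created. The following sketch uses the Databricks SDK for Python; the cluster name, runtime version, VM size, and node counts are illustrative assumptions that should be sized against your own quotas and workload profile.

```python
# A sketch, not a production script: create an autoscaling cluster with explicit
# lower and upper bounds. Assumes the databricks-sdk package is installed and
# workspace authentication is configured (for example, through environment variables).
from databricks.sdk import WorkspaceClient
from databricks.sdk.service.compute import AutoScale

w = WorkspaceClient()

cluster = w.clusters.create(
    cluster_name="etl-autoscaling",                      # placeholder name
    spark_version="15.4.x-scala2.12",                    # example runtime; pick a supported LTS version
    node_type_id="Standard_D8ds_v5",                     # example VM size; verify regional availability
    autoscale=AutoScale(min_workers=2, max_workers=10),  # bounds sized against workspace quotas
    autotermination_minutes=30,                          # release idle capacity automatically
).result()                                               # wait for the cluster to become available

print(f"Created cluster {cluster.cluster_id}")
```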
Adopt serverless compute for improved reliability: Serverless compute options reduce operational complexity and improve reliability by shifting infrastructure management to Microsoft, providing automatic scaling, built-in fault tolerance, and consistent availability without cluster lifecycle management overhead.
Implement comprehensive health monitoring and alerting: Comprehensive monitoring across all Azure Databricks components enables proactive issue detection and rapid response before availability impacts occur, covering workspace health, cluster status, job execution patterns, and data pipeline performance with automated escalation workflows.
Protect data using Delta Lake reliability features: Delta Lake provides essential data protection through ACID transactions, automatic versioning, time travel capabilities, and schema enforcement that prevent corruption and enable recovery from data issues.
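As a minimal sketch, the following notebook commands show how time travel and retention settings might be applied; the table name, version number, and retention intervals are placeholders, and `spark` refers to the session that Databricks provides in notebooks and jobs.

```python
# A sketch of Delta Lake time travel and retention settings; names and values are placeholders.

# Read the table as it looked at an earlier version (TIMESTAMP AS OF also works).
df_v42 = spark.sql("SELECT * FROM main.sales.orders VERSION AS OF 42")

# Roll the live table back to a known-good version after accidental corruption.
spark.sql("RESTORE TABLE main.sales.orders TO VERSION AS OF 42")

# Keep enough history to satisfy recovery objectives (7-30 days is typical).
spark.sql("""
    ALTER TABLE main.sales.orders SET TBLPROPERTIES (
      'delta.deletedFileRetentionDuration' = 'interval 30 days',
      'delta.logRetentionDuration' = 'interval 30 days'
    )
""")
```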
Configure job reliability and retry mechanisms: Job reliability configurations establish resilient data processing through intelligent retry policies, timeout management, and failure handling mechanisms that distinguish between transient issues and permanent errors.
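A hedged sketch of such a configuration, using the Databricks SDK for Python, is shown below. The job name, notebook path, cluster ID, and thresholds are assumptions to adapt; note that the platform enforces a minimum wait between retries rather than full exponential backoff, which would need to be layered into orchestration code if required.

```python
# A sketch of retry and timeout settings on a job task, using the Databricks SDK
# for Python. Names, paths, the cluster ID, and thresholds are placeholders.
from databricks.sdk import WorkspaceClient
from databricks.sdk.service import jobs

w = WorkspaceClient()

job = w.jobs.create(
    name="nightly-etl",
    tasks=[
        jobs.Task(
            task_key="ingest",
            notebook_task=jobs.NotebookTask(notebook_path="/Workspace/etl/ingest"),
            existing_cluster_id="<cluster-id>",  # placeholder
            max_retries=3,                       # cap retries for transient failures
            min_retry_interval_millis=30_000,    # wait at least 30 seconds between attempts
            retry_on_timeout=False,              # treat timeouts as non-transient in this example
            timeout_seconds=3_600,               # fail fast instead of hanging indefinitely
        )
    ],
)
print(f"Created job {job.job_id}")
```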
Build data pipeline resilience and fault tolerance: Data pipeline resilience addresses the critical reliability challenges of distributed data processing where failures can cascade throughout interconnected data systems and disrupt business analytics workflows.
Advanced resilience strategies leverage Lakeflow Declarative Pipelines, structured streaming checkpoints, Auto Loader rescued data capabilities, and Lakeflow Declarative Pipelines quality constraints to provide automatic error handling, data quality enforcement, and graceful degradation during infrastructure disruptions.
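For example, a streaming ingest might combine Auto Loader's rescued-data column with a checkpoint in zone-redundant storage, as in the sketch below; the storage paths, container names, and target table are placeholder assumptions.

```python
# A sketch of an Auto Loader stream with a rescued-data column and a checkpoint
# stored on zone-redundant storage. Paths, containers, and table names are placeholders.
raw = (
    spark.readStream.format("cloudFiles")
    .option("cloudFiles.format", "json")
    .option("cloudFiles.schemaLocation",
            "abfss://checkpoints@zrsaccount.dfs.core.windows.net/schemas/orders")
    .option("cloudFiles.rescuedDataColumn", "_rescued_data")  # keep malformed fields instead of failing
    .load("abfss://landing@datalake.dfs.core.windows.net/orders/")
)

(
    raw.writeStream
    .option("checkpointLocation",
            "abfss://checkpoints@zrsaccount.dfs.core.windows.net/checkpoints/orders")
    .trigger(availableNow=True)  # or processingTime="30 seconds" for continuous ingestion
    .toTable("main.bronze.orders")
)
```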
Establish backup and disaster recovery procedures: Effective disaster recovery requires aligning recovery time objectives with business requirements while establishing automated backup processes for workspace metadata, notebook repositories, job definitions, cluster configurations, and integrated data storage systems.
If you're using a secondary region for recovery, pay attention to workspace metadata synchronization, code repository replication, and coordinated integration with dependent Azure services to maintain operational continuity across geographic boundaries.
Implement reliability testing and chaos engineering: Systematic reliability testing validates that failure recovery mechanisms function correctly under real-world conditions, incorporating chaos engineering principles to identify resilience gaps before they impact production environments.
Recommendations
Recommendation | Benefit |
---|---|
Configure cluster autoscaling with minimum node count of 2 and maximum node count aligned with workspace quota limits. Set target utilization thresholds between 70-80% to balance cost efficiency with performance headroom. | Automatic scaling prevents cluster resource exhaustion while maintaining cost efficiency through dynamic node allocation. Proper limit configuration ensures workloads remain within service quotas, preventing job failures from exceeding workspace capacity constraints. |
Deploy Azure Databricks workspaces across multiple Azure regions for mission-critical workloads. Configure workspace replication with automated backup of source code, job definitions, and cluster configurations using Databricks Asset Bundles and Azure DevOps or Azure Data Factory pipelines. | Multi-region deployment provides geographic redundancy that maintains data processing capabilities during regional outages or disasters. Automated workspace replication reduces recovery time objectives from hours to minutes by ensuring consistent configurations across regions. This approach supports business continuity requirements and minimizes operational impact during extended regional service disruptions. |
Establish cluster pools with pre-warmed instances using diverse VM sizes within the same family. Configure pool sizes to maintain 20-30% overhead capacity above typical workload requirements. | Pre-warmed cluster pools reduce cluster startup time from 5-10 minutes to under 60 seconds, enabling faster recovery from node failures. Diverse VM sizing within pools ensures cluster provisioning succeeds even when specific instance types face capacity constraints. |
Activate Delta Lake time travel features by configuring automatic table versioning and retention policies. Set retention periods based on recovery requirements, typically 7-30 days for production tables. | Time travel capabilities provide point-in-time data recovery without requiring external backup systems or complex restore procedures. Automatic versioning protects against data corruption and accidental modifications while maintaining complete data lineage for compliance and debugging purposes. This approach eliminates the need for separate backup infrastructure while ensuring rapid recovery from data issues. |
Integrate Azure Databricks with Azure Monitor by enabling diagnostic logs for cluster events, job execution, and data analytics. Configure custom metrics and alerts for cluster health, job failure rates, and resource utilization thresholds. | Centralized monitoring provides unified observability across all Azure Databricks components, enabling proactive issue detection before failures impact production workloads. Custom alerting reduces mean time to resolution by automatically notifying teams when clusters experience performance degradation or job failures exceed acceptable thresholds. |
Deploy serverless SQL warehouses for ad-hoc analytics and reporting workloads requiring consistent availability without cluster management overhead. | Serverless infrastructure eliminates cluster provisioning delays and provides automatic scaling with built-in high availability guarantees. Microsoft manages all infrastructure patching, updates, and failure recovery, reducing operational complexity while ensuring consistent performance. |
Configure Azure Databricks job retry policies with exponential backoff starting at 30 seconds and maximum retry count of 3. Set different retry strategies for transient failures versus configuration errors to avoid unnecessary resource consumption. | Intelligent retry mechanisms automatically recover from transient failures such as network timeouts or temporary resource unavailability without manual intervention. Exponential backoff prevents overwhelming downstream services during outages while distinguishing between recoverable transient issues and permanent configuration problems. This approach reduces operational overhead and improves overall system resilience through automated failure recovery. |
Implement VNet injection for Azure Databricks workspaces to enable custom network routing and private connectivity. Configure network security groups and Azure Firewall rules to control traffic flow and integrate with existing enterprise networking infrastructure. | VNet injection provides network-level redundancy through custom routing options and eliminates dependency on default Azure networking paths. Private connectivity enables integration with on-premises networks and other Azure services while maintaining security isolation. This configuration supports multiple availability zones and custom load balancing strategies that enhance overall network reliability. |
Activate Unity Catalog with automated metastore backup and cross-region metadata synchronization. Configure external metastore locations in separate storage accounts to ensure metadata persistence during workspace failures. | Unity Catalog backup ensures governance policies and data lineage information survive workspace disasters, maintaining compliance and operational continuity. Cross-region synchronization reduces metadata recovery time from hours to minutes while preserving centralized access control policies across all environments. |
Deploy Lakeflow Declarative Pipelines for production data pipelines requiring automatic quality enforcement and fault tolerance. Configure pipeline restart policies and expectation handling to ensure data quality while maintaining processing continuity. | Lakeflow Declarative Pipelines provides declarative pipeline management that automatically handles transient failures, data quality violations, and infrastructure issues without manual intervention. Built-in quality enforcement through expectations prevents corrupted data from propagating downstream while automatic retry capabilities ensure pipeline completion during temporary resource constraints. This managed approach reduces operational overhead while maintaining data integrity standards. |
Create automated workspace backup procedures using Azure REST APIs or Databricks CLI to export source code, job and pipeline configurations, cluster settings, and workspace metadata. Schedule regular backups to Azure storage accounts with cross-region replication enabled. | Comprehensive workspace backups enable complete environment restoration during disaster scenarios, preserving all development work and operational configurations. Automated procedures eliminate human error and ensure backup consistency while cross-region storage replication protects against regional outages. This approach reduces recovery time objectives and maintains business continuity for data teams and their analytical workflows. |
Implement structured streaming with checkpoint locations stored in highly available Azure storage accounts with zone-redundant storage (ZRS). Configure checkpoint intervals between 10-60 seconds based on throughput requirements and failure recovery objectives. | Checkpointing provides exactly-once processing guarantees and enables automatic recovery from cluster failures without data loss or duplicate processing. ZRS storage ensures checkpoint persistence across availability zone failures, maintaining streaming job continuity during infrastructure disruptions. |
Activate automatic cluster restart policies for long-running workloads with appropriate restart timeouts and maximum restart attempts. Configure cluster termination detection and automatic job rescheduling for mission-critical data processing workflows. | Automatic restart policies ensure workload continuity during planned maintenance events and unexpected cluster failures without requiring manual intervention. Intelligent restart logic distinguishes between recoverable failures and permanent issues, preventing infinite restart loops while maintaining service availability for critical data processing operations. |
Configure instance pools with multiple VM families and sizes within the same compute category to provide allocation flexibility during capacity constraints. | Diverse instance type configurations ensure cluster provisioning succeeds even when specific VM sizes experience regional capacity limitations. Mixed VM families within pools provide cost optimization opportunities while maintaining performance characteristics suitable for workload requirements, reducing the risk of provisioning failures during peak demand periods. |
Establish chaos engineering practices by deliberately introducing cluster failures, network partitions, and resource constraints in non-production environments. Automate failure injection using Azure Chaos Studio to validate recovery procedures and identify resilience gaps. | Proactive failure testing validates disaster recovery procedures and automatic recovery capabilities before production incidents occur. Systematic chaos engineering identifies weak points in pipeline dependencies, cluster configurations, and monitoring systems that may not be apparent during normal operations. This approach builds confidence in system resilience while ensuring recovery procedures work as designed during actual outages. |
Security
The purpose of the Security pillar is to provide confidentiality, integrity, and availability guarantees to the workload.
The Security design principles provide a high-level design strategy for achieving those goals by applying approaches to the technical design of Azure Databricks.
Workload design checklist
Start your design strategy based on the design review checklist for Security and identify vulnerabilities and controls to improve the security posture.
Review security baselines: The Azure Databricks security baseline provides procedural guidance and resources for implementing the security recommendations specified in the Microsoft cloud security benchmark.
Integrate a secure development lifecycle (SDL): Implement security code scanning for source code and MLflow model security validation to identify vulnerabilities early in the development lifecycle.
Use infrastructure-as-code (IaC) validation to enforce secure configurations of Azure Databricks resources.
Also, protect the development environment by implementing secure source code management, managing credentials safely within development workflows, and integrating automated security testing into CI/CD pipelines used for data processing and machine learning model deployment.
Provide centralized governance: Add traceability and auditing for data sources through Databricks pipelines. Unity Catalog serves as a centralized metadata catalog that supports data discovery and lineage tracking across workspaces with fine-grained access controls and validation.
Unity Catalog can be integrated with external data sources.
Introduce intentional resource segmentation: Enforce segmentation at different scopes by using separate workspaces and subscriptions. Use separate segments for production, development, and sandbox environments to limit blast radius of potential breaches.
Apply segmentation by sensitivity and function: isolate sensitive data workloads in dedicated workspaces with stricter access controls, and use sandbox environments for exploratory work with limited privileges and no production data access.
Implement secure network access: Azure Databricks data plane resources, like Spark clusters and VMs, are deployed into subnets within your Azure Virtual Network (VNet) through VNet injection. The control plane, managed by the Databricks platform, is isolated from the data plane, preventing unauthorized access. The control plane communicates securely with the data plane to manage the workload, while all data processing remains within your network.
VNet injection gives you full control over configuration, routing, and security through Azure's private networking capabilities. For example, you can use Azure Private Link to secure the connection to the control plane without traversing the public internet. You can use network security groups (NSGs) to control ingress and egress traffic between subnets, and route traffic through Azure Firewall, NAT Gateway, or network virtual appliances for inspection and control. You can even peer the VNet with your on-premises network, if needed.
Implement authorization and authentication mechanisms: Consider identity and access management across both the control and data planes. The Databricks runtime enforces its own security features and access controls during job execution, creating a layered security model. Azure Databricks components, like Unity Catalog and Spark clusters, integrate with Microsoft Entra ID, enabling access management through Azure's built-in RBAC policies. This integration also provides enterprise authentication through single sign-on, multifactor authentication, and conditional access policies.
It's important to understand where your architecture relies on Databricks-native security and where it intersects with Entra ID. This dual-layered approach may require separate identity management and maintenance strategies.
Encrypt data at rest: Azure Databricks integrates with Azure Key Vault to manage encryption keys. This integration supports customer-managed keys (CMK), allowing you to control the lifecycle of your encryption keys, including revocation, auditing, and compliance with security policies.
Protect workload secrets: To run data workflows, there's often a need to store secrets like database connection strings, API keys, and other sensitive information. Azure Databricks natively supports secret scopes to store secrets within a workspace that can be securely accessed from source code and jobs.
Secret scopes are integrated with Azure Key Vault allowing you to reference secrets and manage them centrally. Enterprise teams often require Key Vault-backed secret scopes for compliance, security, and policy enforcement.
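As an illustration, a notebook or job can read credentials from a Key Vault-backed scope at run time; the scope, key, and connection details below are placeholders, and `dbutils` is the utility object Databricks provides in notebooks and jobs.

```python
# A sketch of reading a credential from a Key Vault-backed secret scope;
# scope, key, and connection details are placeholders.
jdbc_password = dbutils.secrets.get(scope="kv-backed-scope", key="sql-db-password")

orders = (
    spark.read.format("jdbc")
    .option("url", "jdbc:sqlserver://example.database.windows.net:1433;database=sales")
    .option("user", "etl_service")
    .option("password", jdbc_password)  # secret values are redacted in notebook output
    .option("dbtable", "dbo.orders")
    .load()
)
```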
Implement security monitoring: Azure Databricks natively supports audit logging that gives you visibility into admin activities in a workspace, such as login attempts, notebook access, and permission changes. Also, Unity Catalog access logs track who accessed what data, when, and how.
With Azure Databricks, those logs can be viewed in Azure Monitor.
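For example, the Unity Catalog audit system table can be queried directly for recent privileged activity. The query below is a sketch: the action names are illustrative, and the column names should be verified against the system table schema in your workspace.

```python
# A sketch that surfaces recent privileged activity from the Unity Catalog audit
# system table. The action names are illustrative examples.
recent_admin_actions = spark.sql("""
    SELECT event_time,
           user_identity.email AS actor,
           service_name,
           action_name
    FROM system.access.audit
    WHERE event_date >= date_sub(current_date(), 7)
      AND action_name IN ('updatePermissions', 'createClusterPolicy')
    ORDER BY event_time DESC
""")
display(recent_admin_actions)
```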
The Security Analysis Tool (SAT) is fully compatible with Azure Databricks workspaces.
Recommendations
Recommendation | Benefit |
---|---|
Deploy Azure Databricks workspaces using VNet injection to establish network isolation and enable integration with corporate networking infrastructure. Configure custom network security groups, route tables, and subnet delegation to control traffic flow and enforce enterprise security policies. | VNet injection eliminates public internet exposure for cluster nodes and provides granular network control through custom routing and firewall rules. Integration with on-premises networks enables secure hybrid connectivity while maintaining compliance with corporate security standards. |
Configure Microsoft Entra ID single sign-on integration with multifactor authentication and conditional access policies for workspace access. Enable automatic user provisioning and group synchronization to streamline identity management and enforce enterprise authentication standards. | SSO integration eliminates password-related security risks while providing centralized identity management through enterprise authentication systems. Conditional access policies add context-aware security controls that evaluate user ___location, device compliance, and risk factors before granting workspace access. This layered approach significantly reduces authentication-related security vulnerabilities while improving user experience. |
Deploy Unity Catalog with centralized metastore configuration to establish unified data governance across all Azure Databricks workspaces. Configure hierarchical permission structures using catalogs, schemas, and table-level access controls with regular permission audits. | Unity Catalog provides centralized data governance that eliminates inconsistent access controls and reduces security gaps across multiple workspaces. Fine-grained permissions enable least-privilege access while audit logging supports compliance requirements and security investigations. |
Activate customer-managed keys for workspace storage encryption using Azure Key Vault integration with automatic key rotation policies. Configure separate encryption keys for different environments and implement proper access controls for key management operations. | Customer-managed keys provide complete control over encryption key lifecycle management and support regulatory compliance requirements for data sovereignty. Key separation across environments reduces security exposure while automatic rotation policies maintain cryptographic hygiene without operational overhead. This approach enables meeting stringent compliance requirements such as FIPS 140-2 Level 3 or Common Criteria standards. |
Establish Azure Key Vault-backed secret scopes for centralized credential management with role-based access controls. Implement secret rotation policies and avoid storing credentials in source code or cluster configurations. | Key Vault integration centralizes secrets management while providing enterprise-grade security controls including access logging and automatic rotation capabilities. This approach eliminates credential exposure in code and configuration files while enabling secure access to external systems and databases. |
Create IP access lists with allow-only policies for trusted corporate networks and deny rules for known threat sources. Configure different access policies for production and development environments based on security requirements. | IP-based access controls provide an additional security layer that prevents unauthorized access from untrusted networks, reducing the attack surface significantly. Environment-specific policies enable appropriate security levels while supporting compliance requirements for network-based access restrictions. |
Configure all clusters to use secure cluster connectivity (no public IP) and disable SSH access to cluster nodes. Implement cluster access modes and runtime security features to prevent unauthorized code execution. | Secure cluster connectivity eliminates public internet exposure for compute nodes while preventing direct SSH access that could compromise cluster security. Runtime security features provide additional protection against malicious code execution and lateral movement attacks within the cluster environment. |
Deploy Azure Private Link endpoints for control plane access to eliminate public internet transit for workspace connectivity. Configure private DNS zones and ensure proper network routing for seamless private connectivity integration. | Private Link eliminates public internet exposure for workspace access while ensuring all management traffic remains within Azure's backbone network. Private connectivity provides enhanced security for sensitive workloads and supports compliance requirements that mandate private network access. This configuration significantly reduces exposure to internet-based threats while maintaining full workspace functionality. |
Activate Enhanced Security and Compliance add-on for regulated environments requiring HIPAA, PCI-DSS, or SOC 2 compliance. Configure automatic security updates and enable compliance security profiles for specific regulatory frameworks. | Enhanced Security and Compliance provides specialized security controls including compliance security profiles, automatic security updates, and enhanced monitoring capabilities. This managed approach ensures continuous compliance with regulatory requirements while reducing operational overhead for security management. Automatic updates maintain security posture without disrupting business operations or requiring manual intervention. |
Enable audit logging through Unity Catalog system tables and workspace audit logs with automated analysis and alerting. Configure log retention policies and integrate with SIEM systems for centralized security monitoring and incident response. | Audit logging provides complete visibility into user activities, data access patterns, and system changes for security monitoring and compliance reporting. Integration with SIEM systems enables automated threat detection and rapid incident response capabilities through centralized log analysis. |
Configure OAuth 2.0 machine-to-machine authentication for API access and automated workloads instead of personal access tokens. Implement proper token scoping and lifecycle management to ensure secure programmatic access. | OAuth authentication provides enhanced security through fine-grained permission scoping and improved token lifecycle management compared to personal access tokens. This approach enables secure automation while maintaining proper access controls and audit trails for programmatic workspace interactions. |
Implement workspace isolation strategies by deploying separate workspaces for different environments and establishing network segmentation controls. Configure environment-specific access policies and data boundaries to prevent cross-environment data access. | Workspace isolation prevents data leakage between environments while supporting compliance requirements for data segregation and access controls. This architecture reduces blast radius during security incidents and enables environment-specific security policies that match risk profiles. |
Deploy the Security Analysis Tool (SAT) for continuous security configuration assessment with automated remediation recommendations. Schedule regular security scans and integrate findings into CI/CD pipelines for proactive security management. | Automated security assessment provides continuous monitoring of workspace configurations against security best practices and compliance requirements. Integration with development workflows enables shift-left security practices that identify and address misconfigurations before they reach production environments. This proactive approach significantly reduces security risks while minimizing remediation costs and operational disruption. |
Configure service principal authentication for automated workflows and CI/CD pipelines with minimal required permissions. Implement credential management through Azure Key Vault and enable certificate-based authentication for enhanced security. | Service principal authentication eliminates dependencies on user credentials for automated processes while providing proper access controls and audit trails. Certificate-based authentication offers enhanced security compared to client secrets while supporting proper credential lifecycle management for production automation scenarios. |
Establish network egress controls through VNet injection with custom route tables and network security groups to monitor and restrict data transfer. Configure Azure Firewall or network virtual appliances to inspect and control outbound traffic patterns. | Network egress controls prevent unauthorized data exfiltration while providing visibility into data movement patterns through traffic monitoring and analysis. Custom routing and firewall inspection enable detection of unusual data transfer activities that could indicate security breaches or insider threats. |
Activate Microsoft Entra ID credential passthrough for Azure Data Lake Storage access to eliminate service principal dependencies. Configure user-specific access controls and ensure proper permission inheritance from Unity Catalog governance policies. | Credential passthrough eliminates the complexity of managing service principals for data access while providing seamless integration with enterprise identity systems. User-specific access controls ensure data access permissions align with organizational policies and job functions. This approach simplifies credential management while maintaining strong security controls and audit capabilities for data lake operations. |
Implement cluster hardening practices including SSH restriction, custom image scanning, and runtime security controls. Use approved base images and prevent unauthorized software installation through cluster policies and init scripts validation. | Cluster hardening reduces attack surface through SSH restrictions and prevents unauthorized software installation that could compromise cluster security. Custom image scanning ensures base images meet security standards while runtime controls prevent malicious code execution and lateral movement within the cluster environment. |
Implement automated security scanning for source code and code artifacts through CI/CD pipeline integration with static analysis tools and vulnerability scanners. | Automated security scanning enables early detection of security vulnerabilities in analytical code and infrastructure configurations before they reach production environments. |
Cost Optimization
The purpose of the Cost Optimization pillar is to manage costs to maximize the value delivered.
The Cost Optimization design principles provide a high-level design strategy for achieving those goals and making tradeoffs within the Azure Databricks architecture.
Workload design checklist
Start your design strategy based on the design review checklist for Cost Optimization. Define policies and procedures to continuously monitor and optimize costs while meeting your performance requirements.
Determine your cost drivers: Theoretical capacity planning often leads to over-provisioning and wasted spend; conversely, under-investing in resources is risky.
Estimate costs and seek optimization opportunities based on workload behavior. Run pilot workloads, benchmark cluster performance, and analyze autoscaling behavior. Real usage data can help to right-size the cluster, set scaling rules, and allocate the right resources.
Set clear accountability for spend: When using multiple Azure Databricks workspaces, it's important to track which teams or projects are responsible for specific costs. This requires tagging resources (like clusters or jobs) with project or cost center information, using chargeback models to assign usage-based costs to teams, and setting budget controls to monitor and limit spending.
Choose the appropriate tiers: Use the Standard tier for development and basic production workloads. Use the Premium tier for production workloads because it provides security features, such as Unity Catalog, that are central to most analytics workloads.
Choose between serverless compute and VMs: With serverless, you only pay for what you use (consumption-based billing). Serverless is recommended for bursty workloads or on-demand jobs because it scales automatically and reduces operational overhead. You don't need to manage infrastructure or pay for idle time.
For predictable or steady usage, opt for VM-based clusters. This gives you more control, but requires operational management and tuning to avoid overprovisioning. If you are sure about long-term usage, use reserved capacity. Databricks Commit Units (DBCU) are pre-paid usage contracts that give discounts in exchange for usage commitments.
Make sure you analyze historical trends and project future demands to make the best choice.
Optimize cluster utilization: Reduce Azure Databricks costs by automatically scaling and shutting down clusters when they're not needed.
Evaluate if your budget allows for cluster pools. While they can reduce cluster start times, they are idle resources that accrue infrastructure costs even while not in use.
Save costs in Dev/Test environments by using scaled down configurations. Encourage cluster sharing among teams to avoid spinning up unnecessary resources. Enforce auto-termination policies to deprovision idle clusters.
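One way to enforce these guardrails is a cluster policy. The sketch below uses the Databricks SDK for Python; the policy name, ranges, allowed VM sizes, and tag values are assumptions to replace with your own standards.

```python
# A sketch of a cluster policy that enforces auto-termination, caps cluster size,
# restricts VM choices, and requires a cost tag. Limits and values are illustrative.
# Assumes the databricks-sdk package and workspace admin permissions.
import json

from databricks.sdk import WorkspaceClient

w = WorkspaceClient()

policy_definition = {
    "autotermination_minutes": {"type": "range", "minValue": 10, "maxValue": 60, "defaultValue": 30},
    "autoscale.max_workers": {"type": "range", "maxValue": 8},
    "node_type_id": {"type": "allowlist", "values": ["Standard_D4ds_v5", "Standard_D8ds_v5"]},
    "custom_tags.cost_center": {"type": "fixed", "value": "analytics-team"},
}

policy = w.cluster_policies.create(
    name="dev-cost-guardrails",
    definition=json.dumps(policy_definition),
)
print(f"Created policy {policy.policy_id}")
```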
Optimize compute for each workload: Different workloads require different compute configurations. Some jobs need more memory or processing power, while others are lightweight and accrue lower cost. Use the right cluster for the right job.
Instead of using the same large cluster for everything, assign the right cluster to each job. Azure Databricks lets you tailor compute resources to match each workload, helping you reduce costs and improve performance.
Optimize storage costs: Storing large volumes of data can get expensive. Try to reduce cost by using Delta Lake capabilities. For example, data compaction allows you to merge many small files into fewer large files to reduce storage overhead and speed up queries.
Be diligent about managing old data. You can use retention policies to remove outdated versions. In addition, you can move older, infrequently accessed data to cheaper storage tiers. If applicable, automate lifecycle policies, such as time-based deletion or tiering rules, to archive or delete data as it becomes less useful, keeping storage lean.
Different storage formats and compression settings can also reduce the amount of space used.
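As a sketch, routine Delta Lake maintenance might look like the following; the table name, clustering column, retention window, and compression codec are placeholder choices.

```python
# A sketch of routine Delta Lake storage maintenance; names and values are placeholders.

# Compact small files and co-locate rows by a common filter column.
spark.sql("OPTIMIZE main.sales.orders ZORDER BY (order_date)")

# Remove data files that are no longer referenced by recent table versions.
spark.sql("VACUUM main.sales.orders RETAIN 168 HOURS")  # 7 days

# Prefer a compact columnar codec for new writes in this session.
spark.conf.set("spark.sql.parquet.compression.codec", "zstd")
```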
Optimize data processing techniques: There are costs associated with compute, networking, and querying when processing large volumes of data. To reduce costs, use a combination of strategies for query tuning, data format selection, and Delta Lake and code optimizations:
Minimize data movement. Evaluate the data processing pipeline to reduce unnecessary data movement and bandwidth costs. Implement incremental processing to avoid reprocessing unchanged data, and use caching to store frequently accessed data closer to compute resources. Reduce overhead when connectors access or integrate with external data sources.
Use efficient file formats. Formats like Parquet and compression algorithms native to Databricks, like Zstandard, lead to faster read times and lower data costs because less data is moved.
Make your queries efficient. Avoid full-table scans to reduce compute costs. Instead, partition your Delta tables based on common filter columns. Use native features to reduce compute time, such as the Catalyst Optimizer and Adaptive Query Execution (AQE), which dynamically optimize joins and partitioning at runtime. The Databricks Photon engine accelerates query execution. A sketch after this list illustrates these techniques.
Apply code optimization design patterns like Competing Consumers, Queue-Based Load Leveling, and Compute Resource Consolidation within Azure Databricks environments.
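The following sketch combines a few of these techniques: partition-pruned reads, Adaptive Query Execution, and a narrow aggregation. Table and column names are placeholders, and Photon is enabled on the cluster or SQL warehouse configuration rather than in code.

```python
# A sketch of query-efficiency techniques; names are placeholders.
spark.conf.set("spark.sql.adaptive.enabled", "true")  # on by default in recent runtimes

daily_orders = (
    spark.read.table("main.sales.orders")
    .where("order_date = date_sub(current_date(), 1)")  # prunes partitions instead of a full scan
    .select("region", "amount")                         # read only the columns you need
)

summary = daily_orders.groupBy("region").sum("amount")
summary.write.mode("overwrite").saveAsTable("main.sales.daily_region_summary")
```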
Monitor consumption: A Databricks Unit (DBU) is a normalized unit of compute consumption that Azure Databricks uses for billing. Azure Databricks gives you detailed information that provides visibility into usage metrics about clusters, runtime hours, and other components. Use that data for budget planning and controlling costs.
Have automated spending guardrails: To avoid overspending and ensure efficient use of resources, enforce policies that prevent or regulate the use of resources. For example, restrict the types of clusters that can be created, and limit cluster size or lifetime. Also, set alerts to get notified when resource usage nears the allowed budget boundaries. For example, if a job suddenly starts consuming 10× more DBUs, a script can alert the admin or shut it down.
Take advantage of Databricks system tables to track cluster usage and DBU consumption. You can query the table to detect cost anomalies.
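For example, a query against the billing system table can show DBU consumption trends. The query below is a sketch; confirm the column names against the published system table schema.

```python
# A sketch that tracks DBU consumption by workspace and SKU from the billing system table.
dbu_trend = spark.sql("""
    SELECT usage_date,
           workspace_id,
           sku_name,
           SUM(usage_quantity) AS dbus
    FROM system.billing.usage
    WHERE usage_date >= date_sub(current_date(), 30)
    GROUP BY usage_date, workspace_id, sku_name
    ORDER BY usage_date DESC, dbus DESC
""")
display(dbu_trend)
```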
Recommendations
Recommendation | Benefit |
---|---|
Deploy job clusters for scheduled workloads instead of all-purpose clusters to eliminate idle compute costs and configure automatic termination upon job completion. | Job clusters reduce costs by up to 50% through automatic termination after job completion, optimizing DBU consumption by precisely matching compute time to actual processing requirements. |
Enable cluster autoscaling with carefully configured minimum and maximum node limits based on workload analysis to handle baseline load and peak demand requirements. Configure scaling policies to respond quickly to workload changes while avoiding scaling oscillations that can unnecessarily increase costs. | Autoscaling reduces over-provisioning costs by 20-40% compared to fixed-size clusters while maintaining performance levels during peak periods and automatically reducing resources during low-demand periods. |
Configure auto-termination for all interactive clusters with appropriate timeout periods based on usage patterns, typically 30-60 minutes for development environments. | Auto-termination reduces interactive cluster costs by 60-80% without impacting user productivity, eliminating costs from clusters running overnight or over weekends. |
Adopt serverless SQL warehouses for interactive SQL workloads to eliminate infrastructure management overhead and optimize costs through consumption-based billing. Configure appropriate sizing based on concurrency requirements and enable auto-stop functionality to minimize costs during inactive periods. Migrate from classic SQL endpoints to serverless SQL warehouses for better performance and cost efficiency, leveraging built-in Photon acceleration capabilities. | Serverless SQL warehouses reduce SQL workload expenses by 30-50% compared to always-on clusters through usage-based billing that eliminates idle time costs. Built-in Photon acceleration delivers up to 12x performance improvements while providing predictable per-query costs for interactive analytics scenarios. |
Implement cluster pools for frequently used configurations to reduce startup times and optimize resource allocation based on usage patterns and demand forecasting. | Cluster pools reduce startup time from minutes to seconds while eliminating DBU charges for idle pool instances, providing cost-effective resource provisioning for development teams. |
Use Delta Lake optimization features including OPTIMIZE commands, Z-ORDER clustering, and VACUUM operations to reduce storage costs and improve query performance. Schedule regular optimization jobs to compact small files, implement data retention policies, and configure compression settings based on data access patterns. | Delta Lake optimization reduces storage costs by 40-60% through data compaction and efficient compression while improving query performance by reducing file scan requirements. |
Implement compute policies to enforce cost-effective configurations across all workspaces and teams by restricting instance types and enforcing auto-termination settings. Create different policy templates for development, staging, and production environments with varying levels of restrictions and appropriate tags for cost attribution. | Compute policies reduce average cluster costs by 25-35% by preventing overprovisioning and ensuring adherence to cost optimization standards while maintaining governance. |
Monitor costs using Databricks system tables and Azure Cost Management integration to gain visibility into DBU consumption patterns and spending trends. Implement automated cost reporting dashboards that track usage by workspace, user, job, and cluster type while configuring cost alerts for proactive management. Use Unity Catalog system tables to analyze detailed usage patterns and create chargeback models for different teams and projects based on actual resource consumption. | Comprehensive cost monitoring provides visibility into DBU consumption patterns and enables accurate cost attribution through detailed usage analytics and tagging strategies. Integration with Azure Cost Management enables organization-wide cost governance and helps establish accountability across teams, leading to more responsible resource usage patterns. |
Purchase Databricks reserved capacity through Databricks Commit Units (DBCU) for predictable workloads with stable usage patterns and optimal commitment terms. | Reserved capacity achieves 20-40% cost savings through DBCU compared to pay-as-you-go pricing while providing cost predictability over 1-3 year terms for stable production workloads. |
Optimize workload-specific compute configurations by selecting appropriate compute types for different use cases such as job clusters for ETL pipelines and GPU instances for ML training. Match instance types and cluster configurations to specific workload requirements rather than using generic configurations across all scenarios. | Workload-specific optimization reduces costs by 30-50% compared to one-size-fits-all approaches by eliminating overprovisioning and leveraging specialized compute types optimized for specific use cases. |
Implement automated data lifecycle policies with scheduled cleanup operations including VACUUM commands, log file retention, and checkpoint management based on business requirements. | Automated lifecycle management reduces storage costs by 50-70% by systematically removing unnecessary data versions, logs, and temporary files while preventing storage bloat over time. |
Use Standard tier for development and testing environments while applying Premium tier only for production workloads that require advanced security features and compliance certifications. | Strategic tier selection optimizes licensing costs by up to 30% by using Standard tier for non-production workloads where advanced security features aren't required. Premium tier features like RBAC and audit logging are applied only where business requirements and security policies justify the additional cost investment. |
Implement serverless jobs for variable and intermittent workloads that have unpredictable scheduling patterns or resource requirements for ad-hoc analytics and experimental workloads. Configure serverless compute for batch processing jobs where usage patterns are difficult to predict and leverage automatic optimization capabilities. Migrate appropriate workloads from traditional clusters to serverless compute based on usage analysis and cost-benefit evaluation to optimize resource utilization. | Serverless jobs eliminate idle time costs and provide automatic optimization for variable resource requirements, reducing costs by 40-60% for unpredictable workloads. The consumption-based billing model ensures you pay only for actual compute time used, making it ideal for development environments and sporadic production workloads with automatic resource optimization. |
Configure cost alerts and budgets through Azure Cost Management and Databricks usage monitoring to enable proactive cost management with multiple alert thresholds. Set up escalation procedures for different stakeholder groups and implement automated responses for critical cost overruns with regular budget reviews. | Proactive cost monitoring enables early detection of cost anomalies and budget overruns, preventing surprise expenses and allowing timely intervention before costs impact budgets significantly. |
Optimize data formats and enable Photon acceleration to reduce compute time through efficient data processing with columnar storage formats and compression algorithms. Implement partitioning strategies that minimize data scanning requirements and enable Photon acceleration for supported workloads to leverage vectorized query execution. | Data format optimization and Photon acceleration reduce compute time and costs by 30-50% through columnar storage optimizations and vectorized query execution capabilities. These optimizations compound over time as data volumes grow, providing increasing cost benefits for analytical workloads and complex data processing pipelines without requiring architectural changes. |
Operational Excellence
Operational Excellence primarily focuses on procedures for development practices, observability, and release management. The Operational Excellence design principles provide a high-level design strategy for achieving those goals for the operational requirements of the workload.
Workload design checklist
Start your design strategy based on the design review checklist for Operational Excellence for defining processes for observability, testing, and deployment related to Azure Databricks.
Collect monitoring data: For your Azure Databricks workload, focus on tracking key areas like cluster health, resource usage, jobs and pipelines, data quality, and access activity. Use these metrics to gain insights that confirm the system is delivering functionality at the expected performance. They can also be used to audit how data and resources are accessed and used, and to enforce governance.
Monitor the cluster: When monitoring Azure Databricks clusters, focus on indicators that reflect performance and efficiency. Track overall cluster health and observe how resources like CPU, memory, and disk are being used across nodes.
Monitor jobs and pipelines: Capture metrics that reflect execution flow. This includes tracking job success and failure rates, and run durations. Also, gather information about how jobs are triggered to clarify execution context.
Databricks system tables provide a native way of capturing job status, dependency chains, and throughput.
Monitor data source connectivity: Monitor integrations and dependencies with external systems. This includes capturing data source connectivity status, tracking API dependencies, and observing service principal authentication behavior. Unity Catalog can be used to manage and monitor external locations, helping identify potential access or configuration issues.
Monitor data quality: Collect signals that validate both the integrity and freshness of your data. This includes monitoring for schema evolution issues using tools like Auto Loader, and implementing rules that do completeness checks, null value detection, and anomaly identification. You can use Lakeflow Declarative Pipelines to enforce built-in quality constraints during data processing.
Additionally, capturing data lineage through Unity Catalog helps trace how data flows and transforms across systems, providing transparency and accountability in your pipelines.
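A minimal Lakeflow Declarative Pipelines (Delta Live Tables) example of the quality expectations mentioned above is sketched below; the dataset names and rules are assumptions.

```python
# A sketch of a Lakeflow Declarative Pipelines (Delta Live Tables) dataset with
# quality expectations; dataset names and rules are placeholders.
import dlt
from pyspark.sql.functions import col

@dlt.table(comment="Orders that passed basic completeness checks")
@dlt.expect_or_drop("non_null_order_id", "order_id IS NOT NULL")  # drop rows that violate the rule
@dlt.expect("recent_order_date", "order_date >= '2020-01-01'")    # record violations but keep the rows
def clean_orders():
    return (
        dlt.read_stream("raw_orders")
        .withColumn("amount", col("amount").cast("decimal(18,2)"))
    )
```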
Azure Databricks' built-in monitoring tools are integrated with Azure Monitor.
Set up automated and repeatable deployment assets: Use Infrastructure as Code (IaC) to define and manage Azure Databricks resources.
Automate provisioning of workspaces, including region selection, networking, and access control, to ensure consistency across environments. Use cluster templates to standardize compute configurations, reducing the risk of misconfiguration and improving cost predictability. Also define jobs and pipelines as code, using formats like JSON definitions or ARM templates, so they're version-controlled and reproducible.
Use Databricks Asset Bundles to version control notebook source code, job configurations, pipeline definitions, and infrastructure settings in Git repositories with proper branching strategies and rollback procedures.
Automate deployments: Use CI/CD pipelines in Azure Databricks to automate the deployment of pipelines, job configurations, cluster settings, and Unity Catalog assets. Instead of manually pushing changes, consider tools like Databricks Repos for version control, Azure DevOps or GitHub Actions for pipeline automation, and Databricks Asset Bundles for packaging code and configurations.
Automate routine tasks: Common automation includes managing cluster lifecycles (like scheduled start/stop), cleaning up logs, validating pipeline health. By integrating with Azure tools like Logic Apps or Functions, teams can build self-healing workflows that automatically respond to issues, such as restarting failed jobs or scaling clusters. This kind of automation is key to maintaining reliable, efficient Azure Databricks operations as workloads grow.
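As one hedged example, a small automation, run from an Azure Function or a scheduled job, could repair the latest failed run of a job by using the Databricks SDK for Python. The job ID is a placeholder, and the method names reflect the SDK's Jobs API; verify them against the SDK version you use.

```python
# A sketch of a self-healing check that repairs the most recent failed run of a job.
from databricks.sdk import WorkspaceClient
from databricks.sdk.service.jobs import RunResultState

w = WorkspaceClient()
JOB_ID = 123456789  # placeholder job ID

for run in w.jobs.list_runs(job_id=JOB_ID, completed_only=True, limit=5):
    if run.state and run.state.result_state == RunResultState.FAILED:
        # Re-run only the failed tasks instead of restarting the whole job.
        w.jobs.repair_run(run_id=run.run_id, rerun_all_failed_tasks=True)
        print(f"Submitted repair for run {run.run_id}")
        break
```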
Have strong testing practices: Azure Databricks-specific strategies include unit testing for notebook code, integration testing for data pipelines, validation of Lakeflow Declarative Pipelines logic, permission testing with Unity Catalog, and verifying infrastructure deployments. These practices help catch issues early and reduce production incidents.
Develop operational runbooks to handle incidents: Operational runbooks offer structured, step-by-step guidance for handling common Azure Databricks scenarios. These runbooks include diagnostic commands, log locations, escalation contacts, and recovery procedures with estimated resolution times, enabling consistent and rapid incident response across teams.
Develop backup and recovery procedures: Backup and recovery procedures ensure business continuity through protection of workspace configurations, analytics source code, job definitions, and data assets with automated backup schedules and cross-region replication that meet recovery time and recovery point objectives.
Implement team collaboration and knowledge management: Team collaboration practices optimize Azure Databricks productivity through shared workspace organization, notebook collaboration features, and documentation standards that facilitate knowledge transfer and reduce project duplication across development teams.
Recommendations
Recommendation | Benefit |
---|---|
Configure diagnostic settings for Azure Databricks workspaces to send platform logs, audit logs, and cluster events to Azure Monitor Log Analytics workspace. Enable all available log categories including workspace, clusters, accounts, jobs, notebook, and Unity Catalog audit logs for observability coverage. | Centralizes all Azure Databricks telemetry in Log Analytics, enabling advanced KQL queries for troubleshooting, automated alerting on critical events, and compliance reporting. Provides unified visibility across workspace activities, cluster performance, and data access patterns for proactive operational management. |
Deploy Azure Databricks workspaces using Azure Resource Manager templates or Bicep files with parameterized configurations for consistent environment provisioning. Include workspace settings, network configurations, Unity Catalog enablement, and security policies in the template definitions to ensure standardized deployments across development, testing, and production environments. | Eliminates configuration drift between environments and reduces deployment errors through consistent, version-controlled infrastructure definitions. Accelerates environment provisioning by 70% compared to manual deployment processes and enables rapid recovery through automated workspace recreation during disaster scenarios. |
Integrate Azure Databricks notebooks and other source code with Git repositories using Databricks Repos for source control and collaborative development. Configure automated CI/CD pipelines through Azure DevOps or GitHub Actions to deploy source code changes, job and pipeline configurations, and cluster templates across environments with proper testing and approval workflows. | Enables collaborative development with version history, branch-based workflows, and merge conflict resolution for code. Reduces deployment risks through automated testing and staged releases while maintaining complete audit trails of all production changes. |
Deploy automated cluster rightsizing solutions using Azure Databricks cluster metrics and Azure Monitor data to analyze utilization patterns and recommend optimal instance types and sizes. Configure autoscaling policies based on CPU, memory, and job queue metrics to automatically adjust cluster capacity according to workload demands. | Optimizes infrastructure costs by automatically matching cluster resources to actual workload requirements, preventing over-provisioning waste. Maintains performance SLAs while reducing compute costs by 30-50% through intelligent resource allocation and automated scaling decisions. Eliminates manual monitoring overhead and enables proactive capacity management through data-driven insights about resource usage patterns and optimization opportunities. |
Activate Unity Catalog audit logging to track all data access operations, permission changes, and governance activities within Azure Databricks workspaces. Configure log retention policies and integrate with Azure Sentinel or third-party SIEM solutions for automated security monitoring and compliance reporting. | Provides complete audit trails for data access patterns, permission modifications, and governance operations required for regulatory compliance frameworks like SOX, HIPAA, and GDPR. Enables automated threat detection and investigation of suspicious data access behaviors through centralized security monitoring. |
Implement Lakeflow Declarative Pipelines with data quality expectations and monitoring rules to automate data validation and pipeline quality assurance. Configure expectation thresholds, quarantine policies, and automated alerting for data quality violations to maintain pipeline reliability and data integrity. | Automates data quality validation with declarative rules that prevent bad data from propagating downstream, reducing manual validation effort by 80%. Provides transparent data quality metrics and automated remediation workflows that maintain pipeline reliability and business confidence in data accuracy. |
Establish automated backup procedures for Azure Databricks workspace artifacts using the Databricks REST API and Azure Automation runbooks. Schedule regular backups of analytics source content, job definitions, cluster configurations, and workspace settings with versioned storage in Azure Storage accounts and cross-region replication. | Ensures rapid recovery from accidental deletions, configuration changes, or workspace corruption with automated restoration capabilities. Maintains business continuity through versioned backups and reduces recovery time objectives from days to hours through standardized backup and restore procedures. |
Create standardized workspace folder hierarchies with naming conventions that include project codes, environment indicators, and team ownership. Implement shared folders for common libraries, templates, and documentation with appropriate access controls to facilitate knowledge sharing and collaboration. | Improves project discoverability and reduces onboarding time for new team members through consistent workspace organization. Accelerates development through shared code libraries and standardized project structures that eliminate duplication of effort across teams. |
Configure Azure Cost Management with a resource tagging strategy for Azure Databricks workspaces, clusters, and compute resources. Implement cost alerts, budget thresholds, and automated reporting to track spending across projects, teams, and environments with chargeback capabilities and optimization recommendations. | Provides granular cost visibility and accountability across organizational units through detailed spend analysis and automated budget monitoring. Enables proactive cost optimization through spending alerts and usage pattern insights that prevent budget overruns and identify optimization opportunities. Supports accurate cost allocation and chargeback processes with detailed resource utilization reporting and automated cost center assignment based on resource tags. |
Configure service principal authentication for Azure Databricks integrations with external systems, data sources, and Azure services. Implement managed identity where possible and establish credential rotation policies with Azure Key Vault integration for secure, automated authentication management. | Eliminates shared credential security risks and enables automated authentication without manual intervention. Provides centralized credential management with audit trails and supports fine-grained access control policies that align with least-privilege security principles. |
Establish cluster lifecycle policies with automated termination schedules, idle timeout configurations, and resource usage limits to enforce organizational governance standards. Configure policy-based cluster creation restrictions, instance type limitations, and maximum runtime controls to prevent resource waste and ensure compliance. | Reduces compute costs by 40-60% through automated cluster lifecycle management and prevents resource waste from idle or forgotten clusters. Enforces organizational policies consistently across all users and teams while maintaining operational flexibility for legitimate use cases. |
Deploy Azure Monitor alert rules for critical Azure Databricks operations including cluster failures, job execution errors, workspace capacity limits, and Unity Catalog access violations. Configure automated notification workflows with escalation procedures and integration with incident management systems like ServiceNow or Jira. | Enables proactive incident response through real-time notifications of critical issues before they impact business operations. Reduces mean time to detection from hours to minutes and supports automated escalation procedures that ensure appropriate team members are notified based on severity levels. |
Implement environment-specific workspace configurations with role-based access control policies that enforce separation between development, testing, and production environments. Configure Unity Catalog governance rules, network security groups, and data access permissions appropriate for each environment's security and compliance requirements. | Prevents unauthorized access to production data and reduces risk of accidental changes in critical environments through enforced security boundaries. Maintains regulatory compliance by ensuring development activities cannot impact production systems and data integrity is preserved across environment boundaries. |
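The data quality expectations mentioned in the Lakeflow Declarative Pipelines row above can be declared directly in Python. The following is a minimal sketch, assuming a pipeline notebook that uses the `dlt` module; the table names, source, and expectation rules are illustrative placeholders, not a prescribed design.

```python
# Runs inside a Lakeflow Declarative Pipelines (Delta Live Tables) notebook.
import dlt
from pyspark.sql import functions as F

@dlt.table(comment="Orders cleaned by declarative data quality expectations")
@dlt.expect_or_drop("valid_order_id", "order_id IS NOT NULL")   # drop violating rows
@dlt.expect("recent_order", "order_date >= '2020-01-01'")       # warn only, keep rows
def cleaned_orders():
    # 'raw_orders' is a hypothetical upstream table registered for this pipeline.
    return (
        dlt.read_stream("raw_orders")
        .withColumn("ingested_at", F.current_timestamp())
    )
```

Expectation results surface in the pipeline event log, which can feed the automated alerting and quarantine workflows described in the recommendation.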
Performance Efficiency
Performance Efficiency is about maintaining the user experience even when load increases, by managing capacity. The strategy includes scaling resources, identifying and optimizing potential bottlenecks, and tuning for peak performance.
The Performance Efficiency design principles provide a high-level design strategy for achieving those capacity goals against the expected usage.
Workload design checklist
Start your design strategy based on the design review checklist for Performance Efficiency. Define a baseline that's based on key performance indicators for Azure Databricks.
Do capacity planning: Analyze workloads and monitor resource usage to determine how much compute and storage your workloads actually need. Use that insight to right-size clusters, optimize job schedules, and forecast storage growth, so you avoid under-provisioning, which leads to resource constraints.
Choose optimal compute configurations for workload characteristics: Evaluate serverless options, which can offer better automatic scaling and faster startup times, and compare them with traditional clusters to choose the best fit.
For clusters, optimize configurations, including instance types, sizes, and scaling settings, based on data volume and processing patterns. Analyze trade-offs between instance families for specific use cases. For example, evaluate memory-optimized versus compute-optimized instances, and local SSD versus standard storage options, to match performance requirements.
Spark clusters can run different types of workloads, each of which requires its own performance tuning. In general, you want faster job execution while avoiding compute bottlenecks. Fine-tune settings like executor memory, parallelism, and garbage collection to achieve those goals.
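As a concrete illustration of those knobs, the following sketch shows a hypothetical cluster specification for the Databricks Clusters API with explicit Spark settings for executor memory, cores, shuffle parallelism, and garbage collection. The runtime version, node type, and values are placeholders; on Azure Databricks, per-executor memory and core counts are normally derived from the selected node type, so treat the explicit overrides as an illustration of the underlying Spark settings rather than a default practice.

```python
# Hypothetical cluster spec for the Databricks Clusters API (POST /api/2.0/clusters/create).
# All values are illustrative; size them from your own workload metrics.
cluster_spec = {
    "cluster_name": "etl-tuned",
    "spark_version": "15.4.x-scala2.12",        # placeholder runtime version
    "node_type_id": "Standard_E8ds_v5",         # placeholder memory-optimized VM size
    "num_workers": 4,
    "spark_conf": {
        "spark.executor.memory": "8g",                      # per-executor heap
        "spark.executor.cores": "4",                        # parallel tasks per executor
        "spark.sql.shuffle.partitions": "200",              # shuffle parallelism
        "spark.executor.extraJavaOptions": "-XX:+UseG1GC",  # GC suited to large heaps
    },
}
```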
See Recommendations for selecting the right services for additional guidance on choosing services for your workload.
Prioritize resource allocation for critical workloads: Separate and prioritize workloads running at the same time. Use features like resource pools, cluster pools, isolation modes, and job queues to avoid interference between jobs. Set resource quotas and scheduling rules to protect high-priority tasks from being slowed down by background or lower-priority processes.
Configure autoscaling for variable workloads: Set up autoscaling policies in Azure Databricks by defining the triggers that cause the cluster to scale, how quickly it adds or removes nodes, and the resource limits it must stay within. These settings help Azure Databricks respond efficiently to changing workloads, optimize resource usage, and avoid performance issues during scaling events.
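For example, an autoscaling range and an idle timeout can be declared on the cluster definition itself. This is a minimal sketch that creates such a cluster through the Clusters REST API with the `requests` library; the workspace URL, token, runtime version, node type, and worker counts are placeholders.

```python
import requests

DATABRICKS_HOST = "https://<your-workspace>.azuredatabricks.net"   # placeholder
TOKEN = "<personal-access-token>"                                   # placeholder

autoscaling_cluster = {
    "cluster_name": "autoscaling-jobs",
    "spark_version": "15.4.x-scala2.12",                 # placeholder runtime version
    "node_type_id": "Standard_D8ds_v5",                  # placeholder node type
    "autoscale": {"min_workers": 2, "max_workers": 8},   # baseline vs. peak capacity
    "autotermination_minutes": 30,                       # release idle capacity automatically
}

resp = requests.post(
    f"{DATABRICKS_HOST}/api/2.0/clusters/create",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json=autoscaling_cluster,
)
resp.raise_for_status()
```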
Design efficient data storage and retrieval mechanisms: Performance gains for data-intensive operations, especially on large volumes of data, require careful planning and tuning.
Organize data strategically. Design partitioning schemes that optimize query performance for your Delta Lake tables. Good partitioning enables partition pruning, where Spark reads only the relevant subsets of data during a query rather than scanning the entire table.
File sizing also plays a key role: files that are too small create excessive metadata overhead and slow down Spark jobs, while files that are too large can cause memory and performance issues.
Align your data layout with how users or jobs typically query the data. Otherwise, you risk a performance hit from full-table scans.
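As a simple illustration of query-aligned layout, the following sketch partitions a Delta table by an ingestion date column so that date-filtered queries can prune partitions. The source path, table, and column names are placeholders, and `spark` is the ambient SparkSession in a Databricks notebook.

```python
from pyspark.sql import functions as F

# Placeholder source; replace with your actual raw data location.
events_df = spark.read.json("/mnt/raw/events")

# Partition by a low-cardinality column that appears in most query filters.
(
    events_df
    .withColumn("event_date", F.to_date("event_timestamp"))
    .write.format("delta")
    .partitionBy("event_date")
    .saveAsTable("analytics.events")          # placeholder schema.table
)

# A filter on the partition column lets Spark skip unrelated partitions entirely.
recent = spark.table("analytics.events").where("event_date >= '2024-01-01'")
```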
Implement effective caching: Use caching for hot datasets and monitor cache hit ratios to ensure you aren't unnecessarily using memory. Spark provides built-in caching mechanisms and Azure Databricks offers Delta Cache, which further optimizes by caching data at the disk level across nodes.
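For instance, the Databricks disk cache (Delta Cache) can be enabled per session or in the cluster's Spark configuration, alongside Spark's own in-memory caching for hot datasets. A minimal sketch; the table name is a placeholder.

```python
# Enable the Databricks disk cache for the current session; it can also be set
# in the cluster's Spark configuration so it applies to every workload on the cluster.
spark.conf.set("spark.databricks.io.cache.enabled", "true")

# Spark in-memory caching for a hot dataset reused across several queries.
hot = spark.table("analytics.daily_kpis")      # placeholder table
hot.cache()
hot.count()                                    # materialize the cache before reuse
```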
Write efficient queries: Avoid unnecessary data scans, excessive shuffling, and long execution times, which all contribute to performance inefficiencies.
Optimize SQL queries and Spark operations through proper indexing, predicate pushdown, projection pushdown, and join optimization techniques that leverage query plan analysis for enhanced execution efficiency.
Azure Databricks provides some built-in optimizations. The Catalyst optimizer rewrites queries for efficiency, while Adaptive Query Execution (AQE) adjusts plans at runtime to handle data skew and improve joins. Delta Lake features like table statistics, Z-order clustering, and Bloom filters further reduce data scanned, leading to faster, more cost-effective queries.
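To take advantage of these Delta Lake features, you can run layout and statistics maintenance directly from a notebook or scheduled job. A minimal sketch; the table and column names are placeholders to replace with your frequently filtered and joined columns.

```python
# Cluster data files on the columns most often used in filters and joins,
# which improves data skipping for subsequent queries.
spark.sql("OPTIMIZE analytics.events ZORDER BY (customer_id, event_date)")

# Refresh table-level statistics so the optimizer can make better plan choices.
spark.sql("ANALYZE TABLE analytics.events COMPUTE STATISTICS")
```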
Pick the right data formats and compression: Choose formats like Parquet and efficient compression codecs (for example, ZSTD) that reduce storage and speed up reads without compromising performance.
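For example, when writing Parquet files directly, the codec can be selected per write; ZSTD trades a little CPU for better compression ratios, while Snappy decompresses faster for hot data. A small sketch with placeholder paths.

```python
# Read a hypothetical staging dataset and rewrite it as ZSTD-compressed Parquet.
staged = spark.read.parquet("/mnt/staging/transactions")            # placeholder path
(
    staged.write
    .option("compression", "zstd")     # or "snappy" for frequently accessed data
    .mode("overwrite")
    .parquet("/mnt/curated/transactions")                           # placeholder path
)
```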
Optimize network and I/O performance: Choose high-performance storage options (like Premium or SSD-backed storage) and design your architecture to minimize data movement by processing data close to where it's stored.
Additionally, use efficient data transfer strategies, such as batching writes and avoiding unnecessary shuffles to maximize throughput and reduce latency.
Optimize job execution based on the type of workload: Tailor optimization strategies to the specific needs of each workload. For example:
- Stream processing: Real-time data pipelines require low-latency and high-throughput performance. In Azure Databricks, this means tuning parameters such as trigger intervals, micro-batch sizes, watermarking, and checkpointing (see the streaming sketch below). Using Structured Streaming and Delta Lake capabilities like schema evolution and exactly-once processing guarantees can ensure consistent processing under varying loads.
- Machine learning: ML training and inference jobs are often compute-intensive. You can boost performance by using distributed training, GPU acceleration, and efficient feature engineering pipelines. Azure Databricks supports ML performance tuning through MLflow, Databricks Runtime for ML, and integrations with tools like Horovod. Tuning resource configurations and applying data pre-processing optimizations can significantly reduce training time and inference latency.
Using Lakeflow Declarative Pipelines simplifies and automates the implementation of these optimization recommendations.
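For the stream-processing case above, the trigger interval, watermark, and checkpoint location are the main knobs. The following is a minimal Structured Streaming sketch; the source and target tables, checkpoint path, column names, and interval values are placeholders to adapt to your latency requirements.

```python
from pyspark.sql import functions as F

stream = (
    spark.readStream.table("analytics.raw_clicks")                 # placeholder source table
    .withWatermark("event_time", "10 minutes")                     # bound state for late data
    .groupBy(F.window("event_time", "1 minute"), "page_id")
    .count()
)

query = (
    stream.writeStream
    .format("delta")
    .outputMode("append")
    .option("checkpointLocation", "/mnt/checkpoints/clicks_agg")   # placeholder path on fast storage
    .trigger(processingTime="10 seconds")                          # micro-batch cadence vs. latency
    .toTable("analytics.clicks_per_minute")                        # placeholder target table
)
```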
Use your monitoring system to identify performance bottlenecks: Implement comprehensive performance monitoring to get visibility into how jobs, clusters, and queries perform, and to identify bottlenecks or inefficiencies that drive up costs and slow down workloads.
Analyze anomalies in key metrics like CPU and memory usage, job execution times, query latencies, and cluster health. This allows you to pinpoint slowdowns, whether they're caused by poor Spark configurations, unoptimized queries, or under/over-provisioned clusters.
Use built-in tools like the Spark UI to analyze query plans and job stages, Azure Monitor to track infrastructure-level metrics, and custom metrics or logs for deeper insights. These tools support proactive tuning, allowing you to fix issues before they impact users or critical pipelines.
Conduct systematic performance testing: Use load testing, stress testing, and benchmarking to validate execution times, resource usage, and system responsiveness. By establishing performance baselines and incorporating automated tests into your CI/CD pipelines, you can detect slowdowns early and measure the impact of any optimizations.
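One lightweight way to wire such checks into a CI/CD pipeline is to time representative queries against a stored baseline and fail the run when they regress beyond a tolerance. A minimal sketch, assuming an ambient SparkSession; the query, baseline, and tolerance are illustrative values.

```python
import time

def measure_seconds(sql_text: str, runs: int = 3) -> float:
    """Return the best-of-N wall-clock time for a representative query."""
    timings = []
    for _ in range(runs):
        start = time.perf_counter()
        spark.sql(sql_text).collect()          # force full execution
        timings.append(time.perf_counter() - start)
    return min(timings)

BASELINE_SECONDS = 42.0                         # illustrative stored baseline
TOLERANCE = 1.2                                 # allow 20% variance before failing

elapsed = measure_seconds(
    "SELECT count(*) FROM analytics.events WHERE event_date >= '2024-01-01'"
)
assert elapsed <= BASELINE_SECONDS * TOLERANCE, (
    f"Performance regression: {elapsed:.1f}s vs baseline {BASELINE_SECONDS}s"
)
```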
Recommendations
Recommendation | Benefit |
---|---|
Configure Azure Databricks clusters with memory-optimized instance types such as E-series or M-series VMs when processing large datasets that require extensive in-memory caching, machine learning model training, or complex analytical operations. Evaluate memory requirements based on dataset size and processing patterns, then select appropriate VM sizes that provide sufficient memory capacity with high memory-to-CPU ratios for optimal performance. | Eliminates memory bottlenecks that can cause job failures or severe performance degradation, enabling smooth execution of memory-intensive operations like large-scale machine learning training and complex analytics workloads. |
Configure cluster autoscaling policies with appropriate minimum and maximum node limits based on workload patterns and performance requirements. Set minimum nodes to handle baseline workload efficiently while establishing maximum limits to prevent runaway costs. Define scaling triggers based on CPU utilization, memory usage, or job queue depth, and configure scaling velocity to balance responsiveness with cost optimization. | Maintains consistent performance during demand fluctuations while optimizing costs through automatic resource adjustment that scales up during peak periods and scales down during low utilization periods. |
Execute OPTIMIZE commands with Z-ordering on Delta Lake tables to improve data clustering and query performance. Choose Z-order columns based on frequently used filter and join conditions in your queries, typically including columns used in WHERE clauses, GROUP BY operations, and JOIN predicates. Schedule regular optimization operations using Azure Databricks jobs or Lakeflow Declarative Pipelines to maintain optimal performance as data grows. | Reduces query execution time by 3-10x through improved data skipping and minimized I/O operations, while also decreasing storage costs through better compression ratios achieved by clustering related data together. Provides cumulative performance improvements as optimization benefits compound over time with regular maintenance and intelligent data organization that aligns with actual query patterns. |
Enable Delta Cache on cluster configurations where you frequently access the same datasets across multiple queries or jobs. Configure cache settings to utilize local NVMe SSD storage effectively, ensuring adequate cache size allocation based on your dataset characteristics and access patterns. Monitor cache hit ratios and adjust cache configurations to maximize performance benefits for your specific workloads. | Accelerates query performance by 2-5x for frequently accessed data through intelligent SSD-based caching that bypasses slower network storage, significantly reducing latency for iterative analytics and machine learning workloads. |
Enable Photon engine on cluster configurations and SQL warehouses to accelerate SQL queries and DataFrame operations through vectorized execution. Photon provides the most significant benefits for analytical workloads with aggregations, joins, and complex SQL operations. Configure Photon-enabled compute resources for data engineering pipelines, business intelligence workloads, and analytical applications that process large datasets. | Delivers up to 12x performance improvement for SQL and DataFrame operations through native vectorized execution, while reducing compute costs by 2-3x due to improved processing efficiency and reduced execution time. Enables processing of larger datasets within the same time constraints and supports more concurrent users without degrading performance, significantly improving overall system throughput. |
Configure Spark executor memory settings between 2-8GB per executor and driver memory based on your largest dataset size and processing complexity. Set spark.executor.cores to 2-5 cores per executor to balance parallelism with resource efficiency. Adjust these settings based on your specific workload characteristics, data volume, and cluster size to prevent out-of-memory errors while maximizing resource utilization. | Prevents job failures from memory issues while optimizing resource allocation efficiency, reducing both execution time and unnecessary resource waste through properly tuned memory configurations. |
Configure Azure Storage accounts with Premium SSD performance tiers for Azure Databricks workloads that require high IOPS and low latency. Use Premium Block Blob storage for data lake scenarios with intensive read/write operations. Ensure storage accounts are in the same region as your Azure Databricks workspace to minimize network latency. | Provides up to 20,000 IOPS and sub-millisecond latency for storage operations, dramatically improving performance for data-intensive workloads and reducing job execution times by eliminating storage I/O bottlenecks. |
Design data partitioning strategies based on commonly used filter columns in your queries, typically date columns for time-series data or categorical columns for dimensional data. Avoid over-partitioning by limiting partitions to fewer than 10,000 and ensuring each partition contains at least 1GB of data. Use partition pruning-friendly query patterns and consider liquid clustering for tables with multiple partition candidates. | Reduces data scanning by 80-95% through effective partition pruning, dramatically improving query performance and reducing compute costs by processing only relevant data partitions. Enables predictable query performance that scales linearly with filtered data size rather than total table size, maintaining consistent response times as datasets grow to petabyte scale. |
Use Parquet file format with ZSTD or Snappy compression for analytical workloads to optimize both storage efficiency and query performance. ZSTD provides better compression ratios for cold data, while Snappy offers faster decompression for frequently accessed datasets. Configure appropriate compression levels and evaluate compression trade-offs based on your access patterns and storage costs. | Reduces storage costs by 60-80% while improving query performance through columnar storage efficiency and optimized compression, enabling faster data scanning and reduced network I/O. |
Deploy serverless SQL warehouses for business intelligence and analytical workloads that require ad-hoc querying and interactive analytics. Configure appropriate warehouse sizes (2X-Small to 4X-Large) based on concurrency requirements and query complexity. Enable auto-stop and auto-resume features to optimize costs while ensuring rapid query responsiveness for end users. | Eliminates cluster management overhead while providing instant scaling and Photon-accelerated performance, delivering 2-3x better price-performance compared to traditional clusters for SQL workloads. Provides consistent sub-second query startup times and automatic optimization that adapts to changing workload patterns without manual intervention or configuration tuning. |
Enable Adaptive Query Execution (AQE) in Spark configurations to leverage runtime optimization capabilities including dynamic coalescing of shuffle partitions, dynamic join strategy switching, and optimization of skewed joins. Configure AQE parameters like target shuffle partition size and coalescing thresholds based on your typical data volumes and cluster characteristics (see the configuration sketch after this table). | Improves query performance by 20-50% through intelligent runtime optimizations that adapt to actual data characteristics and execution patterns, automatically addressing common performance issues like small files and data skew. |
Create cluster pools with pre-warmed instances matching your most common cluster configurations to reduce startup times for both interactive clusters and job clusters. Configure pool sizes based on expected concurrent usage patterns and maintain idle instances during peak hours to ensure immediate availability for development teams and scheduled jobs. | Reduces cluster startup time from 5-10 minutes to under 30 seconds, dramatically improving developer productivity and enabling faster job execution for time-sensitive data processing workflows. |
Schedule regular OPTIMIZE operations using Azure Databricks jobs to compact small files and improve query performance, and run VACUUM commands to clean up expired transaction logs and deleted files. Configure optimization frequency based on data ingestion patterns, typically daily for high-volume tables and weekly for less frequently updated tables. Monitor table statistics and file counts to determine optimal maintenance schedules. | Maintains consistent query performance as data volumes grow by preventing file proliferation and data fragmentation, while reducing storage costs through cleanup of unnecessary files and improved compression ratios. Prevents performance degradation over time that commonly occurs in data lakes without proper maintenance, ensuring predictable query response times and optimal resource utilization. |
Configure structured streaming trigger intervals based on latency requirements and data arrival patterns, using continuous triggers for sub-second latency needs or micro-batch triggers with 1-10 second intervals for balanced performance. Optimize checkpoint locations using fast storage and configure appropriate checkpoint intervals to balance fault tolerance with performance overhead. | Achieves optimal balance between latency and throughput for real-time data processing, enabling consistent stream processing performance that can handle varying data arrival rates while maintaining low end-to-end latency. |
Deploy GPU-enabled clusters using NC, ND, or NV-series virtual machines for deep learning model training and inference workloads. Configure appropriate GPU memory allocation and utilize MLflow for distributed training orchestration. Select GPU instance types based on model complexity and training dataset size, considering both memory capacity and compute performance requirements for your specific machine learning workloads. | Accelerates model training by 10-100x compared to CPU-only clusters through parallel processing capabilities specifically designed for machine learning operations, dramatically reducing training time and enabling faster model iteration cycles. |
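The AQE settings referenced in the table above map to a small set of Spark configuration keys that can be set per session or in the cluster's Spark configuration. A minimal sketch; the advisory partition size is an illustrative value to tune against your data volumes.

```python
# Runtime adaptivity: coalesce small shuffle partitions, switch join strategies,
# and split skewed partitions based on statistics observed during execution.
spark.conf.set("spark.sql.adaptive.enabled", "true")
spark.conf.set("spark.sql.adaptive.coalescePartitions.enabled", "true")
spark.conf.set("spark.sql.adaptive.skewJoin.enabled", "true")
spark.conf.set("spark.sql.adaptive.advisoryPartitionSizeInBytes", "128m")  # illustrative target size
```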
Azure policies
Azure provides an extensive set of built-in policies related to Azure Databricks and its dependencies. Some of the preceding recommendations can be audited through Azure Policy. For example, you can check whether:
- Azure Databricks workspaces use VNet injection for enhanced network security and isolation
- Azure Databricks workspaces disable public network access when using private endpoints
- Azure Databricks clusters have disk encryption enabled to protect data at rest
- Azure Databricks workspaces use customer-managed keys for enhanced encryption control
- Azure Databricks workspaces have diagnostic logging enabled for monitoring and compliance
- Azure Databricks workspaces are only deployed in approved geographic regions for compliance
- Enterprise workloads use Azure Databricks Premium tier for enhanced security and compliance features
- Azure Databricks workspaces have Unity Catalog enabled for centralized data governance
For more information about governance, review the Azure Policy built-in definitions for Azure Databricks and other policies that might affect the security of the analytics platform.
Azure Advisor recommendations
Azure Advisor is a personalized cloud consultant that helps you follow best practices to optimize your Azure deployments.
For more information, see Azure Advisor.
Tradeoffs
You might have to make design tradeoffs if you use the approaches in the pillar checklists.
Analyze performance and cost trade-offs
Striking the right balance between performance and cost is key. If you over-provision, you waste money; if you under-provision, your workloads can slow down or fail. To avoid both, test different configurations, run performance benchmarks, and perform cost analysis to guide your choices.
Scenario architecture
Foundational architecture that demonstrates the key recommendations: Stream processing with Azure Databricks.