This article explains how to size, scale, and manage query queues for Databricks SQL warehouses to optimize performance and cost. Databricks recommends using a serverless SQL warehouse for most workloads. Serverless SQL warehouses deliver the best performance and efficiency by dynamically managing resources for your queries.
Serverless SQL warehouse management
Serverless SQL warehouses use Intelligent Workload Management (IWM) to automatically manage query workloads. IWM is a set of AI-powered features that process queries quickly and cost-effectively without requiring you to manage infrastructure.
Intelligent Workload Management and autoscaling
IWM uses machine learning models to dynamically manage compute resources (a simplified sketch of this decision loop appears after the lists in this section):
- When a new query arrives, IWM predicts its resource requirements and checks available capacity.
- If capacity exists, the query starts immediately.
- If not, the query is placed in a queue.
- IWM continuously monitors the queue. If wait times increase, the autoscaler quickly provisions more clusters to process queued queries.
- When demand drops, IWM scales down resources to reduce costs while keeping enough capacity to handle recent peak loads.
This approach provides:
- Rapid upscaling to maintain low query latency.
- High throughput by admitting queries as soon as hardware is available.
- Quick downscaling to save costs during low demand.
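The decision loop described above can be illustrated with a short Python sketch. This is not Databricks code: the slot counts, thresholds, and scaling conditions are placeholder assumptions used only to show the admit-or-queue flow and queue-driven scale-up and scale-down behavior.

```python
from collections import deque

class ToyWorkloadManager:
    """Illustrative admit-or-queue loop with queue-driven scaling.

    Slot counts and thresholds are placeholder assumptions and do not
    reflect actual IWM internals.
    """

    def __init__(self, slots_per_cluster=10, max_clusters=5):
        self.slots_per_cluster = slots_per_cluster
        self.max_clusters = max_clusters
        self.clusters = 1      # one running cluster to start
        self.used_slots = 0    # capacity consumed by running queries
        self.queue = deque()   # predicted slot needs of waiting queries

    def free_slots(self):
        return self.clusters * self.slots_per_cluster - self.used_slots

    def submit(self, predicted_slots):
        """Admit the query immediately if predicted capacity is available, else queue it."""
        if predicted_slots <= self.free_slots():
            self.used_slots += predicted_slots
            return "running"
        self.queue.append(predicted_slots)
        return "queued"

    def monitor(self):
        """Periodic check: add clusters while queries wait, release clusters when idle."""
        while self.queue and self.clusters < self.max_clusters:
            self.clusters += 1  # provision another cluster to drain the queue
            while self.queue and self.queue[0] <= self.free_slots():
                self.used_slots += self.queue.popleft()
        while (not self.queue and self.clusters > 1
               and self.free_slots() >= self.slots_per_cluster):
            self.clusters -= 1  # release an idle cluster to reduce cost
```

In the real service, IWM also factors in recent peak load when scaling down; the sketch omits that for brevity.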
Sizing a serverless SQL warehouse
Cluster size (for example, X-Small, Medium, Large) determines the compute resources available for a single cluster. The autoscaler adds or removes clusters of that size as needed.
Use the following guidelines to help choose the right size:
- Start with a single larger warehouse and let serverless features manage concurrency and performance. It is usually more efficient to size down if necessary than to start small and scale up.
- If queries are spilling to disk, increase the cluster size. Check for spills in the query profile.
- For workloads with many concurrent queries, configure a sufficient maximum number of clusters to handle peak loads. Monitor the Peak Queued Queries metric on the warehouse monitoring page.
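If you manage warehouses programmatically, the following sketch shows how the size and maximum cluster count discussed above map to warehouse settings, assuming the Databricks SDK for Python (`databricks-sdk`). Method and parameter names should be verified against the SDK and SQL Warehouses API reference; the warehouse name and values are arbitrary examples.

```python
from databricks.sdk import WorkspaceClient

# Authenticates using standard Databricks environment variables or a config profile.
w = WorkspaceClient()

# Start with a single larger warehouse and let serverless features manage concurrency.
# Increase cluster_size if queries spill to disk; raise max_num_clusters if
# Peak Queued Queries stays above 0 during peak load.
warehouse = w.warehouses.create(
    name="analytics-serverless",     # example name
    cluster_size="Large",            # per-cluster size
    max_num_clusters=4,              # upper bound for autoscaling under high concurrency
    enable_serverless_compute=True,  # request a serverless SQL warehouse
    auto_stop_mins=10,               # stop when idle to control cost
    # Depending on the workspace, serverless may also require warehouse_type set to PRO.
).result()                           # wait for the warehouse to reach a running state

print(warehouse.id, warehouse.state)
```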
Note
For serverless SQL warehouses, a given cluster size may, in some cases, use different instance types than the ones listed in the documentation for pro and classic SQL warehouses for an equivalent cluster size. In general, the price/performance ratio of the cluster sizes for serverless SQL warehouses is similar to that of pro and classic SQL warehouses.
Monitoring warehouse performance
You can monitor and right-size any SQL warehouse using the following tools. For all warehouse types, the maximum number of queries held in the queue is 1,000.
- Monitoring page: On the SQL warehouse monitoring tab, check Peak Queued Queries. A consistent value above 0 indicates that you may need a larger cluster size or more clusters.
- Query history: Review historical query performance to identify bottlenecks.
- Query profile: Inspect execution plans for metrics such as Bytes spilled to disk, which indicates that the warehouse size may be too small.
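One way to automate the disk-spill check mentioned above is to poll the SQL query history REST API and flag queries that report spilled bytes. The sketch below uses the `requests` library; the endpoint path, the `include_metrics` parameter, and the `spill_to_disk_bytes` metric name are assumptions to verify against the Query History API reference for your workspace.

```python
import os
import requests

HOST = os.environ["DATABRICKS_HOST"]    # e.g. https://adb-1234567890123456.7.azuredatabricks.net
TOKEN = os.environ["DATABRICKS_TOKEN"]  # personal access token or other bearer token

# Request recent query history with execution metrics included
# (parameter and field names are assumptions; check the API reference).
resp = requests.get(
    f"{HOST}/api/2.0/sql/history/queries",
    headers={"Authorization": f"Bearer {TOKEN}"},
    params={"include_metrics": "true", "max_results": 100},
    timeout=30,
)
resp.raise_for_status()

for query in resp.json().get("res", []):
    spilled = (query.get("metrics") or {}).get("spill_to_disk_bytes", 0)
    if spilled:
        # Repeated spills suggest the warehouse's cluster size is too small.
        print(f"{query.get('query_id')}: spilled {spilled} bytes to disk")
```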
Classic and pro SQL warehouses
Classic and pro warehouses use a manual scaling model where you configure the number of clusters.
Sizing and cluster provisioning
When creating a classic or pro warehouse, choose a cluster size and set the minimum and maximum number of clusters. These warehouse types have a fixed limit of 10 concurrent queries per cluster.
| Cluster size | Driver instance type | Worker count |
|---|---|---|
| 2X-Small | Standard_E8ds_v4 | 1 x Standard_E8ds_v4 |
| X-Small | Standard_E8ds_v4 | 2 x Standard_E8ds_v4 |
| Small | Standard_E16ds_v4 | 4 x Standard_E8ds_v4 |
| Medium | Standard_E32ds_v4 | 8 x Standard_E8ds_v4 |
| Large | Standard_E32ds_v4 | 16 x Standard_E8ds_v4 |
| X-Large | Standard_E64ds_v4 | 32 x Standard_E8ds_v4 |
| 2X-Large | Standard_E64ds_v4 | 64 x Standard_E8ds_v4 |
| 3X-Large | Standard_E64ds_v4 | 128 x Standard_E8ds_v4 |
| 4X-Large | Standard_E64ds_v4 | 256 x Standard_E8ds_v4 |
The instance size of all workers is Standard_E8ds_v4.
Each driver and worker has eight 128 GB Standard LRS managed disks attached. The attached disks are charged hourly.
Required Azure vCPU quota for classic and pro SQL warehouses
To start a classic or pro SQL warehouse, you must have adequate Azure vCPU quota for Standard_E8ds_v4 instances in your Azure account. Use the following guidelines to determine the required vCPU quota:
- If you have only one or two SQL warehouses, verify that you have 8 Azure vCPU available for each core in the cluster. This ensures that you have adequate Azure vCPU to allow for your warehouse's re-provisioning, which happens approximately every 24 hours. You might need to increase the multiplier if your SQL warehouses use autoscaling or multi-cluster load balancing.
- As the number of SQL warehouses increases, allow for between 4 and 8 Azure vCPU for each core in the cluster. Databricks recommends starting with a larger number and monitoring for stability.
- Azure vCPUs used by SQL warehouses are in addition to Azure vCPUs used by clusters used by Data Science & Engineering or by non-Databricks workloads.
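To make the guideline concrete, the sketch below estimates the Azure vCPU quota for a single classic or pro warehouse by multiplying the cluster's total core count (derived from the sizing table above, with 8 vCPUs per Standard_E8ds_v4 worker plus the driver's vCPUs) by a factor between 4 and 8. Multiplying by the maximum cluster count is an interpretation of the autoscaling note above; treat the result as a rough estimate, not an official quota calculator.

```python
# Total vCPUs per cluster, derived from the sizing table above:
# driver vCPUs (E8ds=8, E16ds=16, E32ds=32, E64ds=64) + workers x 8 vCPUs each.
CLUSTER_CORES = {
    "2X-Small": 8 + 1 * 8,
    "X-Small": 8 + 2 * 8,
    "Small": 16 + 4 * 8,
    "Medium": 32 + 8 * 8,
    "Large": 32 + 16 * 8,
    "X-Large": 64 + 32 * 8,
    "2X-Large": 64 + 64 * 8,
    "3X-Large": 64 + 128 * 8,
    "4X-Large": 64 + 256 * 8,
}

def estimated_vcpu_quota(cluster_size, max_clusters=1, multiplier=8):
    """Rough quota estimate: cluster vCPUs x clusters x a 4-8 safety multiplier.

    Use 8 for one or two warehouses (to cover the roughly daily re-provisioning)
    and between 4 and 8 as the number of warehouses grows.
    """
    return CLUSTER_CORES[cluster_size] * max_clusters * multiplier

# Example: a Medium warehouse that can scale to 2 clusters.
print(estimated_vcpu_quota("Medium", max_clusters=2))  # 1536
```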
To request additional Azure vCPU quota, see Standard quota: Increase limits by VM series in the Azure documentation.
Note
The information in the cluster sizing table above can vary based on product or region availability and workspace type.
Queuing and autoscaling logic
For classic and pro warehouses, autoscaling adds clusters based on the estimated time to process all running and queued queries (summarized in the sketch at the end of this section):
- 2-6 minutes of query load: Add 1 cluster.
- 6-12 minutes: Add 2 clusters.
- 12-22 minutes: Add 3 clusters.
- Over 22 minutes: Add 3 clusters plus 1 more for every additional 15 minutes of load.
Additional rules:
- If a query waits in the queue for 5 minutes, the warehouse scales up.
- If load remains low for 15 consecutive minutes, the warehouse scales down to the minimum needed to handle the peak load from that period.
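As an illustration only, the scale-up thresholds in this section can be expressed as a small function. It mirrors the listed rules but is not the actual autoscaler; boundary handling at the exact threshold values, and the omission of the 5-minute queue-wait and 15-minute downscale rules, are simplifications.

```python
import math

def clusters_to_add(estimated_minutes_of_load):
    """Clusters added for a given estimated time to process all running and queued queries.

    Mirrors the thresholds above: 2-6 min -> 1, 6-12 min -> 2, 12-22 min -> 3,
    then 1 more cluster per additional 15 minutes beyond 22.
    """
    m = estimated_minutes_of_load
    if m < 2:
        return 0
    if m <= 6:
        return 1
    if m <= 12:
        return 2
    if m <= 22:
        return 3
    return 3 + math.ceil((m - 22) / 15)

assert clusters_to_add(5) == 1
assert clusters_to_add(10) == 2
assert clusters_to_add(20) == 3
assert clusters_to_add(40) == 5  # 18 minutes beyond 22 -> 2 extra clusters
```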