The tutorials in this section introduce core features and guide you through the basics of working with the Azure Databricks platform.
For information about online training resources, see Get free Databricks training.
If you do not have an Azure Databricks account, sign up for a free trial.
| Tutorial | Description |
|---|---|
| Query and visualize data | Use a Databricks notebook to query sample data stored in Unity Catalog using SQL, Python, Scala, and R, and then visualize the query results in the notebook. (sketch below) |
| Import and visualize CSV data from a notebook | Use a Databricks notebook to import data from a CSV file containing baby name data from https://health.data.ny.gov into your Unity Catalog volume using Python, Scala, and R. You also learn how to modify a column name, visualize the data, and save it to a table. (sketch below) |
| Create a table | Create a table and grant privileges in Databricks using the Unity Catalog data governance model. (sketch below) |
| Build an ETL pipeline using DLT | Create and deploy an ETL (extract, transform, and load) pipeline for data orchestration using DLT and Auto Loader. (sketch below) |
| Build an ETL pipeline using Apache Spark | Develop and deploy your first ETL (extract, transform, and load) pipeline for data orchestration with Apache Spark™. (sketch below) |
| Train and deploy an ML model | Build a machine learning classification model using the scikit-learn library on Databricks to predict whether a wine is considered “high-quality”. This tutorial also illustrates the use of MLflow to track the model development process and Hyperopt to automate hyperparameter tuning. (sketch below) |
| Query LLMs and prototype AI agents with no code | Use the AI Playground to query large language models (LLMs) and compare results side by side, prototype a tool-calling AI agent, and export your agent to code. |
| Connect to Azure Data Lake Storage | Connect from Azure Databricks to Azure Data Lake Storage using OAuth 2.0 with a Microsoft Entra ID service principal. (sketch below) |
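
For the query-and-visualize tutorial, here is a minimal sketch, assuming the workspace exposes the built-in `samples` catalog with the `nyctaxi.trips` sample dataset; `spark` and `display()` are predefined in Databricks notebooks:

```python
# Query Unity Catalog sample data with SQL from a Python notebook cell.
df = spark.sql("""
    SELECT trip_distance, fare_amount
    FROM samples.nyctaxi.trips
    LIMIT 100
""")

# display() renders the DataFrame as an interactive table with
# built-in visualization options in the notebook.
display(df)
```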
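
For the CSV import tutorial, a minimal sketch under these assumptions: `<catalog>`, `<schema>`, and `<volume>` are placeholders for a volume you can write to, `<csv-download-url>` stands for the dataset's download link on https://health.data.ny.gov, and the `First Name` column is an assumption about the source file:

```python
import urllib.request

# Unity Catalog volumes are mounted under /Volumes, so plain Python
# file APIs on the driver can write to them.
volume_path = "/Volumes/<catalog>/<schema>/<volume>/babynames.csv"
urllib.request.urlretrieve("<csv-download-url>", volume_path)

# Read the CSV, rename a column, preview, and save as a managed table.
df = (spark.read
      .option("header", True)
      .option("inferSchema", True)
      .csv(volume_path))
df = df.withColumnRenamed("First Name", "first_name")  # assumed column name
display(df)
df.write.mode("overwrite").saveAsTable("<catalog>.<schema>.babynames")
```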
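
For the create-a-table tutorial, a minimal sketch; the table schema and the `data_engineers` group are illustrative assumptions:

```python
# Create a governed table in Unity Catalog, then grant read access to a group.
spark.sql("""
    CREATE TABLE IF NOT EXISTS <catalog>.<schema>.trips_summary (
        pickup_date DATE,
        total_fare  DOUBLE
    )
""")
spark.sql("GRANT SELECT ON TABLE <catalog>.<schema>.trips_summary TO `data_engineers`")
```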
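
For the DLT tutorial, a minimal sketch of a pipeline source file; the JSON landing path and the `event_id` column are assumptions, and this file is deployed as a DLT pipeline rather than run interactively:

```python
import dlt
from pyspark.sql.functions import col

@dlt.table(comment="Raw records ingested incrementally with Auto Loader.")
def raw_events():
    return (
        spark.readStream.format("cloudFiles")      # Auto Loader source
        .option("cloudFiles.format", "json")
        .load("/Volumes/<catalog>/<schema>/<volume>/events/")
    )

@dlt.table(comment="Events with a basic quality filter applied.")
def clean_events():
    return dlt.read_stream("raw_events").where(col("event_id").isNotNull())
```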
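
For the Apache Spark ETL tutorial, a minimal batch sketch; the paths, the `event_ts` column, and the target table name are placeholders:

```python
from pyspark.sql.functions import to_date

# Extract: read raw CSV files from a volume.
raw = spark.read.option("header", True).csv("/Volumes/<catalog>/<schema>/<volume>/raw/")

# Transform: derive a date column and drop duplicate rows.
transformed = raw.withColumn("event_date", to_date("event_ts")).dropDuplicates()

# Load: persist the result as a managed Delta table.
transformed.write.mode("overwrite").saveAsTable("<catalog>.<schema>.events_clean")
```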
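
For the ML tutorial, a minimal sketch of scikit-learn training with MLflow tracking and Hyperopt tuning, assuming the wine-quality features are already split into `X_train`/`X_test`/`y_train`/`y_test`; the random forest estimator and the search ranges are illustrative:

```python
import mlflow
from hyperopt import fmin, tpe, hp, STATUS_OK
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

def objective(params):
    # Each Hyperopt evaluation is logged as its own MLflow run.
    with mlflow.start_run():
        model = RandomForestClassifier(
            n_estimators=int(params["n_estimators"]),
            max_depth=int(params["max_depth"]),
        )
        model.fit(X_train, y_train)
        acc = accuracy_score(y_test, model.predict(X_test))
        mlflow.log_metric("accuracy", acc)
        # Hyperopt minimizes the loss, so return the negated accuracy.
        return {"loss": -acc, "status": STATUS_OK}

search_space = {
    "n_estimators": hp.quniform("n_estimators", 50, 300, 25),
    "max_depth": hp.quniform("max_depth", 3, 12, 1),
}
best = fmin(fn=objective, space=search_space, algo=tpe.suggest, max_evals=20)
```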
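
For the Azure Data Lake Storage tutorial, a minimal sketch of the OAuth 2.0 session configuration; the storage account, tenant ID, container, and secret scope/key names are placeholders you must supply:

```python
# Read the service principal's client secret from a Databricks secret scope.
service_credential = dbutils.secrets.get(scope="<scope>", key="<service-credential-key>")

spark.conf.set("fs.azure.account.auth.type.<storage-account>.dfs.core.windows.net", "OAuth")
spark.conf.set("fs.azure.account.oauth.provider.type.<storage-account>.dfs.core.windows.net",
               "org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider")
spark.conf.set("fs.azure.account.oauth2.client.id.<storage-account>.dfs.core.windows.net",
               "<application-id>")
spark.conf.set("fs.azure.account.oauth2.client.secret.<storage-account>.dfs.core.windows.net",
               service_credential)
spark.conf.set("fs.azure.account.oauth2.client.endpoint.<storage-account>.dfs.core.windows.net",
               "https://login.microsoftonline.com/<tenant-id>/oauth2/token")

# With the configuration in place, read directly from the container.
df = spark.read.text("abfss://<container>@<storage-account>.dfs.core.windows.net/<path>")
```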
Get help
- If you have questions about setting up Azure Databricks and need live help, email onboarding-help@databricks.com.
- If your organization does not have an Azure Databricks support subscription, or if you are not an authorized contact for your company's support subscription, you can get answers from the Databricks Community.