
Quickstart: Get started with Azure AI Foundry

In this quickstart, you set up your local development environment with the Azure AI Foundry SDK, write a prompt, run it as part of your app code, trace the LLM calls being made, and run a basic evaluation on the model's outputs.

Tip

The rest of this article shows how to use a hub-based project. Select Foundry project at the top of this article if you want to use a Foundry project instead.

Prerequisites

  • An Azure subscription. If you don't have an Azure subscription, create a free account before you begin.
  • A hub-based project. If you're new to Azure AI Foundry and don't have a hub-based project, select Foundry project at the top of this article to use a Foundry project instead.

Set up your development environment

  1. Set up your development environment

  2. Make sure you install these packages:

    pip install azure-ai-projects azure-ai-inference azure-identity 
    

Deploy a model

Tip

Because you can customize the left pane in the Azure AI Foundry portal, you might see different items than shown in these steps. If you don't see what you're looking for, select ... More at the bottom of the left pane.

  1. Sign in to Azure AI Foundry.

  2. Select a hub-based project. If you don't have a hub-based project, select Foundry project at the top of this article to use a Foundry project instead.

  3. Select Model catalog from the left pane.

  4. Select the gpt-4o-mini model from the list of models. You can use the search bar to find it.

  5. On the model details page, select Deploy.

    Screenshot of the model details page with a button to deploy the model.

  6. Leave the default Deployment name. Select Deploy.

  7. Once the model is deployed, select Open in playground to test your model.

Build your chat app

Create a file named chat.py. Copy and paste the following code into it.

from azure.ai.projects import AIProjectClient
from azure.identity import DefaultAzureCredential

project_connection_string = "<your-connection-string-goes-here>"

project = AIProjectClient.from_connection_string(
    conn_str=project_connection_string, credential=DefaultAzureCredential()
)

chat = project.inference.get_chat_completions_client()
response = chat.complete(
    model="gpt-4o-mini",
    messages=[
        {
            "role": "system",
            "content": "You are an AI assistant that speaks like a techno punk rocker from 2350. Be cool but not too cool. Ya dig?",
        },
        {"role": "user", "content": "Hey, can you help me with my taxes? I'm a freelancer."},
    ],
)

print(response.choices[0].message.content)

Insert your connection string

Your project connection string is required to call Azure OpenAI in Azure AI Foundry Models from your code.

Find your connection string in the Azure AI Foundry project you created in the Azure AI Foundry playground quickstart. Open the project, then find the connection string on the Overview page.

Screenshot shows the overview page of a project and the ___location of the connection string.

Copy the connection string and replace <your-connection-string-goes-here> in the chat.py file.
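To avoid committing a secret-like value to source control, you can also read the connection string from an environment variable instead of hardcoding it. Here's a minimal sketch; the variable name `AIPROJECT_CONNECTION_STRING` is an arbitrary choice for this example, not a name the SDK looks for.

```python
import os

# AIPROJECT_CONNECTION_STRING is an arbitrary variable name chosen for this example
project_connection_string = os.environ.get(
    "AIPROJECT_CONNECTION_STRING",
    "<your-connection-string-goes-here>",  # fallback placeholder
)
```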

Run your chat script

Run the script to see the response from the model. Because the script authenticates with DefaultAzureCredential, make sure you're signed in locally first (for example, with the Azure CLI's az login command).

python chat.py

Generate prompt from user input and a prompt template

The script uses hardcoded input and output messages. In a real app you'd take input from a client application, generate a system message with internal instructions to the model, and then call the LLM with all of the messages.

Let's change the script to take input from a client application and generate a system message using a prompt template.

  1. Remove the last line of the script that prints a response.

  2. Now define a get_chat_response function that takes messages and context, generates a system message using a prompt template, and calls a model. Add this code to your existing chat.py file:

    from azure.ai.inference.prompts import PromptTemplate
    
    
    def get_chat_response(messages, context):
        # create a prompt template from an inline string (using mustache syntax)
        prompt_template = PromptTemplate.from_string(
            prompt_template="""
            system:
            You are an AI assistant that speaks like a techno punk rocker from 2350. Be cool but not too cool. Ya dig? Refer to the user by their first name, try to work their last name into a pun.
    
            The user's first name is {{first_name}} and their last name is {{last_name}}.
            """
        )
    
        # generate system message from the template, passing in the context as variables
        system_message = prompt_template.create_messages(data=context)
    
        # add the prompt messages to the user messages
        return chat.complete(
            model="gpt-4o-mini",
            messages=system_message + messages,
            temperature=1,
            frequency_penalty=0.5,
            presence_penalty=0.5,
        )
    

    Note

    The prompt template uses mustache format.

    The get_chat_response function can easily be added as a route in a FastAPI or Flask app, so a front-end web application can call it.

  3. Now simulate passing information from a frontend application to this function. Add the following code to the end of your chat.py file. Feel free to play with the message and add your own name.

    if __name__ == "__main__":
        response = get_chat_response(
            messages=[{"role": "user", "content": "what city has the best food in the world?"}],
            context={"first_name": "Jessie", "last_name": "Irwin"},
        )
        print(response.choices[0].message.content)
    

Run the revised script to see the response from the model with this new input.

python chat.py
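Under the hood, the mustache template substitutes {{first_name}} and {{last_name}} with values from the context dictionary. The substitution behavior can be sketched in plain Python (a simplified illustration of the idea, not the library's actual implementation):

```python
import re

def render_mustache(template: str, data: dict) -> str:
    # Replace each {{name}} placeholder with the matching value from data
    return re.sub(r"\{\{\s*(\w+)\s*\}\}", lambda m: str(data.get(m.group(1), "")), template)

rendered = render_mustache(
    "The user's first name is {{first_name}} and their last name is {{last_name}}.",
    {"first_name": "Jessie", "last_name": "Irwin"},
)
print(rendered)
# -> The user's first name is Jessie and their last name is Irwin.
```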

Next step

In this quickstart, you use Azure AI Foundry to:

  • Create a project
  • Deploy a model
  • Run a chat completion
  • Create and run an agent
  • Upload files to the agent

The Azure AI Foundry SDK is available in multiple languages, including Python, Java, JavaScript, and C#. This quickstart provides instructions for each of these languages.

Tip

The rest of this article shows how to use a Foundry project. Select hub-based project at the top of this article if you want to use a hub-based project instead.

Prerequisites

  • An Azure subscription. If you don't have an Azure subscription, create a free account before you begin.
  • You must be an Owner of the subscription to receive the access control needed to use your project.

Important

Items marked (preview) in this article are currently in public preview. This preview is provided without a service-level agreement, and we don't recommend it for production workloads. Certain features might not be supported or might have constrained capabilities. For more information, see Supplemental Terms of Use for Microsoft Azure Previews.

Start with a project and model

  1. Sign in to the Azure AI Foundry portal.

  2. On the home page, select Create an agent.

    Screenshot shows how to start building an Agent in Azure AI Foundry portal.

  3. Fill in a name to use for your project and select Create.

  4. Once your resources are created, you are in the agent playground.

  5. If you're asked to select a model, search for and select gpt-4o.

    1. Select Confirm.
    2. Don't change the default settings. Select Deploy.

You now have both a project and a model available for your agent.

Set up your environment

No installation is necessary to use the Azure AI Foundry portal.

Chat with an agent

Agents have powerful capabilities through the use of tools. Start by chatting with an agent.

After the preceding steps, you're now in the agents playground.

  1. Add instructions, such as, "You are a helpful writing assistant."
  2. Start chatting with your agent, for example, "Write me a poem about flowers."

Add files to the agent

Now let's add a file search tool that enables us to do knowledge retrieval.

  1. In your agent's Setup pane, scroll down if necessary to find Knowledge.
  2. Select Add.
  3. Select Files to upload the product_info_1.md file.
  4. Select Select local files under Add files.
  5. Select Upload and save.
  6. Change your agent's instructions, for example, "You are a helpful assistant and can search information from uploaded files."
  7. Ask a question, such as, "Hello, what Contoso products do you know?"
  8. To add more files, select the ... on the AgentVectorStore, then select Manage.

Run a chat completion

Chat completions are the basic building block of AI applications. Using chat completions, you can send a list of messages and get a response from the model instead of the agent.

  1. In the left pane, select Playgrounds.
  2. Select Try the chat playground.
  3. Fill in the prompt and select the Send button.
  4. The model returns a response in the Response pane.
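The request you send from the playground is just an ordered list of role and content messages. Building that list in code can be sketched as follows (the helper name build_messages is chosen for this example):

```python
def build_messages(system_prompt: str, user_prompt: str) -> list:
    # Chat completion APIs accept an ordered list of role/content dictionaries
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_prompt},
    ]

messages = build_messages(
    "You are a helpful writing assistant.",
    "Write me a poem about flowers.",
)
```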

Clean up resources

If you no longer need them, delete the resource group associated with your project.

In the Azure AI Foundry portal, select your project name in the top right corner. Then select the link for the resource group to open it in the Azure portal. Select the resource group, and then select Delete. Confirm that you want to delete the resource group.

Azure AI Foundry client library overview