In this article, you learn to develop a generative AI application by starting from GitHub Models and then upgrade your experience by deploying an Azure AI Services resource with Azure AI Foundry Models.
GitHub Models are useful when you want to find and experiment with AI models for free as you develop a generative AI application. When you're ready to bring your application to production, upgrade your experience by deploying an Azure AI Services resource in an Azure subscription and start using Foundry Models. You don't need to change anything else in your code.
The playground and free API usage for GitHub Models are rate limited by requests per minute, requests per day, tokens per request, and concurrent requests. If you get rate limited, you need to wait for the rate limit that you hit to reset before you can make more requests.
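If a rate limit interrupts your experimentation, a retry with exponential backoff is a common way to recover. The following is a generic sketch only: the exact exception type depends on the SDK you use, so a 429-style error is simulated here with `RuntimeError`, and `call_with_backoff` and `fake_request` are illustrative names, not part of any SDK.

```python
import time

def call_with_backoff(call, max_retries=5, base_delay=1.0):
    """Retry `call` with exponential backoff when it signals rate limiting (HTTP 429)."""
    for attempt in range(max_retries):
        try:
            return call()
        except RuntimeError as err:  # stand-in for the SDK's rate-limit error
            if "429" not in str(err) or attempt == max_retries - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))  # wait before retrying

# Simulated endpoint that is rate limited on the first two calls.
attempts = {"n": 0}
def fake_request():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RuntimeError("429 Too Many Requests")
    return "ok"

print(call_with_backoff(fake_request, base_delay=0.01))  # ok
```

In a real application, you would replace `fake_request` with your model call and catch the SDK's own rate-limit exception instead of `RuntimeError`.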
Prerequisites
To complete this tutorial, you need:
- A GitHub account with access to GitHub Models.
- An Azure subscription with a valid payment method. If you don't have an Azure subscription, create a paid Azure account to begin. Alternatively, you can wait until you're ready to deploy your model to production, at which point you're prompted to create an Azure account or upgrade your existing account to a standard account.
- Foundry Models from partners and community require access to Azure Marketplace. Ensure you have the permissions required to subscribe to model offerings. Foundry Models sold directly by Azure don't have this requirement.
Upgrade to Azure AI Foundry Models
The rate limits for the playground and free API usage help you experiment with models and develop your AI application. When you're ready to bring your application to production, use a key and endpoint from a paid Azure account. You don't need to change anything else in your code.
To get the key and endpoint:
Go to GitHub Models and select a model to land on its playground. This article uses Mistral Large 24.11.
Type in some prompts or use some of the suggested prompts to interact with the model in the playground.
Select Use this model from the playground. This action opens up a window to "Get started with Models in your codebase".
In the "Configure authentication" step, select Get Azure AI key from the "Azure AI" section.
If you're already signed in to your Azure account, skip this step. However, if you don't have an Azure account or you're not signed in to your account, follow these steps:
If you don't have an Azure account, select Create my account and follow the steps to create one.
Alternatively, if you have an Azure account, select Sign back in. If your existing account is a free account, you first have to upgrade to a standard plan.
Return to the model's playground and select Get Azure AI key again.
Sign in to your Azure account.
You're taken to Azure AI Foundry > GitHub, and the page loads with your model's details. It might take one or two minutes to load your model details in Azure AI Foundry.
For Foundry Models from partners and community, you need to subscribe to Azure Marketplace. This requirement applies to Mistral-Large-2411, for example. Select Agree and Proceed to accept the terms.
Select the Deploy button to deploy the model to your account.
When your deployment is ready, you land on your project's Overview page, where you can see the Azure AI Foundry project's endpoint.
To get the specific model's endpoint URL and API key, go to the Models + endpoints tab in the left pane of the Azure AI Foundry portal and select the deployed model. The endpoint's target URI and API key are visible on the deployment's details page. Use these values in your code to use the model in your production environment.
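One way to pass these values to your code without hardcoding secrets is to store them as environment variables. The variable names below are illustrative choices (the code samples later in this article read the key from `AZURE_INFERENCE_CREDENTIAL`):

```shell
# Replace the placeholder values with the Target URI and API key
# shown on your deployment's details page.
export AZURE_INFERENCE_ENDPOINT="https://<resource>.services.ai.azure.com/models"
export AZURE_INFERENCE_CREDENTIAL="<your-api-key>"
```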
Use the new endpoint
To use your deployed model with code, you need the model's endpoint URL and key, which you saw in the previous section. You can use any of the supported SDKs to get predictions from the endpoint. The following SDKs are officially supported:
- OpenAI SDK
- Azure OpenAI SDK
- Azure AI Inference SDK
For more details and examples, see supported languages and SDKs. The following example shows how to use the Azure AI Inference SDK with the newly deployed model:
Install the package `azure-ai-inference` using your package manager, like pip:

```bash
pip install azure-ai-inference
```
Then, you can use the package to consume the model. The following example shows how to create a client to consume chat completions:
```python
import os
from azure.ai.inference import ChatCompletionsClient
from azure.core.credentials import AzureKeyCredential

client = ChatCompletionsClient(
    endpoint="https://<resource>.services.ai.azure.com/models",
    credential=AzureKeyCredential(os.environ["AZURE_INFERENCE_CREDENTIAL"]),
)
```
Explore our samples and read the API reference documentation to get started.
Generate your first chat completion:
```python
from azure.ai.inference.models import SystemMessage, UserMessage

response = client.complete(
    messages=[
        SystemMessage(content="You are a helpful assistant."),
        UserMessage(content="Explain Riemann's conjecture in 1 paragraph"),
    ],
    model="mistral-large",
)

print(response.choices[0].message.content)
```
Use the parameter `model="<deployment-name>"` to route your request to this deployment. Deployments work as an alias of a given model under certain configurations.
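The alias relationship can be pictured with a plain-Python sketch. This is conceptual only, not how the service is implemented, and the deployment names and settings below are made up for illustration:

```python
# Conceptual sketch: a deployment is a name that binds a model to a configuration.
deployments = {
    "mistral-large": {"model": "Mistral-Large-2411", "content_filter": "default"},
    "mistral-large-strict": {"model": "Mistral-Large-2411", "content_filter": "strict"},
}

def resolve(deployment_name):
    """Return the model and configuration behind a deployment name."""
    return deployments[deployment_name]

print(resolve("mistral-large")["model"])  # Mistral-Large-2411
```

Two deployments can point at the same underlying model with different configurations, which is why your code references the deployment name rather than the model name directly.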
Important
Unlike GitHub Models, where all the models are already configured, the Azure AI Services resource lets you control which models are available in your endpoint and under which configuration. Add as many models as you plan to use before referencing them in the `model` parameter. Learn how to add more models to your resource.
Explore additional features
Azure AI Foundry Models supports extra features that aren't available in GitHub Models, including:
- Explore the model catalog to see more models.
- Configure key-less authentication.
- Configure content filtering.
- Configure rate limiting for specific models.
- Explore additional deployment SKUs for specific models.
- Configure private networking.
Troubleshooting
For more help, see the FAQ section.
Related content
- Explore the model catalog in Azure AI Foundry portal.
- Add more models to your endpoint.