How to use Azure OpenAI image generation models

OpenAI's image generation models render images based on user-provided text prompts and optionally provided images. This guide demonstrates how to use the image generation models and configure their options through REST API calls.

Prerequisites

  • An Azure subscription.
  • An Azure OpenAI resource with a DALL-E 3 or GPT-image-1 model deployed. You need the resource's endpoint and API key to make calls.

Call the Image Generation API

The following command shows the most basic way to use an image model with code. If this is your first time using these models programmatically, we recommend starting with the quickstart.

Send a POST request to:

https://<your_resource_name>.openai.azure.com/openai/deployments/<your_deployment_name>/images/generations?api-version=<api_version>

URL:

Replace the following values:

  • <your_resource_name> is the name of your Azure OpenAI resource.
  • <your_deployment_name> is the name of your DALL-E 3 or GPT-image-1 model deployment.
  • <api_version> is the version of the API you want to use. For example, 2025-04-01-preview.

Required headers:

  • Content-Type: application/json
  • api-key: <your_API_key>

Body:

The following is a sample request body. You specify a number of options, defined in later sections.

{
    "prompt": "A multi-colored umbrella on the beach, disposable camera",
    "model": "gpt-image-1",
    "size": "1024x1024", 
    "n": 1,
    "quality": "high"
}
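As a sketch, the request above can be sent with Python's standard library. The resource name, deployment name, and key placeholders are the same ones described earlier; replace them with your own values before calling.

```python
import json
import urllib.request

# Placeholders -- substitute your own resource, deployment, and API key.
RESOURCE = "<your_resource_name>"
DEPLOYMENT = "<your_deployment_name>"
API_VERSION = "2025-04-01-preview"
API_KEY = "<your_API_key>"

# Image Generation endpoint, as documented above.
url = (
    f"https://{RESOURCE}.openai.azure.com/openai/deployments/"
    f"{DEPLOYMENT}/images/generations?api-version={API_VERSION}"
)

# Same body as the sample request above.
body = {
    "prompt": "A multi-colored umbrella on the beach, disposable camera",
    "model": "gpt-image-1",
    "size": "1024x1024",
    "n": 1,
    "quality": "high",
}

def generate_image():
    """POST the request and return the parsed JSON response."""
    req = urllib.request.Request(
        url,
        data=json.dumps(body).encode("utf-8"),
        headers={"Content-Type": "application/json", "api-key": API_KEY},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```

Calling `generate_image()` with real values returns the JSON structure shown in the next section.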

Output

The response from a successful image generation API call looks like the following example. The url field contains a URL where you can download the generated image. The URL stays active for 24 hours.

{ 
    "created": 1698116662, 
    "data": [ 
        { 
            "url": "<URL_to_generated_image>",
            "revised_prompt": "<prompt_that_was_used>" 
        }
    ]
} 
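Because the URL expires after 24 hours, you typically download the image right away. The sketch below extracts the url field from a response; the helper name is illustrative, not part of the API.

```python
import json
import urllib.request

def save_generated_image(response_json: str, path: str) -> None:
    """Download the first generated image; URLs expire after 24 hours."""
    data = json.loads(response_json)
    image_url = data["data"][0]["url"]
    with urllib.request.urlopen(image_url) as resp, open(path, "wb") as f:
        f.write(resp.read())

# Parsing example (no network): extract the url field from a sample response.
sample = (
    '{"created": 1698116662, "data": '
    '[{"url": "https://example.com/img.png", "revised_prompt": "..."}]}'
)
first_url = json.loads(sample)["data"][0]["url"]
```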

API call rejection

Prompts and images are filtered based on our content policy, returning an error when a prompt or image is flagged.

If your prompt is flagged, the error.code value in the response is set to contentFilter. Here's an example:

{
    "created": 1698435368,
    "error":
    {
        "code": "contentFilter",
        "message": "Your task failed as a result of our safety system."
    }
}

It's also possible that the generated image itself is filtered. In this case, the error message is set to Generated image was filtered as a result of our safety system. Here's an example:

{
    "created": 1698435368,
    "error":
    {
        "code": "contentFilter",
        "message": "Generated image was filtered as a result of our safety system."
    }
}
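A caller can distinguish a safety-system rejection from a normal response by checking for the error object and its code field. The helper name below is illustrative.

```python
import json

def content_filter_message(response_json: str):
    """Return the safety-system message if the call was rejected, else None."""
    data = json.loads(response_json)
    error = data.get("error")
    if error and error.get("code") == "contentFilter":
        return error["message"]
    return None

# Example: the prompt-rejection response shown above.
rejected = (
    '{"created": 1698435368, "error": {"code": "contentFilter", '
    '"message": "Your task failed as a result of our safety system."}}'
)
msg = content_filter_message(rejected)
```

For a successful response (no error object), the helper returns None and you can read data[0].url as usual.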

Write text-to-image prompts

Your prompts should describe the content you want to see in the image and the visual style of the image.

When you write prompts, consider that the Image APIs come with a content moderation filter. If the service recognizes your prompt as harmful content, it doesn't generate an image. For more information, see Content filtering.

Tip

For a thorough look at how you can tweak your text prompts to generate different kinds of images, see the Image prompt engineering guide.

Specify API options

The following API body parameters are available for image generation models.

Size

Specify the size of the generated images. Must be one of 1024x1024, 1024x1536, or 1536x1024 for GPT-image-1 models. Square images are faster to generate.

Quality

There are three options for image quality: low, medium, and high. Lower-quality images can be generated faster.

The default value is high.

Number

You can generate between 1 and 10 images in a single API call. The default value is 1.

User ID

Use the user parameter to specify a unique identifier for the user making the request. This is useful for tracking and monitoring usage patterns. The value can be any string, such as a user ID or email address.

Output format

Use the output_format parameter to specify the format of the generated image. Supported formats are PNG and JPEG. The default is PNG.

Note

WEBP images aren't supported in Azure OpenAI in Azure AI Foundry Models.

Compression

Use the output_compression parameter to specify the compression level for the generated image. Input an integer between 0 and 100, where 0 is no compression and 100 is maximum compression. The default is 100.
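Putting these options together, a request body that asks for two medium-quality JPEG images at a compression level of 80 might look like the following. The values are illustrative; check the API reference for the exact accepted values.

```json
{
    "prompt": "A multi-colored umbrella on the beach, disposable camera",
    "model": "gpt-image-1",
    "size": "1536x1024",
    "n": 2,
    "quality": "medium",
    "user": "user-1234",
    "output_format": "jpeg",
    "output_compression": 80
}
```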

Call the Image Edit API

The Image Edit API allows you to modify existing images based on text prompts you provide. The API call is similar to the image generation API call, but you also need to provide an image URL or Base64-encoded image data.

Send a POST request to:

https://<your_resource_name>.openai.azure.com/openai/deployments/<your_deployment_name>/images/edits?api-version=<api_version>

URL:

Replace the following values:

  • <your_resource_name> is the name of your Azure OpenAI resource.
  • <your_deployment_name> is the name of your DALL-E 3 or GPT-image-1 model deployment.
  • <api_version> is the version of the API you want to use. For example, 2025-04-01-preview.

Required headers:

  • Content-Type: multipart/form-data
  • api-key: <your_API_key>

Body:

The following is a sample request body. You specify a number of options, defined in later sections.

Important

The Image Edit API takes multipart/form-data, not JSON data. The following example shows sample form data that would be attached to a cURL request.

-F "image[]=@beach.png" \
-F 'prompt=Add a beach ball in the center' \
-F "model=gpt-image-1" \
-F "size=1024x1024" \
-F "n=1" \
-F "quality=high"
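As a minimal sketch, the same edit request can be assembled in Python. The placeholder names mirror the ones above; sending the multipart body itself would typically use an HTTP client library such as the third-party requests package (shown in a comment, since the standard library has no convenient multipart support).

```python
# Placeholders -- substitute your own resource and deployment names.
RESOURCE = "<your_resource_name>"
DEPLOYMENT = "<your_deployment_name>"
API_VERSION = "2025-04-01-preview"

# Image Edit endpoint, as documented above.
edit_url = (
    f"https://{RESOURCE}.openai.azure.com/openai/deployments/"
    f"{DEPLOYMENT}/images/edits?api-version={API_VERSION}"
)

# Text fields mirroring the cURL form data above; values are strings
# because multipart/form-data carries them as text parts.
form_fields = {
    "prompt": "Add a beach ball in the center",
    "model": "gpt-image-1",
    "size": "1024x1024",
    "n": "1",
    "quality": "high",
}

# The image itself ("image[]=@beach.png" in the cURL example) is attached
# as a file part, for example with the requests library:
# requests.post(edit_url, headers={"api-key": API_KEY},
#               data=form_fields,
#               files={"image[]": open("beach.png", "rb")})
```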

Output

The response from a successful image editing API call looks like the following example. The url field contains a URL where you can download the generated image. The URL stays active for 24 hours.

{ 
    "created": 1698116662, 
    "data": [ 
        { 
            "url": "<URL_to_generated_image>",
            "revised_prompt": "<prompt_that_was_used>" 
        }
    ]
} 

Specify API options

The following API body parameters are available for image editing models, in addition to the ones available for image generation models.

Image

The image value indicates the image file you want to edit. It can be either a URL string to an image file or Base64-encoded image data.

Mask

The mask parameter is the same type as the main image input parameter. It defines the area of the image that you want the model to edit, using fully transparent pixels (alpha of zero) in those areas. The mask must be a Base64-encoded PNG file with the same dimensions as the input image.
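When you pass Base64-encoded data for the image or mask parameter, the encoding is plain Base64 of the file bytes. A small sketch, with an illustrative helper name:

```python
import base64

def encode_image(path: str) -> str:
    """Return Base64-encoded file contents for the image or mask parameter."""
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode("ascii")

# Example with in-memory bytes (the 8-byte PNG signature) instead of a file:
encoded = base64.b64encode(b"\x89PNG\r\n\x1a\n").decode("ascii")
```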