Note
Only a subset of operations are currently supported with the v1 API. To learn more, see the API version lifecycle guide.
Create chat completion
POST {endpoint}/openai/v1/chat/completions
Creates a chat completion.
Parameters
| Name | In | Required | Type | Description |
|---|---|---|---|---|
| endpoint | path | Yes | string (URL) | Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com; replace "aoairesource" with your Azure OpenAI resource name): https://{your-resource-name}.openai.azure.com |
| api-version | query | No | string | The explicit Azure AI Foundry Models API version to use for this request. Defaults to v1 if not otherwise specified. |
Request Header
Use either token-based authentication or an API key. Token-based authentication is recommended as the more secure option.
| Name | Required | Type | Description |
|---|---|---|---|
| Authorization | True | string | Example: Authorization: Bearer {Azure_OpenAI_Auth_Token}. To generate an auth token using the Azure CLI: az account get-access-token --resource https://cognitiveservices.azure.com. Type: oauth2. Authorization URL: https://login.microsoftonline.com/common/oauth2/v2.0/authorize. Scope: https://cognitiveservices.azure.com/.default |
| api-key | True | string | Provide the Azure OpenAI API key here. |
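The two mechanisms are mutually exclusive for a given request: send either the Authorization bearer header or the api-key header, not both. A minimal sketch of choosing between them (the helper name is illustrative, not part of the API):

```python
def auth_headers(api_key=None, bearer_token=None):
    """Build the auth header for a request, using exactly one of the two mechanisms.

    A bearer token can be obtained with the Azure CLI:
    az account get-access-token --resource https://cognitiveservices.azure.com
    """
    if bearer_token:
        return {"Authorization": f"Bearer {bearer_token}"}
    if api_key:
        return {"api-key": api_key}
    raise ValueError("provide either an API key or a bearer token")

print(auth_headers(bearer_token="eyJ..."))  # {'Authorization': 'Bearer eyJ...'}
```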
Request Body
Content-Type: application/json
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| audio | object | Parameters for audio output. Required when audio output is requested with modalities: ["audio"]. | No | |
| └─ format | enum | Specifies the output audio format. Must be one of wav, mp3, flac, opus, or pcm16. Possible values: wav, aac, mp3, flac, opus, pcm16 | No | |
| └─ voice | object | | No | |
| data_sources | array | The data sources to use for the On Your Data feature, exclusive to Azure OpenAI. | No | |
| frequency_penalty | number | Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim. | No | 0 |
| function_call | enum | Specifying a particular function via {"name": "my_function"} forces the model to call that function. Possible values: none, auto | No | |
| functions | array | Deprecated in favor of tools. A list of functions the model may generate JSON inputs for. | No | |
| logit_bias | object | Modify the likelihood of specified tokens appearing in the completion. Accepts a JSON object that maps tokens (specified by their token ID in the tokenizer) to an associated bias value from -100 to 100. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token. | No | None |
| logprobs | boolean | Whether to return log probabilities of the output tokens or not. If true, returns the log probabilities of each output token returned in the content of message. | No | False |
| max_completion_tokens | integer | An upper bound for the number of tokens that can be generated for a completion, including visible output tokens and reasoning tokens. | No | |
| max_tokens | integer | The maximum number of tokens that can be generated in the chat completion. This value can be used to control costs for text generated via API. This value is now deprecated in favor of max_completion_tokens, and is not compatible with o1 series models. | No | |
| messages | array | A list of messages comprising the conversation so far. Depending on the model you use, different message types (modalities) are supported, like text, images, and audio. | Yes | |
| metadata | object | Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard. Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters. | No | |
| modalities | object | Output types that you would like the model to generate. Most models are capable of generating text, which is the default: ["text"]. The gpt-4o-audio-preview model can also be used to generate audio. To request that this model generate both text and audio responses, you can use: ["text", "audio"] | No | |
| model | string | The model deployment identifier to use for the chat completion request. | Yes | |
| n | integer | How many chat completion choices to generate for each input message. Note that you will be charged based on the number of generated tokens across all of the choices. Keep n as 1 to minimize costs. | No | 1 |
| parallel_tool_calls | object | Whether to enable parallel function calling during tool use. | No | |
| prediction | object | Base representation of predicted output from a model. | No | |
| └─ type | OpenAI.ChatOutputPredictionType | | No | |
| presence_penalty | number | Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics. | No | 0 |
| reasoning_effort | object | (Reasoning models only.) Constrains effort on reasoning for reasoning models. Currently supported values are low, medium, and high. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response. | No | |
| response_format | object | | No | |
| └─ type | enum | Possible values: text, json_object, json_schema | No | |
| seed | integer | This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result. Determinism is not guaranteed, and you should refer to the system_fingerprint response parameter to monitor changes in the backend. | No | |
| stop | object | Not supported with the latest reasoning models o3 and o4-mini. Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence. | No | |
| store | boolean | Whether or not to store the output of this chat completion request for use in model distillation or evals products. | No | False |
| stream | boolean | If set to true, the model response data will be streamed to the client as it is generated using server-sent events. | No | False |
| stream_options | object | Options for streaming response. Only set this when you set stream: true. | No | |
| └─ include_usage | boolean | If set, an additional chunk will be streamed before the data: [DONE] message. The usage field on this chunk shows the token usage statistics for the entire request, and the choices field will always be an empty array. All other chunks will also include a usage field, but with a null value. NOTE: If the stream is interrupted, you may not receive the final usage chunk, which contains the total token usage for the request. | No | |
| temperature | number | What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both. | No | 1 |
| tool_choice | OpenAI.ChatCompletionToolChoiceOption | Controls which (if any) tool is called by the model. none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools. Specifying a particular tool via {"type": "function", "function": {"name": "my_function"}} forces the model to call that tool. none is the default when no tools are present; auto is the default if tools are present. | No | |
| tools | array | A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. A max of 128 functions are supported. | No | |
| top_logprobs | integer | An integer between 0 and 20 specifying the number of most likely tokens to return at each token position, each with an associated log probability. | No | |
| top_p | number | An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or temperature but not both. | No | 1 |
| user | string | A unique identifier representing your end-user, which can help to monitor and detect abuse. | No | |
| user_security_context | AzureUserSecurityContext | User security context contains several parameters that describe the application itself, and the end user that interacts with the application. These fields assist your security operations teams to investigate and mitigate security incidents by providing a comprehensive approach to protecting your AI applications. Learn more about protecting AI applications using Microsoft Defender for Cloud. | No | |
Responses
Status Code: 200
Description: The request has succeeded.
| Content-Type | Type | Description |
|---|---|---|
| application/json | AzureCreateChatCompletionResponse | |
| text/event-stream | AzureCreateChatCompletionStreamResponse |
Status Code: default
Description: An unexpected error response.
| Content-Type | Type | Description |
|---|---|---|
| application/json | AzureErrorResponse |
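When stream is set to true, the 200 response arrives as text/event-stream rather than a single JSON document: a sequence of `data: {json}` lines terminated by `data: [DONE]`. A minimal sketch of decoding the assistant text from such a stream (the sample chunks below are illustrative and omit fields a real chunk carries):

```python
import json

def parse_sse_content(raw_stream: str) -> str:
    """Concatenate assistant text deltas from a raw chat-completions SSE stream."""
    pieces = []
    for line in raw_stream.splitlines():
        line = line.strip()
        if not line.startswith("data: "):
            continue  # ignore blank lines and non-data fields
        data = line[len("data: "):]
        if data == "[DONE]":
            break  # end-of-stream sentinel
        chunk = json.loads(data)
        for choice in chunk.get("choices", []):
            delta = choice.get("delta", {})
            if delta.get("content"):
                pieces.append(delta["content"])
    return "".join(pieces)

# Illustrative stream (real chunks also carry id, created, model, etc.):
sample = (
    'data: {"choices": [{"delta": {"content": "Ahoy"}}]}\n'
    'data: {"choices": [{"delta": {"content": " matey!"}}]}\n'
    'data: [DONE]\n'
)
print(parse_sse_content(sample))  # Ahoy matey!
```

Note that per the stream_options table above, the final usage chunk has an empty choices array, so a parser like this one skips it naturally.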
Examples
Example
Creates a chat completion for the provided messages, parameters, and chosen model.
POST {endpoint}/openai/v1/chat/completions

```json
{
  "model": "gpt-4o-mini",
  "messages": [
    {
      "role": "system",
      "content": "you are a helpful assistant that talks like a pirate"
    },
    {
      "role": "user",
      "content": "can you tell me how to care for a parrot?"
    }
  ]
}
```
Responses: Status Code: 200

```json
{
  "body": {
    "id": "chatcmpl-7R1nGnsXO8n4oi9UPz2f3UHdgAYMn",
    "created": 1686676106,
    "choices": [
      {
        "index": 0,
        "finish_reason": "stop",
        "message": {
          "role": "assistant",
          "content": "Ahoy matey! So ye be wantin' to care for a fine squawkin' parrot, eh?..."
        }
      }
    ],
    "usage": {
      "completion_tokens": 557,
      "prompt_tokens": 33,
      "total_tokens": 590
    }
  }
}
```
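The example request above can be assembled by a client as follows. This is a sketch: the helper name and the urllib send are illustrative, not part of the API, and the api-key value is a placeholder.

```python
import json

def build_chat_completion_request(endpoint, api_key, model, messages):
    """Assemble the URL, headers, and JSON body for a chat completion call.

    endpoint is the resource endpoint, e.g. https://{your-resource-name}.openai.azure.com;
    model is the model deployment identifier from the request body table above.
    """
    url = f"{endpoint}/openai/v1/chat/completions"
    headers = {
        "api-key": api_key,  # or "Authorization": f"Bearer {token}" for token-based auth
        "Content-Type": "application/json",
    }
    body = {"model": model, "messages": messages}
    return url, headers, json.dumps(body)

url, headers, payload = build_chat_completion_request(
    "https://aoairesource.openai.azure.com",
    "<your-api-key>",
    "gpt-4o-mini",
    [
        {"role": "system", "content": "you are a helpful assistant that talks like a pirate"},
        {"role": "user", "content": "can you tell me how to care for a parrot?"},
    ],
)
# Send with any HTTP client, for example:
# import urllib.request
# resp = urllib.request.urlopen(urllib.request.Request(url, payload.encode(), headers))
```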
Create embedding
POST {endpoint}/openai/v1/embeddings
Creates an embedding vector representing the input text.
Parameters
| Name | In | Required | Type | Description |
|---|---|---|---|---|
| endpoint | path | Yes | string (URL) | Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com; replace "aoairesource" with your Azure OpenAI resource name): https://{your-resource-name}.openai.azure.com |
| api-version | query | No | string | The explicit Azure AI Foundry Models API version to use for this request. Defaults to v1 if not otherwise specified. |
Request Header
Use either token-based authentication or an API key. Token-based authentication is recommended as the more secure option.
| Name | Required | Type | Description |
|---|---|---|---|
| Authorization | True | string | Example: Authorization: Bearer {Azure_OpenAI_Auth_Token}. To generate an auth token using the Azure CLI: az account get-access-token --resource https://cognitiveservices.azure.com. Type: oauth2. Authorization URL: https://login.microsoftonline.com/common/oauth2/v2.0/authorize. Scope: https://cognitiveservices.azure.com/.default |
| api-key | True | string | Provide the Azure OpenAI API key here. |
Request Body
Content-Type: application/json
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| dimensions | integer | The number of dimensions the resulting output embeddings should have. Only supported in text-embedding-3 and later models. | No | |
| encoding_format | enum | The format to return the embeddings in. Can be either float or base64. Possible values: float, base64 | No | |
| input | string or array | | Yes | |
| model | string | The model to use for the embedding request. | Yes | |
| user | string | A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse. | No | |
Responses
Status Code: 200
Description: The request has succeeded.
| Content-Type | Type | Description |
|---|---|---|
| application/json | OpenAI.CreateEmbeddingResponse |
Status Code: default
Description: An unexpected error response.
| Content-Type | Type | Description |
|---|---|---|
| application/json | AzureErrorResponse |
Examples
Example
Returns the embeddings for a given prompt.
POST {endpoint}/openai/v1/embeddings

```json
{
  "model": "text-embedding-ada-002",
  "input": [
    "this is a test"
  ]
}
```
Responses: Status Code: 200

```json
{
  "body": {
    "data": [
      {
        "index": 0,
        "embedding": [
          -0.012838088,
          -0.007421397,
          -0.017617522,
          -0.028278312,
          -0.018666342,
          0.01737855,
          -0.01821495,
          -0.006950092,
          -0.009937238,
          -0.038580645,
          0.010674067,
          0.02412286,
          -0.013647936,
          0.013189907,
          0.0021125758,
          0.012406612,
          0.020790534,
          0.00074595667,
          0.008397198,
          -0.00535031,
          0.008968075,
          0.014351576,
          -0.014086051,
          0.015055214,
          -0.022211088,
          -0.025198232,
          0.0065186154,
          -0.036350243,
          0.009180495,
          -0.009698266,
          0.009446018,
          -0.008463579,
          -0.0040426035,
          -0.03443847,
          -0.00091273896,
          -0.0019217303,
          0.002349888,
          -0.021560553,
          0.016515596,
          -0.015572986,
          0.0038666942,
          -8.432463e-05,
          0.0032178196,
          -0.020365695,
          -0.009631885,
          -0.007647093,
          0.0033837722,
          -0.026764825,
          -0.010501476,
          0.020219658,
          0.024640633,
          -0.0066912062,
          -0.036456455,
          -0.0040923897,
          -0.013966565,
          0.017816665,
          0.005366905,
          0.022835068,
          0.0103488,
          -0.0010811808,
          -0.028942121,
          0.0074280356,
          -0.017033368,
          0.0074877786,
          0.021640211,
          0.002499245,
          0.013316032,
          0.0021524043,
          0.010129742,
          0.0054731146,
          0.03143805,
          0.014856071,
          0.0023366117,
          -0.0008243692,
          0.022781964,
          0.003038591,
          -0.017617522,
          0.0013309394,
          0.0022154662,
          0.00097414135,
          0.012041516,
          -0.027906578,
          -0.023817508,
          0.013302756,
          -0.003003741,
          -0.006890349,
          0.0016744611
        ]
      }
    ],
    "usage": {
      "prompt_tokens": 4,
      "total_tokens": 4
    }
  }
}
```
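A typical client builds the request body from the table above and then compares the returned vectors, for example with cosine similarity. A sketch (the helper names are illustrative; the endpoint URL is the placeholder form used throughout this page):

```python
import json
import math

def build_embedding_request(endpoint, model, texts):
    """Assemble the URL and JSON body for the embeddings endpoint."""
    url = f"{endpoint}/openai/v1/embeddings"
    body = {"model": model, "input": list(texts)}
    return url, json.dumps(body)

def cosine_similarity(a, b):
    """Common use of the returned vectors: measure how similar two texts are."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

url, payload = build_embedding_request(
    "https://aoairesource.openai.azure.com", "text-embedding-ada-002", ["this is a test"]
)
print(cosine_similarity([1.0, 0.0], [1.0, 0.0]))  # 1.0
```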
List evals
GET {endpoint}/openai/v1/evals
List evaluations for a project.
Parameters
| Name | In | Required | Type | Description |
|---|---|---|---|---|
| endpoint | path | Yes | string (URL) | Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com; replace "aoairesource" with your Azure OpenAI resource name): https://{your-resource-name}.openai.azure.com |
| api-version | query | No | string | The explicit Azure AI Foundry Models API version to use for this request. Defaults to v1 if not otherwise specified. |
| aoai-evals | header | Yes | string (possible values: preview) | Enables access to AOAI Evals, a preview feature. This feature requires the aoai-evals header to be set to preview. |
| after | query | No | string | Identifier for the last eval from the previous pagination request. |
| limit | query | No | integer | A limit on the number of evals to be returned in a single pagination response. |
| order | query | No | string (possible values: asc, desc) | Sort order for evals by timestamp. Use asc for ascending order or desc for descending order. |
| order_by | query | No | string (possible values: created_at, updated_at) | Evals can be ordered by creation time or last updated time. Use created_at for creation time or updated_at for last updated time. |
Request Header
Use either token-based authentication or an API key. Token-based authentication is recommended as the more secure option.
| Name | Required | Type | Description |
|---|---|---|---|
| Authorization | True | string | Example: Authorization: Bearer {Azure_OpenAI_Auth_Token}. To generate an auth token using the Azure CLI: az account get-access-token --resource https://cognitiveservices.azure.com. Type: oauth2. Authorization URL: https://login.microsoftonline.com/common/oauth2/v2.0/authorize. Scope: https://cognitiveservices.azure.com/.default |
| api-key | True | string | Provide the Azure OpenAI API key here. |
Responses
Status Code: 200
Description: The request has succeeded.
| Content-Type | Type | Description |
|---|---|---|
| application/json | OpenAI.EvalList |
Status Code: default
Description: An unexpected error response.
| Content-Type | Type | Description |
|---|---|---|
| application/json | AzureErrorResponse |
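Putting the query parameters and the required preview header together, a list-evals request might be assembled like this (a sketch; the helper name and placeholder key are illustrative):

```python
from urllib.parse import urlencode

def build_list_evals_request(endpoint, after=None, limit=None, order=None, order_by=None):
    """Build the GET URL and headers for listing evals.

    The aoai-evals: preview header is required while the feature is in preview.
    Only parameters that were actually supplied are added to the query string.
    """
    params = {
        k: v
        for k, v in {"after": after, "limit": limit, "order": order, "order_by": order_by}.items()
        if v is not None
    }
    url = f"{endpoint}/openai/v1/evals"
    if params:
        url += "?" + urlencode(params)
    headers = {"aoai-evals": "preview", "api-key": "<your-api-key>"}
    return url, headers

url, headers = build_list_evals_request(
    "https://aoairesource.openai.azure.com", limit=20, order="desc", order_by="created_at"
)
print(url)
```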
Create eval
POST {endpoint}/openai/v1/evals
Create the structure of an evaluation that can be used to test a model's performance.
An evaluation is a set of testing criteria and a data source. After creating an evaluation, you can run it on different models and model parameters. We support several types of graders and data sources.
For more information, see the Evals guide.
Note
This Azure OpenAI operation is in preview and subject to change.
Parameters
| Name | In | Required | Type | Description |
|---|---|---|---|---|
| endpoint | path | Yes | string (URL) | Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com; replace "aoairesource" with your Azure OpenAI resource name): https://{your-resource-name}.openai.azure.com |
| api-version | query | No | string | The explicit Azure AI Foundry Models API version to use for this request. Defaults to v1 if not otherwise specified. |
| aoai-evals | header | Yes | string (possible values: preview) | Enables access to AOAI Evals, a preview feature. This feature requires the aoai-evals header to be set to preview. |
Request Header
Use either token-based authentication or an API key. Token-based authentication is recommended as the more secure option.
| Name | Required | Type | Description |
|---|---|---|---|
| Authorization | True | string | Example: Authorization: Bearer {Azure_OpenAI_Auth_Token}. To generate an auth token using the Azure CLI: az account get-access-token --resource https://cognitiveservices.azure.com. Type: oauth2. Authorization URL: https://login.microsoftonline.com/common/oauth2/v2.0/authorize. Scope: https://cognitiveservices.azure.com/.default |
| api-key | True | string | Provide the Azure OpenAI API key here. |
Request Body
Content-Type: application/json
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| data_source_config | object | | Yes | |
| └─ type | OpenAI.EvalDataSourceConfigType | | No | |
| metadata | object | Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard. Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters. | No | |
| name | string | The name of the evaluation. | No | |
| statusCode | enum | Possible values: 201 | Yes | |
| testing_criteria | array | A list of graders for all eval runs in this group. Graders can reference variables in the data source using double curly braces notation, like {{item.variable_name}}. To reference the model's output, use the sample namespace (i.e., {{sample.output_text}}). | Yes | |
Responses
Status Code: 200
Description: The request has succeeded.
| Content-Type | Type | Description |
|---|---|---|
| application/json | OpenAI.Eval |
Status Code: default
Description: An unexpected error response.
| Content-Type | Type | Description |
|---|---|---|
| application/json | AzureErrorResponse |
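To make the templating notation concrete, here is a sketch of a create-eval body. The top-level fields come from the request body table above; the inner shapes of data_source_config and the grader (and all the specific values) are assumptions for illustration, not a guaranteed schema.

```python
import json

eval_body = {
    "name": "sentiment-eval",                  # hypothetical evaluation name
    "metadata": {"team": "nlp"},               # up to 16 key-value pairs
    "data_source_config": {"type": "custom"},  # type is an OpenAI.EvalDataSourceConfigType (assumed value)
    "testing_criteria": [
        {
            "type": "string_check",             # illustrative grader type
            "input": "{{sample.output_text}}",  # sample namespace references the model's output
            "reference": "{{item.expected}}",   # item namespace references a data source variable
        }
    ],
}
payload = json.dumps(eval_body)
# POST payload to {endpoint}/openai/v1/evals with the aoai-evals: preview header set.
```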
Get eval
GET {endpoint}/openai/v1/evals/{eval_id}
Retrieves an evaluation by its ID.
Note
This Azure OpenAI operation is in preview and subject to change.
Parameters
| Name | In | Required | Type | Description |
|---|---|---|---|---|
| endpoint | path | Yes | string (URL) | Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com; replace "aoairesource" with your Azure OpenAI resource name): https://{your-resource-name}.openai.azure.com |
| api-version | query | No | string | The explicit Azure AI Foundry Models API version to use for this request. Defaults to v1 if not otherwise specified. |
| aoai-evals | header | Yes | string (possible values: preview) | Enables access to AOAI Evals, a preview feature. This feature requires the aoai-evals header to be set to preview. |
| eval_id | path | Yes | string | |
Request Header
Use either token-based authentication or an API key. Token-based authentication is recommended as the more secure option.
| Name | Required | Type | Description |
|---|---|---|---|
| Authorization | True | string | Example: Authorization: Bearer {Azure_OpenAI_Auth_Token}. To generate an auth token using the Azure CLI: az account get-access-token --resource https://cognitiveservices.azure.com. Type: oauth2. Authorization URL: https://login.microsoftonline.com/common/oauth2/v2.0/authorize. Scope: https://cognitiveservices.azure.com/.default |
| api-key | True | string | Provide the Azure OpenAI API key here. |
Responses
Status Code: 200
Description: The request has succeeded.
| Content-Type | Type | Description |
|---|---|---|
| application/json | OpenAI.Eval |
Status Code: default
Description: An unexpected error response.
| Content-Type | Type | Description |
|---|---|---|
| application/json | AzureErrorResponse |
Update eval
POST {endpoint}/openai/v1/evals/{eval_id}
Update select, mutable properties of a specified evaluation.
Note
This Azure OpenAI operation is in preview and subject to change.
Parameters
| Name | In | Required | Type | Description |
|---|---|---|---|---|
| endpoint | path | Yes | string (URL) | Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com; replace "aoairesource" with your Azure OpenAI resource name): https://{your-resource-name}.openai.azure.com |
| api-version | query | No | string | The explicit Azure AI Foundry Models API version to use for this request. Defaults to v1 if not otherwise specified. |
| aoai-evals | header | Yes | string (possible values: preview) | Enables access to AOAI Evals, a preview feature. This feature requires the aoai-evals header to be set to preview. |
| eval_id | path | Yes | string | |
Request Header
Use either token-based authentication or an API key. Token-based authentication is recommended as the more secure option.
| Name | Required | Type | Description |
|---|---|---|---|
| Authorization | True | string | Example: Authorization: Bearer {Azure_OpenAI_Auth_Token}. To generate an auth token using the Azure CLI: az account get-access-token --resource https://cognitiveservices.azure.com. Type: oauth2. Authorization URL: https://login.microsoftonline.com/common/oauth2/v2.0/authorize. Scope: https://cognitiveservices.azure.com/.default |
| api-key | True | string | Provide the Azure OpenAI API key here. |
Request Body
Content-Type: application/json
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| metadata | OpenAI.MetadataPropertyForRequest | Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard. Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters. | No | |
| name | string | | No | |
Responses
Status Code: 200
Description: The request has succeeded.
| Content-Type | Type | Description |
|---|---|---|
| application/json | OpenAI.Eval |
Status Code: default
Description: An unexpected error response.
| Content-Type | Type | Description |
|---|---|---|
| application/json | AzureErrorResponse |
Delete eval
DELETE {endpoint}/openai/v1/evals/{eval_id}
Delete a specified evaluation.
Note
This Azure OpenAI operation is in preview and subject to change.
Parameters
| Name | In | Required | Type | Description |
|---|---|---|---|---|
| endpoint | path | Yes | string (URL) | Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com; replace "aoairesource" with your Azure OpenAI resource name): https://{your-resource-name}.openai.azure.com |
| api-version | query | No | string | The explicit Azure AI Foundry Models API version to use for this request. Defaults to v1 if not otherwise specified. |
| aoai-evals | header | Yes | string (possible values: preview) | Enables access to AOAI Evals, a preview feature. This feature requires the aoai-evals header to be set to preview. |
| eval_id | path | Yes | string | |
Request Header
Use either token-based authentication or an API key. Token-based authentication is recommended as the more secure option.
| Name | Required | Type | Description |
|---|---|---|---|
| Authorization | True | string | Example: Authorization: Bearer {Azure_OpenAI_Auth_Token}. To generate an auth token using the Azure CLI: az account get-access-token --resource https://cognitiveservices.azure.com. Type: oauth2. Authorization URL: https://login.microsoftonline.com/common/oauth2/v2.0/authorize. Scope: https://cognitiveservices.azure.com/.default |
| api-key | True | string | Provide the Azure OpenAI API key here. |
Responses
Status Code: 200
Description: The request has succeeded.
| Content-Type | Type | Description |
|---|---|---|
| application/json | object |
Status Code: default
Description: An unexpected error response.
| Content-Type | Type | Description |
|---|---|---|
| application/json | AzureErrorResponse |
Get eval runs
GET {endpoint}/openai/v1/evals/{eval_id}/runs
Retrieve a list of runs for a specified evaluation.
Note
This Azure OpenAI operation is in preview and subject to change.
Parameters
| Name | In | Required | Type | Description |
|---|---|---|---|---|
| endpoint | path | Yes | string (URL) | Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com; replace "aoairesource" with your Azure OpenAI resource name): https://{your-resource-name}.openai.azure.com |
| api-version | query | No | string | The explicit Azure AI Foundry Models API version to use for this request. Defaults to v1 if not otherwise specified. |
| aoai-evals | header | Yes | string (possible values: preview) | Enables access to AOAI Evals, a preview feature. This feature requires the aoai-evals header to be set to preview. |
| eval_id | path | Yes | string | |
| after | query | No | string | |
| limit | query | No | integer | |
| order | query | No | string (possible values: asc, desc) | |
| status | query | No | string (possible values: queued, in_progress, completed, canceled, failed) | |
Request Header
Use either token-based authentication or an API key. Token-based authentication is recommended as the more secure option.
| Name | Required | Type | Description |
|---|---|---|---|
| Authorization | True | string | Example: Authorization: Bearer {Azure_OpenAI_Auth_Token}. To generate an auth token using the Azure CLI: az account get-access-token --resource https://cognitiveservices.azure.com. Type: oauth2. Authorization URL: https://login.microsoftonline.com/common/oauth2/v2.0/authorize. Scope: https://cognitiveservices.azure.com/.default |
| api-key | True | string | Provide the Azure OpenAI API key here. |
Responses
Status Code: 200
Description: The request has succeeded.
| Content-Type | Type | Description |
|---|---|---|
| application/json | OpenAI.EvalRunList |
Status Code: default
Description: An unexpected error response.
| Content-Type | Type | Description |
|---|---|---|
| application/json | AzureErrorResponse |
Create eval run
POST {endpoint}/openai/v1/evals/{eval_id}/runs
Create a new evaluation run, beginning the grading process.
Note
This Azure OpenAI operation is in preview and subject to change.
Parameters
| Name | In | Required | Type | Description |
|---|---|---|---|---|
| endpoint | path | Yes | string (URL) | Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com; replace "aoairesource" with your Azure OpenAI resource name): https://{your-resource-name}.openai.azure.com |
| api-version | query | No | string | The explicit Azure AI Foundry Models API version to use for this request. Defaults to v1 if not otherwise specified. |
| aoai-evals | header | Yes | string (possible values: preview) | Enables access to AOAI Evals, a preview feature. This feature requires the aoai-evals header to be set to preview. |
| eval_id | path | Yes | string | |
Request Header
Use either token-based authentication or an API key. Token-based authentication is recommended as the more secure option.
| Name | Required | Type | Description |
|---|---|---|---|
| Authorization | True | string | Example: Authorization: Bearer {Azure_OpenAI_Auth_Token}. To generate an auth token using the Azure CLI: az account get-access-token --resource https://cognitiveservices.azure.com. Type: oauth2. Authorization URL: https://login.microsoftonline.com/common/oauth2/v2.0/authorize. Scope: https://cognitiveservices.azure.com/.default |
| api-key | True | string | Provide the Azure OpenAI API key here. |
Request Body
Content-Type: application/json
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| data_source | object | | Yes | |
| └─ type | OpenAI.EvalRunDataSourceType | | No | |
| metadata | object | Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard. Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters. | No | |
| name | string | The name of the run. | No | |
Responses
Status Code: 201
Description: The request has succeeded and a new resource has been created as a result.
| Content-Type | Type | Description |
|---|---|---|
| application/json | OpenAI.EvalRun |
Status Code: default
Description: An unexpected error response.
| Content-Type | Type | Description |
|---|---|---|
| application/json | AzureErrorResponse |
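A sketch of assembling the create-run request, with the eval ID interpolated into the path. The helper name, the eval ID, and the data_source shape are illustrative placeholders, not a guaranteed schema.

```python
import json

def build_create_run_request(endpoint, eval_id, name, data_source):
    """Build the URL, headers, and JSON body for POST .../evals/{eval_id}/runs."""
    url = f"{endpoint}/openai/v1/evals/{eval_id}/runs"
    headers = {"aoai-evals": "preview", "Content-Type": "application/json"}
    body = {"name": name, "data_source": data_source}
    return url, headers, json.dumps(body)

url, headers, payload = build_create_run_request(
    "https://aoairesource.openai.azure.com",
    "eval_123",                   # hypothetical eval ID returned by Create eval
    "nightly-run",
    {"type": "completions"},      # type is an OpenAI.EvalRunDataSourceType (assumed value)
)
```

A successful request returns 201 with an OpenAI.EvalRun body, whose status can then be polled via Get eval run.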
Get eval run
GET {endpoint}/openai/v1/evals/{eval_id}/runs/{run_id}
Retrieve a specific evaluation run by its ID.
Note
This Azure OpenAI operation is in preview and subject to change.
Parameters
| Name | In | Required | Type | Description |
|---|---|---|---|---|
| endpoint | path | Yes | string (URL) | Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com; replace "aoairesource" with your Azure OpenAI resource name): https://{your-resource-name}.openai.azure.com |
| api-version | query | No | string | The explicit Azure AI Foundry Models API version to use for this request. Defaults to v1 if not otherwise specified. |
| aoai-evals | header | Yes | string (possible values: preview) | Enables access to AOAI Evals, a preview feature. This feature requires the aoai-evals header to be set to preview. |
| eval_id | path | Yes | string | |
| run_id | path | Yes | string | |
Request Header
Use either token-based authentication or an API key. Token-based authentication is recommended as the more secure option.
| Name | Required | Type | Description |
|---|---|---|---|
| Authorization | True | string | Example: Authorization: Bearer {Azure_OpenAI_Auth_Token}. To generate an auth token using the Azure CLI: az account get-access-token --resource https://cognitiveservices.azure.com. Type: oauth2. Authorization URL: https://login.microsoftonline.com/common/oauth2/v2.0/authorize. Scope: https://cognitiveservices.azure.com/.default |
| api-key | True | string | Provide the Azure OpenAI API key here. |
Responses
Status Code: 200
Description: The request has succeeded.
| Content-Type | Type | Description |
|---|---|---|
| application/json | OpenAI.EvalRun |
Status Code: default
Description: An unexpected error response.
| Content-Type | Type | Description |
|---|---|---|
| application/json | AzureErrorResponse |
Cancel eval run
POST {endpoint}/openai/v1/evals/{eval_id}/runs/{run_id}
Cancel a specific evaluation run by its ID.
Note
This Azure OpenAI operation is in preview and subject to change.
Parameters
| Name | In | Required | Type | Description |
|---|---|---|---|---|
| endpoint | path | Yes | string (URL) | Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com; replace "aoairesource" with your Azure OpenAI resource name): https://{your-resource-name}.openai.azure.com |
| api-version | query | No | string | The explicit Azure AI Foundry Models API version to use for this request. Defaults to v1 if not otherwise specified. |
| aoai-evals | header | Yes | string (possible values: preview) | Enables access to AOAI Evals, a preview feature. This feature requires the aoai-evals header to be set to preview. |
| eval_id | path | Yes | string | |
| run_id | path | Yes | string | |
Request Header
Use either token based authentication or API key. Authenticating with token based authentication is recommended and more secure.
| Name | Required | Type | Description |
|---|---|---|---|
| Authorization | True | string | Example: Authorization: Bearer {Azure_OpenAI_Auth_Token}To generate an auth token using Azure CLI: az account get-access-token --resource https://cognitiveservices.azure.comType: oauth2 Authorization Url: https://login.microsoftonline.com/common/oauth2/v2.0/authorizescope: https://cognitiveservices.azure.com/.default |
| api-key | True | string | Provide Azure OpenAI API key here |
Responses
Status Code: 200
Description: The request has succeeded.
| Content-Type | Type | Description |
|---|---|---|
| application/json | OpenAI.EvalRun |
Status Code: default
Description: An unexpected error response.
| Content-Type | Type | Description |
|---|---|---|
| application/json | AzureErrorResponse |
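As a sketch, the cancel call above can be prepared with Python's standard library. The resource name, IDs, and key below are placeholders, not real values:

```python
import urllib.request

# Placeholder resource name and IDs; substitute your own values.
endpoint = "https://aoairesource.openai.azure.com"
eval_id = "eval_abc123"
run_id = "run_abc123"

# POST to the run URL cancels the evaluation run.
req = urllib.request.Request(
    f"{endpoint}/openai/v1/evals/{eval_id}/runs/{run_id}?api-version=v1",
    method="POST",
)
req.add_header("api-key", "<your-api-key>")
req.add_header("aoai-evals", "preview")  # required while the feature is in preview

# with urllib.request.urlopen(req) as resp:  # 200 returns an OpenAI.EvalRun body
#     print(resp.read())
```

The send itself is left commented out so the snippet stays side-effect free; token-based authentication would replace the api-key header with an Authorization bearer token.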
Delete eval run
DELETE {endpoint}/openai/v1/evals/{eval_id}/runs/{run_id}
Delete a specific evaluation run by its ID.
Note
This Azure OpenAI operation is in preview and subject to change.
Parameters
| Name | In | Required | Type | Description |
|---|---|---|---|---|
| endpoint | path | Yes | string url | Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com. Replace "aoairesource" with your Azure OpenAI resource name). https://{your-resource-name}.openai.azure.com |
| api-version | query | No | string | The explicit Azure AI Foundry Models API version to use for this request. Defaults to v1 if not otherwise specified. |
| aoai-evals | header | Yes | string. Possible values: preview | Enables access to AOAI Evals, a preview feature. This feature requires the 'aoai-evals' header to be set to 'preview'. |
| eval_id | path | Yes | string | |
| run_id | path | Yes | string | |
Request Header
Use either token based authentication or API key. Authenticating with token based authentication is recommended and more secure.
| Name | Required | Type | Description |
|---|---|---|---|
| Authorization | True | string | Example: Authorization: Bearer {Azure_OpenAI_Auth_Token}To generate an auth token using Azure CLI: az account get-access-token --resource https://cognitiveservices.azure.comType: oauth2 Authorization Url: https://login.microsoftonline.com/common/oauth2/v2.0/authorizescope: https://cognitiveservices.azure.com/.default |
| api-key | True | string | Provide Azure OpenAI API key here |
Responses
Status Code: 200
Description: The request has succeeded.
| Content-Type | Type | Description |
|---|---|---|
| application/json | object |
Status Code: default
Description: An unexpected error response.
| Content-Type | Type | Description |
|---|---|---|
| application/json | AzureErrorResponse |
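The delete operation differs from cancel only in the HTTP verb. A minimal sketch, again with placeholder values:

```python
import urllib.request

# Placeholder resource name and IDs; substitute your own values.
endpoint = "https://aoairesource.openai.azure.com"
eval_id = "eval_abc123"
run_id = "run_abc123"

# DELETE on the run URL removes the evaluation run.
req = urllib.request.Request(
    f"{endpoint}/openai/v1/evals/{eval_id}/runs/{run_id}?api-version=v1",
    method="DELETE",
)
req.add_header("api-key", "<your-api-key>")
req.add_header("aoai-evals", "preview")  # required while the feature is in preview

# urllib.request.urlopen(req)  # 200 returns a deletion-confirmation object
```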
Get eval run output items
GET {endpoint}/openai/v1/evals/{eval_id}/runs/{run_id}/output_items
Get a list of output items for a specified evaluation run.
Note
This Azure OpenAI operation is in preview and subject to change.
Parameters
| Name | In | Required | Type | Description |
|---|---|---|---|---|
| endpoint | path | Yes | string url | Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com. Replace "aoairesource" with your Azure OpenAI resource name). https://{your-resource-name}.openai.azure.com |
| api-version | query | No | string | The explicit Azure AI Foundry Models API version to use for this request. Defaults to v1 if not otherwise specified. |
| aoai-evals | header | Yes | string. Possible values: preview | Enables access to AOAI Evals, a preview feature. This feature requires the 'aoai-evals' header to be set to 'preview'. |
| eval_id | path | Yes | string | |
| run_id | path | Yes | string | |
| after | query | No | string | |
| limit | query | No | integer | |
| status | query | No | string. Possible values: fail, pass | |
| order | query | No | string. Possible values: asc, desc | |
Request Header
Use either token based authentication or API key. Authenticating with token based authentication is recommended and more secure.
| Name | Required | Type | Description |
|---|---|---|---|
| Authorization | True | string | Example: Authorization: Bearer {Azure_OpenAI_Auth_Token}To generate an auth token using Azure CLI: az account get-access-token --resource https://cognitiveservices.azure.comType: oauth2 Authorization Url: https://login.microsoftonline.com/common/oauth2/v2.0/authorizescope: https://cognitiveservices.azure.com/.default |
| api-key | True | string | Provide Azure OpenAI API key here |
Responses
Status Code: 200
Description: The request has succeeded.
| Content-Type | Type | Description |
|---|---|---|
| application/json | OpenAI.EvalRunOutputItemList |
Status Code: default
Description: An unexpected error response.
| Content-Type | Type | Description |
|---|---|---|
| application/json | AzureErrorResponse |
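The optional query parameters above combine in the usual way. A sketch of building the list URL with failed items only, newest first, in pages of 20 (resource name and IDs are placeholders):

```python
from urllib.parse import urlencode

# Placeholder resource name and IDs; substitute your own values.
endpoint = "https://aoairesource.openai.azure.com"
eval_id, run_id = "eval_abc123", "run_abc123"

# Filter to failed output items, sort descending, paginate 20 at a time.
params = urlencode({"status": "fail", "order": "desc", "limit": 20})
url = (
    f"{endpoint}/openai/v1/evals/{eval_id}/runs/{run_id}"
    f"/output_items?api-version=v1&{params}"
)
```

Subsequent pages pass the last item's ID in the `after` parameter.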
Get eval run output item
GET {endpoint}/openai/v1/evals/{eval_id}/runs/{run_id}/output_items/{output_item_id}
Retrieve a specific output item from an evaluation run by its ID.
Note
This Azure OpenAI operation is in preview and subject to change.
Parameters
| Name | In | Required | Type | Description |
|---|---|---|---|---|
| endpoint | path | Yes | string url | Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com. Replace "aoairesource" with your Azure OpenAI resource name). https://{your-resource-name}.openai.azure.com |
| api-version | query | No | string | The explicit Azure AI Foundry Models API version to use for this request. Defaults to v1 if not otherwise specified. |
| aoai-evals | header | Yes | string. Possible values: preview | Enables access to AOAI Evals, a preview feature. This feature requires the 'aoai-evals' header to be set to 'preview'. |
| eval_id | path | Yes | string | |
| run_id | path | Yes | string | |
| output_item_id | path | Yes | string | |
Request Header
Use either token based authentication or API key. Authenticating with token based authentication is recommended and more secure.
| Name | Required | Type | Description |
|---|---|---|---|
| Authorization | True | string | Example: Authorization: Bearer {Azure_OpenAI_Auth_Token}To generate an auth token using Azure CLI: az account get-access-token --resource https://cognitiveservices.azure.comType: oauth2 Authorization Url: https://login.microsoftonline.com/common/oauth2/v2.0/authorizescope: https://cognitiveservices.azure.com/.default |
| api-key | True | string | Provide Azure OpenAI API key here |
Responses
Status Code: 200
Description: The request has succeeded.
| Content-Type | Type | Description |
|---|---|---|
| application/json | OpenAI.EvalRunOutputItem |
Status Code: default
Description: An unexpected error response.
| Content-Type | Type | Description |
|---|---|---|
| application/json | AzureErrorResponse |
Create file
POST {endpoint}/openai/v1/files
Parameters
| Name | In | Required | Type | Description |
|---|---|---|---|---|
| endpoint | path | Yes | string url | Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com. Replace "aoairesource" with your Azure OpenAI resource name). https://{your-resource-name}.openai.azure.com |
| api-version | query | No | string | The explicit Azure AI Foundry Models API version to use for this request. Defaults to v1 if not otherwise specified. |
Request Header
Use either token based authentication or API key. Authenticating with token based authentication is recommended and more secure.
| Name | Required | Type | Description |
|---|---|---|---|
| Authorization | True | string | Example: Authorization: Bearer {Azure_OpenAI_Auth_Token}To generate an auth token using Azure CLI: az account get-access-token --resource https://cognitiveservices.azure.comType: oauth2 Authorization Url: https://login.microsoftonline.com/common/oauth2/v2.0/authorizescope: https://cognitiveservices.azure.com/.default |
| api-key | True | string | Provide Azure OpenAI API key here |
Request Body
Content-Type: multipart/form-data
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| expires_after | object | | Yes | |
| └─ anchor | AzureFileExpiryAnchor | | No | |
| └─ seconds | integer | | No | |
| file | string | | Yes | |
| purpose | enum | The intended purpose of the uploaded file. One of: assistants (used in the Assistants API), batch (used in the Batch API), fine-tune (used for fine-tuning), or evals (used for eval data sets). Possible values: assistants, batch, fine-tune, evals | Yes | |
Responses
Status Code: 200
Description: The request has succeeded.
| Content-Type | Type | Description |
|---|---|---|
| application/json | AzureOpenAIFile |
Status Code: default
Description: An unexpected error response.
| Content-Type | Type | Description |
|---|---|---|
| application/json | AzureErrorResponse |
Examples
Example
POST {endpoint}/openai/v1/files
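A sketch of the upload using only the standard library, building the multipart/form-data body by hand. The resource name, file contents, and key are placeholders:

```python
import urllib.request
import uuid

# Placeholder resource name and training data; substitute your own.
endpoint = "https://aoairesource.openai.azure.com"
file_bytes = b'{"messages": [{"role": "user", "content": "hi"}]}\n'
boundary = uuid.uuid4().hex

# Build the multipart/form-data body by hand: one part per form field.
parts = (
    f"--{boundary}\r\n"
    'Content-Disposition: form-data; name="purpose"\r\n\r\n'
    "fine-tune\r\n"
    f"--{boundary}\r\n"
    'Content-Disposition: form-data; name="file"; filename="train.jsonl"\r\n'
    "Content-Type: application/octet-stream\r\n\r\n"
).encode("utf-8")
body = parts + file_bytes + f"\r\n--{boundary}--\r\n".encode("utf-8")

req = urllib.request.Request(
    f"{endpoint}/openai/v1/files?api-version=v1", data=body, method="POST"
)
req.add_header("Content-Type", f"multipart/form-data; boundary={boundary}")
req.add_header("api-key", "<your-api-key>")
# urllib.request.urlopen(req)  # 200 returns an AzureOpenAIFile JSON body
```

In practice a multipart-aware client library is simpler; the manual body above just makes the required form fields (`file`, `purpose`) explicit.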
List files
GET {endpoint}/openai/v1/files
Parameters
| Name | In | Required | Type | Description |
|---|---|---|---|---|
| endpoint | path | Yes | string url | Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com. Replace "aoairesource" with your Azure OpenAI resource name). https://{your-resource-name}.openai.azure.com |
| api-version | query | No | string | The explicit Azure AI Foundry Models API version to use for this request. Defaults to v1 if not otherwise specified. |
| purpose | query | No | string | |
Request Header
Use either token based authentication or API key. Authenticating with token based authentication is recommended and more secure.
| Name | Required | Type | Description |
|---|---|---|---|
| Authorization | True | string | Example: Authorization: Bearer {Azure_OpenAI_Auth_Token}To generate an auth token using Azure CLI: az account get-access-token --resource https://cognitiveservices.azure.comType: oauth2 Authorization Url: https://login.microsoftonline.com/common/oauth2/v2.0/authorizescope: https://cognitiveservices.azure.com/.default |
| api-key | True | string | Provide Azure OpenAI API key here |
Responses
Status Code: 200
Description: The request has succeeded.
| Content-Type | Type | Description |
|---|---|---|
| application/json | AzureListFilesResponse |
Status Code: default
Description: An unexpected error response.
| Content-Type | Type | Description |
|---|---|---|
| application/json | AzureErrorResponse |
Retrieve file
GET {endpoint}/openai/v1/files/{file_id}
Parameters
| Name | In | Required | Type | Description |
|---|---|---|---|---|
| endpoint | path | Yes | string url | Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com. Replace "aoairesource" with your Azure OpenAI resource name). https://{your-resource-name}.openai.azure.com |
| api-version | query | No | string | The explicit Azure AI Foundry Models API version to use for this request. Defaults to v1 if not otherwise specified. |
| file_id | path | Yes | string | The ID of the file to use for this request. |
Request Header
Use either token based authentication or API key. Authenticating with token based authentication is recommended and more secure.
| Name | Required | Type | Description |
|---|---|---|---|
| Authorization | True | string | Example: Authorization: Bearer {Azure_OpenAI_Auth_Token}To generate an auth token using Azure CLI: az account get-access-token --resource https://cognitiveservices.azure.comType: oauth2 Authorization Url: https://login.microsoftonline.com/common/oauth2/v2.0/authorizescope: https://cognitiveservices.azure.com/.default |
| api-key | True | string | Provide Azure OpenAI API key here |
Responses
Status Code: 200
Description: The request has succeeded.
| Content-Type | Type | Description |
|---|---|---|
| application/json | AzureOpenAIFile |
Status Code: default
Description: An unexpected error response.
| Content-Type | Type | Description |
|---|---|---|
| application/json | AzureErrorResponse |
Delete file
DELETE {endpoint}/openai/v1/files/{file_id}
Parameters
| Name | In | Required | Type | Description |
|---|---|---|---|---|
| endpoint | path | Yes | string url | Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com. Replace "aoairesource" with your Azure OpenAI resource name). https://{your-resource-name}.openai.azure.com |
| api-version | query | No | string | The explicit Azure AI Foundry Models API version to use for this request. Defaults to v1 if not otherwise specified. |
| file_id | path | Yes | string | The ID of the file to use for this request. |
Request Header
Use either token based authentication or API key. Authenticating with token based authentication is recommended and more secure.
| Name | Required | Type | Description |
|---|---|---|---|
| Authorization | True | string | Example: Authorization: Bearer {Azure_OpenAI_Auth_Token}To generate an auth token using Azure CLI: az account get-access-token --resource https://cognitiveservices.azure.comType: oauth2 Authorization Url: https://login.microsoftonline.com/common/oauth2/v2.0/authorizescope: https://cognitiveservices.azure.com/.default |
| api-key | True | string | Provide Azure OpenAI API key here |
Responses
Status Code: 200
Description: The request has succeeded.
| Content-Type | Type | Description |
|---|---|---|
| application/json | OpenAI.DeleteFileResponse |
Status Code: default
Description: An unexpected error response.
| Content-Type | Type | Description |
|---|---|---|
| application/json | AzureErrorResponse |
Download file
GET {endpoint}/openai/v1/files/{file_id}/content
Parameters
| Name | In | Required | Type | Description |
|---|---|---|---|---|
| endpoint | path | Yes | string url | Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com. Replace "aoairesource" with your Azure OpenAI resource name). https://{your-resource-name}.openai.azure.com |
| api-version | query | No | string | The explicit Azure AI Foundry Models API version to use for this request. Defaults to v1 if not otherwise specified. |
| file_id | path | Yes | string | The ID of the file to use for this request. |
Request Header
Use either token based authentication or API key. Authenticating with token based authentication is recommended and more secure.
| Name | Required | Type | Description |
|---|---|---|---|
| Authorization | True | string | Example: Authorization: Bearer {Azure_OpenAI_Auth_Token}To generate an auth token using Azure CLI: az account get-access-token --resource https://cognitiveservices.azure.comType: oauth2 Authorization Url: https://login.microsoftonline.com/common/oauth2/v2.0/authorizescope: https://cognitiveservices.azure.com/.default |
| api-key | True | string | Provide Azure OpenAI API key here |
Responses
Status Code: 200
Description: The request has succeeded.
| Content-Type | Type | Description |
|---|---|---|
| application/octet-stream | string |
Status Code: default
Description: An unexpected error response.
| Content-Type | Type | Description |
|---|---|---|
| application/json | AzureErrorResponse |
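Because the 200 response is application/octet-stream rather than JSON, the body should be written out as raw bytes. A sketch with placeholder values:

```python
import urllib.request

# Placeholder resource name and file ID; substitute your own values.
endpoint = "https://aoairesource.openai.azure.com"
file_id = "file-abc123"

req = urllib.request.Request(
    f"{endpoint}/openai/v1/files/{file_id}/content?api-version=v1"
)
req.add_header("api-key", "<your-api-key>")

# The 200 response is application/octet-stream, so write the raw bytes out:
# with urllib.request.urlopen(req) as resp, open("train.jsonl", "wb") as out:
#     out.write(resp.read())
```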
Run grader
POST {endpoint}/openai/v1/fine_tuning/alpha/graders/run
Run a grader.
Parameters
| Name | In | Required | Type | Description |
|---|---|---|---|---|
| endpoint | path | Yes | string url | Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com. Replace "aoairesource" with your Azure OpenAI resource name). https://{your-resource-name}.openai.azure.com |
| api-version | query | No | string | The explicit Azure AI Foundry Models API version to use for this request. Defaults to v1 if not otherwise specified. |
Request Header
Use either token based authentication or API key. Authenticating with token based authentication is recommended and more secure.
| Name | Required | Type | Description |
|---|---|---|---|
| Authorization | True | string | Example: Authorization: Bearer {Azure_OpenAI_Auth_Token}To generate an auth token using Azure CLI: az account get-access-token --resource https://cognitiveservices.azure.comType: oauth2 Authorization Url: https://login.microsoftonline.com/common/oauth2/v2.0/authorizescope: https://cognitiveservices.azure.com/.default |
| api-key | True | string | Provide Azure OpenAI API key here |
Request Body
Content-Type: application/json
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| grader | object | A StringCheckGrader object that performs a string comparison between input and reference using a specified operation. | Yes | |
| └─ calculate_output | string | A formula to calculate the output based on grader results. | No | |
| └─ evaluation_metric | enum | The evaluation metric to use. One of fuzzy_match, bleu, gleu, meteor, rouge_1, rouge_2, rouge_3, rouge_4, rouge_5, or rouge_l. Possible values: fuzzy_match, bleu, gleu, meteor, rouge_1, rouge_2, rouge_3, rouge_4, rouge_5, rouge_l | No | |
| └─ graders | object | | No | |
| └─ image_tag | string | The image tag to use for the python script. | No | |
| └─ input | array | The input text. This may include template strings. | No | |
| └─ model | string | The model to use for the evaluation. | No | |
| └─ name | string | The name of the grader. | No | |
| └─ operation | enum | The string check operation to perform. One of eq, ne, like, or ilike. Possible values: eq, ne, like, ilike | No | |
| └─ range | array | The range of the score. Defaults to [0, 1]. | No | |
| └─ reference | string | The text being graded against. | No | |
| └─ sampling_params | | The sampling parameters for the model. | No | |
| └─ source | string | The source code of the python script. | No | |
| └─ type | enum | The object type, which is always multi. Possible values: multi | No | |
| item | | The dataset item provided to the grader. This will be used to populate the item namespace. See the guide for more details. | No | |
| model_sample | string | The model sample to be evaluated. This value will be used to populate the sample namespace. See the guide for more details. The output_json variable will be populated if the model sample is a valid JSON string. | Yes | |
Responses
Status Code: 200
Description: The request has succeeded.
| Content-Type | Type | Description |
|---|---|---|
| application/json | OpenAI.RunGraderResponse |
Status Code: default
Description: An unexpected error response.
| Content-Type | Type | Description |
|---|---|---|
| application/json | AzureErrorResponse |
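A sketch of a request body for a string check grader, using the fields described in the table above. The grader name, templates, and sample values are hypothetical:

```python
import json

# Hypothetical string check grader payload; all values are placeholders.
payload = {
    "grader": {
        "type": "string_check",
        "name": "exact-match",           # hypothetical grader name
        "operation": "eq",               # one of eq, ne, like, ilike
        "input": "{{sample.output_text}}",   # template over the sample namespace
        "reference": "{{item.expected}}",    # template over the item namespace
    },
    "model_sample": "Paris",             # required: the model output to grade
    "item": {"expected": "Paris"},       # populates the item namespace
}
body = json.dumps(payload).encode("utf-8")
# POST body to {endpoint}/openai/v1/fine_tuning/alpha/graders/run with
# Content-Type: application/json and an api-key or Authorization header.
```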
Validate grader
POST {endpoint}/openai/v1/fine_tuning/alpha/graders/validate
Validate a grader.
Parameters
| Name | In | Required | Type | Description |
|---|---|---|---|---|
| endpoint | path | Yes | string url | Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com. Replace "aoairesource" with your Azure OpenAI resource name). https://{your-resource-name}.openai.azure.com |
| api-version | query | No | string | The explicit Azure AI Foundry Models API version to use for this request. Defaults to v1 if not otherwise specified. |
Request Header
Use either token based authentication or API key. Authenticating with token based authentication is recommended and more secure.
| Name | Required | Type | Description |
|---|---|---|---|
| Authorization | True | string | Example: Authorization: Bearer {Azure_OpenAI_Auth_Token}To generate an auth token using Azure CLI: az account get-access-token --resource https://cognitiveservices.azure.comType: oauth2 Authorization Url: https://login.microsoftonline.com/common/oauth2/v2.0/authorizescope: https://cognitiveservices.azure.com/.default |
| api-key | True | string | Provide Azure OpenAI API key here |
Request Body
Content-Type: application/json
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| grader | object | A StringCheckGrader object that performs a string comparison between input and reference using a specified operation. | Yes | |
| └─ calculate_output | string | A formula to calculate the output based on grader results. | No | |
| └─ evaluation_metric | enum | The evaluation metric to use. One of fuzzy_match, bleu, gleu, meteor, rouge_1, rouge_2, rouge_3, rouge_4, rouge_5, or rouge_l. Possible values: fuzzy_match, bleu, gleu, meteor, rouge_1, rouge_2, rouge_3, rouge_4, rouge_5, rouge_l | No | |
| └─ graders | object | | No | |
| └─ image_tag | string | The image tag to use for the python script. | No | |
| └─ input | array | The input text. This may include template strings. | No | |
| └─ model | string | The model to use for the evaluation. | No | |
| └─ name | string | The name of the grader. | No | |
| └─ operation | enum | The string check operation to perform. One of eq, ne, like, or ilike. Possible values: eq, ne, like, ilike | No | |
| └─ range | array | The range of the score. Defaults to [0, 1]. | No | |
| └─ reference | string | The text being graded against. | No | |
| └─ sampling_params | | The sampling parameters for the model. | No | |
| └─ source | string | The source code of the python script. | No | |
| └─ type | enum | The object type, which is always multi. Possible values: multi | No | |
Responses
Status Code: 200
Description: The request has succeeded.
| Content-Type | Type | Description |
|---|---|---|
| application/json | OpenAI.ValidateGraderResponse |
Status Code: default
Description: An unexpected error response.
| Content-Type | Type | Description |
|---|---|---|
| application/json | AzureErrorResponse |
Create fine-tuning job
POST {endpoint}/openai/v1/fine_tuning/jobs
Creates a fine-tuning job that begins the process of creating a new model from a given dataset.
The response includes details of the enqueued job, including job status and the name of the fine-tuned model once complete.
Parameters
| Name | In | Required | Type | Description |
|---|---|---|---|---|
| endpoint | path | Yes | string url | Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com. Replace "aoairesource" with your Azure OpenAI resource name). https://{your-resource-name}.openai.azure.com |
| api-version | query | No | string | The explicit Azure AI Foundry Models API version to use for this request. Defaults to v1 if not otherwise specified. |
Request Header
Use either token based authentication or API key. Authenticating with token based authentication is recommended and more secure.
| Name | Required | Type | Description |
|---|---|---|---|
| Authorization | True | string | Example: Authorization: Bearer {Azure_OpenAI_Auth_Token}To generate an auth token using Azure CLI: az account get-access-token --resource https://cognitiveservices.azure.comType: oauth2 Authorization Url: https://login.microsoftonline.com/common/oauth2/v2.0/authorizescope: https://cognitiveservices.azure.com/.default |
| api-key | True | string | Provide Azure OpenAI API key here |
Request Body
Content-Type: application/json
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| hyperparameters | object | The hyperparameters used for the fine-tuning job. This value is now deprecated in favor of method, and should be passed in under the method parameter. | No | |
| └─ batch_size | enum | Possible values: auto | No | |
| └─ learning_rate_multiplier | enum | Possible values: auto | No | |
| └─ n_epochs | enum | Possible values: auto | No | |
| integrations | array | A list of integrations to enable for your fine-tuning job. | No | |
| metadata | object | Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard. Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters. | No | |
| method | OpenAI.FineTuneMethod | The method used for fine-tuning. | No | |
| model | string (see valid models below) | The name of the model to fine-tune. You can select one of the supported models. | Yes | |
| seed | integer | The seed controls the reproducibility of the job. Passing in the same seed and job parameters should produce the same results, but may differ in rare cases. If a seed is not specified, one will be generated for you. | No | |
| suffix | string | A string of up to 64 characters that will be added to your fine-tuned model name. For example, a suffix of "custom-model-name" would produce a model name like ft:gpt-4o-mini:openai:custom-model-name:7p4lURel. | No | None |
| training_file | string | The ID of an uploaded file that contains training data. See upload file for how to upload a file. Your dataset must be formatted as a JSONL file. Additionally, you must upload your file with the purpose fine-tune. The contents of the file should differ depending on whether the model uses the chat format, or whether the fine-tuning method uses the preference format. See the fine-tuning guide for more details. | Yes | |
| validation_file | string | The ID of an uploaded file that contains validation data. If you provide this file, the data is used to generate validation metrics periodically during fine-tuning. These metrics can be viewed in the fine-tuning results file. The same data should not be present in both train and validation files. Your dataset must be formatted as a JSONL file. You must upload your file with the purpose fine-tune. See the fine-tuning guide for more details. | No | |
Responses
Status Code: 200
Description: The request has succeeded.
| Content-Type | Type | Description |
|---|---|---|
| application/json | OpenAI.FineTuningJob |
Status Code: default
Description: An unexpected error response.
| Content-Type | Type | Description |
|---|---|---|
| application/json | AzureErrorResponse |
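A minimal sketch of the request body, using the fields from the table above. The base model name and file IDs are placeholders:

```python
import json

# Hypothetical create-fine-tuning-job body; model and file IDs are placeholders.
payload = {
    "model": "gpt-4o-mini",            # a supported base model
    "training_file": "file-abc123",    # uploaded with purpose "fine-tune"
    "validation_file": "file-def456",  # optional
    "suffix": "custom-model-name",     # up to 64 characters
    "seed": 42,                        # fixed seed for reproducibility
}
body = json.dumps(payload).encode("utf-8")
# POST body to {endpoint}/openai/v1/fine_tuning/jobs with
# Content-Type: application/json; 200 returns an OpenAI.FineTuningJob.
```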
List paginated fine-tuning jobs
GET {endpoint}/openai/v1/fine_tuning/jobs
List your organization's fine-tuning jobs.
Parameters
| Name | In | Required | Type | Description |
|---|---|---|---|---|
| endpoint | path | Yes | string url | Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com. Replace "aoairesource" with your Azure OpenAI resource name). https://{your-resource-name}.openai.azure.com |
| api-version | query | No | string | The explicit Azure AI Foundry Models API version to use for this request. Defaults to v1 if not otherwise specified. |
| after | query | No | string | Identifier for the last job from the previous pagination request. |
| limit | query | No | integer | Number of fine-tuning jobs to retrieve. |
Request Header
Use either token based authentication or API key. Authenticating with token based authentication is recommended and more secure.
| Name | Required | Type | Description |
|---|---|---|---|
| Authorization | True | string | Example: Authorization: Bearer {Azure_OpenAI_Auth_Token}To generate an auth token using Azure CLI: az account get-access-token --resource https://cognitiveservices.azure.comType: oauth2 Authorization Url: https://login.microsoftonline.com/common/oauth2/v2.0/authorizescope: https://cognitiveservices.azure.com/.default |
| api-key | True | string | Provide Azure OpenAI API key here |
Responses
Status Code: 200
Description: The request has succeeded.
| Content-Type | Type | Description |
|---|---|---|
| application/json | OpenAI.ListPaginatedFineTuningJobsResponse |
Status Code: default
Description: An unexpected error response.
| Content-Type | Type | Description |
|---|---|---|
| application/json | AzureErrorResponse |
Retrieve fine-tuning job
GET {endpoint}/openai/v1/fine_tuning/jobs/{fine_tuning_job_id}
Get info about a fine-tuning job.
Parameters
| Name | In | Required | Type | Description |
|---|---|---|---|---|
| endpoint | path | Yes | string url | Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com. Replace "aoairesource" with your Azure OpenAI resource name). https://{your-resource-name}.openai.azure.com |
| api-version | query | No | string | The explicit Azure AI Foundry Models API version to use for this request. Defaults to v1 if not otherwise specified. |
| fine_tuning_job_id | path | Yes | string | The ID of the fine-tuning job. |
Request Header
Use either token based authentication or API key. Authenticating with token based authentication is recommended and more secure.
| Name | Required | Type | Description |
|---|---|---|---|
| Authorization | True | string | Example: Authorization: Bearer {Azure_OpenAI_Auth_Token}To generate an auth token using Azure CLI: az account get-access-token --resource https://cognitiveservices.azure.comType: oauth2 Authorization Url: https://login.microsoftonline.com/common/oauth2/v2.0/authorizescope: https://cognitiveservices.azure.com/.default |
| api-key | True | string | Provide Azure OpenAI API key here |
Responses
Status Code: 200
Description: The request has succeeded.
| Content-Type | Type | Description |
|---|---|---|
| application/json | OpenAI.FineTuningJob |
Status Code: default
Description: An unexpected error response.
| Content-Type | Type | Description |
|---|---|---|
| application/json | AzureErrorResponse |
Cancel fine-tuning job
POST {endpoint}/openai/v1/fine_tuning/jobs/{fine_tuning_job_id}/cancel
Immediately cancel a fine-tune job.
Parameters
| Name | In | Required | Type | Description |
|---|---|---|---|---|
| endpoint | path | Yes | string url | Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com. Replace "aoairesource" with your Azure OpenAI resource name). https://{your-resource-name}.openai.azure.com |
| api-version | query | No | string | The explicit Azure AI Foundry Models API version to use for this request. Defaults to v1 if not otherwise specified. |
| fine_tuning_job_id | path | Yes | string | The ID of the fine-tuning job to cancel. |
Request Header
Use either token based authentication or API key. Authenticating with token based authentication is recommended and more secure.
| Name | Required | Type | Description |
|---|---|---|---|
| Authorization | True | string | Example: Authorization: Bearer {Azure_OpenAI_Auth_Token}To generate an auth token using Azure CLI: az account get-access-token --resource https://cognitiveservices.azure.comType: oauth2 Authorization Url: https://login.microsoftonline.com/common/oauth2/v2.0/authorizescope: https://cognitiveservices.azure.com/.default |
| api-key | True | string | Provide Azure OpenAI API key here |
Responses
Status Code: 200
Description: The request has succeeded.
| Content-Type | Type | Description |
|---|---|---|
| application/json | OpenAI.FineTuningJob |
Status Code: default
Description: An unexpected error response.
| Content-Type | Type | Description |
|---|---|---|
| application/json | AzureErrorResponse |
List fine-tuning job checkpoints
GET {endpoint}/openai/v1/fine_tuning/jobs/{fine_tuning_job_id}/checkpoints
List the checkpoints for a fine-tuning job.
Parameters
| Name | In | Required | Type | Description |
|---|---|---|---|---|
| endpoint | path | Yes | string url | Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com. Replace "aoairesource" with your Azure OpenAI resource name). https://{your-resource-name}.openai.azure.com |
| api-version | query | No | string | The explicit Azure AI Foundry Models API version to use for this request. Defaults to v1 if not otherwise specified. |
| fine_tuning_job_id | path | Yes | string | The ID of the fine-tuning job to get checkpoints for. |
| after | query | No | string | Identifier for the last checkpoint ID from the previous pagination request. |
| limit | query | No | integer | Number of checkpoints to retrieve. |
Request Header
Use either token based authentication or API key. Authenticating with token based authentication is recommended and more secure.
| Name | Required | Type | Description |
|---|---|---|---|
| Authorization | True | string | Example: Authorization: Bearer {Azure_OpenAI_Auth_Token}To generate an auth token using Azure CLI: az account get-access-token --resource https://cognitiveservices.azure.comType: oauth2 Authorization Url: https://login.microsoftonline.com/common/oauth2/v2.0/authorizescope: https://cognitiveservices.azure.com/.default |
| api-key | True | string | Provide Azure OpenAI API key here |
Responses
Status Code: 200
Description: The request has succeeded.
| Content-Type | Type | Description |
|---|---|---|
| application/json | OpenAI.ListFineTuningJobCheckpointsResponse |
Status Code: default
Description: An unexpected error response.
| Content-Type | Type | Description |
|---|---|---|
| application/json | AzureErrorResponse |
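As a minimal sketch, the checkpoint listing call can be built with Python's standard library alone. The endpoint, API key, and job ID below are placeholders, not real values:

```python
# Sketch: list checkpoints for a fine-tuning job using only the standard
# library. All values below (endpoint, key, job ID) are placeholders.
import urllib.request
from urllib.parse import urlencode

endpoint = "https://aoairesource.openai.azure.com"  # your resource endpoint
job_id = "ftjob-abc123"                             # hypothetical job ID

query = urlencode({"limit": 10})  # add "after" to page past the last checkpoint ID
req = urllib.request.Request(
    f"{endpoint}/openai/v1/fine_tuning/jobs/{job_id}/checkpoints?{query}",
    headers={"api-key": "<your-api-key>"},  # or: Authorization: Bearer <token>
)
# urllib.request.urlopen(req) would send the request and return an
# OpenAI.ListFineTuningJobCheckpointsResponse JSON body.
print(req.full_url)
```

Key-based auth is shown for brevity; swapping the header for `Authorization: Bearer <token>` follows the Request Header table above.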
Fine-tuning - Copy checkpoint
POST {endpoint}/openai/v1/fine_tuning/jobs/{fine_tuning_job_id}/checkpoints/{fine_tuning_checkpoint_name}/copy
Creates a copy of a fine-tuning checkpoint at the given destination account and region.
Note
This Azure OpenAI operation is in preview and subject to change.
Parameters
| Name | In | Required | Type | Description |
|---|---|---|---|---|
| endpoint | path | Yes | string url | Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com; replace "aoairesource" with your Azure OpenAI resource name): https://{your-resource-name}.openai.azure.com |
| api-version | query | No | string | The explicit Azure AI Foundry Models API version to use for this request. Defaults to v1 if not otherwise specified. |
| aoai-copy-ft-checkpoints | header | Yes | string (possible values: preview) | Enables access to checkpoint copy operations for models, an Azure OpenAI preview feature. This feature requires the 'aoai-copy-ft-checkpoints' header to be set to 'preview'. |
| accept | header | Yes | string (possible values: application/json) | |
| fine_tuning_job_id | path | Yes | string | |
| fine_tuning_checkpoint_name | path | Yes | string | |
Request Header
Use either token based authentication or API key. Authenticating with token based authentication is recommended and more secure.
| Name | Required | Type | Description |
|---|---|---|---|
| Authorization | True | string | Example: Authorization: Bearer {Azure_OpenAI_Auth_Token}To generate an auth token using Azure CLI: az account get-access-token --resource https://cognitiveservices.azure.comType: oauth2 Authorization Url: https://login.microsoftonline.com/common/oauth2/v2.0/authorizescope: https://cognitiveservices.azure.com/.default |
| api-key | True | string | Provide Azure OpenAI API key here |
Request Body
Content-Type: application/json
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| destinationResourceId | string | The ID of the destination resource to copy the checkpoint to. | Yes | |
| region | string | The region to copy the model to. | Yes |
Responses
Status Code: 200
Description: The request has succeeded.
| Content-Type | Type | Description |
|---|---|---|
| application/json | CopyModelResponse |
Status Code: default
Description: An unexpected error response.
| Content-Type | Type | Description |
|---|---|---|
| application/json | AzureErrorResponse |
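A sketch of the copy request, showing the preview header and JSON body. Every identifier below (job ID, checkpoint name, destination resource ID, region) is a hypothetical placeholder:

```python
# Sketch: copy a fine-tuning checkpoint to another resource and region.
# This preview operation requires the aoai-copy-ft-checkpoints header.
# All identifiers below are hypothetical placeholders.
import json
import urllib.request

endpoint = "https://aoairesource.openai.azure.com"
job_id = "ftjob-abc123"
checkpoint = "ft-checkpoint-001"

body = {
    "destinationResourceId": "/subscriptions/<sub-id>/my-target-resource",  # placeholder
    "region": "eastus2",                                                    # destination region
}
req = urllib.request.Request(
    f"{endpoint}/openai/v1/fine_tuning/jobs/{job_id}/checkpoints/{checkpoint}/copy",
    data=json.dumps(body).encode("utf-8"),
    headers={
        "api-key": "<your-api-key>",
        "Content-Type": "application/json",
        "aoai-copy-ft-checkpoints": "preview",  # opts in to the preview feature
        "accept": "application/json",
    },
    method="POST",
)
# urllib.request.urlopen(req) would submit the copy and return a CopyModelResponse.
```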
Fine-tuning - Get checkpoint copy
GET {endpoint}/openai/v1/fine_tuning/jobs/{fine_tuning_job_id}/checkpoints/{fine_tuning_checkpoint_name}/copy
Gets the status of a fine-tuning checkpoint copy.
Note
This Azure OpenAI operation is in preview and subject to change.
Parameters
| Name | In | Required | Type | Description |
|---|---|---|---|---|
| endpoint | path | Yes | string url | Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com; replace "aoairesource" with your Azure OpenAI resource name): https://{your-resource-name}.openai.azure.com |
| api-version | query | No | string | The explicit Azure AI Foundry Models API version to use for this request. Defaults to v1 if not otherwise specified. |
| aoai-copy-ft-checkpoints | header | Yes | string (possible values: preview) | Enables access to checkpoint copy operations for models, an Azure OpenAI preview feature. This feature requires the 'aoai-copy-ft-checkpoints' header to be set to 'preview'. |
| accept | header | Yes | string (possible values: application/json) | |
| fine_tuning_job_id | path | Yes | string | |
| fine_tuning_checkpoint_name | path | Yes | string | |
Request Header
Use either token based authentication or API key. Authenticating with token based authentication is recommended and more secure.
| Name | Required | Type | Description |
|---|---|---|---|
| Authorization | True | string | Example: Authorization: Bearer {Azure_OpenAI_Auth_Token}To generate an auth token using Azure CLI: az account get-access-token --resource https://cognitiveservices.azure.comType: oauth2 Authorization Url: https://login.microsoftonline.com/common/oauth2/v2.0/authorizescope: https://cognitiveservices.azure.com/.default |
| api-key | True | string | Provide Azure OpenAI API key here |
Responses
Status Code: 200
Description: The request has succeeded.
| Content-Type | Type | Description |
|---|---|---|
| application/json | CopyModelResponse |
Status Code: default
Description: An unexpected error response.
| Content-Type | Type | Description |
|---|---|---|
| application/json | AzureErrorResponse |
List fine-tuning events
GET {endpoint}/openai/v1/fine_tuning/jobs/{fine_tuning_job_id}/events
Get status updates for a fine-tuning job.
Parameters
| Name | In | Required | Type | Description |
|---|---|---|---|---|
| endpoint | path | Yes | string url | Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com; replace "aoairesource" with your Azure OpenAI resource name): https://{your-resource-name}.openai.azure.com |
| api-version | query | No | string | The explicit Azure AI Foundry Models API version to use for this request. Defaults to v1 if not otherwise specified. |
| fine_tuning_job_id | path | Yes | string | The ID of the fine-tuning job to get events for. |
| after | query | No | string | Identifier for the last event from the previous pagination request. |
| limit | query | No | integer | Number of events to retrieve. |
Request Header
Use either token based authentication or API key. Authenticating with token based authentication is recommended and more secure.
| Name | Required | Type | Description |
|---|---|---|---|
| Authorization | True | string | Example: Authorization: Bearer {Azure_OpenAI_Auth_Token}To generate an auth token using Azure CLI: az account get-access-token --resource https://cognitiveservices.azure.comType: oauth2 Authorization Url: https://login.microsoftonline.com/common/oauth2/v2.0/authorizescope: https://cognitiveservices.azure.com/.default |
| api-key | True | string | Provide Azure OpenAI API key here |
Responses
Status Code: 200
Description: The request has succeeded.
| Content-Type | Type | Description |
|---|---|---|
| application/json | OpenAI.ListFineTuningJobEventsResponse |
Status Code: default
Description: An unexpected error response.
| Content-Type | Type | Description |
|---|---|---|
| application/json | AzureErrorResponse |
Pause fine-tuning job
POST {endpoint}/openai/v1/fine_tuning/jobs/{fine_tuning_job_id}/pause
Pause a fine-tuning job.
Parameters
| Name | In | Required | Type | Description |
|---|---|---|---|---|
| endpoint | path | Yes | string url | Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com; replace "aoairesource" with your Azure OpenAI resource name): https://{your-resource-name}.openai.azure.com |
| api-version | query | No | string | The explicit Azure AI Foundry Models API version to use for this request. Defaults to v1 if not otherwise specified. |
| fine_tuning_job_id | path | Yes | string | The ID of the fine-tuning job to pause. |
Request Header
Use either token based authentication or API key. Authenticating with token based authentication is recommended and more secure.
| Name | Required | Type | Description |
|---|---|---|---|
| Authorization | True | string | Example: Authorization: Bearer {Azure_OpenAI_Auth_Token}To generate an auth token using Azure CLI: az account get-access-token --resource https://cognitiveservices.azure.comType: oauth2 Authorization Url: https://login.microsoftonline.com/common/oauth2/v2.0/authorizescope: https://cognitiveservices.azure.com/.default |
| api-key | True | string | Provide Azure OpenAI API key here |
Responses
Status Code: 200
Description: The request has succeeded.
| Content-Type | Type | Description |
|---|---|---|
| application/json | OpenAI.FineTuningJob |
Status Code: default
Description: An unexpected error response.
| Content-Type | Type | Description |
|---|---|---|
| application/json | AzureErrorResponse |
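Pausing takes a bodyless POST against the job's pause path. A minimal sketch with placeholder values:

```python
# Sketch: pause a fine-tuning job with a bodyless POST.
# The endpoint, key, and job ID are placeholders.
import urllib.request

endpoint = "https://aoairesource.openai.azure.com"
job_id = "ftjob-abc123"  # hypothetical job ID

req = urllib.request.Request(
    f"{endpoint}/openai/v1/fine_tuning/jobs/{job_id}/pause",
    headers={"api-key": "<your-api-key>"},
    method="POST",  # no request body is needed
)
# urllib.request.urlopen(req) would pause the job and return the updated
# OpenAI.FineTuningJob object.
```

The resume operation is the same call shape against `.../fine_tuning/jobs/{fine_tuning_job_id}/resume`.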
Resume fine-tuning job
POST {endpoint}/openai/v1/fine_tuning/jobs/{fine_tuning_job_id}/resume
Resume a paused fine-tuning job.
Parameters
| Name | In | Required | Type | Description |
|---|---|---|---|---|
| endpoint | path | Yes | string url | Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com; replace "aoairesource" with your Azure OpenAI resource name): https://{your-resource-name}.openai.azure.com |
| api-version | query | No | string | The explicit Azure AI Foundry Models API version to use for this request. Defaults to v1 if not otherwise specified. |
| fine_tuning_job_id | path | Yes | string | The ID of the fine-tuning job to resume. |
Request Header
Use either token based authentication or API key. Authenticating with token based authentication is recommended and more secure.
| Name | Required | Type | Description |
|---|---|---|---|
| Authorization | True | string | Example: Authorization: Bearer {Azure_OpenAI_Auth_Token}To generate an auth token using Azure CLI: az account get-access-token --resource https://cognitiveservices.azure.comType: oauth2 Authorization Url: https://login.microsoftonline.com/common/oauth2/v2.0/authorizescope: https://cognitiveservices.azure.com/.default |
| api-key | True | string | Provide Azure OpenAI API key here |
Responses
Status Code: 200
Description: The request has succeeded.
| Content-Type | Type | Description |
|---|---|---|
| application/json | OpenAI.FineTuningJob |
Status Code: default
Description: An unexpected error response.
| Content-Type | Type | Description |
|---|---|---|
| application/json | AzureErrorResponse |
List models
GET {endpoint}/openai/v1/models
Lists the currently available models, and provides basic information about each one such as the owner and availability.
Parameters
| Name | In | Required | Type | Description |
|---|---|---|---|---|
| endpoint | path | Yes | string url | Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com; replace "aoairesource" with your Azure OpenAI resource name): https://{your-resource-name}.openai.azure.com |
| api-version | query | No | string | The explicit Azure AI Foundry Models API version to use for this request. Defaults to v1 if not otherwise specified. |
Request Header
Use either token based authentication or API key. Authenticating with token based authentication is recommended and more secure.
| Name | Required | Type | Description |
|---|---|---|---|
| Authorization | True | string | Example: Authorization: Bearer {Azure_OpenAI_Auth_Token}To generate an auth token using Azure CLI: az account get-access-token --resource https://cognitiveservices.azure.comType: oauth2 Authorization Url: https://login.microsoftonline.com/common/oauth2/v2.0/authorizescope: https://cognitiveservices.azure.com/.default |
| api-key | True | string | Provide Azure OpenAI API key here |
Responses
Status Code: 200
Description: The request has succeeded.
| Content-Type | Type | Description |
|---|---|---|
| application/json | OpenAI.ListModelsResponse |
Status Code: default
Description: An unexpected error response.
| Content-Type | Type | Description |
|---|---|---|
| application/json | AzureErrorResponse |
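A sketch of listing models and reading basic fields from the reply. The endpoint and key are placeholders, and the parsed payload below is a stand-in illustrating the list-shaped response (an object with a "data" array of model objects), not a real server reply:

```python
# Sketch: list available models and read basic fields from the JSON reply.
# The endpoint and key are placeholders; the stand-in payload below follows
# the OpenAI.ListModelsResponse shape (an object with a "data" array).
import json
import urllib.request

endpoint = "https://aoairesource.openai.azure.com"
req = urllib.request.Request(
    f"{endpoint}/openai/v1/models",
    headers={"api-key": "<your-api-key>"},
)

# A real call would be:
#   with urllib.request.urlopen(req) as resp:
#       models = json.load(resp)["data"]
# Here we parse a stand-in payload to show the access pattern.
sample = '{"object": "list", "data": [{"id": "gpt-4o", "owned_by": "system"}]}'
models = json.loads(sample)["data"]
print([m["id"] for m in models])  # ['gpt-4o']
```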
Retrieve model
GET {endpoint}/openai/v1/models/{model}
Retrieves a model instance, providing basic information about the model such as the owner and permissioning.
Parameters
| Name | In | Required | Type | Description |
|---|---|---|---|---|
| endpoint | path | Yes | string url | Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com; replace "aoairesource" with your Azure OpenAI resource name): https://{your-resource-name}.openai.azure.com |
| api-version | query | No | string | The explicit Azure AI Foundry Models API version to use for this request. Defaults to v1 if not otherwise specified. |
| model | path | Yes | string | The ID of the model to use for this request. |
Request Header
Use either token based authentication or API key. Authenticating with token based authentication is recommended and more secure.
| Name | Required | Type | Description |
|---|---|---|---|
| Authorization | True | string | Example: Authorization: Bearer {Azure_OpenAI_Auth_Token}To generate an auth token using Azure CLI: az account get-access-token --resource https://cognitiveservices.azure.comType: oauth2 Authorization Url: https://login.microsoftonline.com/common/oauth2/v2.0/authorizescope: https://cognitiveservices.azure.com/.default |
| api-key | True | string | Provide Azure OpenAI API key here |
Responses
Status Code: 200
Description: The request has succeeded.
| Content-Type | Type | Description |
|---|---|---|
| application/json | OpenAI.Model |
Status Code: default
Description: An unexpected error response.
| Content-Type | Type | Description |
|---|---|---|
| application/json | AzureErrorResponse |
Create response
POST {endpoint}/openai/v1/responses
Creates a model response.
Parameters
| Name | In | Required | Type | Description |
|---|---|---|---|---|
| endpoint | path | Yes | string url | Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com; replace "aoairesource" with your Azure OpenAI resource name): https://{your-resource-name}.openai.azure.com |
| api-version | query | No | string | The explicit Azure AI Foundry Models API version to use for this request. Defaults to v1 if not otherwise specified. |
Request Header
Use either token based authentication or API key. Authenticating with token based authentication is recommended and more secure.
| Name | Required | Type | Description |
|---|---|---|---|
| Authorization | True | string | Example: Authorization: Bearer {Azure_OpenAI_Auth_Token}To generate an auth token using Azure CLI: az account get-access-token --resource https://cognitiveservices.azure.comType: oauth2 Authorization Url: https://login.microsoftonline.com/common/oauth2/v2.0/authorizescope: https://cognitiveservices.azure.com/.default |
| api-key | True | string | Provide Azure OpenAI API key here |
Request Body
Content-Type: application/json
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| background | boolean | Whether to run the model response in the background. | No | False |
| include | array | Specify additional output data to include in the model response. Currently supported values are: code_interpreter_call.outputs (outputs of Python code execution in code interpreter tool call items), computer_call_output.output.image_url (image URLs from the computer call output), file_search_call.results (search results of the file search tool call), message.input_image.image_url (image URLs from the input message), message.output_text.logprobs (logprobs with assistant messages), and reasoning.encrypted_content (an encrypted version of reasoning tokens in reasoning item outputs; this enables reasoning items to be used in multi-turn conversations when using the Responses API statelessly, such as when the store parameter is set to false or when an organization is enrolled in the zero data retention program). | No | |
| input | string or array | | No | |
| instructions | string | A system (or developer) message inserted into the model's context. When used along with previous_response_id, the instructions from a previous response are not carried over to the next response. This makes it simple to swap out system (or developer) messages in new responses. | No | |
| max_output_tokens | integer | An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens. | No | |
| max_tool_calls | integer | The maximum number of total calls to built-in tools that can be processed in a response. This maximum applies across all built-in tool calls, not per individual tool. Any further attempts by the model to call a tool are ignored. | No | |
| metadata | object | Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and for querying for objects via the API or the dashboard. Keys are strings with a maximum length of 64 characters; values are strings with a maximum length of 512 characters. | No | |
| model | string | The model deployment to use for the creation of this response. | Yes | |
| parallel_tool_calls | boolean | Whether to allow the model to run tool calls in parallel. | No | True |
| previous_response_id | string | The unique ID of the previous response to the model. Use this to create multi-turn conversations. | No | |
| prompt | object | Reference to a prompt template and its variables. | No | |
| └─ id | string | The unique identifier of the prompt template to use. | No | |
| └─ variables | OpenAI.ResponsePromptVariables | Optional map of values to substitute in for variables in your prompt. The substitution values can be strings or other Response input types such as images or files. | No | |
| └─ version | string | Optional version of the prompt template. | No | |
| reasoning | object | Configuration options for reasoning models (reasoning models only). | No | |
| └─ effort | OpenAI.ReasoningEffort | Constrains effort on reasoning for reasoning models. Currently supported values are low, medium, and high. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response. | No | |
| └─ generate_summary | enum | Deprecated: use summary instead. A summary of the reasoning performed by the model, useful for debugging and understanding the model's reasoning process. One of auto, concise, or detailed. Possible values: auto, concise, detailed | No | |
| └─ summary | enum | A summary of the reasoning performed by the model, useful for debugging and understanding the model's reasoning process. One of auto, concise, or detailed. Possible values: auto, concise, detailed | No | |
| store | boolean | Whether to store the generated model response for later retrieval via the API. | No | True |
| stream | boolean | If set to true, the model response data is streamed to the client as it is generated, using server-sent events. | No | False |
| temperature | number | What sampling temperature to use, between 0 and 2. Higher values like 0.8 make the output more random, while lower values like 0.2 make it more focused and deterministic. We generally recommend altering this or top_p, but not both. | No | 1 |
| text | object | Configuration options for a text response from the model. Can be plain text or structured JSON data (Structured Outputs). | No | |
| └─ format | OpenAI.ResponseTextFormatConfiguration | | No | |
| tool_choice | object | Controls which (if any) tool is called by the model. none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools. | No | |
| └─ type | OpenAI.ToolChoiceObjectType | Indicates that the model should use a built-in tool to generate a response. | No | |
| tools | array | An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter. The two categories of tools you can provide are built-in tools (provided by OpenAI to extend the model's capabilities, like file search) and function calls (custom tools defined by you, enabling the model to call your own code). | No | |
| top_logprobs | integer | An integer between 0 and 20 specifying the number of most likely tokens to return at each token position, each with an associated log probability. | No | |
| top_p | number | An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or temperature, but not both. | No | 1 |
| truncation | enum | The truncation strategy to use for the model response. auto: if the context of this response and previous ones exceeds the model's context window size, the model truncates the response to fit the context window by dropping input items in the middle of the conversation. disabled (default): if a model response would exceed the context window size for a model, the request fails with a 400 error. Possible values: auto, disabled | No | |
| user | string | A unique identifier representing your end user, which can help OpenAI to monitor and detect abuse. | No |
Responses
Status Code: 200
Description: The request has succeeded.
| Content-Type | Type | Description |
|---|---|---|
| application/json | AzureResponse | |
| text/event-stream | OpenAI.ResponseStreamEvent |
Status Code: default
Description: An unexpected error response.
| Content-Type | Type | Description |
|---|---|---|
| application/json | AzureErrorResponse |
Examples
Example
Create a model response
POST {endpoint}/openai/v1/responses
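A minimal sketch of the request, showing only a few of the documented body fields. The deployment name and other values are placeholders:

```python
# Sketch: create a model response. "model" names a deployment in your
# resource; every value here is a placeholder, and only a few of the
# documented request-body fields are shown.
import json
import urllib.request

endpoint = "https://aoairesource.openai.azure.com"
body = {
    "model": "my-gpt-deployment",          # your model deployment name
    "input": "Say hello in one sentence.",
    "max_output_tokens": 100,
    "store": False,                        # don't retain this response for later retrieval
}
req = urllib.request.Request(
    f"{endpoint}/openai/v1/responses",
    data=json.dumps(body).encode("utf-8"),
    headers={"api-key": "<your-api-key>", "Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(req) would return an AzureResponse JSON body; with
# "stream": true, the reply is instead a text/event-stream of
# OpenAI.ResponseStreamEvent events.
```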
Get response
GET {endpoint}/openai/v1/responses/{response_id}
Retrieves a model response with the given ID.
Parameters
| Name | In | Required | Type | Description |
|---|---|---|---|---|
| endpoint | path | Yes | string url | Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com; replace "aoairesource" with your Azure OpenAI resource name): https://{your-resource-name}.openai.azure.com |
| api-version | query | No | string | The explicit Azure AI Foundry Models API version to use for this request. Defaults to v1 if not otherwise specified. |
| response_id | path | Yes | string | |
| include_obfuscation | query | No | boolean | When true, stream obfuscation is enabled. Stream obfuscation adds random characters to an obfuscation field on streaming delta events to normalize payload sizes as a mitigation against certain side-channel attacks. These obfuscation fields are included by default, but add a small amount of overhead to the data stream. You can set include_obfuscation to false to optimize for bandwidth if you trust the network links between your application and the OpenAI API. |
| include[] | query | No | array | |
Request Header
Use either token based authentication or API key. Authenticating with token based authentication is recommended and more secure.
| Name | Required | Type | Description |
|---|---|---|---|
| Authorization | True | string | Example: Authorization: Bearer {Azure_OpenAI_Auth_Token}To generate an auth token using Azure CLI: az account get-access-token --resource https://cognitiveservices.azure.comType: oauth2 Authorization Url: https://login.microsoftonline.com/common/oauth2/v2.0/authorizescope: https://cognitiveservices.azure.com/.default |
| api-key | True | string | Provide Azure OpenAI API key here |
Responses
Status Code: 200
Description: The request has succeeded.
| Content-Type | Type | Description |
|---|---|---|
| application/json | AzureResponse |
Status Code: default
Description: An unexpected error response.
| Content-Type | Type | Description |
|---|---|---|
| application/json | AzureErrorResponse |
Delete response
DELETE {endpoint}/openai/v1/responses/{response_id}
Deletes a response by ID.
Parameters
| Name | In | Required | Type | Description |
|---|---|---|---|---|
| endpoint | path | Yes | string url | Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com; replace "aoairesource" with your Azure OpenAI resource name): https://{your-resource-name}.openai.azure.com |
| api-version | query | No | string | The explicit Azure AI Foundry Models API version to use for this request. Defaults to v1 if not otherwise specified. |
| response_id | path | Yes | string | |
Request Header
Use either token based authentication or API key. Authenticating with token based authentication is recommended and more secure.
| Name | Required | Type | Description |
|---|---|---|---|
| Authorization | True | string | Example: Authorization: Bearer {Azure_OpenAI_Auth_Token}To generate an auth token using Azure CLI: az account get-access-token --resource https://cognitiveservices.azure.comType: oauth2 Authorization Url: https://login.microsoftonline.com/common/oauth2/v2.0/authorizescope: https://cognitiveservices.azure.com/.default |
| api-key | True | string | Provide Azure OpenAI API key here |
Responses
Status Code: 200
Description: The request has succeeded.
| Content-Type | Type | Description |
|---|---|---|
| application/json | object |
Status Code: default
Description: An unexpected error response.
| Content-Type | Type | Description |
|---|---|---|
| application/json | AzureErrorResponse |
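Deletion is the same path as retrieval, with the DELETE method. A sketch with a hypothetical response ID:

```python
# Sketch: delete a stored response by ID. The endpoint, key, and
# response ID are placeholders.
import urllib.request

endpoint = "https://aoairesource.openai.azure.com"
response_id = "resp_abc123"  # hypothetical response ID

req = urllib.request.Request(
    f"{endpoint}/openai/v1/responses/{response_id}",
    headers={"api-key": "<your-api-key>"},
    method="DELETE",
)
# urllib.request.urlopen(req) would delete the response; a GET on the
# same path retrieves it instead.
```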
List input items
GET {endpoint}/openai/v1/responses/{response_id}/input_items
Returns a list of input items for a given response.
Parameters
| Name | In | Required | Type | Description |
|---|---|---|---|---|
| endpoint | path | Yes | string url | Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com; replace "aoairesource" with your Azure OpenAI resource name): https://{your-resource-name}.openai.azure.com |
| api-version | query | No | string | The explicit Azure AI Foundry Models API version to use for this request. Defaults to v1 if not otherwise specified. |
| response_id | path | Yes | string | |
| limit | query | No | integer | A limit on the number of objects to be returned. Limit can range between 1 and 100; the default is 20. |
| order | query | No | string (possible values: asc, desc) | Sort order by the created_at timestamp of the objects: asc for ascending order and desc for descending order. |
| after | query | No | string | A cursor for use in pagination. after is an object ID that defines your place in the list. For instance, if you make a list request and receive 100 objects ending with obj_foo, your subsequent call can include after=obj_foo to fetch the next page of the list. |
| before | query | No | string | A cursor for use in pagination. before is an object ID that defines your place in the list. For instance, if you make a list request and receive 100 objects ending with obj_foo, your subsequent call can include before=obj_foo to fetch the previous page of the list. |
Request Header
Use either token based authentication or API key. Authenticating with token based authentication is recommended and more secure.
| Name | Required | Type | Description |
|---|---|---|---|
| Authorization | True | string | Example: Authorization: Bearer {Azure_OpenAI_Auth_Token}To generate an auth token using Azure CLI: az account get-access-token --resource https://cognitiveservices.azure.comType: oauth2 Authorization Url: https://login.microsoftonline.com/common/oauth2/v2.0/authorizescope: https://cognitiveservices.azure.com/.default |
| api-key | True | string | Provide Azure OpenAI API key here |
Responses
Status Code: 200
Description: The request has succeeded.
| Content-Type | Type | Description |
|---|---|---|
| application/json | OpenAI.ResponseItemList |
Status Code: default
Description: An unexpected error response.
| Content-Type | Type | Description |
|---|---|---|
| application/json | AzureErrorResponse |
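The cursor pagination described above can be sketched as a loop that keeps passing the last object ID as "after" until the server reports no more pages. The `fetch_page` function here is a stub standing in for the real HTTPS call; the "has_more" and "last_id" fields follow the list-response shape used by these endpoints:

```python
# Sketch of cursor pagination over input items. fetch_page is a stub for
# GET {endpoint}/openai/v1/responses/{response_id}/input_items; the
# "has_more"/"last_id" fields follow the OpenAI.ResponseItemList shape.
def fetch_page(after=None):
    # Stand-in pages keyed by the "after" cursor.
    pages = {
        None: {"data": [{"id": "item_1"}, {"id": "item_2"}], "has_more": True, "last_id": "item_2"},
        "item_2": {"data": [{"id": "item_3"}], "has_more": False, "last_id": "item_3"},
    }
    return pages[after]

items, after = [], None
while True:
    page = fetch_page(after)
    items.extend(page["data"])
    if not page["has_more"]:
        break
    after = page["last_id"]  # resume from the last object of this page

print([i["id"] for i in items])  # ['item_1', 'item_2', 'item_3']
```

The same loop applies to the other cursor-paginated list endpoints (vector stores, checkpoints, events), swapping in the relevant URL.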
List vector stores
GET {endpoint}/openai/v1/vector_stores
Returns a list of vector stores.
Parameters
| Name | In | Required | Type | Description |
|---|---|---|---|---|
| endpoint | path | Yes | string url | Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com; replace "aoairesource" with your Azure OpenAI resource name): https://{your-resource-name}.openai.azure.com |
| api-version | query | No | string | The explicit Azure AI Foundry Models API version to use for this request. Defaults to v1 if not otherwise specified. |
| limit | query | No | integer | A limit on the number of objects to be returned. Limit can range between 1 and 100; the default is 20. |
| order | query | No | string (possible values: asc, desc) | Sort order by the created_at timestamp of the objects: asc for ascending order and desc for descending order. |
| after | query | No | string | A cursor for use in pagination. after is an object ID that defines your place in the list. For instance, if you make a list request and receive 100 objects ending with obj_foo, your subsequent call can include after=obj_foo to fetch the next page of the list. |
| before | query | No | string | A cursor for use in pagination. before is an object ID that defines your place in the list. For instance, if you make a list request and receive 100 objects ending with obj_foo, your subsequent call can include before=obj_foo to fetch the previous page of the list. |
Request Header
Use either token based authentication or API key. Authenticating with token based authentication is recommended and more secure.
| Name | Required | Type | Description |
|---|---|---|---|
| Authorization | True | string | Example: Authorization: Bearer {Azure_OpenAI_Auth_Token}To generate an auth token using Azure CLI: az account get-access-token --resource https://cognitiveservices.azure.comType: oauth2 Authorization Url: https://login.microsoftonline.com/common/oauth2/v2.0/authorizescope: https://cognitiveservices.azure.com/.default |
| api-key | True | string | Provide Azure OpenAI API key here |
Responses
Status Code: 200
Description: The request has succeeded.
| Content-Type | Type | Description |
|---|---|---|
| application/json | OpenAI.ListVectorStoresResponse |
Status Code: default
Description: An unexpected error response.
| Content-Type | Type | Description |
|---|---|---|
| application/json | AzureErrorResponse |
Create vector store
POST {endpoint}/openai/v1/vector_stores
Creates a vector store.
Parameters
| Name | In | Required | Type | Description |
|---|---|---|---|---|
| endpoint | path | Yes | string url | Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com; replace "aoairesource" with your Azure OpenAI resource name): https://{your-resource-name}.openai.azure.com |
| api-version | query | No | string | The explicit Azure AI Foundry Models API version to use for this request. Defaults to v1 if not otherwise specified. |
Request Header
Use either token based authentication or API key. Authenticating with token based authentication is recommended and more secure.
| Name | Required | Type | Description |
|---|---|---|---|
| Authorization | True | string | Example: Authorization: Bearer {Azure_OpenAI_Auth_Token}To generate an auth token using Azure CLI: az account get-access-token --resource https://cognitiveservices.azure.comType: oauth2 Authorization Url: https://login.microsoftonline.com/common/oauth2/v2.0/authorizescope: https://cognitiveservices.azure.com/.default |
| api-key | True | string | Provide Azure OpenAI API key here |
Request Body
Content-Type: application/json
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| chunking_strategy | object | The chunking strategy used to chunk the file(s). If not set, the auto strategy is used, which currently uses a max_chunk_size_tokens of 800 and chunk_overlap_tokens of 400. | No | |
| └─ static | OpenAI.StaticChunkingStrategy | | No | |
| └─ type | enum | Always static. Possible values: static | No | |
| expires_after | OpenAI.VectorStoreExpirationAfter | The expiration policy for a vector store. | No | |
| file_ids | array | A list of File IDs that the vector store should use. Useful for tools like file_search that can access files. | No | |
| metadata | object | Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard. Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters. | No | |
| name | string | The name of the vector store. | No | |
Responses
Status Code: 200
Description: The request has succeeded.
| Content-Type | Type | Description |
|---|---|---|
| application/json | OpenAI.VectorStoreObject |
Status Code: default
Description: An unexpected error response.
| Content-Type | Type | Description |
|---|---|---|
| application/json | AzureErrorResponse |
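As an illustrative sketch of a request body for this operation (the store name, file IDs, and chunking values are placeholders, and chunking_strategy is optional):

```python
import json

# Hypothetical body for POST {endpoint}/openai/v1/vector_stores.
body = {
    "name": "support-docs",
    "file_ids": ["assistant-file-1", "assistant-file-2"],
    # Optional: an explicit static chunking strategy instead of auto.
    "chunking_strategy": {
        "type": "static",
        "static": {
            "max_chunk_size_tokens": 800,
            "chunk_overlap_tokens": 400,
        },
    },
}
payload = json.dumps(body)
```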
Get vector store
GET {endpoint}/openai/v1/vector_stores/{vector_store_id}
Retrieves a vector store.
Parameters
| Name | In | Required | Type | Description |
|---|---|---|---|---|
| endpoint | path | Yes | string url | Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com; replace "aoairesource" with your Azure OpenAI resource name): https://{your-resource-name}.openai.azure.com |
| api-version | query | No | | The explicit Azure AI Foundry Models API version to use for this request. Defaults to v1 if not otherwise specified. |
| vector_store_id | path | Yes | string | The ID of the vector store to retrieve. |
Request Header
Use either token-based authentication or an API key. Token-based authentication is recommended and more secure.
| Name | Required | Type | Description |
|---|---|---|---|
| Authorization | True | string | Example: Authorization: Bearer {Azure_OpenAI_Auth_Token}. To generate an auth token using Azure CLI: az account get-access-token --resource https://cognitiveservices.azure.com. Type: oauth2. Authorization URL: https://login.microsoftonline.com/common/oauth2/v2.0/authorize. Scope: https://cognitiveservices.azure.com/.default |
| api-key | True | string | Provide the Azure OpenAI API key here. |
Responses
Status Code: 200
Description: The request has succeeded.
| Content-Type | Type | Description |
|---|---|---|
| application/json | OpenAI.VectorStoreObject |
Status Code: default
Description: An unexpected error response.
| Content-Type | Type | Description |
|---|---|---|
| application/json | AzureErrorResponse |
Modify vector store
POST {endpoint}/openai/v1/vector_stores/{vector_store_id}
Modifies a vector store.
Parameters
| Name | In | Required | Type | Description |
|---|---|---|---|---|
| endpoint | path | Yes | string url | Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com; replace "aoairesource" with your Azure OpenAI resource name): https://{your-resource-name}.openai.azure.com |
| api-version | query | No | | The explicit Azure AI Foundry Models API version to use for this request. Defaults to v1 if not otherwise specified. |
| vector_store_id | path | Yes | string | The ID of the vector store to modify. |
Request Header
Use either token-based authentication or an API key. Token-based authentication is recommended and more secure.
| Name | Required | Type | Description |
|---|---|---|---|
| Authorization | True | string | Example: Authorization: Bearer {Azure_OpenAI_Auth_Token}. To generate an auth token using Azure CLI: az account get-access-token --resource https://cognitiveservices.azure.com. Type: oauth2. Authorization URL: https://login.microsoftonline.com/common/oauth2/v2.0/authorize. Scope: https://cognitiveservices.azure.com/.default |
| api-key | True | string | Provide the Azure OpenAI API key here. |
Request Body
Content-Type: application/json
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| expires_after | object | The expiration policy for a vector store. | No | |
| └─ anchor | enum | Anchor timestamp after which the expiration policy applies. Supported anchors: last_active_at. Possible values: last_active_at | No | |
| └─ days | integer | The number of days after the anchor time that the vector store will expire. | No | |
| metadata | object | Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard. Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters. | No | |
| name | string | The name of the vector store. | No | |
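A sketch of a body that sets the expiration policy described above (the name and day count are illustrative):

```python
import json

# Hypothetical body for POST {endpoint}/openai/v1/vector_stores/{vector_store_id}:
# expire the store 7 days after it was last active.
body = {
    "name": "support-docs-v2",
    "expires_after": {"anchor": "last_active_at", "days": 7},
}
payload = json.dumps(body)
```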
Responses
Status Code: 200
Description: The request has succeeded.
| Content-Type | Type | Description |
|---|---|---|
| application/json | OpenAI.VectorStoreObject |
Status Code: default
Description: An unexpected error response.
| Content-Type | Type | Description |
|---|---|---|
| application/json | AzureErrorResponse |
Delete vector store
DELETE {endpoint}/openai/v1/vector_stores/{vector_store_id}
Delete a vector store.
Parameters
| Name | In | Required | Type | Description |
|---|---|---|---|---|
| endpoint | path | Yes | string url | Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com; replace "aoairesource" with your Azure OpenAI resource name): https://{your-resource-name}.openai.azure.com |
| api-version | query | No | | The explicit Azure AI Foundry Models API version to use for this request. Defaults to v1 if not otherwise specified. |
| vector_store_id | path | Yes | string | The ID of the vector store to delete. |
Request Header
Use either token-based authentication or an API key. Token-based authentication is recommended and more secure.
| Name | Required | Type | Description |
|---|---|---|---|
| Authorization | True | string | Example: Authorization: Bearer {Azure_OpenAI_Auth_Token}. To generate an auth token using Azure CLI: az account get-access-token --resource https://cognitiveservices.azure.com. Type: oauth2. Authorization URL: https://login.microsoftonline.com/common/oauth2/v2.0/authorize. Scope: https://cognitiveservices.azure.com/.default |
| api-key | True | string | Provide the Azure OpenAI API key here. |
Responses
Status Code: 200
Description: The request has succeeded.
| Content-Type | Type | Description |
|---|---|---|
| application/json | OpenAI.DeleteVectorStoreResponse |
Status Code: default
Description: An unexpected error response.
| Content-Type | Type | Description |
|---|---|---|
| application/json | AzureErrorResponse |
Create vector store file batch
POST {endpoint}/openai/v1/vector_stores/{vector_store_id}/file_batches
Create a vector store file batch.
Parameters
| Name | In | Required | Type | Description |
|---|---|---|---|---|
| endpoint | path | Yes | string url | Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com; replace "aoairesource" with your Azure OpenAI resource name): https://{your-resource-name}.openai.azure.com |
| api-version | query | No | | The explicit Azure AI Foundry Models API version to use for this request. Defaults to v1 if not otherwise specified. |
| vector_store_id | path | Yes | string | The ID of the vector store for which to create a file batch. |
Request Header
Use either token-based authentication or an API key. Token-based authentication is recommended and more secure.
| Name | Required | Type | Description |
|---|---|---|---|
| Authorization | True | string | Example: Authorization: Bearer {Azure_OpenAI_Auth_Token}. To generate an auth token using Azure CLI: az account get-access-token --resource https://cognitiveservices.azure.com. Type: oauth2. Authorization URL: https://login.microsoftonline.com/common/oauth2/v2.0/authorize. Scope: https://cognitiveservices.azure.com/.default |
| api-key | True | string | Provide the Azure OpenAI API key here. |
Request Body
Content-Type: application/json
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| attributes | object | Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard. Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters, booleans, or numbers. | No | |
| chunking_strategy | OpenAI.ChunkingStrategyRequestParam | The chunking strategy used to chunk the file(s). If not set, will use the auto strategy. | No | |
| file_ids | array | A list of File IDs that the vector store should use. Useful for tools like file_search that can access files. | Yes | |
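A minimal body sketch for this operation, assuming hypothetical file IDs (only file_ids is required):

```python
import json

# Hypothetical body for POST .../vector_stores/{vector_store_id}/file_batches.
body = {
    "file_ids": ["assistant-file-1", "assistant-file-2", "assistant-file-3"],
    # Optional free-form attributes attached to the files in the batch.
    "attributes": {"project": "docs-ingest"},
}
payload = json.dumps(body)
```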
Responses
Status Code: 200
Description: The request has succeeded.
| Content-Type | Type | Description |
|---|---|---|
| application/json | OpenAI.VectorStoreFileBatchObject |
Status Code: default
Description: An unexpected error response.
| Content-Type | Type | Description |
|---|---|---|
| application/json | AzureErrorResponse |
Get vector store file batch
GET {endpoint}/openai/v1/vector_stores/{vector_store_id}/file_batches/{batch_id}
Retrieves a vector store file batch.
Parameters
| Name | In | Required | Type | Description |
|---|---|---|---|---|
| endpoint | path | Yes | string url | Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com; replace "aoairesource" with your Azure OpenAI resource name): https://{your-resource-name}.openai.azure.com |
| api-version | query | No | | The explicit Azure AI Foundry Models API version to use for this request. Defaults to v1 if not otherwise specified. |
| vector_store_id | path | Yes | string | The ID of the vector store that the file batch belongs to. |
| batch_id | path | Yes | string | The ID of the file batch being retrieved. |
Request Header
Use either token-based authentication or an API key. Token-based authentication is recommended and more secure.
| Name | Required | Type | Description |
|---|---|---|---|
| Authorization | True | string | Example: Authorization: Bearer {Azure_OpenAI_Auth_Token}. To generate an auth token using Azure CLI: az account get-access-token --resource https://cognitiveservices.azure.com. Type: oauth2. Authorization URL: https://login.microsoftonline.com/common/oauth2/v2.0/authorize. Scope: https://cognitiveservices.azure.com/.default |
| api-key | True | string | Provide the Azure OpenAI API key here. |
Responses
Status Code: 200
Description: The request has succeeded.
| Content-Type | Type | Description |
|---|---|---|
| application/json | OpenAI.VectorStoreFileBatchObject |
Status Code: default
Description: An unexpected error response.
| Content-Type | Type | Description |
|---|---|---|
| application/json | AzureErrorResponse |
Cancel vector store file batch
POST {endpoint}/openai/v1/vector_stores/{vector_store_id}/file_batches/{batch_id}/cancel
Cancel a vector store file batch. This attempts to cancel the processing of files in this batch as soon as possible.
Parameters
| Name | In | Required | Type | Description |
|---|---|---|---|---|
| endpoint | path | Yes | string url | Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com; replace "aoairesource" with your Azure OpenAI resource name): https://{your-resource-name}.openai.azure.com |
| api-version | query | No | | The explicit Azure AI Foundry Models API version to use for this request. Defaults to v1 if not otherwise specified. |
| vector_store_id | path | Yes | string | The ID of the vector store that the file batch belongs to. |
| batch_id | path | Yes | string | The ID of the file batch to cancel. |
Request Header
Use either token-based authentication or an API key. Token-based authentication is recommended and more secure.
| Name | Required | Type | Description |
|---|---|---|---|
| Authorization | True | string | Example: Authorization: Bearer {Azure_OpenAI_Auth_Token}. To generate an auth token using Azure CLI: az account get-access-token --resource https://cognitiveservices.azure.com. Type: oauth2. Authorization URL: https://login.microsoftonline.com/common/oauth2/v2.0/authorize. Scope: https://cognitiveservices.azure.com/.default |
| api-key | True | string | Provide the Azure OpenAI API key here. |
Responses
Status Code: 200
Description: The request has succeeded.
| Content-Type | Type | Description |
|---|---|---|
| application/json | OpenAI.VectorStoreFileBatchObject |
Status Code: default
Description: An unexpected error response.
| Content-Type | Type | Description |
|---|---|---|
| application/json | AzureErrorResponse |
List files in vector store batch
GET {endpoint}/openai/v1/vector_stores/{vector_store_id}/file_batches/{batch_id}/files
Returns a list of vector store files in a batch.
Parameters
| Name | In | Required | Type | Description |
|---|---|---|---|---|
| endpoint | path | Yes | string url | Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com; replace "aoairesource" with your Azure OpenAI resource name): https://{your-resource-name}.openai.azure.com |
| api-version | query | No | | The explicit Azure AI Foundry Models API version to use for this request. Defaults to v1 if not otherwise specified. |
| vector_store_id | path | Yes | string | The ID of the vector store that the file batch belongs to. |
| batch_id | path | Yes | string | The ID of the file batch that the files belong to. |
| limit | query | No | integer | A limit on the number of objects to be returned. Limit can range between 1 and 100, and the default is 20. |
| order | query | No | string. Possible values: asc, desc | Sort order by the created_at timestamp of the objects. asc for ascending order and desc for descending order. |
| after | query | No | string | A cursor for use in pagination. after is an object ID that defines your place in the list.For instance, if you make a list request and receive 100 objects, ending with obj_foo, your subsequent call can include after=obj_foo in order to fetch the next page of the list. |
| before | query | No | string | A cursor for use in pagination. before is an object ID that defines your place in the list.For instance, if you make a list request and receive 100 objects, ending with obj_foo, your subsequent call can include before=obj_foo in order to fetch the previous page of the list. |
| filter | query | No | | Filter by file status. One of in_progress, completed, failed, cancelled. |
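The limit and after parameters above support page-by-page traversal of the list; a sketch, where fetch_page is a hypothetical stand-in for the HTTP GET against this endpoint:

```python
from urllib.parse import urlencode

def list_all_files(fetch_page, limit=20):
    """Yield every file object, following the `after` cursor until exhausted.

    `fetch_page` takes a query string and returns the decoded JSON page,
    assumed to carry `data` (a list of objects with `id`) and `has_more`.
    """
    after = None
    while True:
        params = {"limit": limit}
        if after:
            params["after"] = after
        page = fetch_page(urlencode(params))
        yield from page["data"]
        if not page.get("has_more"):
            break
        # Cursor for the next page: the ID of the last object returned.
        after = page["data"][-1]["id"]
```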
Request Header
Use either token-based authentication or an API key. Token-based authentication is recommended and more secure.
| Name | Required | Type | Description |
|---|---|---|---|
| Authorization | True | string | Example: Authorization: Bearer {Azure_OpenAI_Auth_Token}. To generate an auth token using Azure CLI: az account get-access-token --resource https://cognitiveservices.azure.com. Type: oauth2. Authorization URL: https://login.microsoftonline.com/common/oauth2/v2.0/authorize. Scope: https://cognitiveservices.azure.com/.default |
| api-key | True | string | Provide the Azure OpenAI API key here. |
Responses
Status Code: 200
Description: The request has succeeded.
| Content-Type | Type | Description |
|---|---|---|
| application/json | OpenAI.ListVectorStoreFilesResponse |
Status Code: default
Description: An unexpected error response.
| Content-Type | Type | Description |
|---|---|---|
| application/json | AzureErrorResponse |
List vector store files
GET {endpoint}/openai/v1/vector_stores/{vector_store_id}/files
Returns a list of vector store files.
Parameters
| Name | In | Required | Type | Description |
|---|---|---|---|---|
| endpoint | path | Yes | string url | Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com; replace "aoairesource" with your Azure OpenAI resource name): https://{your-resource-name}.openai.azure.com |
| api-version | query | No | | The explicit Azure AI Foundry Models API version to use for this request. Defaults to v1 if not otherwise specified. |
| vector_store_id | path | Yes | string | The ID of the vector store that the files belong to. |
| limit | query | No | integer | A limit on the number of objects to be returned. Limit can range between 1 and 100, and the default is 20. |
| order | query | No | string. Possible values: asc, desc | Sort order by the created_at timestamp of the objects. asc for ascending order and desc for descending order. |
| after | query | No | string | A cursor for use in pagination. after is an object ID that defines your place in the list.For instance, if you make a list request and receive 100 objects, ending with obj_foo, your subsequent call can include after=obj_foo in order to fetch the next page of the list. |
| before | query | No | string | A cursor for use in pagination. before is an object ID that defines your place in the list.For instance, if you make a list request and receive 100 objects, ending with obj_foo, your subsequent call can include before=obj_foo in order to fetch the previous page of the list. |
| filter | query | No | | Filter by file status. One of in_progress, completed, failed, cancelled. |
Request Header
Use either token-based authentication or an API key. Token-based authentication is recommended and more secure.
| Name | Required | Type | Description |
|---|---|---|---|
| Authorization | True | string | Example: Authorization: Bearer {Azure_OpenAI_Auth_Token}. To generate an auth token using Azure CLI: az account get-access-token --resource https://cognitiveservices.azure.com. Type: oauth2. Authorization URL: https://login.microsoftonline.com/common/oauth2/v2.0/authorize. Scope: https://cognitiveservices.azure.com/.default |
| api-key | True | string | Provide the Azure OpenAI API key here. |
Responses
Status Code: 200
Description: The request has succeeded.
| Content-Type | Type | Description |
|---|---|---|
| application/json | OpenAI.ListVectorStoreFilesResponse |
Status Code: default
Description: An unexpected error response.
| Content-Type | Type | Description |
|---|---|---|
| application/json | AzureErrorResponse |
Create vector store file
POST {endpoint}/openai/v1/vector_stores/{vector_store_id}/files
Create a vector store file by attaching a File to a vector store.
Parameters
| Name | In | Required | Type | Description |
|---|---|---|---|---|
| endpoint | path | Yes | string url | Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com; replace "aoairesource" with your Azure OpenAI resource name): https://{your-resource-name}.openai.azure.com |
| api-version | query | No | | The explicit Azure AI Foundry Models API version to use for this request. Defaults to v1 if not otherwise specified. |
| vector_store_id | path | Yes | string | The ID of the vector store for which to create a File. |
Request Header
Use either token-based authentication or an API key. Token-based authentication is recommended and more secure.
| Name | Required | Type | Description |
|---|---|---|---|
| Authorization | True | string | Example: Authorization: Bearer {Azure_OpenAI_Auth_Token}. To generate an auth token using Azure CLI: az account get-access-token --resource https://cognitiveservices.azure.com. Type: oauth2. Authorization URL: https://login.microsoftonline.com/common/oauth2/v2.0/authorize. Scope: https://cognitiveservices.azure.com/.default |
| api-key | True | string | Provide the Azure OpenAI API key here. |
Request Body
Content-Type: application/json
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| attributes | object | Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard. Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters, booleans, or numbers. | No | |
| chunking_strategy | OpenAI.ChunkingStrategyRequestParam | The chunking strategy used to chunk the file(s). If not set, will use the auto strategy. | No | |
| file_id | string | A File ID that the vector store should use. Useful for tools like file_search that can access files. | Yes | |
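A minimal body sketch for attaching a single file (the file ID and attribute values are placeholders):

```python
import json

# Hypothetical body for POST .../vector_stores/{vector_store_id}/files.
body = {
    "file_id": "assistant-file-1",
    # Optional free-form attributes attached to the vector store file.
    "attributes": {"source": "handbook.pdf"},
}
payload = json.dumps(body)
```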
Responses
Status Code: 200
Description: The request has succeeded.
| Content-Type | Type | Description |
|---|---|---|
| application/json | OpenAI.VectorStoreFileObject |
Status Code: default
Description: An unexpected error response.
| Content-Type | Type | Description |
|---|---|---|
| application/json | AzureErrorResponse |
Get vector store file
GET {endpoint}/openai/v1/vector_stores/{vector_store_id}/files/{file_id}
Retrieves a vector store file.
Parameters
| Name | In | Required | Type | Description |
|---|---|---|---|---|
| endpoint | path | Yes | string url | Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com; replace "aoairesource" with your Azure OpenAI resource name): https://{your-resource-name}.openai.azure.com |
| api-version | query | No | | The explicit Azure AI Foundry Models API version to use for this request. Defaults to v1 if not otherwise specified. |
| vector_store_id | path | Yes | string | The ID of the vector store that the file belongs to. |
| file_id | path | Yes | string | The ID of the file being retrieved. |
Request Header
Use either token-based authentication or an API key. Token-based authentication is recommended and more secure.
| Name | Required | Type | Description |
|---|---|---|---|
| Authorization | True | string | Example: Authorization: Bearer {Azure_OpenAI_Auth_Token}. To generate an auth token using Azure CLI: az account get-access-token --resource https://cognitiveservices.azure.com. Type: oauth2. Authorization URL: https://login.microsoftonline.com/common/oauth2/v2.0/authorize. Scope: https://cognitiveservices.azure.com/.default |
| api-key | True | string | Provide the Azure OpenAI API key here. |
Responses
Status Code: 200
Description: The request has succeeded.
| Content-Type | Type | Description |
|---|---|---|
| application/json | OpenAI.VectorStoreFileObject |
Status Code: default
Description: An unexpected error response.
| Content-Type | Type | Description |
|---|---|---|
| application/json | AzureErrorResponse |
Update vector store file attributes
POST {endpoint}/openai/v1/vector_stores/{vector_store_id}/files/{file_id}
Parameters
| Name | In | Required | Type | Description |
|---|---|---|---|---|
| endpoint | path | Yes | string url | Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com; replace "aoairesource" with your Azure OpenAI resource name): https://{your-resource-name}.openai.azure.com |
| api-version | query | No | | The explicit Azure AI Foundry Models API version to use for this request. Defaults to v1 if not otherwise specified. |
| vector_store_id | path | Yes | string | |
| file_id | path | Yes | string | |
Request Header
Use either token-based authentication or an API key. Token-based authentication is recommended and more secure.
| Name | Required | Type | Description |
|---|---|---|---|
| Authorization | True | string | Example: Authorization: Bearer {Azure_OpenAI_Auth_Token}. To generate an auth token using Azure CLI: az account get-access-token --resource https://cognitiveservices.azure.com. Type: oauth2. Authorization URL: https://login.microsoftonline.com/common/oauth2/v2.0/authorize. Scope: https://cognitiveservices.azure.com/.default |
| api-key | True | string | Provide the Azure OpenAI API key here. |
Request Body
Content-Type: application/json
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| attributes | object | Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard. Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters, booleans, or numbers. | Yes | |
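The documented limits on attributes (at most 16 pairs; keys up to 64 characters; string values up to 512 characters, or booleans/numbers) can be checked client-side before sending; a minimal sketch:

```python
def validate_attributes(attributes):
    """Raise ValueError if the attributes map violates the documented limits."""
    if len(attributes) > 16:
        raise ValueError("at most 16 key-value pairs are allowed")
    for key, value in attributes.items():
        if not isinstance(key, str) or len(key) > 64:
            raise ValueError(f"key {key!r} must be a string of at most 64 chars")
        if isinstance(value, str):
            if len(value) > 512:
                raise ValueError(f"value for {key!r} exceeds 512 characters")
        elif not isinstance(value, (bool, int, float)):
            raise ValueError(f"value for {key!r} must be a string, boolean, or number")
```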
Responses
Status Code: 200
Description: The request has succeeded.
| Content-Type | Type | Description |
|---|---|---|
| application/json | OpenAI.VectorStoreFileObject |
Status Code: default
Description: An unexpected error response.
| Content-Type | Type | Description |
|---|---|---|
| application/json | AzureErrorResponse |
Delete vector store file
DELETE {endpoint}/openai/v1/vector_stores/{vector_store_id}/files/{file_id}
Delete a vector store file. This will remove the file from the vector store but the file itself will not be deleted. To delete the file, use the delete file endpoint.
Parameters
| Name | In | Required | Type | Description |
|---|---|---|---|---|
| endpoint | path | Yes | string url | Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com; replace "aoairesource" with your Azure OpenAI resource name): https://{your-resource-name}.openai.azure.com |
| api-version | query | No | | The explicit Azure AI Foundry Models API version to use for this request. Defaults to v1 if not otherwise specified. |
| vector_store_id | path | Yes | string | The ID of the vector store that the file belongs to. |
| file_id | path | Yes | string | The ID of the file to delete. |
Request Header
Use either token-based authentication or an API key. Token-based authentication is recommended and more secure.
| Name | Required | Type | Description |
|---|---|---|---|
| Authorization | True | string | Example: Authorization: Bearer {Azure_OpenAI_Auth_Token}. To generate an auth token using Azure CLI: az account get-access-token --resource https://cognitiveservices.azure.com. Type: oauth2. Authorization URL: https://login.microsoftonline.com/common/oauth2/v2.0/authorize. Scope: https://cognitiveservices.azure.com/.default |
| api-key | True | string | Provide the Azure OpenAI API key here. |
Responses
Status Code: 200
Description: The request has succeeded.
| Content-Type | Type | Description |
|---|---|---|
| application/json | OpenAI.DeleteVectorStoreFileResponse |
Status Code: default
Description: An unexpected error response.
| Content-Type | Type | Description |
|---|---|---|
| application/json | AzureErrorResponse |
Components
AzureAIFoundryModelsApiVersion
| Property | Value |
|---|---|
| Type | string |
| Values | v1, preview |
AzureChatCompletionResponseMessage
The extended response model component for chat completion response messages on the Azure OpenAI service. This model adds support for chat message context, used by the On Your Data feature for intent, citations, and other information related to retrieval-augmented generation performed.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| annotations | array | Annotations for the message, when applicable, for example when using the web search tool. | No | |
| audio | object | If the audio output modality is requested, this object contains data about the audio response from the model. | No | |
| └─ data | string | Base64-encoded audio bytes generated by the model, in the format specified in the request. | No | |
| └─ expires_at | integer | The Unix timestamp (in seconds) for when this audio response will no longer be accessible on the server for use in multi-turn conversations. | No | |
| └─ id | string | Unique identifier for this audio response. | No | |
| └─ transcript | string | Transcript of the audio generated by the model. | No | |
| content | string | The contents of the message. | Yes | |
| context | object | An additional property, added to chat completion response messages, produced by the Azure OpenAI service when using extension behavior. This includes intent and citation information from the On Your Data feature. | No | |
| └─ all_retrieved_documents | object | Summary information about documents retrieved by the data retrieval operation. | No | |
| └─ chunk_id | string | The chunk ID for the citation. | No | |
| └─ content | string | The content of the citation. | No | |
| └─ data_source_index | integer | The index of the data source used for retrieval. | No | |
| └─ filepath | string | The file path for the citation. | No | |
| └─ filter_reason | enum | If applicable, an indication of why the document was filtered. Possible values: score, rerank | No | |
| └─ original_search_score | number | The original search score for the retrieval. | No | |
| └─ rerank_score | number | The rerank score for the retrieval. | No | |
| └─ search_queries | array | The search queries executed to retrieve documents. | No | |
| └─ title | string | The title for the citation. | No | |
| └─ url | string | The URL of the citation. | No | |
| └─ citations | array | The citations produced by the data retrieval. | No | |
| └─ intent | string | The detected intent from the chat history, which is used to carry conversation context between interactions. | No | |
| function_call | object | Deprecated and replaced by tool_calls. The name and arguments of a function that should be called, as generated by the model. | No | |
| └─ arguments | string | | No | |
| └─ name | string | | No | |
| reasoning_content | string | An Azure-specific extension property containing generated reasoning content from supported models. | No | |
| refusal | string | The refusal message generated by the model. | Yes | |
| role | enum | The role of the author of this message. Possible values: assistant | Yes | |
| tool_calls | ChatCompletionMessageToolCallsItem | The tool calls generated by the model, such as function calls. | No |
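When the On Your Data context property described above is present, citations and intent can be read defensively from the response message; a sketch, with placeholder values shaped like the table above:

```python
def extract_citations(message):
    """Return (citations, intent) from a chat completion response message,
    tolerating messages that carry no On Your Data context."""
    context = message.get("context") or {}
    return context.get("citations", []), context.get("intent")

# Example message dict; the values are illustrative placeholders.
message = {
    "role": "assistant",
    "content": "See the cited document.",
    "context": {
        "intent": '["product pricing"]',
        "citations": [{"title": "Pricing guide", "url": "https://example.com/pricing"}],
    },
}
citations, intent = extract_citations(message)
```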
AzureChatCompletionStreamResponseDelta
The extended response model for a streaming chat response message on the Azure OpenAI service. This model adds support for chat message context, used by the On Your Data feature for intent, citations, and other information related to retrieval-augmented generation performed.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| audio | object | | No | |
| └─ data | string | | No | |
| └─ expires_at | integer | | No | |
| └─ id | string | | No | |
| └─ transcript | string | | No | |
| content | string | The contents of the chunk message. | No | |
| context | object | An additional property, added to chat completion response messages, produced by the Azure OpenAI service when using extension behavior. This includes intent and citation information from the On Your Data feature. | No | |
| └─ all_retrieved_documents | object | Summary information about documents retrieved by the data retrieval operation. | No | |
| └─ chunk_id | string | The chunk ID for the citation. | No | |
| └─ content | string | The content of the citation. | No | |
| └─ data_source_index | integer | The index of the data source used for retrieval. | No | |
| └─ filepath | string | The file path for the citation. | No | |
| └─ filter_reason | enum | If applicable, an indication of why the document was filtered. Possible values: score, rerank | No | |
| └─ original_search_score | number | The original search score for the retrieval. | No | |
| └─ rerank_score | number | The rerank score for the retrieval. | No | |
| └─ search_queries | array | The search queries executed to retrieve documents. | No | |
| └─ title | string | The title for the citation. | No | |
| └─ url | string | The URL of the citation. | No | |
| └─ citations | array | The citations produced by the data retrieval. | No | |
| └─ intent | string | The detected intent from the chat history, which is used to carry conversation context between interactions | No | |
| function_call | object | Deprecated and replaced by tool_calls. The name and arguments of a function that should be called, as generated by the model. | No | |
| └─ arguments | string | No | ||
| └─ name | string | No | ||
| reasoning_content | string | An Azure-specific extension property containing generated reasoning content from supported models. | No | |
| refusal | string | The refusal message generated by the model. | No | |
| role | object | The role of the author of a message | No | |
| tool_calls | array | No |
AzureChatDataSource
A representation of configuration data for a single Azure OpenAI chat data source. This will be used by a chat completions request that should use Azure OpenAI chat extensions to augment the response behavior. The use of this configuration is compatible only with Azure OpenAI.
Discriminator for AzureChatDataSource
This component uses the property type to discriminate between different types:
| Type Value | Schema |
|---|---|
| azure_search | AzureSearchChatDataSource |
| azure_cosmos_db | AzureCosmosDBChatDataSource |
| elasticsearch | ElasticsearchChatDataSource |
| pinecone | PineconeChatDataSource |
| mongo_db | MongoDBChatDataSource |
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| type | object | Yes |
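In a request, each entry in the `data_sources` array carries this discriminator, with `type` selecting the concrete schema. A minimal sketch in Python, assuming an `azure_search` source; the endpoint, index name, and parameter shape here are illustrative placeholders, not a complete AzureSearchChatDataSource definition:

```python
# Hypothetical data_sources entry: the "type" discriminator selects the
# AzureSearchChatDataSource schema; endpoint and index_name are placeholders.
data_source = {
    "type": "azure_search",
    "parameters": {
        "endpoint": "https://example-search.search.windows.net",
        "index_name": "example-index",
        "authentication": {"type": "system_assigned_managed_identity"},
    },
}

# The entry is carried in the request body's data_sources array.
request_body = {"data_sources": [data_source]}
```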
AzureChatDataSourceAccessTokenAuthenticationOptions
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| access_token | string | Yes | ||
| type | enum | Possible values: access_token | Yes | |
AzureChatDataSourceApiKeyAuthenticationOptions
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| key | string | Yes | ||
| type | enum | Possible values: api_key | Yes | |
AzureChatDataSourceAuthenticationOptions
Discriminator for AzureChatDataSourceAuthenticationOptions
This component uses the property type to discriminate between different types:
| Type Value | Schema |
|---|---|
| system_assigned_managed_identity | AzureChatDataSourceSystemAssignedManagedIdentityAuthenticationOptions |
| user_assigned_managed_identity | AzureChatDataSourceUserAssignedManagedIdentityAuthenticationOptions |
| access_token | AzureChatDataSourceAccessTokenAuthenticationOptions |
| connection_string | AzureChatDataSourceConnectionStringAuthenticationOptions |
| key_and_key_id | AzureChatDataSourceKeyAndKeyIdAuthenticationOptions |
| encoded_api_key | AzureChatDataSourceEncodedApiKeyAuthenticationOptions |
| username_and_password | AzureChatDataSourceUsernameAndPasswordAuthenticationOptions |
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| type | AzureChatDataSourceAuthenticationOptionsType | Yes |
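A client can validate the `type` discriminator on an authentication options object against the values listed for AzureChatDataSourceAuthenticationOptionsType before sending a request. A sketch, with a hypothetical helper name (`check_auth_options` is not part of any SDK):

```python
# The valid discriminator values from AzureChatDataSourceAuthenticationOptionsType.
VALID_AUTH_TYPES = {
    "api_key", "username_and_password", "connection_string",
    "key_and_key_id", "encoded_api_key", "access_token",
    "system_assigned_managed_identity", "user_assigned_managed_identity",
}

def check_auth_options(options: dict) -> dict:
    """Raise if the discriminator is missing or unknown; return options otherwise."""
    if options.get("type") not in VALID_AUTH_TYPES:
        raise ValueError(f"unknown authentication type: {options.get('type')!r}")
    return options

auth = check_auth_options({"type": "api_key", "key": "<your-key>"})
```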
AzureChatDataSourceAuthenticationOptionsType
| Property | Value |
|---|---|
| Type | string |
| Values | api_key, username_and_password, connection_string, key_and_key_id, encoded_api_key, access_token, system_assigned_managed_identity, user_assigned_managed_identity |
AzureChatDataSourceConnectionStringAuthenticationOptions
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| connection_string | string | Yes | ||
| type | enum | Possible values: connection_string |
Yes |
AzureChatDataSourceDeploymentNameVectorizationSource
Represents a vectorization source that makes internal service calls against an Azure OpenAI embedding model deployment. In contrast with the endpoint-based vectorization source, a deployment-name-based vectorization source must be part of the same Azure OpenAI resource but can be used even in private networks.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| deployment_name | string | The embedding model deployment to use for vectorization. This deployment must exist within the same Azure OpenAI resource as the model deployment being used for chat completions. | Yes | |
| dimensions | integer | The number of dimensions to request on embeddings. Only supported in 'text-embedding-3' and later models. | No | |
| type | enum | The type identifier, always 'deployment_name' for this vectorization source type. Possible values: deployment_name | Yes | |
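A deployment-name vectorization source is typically passed as a data source's `embedding_dependency`. A minimal sketch; "embedding-model" is a placeholder deployment name:

```python
# Hypothetical embedding_dependency using the deployment-name source.
# dimensions is optional and only honored by text-embedding-3 and later models.
embedding_dependency = {
    "type": "deployment_name",
    "deployment_name": "embedding-model",
    "dimensions": 1536,
}
```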
AzureChatDataSourceEncodedApiKeyAuthenticationOptions
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| encoded_api_key | string | Yes | ||
| type | enum | Possible values: encoded_api_key | Yes | |
AzureChatDataSourceEndpointVectorizationSource
Represents a vectorization source that makes public service calls against an Azure OpenAI embedding model deployment.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| authentication | object | Yes | ||
| └─ access_token | string | No | ||
| └─ key | string | No | ||
| └─ type | enum | Possible values: access_token | No | |
| dimensions | integer | The number of dimensions to request on embeddings. Only supported in 'text-embedding-3' and later models. | No | |
| endpoint | string | Specifies the resource endpoint URL from which embeddings should be retrieved. It should be in the format of: https://YOUR_RESOURCE_NAME.openai.azure.com/openai/deployments/YOUR_DEPLOYMENT_NAME/embeddings. The api-version query parameter is not allowed. | Yes | |
| type | enum | The type identifier, always 'endpoint' for this vectorization source type. Possible values: endpoint | Yes | |
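The endpoint string must be the deployment's embeddings URL with no api-version query parameter. A sketch that builds it in the documented format; resource and deployment names are placeholders:

```python
# Placeholder names; substitute your own resource and deployment.
resource = "YOUR_RESOURCE_NAME"
deployment = "YOUR_DEPLOYMENT_NAME"

# Hypothetical endpoint-based vectorization source. Note there is no
# ?api-version=... suffix on the endpoint, per the schema above.
embedding_dependency = {
    "type": "endpoint",
    "endpoint": (
        f"https://{resource}.openai.azure.com"
        f"/openai/deployments/{deployment}/embeddings"
    ),
    "authentication": {"type": "api_key", "key": "<your-embeddings-key>"},
}
```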
AzureChatDataSourceIntegratedVectorizationSource
Represents an integrated vectorization source as defined within the supporting search resource.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| type | enum | The type identifier, always 'integrated' for this vectorization source type. Possible values: integrated | Yes | |
AzureChatDataSourceKeyAndKeyIdAuthenticationOptions
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| key | string | Yes | ||
| key_id | string | Yes | ||
| type | enum | Possible values: key_and_key_id | Yes | |
AzureChatDataSourceModelIdVectorizationSource
Represents a vectorization source that makes service calls based on a search service model ID. This source type is currently only supported by Elasticsearch.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| model_id | string | The embedding model build ID to use for vectorization. | Yes | |
| type | enum | The type identifier, always 'model_id' for this vectorization source type. Possible values: model_id | Yes | |
AzureChatDataSourceSystemAssignedManagedIdentityAuthenticationOptions
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| type | enum | Possible values: system_assigned_managed_identity | Yes | |
AzureChatDataSourceType
| Property | Value |
|---|---|
| Type | string |
| Values | azure_search, azure_cosmos_db, elasticsearch, pinecone, mongo_db |
AzureChatDataSourceUserAssignedManagedIdentityAuthenticationOptions
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| managed_identity_resource_id | string | Yes | ||
| type | enum | Possible values: user_assigned_managed_identity | Yes | |
AzureChatDataSourceUsernameAndPasswordAuthenticationOptions
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| password | string | Yes | ||
| type | enum | Possible values: username_and_password | Yes | |
| username | string | Yes |
AzureChatDataSourceVectorizationSource
A representation of a data vectorization source usable as an embedding resource with a data source.
Discriminator for AzureChatDataSourceVectorizationSource
This component uses the property type to discriminate between different types:
| Type Value | Schema |
|---|---|
| deployment_name | AzureChatDataSourceDeploymentNameVectorizationSource |
| integrated | AzureChatDataSourceIntegratedVectorizationSource |
| model_id | AzureChatDataSourceModelIdVectorizationSource |
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| type | object | Yes |
AzureChatDataSourceVectorizationSourceType
| Property | Value |
|---|---|
| Type | string |
| Values | endpoint, deployment_name, model_id, integrated |
AzureChatMessageContext
An additional property, added to chat completion response messages, produced by the Azure OpenAI service when using extension behavior. This includes intent and citation information from the On Your Data feature.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| all_retrieved_documents | object | Summary information about documents retrieved by the data retrieval operation. | No | |
| └─ chunk_id | string | The chunk ID for the citation. | No | |
| └─ content | string | The content of the citation. | No | |
| └─ data_source_index | integer | The index of the data source used for retrieval. | No | |
| └─ filepath | string | The file path for the citation. | No | |
| └─ filter_reason | enum | If applicable, an indication of why the document was filtered. Possible values: score, rerank | No | |
| └─ original_search_score | number | The original search score for the retrieval. | No | |
| └─ rerank_score | number | The rerank score for the retrieval. | No | |
| └─ search_queries | array | The search queries executed to retrieve documents. | No | |
| └─ title | string | The title for the citation. | No | |
| └─ url | string | The URL of the citation. | No | |
| citations | array | The citations produced by the data retrieval. | No | |
| intent | string | The detected intent from the chat history, which is used to carry conversation context between interactions. | No |
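When the On Your Data feature is used, the context object appears on the response message. A sketch of reading intent and citations from it; the `message` dict below is a hand-built stand-in, not an actual service response:

```python
# Stand-in for an assistant message returned with On Your Data enabled.
message = {
    "role": "assistant",
    "content": "The return window is 30 days [doc1].",
    "context": {
        "intent": "return policy duration",
        "citations": [
            {
                "title": "Returns FAQ",
                "content": "Returns are accepted within 30 days.",
                "url": "https://example.com/returns",
            }
        ],
    },
}

# context is optional, so read it defensively.
context = message.get("context", {})
intent = context.get("intent")
citation_titles = [c.get("title") for c in context.get("citations", [])]
```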
AzureContentFilterBlocklistResult
A collection of true/false filtering results for configured custom blocklists.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| details | array | The pairs of individual blocklist IDs and whether they resulted in a filtering action. | No | |
| filtered | boolean | A value indicating whether any of the detailed blocklists resulted in a filtering action. | Yes |
AzureContentFilterCompletionTextSpan
A representation of a span of completion text as used by Azure OpenAI content filter results.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| completion_end_offset | integer | Offset of the first UTF32 code point which is excluded from the span. This field is always equal to completion_start_offset for empty spans. This field is always larger than completion_start_offset for non-empty spans. | Yes | |
| completion_start_offset | integer | Offset of the UTF32 code point which begins the span. | Yes |
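Because the offsets count UTF-32 code points and Python's `str` indexes by code point, a plain slice recovers the span text; the same arithmetic would need adjustment in languages with UTF-16 strings. A sketch with hand-picked offsets:

```python
# Recover the text covered by a completion text span. Offsets are in
# UTF-32 code points; Python str indexing is code-point based, so a
# slice [start:end) matches the span definition (end is exclusive).
completion = "The café is open 24/7."
span = {"completion_start_offset": 4, "completion_end_offset": 8}

flagged = completion[span["completion_start_offset"]:span["completion_end_offset"]]
```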
AzureContentFilterCompletionTextSpanDetectionResult
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| details | array | Detailed information about the detected completion text spans. | Yes | |
| detected | boolean | Whether the labeled content category was detected in the content. | Yes | |
| filtered | boolean | Whether the content detection resulted in a content filtering action. | Yes |
AzureContentFilterCustomTopicResult
A collection of true/false filtering results for configured custom topics.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| details | array | The pairs of individual topic IDs and whether they are detected. | No | |
| filtered | boolean | A value indicating whether any of the detailed topics resulted in a filtering action. | Yes |
AzureContentFilterDetectionResult
A labeled content filter result item that indicates whether the content was detected and whether the content was filtered.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| detected | boolean | Whether the labeled content category was detected in the content. | Yes | |
| filtered | boolean | Whether the content detection resulted in a content filtering action. | Yes |
AzureContentFilterPersonallyIdentifiableInformationResult
A content filter detection result for Personally Identifiable Information that includes harm extensions.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| redacted_text | string | The redacted text with PII information removed or masked. | No | |
| sub_categories | array | Detailed results for individual PIIHarmSubCategory(s). | No |
AzureContentFilterResultForChoice
A content filter result for a single response item produced by a generative AI system.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| custom_blocklists | object | A collection of true/false filtering results for configured custom blocklists. | No | |
| └─ details | array | The pairs of individual blocklist IDs and whether they resulted in a filtering action. | No | |
| └─ filtered | boolean | A value indicating whether any of the detailed blocklists resulted in a filtering action. | No | |
| custom_topics | object | A collection of true/false filtering results for configured custom topics. | No | |
| └─ details | array | The pairs of individual topic IDs and whether they are detected. | No | |
| └─ filtered | boolean | A value indicating whether any of the detailed topics resulted in a filtering action. | No | |
| error | object | If present, details about an error that prevented content filtering from completing its evaluation. | No | |
| └─ code | integer | A distinct, machine-readable code associated with the error. | No | |
| └─ message | string | A human-readable message associated with the error. | No | |
| hate | object | A labeled content filter result item that indicates whether the content was filtered and what the qualitative severity level of the content was, as evaluated against content filter configuration for the category. | No | |
| └─ filtered | boolean | Whether the content severity resulted in a content filtering action. | No | |
| └─ severity | enum | The labeled severity of the content. Possible values: safe, low, medium, high | No | |
| personally_identifiable_information | object | A content filter detection result for Personally Identifiable Information that includes harm extensions. | No | |
| └─ redacted_text | string | The redacted text with PII information removed or masked. | No | |
| └─ sub_categories | array | Detailed results for individual PIIHarmSubCategory(s). | No | |
| profanity | object | A labeled content filter result item that indicates whether the content was detected and whether the content was filtered. | No | |
| └─ detected | boolean | Whether the labeled content category was detected in the content. | No | |
| └─ filtered | boolean | Whether the content detection resulted in a content filtering action. | No | |
| protected_material_code | object | A detection result that describes a match against licensed code or other protected source material. | No | |
| └─ citation | object | If available, the citation details describing the associated license and its ___location. | No | |
| └─ URL | string | The URL associated with the license. | No | |
| └─ license | string | The name or identifier of the license associated with the detection. | No | |
| └─ detected | boolean | Whether the labeled content category was detected in the content. | No | |
| └─ filtered | boolean | Whether the content detection resulted in a content filtering action. | No | |
| protected_material_text | object | A labeled content filter result item that indicates whether the content was detected and whether the content was filtered. | No | |
| └─ detected | boolean | Whether the labeled content category was detected in the content. | No | |
| └─ filtered | boolean | Whether the content detection resulted in a content filtering action. | No | |
| self_harm | object | A labeled content filter result item that indicates whether the content was filtered and what the qualitative severity level of the content was, as evaluated against content filter configuration for the category. | No | |
| └─ filtered | boolean | Whether the content severity resulted in a content filtering action. | No | |
| └─ severity | enum | The labeled severity of the content. Possible values: safe, low, medium, high | No | |
| sexual | object | A labeled content filter result item that indicates whether the content was filtered and what the qualitative severity level of the content was, as evaluated against content filter configuration for the category. | No | |
| └─ filtered | boolean | Whether the content severity resulted in a content filtering action. | No | |
| └─ severity | enum | The labeled severity of the content. Possible values: safe, low, medium, high | No | |
| ungrounded_material | AzureContentFilterCompletionTextSpanDetectionResult | No | ||
| violence | object | A labeled content filter result item that indicates whether the content was filtered and what the qualitative severity level of the content was, as evaluated against content filter configuration for the category. | No | |
| └─ filtered | boolean | Whether the content severity resulted in a content filtering action. | No | |
| └─ severity | enum | The labeled severity of the content. Possible values: safe, low, medium, high | No | |
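The four severity categories (hate, self_harm, sexual, violence) share the filtered/severity shape, so a client can summarize them uniformly. A sketch over a hand-built stand-in result:

```python
# Stand-in for a choice's content_filter_results; real responses may also
# carry detection categories (profanity, protected_material_*, etc.).
content_filter_results = {
    "hate": {"filtered": False, "severity": "safe"},
    "self_harm": {"filtered": False, "severity": "safe"},
    "sexual": {"filtered": False, "severity": "safe"},
    "violence": {"filtered": True, "severity": "medium"},
}

SEVERITY_CATEGORIES = ("hate", "self_harm", "sexual", "violence")

# Collect the categories whose severity triggered a filtering action.
filtered_categories = [
    name for name in SEVERITY_CATEGORIES
    if content_filter_results.get(name, {}).get("filtered")
]
```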
AzureContentFilterResultForPrompt
A content filter result associated with a single input prompt item into a generative AI system.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| content_filter_results | object | The content filter category details for the result. | No | |
| └─ custom_blocklists | object | A collection of true/false filtering results for configured custom blocklists. | No | |
| └─ details | array | The pairs of individual blocklist IDs and whether they resulted in a filtering action. | No | |
| └─ filtered | boolean | A value indicating whether any of the detailed blocklists resulted in a filtering action. | No | |
| └─ custom_topics | object | A collection of true/false filtering results for configured custom topics. | No | |
| └─ details | array | The pairs of individual topic IDs and whether they are detected. | No | |
| └─ filtered | boolean | A value indicating whether any of the detailed topics resulted in a filtering action. | No | |
| └─ error | object | If present, details about an error that prevented content filtering from completing its evaluation. | No | |
| └─ code | integer | A distinct, machine-readable code associated with the error. | No | |
| └─ message | string | A human-readable message associated with the error. | No | |
| └─ hate | object | A labeled content filter result item that indicates whether the content was filtered and what the qualitative severity level of the content was, as evaluated against content filter configuration for the category. | No | |
| └─ filtered | boolean | Whether the content severity resulted in a content filtering action. | No | |
| └─ severity | enum | The labeled severity of the content. Possible values: safe, low, medium, high | No | |
| └─ indirect_attack | object | A labeled content filter result item that indicates whether the content was detected and whether the content was filtered. | No | |
| └─ detected | boolean | Whether the labeled content category was detected in the content. | No | |
| └─ filtered | boolean | Whether the content detection resulted in a content filtering action. | No | |
| └─ jailbreak | object | A labeled content filter result item that indicates whether the content was detected and whether the content was filtered. | No | |
| └─ detected | boolean | Whether the labeled content category was detected in the content. | No | |
| └─ filtered | boolean | Whether the content detection resulted in a content filtering action. | No | |
| └─ profanity | object | A labeled content filter result item that indicates whether the content was detected and whether the content was filtered. | No | |
| └─ detected | boolean | Whether the labeled content category was detected in the content. | No | |
| └─ filtered | boolean | Whether the content detection resulted in a content filtering action. | No | |
| └─ self_harm | object | A labeled content filter result item that indicates whether the content was filtered and what the qualitative severity level of the content was, as evaluated against content filter configuration for the category. | No | |
| └─ filtered | boolean | Whether the content severity resulted in a content filtering action. | No | |
| └─ severity | enum | The labeled severity of the content. Possible values: safe, low, medium, high | No | |
| └─ sexual | object | A labeled content filter result item that indicates whether the content was filtered and what the qualitative severity level of the content was, as evaluated against content filter configuration for the category. | No | |
| └─ filtered | boolean | Whether the content severity resulted in a content filtering action. | No | |
| └─ severity | enum | The labeled severity of the content. Possible values: safe, low, medium, high | No | |
| └─ violence | object | A labeled content filter result item that indicates whether the content was filtered and what the qualitative severity level of the content was, as evaluated against content filter configuration for the category. | No | |
| └─ filtered | boolean | Whether the content severity resulted in a content filtering action. | No | |
| └─ severity | enum | The labeled severity of the content. Possible values: safe, low, medium, high | No | |
| prompt_index | integer | The index of the input prompt associated with the accompanying content filter result categories. | No |
AzureContentFilterSeverityResult
A labeled content filter result item that indicates whether the content was filtered and what the qualitative severity level of the content was, as evaluated against content filter configuration for the category.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| filtered | boolean | Whether the content severity resulted in a content filtering action. | Yes | |
| severity | enum | The labeled severity of the content. Possible values: safe, low, medium, high | Yes | |
AzureCosmosDBChatDataSource
Represents a data source configuration that will use an Azure CosmosDB resource.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| parameters | object | The parameter information to control the use of the Azure CosmosDB data source. | Yes | |
| └─ allow_partial_result | boolean | If set to true, the system will allow partial search results to be used and the request will fail if all partial queries fail. If not specified or specified as false, the request will fail if any search query fails. | No | False |
| └─ authentication | AzureChatDataSourceConnectionStringAuthenticationOptions | No | ||
| └─ container_name | string | No | ||
| └─ database_name | string | No | ||
| └─ embedding_dependency | AzureChatDataSourceVectorizationSource | A representation of a data vectorization source usable as an embedding resource with a data source. | No | |
| └─ fields_mapping | object | No | ||
| └─ content_fields | array | No | ||
| └─ content_fields_separator | string | No | ||
| └─ filepath_field | string | No | ||
| └─ title_field | string | No | ||
| └─ url_field | string | No | ||
| └─ vector_fields | array | No | ||
| └─ in_scope | boolean | Whether queries should be restricted to use of the indexed data. | No | |
| └─ include_contexts | array | The output context properties to include on the response. By default, citations and intent will be requested. | No | ['citations', 'intent'] |
| └─ index_name | string | No | ||
| └─ max_search_queries | integer | The maximum number of rewritten queries that should be sent to the search provider for a single user message. By default, the system will make an automatic determination. | No | |
| └─ strictness | integer | The configured strictness of the search relevance filtering. Higher strictness will increase precision but lower recall of the answer. | No | |
| └─ top_n_documents | integer | The configured number of documents to feature in the query. | No | |
| type | enum | The discriminated type identifier, which is always 'azure_cosmos_db'. Possible values: azure_cosmos_db | Yes | |
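Tying the parameters above together, a minimal sketch of an Azure Cosmos DB data source entry. Database, container, index, and field names are placeholders, and a real connection string should come from configuration rather than source code:

```python
# Hypothetical azure_cosmos_db data source; all names are placeholders.
cosmos_data_source = {
    "type": "azure_cosmos_db",
    "parameters": {
        "authentication": {
            "type": "connection_string",
            "connection_string": "<your-cosmos-connection-string>",
        },
        "database_name": "example-db",
        "container_name": "example-container",
        "index_name": "example-index",
        # Vectorization source used to embed queries (see
        # AzureChatDataSourceVectorizationSource above).
        "embedding_dependency": {
            "type": "deployment_name",
            "deployment_name": "embedding-model",
        },
        "fields_mapping": {
            "content_fields": ["text"],
            "vector_fields": ["embedding"],
        },
    },
}
```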
AzureCreateChatCompletionRequest
The extended request model for chat completions against the Azure OpenAI service. This adds the ability to provide data sources for the On Your Data feature.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| audio | object | Parameters for audio output. Required when audio output is requested with modalities: ["audio"]. | No | |
| └─ format | enum | Specifies the output audio format. Must be one of wav, mp3, flac, opus, or pcm16. Possible values: wav, aac, mp3, flac, opus, pcm16 | No | |
| └─ voice | object | No | ||
| data_sources | array | The data sources to use for the On Your Data feature, exclusive to Azure OpenAI. | No | |
| frequency_penalty | number | Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim. | No | 0 |
| function_call | enum | Specifying a particular function via {"name": "my_function"} forces the model to call that function. Possible values: none, auto | No | |
| functions | array | Deprecated in favor of tools. A list of functions the model may generate JSON inputs for. | No | |
| logit_bias | object | Modify the likelihood of specified tokens appearing in the completion. Accepts a JSON object that maps tokens (specified by their token ID in the tokenizer) to an associated bias value from -100 to 100. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token. | No | None |
| logprobs | boolean | Whether to return log probabilities of the output tokens or not. If true, returns the log probabilities of each output token returned in the content of message. | No | False |
| max_completion_tokens | integer | An upper bound for the number of tokens that can be generated for a completion, including visible output tokens and reasoning tokens. | No | |
| max_tokens | integer | The maximum number of tokens that can be generated in the chat completion. This value can be used to control costs for text generated via API. This value is now deprecated in favor of max_completion_tokens, and is not compatible with o1 series models. | No | |
| messages | array | A list of messages comprising the conversation so far. Depending on the model you use, different message types (modalities) are supported, like text, images, and audio. | Yes | |
| metadata | object | Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard. Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters. | No | |
| modalities | object | Output types that you would like the model to generate. Most models are capable of generating text, which is the default: ["text"]. The gpt-4o-audio-preview model can also be used to generate audio. To request that this model generate both text and audio responses, you can use: ["text", "audio"] | No | |
| model | string | The model deployment identifier to use for the chat completion request. | Yes | |
| n | integer | How many chat completion choices to generate for each input message. Note that you will be charged based on the number of generated tokens across all of the choices. Keep n as 1 to minimize costs. | No | 1 |
| parallel_tool_calls | object | Whether to enable parallel function calling during tool use. | No | |
| prediction | object | Base representation of predicted output from a model. | No | |
| └─ type | OpenAI.ChatOutputPredictionType | No | ||
| presence_penalty | number | Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics. | No | 0 |
| reasoning_effort | object | Reasoning models only. Constrains effort on reasoning for reasoning models. Currently supported values are low, medium, and high. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response. | No | |
| response_format | object | No | ||
| └─ type | enum | Possible values: text, json_object, json_schema | No | |
| seed | integer | This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result. Determinism is not guaranteed, and you should refer to the system_fingerprint response parameter to monitor changes in the backend. | No | |
| stop | object | Not supported with latest reasoning models o3 and o4-mini. Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence. | No | |
| store | boolean | Whether or not to store the output of this chat completion request for use in model distillation or evals products. | No | False |
| stream | boolean | If set to true, the model response data will be streamed to the client as it is generated using server-sent events. | No | False |
| stream_options | object | Options for streaming response. Only set this when you set stream: true. | No | |
| └─ include_usage | boolean | If set, an additional chunk will be streamed before the data: [DONE] message. The usage field on this chunk shows the token usage statistics for the entire request, and the choices field will always be an empty array. All other chunks will also include a usage field, but with a null value. NOTE: If the stream is interrupted, you may not receive the final usage chunk which contains the total token usage for the request. | No | |
| temperature | number | What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both. | No | 1 |
| tool_choice | OpenAI.ChatCompletionToolChoiceOption | Controls which (if any) tool is called by the model. none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools. Specifying a particular tool via {"type": "function", "function": {"name": "my_function"}} forces the model to call that tool. none is the default when no tools are present. auto is the default if tools are present. | No | |
| tools | array | A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. A max of 128 functions are supported. | No | |
| top_logprobs | integer | An integer between 0 and 20 specifying the number of most likely tokens to return at each token position, each with an associated log probability. | No | |
| top_p | number | An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or temperature but not both. | No | 1 |
| user | string | A unique identifier representing your end-user, which can help to monitor and detect abuse. | No | |
| user_security_context | AzureUserSecurityContext | User security context contains several parameters that describe the application itself, and the end user that interacts with the application. These fields assist your security operations teams to investigate and mitigate security incidents by providing a comprehensive approach to protecting your AI applications. Learn more about protecting AI applications using Microsoft Defender for Cloud. | No |
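Combining the required fields (model, messages) with an optional On Your Data source, a sketch of a complete request body for POST {endpoint}/openai/v1/chat/completions. The deployment name, search endpoint, and index are placeholders, not working values:

```python
import json

# Hypothetical request body; "gpt-4o" stands in for your model deployment name.
payload = {
    "model": "gpt-4o",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What does the returns policy say?"},
    ],
    "temperature": 0.2,
    "max_completion_tokens": 512,
    # Optional: an On Your Data source (exclusive to Azure OpenAI).
    "data_sources": [
        {
            "type": "azure_search",
            "parameters": {
                "endpoint": "https://example-search.search.windows.net",
                "index_name": "example-index",
                "authentication": {"type": "system_assigned_managed_identity"},
            },
        }
    ],
}

# Serialized body, sent with Content-Type: application/json.
body = json.dumps(payload)
```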
AzureCreateChatCompletionResponse
The extended top-level chat completion response model for the Azure OpenAI service. This model adds Responsible AI content filter annotations for prompt input.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| choices | array | Yes | ||
| created | integer | The Unix timestamp (in seconds) of when the chat completion was created. | Yes | |
| id | string | A unique identifier for the chat completion. | Yes | |
| model | string | The model used for the chat completion. | Yes | |
| object | enum | The object type, which is always chat.completion.Possible values: chat.completion |
Yes | |
| prompt_filter_results | array | The Responsible AI content filter annotations associated with prompt inputs into chat completions. | No | |
| system_fingerprint | string | This fingerprint represents the backend configuration that the model runs with. Can be used in conjunction with the seed request parameter to understand when backend changes have been made that might impact determinism. |
No | |
| usage | OpenAI.CompletionUsage | Usage statistics for the completion request. | No |
AzureCreateChatCompletionStreamResponse
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| choices | array | A list of chat completion choices. Can contain more than one element if n is greater than 1. Can also be empty for the last chunk if you set stream_options: {"include_usage": true}. |
Yes | |
| content_filter_results | AzureContentFilterResultForChoice | A content filter result for a single response item produced by a generative AI system. | No | |
| created | integer | The Unix timestamp (in seconds) of when the chat completion was created. Each chunk has the same timestamp. | Yes | |
| delta | AzureChatCompletionStreamResponseDelta | The extended response model for a streaming chat response message on the Azure OpenAI service. This model adds support for chat message context, used by the On Your Data feature for intent, citations, and other information related to retrieval-augmented generation performed. |
No | |
| id | string | A unique identifier for the chat completion. Each chunk has the same ID. | Yes | |
| model | string | The model used to generate the completion. | Yes | |
| object | enum | The object type, which is always chat.completion.chunk.Possible values: chat.completion.chunk |
Yes | |
| system_fingerprint | string | This fingerprint represents the backend configuration that the model runs with. Can be used in conjunction with the seed request parameter to understand when backend changes have been made that might impact determinism. |
No | |
| usage | object | Usage statistics for the completion request. | No | |
| └─ completion_tokens | integer | Number of tokens in the generated completion. | No | 0 |
| └─ completion_tokens_details | object | Breakdown of tokens used in a completion. | No | |
| └─ accepted_prediction_tokens | integer | When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion. |
No | 0 |
| └─ audio_tokens | integer | Audio input tokens generated by the model. | No | 0 |
| └─ reasoning_tokens | integer | Tokens generated by the model for reasoning. | No | 0 |
| └─ rejected_prediction_tokens | integer | When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits. |
No | 0 |
| └─ prompt_tokens | integer | Number of tokens in the prompt. | No | 0 |
| └─ prompt_tokens_details | object | Breakdown of tokens used in the prompt. | No | |
| └─ audio_tokens | integer | Audio input tokens present in the prompt. | No | 0 |
| └─ cached_tokens | integer | Cached tokens present in the prompt. | No | 0 |
| └─ total_tokens | integer | Total number of tokens used in the request (prompt + completion). | No | 0 |
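As a sanity check on the usage breakdown above: `total_tokens` is the sum of `prompt_tokens` and `completion_tokens`, and rejected prediction tokens still count toward `completion_tokens` for billing. A sketch with illustrative numbers:

```python
# Illustrative usage payload matching the breakdown above; numbers are made up.
usage = {
    "prompt_tokens": 20,
    "completion_tokens": 100,
    "total_tokens": 120,
    "completion_tokens_details": {
        "accepted_prediction_tokens": 0,
        "audio_tokens": 0,
        "reasoning_tokens": 64,
        "rejected_prediction_tokens": 8,  # still billed within completion_tokens
    },
    "prompt_tokens_details": {"audio_tokens": 0, "cached_tokens": 16},
}

def check_usage(u: dict) -> bool:
    """total_tokens should equal prompt_tokens + completion_tokens."""
    return u["total_tokens"] == u["prompt_tokens"] + u["completion_tokens"]
```

When streaming, this object arrives only on the final chunk, and only if `stream_options: {"include_usage": true}` was set on the request.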
AzureCreateEmbeddingRequest
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| dimensions | integer | The number of dimensions the resulting output embeddings should have. Only supported in text-embedding-3 and later models. |
No | |
| encoding_format | enum | The format to return the embeddings in. Can be either float or base64.Possible values: float, base64 |
No | |
| input | string or array | Yes | ||
| model | string | The model to use for the embedding request. | Yes | |
| user | string | A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse. | No |
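A minimal embedding request body, sketched as a Python dict. The deployment name is a hypothetical placeholder; `dimensions` applies only to text-embedding-3 and later models.

```python
# Minimal AzureCreateEmbeddingRequest body; the deployment name is hypothetical.
embedding_request = {
    "model": "my-text-embedding-3-large",  # hypothetical deployment name
    "input": ["The quick brown fox"],      # a string or an array of strings
    "encoding_format": "float",            # or "base64"
    "dimensions": 256,                     # text-embedding-3 and later only
}
```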
AzureCreateFileRequestMultiPart
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| expires_after | object | Yes | ||
| └─ anchor | AzureFileExpiryAnchor | No | ||
| └─ seconds | integer | No | ||
| file | string | Yes | ||
| purpose | enum | The intended purpose of the uploaded file. One of: - assistants: Used in the Assistants API - batch: Used in the Batch API - fine-tune: Used for fine-tuning - evals: Used for eval data setsPossible values: assistants, batch, fine-tune, evals |
Yes |
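The `expires_after` object above pins a file's expiry relative to its `created_at` anchor. A small sketch of how the resulting expiry timestamp can be derived (the one-hour policy is an illustrative value):

```python
# Sketch of the expires_after object: the file expires `seconds` after
# the created_at anchor. 3600 seconds is an illustrative choice.
expires_after = {"anchor": "created_at", "seconds": 3600}

def expires_at(created_at: int, policy: dict) -> int:
    """Compute the expiry Unix timestamp for a created_at anchor."""
    assert policy["anchor"] == "created_at"
    return created_at + policy["seconds"]
```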
AzureCreateResponse
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| background | boolean | Whether to run the model response in the background. Learn more. |
No | False |
| include | array | Specify additional output data to include in the model response. Currently supported values are: - code_interpreter_call.outputs: Includes the outputs of Python code execution in code interpreter tool call items. - computer_call_output.output.image_url: Include image URLs from the computer call output. - file_search_call.results: Include the search results of the file search tool call. - message.input_image.image_url: Include image URLs from the input message. - message.output_text.logprobs: Include logprobs with assistant messages. - reasoning.encrypted_content: Includes an encrypted version of reasoning tokens in reasoning item outputs. This enables reasoning items to be used in multi-turn conversations when using the Responses API statelessly (for example, when the store parameter is set to false, or when an organization is enrolled in the zero data retention program). |
No | |
| input | string or array | No | ||
| instructions | string | A system (or developer) message inserted into the model's context. When used along with previous_response_id, the instructions from a previous response will not be carried over to the next response. This makes it simple to swap out system (or developer) messages in new responses. |
No | |
| max_output_tokens | integer | An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens. | No | |
| max_tool_calls | integer | The maximum number of total calls to built-in tools that can be processed in a response. This maximum number applies across all built-in tool calls, not per individual tool. Any further attempts to call a tool by the model will be ignored. | No | |
| metadata | object | Set of up to 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard. Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters. |
No | |
| model | string | The model deployment to use for the creation of this response. | Yes | |
| parallel_tool_calls | boolean | Whether to allow the model to run tool calls in parallel. | No | True |
| previous_response_id | string | The unique ID of the previous response to the model. Use this to create multi-turn conversations. |
No | |
| prompt | object | Reference to a prompt template and its variables. |
No | |
| └─ id | string | The unique identifier of the prompt template to use. | No | |
| └─ variables | OpenAI.ResponsePromptVariables | Optional map of values to substitute in for variables in your prompt. The substitution values can either be strings, or other Response input types like images or files. |
No | |
| └─ version | string | Optional version of the prompt template. | No | |
| reasoning | object | reasoning models only Configuration options for reasoning models. |
No | |
| └─ effort | OpenAI.ReasoningEffort | reasoning models only Constrains effort on reasoning for reasoning models. Currently supported values are low, medium, and high. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response. |
No | |
| └─ generate_summary | enum | Deprecated: use summary instead.A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process. One of auto, concise, or detailed.Possible values: auto, concise, detailed |
No | |
| └─ summary | enum | A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process. One of auto, concise, or detailed.Possible values: auto, concise, detailed |
No | |
| store | boolean | Whether to store the generated model response for later retrieval via API. |
No | True |
| stream | boolean | If set to true, the model response data will be streamed to the client as it is generated using server-sent events. See the Streaming section below for more information. |
No | False |
| temperature | number | What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both. |
No | 1 |
| text | object | Configuration options for a text response from the model. Can be plain text or structured JSON data. Learn more: Structured Outputs |
No | |
| └─ format | OpenAI.ResponseTextFormatConfiguration | No | ||
| tool_choice | object | Controls which (if any) tool is called by the model. none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools. |
No | |
| └─ type | OpenAI.ToolChoiceObjectType | Indicates that the model should use a built-in tool to generate a response. | No | |
| tools | array | An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter.The two categories of tools you can provide the model are: - Built-in tools: Tools that are provided by OpenAI that extend the model's capabilities, like file search. - Function calls (custom tools): Functions that are defined by you, enabling the model to call your own code. |
No | |
| top_logprobs | integer | An integer between 0 and 20 specifying the number of most likely tokens to return at each token position, each with an associated log probability. | No | |
| top_p | number | An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or temperature but not both. |
No | 1 |
| truncation | enum | The truncation strategy to use for the model response. - auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation. - disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error. Possible values: auto, disabled |
No | |
| user | string | A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse. | No |
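Several of the parameters above interact: with `store: false`, reasoning items normally cannot be reused across turns unless `reasoning.encrypted_content` is requested via `include`. A minimal request-body sketch (the deployment name is a hypothetical placeholder):

```python
# Minimal AzureCreateResponse request body; deployment name is hypothetical.
response_request = {
    "model": "my-gpt-4o",  # hypothetical deployment name
    "input": "Summarize the release notes in three bullet points.",
    "instructions": "You are a terse assistant.",
    "max_output_tokens": 512,
    "truncation": "auto",  # drop middle turns instead of failing with a 400
    "store": False,
    # With store=False, request encrypted reasoning so reasoning items can
    # still be carried across turns statelessly.
    "include": ["reasoning.encrypted_content"],
}
```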
AzureErrorResponse
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| error | object | The error details. | No | |
| └─ code | string | The distinct, machine-generated identifier for the error. | No | |
| └─ inner_error | No | |||
| └─ message | string | A human-readable message associated with the error. | No | |
| └─ param | string | If applicable, the request input parameter associated with the error. | No | |
| └─ type | enum | The object type, always 'error'. Possible values: error |
No |
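A sketch of reading an AzureErrorResponse body in client code; the payload values below are illustrative, not a real error the service is guaranteed to return.

```python
# Illustrative AzureErrorResponse payload and a small helper to summarize it.
error_body = {
    "error": {
        "type": "error",
        "code": "content_filter",  # illustrative machine-generated code
        "param": "messages",
        "message": "The request was filtered.",
    }
}

def describe_error(body: dict) -> str:
    """Render 'code: message' from an error response, tolerating missing fields."""
    err = body.get("error", {})
    return f"{err.get('code', 'unknown')}: {err.get('message', '')}"
```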
AzureEvalAPICompletionsSamplingParams
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| parallel_tool_calls | boolean | No | ||
| response_format | OpenAI.ResponseTextFormatConfiguration | No | ||
| tools | array | No |
AzureEvalAPIModelSamplingParams
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| max_tokens | integer | The maximum number of tokens in the generated output. | No | |
| reasoning_effort | enum | Controls the level of reasoning effort applied during generation. Possible values: low, medium, high |
No | |
| seed | integer | A seed value to initialize the randomness during sampling. | No | |
| temperature | number | A higher temperature increases randomness in the outputs. | No | |
| top_p | number | An alternative to temperature for nucleus sampling; 1.0 includes all tokens. | No |
AzureEvalAPIResponseSamplingParams
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| parallel_tool_calls | boolean | No | ||
| response_format | OpenAI.ResponseTextFormatConfiguration | No | ||
| tools | array | No |
AzureFileExpiryAnchor
| Property | Value |
|---|---|
| Type | string |
| Values | created_at |
AzureFineTuneReinforcementMethod
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| grader | object | A StringCheckGrader object that performs a string comparison between input and reference using a specified operation. | Yes | |
| └─ calculate_output | string | A formula to calculate the output based on grader results. | No | |
| └─ evaluation_metric | enum | The evaluation metric to use. One of fuzzy_match, bleu, gleu, meteor, rouge_1, rouge_2, rouge_3, rouge_4, rouge_5, or rouge_l.Possible values: fuzzy_match, bleu, gleu, meteor, rouge_1, rouge_2, rouge_3, rouge_4, rouge_5, rouge_l |
No | |
| └─ graders | object | No | ||
| └─ input | array | The input text. This may include template strings. | No | |
| └─ model | string | The model to use for the evaluation. | No | |
| └─ name | string | The name of the grader. | No | |
| └─ operation | enum | The string check operation to perform. One of eq, ne, like, or ilike.Possible values: eq, ne, like, ilike |
No | |
| └─ range | array | The range of the score. Defaults to [0, 1]. |
No | |
| └─ reference | string | The text being graded against. | No | |
| └─ sampling_params | The sampling parameters for the model. | No | ||
| └─ type | enum | The object type, which is always multi.Possible values: multi |
No | |
| hyperparameters | OpenAI.FineTuneReinforcementHyperparameters | The hyperparameters used for the reinforcement fine-tuning job. | No | |
| response_format | object | No | ||
| └─ json_schema | object | JSON Schema for the response format | No | |
| └─ type | enum | Type of response format Possible values: json_schema |
No |
AzureListFilesResponse
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| data | array | Yes | ||
| first_id | string | Yes | ||
| has_more | boolean | Yes | ||
| last_id | string | Yes | ||
| object | enum | Possible values: list |
Yes |
AzureOpenAIFile
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| bytes | integer | The size of the file, in bytes. | Yes | |
| created_at | integer | The Unix timestamp (in seconds) for when the file was created. | Yes | |
| expires_at | integer | The Unix timestamp (in seconds) for when the file will expire. | No | |
| filename | string | The name of the file. | Yes | |
| id | string | The file identifier, which can be referenced in the API endpoints. | Yes | |
| object | enum | The object type, which is always file.Possible values: file |
Yes | |
| purpose | enum | The intended purpose of the file. Supported values are assistants, assistants_output, batch, batch_output, fine-tune, fine-tune-results, and evals. Possible values: assistants, assistants_output, batch, batch_output, fine-tune, fine-tune-results, evals |
Yes | |
| status | enum | Possible values: uploaded, pending, running, processed, error, deleting, deleted |
Yes | |
| status_details | string | Deprecated. For details on why a fine-tuning training file failed validation, see the error field on fine_tuning.job. |
No |
AzurePiiSubCategoryResult
Result details for an individual PIIHarmSubCategory.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| detected | boolean | Whether the labeled content subcategory was detected in the content. | Yes | |
| filtered | boolean | Whether the content detection resulted in a content filtering action for this subcategory. | Yes | |
| redacted | boolean | Whether the content was redacted for this subcategory. | Yes | |
| sub_category | string | The PIIHarmSubCategory that was evaluated. | Yes |
AzureResponse
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| background | boolean | Whether to run the model response in the background. Learn more. |
No | False |
| created_at | integer | Unix timestamp (in seconds) of when this Response was created. | Yes | |
| error | object | An error object returned when the model fails to generate a Response. | Yes | |
| └─ code | OpenAI.ResponseErrorCode | The error code for the response. | No | |
| └─ message | string | A human-readable description of the error. | No | |
| id | string | Unique identifier for this Response. | Yes | |
| incomplete_details | object | Details about why the response is incomplete. | Yes | |
| └─ reason | enum | The reason why the response is incomplete. Possible values: max_output_tokens, content_filter |
No | |
| instructions | string or array | Yes | ||
| max_output_tokens | integer | An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens. | No | |
| max_tool_calls | integer | The maximum number of total calls to built-in tools that can be processed in a response. This maximum number applies across all built-in tool calls, not per individual tool. Any further attempts to call a tool by the model will be ignored. | No | |
| metadata | object | Set of up to 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard. Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters. |
Yes | |
| model | string | The model used to generate this response. | Yes | |
| object | enum | The object type of this resource - always set to response.Possible values: response |
Yes | |
| output | array | An array of content items generated by the model. - The length and order of items in the output array is dependent on the model's response. - Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs. |
Yes | |
| output_text | string | SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present.Supported in the Python and JavaScript SDKs. |
No | |
| parallel_tool_calls | boolean | Whether to allow the model to run tool calls in parallel. | Yes | True |
| previous_response_id | string | The unique ID of the previous response to the model. Use this to create multi-turn conversations. |
No | |
| prompt | object | Reference to a prompt template and its variables. |
No | |
| └─ id | string | The unique identifier of the prompt template to use. | No | |
| └─ variables | OpenAI.ResponsePromptVariables | Optional map of values to substitute in for variables in your prompt. The substitution values can either be strings, or other Response input types like images or files. |
No | |
| └─ version | string | Optional version of the prompt template. | No | |
| reasoning | object | reasoning models only Configuration options for reasoning models. |
No | |
| └─ effort | OpenAI.ReasoningEffort | reasoning models only Constrains effort on reasoning for reasoning models. Currently supported values are low, medium, and high. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response. |
No | |
| └─ generate_summary | enum | Deprecated: use summary instead.A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process. One of auto, concise, or detailed.Possible values: auto, concise, detailed |
No | |
| └─ summary | enum | A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process. One of auto, concise, or detailed.Possible values: auto, concise, detailed |
No | |
| status | enum | The status of the response generation. One of completed, failed, in_progress, cancelled, queued, or incomplete. Possible values: completed, failed, in_progress, cancelled, queued, incomplete |
No | |
| temperature | number | What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both. |
Yes | |
| text | object | Configuration options for a text response from the model. Can be plain text or structured JSON data. Learn more: Structured Outputs |
No | |
| └─ format | OpenAI.ResponseTextFormatConfiguration | No | ||
| tool_choice | object | Controls which (if any) tool is called by the model. none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools. |
No | |
| └─ type | OpenAI.ToolChoiceObjectType | Indicates that the model should use a built-in tool to generate a response. | No | |
| tools | array | An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter.The two categories of tools you can provide the model are: - Built-in tools: Tools that are provided by OpenAI that extend the model's capabilities, like web search or file search. |
No | |
| top_logprobs | integer | An integer between 0 and 20 specifying the number of most likely tokens to return at each token position, each with an associated log probability. | No | |
| top_p | number | An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or temperature but not both. |
Yes | |
| truncation | enum | The truncation strategy to use for the model response. - auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation. - disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error. Possible values: auto, disabled |
No | |
| usage | OpenAI.ResponseUsage | Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used. |
No | |
| user | string | A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse. | Yes |
AzureSearchChatDataSource
Represents a data source configuration that will use an Azure Search resource.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| parameters | object | The parameter information to control the use of the Azure Search data source. | Yes | |
| └─ allow_partial_result | boolean | If set to true, the system will allow partial search results to be used and the request will fail if all partial queries fail. If not specified or specified as false, the request will fail if any search query fails. |
No | False |
| └─ authentication | object | No | ||
| └─ access_token | string | No | ||
| └─ key | string | No | ||
| └─ managed_identity_resource_id | string | No | ||
| └─ type | enum | Possible values: access_token |
No | |
| └─ embedding_dependency | object | Represents a vectorization source that makes public service calls against an Azure OpenAI embedding model deployment. | No | |
| └─ authentication | AzureChatDataSourceApiKeyAuthenticationOptions or AzureChatDataSourceAccessTokenAuthenticationOptions | The authentication mechanism to use with the endpoint-based vectorization source. Endpoint authentication supports API key and access token mechanisms. |
No | |
| └─ deployment_name | string | The embedding model deployment to use for vectorization. This deployment must exist within the same Azure OpenAI resource as the model deployment being used for chat completions. |
No | |
| └─ dimensions | integer | The number of dimensions to request on embeddings. Only supported in 'text-embedding-3' and later models. |
No | |
| └─ endpoint | string | Specifies the resource endpoint URL from which embeddings should be retrieved. It should be in the format of: https://YOUR_RESOURCE_NAME.openai.azure.com/openai/deployments/YOUR_DEPLOYMENT_NAME/embeddings. The api-version query parameter is not allowed. |
No | |
| └─ type | enum | The type identifier, always 'integrated' for this vectorization source type. Possible values: integrated |
No | |
| └─ endpoint | string | The absolute endpoint path for the Azure Search resource to use. | No | |
| └─ fields_mapping | object | The field mappings to use with the Azure Search resource. | No | |
| └─ content_fields | array | The names of index fields that should be treated as content. | No | |
| └─ content_fields_separator | string | The separator pattern that content fields should use. | No | |
| └─ filepath_field | string | The name of the index field to use as a filepath. | No | |
| └─ image_vector_fields | array | The names of fields that represent image vector data. | No | |
| └─ title_field | string | The name of the index field to use as a title. | No | |
| └─ url_field | string | The name of the index field to use as a URL. | No | |
| └─ vector_fields | array | The names of fields that represent vector data. | No | |
| └─ filter | string | A filter to apply to the search. | No | |
| └─ in_scope | boolean | Whether queries should be restricted to use of the indexed data. | No | |
| └─ include_contexts | array | The output context properties to include on the response. By default, citations and intent will be requested. |
No | ['citations', 'intent'] |
| └─ index_name | string | The name of the index to use, as specified in the Azure Search resource. | No | |
| └─ max_search_queries | integer | The maximum number of rewritten queries that should be sent to the search provider for a single user message. By default, the system will make an automatic determination. |
No | |
| └─ query_type | enum | The query type for the Azure Search resource to use. Possible values: simple, semantic, vector, vector_simple_hybrid, vector_semantic_hybrid |
No | |
| └─ semantic_configuration | string | Additional semantic configuration for the query. | No | |
| └─ strictness | integer | The configured strictness of the search relevance filtering. Higher strictness will increase precision but lower recall of the answer. |
No | |
| └─ top_n_documents | integer | The configured number of documents to feature in the query. | No | |
| type | enum | The discriminated type identifier, which is always 'azure_search'. Possible values: azure_search |
Yes |
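A sketch of a `data_sources` entry that wires the parameters above together for On Your Data with Azure Search. The endpoint, index name, and token are hypothetical placeholders; field names follow the table above.

```python
# Sketch of an AzureSearchChatDataSource entry for the data_sources array.
# Endpoint, index name, and the token value are hypothetical placeholders.
azure_search_source = {
    "type": "azure_search",
    "parameters": {
        "endpoint": "https://my-search.search.windows.net",  # hypothetical
        "index_name": "my-index",                            # hypothetical
        "authentication": {
            "type": "access_token",
            "access_token": "<entra-access-token>",          # placeholder
        },
        "query_type": "simple",
        "in_scope": True,        # restrict answers to the indexed data
        "top_n_documents": 5,
        "strictness": 3,         # higher = more precision, lower recall
        "include_contexts": ["citations", "intent"],
    },
}
```

This object would be passed as one element of the `data_sources` array on a chat completion request.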
AzureUserSecurityContext
User security context contains several parameters that describe the application itself, and the end user that interacts with the application. These fields assist your security operations teams to investigate and mitigate security incidents by providing a comprehensive approach to protecting your AI applications. Learn more about protecting AI applications using Microsoft Defender for Cloud.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| application_name | string | The name of the application. Sensitive personal information should not be included in this field. | No | |
| end_user_id | string | This identifier is the Microsoft Entra ID (formerly Azure Active Directory) user object ID used to authenticate end-users within the generative AI application. Sensitive personal information should not be included in this field. | No | |
| end_user_tenant_id | string | The Microsoft 365 tenant ID the end user belongs to. It's required when the generative AI application is multitenant. | No | |
| source_ip | string | Captures the original client's IP address. | No |
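A sketch of the `user_security_context` object assembled from the fields above. The application name, IDs, and IP are placeholders (the IP comes from the documentation range 203.0.113.0/24); no sensitive personal information should be placed in these fields.

```python
# Sketch of an AzureUserSecurityContext payload; all values are placeholders.
user_security_context = {
    "application_name": "contoso-support-bot",                     # hypothetical
    "end_user_id": "00000000-0000-0000-0000-000000000000",         # Entra user object ID placeholder
    "end_user_tenant_id": "11111111-1111-1111-1111-111111111111",  # tenant placeholder
    "source_ip": "203.0.113.7",                                    # documentation IP range
}
```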
ChatCompletionMessageToolCallsItem
The tool calls generated by the model, such as function calls.
Array of: OpenAI.ChatCompletionMessageToolCall
CopiedAccountDetails
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| destinationResourceId | string | The ID of the destination resource where the model was copied to. | Yes | |
| region | string | The region where the model was copied to. | Yes | |
| status | enum | The status of the copy operation. Possible values: Completed, Failed, InProgress |
Yes |
CopyModelRequest
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| destinationResourceId | string | The ID of the destination resource to copy the model to. | Yes | |
| region | string | The region to copy the model to. | Yes |
CopyModelResponse
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| checkpointedModelName | string | The ID of the copied model. | Yes | |
| copiedAccountDetails | array | The details of the destination accounts to which the model was copied. | Yes | |
| fineTuningJobId | string | The ID of the fine-tuning job that the checkpoint was copied from. | Yes |
ElasticsearchChatDataSource
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| parameters | object | The parameter information to control the use of the Elasticsearch data source. | Yes | |
| └─ allow_partial_result | boolean | If set to true, the system will allow partial search results to be used and the request will fail if all partial queries fail. If not specified or specified as false, the request will fail if any search query fails. | No | False |
| └─ authentication | object | | No | |
| └─ encoded_api_key | string | | No | |
| └─ key | string | | No | |
| └─ key_id | string | | No | |
| └─ type | enum | Possible values: encoded_api_key | No | |
| └─ embedding_dependency | AzureChatDataSourceVectorizationSource | A representation of a data vectorization source usable as an embedding resource with a data source. | No | |
| └─ endpoint | string | | No | |
| └─ fields_mapping | object | | No | |
| └─ content_fields | array | | No | |
| └─ content_fields_separator | string | | No | |
| └─ filepath_field | string | | No | |
| └─ title_field | string | | No | |
| └─ url_field | string | | No | |
| └─ vector_fields | array | | No | |
| └─ in_scope | boolean | Whether queries should be restricted to use of the indexed data. | No | |
| └─ include_contexts | array | The output context properties to include on the response. By default, citations and intent will be requested. | No | ['citations', 'intent'] |
| └─ index_name | string | | No | |
| └─ max_search_queries | integer | The maximum number of rewritten queries that should be sent to the search provider for a single user message. By default, the system will make an automatic determination. | No | |
| └─ query_type | enum | Possible values: simple, vector | No | |
| └─ strictness | integer | The configured strictness of the search relevance filtering. Higher strictness will increase precision but lower recall of the answer. | No | |
| └─ top_n_documents | integer | The configured number of documents to feature in the query. | No | |
| type | enum | The discriminated type identifier, which is always 'elasticsearch'. Possible values: elasticsearch | Yes | |
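Putting the fields above together, a minimal `elasticsearch` entry for the request body's `data_sources` array might look like the following sketch. The endpoint, index name, and credential values are placeholders, not real values:

```python
# Hypothetical Elasticsearch data source entry for the "data_sources"
# array of a chat completions request. Endpoint, index name, and the
# encoded API key are placeholders.
elasticsearch_data_source = {
    "type": "elasticsearch",
    "parameters": {
        "endpoint": "https://example-cluster.es.example.com:9243",  # placeholder
        "index_name": "my-index",                                   # placeholder
        "authentication": {
            "type": "encoded_api_key",
            "encoded_api_key": "PLACEHOLDER_ENCODED_API_KEY",
        },
        "query_type": "simple",
        "in_scope": True,
        "top_n_documents": 5,
        "strictness": 3,
        "include_contexts": ["citations", "intent"],
    },
}
```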
MongoDBChatDataSource
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| parameters | object | The parameter information to control the use of the MongoDB data source. | Yes | |
| └─ allow_partial_result | boolean | If set to true, the system will allow partial search results to be used and the request will fail if all partial queries fail. If not specified or specified as false, the request will fail if any search query fails. | No | False |
| └─ app_name | string | The name of the MongoDB application. | No | |
| └─ authentication | object | | No | |
| └─ password | string | | No | |
| └─ type | enum | Possible values: username_and_password | No | |
| └─ username | string | | No | |
| └─ collection_name | string | The name of the MongoDB collection. | No | |
| └─ database_name | string | The name of the MongoDB database. | No | |
| └─ embedding_dependency | object | Represents a vectorization source that makes public service calls against an Azure OpenAI embedding model deployment. | No | |
| └─ authentication | AzureChatDataSourceApiKeyAuthenticationOptions or AzureChatDataSourceAccessTokenAuthenticationOptions | The authentication mechanism to use with the endpoint-based vectorization source. Endpoint authentication supports API key and access token mechanisms. | No | |
| └─ deployment_name | string | The embedding model deployment to use for vectorization. This deployment must exist within the same Azure OpenAI resource as the model deployment being used for chat completions. | No | |
| └─ dimensions | integer | The number of dimensions to request on embeddings. Only supported in 'text-embedding-3' and later models. | No | |
| └─ endpoint | string | Specifies the resource endpoint URL from which embeddings should be retrieved. It should be in the format of: https://YOUR_RESOURCE_NAME.openai.azure.com/openai/deployments/YOUR_DEPLOYMENT_NAME/embeddings. The api-version query parameter is not allowed. | No | |
| └─ type | enum | The type identifier, always 'deployment_name' for this vectorization source type. Possible values: deployment_name | No | |
| └─ endpoint | string | The name of the MongoDB cluster endpoint. | No | |
| └─ fields_mapping | object | Field mappings to apply to data used by the MongoDB data source. Note that content and vector field mappings are required for MongoDB. | No | |
| └─ content_fields | array | | No | |
| └─ content_fields_separator | string | | No | |
| └─ filepath_field | string | | No | |
| └─ title_field | string | | No | |
| └─ url_field | string | | No | |
| └─ vector_fields | array | | No | |
| └─ in_scope | boolean | Whether queries should be restricted to use of the indexed data. | No | |
| └─ include_contexts | array | The output context properties to include on the response. By default, citations and intent will be requested. | No | ['citations', 'intent'] |
| └─ index_name | string | The name of the MongoDB index. | No | |
| └─ max_search_queries | integer | The maximum number of rewritten queries that should be sent to the search provider for a single user message. By default, the system will make an automatic determination. | No | |
| └─ strictness | integer | The configured strictness of the search relevance filtering. Higher strictness will increase precision but lower recall of the answer. | No | |
| └─ top_n_documents | integer | The configured number of documents to feature in the query. | No | |
| type | enum | The discriminated type identifier, which is always 'mongo_db'. Possible values: mongo_db | Yes | |
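A corresponding `mongo_db` data source entry might be shaped as follows. All names and credentials are placeholders, and note that content and vector field mappings are required for MongoDB:

```python
# Hypothetical MongoDB data source entry. Cluster endpoint, database,
# collection, index, credentials, and the embedding deployment name are
# all placeholders.
mongodb_data_source = {
    "type": "mongo_db",
    "parameters": {
        "endpoint": "example-cluster.mongodb.net",  # placeholder
        "database_name": "mydb",
        "collection_name": "docs",
        "index_name": "vector-index",
        "app_name": "my-app",
        "authentication": {
            "type": "username_and_password",
            "username": "PLACEHOLDER_USER",
            "password": "PLACEHOLDER_PASSWORD",
        },
        # Content and vector field mappings are required for MongoDB.
        "fields_mapping": {
            "content_fields": ["content"],
            "vector_fields": ["contentVector"],
        },
        "embedding_dependency": {
            "type": "deployment_name",
            "deployment_name": "text-embedding-3-small",  # placeholder
        },
    },
}
```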
OpenAI.Annotation
Discriminator for OpenAI.Annotation
This component uses the property type to discriminate between different types:
| Type Value | Schema |
|---|---|
| file_citation | OpenAI.AnnotationFileCitation |
| url_citation | OpenAI.AnnotationUrlCitation |
| file_path | OpenAI.AnnotationFilePath |
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| type | OpenAI.AnnotationType | Yes |
OpenAI.AnnotationFileCitation
A citation to a file.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| file_id | string | The ID of the file. | Yes | |
| filename | string | The filename of the file cited. | Yes | |
| index | integer | The index of the file in the list of files. | Yes | |
| type | enum | The type of the file citation. Always file_citation. Possible values: file_citation | Yes | |
OpenAI.AnnotationFilePath
A path to a file.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| file_id | string | The ID of the file. | Yes | |
| index | integer | The index of the file in the list of files. | Yes | |
| type | enum | The type of the file path. Always file_path. Possible values: file_path | Yes | |
OpenAI.AnnotationType
| Property | Value |
|---|---|
| Type | string |
| Values | file_citation, url_citation, file_path, container_file_citation |
OpenAI.AnnotationUrlCitation
A citation for a web resource used to generate a model response.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| end_index | integer | The index of the last character of the URL citation in the message. | Yes | |
| start_index | integer | The index of the first character of the URL citation in the message. | Yes | |
| title | string | The title of the web resource. | Yes | |
| type | enum | The type of the URL citation. Always url_citation. Possible values: url_citation | Yes | |
| url | string | The URL of the web resource. | Yes | |
OpenAI.ApproximateLocation
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| city | string | No | ||
| country | string | No | ||
| region | string | No | ||
| timezone | string | No | ||
| type | enum | Possible values: approximate | Yes | |
OpenAI.AutoChunkingStrategyRequestParam
The default strategy. This strategy currently uses a max_chunk_size_tokens of 800 and chunk_overlap_tokens of 400.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| type | enum | Always auto. Possible values: auto | Yes | |
OpenAI.ChatCompletionFunctionCallOption
Specifying a particular function via {"name": "my_function"} forces the model to call that function.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| name | string | The name of the function to call. | Yes |
OpenAI.ChatCompletionFunctions
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| description | string | A description of what the function does, used by the model to choose when and how to call the function. | No | |
| name | string | The name of the function to be called. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64. | Yes | |
| parameters | | The parameters the function accepts, described as a JSON Schema object. See the JSON Schema reference for documentation about the format. Omitting parameters defines a function with an empty parameter list. | No | |
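As a sketch of this shape, a hypothetical function definition with JSON Schema parameters might look like this ("get_weather" and its parameters are invented for illustration):

```python
import re

# Hypothetical function definition following the ChatCompletionFunctions
# shape; "get_weather" and its schema are invented for this sketch.
get_weather = {
    "name": "get_weather",
    "description": "Get the current weather for a city.",
    "parameters": {
        "type": "object",
        "properties": {
            "city": {"type": "string", "description": "City name, e.g. Seattle."},
        },
        "required": ["city"],
    },
}

# The name must contain only a-z, A-Z, 0-9, underscores, or dashes,
# with a maximum length of 64 characters.
assert re.fullmatch(r"[A-Za-z0-9_-]{1,64}", get_weather["name"])
```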
OpenAI.ChatCompletionMessageAudioChunk
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| data | string | No | ||
| expires_at | integer | No | ||
| id | string | No | ||
| transcript | string | No |
OpenAI.ChatCompletionMessageToolCall
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| function | object | The function that the model called. | Yes | |
| └─ arguments | string | The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function. | No | |
| └─ name | string | The name of the function to call. | No | |
| id | string | The ID of the tool call. | Yes | |
| type | enum | The type of the tool. Currently, only function is supported. Possible values: function | Yes | |
OpenAI.ChatCompletionMessageToolCallChunk
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| function | object | No | ||
| └─ arguments | string | The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function. | No | |
| └─ name | string | The name of the function to call. | No | |
| id | string | The ID of the tool call. | No | |
| index | integer | Yes | ||
| type | enum | The type of the tool. Currently, only function is supported. Possible values: function | No | |
OpenAI.ChatCompletionNamedToolChoice
Specifies a tool the model should use. Use to force the model to call a specific function.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| function | object | | Yes | |
| └─ name | string | The name of the function to call. | No | |
| type | enum | The type of the tool. Currently, only function is supported. Possible values: function | Yes | |
OpenAI.ChatCompletionRequestAssistantMessage
Messages sent by the model in response to user messages.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| audio | object | Data about a previous audio response from the model. | No | |
| └─ id | string | Unique identifier for a previous audio response from the model. | No | |
| content | string or array | No | ||
| function_call | object | Deprecated and replaced by tool_calls. The name and arguments of a function that should be called, as generated by the model. | No | |
| └─ arguments | string | | No | |
| └─ name | string | | No | |
| name | string | An optional name for the participant. Provides the model information to differentiate between participants of the same role. | No | |
| refusal | string | The refusal message by the assistant. | No | |
| role | enum | The role of the messages author, in this case assistant. Possible values: assistant | Yes | |
| tool_calls | ChatCompletionMessageToolCallsItem | The tool calls generated by the model, such as function calls. | No |
OpenAI.ChatCompletionRequestAssistantMessageContentPart
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| refusal | string | The refusal message generated by the model. | Yes | |
| text | string | The text content. | Yes | |
| type | enum | The type of the content part. Possible values: refusal | Yes | |
OpenAI.ChatCompletionRequestDeveloperMessage
Developer-provided instructions that the model should follow, regardless of
messages sent by the user. With o1 models and newer, developer messages
replace the previous system messages.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| content | string or array | Yes | ||
| name | string | An optional name for the participant. Provides the model information to differentiate between participants of the same role. | No | |
| role | enum | The role of the messages author, in this case developer. Possible values: developer | Yes | |
OpenAI.ChatCompletionRequestFunctionMessage
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| content | string | The contents of the function message. | Yes | |
| name | string | The name of the function to call. | Yes | |
| role | enum | The role of the messages author, in this case function. Possible values: function | Yes | |
OpenAI.ChatCompletionRequestMessage
Discriminator for OpenAI.ChatCompletionRequestMessage
This component uses the property role to discriminate between different types:
| Type Value | Schema |
|---|---|
| system | OpenAI.ChatCompletionRequestSystemMessage |
| developer | OpenAI.ChatCompletionRequestDeveloperMessage |
| user | OpenAI.ChatCompletionRequestUserMessage |
| assistant | OpenAI.ChatCompletionRequestAssistantMessage |
| tool | OpenAI.ChatCompletionRequestToolMessage |
| function | OpenAI.ChatCompletionRequestFunctionMessage |
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| content | string or array | No | ||
| role | object | The role of the author of a message | Yes |
OpenAI.ChatCompletionRequestMessageContentPart
Discriminator for OpenAI.ChatCompletionRequestMessageContentPart
This component uses the property type to discriminate between different types:
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| type | OpenAI.ChatCompletionRequestMessageContentPartType | Yes |
OpenAI.ChatCompletionRequestMessageContentPartAudio
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| input_audio | object | | Yes | |
| └─ data | string | Base64 encoded audio data. | No | |
| └─ format | enum | The format of the encoded audio data. Currently supports "wav" and "mp3". Possible values: wav, mp3 | No | |
| type | enum | The type of the content part. Always input_audio. Possible values: input_audio | Yes | |
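A minimal `input_audio` content part can be built by base64-encoding the audio bytes; the bytes here are a stand-in, not a playable recording:

```python
import base64

# Sketch of an input_audio content part. wav_bytes is a placeholder
# for the contents of a real WAV file.
wav_bytes = b"RIFF....WAVEfmt "  # placeholder bytes, not a valid recording
audio_part = {
    "type": "input_audio",
    "input_audio": {
        "data": base64.b64encode(wav_bytes).decode("ascii"),
        "format": "wav",  # must be "wav" or "mp3"
    },
}
```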
OpenAI.ChatCompletionRequestMessageContentPartFile
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| file | object | | Yes | |
| └─ file_data | string | The base64 encoded file data, used when passing the file to the model as a string. | No | |
| └─ file_id | string | The ID of an uploaded file to use as input. | No | |
| └─ filename | string | The name of the file, used when passing the file to the model as a string. | No | |
| type | enum | The type of the content part. Always file. Possible values: file | Yes | |
OpenAI.ChatCompletionRequestMessageContentPartImage
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| image_url | object | | Yes | |
| └─ detail | enum | Specifies the detail level of the image. Possible values: auto, low, high | No | |
| └─ url | string | Either a URL of the image or the base64 encoded image data. | No | |
| type | enum | The type of the content part. Possible values: image_url | Yes | |
OpenAI.ChatCompletionRequestMessageContentPartRefusal
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| refusal | string | The refusal message generated by the model. | Yes | |
| type | enum | The type of the content part. Possible values: refusal |
Yes |
OpenAI.ChatCompletionRequestMessageContentPartText
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| text | string | The text content. | Yes | |
| type | enum | The type of the content part. Possible values: text | Yes | |
OpenAI.ChatCompletionRequestMessageContentPartType
| Property | Value |
|---|---|
| Type | string |
| Values | text, file, input_audio, image_url, refusal |
OpenAI.ChatCompletionRequestSystemMessage
Developer-provided instructions that the model should follow, regardless of
messages sent by the user. With o1 models and newer, use developer messages
for this purpose instead.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| content | string or array | Yes | ||
| name | string | An optional name for the participant. Provides the model information to differentiate between participants of the same role. | No | |
| role | enum | The role of the messages author, in this case system. Possible values: system | Yes | |
OpenAI.ChatCompletionRequestSystemMessageContentPart
References: OpenAI.ChatCompletionRequestMessageContentPartText
OpenAI.ChatCompletionRequestToolMessage
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| content | string or array | Yes | ||
| role | enum | The role of the messages author, in this case tool. Possible values: tool | Yes | |
| tool_call_id | string | Tool call that this message is responding to. | Yes |
OpenAI.ChatCompletionRequestToolMessageContentPart
References: OpenAI.ChatCompletionRequestMessageContentPartText
OpenAI.ChatCompletionRequestUserMessage
Messages sent by an end user, containing prompts or additional context information.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| content | string or array | Yes | ||
| name | string | An optional name for the participant. Provides the model information to differentiate between participants of the same role. | No | |
| role | enum | The role of the messages author, in this case user. Possible values: user | Yes | |
OpenAI.ChatCompletionRequestUserMessageContentPart
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| file | object | | Yes | |
| └─ file_data | string | The base64 encoded file data, used when passing the file to the model as a string. | No | |
| └─ file_id | string | The ID of an uploaded file to use as input. | No | |
| └─ filename | string | The name of the file, used when passing the file to the model as a string. | No | |
| image_url | object | | Yes | |
| └─ detail | enum | Specifies the detail level of the image. Possible values: auto, low, high | No | |
| └─ url | string | Either a URL of the image or the base64 encoded image data. | No | |
| input_audio | object | | Yes | |
| └─ data | string | Base64 encoded audio data. | No | |
| └─ format | enum | The format of the encoded audio data. Currently supports "wav" and "mp3". Possible values: wav, mp3 | No | |
| text | string | The text content. | Yes | |
| type | enum | The type of the content part. Possible values: text, file, input_audio, image_url | Yes | |
OpenAI.ChatCompletionRole
The role of the author of a message
| Property | Value |
|---|---|
| Description | The role of the author of a message |
| Type | string |
| Values | system, developer, user, assistant, tool, function |
OpenAI.ChatCompletionStreamOptions
Options for streaming response. Only set this when you set stream: true.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| include_usage | boolean | If set, an additional chunk will be streamed before the data: [DONE] message. The usage field on this chunk shows the token usage statistics for the entire request, and the choices field will always be an empty array. All other chunks will also include a usage field, but with a null value. NOTE: If the stream is interrupted, you may not receive the final usage chunk which contains the total token usage for the request. | No | |
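A request body that opts into the final usage chunk would pair `stream: true` with `stream_options` as sketched below (the model name is a placeholder for your deployment):

```python
# Sketch of a streaming chat completions request body that asks for the
# final usage chunk. "gpt-4o" is a placeholder model/deployment name.
request_body = {
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Hello!"}],
    "stream": True,
    "stream_options": {"include_usage": True},
}

# stream_options is only meaningful when stream is true.
assert request_body["stream"] is True
```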
OpenAI.ChatCompletionStreamResponseDelta
A chat completion delta generated by streamed model responses.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| audio | object | | No | |
| └─ data | string | | No | |
| └─ expires_at | integer | | No | |
| └─ id | string | | No | |
| └─ transcript | string | | No | |
| content | string | The contents of the chunk message. | No | |
| function_call | object | Deprecated and replaced by tool_calls. The name and arguments of a function that should be called, as generated by the model. | No | |
| └─ arguments | string | No | ||
| └─ name | string | No | ||
| refusal | string | The refusal message generated by the model. | No | |
| role | object | The role of the author of a message | No | |
| tool_calls | array | No |
OpenAI.ChatCompletionTokenLogprob
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| bytes | array | A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token. | Yes | |
| logprob | number | The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely. | Yes | |
| token | string | The token. | Yes | |
| top_logprobs | array | List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned. | Yes | |
OpenAI.ChatCompletionTool
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| function | OpenAI.FunctionObject | | Yes | |
| type | enum | The type of the tool. Currently, only function is supported. Possible values: function | Yes | |
OpenAI.ChatCompletionToolChoiceOption
Controls which (if any) tool is called by the model.
none means the model will not call any tool and instead generates a message.
auto means the model can pick between generating a message or calling one or more tools.
required means the model must call one or more tools.
Specifying a particular tool via {"type": "function", "function": {"name": "my_function"}} forces the model to call that tool.
none is the default when no tools are present. auto is the default if tools are present.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| function | object | | Yes | |
| └─ name | string | The name of the function to call. | No | |
| type | enum | The type of the tool. Currently, only function is supported. Possible values: function | Yes | |
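The four forms described above can be written out as follows; "get_weather" is a hypothetical function name used only for illustration:

```python
# The forms tool_choice can take, per the description above.
tool_choice_none = "none"          # never call a tool
tool_choice_auto = "auto"          # model decides (default when tools are present)
tool_choice_required = "required"  # model must call one or more tools
tool_choice_named = {              # force a specific (hypothetical) function
    "type": "function",
    "function": {"name": "get_weather"},
}
```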
OpenAI.ChatOutputPrediction
Base representation of predicted output from a model.
Discriminator for OpenAI.ChatOutputPrediction
This component uses the property type to discriminate between different types:
| Type Value | Schema |
|---|---|
| content | OpenAI.ChatOutputPredictionContent |
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| type | OpenAI.ChatOutputPredictionType | Yes |
OpenAI.ChatOutputPredictionContent
Static predicted output content, such as the content of a text file that is being regenerated.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| content | string or array | Yes | ||
| type | enum | The type of the predicted content you want to provide. This type is currently always content. Possible values: content | Yes | |
OpenAI.ChatOutputPredictionType
| Property | Value |
|---|---|
| Type | string |
| Values | content |
OpenAI.ChunkingStrategyRequestParam
The chunking strategy used to chunk the file(s). If not set, will use the auto strategy.
Discriminator for OpenAI.ChunkingStrategyRequestParam
This component uses the property type to discriminate between different types:
| Type Value | Schema |
|---|---|
| static | OpenAI.StaticChunkingStrategyRequestParam |
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| type | enum | The type of chunking strategy. Possible values: auto, static | Yes | |
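As a sketch, the two request-side strategies look like this. The `static` body follows OpenAI.StaticChunkingStrategyRequestParam (not shown in this excerpt); the numbers mirror the defaults quoted for the auto strategy:

```python
# The auto strategy takes no further configuration.
auto_strategy = {"type": "auto"}

# Sketch of a static strategy; field names assume the
# StaticChunkingStrategyRequestParam shape, and the values mirror the
# documented auto defaults (800 / 400).
static_strategy = {
    "type": "static",
    "static": {
        "max_chunk_size_tokens": 800,
        "chunk_overlap_tokens": 400,
    },
}
```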
OpenAI.ChunkingStrategyResponseParam
Discriminator for OpenAI.ChunkingStrategyResponseParam
This component uses the property type to discriminate between different types:
| Type Value | Schema |
|---|---|
| other | OpenAI.OtherChunkingStrategyResponseParam |
| static | OpenAI.StaticChunkingStrategyResponseParam |
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| type | enum | Possible values: static, other | Yes | |
OpenAI.CodeInterpreterOutput
Discriminator for OpenAI.CodeInterpreterOutput
This component uses the property type to discriminate between different types:
| Type Value | Schema |
|---|---|
| image | OpenAI.CodeInterpreterOutputImage |
| logs | OpenAI.CodeInterpreterOutputLogs |
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| type | OpenAI.CodeInterpreterOutputType | Yes |
OpenAI.CodeInterpreterOutputImage
The image output from the code interpreter.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| type | enum | The type of the output. Always 'image'. Possible values: image | Yes | |
| url | string | The URL of the image output from the code interpreter. | Yes |
OpenAI.CodeInterpreterOutputLogs
The logs output from the code interpreter.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| logs | string | The logs output from the code interpreter. | Yes | |
| type | enum | The type of the output. Always 'logs'. Possible values: logs | Yes | |
OpenAI.CodeInterpreterOutputType
| Property | Value |
|---|---|
| Type | string |
| Values | logs, image |
OpenAI.CodeInterpreterTool
A tool that runs Python code to help generate a response to a prompt.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| container | object | Configuration for a code interpreter container. Optionally specify the IDs of the files to run the code on. | Yes | |
| └─ file_ids | array | An optional list of uploaded files to make available to your code. | No | |
| └─ type | enum | Always auto. Possible values: auto | No | |
| type | enum | The type of the code interpreter tool. Always code_interpreter. Possible values: code_interpreter | Yes | |
OpenAI.CodeInterpreterToolAuto
Configuration for a code interpreter container. Optionally specify the IDs of the files to run the code on.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| file_ids | array | An optional list of uploaded files to make available to your code. | No | |
| type | enum | Always auto. Possible values: auto | Yes | |
OpenAI.CodeInterpreterToolCallItemParam
A tool call to run code.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| code | string | The code to run, or null if not available. | Yes | |
| container_id | string | The ID of the container used to run the code. | Yes | |
| outputs | array | The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available. | Yes | |
| type | enum | Possible values: code_interpreter_call | Yes | |
OpenAI.CodeInterpreterToolCallItemResource
A tool call to run code.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| code | string | The code to run, or null if not available. | Yes | |
| container_id | string | The ID of the container used to run the code. | Yes | |
| outputs | array | The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available. | Yes | |
| status | enum | Possible values: in_progress, completed, incomplete, interpreting, failed | Yes | |
| type | enum | Possible values: code_interpreter_call | Yes | |
OpenAI.ComparisonFilter
A filter used to compare a specified attribute key to a given value using a defined comparison operation.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| key | string | The key to compare against the value. | Yes | |
| type | enum | Specifies the comparison operator: eq (equals), ne (not equal), gt (greater than), gte (greater than or equal), lt (less than), lte (less than or equal). Possible values: eq, ne, gt, gte, lt, lte | Yes | |
| value | string or number or boolean | Yes |
OpenAI.CompletionUsage
Usage statistics for the completion request.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| completion_tokens | integer | Number of tokens in the generated completion. | Yes | 0 |
| completion_tokens_details | object | Breakdown of tokens used in a completion. | No | |
| └─ accepted_prediction_tokens | integer | When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion. | No | 0 |
| └─ audio_tokens | integer | Audio input tokens generated by the model. | No | 0 |
| └─ reasoning_tokens | integer | Tokens generated by the model for reasoning. | No | 0 |
| └─ rejected_prediction_tokens | integer | When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits. | No | 0 |
| prompt_tokens | integer | Number of tokens in the prompt. | Yes | 0 |
| prompt_tokens_details | object | Breakdown of tokens used in the prompt. | No | |
| └─ audio_tokens | integer | Audio input tokens present in the prompt. | No | 0 |
| └─ cached_tokens | integer | Cached tokens present in the prompt. | No | 0 |
| total_tokens | integer | Total number of tokens used in the request (prompt + completion). | Yes | 0 |
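The totals relate by the arithmetic stated in the table: total_tokens is prompt_tokens plus completion_tokens. A worked example with invented counts:

```python
# Invented usage numbers illustrating the documented invariant:
# total_tokens = prompt_tokens + completion_tokens.
usage = {
    "prompt_tokens": 120,
    "completion_tokens": 48,
    "total_tokens": 168,
}
assert usage["total_tokens"] == usage["prompt_tokens"] + usage["completion_tokens"]
```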
OpenAI.CompoundFilter
Combine multiple filters using and or or.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| filters | array | Array of filters to combine. Items can be ComparisonFilter or CompoundFilter. | Yes | |
| type | enum | Type of operation: and or or. Possible values: and, or | Yes | |
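Combining two comparison filters with `and` might look like the sketch below; the attribute keys ("price", "region") are invented for illustration:

```python
# Two hypothetical ComparisonFilters combined by a CompoundFilter.
price_at_least_10 = {"type": "gte", "key": "price", "value": 10}
region_is_emea = {"type": "eq", "key": "region", "value": "emea"}
combined = {
    "type": "and",
    "filters": [price_at_least_10, region_is_emea],
}
```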
OpenAI.ComputerAction
Discriminator for OpenAI.ComputerAction
This component uses the property type to discriminate between different types:
| Type Value | Schema |
|---|---|
| click | OpenAI.ComputerActionClick |
| double_click | OpenAI.ComputerActionDoubleClick |
| drag | OpenAI.ComputerActionDrag |
| move | OpenAI.ComputerActionMove |
| screenshot | OpenAI.ComputerActionScreenshot |
| scroll | OpenAI.ComputerActionScroll |
| type | OpenAI.ComputerActionTypeKeys |
| wait | OpenAI.ComputerActionWait |
| keypress | OpenAI.ComputerActionKeyPress |
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| type | OpenAI.ComputerActionType | Yes |
OpenAI.ComputerActionClick
A click action.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| button | enum | Indicates which mouse button was pressed during the click. One of left, right, wheel, back, or forward. Possible values: left, right, wheel, back, forward | Yes | |
| type | enum | Specifies the event type. For a click action, this property is always set to click. Possible values: click | Yes | |
| x | integer | The x-coordinate where the click occurred. | Yes | |
| y | integer | The y-coordinate where the click occurred. | Yes |
OpenAI.ComputerActionDoubleClick
A double click action.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| type | enum | Specifies the event type. For a double click action, this property is always set to double_click. Possible values: double_click | Yes | |
| x | integer | The x-coordinate where the double click occurred. | Yes | |
| y | integer | The y-coordinate where the double click occurred. | Yes |
OpenAI.ComputerActionDrag
A drag action.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| path | array | An array of coordinates representing the path of the drag action. Coordinates appear as an array of objects, for example: [ { "x": 100, "y": 200 }, { "x": 200, "y": 300 } ] | Yes | |
| type | enum | Specifies the event type. For a drag action, this property is always set to drag. Possible values: drag | Yes | |
OpenAI.ComputerActionKeyPress
A collection of keypresses the model would like to perform.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| keys | array | The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key. | Yes | |
| type | enum | Specifies the event type. For a keypress action, this property is always set to keypress. Possible values: keypress | Yes | |
OpenAI.ComputerActionMove
A mouse move action.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| type | enum | Specifies the event type. For a move action, this property is always set to move. Possible values: move | Yes | |
| x | integer | The x-coordinate to move to. | Yes | |
| y | integer | The y-coordinate to move to. | Yes |
OpenAI.ComputerActionScreenshot
A screenshot action.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| type | enum | Specifies the event type. For a screenshot action, this property is always set to screenshot. Possible values: screenshot | Yes | |
OpenAI.ComputerActionScroll
A scroll action.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| scroll_x | integer | The horizontal scroll distance. | Yes | |
| scroll_y | integer | The vertical scroll distance. | Yes | |
| type | enum | Specifies the event type. For a scroll action, this property is always set to scroll. Possible values: scroll | Yes | |
| x | integer | The x-coordinate where the scroll occurred. | Yes | |
| y | integer | The y-coordinate where the scroll occurred. | Yes |
OpenAI.ComputerActionType
| Property | Value |
|---|---|
| Type | string |
| Values | screenshot, click, double_click, scroll, type, wait, keypress, drag, move |
OpenAI.ComputerActionTypeKeys
An action to type in text.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| text | string | The text to type. | Yes | |
| type | enum | Specifies the event type. For a type action, this property is always set to type. Possible values: type | Yes | |
OpenAI.ComputerActionWait
A wait action.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| type | enum | Specifies the event type. For a wait action, this property is always set to wait. Possible values: wait | Yes | |
OpenAI.ComputerToolCallItemParam
A tool call to a computer use tool.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| action | OpenAI.ComputerAction | | Yes | |
| call_id | string | An identifier used when responding to the tool call with output. | Yes | |
| pending_safety_checks | array | The pending safety checks for the computer call. | Yes | |
| type | enum | Possible values: computer_call | Yes | |
OpenAI.ComputerToolCallItemResource
A tool call to a computer use tool.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| action | OpenAI.ComputerAction | | Yes | |
| call_id | string | An identifier used when responding to the tool call with output. | Yes | |
| pending_safety_checks | array | The pending safety checks for the computer call. | Yes | |
| status | enum | The status of the item. One of in_progress, completed, or incomplete. Populated when items are returned via API. Possible values: in_progress, completed, incomplete | Yes | |
| type | enum | Possible values: computer_call | Yes | |
OpenAI.ComputerToolCallOutputItemOutput
Discriminator for OpenAI.ComputerToolCallOutputItemOutput
This component uses the property type to discriminate between different types:
| Type Value | Schema |
|---|---|
| computer_screenshot | OpenAI.ComputerToolCallOutputItemOutputComputerScreenshot |
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| type | OpenAI.ComputerToolCallOutputItemOutputType | A computer screenshot image used with the computer use tool. | Yes |
OpenAI.ComputerToolCallOutputItemOutputComputerScreenshot
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| file_id | string | | No | |
| image_url | string | | No | |
| type | enum | Possible values: computer_screenshot | Yes | |
OpenAI.ComputerToolCallOutputItemOutputType
A computer screenshot image used with the computer use tool.
| Property | Value |
|---|---|
| Description | A computer screenshot image used with the computer use tool. |
| Type | string |
| Values | computer_screenshot |
OpenAI.ComputerToolCallOutputItemParam
The output of a computer tool call.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| acknowledged_safety_checks | array | The safety checks reported by the API that have been acknowledged by the developer. | No | |
| call_id | string | The ID of the computer tool call that produced the output. | Yes | |
| output | OpenAI.ComputerToolCallOutputItemOutput | | Yes | |
| type | enum | Possible values: computer_call_output | Yes | |
OpenAI.ComputerToolCallOutputItemResource
The output of a computer tool call.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| acknowledged_safety_checks | array | The safety checks reported by the API that have been acknowledged by the developer. | No | |
| call_id | string | The ID of the computer tool call that produced the output. | Yes | |
| output | OpenAI.ComputerToolCallOutputItemOutput | | Yes | |
| status | enum | The status of the item. One of in_progress, completed, or incomplete. Populated when items are returned via API. Possible values: in_progress, completed, incomplete | Yes | |
| type | enum | Possible values: computer_call_output | Yes | |
OpenAI.ComputerToolCallSafetyCheck
A pending safety check for the computer call.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| code | string | The type of the pending safety check. | Yes | |
| id | string | The ID of the pending safety check. | Yes | |
| message | string | Details about the pending safety check. | Yes |
OpenAI.ComputerUsePreviewTool
A tool that controls a virtual computer.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| display_height | integer | The height of the computer display. | Yes | |
| display_width | integer | The width of the computer display. | Yes | |
| environment | enum | The type of computer environment to control. Possible values: windows, mac, linux, ubuntu, browser | Yes | |
| type | enum | The type of the computer use tool. Always computer_use_preview. Possible values: computer_use_preview | Yes | |
OpenAI.Coordinate
An x/y coordinate pair, e.g. { x: 100, y: 200 }.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| x | integer | The x-coordinate. | Yes | |
| y | integer | The y-coordinate. | Yes |
OpenAI.CreateEmbeddingResponse
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| data | array | The list of embeddings generated by the model. | Yes | |
| model | string | The name of the model used to generate the embedding. | Yes | |
| object | enum | The object type, which is always "list". Possible values: list | Yes | |
| usage | object | The usage information for the request. | Yes | |
| └─ prompt_tokens | integer | The number of tokens used by the prompt. | No | |
| └─ total_tokens | integer | The total number of tokens used by the request. | No |
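As a sketch of how a client might read a CreateEmbeddingResponse, the snippet below walks a sample response body (the model name and embedding values are illustrative placeholders; only the field names follow the schema above):

```python
# Sample CreateEmbeddingResponse body; values are illustrative only.
response = {
    "object": "list",
    "model": "text-embedding-3-small",  # placeholder model name
    "data": [
        {"object": "embedding", "index": 0, "embedding": [0.01, -0.02, 0.03]},
    ],
    "usage": {"prompt_tokens": 5, "total_tokens": 5},
}

# Collect the vectors in input order using each item's index field.
vectors = [item["embedding"]
           for item in sorted(response["data"], key=lambda d: d["index"])]
```

Sorting by index is defensive: it preserves input order even if items arrive unordered.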
OpenAI.CreateEvalItem
A chat message that makes up the prompt or context. May include variable references to the item namespace, i.e. {{item.name}}.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| content | string or OpenAI.EvalItemContent | Text inputs to the model - can contain template strings. | Yes | |
| role | enum | The role of the message input. One of user, assistant, system, or developer. Possible values: user, assistant, system, developer | Yes | |
| type | enum | The type of the message input. Always message. Possible values: message | No | |
OpenAI.CreateEvalRunRequest
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| data_source | object | | Yes | |
| └─ type | OpenAI.EvalRunDataSourceType | | No | |
| metadata | object | Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard. Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters. | No | |
| name | string | The name of the run. | No | |
OpenAI.CreateFineTuningJobRequest
Valid models:
babbage-002
davinci-002
gpt-3.5-turbo
gpt-4o-mini
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| hyperparameters | object | The hyperparameters used for the fine-tuning job. This value is now deprecated in favor of method, and should be passed in under the method parameter. | No | |
| └─ batch_size | enum | Possible values: auto | No | |
| └─ learning_rate_multiplier | enum | Possible values: auto | No | |
| └─ n_epochs | enum | Possible values: auto | No | |
| integrations | array | A list of integrations to enable for your fine-tuning job. | No | |
| metadata | object | Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard. Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters. | No | |
| method | OpenAI.FineTuneMethod | The method used for fine-tuning. | No | |
| model | string (see valid models below) | The name of the model to fine-tune. You can select one of the supported models. | Yes | |
| seed | integer | The seed controls the reproducibility of the job. Passing in the same seed and job parameters should produce the same results, but may differ in rare cases. If a seed is not specified, one will be generated for you. | No | |
| suffix | string | A string of up to 64 characters that will be added to your fine-tuned model name. For example, a suffix of "custom-model-name" would produce a model name like ft:gpt-4o-mini:openai:custom-model-name:7p4lURel. | No | None |
| training_file | string | The ID of an uploaded file that contains training data. See upload file for how to upload a file. Your dataset must be formatted as a JSONL file. Additionally, you must upload your file with the purpose fine-tune. The contents of the file should differ depending on whether the model uses the chat format, or whether the fine-tuning method uses the preference format. See the fine-tuning guide for more details. | Yes | |
| validation_file | string | The ID of an uploaded file that contains validation data. If you provide this file, the data is used to generate validation metrics periodically during fine-tuning. These metrics can be viewed in the fine-tuning results file. The same data should not be present in both train and validation files. Your dataset must be formatted as a JSONL file. You must upload your file with the purpose fine-tune. See the fine-tuning guide for more details. | No | |
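As an illustrative sketch, a minimal CreateFineTuningJobRequest body can be built as a dictionary like the one below. The file ID and suffix are placeholders, and the method value assumes a supervised fine-tuning method; check the FineTuneMethod schema for the exact shape your job needs.

```python
# Minimal fine-tuning job request; training_file must reference a file
# already uploaded with purpose "fine-tune". Values are placeholders.
job_request = {
    "model": "gpt-4o-mini",
    "training_file": "file-abc123",     # placeholder uploaded-file ID
    "suffix": "custom-model-name",      # up to 64 characters
    "seed": 42,                         # optional, for reproducibility
    # Preferred over the deprecated top-level hyperparameters field.
    "method": {"type": "supervised"},
}
```

Hyperparameters, when needed, would go under the method object rather than at the top level.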
OpenAI.CreateFineTuningJobRequestIntegration
Discriminator for OpenAI.CreateFineTuningJobRequestIntegration
This component uses the property type to discriminate between different types:
| Type Value | Schema |
|---|---|
| wandb | OpenAI.CreateFineTuningJobRequestWandbIntegration |
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| type | string | | Yes | |
OpenAI.CreateFineTuningJobRequestWandbIntegration
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| type | enum | Possible values: wandb | Yes | |
| wandb | object | | Yes | |
| └─ entity | string | | No | |
| └─ name | string | | No | |
| └─ project | string | | No | |
| └─ tags | array | | No | |
OpenAI.CreateVectorStoreFileBatchRequest
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| attributes | object | Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard. Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters, booleans, or numbers. | No | |
| chunking_strategy | OpenAI.ChunkingStrategyRequestParam | The chunking strategy used to chunk the file(s). If not set, will use the auto strategy. | No | |
| file_ids | array | A list of File IDs that the vector store should use. Useful for tools like file_search that can access files. | Yes | |
OpenAI.CreateVectorStoreFileRequest
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| attributes | object | Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard. Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters, booleans, or numbers. | No | |
| chunking_strategy | OpenAI.ChunkingStrategyRequestParam | The chunking strategy used to chunk the file(s). If not set, will use the auto strategy. | No | |
| file_id | string | A File ID that the vector store should use. Useful for tools like file_search that can access files. | Yes | |
OpenAI.CreateVectorStoreRequest
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| chunking_strategy | object | The default strategy. This strategy currently uses a max_chunk_size_tokens of 800 and chunk_overlap_tokens of 400. | No | |
| └─ static | OpenAI.StaticChunkingStrategy | | No | |
| └─ type | enum | Always static. Possible values: static | No | |
| expires_after | OpenAI.VectorStoreExpirationAfter | The expiration policy for a vector store. | No | |
| file_ids | array | A list of File IDs that the vector store should use. Useful for tools like file_search that can access files. | No | |
| metadata | object | Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard. Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters. | No | |
| name | string | The name of the vector store. | No |
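As a sketch, a CreateVectorStoreRequest body with an explicit static chunking strategy can look like the dictionary below. The store name and file IDs are placeholders; the chunk sizes shown are the documented defaults of the auto strategy, and the expires_after shape assumes the VectorStoreExpirationAfter schema.

```python
# Vector store creation request; file IDs are placeholders.
vector_store_request = {
    "name": "support-docs",
    "file_ids": ["file-abc123", "file-def456"],
    "chunking_strategy": {
        "type": "static",
        "static": {
            "max_chunk_size_tokens": 800,   # documented default
            "chunk_overlap_tokens": 400,    # documented default
        },
    },
    # Assumed expiration policy shape; expire 7 days after last activity.
    "expires_after": {"anchor": "last_active_at", "days": 7},
}
```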
OpenAI.DeleteFileResponse
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| deleted | boolean | | Yes | |
| id | string | | Yes | |
| object | enum | Possible values: file | Yes | |
OpenAI.DeleteVectorStoreFileResponse
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| deleted | boolean | | Yes | |
| id | string | | Yes | |
| object | enum | Possible values: vector_store.file.deleted | Yes | |
OpenAI.DeleteVectorStoreResponse
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| deleted | boolean | | Yes | |
| id | string | | Yes | |
| object | enum | Possible values: vector_store.deleted | Yes | |
OpenAI.Embedding
Represents an embedding vector returned by the embeddings endpoint.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| embedding | array or string | | Yes | |
| index | integer | The index of the embedding in the list of embeddings. | Yes | |
| object | enum | The object type, which is always "embedding". Possible values: embedding | Yes | |
OpenAI.Eval
An Eval object with a data source config and testing criteria. An Eval represents a task to be done for your LLM integration. Like:
- Improve the quality of my chatbot
- See how well my chatbot handles customer support
- Check if o4-mini is better at my use case than gpt-4o
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| created_at | integer | The Unix timestamp (in seconds) for when the eval was created. | Yes | |
| data_source_config | object | | Yes | |
| └─ type | OpenAI.EvalDataSourceConfigType | | No | |
| id | string | Unique identifier for the evaluation. | Yes | |
| metadata | object | Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard. Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters. | Yes | |
| name | string | The name of the evaluation. | Yes | |
| object | enum | The object type. Possible values: eval | Yes | |
| testing_criteria | array | A list of testing criteria. | Yes | None |
OpenAI.EvalApiError
An object representing an error response from the Eval API.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| code | string | The error code. | Yes | |
| message | string | The error message. | Yes |
OpenAI.EvalCompletionsRunDataSourceParams
A CompletionsRunDataSource object describing a model sampling configuration.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| input_messages | object | | No | |
| └─ item_reference | string | A reference to a variable in the item namespace, i.e. "item.input_trajectory". | No | |
| └─ template | array | A list of chat messages forming the prompt or context. May include variable references to the item namespace, i.e. {{item.name}}. | No | |
| └─ type | enum | The type of input messages. Always item_reference. Possible values: item_reference | No | |
| model | string | The name of the model to use for generating completions (e.g. "o3-mini"). | No | |
| sampling_params | AzureEvalAPICompletionsSamplingParams | | No | |
| source | object | | Yes | |
| └─ content | array | The content of the jsonl file. | No | |
| └─ created_after | integer | An optional Unix timestamp to filter items created after this time. | No | |
| └─ created_before | integer | An optional Unix timestamp to filter items created before this time. | No | |
| └─ id | string | The identifier of the file. | No | |
| └─ limit | integer | An optional maximum number of items to return. | No | |
| └─ metadata | object | Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard. Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters. | No | |
| └─ model | string | An optional model to filter by (e.g., 'gpt-4o'). | No | |
| └─ type | enum | The type of source. Always stored_completions. Possible values: stored_completions | No | |
| type | enum | The type of run data source. Always completions. Possible values: completions | Yes | |
OpenAI.EvalCustomDataSourceConfigParams
A CustomDataSourceConfig object that defines the schema for the data source used for the evaluation runs. This schema is used to define the shape of the data that will be:
- Used to define your testing criteria and
- What data is required when creating a run
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| include_sample_schema | boolean | Whether the eval should expect you to populate the sample namespace (i.e., by generating responses off of your data source). | No | False |
| item_schema | object | The JSON schema for each row in the data source. | Yes | |
| type | enum | The type of data source. Always custom. Possible values: custom | Yes | |
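As a sketch, an EvalCustomDataSourceConfigParams payload whose item_schema describes rows with a question and an expected answer might look like this (the property names inside item_schema are examples, not required by the API):

```python
# Custom data source config; each data row must carry a question and
# an expected_answer string (example property names).
data_source_config = {
    "type": "custom",
    "include_sample_schema": True,  # expect the sample namespace to be populated
    "item_schema": {
        "type": "object",
        "properties": {
            "question": {"type": "string"},
            "expected_answer": {"type": "string"},
        },
        "required": ["question", "expected_answer"],
    },
}
```

Testing criteria can then reference these rows as {{item.question}} and {{item.expected_answer}}.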
OpenAI.EvalCustomDataSourceConfigResource
A CustomDataSourceConfig which specifies the schema of your item and optionally sample namespaces.
The response schema defines the shape of the data that will be:
- Used to define your testing criteria and
- What data is required when creating a run
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| schema | object | The JSON schema for the run data source items. Learn how to build JSON schemas here. | Yes | |
| type | enum | The type of data source. Always custom. Possible values: custom | Yes | |
OpenAI.EvalDataSourceConfigParams
Discriminator for OpenAI.EvalDataSourceConfigParams
This component uses the property type to discriminate between different types:
| Type Value | Schema |
|---|---|
| custom | OpenAI.EvalCustomDataSourceConfigParams |
| logs | OpenAI.EvalLogsDataSourceConfigParams |
| stored_completions | OpenAI.EvalStoredCompletionsDataSourceConfigParams |
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| type | OpenAI.EvalDataSourceConfigType | | Yes | |
OpenAI.EvalDataSourceConfigResource
Discriminator for OpenAI.EvalDataSourceConfigResource
This component uses the property type to discriminate between different types:
| Type Value | Schema |
|---|---|
| custom | OpenAI.EvalCustomDataSourceConfigResource |
| stored_completions | OpenAI.EvalStoredCompletionsDataSourceConfigResource |
| logs | OpenAI.EvalLogsDataSourceConfigResource |
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| type | OpenAI.EvalDataSourceConfigType | | Yes | |
OpenAI.EvalDataSourceConfigType
| Property | Value |
|---|---|
| Type | string |
| Values | custom, logs, stored_completions |
OpenAI.EvalGraderLabelModelParams
A LabelModelGrader object which uses a model to assign labels to each item in the evaluation.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| input | array | A list of chat messages forming the prompt or context. May include variable references to the item namespace, i.e. {{item.name}}. | Yes | |
| labels | array | The labels to classify to each item in the evaluation. | Yes | |
| model | string | The model to use for the evaluation. Must support structured outputs. | Yes | |
| name | string | The name of the grader. | Yes | |
| passing_labels | array | The labels that indicate a passing result. Must be a subset of labels. | Yes | |
| type | enum | The object type, which is always label_model. Possible values: label_model | Yes | |
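As a sketch, an EvalGraderLabelModelParams payload that classifies each item as correct or incorrect could look like the dictionary below. The grader name, model name, and prompt template are illustrative; only the field names and the subset constraint on passing_labels come from the schema above.

```python
# Label-model grader: a model assigns one of the labels to each item.
grader = {
    "type": "label_model",
    "name": "answer-checker",           # example grader name
    "model": "gpt-4o",                  # must support structured outputs
    "input": [
        {"role": "developer",
         "content": "Classify the answer as correct or incorrect."},
        {"role": "user",
         "content": "Question: {{item.question}}\nAnswer: {{sample.output_text}}"},
    ],
    "labels": ["correct", "incorrect"],
    "passing_labels": ["correct"],      # must be a subset of labels
}
```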
OpenAI.EvalGraderLabelModelResource
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| input | array | | Yes | |
| labels | array | The labels to assign to each item in the evaluation. | Yes | |
| model | string | The model to use for the evaluation. Must support structured outputs. | Yes | |
| name | string | The name of the grader. | Yes | |
| passing_labels | array | The labels that indicate a passing result. Must be a subset of labels. | Yes | |
| type | enum | The object type, which is always label_model. Possible values: label_model | Yes | |
OpenAI.EvalGraderParams
Discriminator for OpenAI.EvalGraderParams
This component uses the property type to discriminate between different types:
| Type Value | Schema |
|---|---|
| label_model | OpenAI.EvalGraderLabelModelParams |
| string_check | OpenAI.EvalGraderStringCheckParams |
| text_similarity | OpenAI.EvalGraderTextSimilarityParams |
| python | OpenAI.EvalGraderPythonParams |
| score_model | OpenAI.EvalGraderScoreModelParams |
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| type | OpenAI.GraderType | | Yes | |
OpenAI.EvalGraderPythonParams
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| image_tag | string | The image tag to use for the python script. | No | |
| name | string | The name of the grader. | Yes | |
| pass_threshold | number | The threshold for the score. | No | |
| source | string | The source code of the python script. | Yes | |
| type | enum | The object type, which is always python. Possible values: python | Yes | |
OpenAI.EvalGraderPythonResource
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| image_tag | string | The image tag to use for the python script. | No | |
| name | string | The name of the grader. | Yes | |
| pass_threshold | number | The threshold for the score. | No | |
| source | string | The source code of the python script. | Yes | |
| type | enum | The object type, which is always python. Possible values: python | Yes | |
OpenAI.EvalGraderResource
Discriminator for OpenAI.EvalGraderResource
This component uses the property type to discriminate between different types:
| Type Value | Schema |
|---|---|
| label_model | OpenAI.EvalGraderLabelModelResource |
| text_similarity | OpenAI.EvalGraderTextSimilarityResource |
| python | OpenAI.EvalGraderPythonResource |
| score_model | OpenAI.EvalGraderScoreModelResource |
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| type | OpenAI.GraderType | | Yes | |
OpenAI.EvalGraderScoreModelParams
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| input | array | The input text. This may include template strings. | Yes | |
| model | string | The model to use for the evaluation. | Yes | |
| name | string | The name of the grader. | Yes | |
| pass_threshold | number | The threshold for the score. | No | |
| range | array | The range of the score. Defaults to [0, 1]. | No | |
| sampling_params | | The sampling parameters for the model. | No | |
| type | enum | The object type, which is always score_model. Possible values: score_model | Yes | |
OpenAI.EvalGraderScoreModelResource
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| input | array | The input text. This may include template strings. | Yes | |
| model | string | The model to use for the evaluation. | Yes | |
| name | string | The name of the grader. | Yes | |
| pass_threshold | number | The threshold for the score. | No | |
| range | array | The range of the score. Defaults to [0, 1]. | No | |
| sampling_params | | The sampling parameters for the model. | No | |
| type | enum | The object type, which is always score_model. Possible values: score_model | Yes | |
OpenAI.EvalGraderStringCheckParams
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| input | string | The input text. This may include template strings. | Yes | |
| name | string | The name of the grader. | Yes | |
| operation | enum | The string check operation to perform. One of eq, ne, like, or ilike. Possible values: eq, ne, like, ilike | Yes | |
| reference | string | The reference text. This may include template strings. | Yes | |
| type | enum | The object type, which is always string_check. Possible values: string_check | Yes | |
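As a sketch, an EvalGraderStringCheckParams payload performing an exact-match check could look like this. The template strings referencing the item and sample namespaces are examples; the field names and operation values come from the schema above.

```python
# String-check grader: compares the model output against a reference.
string_check = {
    "type": "string_check",
    "name": "exact-match",                    # example grader name
    "operation": "eq",                        # one of eq, ne, like, ilike
    "input": "{{sample.output_text}}",        # text being checked
    "reference": "{{item.expected_answer}}",  # text checked against
}
```

The like and ilike operations perform pattern matching (ilike case-insensitively), while eq and ne are exact equality checks.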
OpenAI.EvalGraderTextSimilarityParams
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| evaluation_metric | enum | The evaluation metric to use. One of fuzzy_match, bleu, gleu, meteor, rouge_1, rouge_2, rouge_3, rouge_4, rouge_5, or rouge_l. Possible values: fuzzy_match, bleu, gleu, meteor, rouge_1, rouge_2, rouge_3, rouge_4, rouge_5, rouge_l | Yes | |
| input | string | The text being graded. | Yes | |
| name | string | The name of the grader. | Yes | |
| pass_threshold | number | The threshold for the score. | Yes | |
| reference | string | The text being graded against. | Yes | |
| type | enum | The type of grader. Possible values: text_similarity | Yes | |
OpenAI.EvalGraderTextSimilarityResource
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| evaluation_metric | enum | The evaluation metric to use. One of fuzzy_match, bleu, gleu, meteor, rouge_1, rouge_2, rouge_3, rouge_4, rouge_5, or rouge_l. Possible values: fuzzy_match, bleu, gleu, meteor, rouge_1, rouge_2, rouge_3, rouge_4, rouge_5, rouge_l | Yes | |
| input | string | The text being graded. | Yes | |
| name | string | The name of the grader. | Yes | |
| pass_threshold | number | The threshold for the score. | Yes | |
| reference | string | The text being graded against. | Yes | |
| type | enum | The type of grader. Possible values: text_similarity | Yes | |
OpenAI.EvalItem
A message input to the model with a role indicating instruction following
hierarchy. Instructions given with the developer or system role take
precedence over instructions given with the user role. Messages with the
assistant role are presumed to have been generated by the model in previous
interactions.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| content | object | | Yes | |
| └─ type | OpenAI.EvalItemContentType | | No | |
| role | enum | The role of the message input. One of user, assistant, system, or developer. Possible values: user, assistant, system, developer | Yes | |
| type | enum | The type of the message input. Always message. Possible values: message | No | |
OpenAI.EvalItemContent
Discriminator for OpenAI.EvalItemContent
This component uses the property type to discriminate between different types:
| Type Value | Schema |
|---|---|
| input_text | OpenAI.EvalItemContentInputText |
| output_text | OpenAI.EvalItemContentOutputText |
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| type | OpenAI.EvalItemContentType | | Yes | |
OpenAI.EvalItemContentInputText
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| text | string | | Yes | |
| type | enum | Possible values: input_text | Yes | |
OpenAI.EvalItemContentOutputText
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| text | string | | Yes | |
| type | enum | Possible values: output_text | Yes | |
OpenAI.EvalItemContentType
| Property | Value |
|---|---|
| Type | string |
| Values | input_text, output_text |
OpenAI.EvalJsonlRunDataSourceParams
A JsonlRunDataSource object that specifies a JSONL file matching the eval.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| source | object | | Yes | |
| └─ content | array | The content of the jsonl file. | No | |
| └─ id | string | The identifier of the file. | No | |
| └─ type | enum | The type of jsonl source. Always file_id. Possible values: file_id | No | |
| type | enum | The type of data source. Always jsonl. Possible values: jsonl | Yes | |
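As a sketch, an EvalJsonlRunDataSourceParams payload referencing an uploaded JSONL file by ID could look like this (the file ID is a placeholder; an alternative per the schema is inline content instead of a file ID):

```python
# JSONL run data source pointing at a previously uploaded file.
run_data_source = {
    "type": "jsonl",
    "source": {
        "type": "file_id",
        "id": "file-abc123",  # placeholder uploaded-file ID
    },
}
```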
OpenAI.EvalList
An object representing a list of evals.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| data | array | An array of eval objects. | Yes | |
| first_id | string | The identifier of the first eval in the data array. | Yes | |
| has_more | boolean | Indicates whether there are more evals available. | Yes | |
| last_id | string | The identifier of the last eval in the data array. | Yes | |
| object | enum | The type of this object. It is always set to "list". Possible values: list | Yes | |
OpenAI.EvalLogsDataSourceConfigParams
A data source config which specifies the metadata property of your logs query.
This is usually metadata like usecase=chatbot or prompt-version=v2, etc.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| metadata | object | Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard. Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters. | No | |
| type | enum | The type of data source. Always logs. Possible values: logs | Yes | |
OpenAI.EvalLogsDataSourceConfigResource
A LogsDataSourceConfig which specifies the metadata property of your logs query.
This is usually metadata like usecase=chatbot or prompt-version=v2, etc.
The schema returned by this data source config is used to define what variables are available in your evals.
item and sample are both defined when using this data source config.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| metadata | object | Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard. Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters. | Yes | |
| schema | object | The JSON schema for the run data source items. Learn how to build JSON schemas here. | Yes | |
| type | enum | The type of data source. Always logs. Possible values: logs | Yes | |
OpenAI.EvalResponsesRunDataSourceParams
A ResponsesRunDataSource object describing a model sampling configuration.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| input_messages | object | No | ||
| └─ item_reference | string | A reference to a variable in the item namespace. Ie, "item.name" |
No | |
| └─ template | array | A list of chat messages forming the prompt or context. May include variable references to the item namespace, ie {{item.name}}. |
No | |
| └─ type | enum | The type of input messages. Always item_reference.Possible values: item_reference |
No | |
| model | string | The name of the model to use for generating completions (e.g. "o3-mini"). | No | |
| sampling_params | AzureEvalAPIResponseSamplingParams | No | ||
| source | object | Yes | ||
| └─ content | array | The content of the jsonl file. | No | |
| └─ created_after | integer | Only include items created after this timestamp (inclusive). This is a query parameter used to select responses. | No | |
| └─ created_before | integer | Only include items created before this timestamp (inclusive). This is a query parameter used to select responses. | No | |
| └─ id | string | The identifier of the file. | No | |
| └─ instructions_search | string | Optional string to search the 'instructions' field. This is a query parameter used to select responses. | No | |
| └─ metadata | object | Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard. Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters. |
No | |
| └─ model | string | The name of the model to find responses for. This is a query parameter used to select responses. | No | |
| └─ reasoning_effort | OpenAI.ReasoningEffort | Optional reasoning effort parameter. This is a query parameter used to select responses. | No | |
| └─ temperature | number | Sampling temperature. This is a query parameter used to select responses. | No | |
| └─ tools | array | List of tool names. This is a query parameter used to select responses. | No | |
| └─ top_p | number | Nucleus sampling parameter. This is a query parameter used to select responses. | No | |
| └─ type | enum | The type of run data source. Always responses. Possible values: responses | No | |
| └─ users | array | List of user identifiers. This is a query parameter used to select responses. | No | |
| type | enum | The type of run data source. Always responses. Possible values: responses | Yes | |
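As an illustrative sketch (not an official sample), a request body matching this schema could be assembled as a Python dict; the model name, timestamp, and item field below are hypothetical:

```python
# Hypothetical EvalResponsesRunDataSourceParams payload built from the table above.
responses_run_data_source = {
    "type": "responses",              # the run data source type is always "responses"
    "model": "o3-mini",               # model used to generate completions
    "source": {
        "type": "responses",          # selects previously stored responses
        "model": "o3-mini",           # query filter: only responses produced by this model
        "created_after": 1730000000,  # Unix seconds, inclusive
    },
    "input_messages": {
        "type": "item_reference",
        "item_reference": "item.name",  # reference into the item namespace
    },
}
```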
OpenAI.EvalRun
A schema representing an evaluation run.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| created_at | integer | Unix timestamp (in seconds) when the evaluation run was created. | Yes | |
| data_source | object | Yes | ||
| └─ type | OpenAI.EvalRunDataSourceType | No | ||
| error | OpenAI.EvalApiError | An object representing an error response from the Eval API. | Yes | |
| eval_id | string | The identifier of the associated evaluation. | Yes | |
| id | string | Unique identifier for the evaluation run. | Yes | |
| metadata | object | Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard. Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters. | Yes | |
| model | string | The model that is evaluated, if applicable. | Yes | |
| name | string | The name of the evaluation run. | Yes | |
| object | enum | The type of the object. Always "eval.run". Possible values: eval.run | Yes | |
| per_model_usage | array | Usage statistics for each model during the evaluation run. | Yes | |
| per_testing_criteria_results | array | Results per testing criteria applied during the evaluation run. | Yes | |
| report_url | string | The URL to the rendered evaluation run report on the UI dashboard. | Yes | |
| result_counts | object | Counters summarizing the outcomes of the evaluation run. | Yes | |
| └─ errored | integer | Number of output items that resulted in an error. | No | |
| └─ failed | integer | Number of output items that failed to pass the evaluation. | No | |
| └─ passed | integer | Number of output items that passed the evaluation. | No | |
| └─ total | integer | Total number of executed output items. | No | |
| status | string | The status of the evaluation run. | Yes |
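Given the `result_counts` shape above, a caller could summarize a finished run as follows; the counts are made up for illustration:

```python
# Hypothetical result_counts object from an OpenAI.EvalRun, per the table above.
result_counts = {"total": 50, "passed": 42, "failed": 6, "errored": 2}

def pass_rate(counts: dict) -> float:
    """Fraction of executed output items that passed the evaluation."""
    return counts["passed"] / counts["total"] if counts["total"] else 0.0

# Sanity check: the outcome buckets should cover every executed item.
assert (result_counts["passed"] + result_counts["failed"]
        + result_counts["errored"]) == result_counts["total"]
```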
OpenAI.EvalRunDataContentSource
Discriminator for OpenAI.EvalRunDataContentSource
This component uses the property type to discriminate between different types:
| Type Value | Schema |
|---|---|
| file_id | OpenAI.EvalRunFileIdDataContentSource |
| stored_completions | OpenAI.EvalRunStoredCompletionsDataContentSource |
| responses | OpenAI.EvalRunResponsesDataContentSource |
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| type | OpenAI.EvalRunDataContentSourceType | Yes |
OpenAI.EvalRunDataContentSourceType
| Property | Value |
|---|---|
| Type | string |
| Values | file_id, file_content, stored_completions, responses |
OpenAI.EvalRunDataSourceCompletionsResource
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| type | enum | Possible values: completions | Yes | |
OpenAI.EvalRunDataSourceJsonlResource
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| type | enum | Possible values: jsonl | Yes | |
OpenAI.EvalRunDataSourceParams
Discriminator for OpenAI.EvalRunDataSourceParams
This component uses the property type to discriminate between different types:
| Type Value | Schema |
|---|---|
| jsonl | OpenAI.EvalJsonlRunDataSourceParams |
| completions | OpenAI.EvalCompletionsRunDataSourceParams |
| responses | OpenAI.EvalResponsesRunDataSourceParams |
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| type | OpenAI.EvalRunDataSourceType | Yes |
OpenAI.EvalRunDataSourceResource
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| type | OpenAI.EvalRunDataSourceType | Yes |
OpenAI.EvalRunDataSourceResponsesResource
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| type | enum | Possible values: responses | Yes | |
OpenAI.EvalRunDataSourceType
| Property | Value |
|---|---|
| Type | string |
| Values | jsonl, completions, responses |
OpenAI.EvalRunFileContentDataContentSource
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| content | array | The content of the jsonl file. | Yes | |
| type | enum | The type of jsonl source. Always file_content. Possible values: file_content | Yes | |
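A minimal sketch of a `file_content` source, with made-up JSONL rows wrapped in `item` objects (the exact row shape depends on the eval's data source config):

```python
# Hypothetical OpenAI.EvalRunFileContentDataContentSource payload.
file_content_source = {
    "type": "file_content",   # the type of jsonl source is always "file_content"
    "content": [
        {"item": {"question": "What is 2 + 2?", "answer": "4"}},
        {"item": {"question": "Capital of France?", "answer": "Paris"}},
    ],
}
```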
OpenAI.EvalRunFileIdDataContentSource
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| id | string | The identifier of the file. | Yes | |
| type | enum | The type of jsonl source. Always file_id. Possible values: file_id | Yes | |
OpenAI.EvalRunList
An object representing a list of runs for an evaluation.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| data | array | An array of eval run objects. | Yes | |
| first_id | string | The identifier of the first eval run in the data array. | Yes | |
| has_more | boolean | Indicates whether there are more evals available. | Yes | |
| last_id | string | The identifier of the last eval run in the data array. | Yes | |
| object | enum | The type of this object. It is always set to "list". Possible values: list | Yes | |
OpenAI.EvalRunOutputItem
A schema representing an evaluation run output item.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| created_at | integer | Unix timestamp (in seconds) when the evaluation run was created. | Yes | |
| datasource_item | object | Details of the input data source item. | Yes | |
| datasource_item_id | integer | The identifier for the data source item. | Yes | |
| eval_id | string | The identifier of the evaluation group. | Yes | |
| id | string | Unique identifier for the evaluation run output item. | Yes | |
| object | enum | The type of the object. Always "eval.run.output_item". Possible values: eval.run.output_item | Yes | |
| results | array | A list of results from the evaluation run. | Yes | |
| run_id | string | The identifier of the evaluation run associated with this output item. | Yes | |
| sample | object | A sample containing the input and output of the evaluation run. | Yes | |
| └─ error | OpenAI.EvalApiError | An object representing an error response from the Eval API. | No | |
| └─ finish_reason | string | The reason why the sample generation was finished. | No | |
| └─ input | array | An array of input messages. | No | |
| └─ max_completion_tokens | integer | The maximum number of tokens allowed for completion. | No | |
| └─ model | string | The model used for generating the sample. | No | |
| └─ output | array | An array of output messages. | No | |
| └─ seed | integer | The seed used for generating the sample. | No | |
| └─ temperature | number | The sampling temperature used. | No | |
| └─ top_p | number | The top_p value used for sampling. | No | |
| └─ usage | object | Token usage details for the sample. | No | |
| └─ cached_tokens | integer | The number of tokens retrieved from cache. | No | |
| └─ completion_tokens | integer | The number of completion tokens generated. | No | |
| └─ prompt_tokens | integer | The number of prompt tokens used. | No | |
| └─ total_tokens | integer | The total number of tokens used. | No | |
| status | string | The status of the evaluation run. | Yes |
OpenAI.EvalRunOutputItemList
An object representing a list of output items for an evaluation run.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| data | array | An array of eval run output item objects. | Yes | |
| first_id | string | The identifier of the first eval run output item in the data array. | Yes | |
| has_more | boolean | Indicates whether there are more eval run output items available. | Yes | |
| last_id | string | The identifier of the last eval run output item in the data array. | Yes | |
| object | enum | The type of this object. It is always set to "list". Possible values: list | Yes | |
OpenAI.EvalRunResponsesDataContentSource
An EvalResponsesSource object describing a run data source configuration.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| created_after | integer | Only include items created after this timestamp (inclusive). This is a query parameter used to select responses. | No | |
| created_before | integer | Only include items created before this timestamp (inclusive). This is a query parameter used to select responses. | No | |
| instructions_search | string | Optional string to search the 'instructions' field. This is a query parameter used to select responses. | No | |
| metadata | object | Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard. Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters. | No | |
| model | string | The name of the model to find responses for. This is a query parameter used to select responses. | No | |
| reasoning_effort | object | Constrains effort on reasoning (reasoning models only). Currently supported values are low, medium, and high. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response. | No | |
| temperature | number | Sampling temperature. This is a query parameter used to select responses. | No | |
| tools | array | List of tool names. This is a query parameter used to select responses. | No | |
| top_p | number | Nucleus sampling parameter. This is a query parameter used to select responses. | No | |
| type | enum | The type of run data source. Always responses. Possible values: responses | Yes | |
| users | array | List of user identifiers. This is a query parameter used to select responses. | No |
OpenAI.EvalRunStoredCompletionsDataContentSource
A StoredCompletionsRunDataSource configuration describing a set of filters.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| created_after | integer | An optional Unix timestamp to filter items created after this time. | No | |
| created_before | integer | An optional Unix timestamp to filter items created before this time. | No | |
| limit | integer | An optional maximum number of items to return. | No | |
| metadata | object | Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard. Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters. | Yes | |
| model | string | An optional model to filter by (e.g., 'gpt-4o'). | No | |
| type | enum | The type of source. Always stored_completions. Possible values: stored_completions | Yes | |
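A sketch of a `stored_completions` source; all filter values below are hypothetical:

```python
# Hypothetical OpenAI.EvalRunStoredCompletionsDataContentSource payload.
stored_completions_source = {
    "type": "stored_completions",            # always "stored_completions"
    "model": "gpt-4o",                       # optional model filter
    "created_after": 1717000000,             # optional Unix timestamp filter
    "limit": 100,                            # optional maximum number of items
    "metadata": {"usecase": "support-bot"},  # metadata filter; key/value are examples
}
```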
OpenAI.EvalStoredCompletionsDataSourceConfigParams
Deprecated in favor of LogsDataSourceConfig.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| metadata | object | Metadata filters for the stored completions data source. | No | |
| type | enum | The type of data source. Always stored_completions. Possible values: stored_completions | Yes | |
OpenAI.EvalStoredCompletionsDataSourceConfigResource
Deprecated in favor of LogsDataSourceConfig.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| metadata | object | Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard. Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters. | Yes | |
| schema | object | The JSON schema for the run data source items. Learn how to build JSON schemas here. | Yes | |
| type | enum | The type of data source. Always stored_completions. Possible values: stored_completions | Yes | |
OpenAI.FileSearchTool
A tool that searches for relevant content from uploaded files.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| filters | object | No | ||
| max_num_results | integer | The maximum number of results to return. This number should be between 1 and 50 inclusive. | No | |
| ranking_options | object | No | ||
| └─ ranker | enum | The ranker to use for the file search. Possible values: auto, default-2024-11-15 | No | |
| └─ score_threshold | number | The score threshold for the file search, a number between 0 and 1. Numbers closer to 1 will attempt to return only the most relevant results, but may return fewer results. | No | |
| type | enum | The type of the file search tool. Always file_search. Possible values: file_search | Yes | |
| vector_store_ids | array | The IDs of the vector stores to search. | Yes |
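A minimal sketch of a file search tool definition, assuming a hypothetical vector store ID:

```python
# Hypothetical OpenAI.FileSearchTool definition; the vector store ID is made up.
file_search_tool = {
    "type": "file_search",
    "vector_store_ids": ["vs_abc123"],
    "max_num_results": 20,        # must be between 1 and 50 inclusive
    "ranking_options": {
        "ranker": "auto",
        "score_threshold": 0.5,   # 0-1; closer to 1 returns only highly relevant results
    },
}

assert 1 <= file_search_tool["max_num_results"] <= 50
```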
OpenAI.FileSearchToolCallItemParam
The results of a file search tool call.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| queries | array | The queries used to search for files. | Yes | |
| results | array | The results of the file search tool call. | No | |
| type | enum | Possible values: file_search_call | Yes | |
OpenAI.FileSearchToolCallItemResource
The results of a file search tool call.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| queries | array | The queries used to search for files. | Yes | |
| results | array | The results of the file search tool call. | No | |
| status | enum | The status of the file search tool call. One of in_progress, searching, completed, incomplete, or failed. Possible values: in_progress, searching, completed, incomplete, failed | Yes | |
| type | enum | Possible values: file_search_call | Yes | |
OpenAI.Filters
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| filters | array | Array of filters to combine. Items can be ComparisonFilter or CompoundFilter. | Yes | |
| key | string | The key to compare against the value. | Yes | |
| type | enum | Type of operation: and or or. Possible values: and, or | Yes | |
| value | string or number or boolean | The value to compare against the attribute key; supports string, number, or boolean types. | Yes |
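A sketch of a compound filter combining two comparison filters with `and`; the comparison operators (`eq`, `gte`, ...) come from the ComparisonFilter schema, which is documented separately, and the keys and values here are illustrative:

```python
# Hypothetical CompoundFilter joining two ComparisonFilter items.
compound_filter = {
    "type": "and",
    "filters": [
        {"key": "category", "type": "eq", "value": "news"},
        {"key": "year", "type": "gte", "value": 2024},
    ],
}
```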
OpenAI.FineTuneDPOHyperparameters
The hyperparameters used for the DPO fine-tuning job.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| batch_size | enum | Possible values: auto | No | |
| beta | enum | Possible values: auto | No | |
| learning_rate_multiplier | enum | Possible values: auto | No | |
| n_epochs | enum | Possible values: auto | No | |
OpenAI.FineTuneDPOMethod
Configuration for the DPO fine-tuning method.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| hyperparameters | OpenAI.FineTuneDPOHyperparameters | The hyperparameters used for the DPO fine-tuning job. | No |
OpenAI.FineTuneMethod
The method used for fine-tuning.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| dpo | OpenAI.FineTuneDPOMethod | Configuration for the DPO fine-tuning method. | No | |
| reinforcement | AzureFineTuneReinforcementMethod | No | ||
| supervised | OpenAI.FineTuneSupervisedMethod | Configuration for the supervised fine-tuning method. | No | |
| type | enum | The type of method. Is either supervised, dpo, or reinforcement. Possible values: supervised, dpo, reinforcement | Yes | |
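The method object nests a per-method configuration under a key matching `type`; a minimal supervised sketch (all hyperparameter values left at `auto`) might look like:

```python
# Hypothetical OpenAI.FineTuneMethod configuration for a supervised job.
method = {
    "type": "supervised",   # one of supervised, dpo, reinforcement
    "supervised": {
        "hyperparameters": {
            "batch_size": "auto",
            "learning_rate_multiplier": "auto",
            "n_epochs": "auto",
        }
    },
}
```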
OpenAI.FineTuneReinforcementHyperparameters
The hyperparameters used for the reinforcement fine-tuning job.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| batch_size | enum | Possible values: auto | No | |
| compute_multiplier | enum | Possible values: auto | No | |
| eval_interval | enum | Possible values: auto | No | |
| eval_samples | enum | Possible values: auto | No | |
| learning_rate_multiplier | enum | Possible values: auto | No | |
| n_epochs | enum | Possible values: auto | No | |
| reasoning_effort | enum | Level of reasoning effort. Possible values: default, low, medium, high | No | |
OpenAI.FineTuneSupervisedHyperparameters
The hyperparameters used for the fine-tuning job.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| batch_size | enum | Possible values: auto | No | |
| learning_rate_multiplier | enum | Possible values: auto | No | |
| n_epochs | enum | Possible values: auto | No | |
OpenAI.FineTuneSupervisedMethod
Configuration for the supervised fine-tuning method.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| hyperparameters | OpenAI.FineTuneSupervisedHyperparameters | The hyperparameters used for the fine-tuning job. | No |
OpenAI.FineTuningIntegration
Discriminator for OpenAI.FineTuningIntegration
This component uses the property type to discriminate between different types:
| Type Value | Schema |
|---|---|
| wandb | OpenAI.FineTuningIntegrationWandb |
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| type | string (see valid models below) | Yes |
OpenAI.FineTuningIntegrationWandb
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| type | enum | The type of the integration being enabled for the fine-tuning job. Possible values: wandb | Yes | |
| wandb | object | The settings for your integration with Weights and Biases. This payload specifies the project that metrics will be sent to. Optionally, you can set an explicit display name for your run, add tags to your run, and set a default entity (team, username, etc.) to be associated with your run. | Yes | |
| └─ entity | string | The entity to use for the run. This allows you to set the team or username of the WandB user that you would like associated with the run. If not set, the default entity for the registered WandB API key is used. | No | |
| └─ name | string | A display name to set for the run. If not set, we will use the Job ID as the name. | No | |
| └─ project | string | The name of the project that the new run will be created under. | No | |
| └─ tags | array | A list of tags to be attached to the newly created run. These tags are passed through directly to WandB. Some default tags are generated by OpenAI: "openai/finetune", "openai/{base-model}", "openai/{ftjob-abcdef}". | No | |
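A sketch of a Weights and Biases integration payload; the project, run name, and entity below are made up:

```python
# Hypothetical OpenAI.FineTuningIntegrationWandb payload.
wandb_integration = {
    "type": "wandb",
    "wandb": {
        "project": "my-finetune-project",  # target project for metrics
        "name": "sft-run-01",              # optional display name (defaults to the job ID)
        "entity": "my-team",               # optional WandB team or username
        "tags": ["experiment-1"],          # passed through to WandB alongside default tags
    },
}
```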
OpenAI.FineTuningJob
The fine_tuning.job object represents a fine-tuning job that has been created through the API.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| created_at | integer | The Unix timestamp (in seconds) for when the fine-tuning job was created. | Yes | |
| error | object | For fine-tuning jobs that have failed, this will contain more information on the cause of the failure. | Yes | |
| └─ code | string | A machine-readable error code. | No | |
| └─ message | string | A human-readable error message. | No | |
| └─ param | string | The parameter that was invalid, usually training_file or validation_file. This field will be null if the failure was not parameter-specific. | No | |
| estimated_finish | integer | The Unix timestamp (in seconds) for when the fine-tuning job is estimated to finish. The value will be null if the fine-tuning job is not running. | No | |
| fine_tuned_model | string | The name of the fine-tuned model that is being created. The value will be null if the fine-tuning job is still running. | Yes | |
| finished_at | integer | The Unix timestamp (in seconds) for when the fine-tuning job was finished. The value will be null if the fine-tuning job is still running. | Yes | |
| hyperparameters | object | The hyperparameters used for the fine-tuning job. This value will only be returned when running supervised jobs. | Yes | |
| └─ batch_size | enum | Possible values: auto | No | |
| └─ learning_rate_multiplier | enum | Possible values: auto | No | |
| └─ n_epochs | enum | Possible values: auto | No | |
| id | string | The object identifier, which can be referenced in the API endpoints. | Yes | |
| integrations | array | A list of integrations to enable for this fine-tuning job. | No | |
| metadata | object | Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard. Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters. | Yes | |
| method | OpenAI.FineTuneMethod | The method used for fine-tuning. | No | |
| model | string | The base model that is being fine-tuned. | Yes | |
| object | enum | The object type, which is always "fine_tuning.job". Possible values: fine_tuning.job | Yes | |
| organization_id | string | The organization that owns the fine-tuning job. | Yes | |
| result_files | array | The compiled results file ID(s) for the fine-tuning job. You can retrieve the results with the Files API. | Yes | |
| seed | integer | The seed used for the fine-tuning job. | Yes | |
| status | enum | The current status of the fine-tuning job, which can be either validating_files, queued, running, succeeded, failed, or cancelled. Possible values: validating_files, queued, running, succeeded, failed, cancelled | Yes | |
| trained_tokens | integer | The total number of billable tokens processed by this fine-tuning job. The value will be null if the fine-tuning job is still running. | Yes | |
| training_file | string | The file ID used for training. You can retrieve the training data with the Files API. | Yes | |
| user_provided_suffix | string | The descriptive suffix applied to the job, as specified in the job creation request. | No | |
| validation_file | string | The file ID used for validation. You can retrieve the validation results with the Files API. | Yes |
OpenAI.FineTuningJobCheckpoint
The fine_tuning.job.checkpoint object represents a model checkpoint for a fine-tuning job that is ready to use.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| created_at | integer | The Unix timestamp (in seconds) for when the checkpoint was created. | Yes | |
| fine_tuned_model_checkpoint | string | The name of the fine-tuned checkpoint model that is created. | Yes | |
| fine_tuning_job_id | string | The name of the fine-tuning job that this checkpoint was created from. | Yes | |
| id | string | The checkpoint identifier, which can be referenced in the API endpoints. | Yes | |
| metrics | object | Metrics at the step number during the fine-tuning job. | Yes | |
| └─ full_valid_loss | number | No | ||
| └─ full_valid_mean_token_accuracy | number | No | ||
| └─ step | number | No | ||
| └─ train_loss | number | No | ||
| └─ train_mean_token_accuracy | number | No | ||
| └─ valid_loss | number | No | ||
| └─ valid_mean_token_accuracy | number | No | ||
| object | enum | The object type, which is always "fine_tuning.job.checkpoint". Possible values: fine_tuning.job.checkpoint |
Yes | |
| step_number | integer | The step number that the checkpoint was created at. | Yes |
OpenAI.FineTuningJobEvent
Fine-tuning job event object
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| created_at | integer | The Unix timestamp (in seconds) for when the fine-tuning job was created. | Yes | |
| data | The data associated with the event. | No | ||
| id | string | The object identifier. | Yes | |
| level | enum | The log level of the event. Possible values: info, warn, error | Yes | |
| message | string | The message of the event. | Yes | |
| object | enum | The object type, which is always "fine_tuning.job.event". Possible values: fine_tuning.job.event | Yes | |
| type | enum | The type of event. Possible values: message, metrics | No | |
OpenAI.FunctionObject
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| description | string | A description of what the function does, used by the model to choose when and how to call the function. | No | |
| name | string | The name of the function to be called. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64. | Yes | |
| parameters | The parameters the function accepts, described as a JSON Schema object. See the guide for examples, and the JSON Schema reference for documentation about the format. Omitting parameters defines a function with an empty parameter list. | No | |
| strict | boolean | Whether to enable strict schema adherence when generating the function call. If set to true, the model will follow the exact schema defined in the parameters field. Only a subset of JSON Schema is supported when strict is true. Learn more about Structured Outputs in the function calling guide. | No | False |
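A minimal sketch of a function definition; the function name and parameter schema below are hypothetical:

```python
import re

# Hypothetical OpenAI.FunctionObject definition.
get_weather = {
    "name": "get_weather",
    "description": "Get the current weather for a location.",
    "strict": True,   # model output must match the schema exactly
    "parameters": {
        "type": "object",
        "properties": {"location": {"type": "string"}},
        "required": ["location"],
        "additionalProperties": False,
    },
}

# Names are limited to a-z, A-Z, 0-9, underscores, and dashes, max length 64.
assert re.fullmatch(r"[A-Za-z0-9_-]{1,64}", get_weather["name"])
```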
OpenAI.FunctionTool
Defines a function in your own code the model can choose to call. Learn more about function calling.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| description | string | A description of the function. Used by the model to determine whether or not to call the function. | No | |
| name | string | The name of the function to call. | Yes | |
| parameters | A JSON schema object describing the parameters of the function. | Yes | ||
| strict | boolean | Whether to enforce strict parameter validation. Default true. | Yes | |
| type | enum | The type of the function tool. Always function. Possible values: function | Yes | |
OpenAI.FunctionToolCallItemParam
A tool call to run a function. See the function calling guide for more information.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| arguments | string | A JSON string of the arguments to pass to the function. | Yes | |
| call_id | string | The unique ID of the function tool call generated by the model. | Yes | |
| name | string | The name of the function to run. | Yes | |
| type | enum | Possible values: function_call | Yes | |
OpenAI.FunctionToolCallItemResource
A tool call to run a function. See the function calling guide for more information.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| arguments | string | A JSON string of the arguments to pass to the function. | Yes | |
| call_id | string | The unique ID of the function tool call generated by the model. | Yes | |
| name | string | The name of the function to run. | Yes | |
| status | enum | The status of the item. One of in_progress, completed, or incomplete. Populated when items are returned via API. Possible values: in_progress, completed, incomplete | Yes | |
| type | enum | Possible values: function_call | Yes | |
OpenAI.FunctionToolCallOutputItemParam
The output of a function tool call.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| call_id | string | The unique ID of the function tool call generated by the model. | Yes | |
| output | string | A JSON string of the output of the function tool call. | Yes | |
| type | enum | Possible values: function_call_output | Yes | |
OpenAI.FunctionToolCallOutputItemResource
The output of a function tool call.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| call_id | string | The unique ID of the function tool call generated by the model. | Yes | |
| output | string | A JSON string of the output of the function tool call. | Yes | |
| status | enum | The status of the item. One of in_progress, completed, or incomplete. Populated when items are returned via API. Possible values: in_progress, completed, incomplete | Yes | |
| type | enum | Possible values: function_call_output | Yes | |
OpenAI.Grader
Discriminator for OpenAI.Grader
This component uses the property type to discriminate between different types:
| Type Value | Schema |
|---|---|
| label_model | OpenAI.GraderLabelModel |
| text_similarity | OpenAI.GraderTextSimilarity |
| python | OpenAI.GraderPython |
| score_model | OpenAI.GraderScoreModel |
| multi | OpenAI.GraderMulti |
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| type | OpenAI.GraderType | Yes |
OpenAI.GraderLabelModel
A LabelModelGrader object which uses a model to assign labels to each item in the evaluation.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| input | array | Yes | ||
| labels | array | The labels to assign to each item in the evaluation. | Yes | |
| model | string | The model to use for the evaluation. Must support structured outputs. | Yes | |
| name | string | The name of the grader. | Yes | |
| passing_labels | array | The labels that indicate a passing result. Must be a subset of labels. | Yes | |
| type | enum | The object type, which is always label_model. Possible values: label_model | Yes | |
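A sketch of a label model grader; the model, labels, and prompt template are made up for illustration:

```python
# Hypothetical OpenAI.GraderLabelModel payload.
label_grader = {
    "type": "label_model",
    "name": "sentiment_grader",
    "model": "gpt-4o",   # must support structured outputs
    "input": [
        {"role": "user", "content": "Classify the sentiment: {{item.text}}"},
    ],
    "labels": ["positive", "neutral", "negative"],
    "passing_labels": ["positive"],   # must be a subset of labels
}

assert set(label_grader["passing_labels"]) <= set(label_grader["labels"])
```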
OpenAI.GraderMulti
A MultiGrader object combines the output of multiple graders to produce a single score.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| calculate_output | string | A formula to calculate the output based on grader results. | Yes | |
| graders | object | Yes | ||
| name | string | The name of the grader. | Yes | |
| type | enum | The object type, which is always multi. Possible values: multi | Yes | |
OpenAI.GraderPython
A PythonGrader object that runs a python script on the input.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| image_tag | string | The image tag to use for the python script. | No | |
| name | string | The name of the grader. | Yes | |
| source | string | The source code of the python script. | Yes | |
| type | enum | The object type, which is always python. Possible values: python | Yes | |
OpenAI.GraderScoreModel
A ScoreModelGrader object that uses a model to assign a score to the input.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| input | array | The input text. This may include template strings. | Yes | |
| model | string | The model to use for the evaluation. | Yes | |
| name | string | The name of the grader. | Yes | |
| range | array | The range of the score. Defaults to [0, 1]. | No | |
| sampling_params | The sampling parameters for the model. | No | ||
| type | enum | The object type, which is always score_model. Possible values: score_model | Yes | |
OpenAI.GraderStringCheck
A StringCheckGrader object that performs a string comparison between input and reference using a specified operation.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| input | string | The input text. This may include template strings. | Yes | |
| name | string | The name of the grader. | Yes | |
| operation | enum | The string check operation to perform. One of eq, ne, like, or ilike. Possible values: eq, ne, like, ilike | Yes | |
| reference | string | The reference text. This may include template strings. | Yes | |
| type | enum | The object type, which is always string_check. Possible values: string_check | Yes | |
OpenAI.GraderTextSimilarity
A TextSimilarityGrader object which grades text based on similarity metrics.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| evaluation_metric | enum | The evaluation metric to use. One of fuzzy_match, bleu, gleu, meteor, rouge_1, rouge_2, rouge_3, rouge_4, rouge_5, or rouge_l. Possible values: fuzzy_match, bleu, gleu, meteor, rouge_1, rouge_2, rouge_3, rouge_4, rouge_5, rouge_l | Yes | |
| input | string | The text being graded. | Yes | |
| name | string | The name of the grader. | Yes | |
| reference | string | The text being graded against. | Yes | |
| type | enum | The type of grader. Possible values: text_similarity | Yes | |
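A sketch of a text similarity grader; the template variables below are illustrative placeholders:

```python
# Hypothetical OpenAI.GraderTextSimilarity payload.
similarity_grader = {
    "type": "text_similarity",
    "name": "rouge_l_check",
    "input": "{{sample.output_text}}",         # text being graded
    "reference": "{{item.reference_answer}}",  # text being graded against
    "evaluation_metric": "rouge_l",            # one of the metrics listed above
}
```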
OpenAI.GraderType
| Property | Value |
|---|---|
| Type | string |
| Values | string_check, text_similarity, score_model, label_model, python, multi |
OpenAI.ImageGenTool
A tool that generates images using a model like gpt-image-1.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| background | enum | Background type for the generated image. One of transparent, opaque, or auto. Default: auto. Possible values: transparent, opaque, auto | No | |
| input_image_mask | object | Optional mask for inpainting. Contains image_url (string, optional) and file_id (string, optional). | No | |
| └─ file_id | string | File ID for the mask image. | No | |
| └─ image_url | string | Base64-encoded mask image. | No | |
| model | enum | The image generation model to use. Default: gpt-image-1. Possible values: gpt-image-1 | No | |
| moderation | enum | Moderation level for the generated image. Default: auto. Possible values: auto, low | No | |
| output_compression | integer | Compression level for the output image. Default: 100. | No | 100 |
| output_format | enum | The output format of the generated image. One of png, webp, or jpeg. Default: png. Possible values: png, webp, jpeg | No | |
| partial_images | integer | Number of partial images to generate in streaming mode, from 0 (default value) to 3. | No | 0 |
| quality | enum | The quality of the generated image. One of low, medium, high, or auto. Default: auto. Possible values: low, medium, high, auto | No | |
| size | enum | The size of the generated image. One of 1024x1024, 1024x1536, 1536x1024, or auto. Default: auto. Possible values: 1024x1024, 1024x1536, 1536x1024, auto | No | |
| type | enum | The type of the image generation tool. Always image_generation. Possible values: image_generation | Yes | |
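A tools array entry for image generation might look like the following sketch (a Python dict standing in for the JSON body). Only type is required; the other fields simply spell out the documented defaults and options:

```python
# image_generation tool entry; only "type" is required.
image_generation_tool = {
    "type": "image_generation",
    "model": "gpt-image-1",
    "background": "auto",       # transparent | opaque | auto
    "moderation": "auto",       # auto | low
    "quality": "auto",          # low | medium | high | auto
    "size": "1024x1024",        # 1024x1024 | 1024x1536 | 1536x1024 | auto
    "output_format": "png",     # png | webp | jpeg
    "output_compression": 100,  # default 100
    "partial_images": 0,        # 0 (default) to 3, streaming mode only
}
```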
OpenAI.ImageGenToolCallItemParam
An image generation request made by the model.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| result | string | The generated image encoded in base64. | Yes | |
| type | enum | Possible values: image_generation_call | Yes | |
OpenAI.ImageGenToolCallItemResource
An image generation request made by the model.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| result | string | The generated image encoded in base64. | Yes | |
| status | enum | Possible values: in_progress, completed, generating, failed | Yes | |
| type | enum | Possible values: image_generation_call | Yes | |
OpenAI.ImplicitUserMessage
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| content | string or array | | Yes | |
OpenAI.Includable
Specify additional output data to include in the model response. Currently supported values are:
- code_interpreter_call.outputs: Includes the outputs of python code execution in code interpreter tool call items.
- computer_call_output.output.image_url: Include image urls from the computer call output.
- file_search_call.results: Include the search results of the file search tool call.
- message.input_image.image_url: Include image urls from the input message.
- message.output_text.logprobs: Include logprobs with assistant messages.
- reasoning.encrypted_content: Includes an encrypted version of reasoning tokens in reasoning item outputs. This enables reasoning items to be used in multi-turn conversations when using the Responses API statelessly (like when the store parameter is set to false, or when an organization is enrolled in the zero data retention program).
| Property | Value |
|---|---|
| Description | Specify additional output data to include in the model response. Currently supported values are: - code_interpreter_call.outputs: Includes the outputs of python code execution in code interpreter tool call items. - computer_call_output.output.image_url: Include image urls from the computer call output. - file_search_call.results: Include the search results of the file search tool call. - message.input_image.image_url: Include image urls from the input message. - message.output_text.logprobs: Include logprobs with assistant messages. - reasoning.encrypted_content: Includes an encrypted version of reasoning tokens in reasoning item outputs. This enables reasoning items to be used in multi-turn conversations when using the Responses API statelessly (like when the store parameter is set to false, or when an organization is enrolled in the zero data retention program). |
| Type | string |
| Values | code_interpreter_call.outputs, computer_call_output.output.image_url, file_search_call.results, message.input_image.image_url, message.output_text.logprobs, reasoning.encrypted_content |
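In a request, these values are passed as an array in the include parameter. The sketch below shows one plausible Responses API body; the deployment name is a placeholder for your own deployment:

```python
# Sketch of a Responses API request body using the include parameter.
# "my-gpt-deployment" is a placeholder deployment name.
request_body = {
    "model": "my-gpt-deployment",
    "input": "Summarize the attached file.",
    "store": False,
    "include": [
        "message.output_text.logprobs",
        # Needed to reuse reasoning items across turns when store is false:
        "reasoning.encrypted_content",
    ],
}
```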
OpenAI.ItemContent
Discriminator for OpenAI.ItemContent
This component uses the property type to discriminate between different types:
| Type Value | Schema |
|---|---|
| input_audio | OpenAI.ItemContentInputAudio |
| output_audio | OpenAI.ItemContentOutputAudio |
| refusal | OpenAI.ItemContentRefusal |
| input_text | OpenAI.ItemContentInputText |
| input_image | OpenAI.ItemContentInputImage |
| input_file | OpenAI.ItemContentInputFile |
| output_text | OpenAI.ItemContentOutputText |
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| type | OpenAI.ItemContentType | Multi-modal input and output contents. | Yes |
OpenAI.ItemContentInputAudio
An audio input to the model.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| data | string | Base64-encoded audio data. | Yes | |
| format | enum | The format of the audio data. Currently supported formats are mp3 and wav. Possible values: mp3, wav | Yes | |
| type | enum | The type of the input item. Always input_audio. Possible values: input_audio | Yes | |
OpenAI.ItemContentInputFile
A file input to the model.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| file_data | string | The content of the file to be sent to the model. | No | |
| file_id | string | The ID of the file to be sent to the model. | No | |
| filename | string | The name of the file to be sent to the model. | No | |
| type | enum | The type of the input item. Always input_file. Possible values: input_file | Yes | |
OpenAI.ItemContentInputImage
An image input to the model.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| detail | enum | The detail level of the image to be sent to the model. One of high, low, or auto. Defaults to auto. Possible values: low, high, auto | No | |
| file_id | string | The ID of the file to be sent to the model. | No | |
| image_url | string | The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL. | No | |
| type | enum | The type of the input item. Always input_image. Possible values: input_image | Yes | |
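An input_image content part can carry either an https URL or a base64 data URL. A minimal sketch, using placeholder bytes rather than a real image:

```python
import base64

# input_image content part; image_url may be an https URL or a data URL.
# The bytes below are a placeholder, not a valid PNG.
fake_png = b"\x89PNG\r\n\x1a\n"
input_image_part = {
    "type": "input_image",
    "detail": "auto",  # low | high | auto (defaults to auto)
    "image_url": "data:image/png;base64,"
    + base64.b64encode(fake_png).decode("ascii"),
}
```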
OpenAI.ItemContentInputText
A text input to the model.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| text | string | The text input to the model. | Yes | |
| type | enum | The type of the input item. Always input_text. Possible values: input_text | Yes | |
OpenAI.ItemContentOutputAudio
An audio output from the model.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| data | string | Base64-encoded audio data from the model. | Yes | |
| transcript | string | The transcript of the audio data from the model. | Yes | |
| type | enum | The type of the output audio. Always output_audio. Possible values: output_audio | Yes | |
OpenAI.ItemContentOutputText
A text output from the model.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| annotations | array | The annotations of the text output. | Yes | |
| logprobs | array | | No | |
| text | string | The text output from the model. | Yes | |
| type | enum | The type of the output text. Always output_text. Possible values: output_text | Yes | |
OpenAI.ItemContentRefusal
A refusal from the model.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| refusal | string | The refusal explanation from the model. | Yes | |
| type | enum | The type of the refusal. Always refusal. Possible values: refusal | Yes | |
OpenAI.ItemContentType
Multi-modal input and output contents.
| Property | Value |
|---|---|
| Description | Multi-modal input and output contents. |
| Type | string |
| Values | input_text, input_audio, input_image, input_file, output_text, output_audio, refusal |
OpenAI.ItemParam
Content item used to generate a response.
Discriminator for OpenAI.ItemParam
This component uses the property type to discriminate between different types:
| Type Value | Schema |
|---|---|
| file_search_call | OpenAI.FileSearchToolCallItemParam |
| computer_call | OpenAI.ComputerToolCallItemParam |
| computer_call_output | OpenAI.ComputerToolCallOutputItemParam |
| web_search_call | OpenAI.WebSearchToolCallItemParam |
| function_call | OpenAI.FunctionToolCallItemParam |
| function_call_output | OpenAI.FunctionToolCallOutputItemParam |
| reasoning | OpenAI.ReasoningItemParam |
| item_reference | OpenAI.ItemReferenceItemParam |
| image_generation_call | OpenAI.ImageGenToolCallItemParam |
| code_interpreter_call | OpenAI.CodeInterpreterToolCallItemParam |
| local_shell_call | OpenAI.LocalShellToolCallItemParam |
| local_shell_call_output | OpenAI.LocalShellToolCallOutputItemParam |
| mcp_list_tools | OpenAI.MCPListToolsItemParam |
| mcp_approval_request | OpenAI.MCPApprovalRequestItemParam |
| mcp_approval_response | OpenAI.MCPApprovalResponseItemParam |
| mcp_call | OpenAI.MCPCallItemParam |
| message | OpenAI.ResponsesMessageItemParam |
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| type | OpenAI.ItemType | | Yes | |
OpenAI.ItemReferenceItemParam
An internal identifier for an item to reference.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| id | string | The service-originated ID of the previously generated response item being referenced. | Yes | |
| type | enum | Possible values: item_reference | Yes | |
OpenAI.ItemResource
Content item used to generate a response.
Discriminator for OpenAI.ItemResource
This component uses the property type to discriminate between different types:
| Type Value | Schema |
|---|---|
| file_search_call | OpenAI.FileSearchToolCallItemResource |
| computer_call | OpenAI.ComputerToolCallItemResource |
| computer_call_output | OpenAI.ComputerToolCallOutputItemResource |
| web_search_call | OpenAI.WebSearchToolCallItemResource |
| function_call | OpenAI.FunctionToolCallItemResource |
| function_call_output | OpenAI.FunctionToolCallOutputItemResource |
| reasoning | OpenAI.ReasoningItemResource |
| image_generation_call | OpenAI.ImageGenToolCallItemResource |
| code_interpreter_call | OpenAI.CodeInterpreterToolCallItemResource |
| local_shell_call | OpenAI.LocalShellToolCallItemResource |
| local_shell_call_output | OpenAI.LocalShellToolCallOutputItemResource |
| mcp_list_tools | OpenAI.MCPListToolsItemResource |
| mcp_approval_request | OpenAI.MCPApprovalRequestItemResource |
| mcp_approval_response | OpenAI.MCPApprovalResponseItemResource |
| mcp_call | OpenAI.MCPCallItemResource |
| message | OpenAI.ResponsesMessageItemResource |
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| id | string | | Yes | |
| type | OpenAI.ItemType | | Yes | |
OpenAI.ItemType
| Property | Value |
|---|---|
| Type | string |
| Values | message, file_search_call, function_call, function_call_output, computer_call, computer_call_output, web_search_call, reasoning, item_reference, image_generation_call, code_interpreter_call, local_shell_call, local_shell_call_output, mcp_list_tools, mcp_approval_request, mcp_approval_response, mcp_call |
OpenAI.ListFineTuningJobCheckpointsResponse
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| data | array | | Yes | |
| first_id | string | | No | |
| has_more | boolean | | Yes | |
| last_id | string | | No | |
| object | enum | Possible values: list | Yes | |
OpenAI.ListFineTuningJobEventsResponse
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| data | array | | Yes | |
| has_more | boolean | | Yes | |
| object | enum | Possible values: list | Yes | |
OpenAI.ListModelsResponse
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| data | array | | Yes | |
| object | enum | Possible values: list | Yes | |
OpenAI.ListPaginatedFineTuningJobsResponse
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| data | array | | Yes | |
| has_more | boolean | | Yes | |
| object | enum | Possible values: list | Yes | |
OpenAI.ListVectorStoreFilesFilter
| Property | Value |
|---|---|
| Type | string |
| Values | in_progress, completed, failed, cancelled |
OpenAI.ListVectorStoreFilesResponse
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| data | array | | Yes | |
| first_id | string | | Yes | |
| has_more | boolean | | Yes | |
| last_id | string | | Yes | |
| object | enum | Possible values: list | Yes | |
OpenAI.ListVectorStoresResponse
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| data | array | | Yes | |
| first_id | string | | Yes | |
| has_more | boolean | | Yes | |
| last_id | string | | Yes | |
| object | enum | Possible values: list | Yes | |
OpenAI.LocalShellExecAction
Execute a shell command on the server.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| command | array | The command to run. | Yes | |
| env | object | Environment variables to set for the command. | Yes | |
| timeout_ms | integer | Optional timeout in milliseconds for the command. | No | |
| type | enum | The type of the local shell action. Always exec. Possible values: exec | Yes | |
| user | string | Optional user to run the command as. | No | |
| working_directory | string | Optional working directory to run the command in. | No |
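When the model emits an exec action inside a local_shell_call, your client is responsible for running the command and returning a local_shell_call_output item. A minimal sketch; the command, env, and JSON shape of the output field are assumptions, since the schema only says the output is "a JSON string":

```python
import json
import subprocess

# An exec action as a local_shell_call might carry it (values illustrative).
action = {
    "type": "exec",
    "command": ["echo", "hello"],
    "env": {"PATH": "/usr/bin:/bin"},
    "timeout_ms": 5000,           # optional
    "working_directory": "/tmp",  # optional
}

result = subprocess.run(
    action["command"],
    env=action["env"],
    cwd=action["working_directory"],
    timeout=action["timeout_ms"] / 1000,
    capture_output=True,
    text=True,
)

# The output field is documented only as "a JSON string of the output";
# this stdout/stderr shape is an assumption.
output_item = {
    "type": "local_shell_call_output",
    "output": json.dumps({"stdout": result.stdout, "stderr": result.stderr}),
}
```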
OpenAI.LocalShellTool
A tool that allows the model to execute shell commands in a local environment.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| type | enum | The type of the local shell tool. Always local_shell. Possible values: local_shell | Yes | |
OpenAI.LocalShellToolCallItemParam
A tool call to run a command on the local shell.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| action | OpenAI.LocalShellExecAction | Execute a shell command on the server. | Yes | |
| call_id | string | The unique ID of the local shell tool call generated by the model. | Yes | |
| type | enum | Possible values: local_shell_call | Yes | |
OpenAI.LocalShellToolCallItemResource
A tool call to run a command on the local shell.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| action | OpenAI.LocalShellExecAction | Execute a shell command on the server. | Yes | |
| call_id | string | The unique ID of the local shell tool call generated by the model. | Yes | |
| status | enum | Possible values: in_progress, completed, incomplete | Yes | |
| type | enum | Possible values: local_shell_call | Yes | |
OpenAI.LocalShellToolCallOutputItemParam
The output of a local shell tool call.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| output | string | A JSON string of the output of the local shell tool call. | Yes | |
| type | enum | Possible values: local_shell_call_output | Yes | |
OpenAI.LocalShellToolCallOutputItemResource
The output of a local shell tool call.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| output | string | A JSON string of the output of the local shell tool call. | Yes | |
| status | enum | Possible values: in_progress, completed, incomplete | Yes | |
| type | enum | Possible values: local_shell_call_output | Yes | |
OpenAI.Location
Discriminator for OpenAI.Location
This component uses the property type to discriminate between different types:
| Type Value | Schema |
|---|---|
| approximate | OpenAI.ApproximateLocation |
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| type | OpenAI.LocationType | | Yes | |
OpenAI.LocationType
| Property | Value |
|---|---|
| Type | string |
| Values | approximate |
OpenAI.LogProb
The log probability of a token.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| bytes | array | | Yes | |
| logprob | number | | Yes | |
| token | string | | Yes | |
| top_logprobs | array | | Yes | |
OpenAI.MCPApprovalRequestItemParam
A request for human approval of a tool invocation.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| arguments | string | A JSON string of arguments for the tool. | Yes | |
| name | string | The name of the tool to run. | Yes | |
| server_label | string | The label of the MCP server making the request. | Yes | |
| type | enum | Possible values: mcp_approval_request | Yes | |
OpenAI.MCPApprovalRequestItemResource
A request for human approval of a tool invocation.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| arguments | string | A JSON string of arguments for the tool. | Yes | |
| name | string | The name of the tool to run. | Yes | |
| server_label | string | The label of the MCP server making the request. | Yes | |
| type | enum | Possible values: mcp_approval_request | Yes | |
OpenAI.MCPApprovalResponseItemParam
A response to an MCP approval request.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| approval_request_id | string | The ID of the approval request being answered. | Yes | |
| approve | boolean | Whether the request was approved. | Yes | |
| reason | string | Optional reason for the decision. | No | |
| type | enum | Possible values: mcp_approval_response | Yes | |
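To answer an mcp_approval_request, echo its id back in an mcp_approval_response input item on the next turn. A sketch with an illustrative request id:

```python
# Approving an MCP tool call; the approval_request_id comes from the
# mcp_approval_request item and is illustrative here.
approval_response = {
    "type": "mcp_approval_response",
    "approval_request_id": "mcpr_0123456789",
    "approve": True,
    "reason": "Read-only tool; safe to run.",  # optional
}
```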
OpenAI.MCPApprovalResponseItemResource
A response to an MCP approval request.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| approval_request_id | string | The ID of the approval request being answered. | Yes | |
| approve | boolean | Whether the request was approved. | Yes | |
| reason | string | Optional reason for the decision. | No | |
| type | enum | Possible values: mcp_approval_response | Yes | |
OpenAI.MCPCallItemParam
An invocation of a tool on an MCP server.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| arguments | string | A JSON string of the arguments passed to the tool. | Yes | |
| error | string | The error from the tool call, if any. | No | |
| name | string | The name of the tool that was run. | Yes | |
| output | string | The output from the tool call. | No | |
| server_label | string | The label of the MCP server running the tool. | Yes | |
| type | enum | Possible values: mcp_call | Yes | |
OpenAI.MCPCallItemResource
An invocation of a tool on an MCP server.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| arguments | string | A JSON string of the arguments passed to the tool. | Yes | |
| error | string | The error from the tool call, if any. | No | |
| name | string | The name of the tool that was run. | Yes | |
| output | string | The output from the tool call. | No | |
| server_label | string | The label of the MCP server running the tool. | Yes | |
| type | enum | Possible values: mcp_call | Yes | |
OpenAI.MCPListToolsItemParam
A list of tools available on an MCP server.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| error | string | Error message if the server could not list tools. | No | |
| server_label | string | The label of the MCP server. | Yes | |
| tools | array | The tools available on the server. | Yes | |
| type | enum | Possible values: mcp_list_tools | Yes | |
OpenAI.MCPListToolsItemResource
A list of tools available on an MCP server.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| error | string | Error message if the server could not list tools. | No | |
| server_label | string | The label of the MCP server. | Yes | |
| tools | array | The tools available on the server. | Yes | |
| type | enum | Possible values: mcp_list_tools | Yes | |
OpenAI.MCPListToolsTool
A tool available on an MCP server.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| annotations | | Additional annotations about the tool. | No | |
| description | string | The description of the tool. | No | |
| input_schema | | The JSON schema describing the tool's input. | Yes | |
| name | string | The name of the tool. | Yes |
OpenAI.MCPTool
Give the model access to additional tools via remote Model Context Protocol (MCP) servers.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| allowed_tools | object | | No | |
| └─ tool_names | array | List of allowed tool names. | No | |
| headers | object | Optional HTTP headers to send to the MCP server. Use for authentication or other purposes. | No | |
| require_approval | object (see valid models below) | Specify which of the MCP server's tools require approval. | No | |
| server_label | string | A label for this MCP server, used to identify it in tool calls. | Yes | |
| server_url | string | The URL for the MCP server. | Yes | |
| type | enum | The type of the MCP tool. Always mcp. Possible values: mcp | Yes | |
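A remote MCP server entry in the tools array might be sketched as below. The label, URL, header, and tool names are placeholders, and the object form shown for require_approval is one plausible shape rather than the definitive one:

```python
# Remote MCP server tool definition (all values illustrative).
mcp_tool = {
    "type": "mcp",
    "server_label": "docs-server",
    "server_url": "https://example.com/mcp",
    "headers": {"Authorization": "Bearer <token>"},
    "allowed_tools": {"tool_names": ["search_docs"]},
    # Assumed shape: exempt specific tools from the approval flow.
    "require_approval": {"never": {"tool_names": ["search_docs"]}},
}
```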
OpenAI.MetadataPropertyForRequest
Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.
Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| metadata | object | Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard. Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters. | No | |
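The limits above can be checked client-side before sending a request. A small sketch with illustrative keys:

```python
# Metadata limits from the schema: at most 16 pairs, keys up to 64
# characters, values up to 512 characters.
metadata = {"project": "doc-search", "ticket": "OPS-1234"}

assert len(metadata) <= 16
assert all(len(key) <= 64 for key in metadata)
assert all(len(value) <= 512 for value in metadata.values())
```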
OpenAI.Model
Describes an OpenAI model offering that can be used with the API.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| created | integer | The Unix timestamp (in seconds) when the model was created. | Yes | |
| id | string | The model identifier, which can be referenced in the API endpoints. | Yes | |
| object | enum | The object type, which is always "model". Possible values: model | Yes | |
| owned_by | string | The organization that owns the model. | Yes |
OpenAI.OtherChunkingStrategyResponseParam
This is returned when the chunking strategy is unknown. Typically, this is because the file was indexed before the chunking_strategy concept was introduced in the API.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| type | enum | Always other. Possible values: other | Yes | |
OpenAI.ParallelToolCalls
Whether to enable parallel function calling during tool use.
Type: boolean
OpenAI.Prompt
Reference to a prompt template and its variables.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| id | string | The unique identifier of the prompt template to use. | Yes | |
| variables | object | Optional map of values to substitute in for variables in your prompt. The substitution values can either be strings, or other Response input types like images or files. | No | |
| version | string | Optional version of the prompt template. | No | |
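A prompt reference can be sketched as follows; the template id, version, and variable name are illustrative values, not real identifiers:

```python
# Referencing a stored prompt template with variable substitution.
prompt_reference = {
    "id": "pmpt_abc123",  # illustrative template id
    "version": "2",       # optional; omit to use the latest version
    "variables": {
        "customer_name": "Jane Doe",  # strings or input content parts
    },
}
```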
OpenAI.RankingOptions
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| ranker | enum | The ranker to use for the file search. Possible values: auto, default-2024-11-15 | No | |
| score_threshold | number | The score threshold for the file search, a number between 0 and 1. Numbers closer to 1 will attempt to return only the most relevant results, but may return fewer results. | No | |
OpenAI.Reasoning
reasoning models only
Configuration options for reasoning models.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| effort | object | reasoning models only. Constrains effort on reasoning for reasoning models. Currently supported values are low, medium, and high. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response. | No | |
| generate_summary | enum | Deprecated: use summary instead. A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process. One of auto, concise, or detailed. Possible values: auto, concise, detailed | No | |
| summary | enum | A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process. One of auto, concise, or detailed. Possible values: auto, concise, detailed | No | |
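A minimal reasoning configuration sketch (for reasoning models only; generate_summary is deprecated in favor of summary):

```python
# Reasoning configuration for reasoning models.
reasoning_config = {
    "effort": "medium",    # low | medium | high
    "summary": "concise",  # auto | concise | detailed
}
```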
OpenAI.ReasoningEffort
reasoning models only
Constrains effort on reasoning for reasoning models. Currently supported values are low, medium, and high. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response.
| Property | Value |
|---|---|
| Description | reasoning models only. Constrains effort on reasoning for reasoning models. Currently supported values are low, medium, and high. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response. |
| Type | string |
| Values | low, medium, high |
OpenAI.ReasoningItemParam
A description of the chain of thought used by a reasoning model while generating a response. Be sure to include these items in your input to the Responses API for subsequent turns of a conversation if you are manually managing context.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| encrypted_content | string | The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter. | No | |
| summary | array | Reasoning text contents. | Yes | |
| type | enum | Possible values: reasoning | Yes | |
OpenAI.ReasoningItemResource
A description of the chain of thought used by a reasoning model while generating a response. Be sure to include these items in your input to the Responses API for subsequent turns of a conversation if you are manually managing context.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| encrypted_content | string | The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter. | No | |
| summary | array | Reasoning text contents. | Yes | |
| type | enum | Possible values: reasoning | Yes | |
OpenAI.ReasoningItemSummaryPart
Discriminator for OpenAI.ReasoningItemSummaryPart
This component uses the property type to discriminate between different types:
| Type Value | Schema |
|---|---|
| summary_text | OpenAI.ReasoningItemSummaryTextPart |
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| type | OpenAI.ReasoningItemSummaryPartType | | Yes | |
OpenAI.ReasoningItemSummaryPartType
| Property | Value |
|---|---|
| Type | string |
| Values | summary_text |
OpenAI.ReasoningItemSummaryTextPart
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| text | string | | Yes | |
| type | enum | Possible values: summary_text | Yes | |
OpenAI.Response
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| background | boolean | Whether to run the model response in the background. Learn more. |
No | False |
| created_at | integer | Unix timestamp (in seconds) of when this Response was created. | Yes | |
| error | object | An error object returned when the model fails to generate a Response. | Yes | |
| └─ code | OpenAI.ResponseErrorCode | The error code for the response. | No | |
| └─ message | string | A human-readable description of the error. | No | |
| id | string | Unique identifier for this Response. | Yes | |
| incomplete_details | object | Details about why the response is incomplete. | Yes | |
| └─ reason | enum | The reason why the response is incomplete. Possible values: max_output_tokens, content_filter |
No | |
| instructions | string or array | Yes | ||
| max_output_tokens | integer | An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens. | No | |
| max_tool_calls | integer | The maximum number of total calls to built-in tools that can be processed in a response. This maximum number applies across all built-in tool calls, not per individual tool. Any further attempts to call a tool by the model will be ignored. | No | |
| metadata | object | Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard. Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters. |
Yes | |
| object | enum | The object type of this resource - always set to response.Possible values: response |
Yes | |
| output | array | An array of content items generated by the model. - The length and order of items in the output array is dependenton the model's response. - Rather than accessing the first item in the output array andassuming it's an assistant message with the content generated bythe model, you might consider using the output_text property wheresupported in SDKs. |
Yes | |
| output_text | string | SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present.Supported in the Python and JavaScript SDKs. |
No | |
| parallel_tool_calls | boolean | Whether to allow the model to run tool calls in parallel. | Yes | True |
| previous_response_id | string | The unique ID of the previous response to the model. Use this to create multi-turn conversations. |
No | |
| prompt | object | Reference to a prompt template and its variables. |
No | |
| └─ id | string | The unique identifier of the prompt template to use. | No | |
| └─ variables | OpenAI.ResponsePromptVariables | Optional map of values to substitute in for variables in your prompt. The substitution values can either be strings, or other Response input types like images or files. |
No | |
| └─ version | string | Optional version of the prompt template. | No | |
| reasoning | object | reasoning models only Configuration options for reasoning models. |
No | |
| └─ effort | OpenAI.ReasoningEffort | reasoning models only Constrains effort on reasoning for reasoning models. Currently supported values are low, medium, and high. Reducingreasoning effort can result in faster responses and fewer tokens used on reasoning in a response. |
No | |
| └─ generate_summary | enum | Deprecated: use summary instead.A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process. One of auto, concise, or detailed.Possible values: auto, concise, detailed |
No | |
| └─ summary | enum | A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process. One of auto, concise, or detailed.Possible values: auto, concise, detailed |
No | |
| status | enum | The status of the response generation. One of completed, failed,in_progress, cancelled, queued, or incomplete.Possible values: completed, failed, in_progress, cancelled, queued, incomplete |
No | |
| temperature | number | What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both. |
Yes | |
| text | object | Configuration options for a text response from the model. Can be plain text or structured JSON data. Learn more: Structured Outputs |
No | |
| └─ format | OpenAI.ResponseTextFormatConfiguration | No | ||
| tool_choice | object | Controls which (if any) tool is called by the model.none means the model will not call any tool and instead generates a message.auto means the model can pick between generating a message or calling one ormore tools. required means the model must call one or more tools. |
No | |
| └─ type | OpenAI.ToolChoiceObjectType | Indicates that the model should use a built-in tool to generate a response.. | No | |
| tools | array | An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter. The two categories of tools you can provide the model are: - Built-in tools: Tools that are provided by OpenAI that extend the model's capabilities, like web search or file search. |
No | |
| top_logprobs | integer | An integer between 0 and 20 specifying the number of most likely tokens to return at each token position, each with an associated log probability. | No | |
| top_p | number | An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or temperature but not both. |
Yes | |
| truncation | enum | The truncation strategy to use for the model response. - auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation. - disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error. Possible values: auto, disabled |
No | |
| usage | OpenAI.ResponseUsage | Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used. |
No | |
| user | string | A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse. | Yes |
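The parameters above can be combined into a single JSON request body. The following is a hedged sketch of such a body: the specific field values (effort, temperature, user id) are illustrative assumptions, not recommended settings.

```python
import json

# Sketch of a request-body fragment using several of the parameters
# documented above. Values are placeholders for illustration.
body = {
    # reasoning models only
    "reasoning": {"effort": "medium", "summary": "auto"},
    # Alter temperature OR top_p, not both, per the guidance above.
    "temperature": 0.2,
    # auto: drop middle input items instead of failing with a 400 error.
    "truncation": "auto",
    # Stable end-user identifier, used for abuse monitoring.
    "user": "user-1234",
}
payload = json.dumps(body)
```

Note that only one of temperature and top_p is set, following the recommendation in both parameter descriptions.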
OpenAI.ResponseCodeInterpreterCallCodeDeltaEvent
Emitted when a partial code snippet is streamed by the code interpreter.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| delta | string | The partial code snippet being streamed by the code interpreter. | Yes | |
| item_id | string | The unique identifier of the code interpreter tool call item. | Yes | |
| obfuscation | string | A field of random characters introduced by stream obfuscation. Stream obfuscation is a mechanism that mitigates certain side-channel attacks. | Yes | |
| output_index | integer | The index of the output item in the response for which the code is being streamed. | Yes | |
| type | enum | The type of the event. Always response.code_interpreter_call_code.delta. Possible values: response.code_interpreter_call_code.delta |
Yes |
OpenAI.ResponseCodeInterpreterCallCodeDoneEvent
Emitted when the code snippet is finalized by the code interpreter.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| code | string | The final code snippet output by the code interpreter. | Yes | |
| item_id | string | The unique identifier of the code interpreter tool call item. | Yes | |
| output_index | integer | The index of the output item in the response for which the code is finalized. | Yes | |
| type | enum | The type of the event. Always response.code_interpreter_call_code.done. Possible values: response.code_interpreter_call_code.done |
Yes |
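A consumer typically concatenates the delta events per item_id and then checks the result against the done event. This is a hedged sketch with fabricated events; real streams interleave many other event types.

```python
# Reassembling code streamed by the code interpreter. Each
# response.code_interpreter_call_code.delta event carries a partial snippet;
# the .done event carries the final code. Events below are fabricated.
events = [
    {"type": "response.code_interpreter_call_code.delta", "item_id": "ci_1",
     "output_index": 0, "delta": "print(", "obfuscation": "x1"},
    {"type": "response.code_interpreter_call_code.delta", "item_id": "ci_1",
     "output_index": 0, "delta": "1 + 1)", "obfuscation": "y2"},
    {"type": "response.code_interpreter_call_code.done", "item_id": "ci_1",
     "output_index": 0, "code": "print(1 + 1)"},
]

buffers: dict = {}
final: dict = {}
for ev in events:
    if ev["type"] == "response.code_interpreter_call_code.delta":
        # The obfuscation field is random padding; ignore it when accumulating.
        buffers[ev["item_id"]] = buffers.get(ev["item_id"], "") + ev["delta"]
    elif ev["type"] == "response.code_interpreter_call_code.done":
        final[ev["item_id"]] = ev["code"]
```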
OpenAI.ResponseCodeInterpreterCallCompletedEvent
Emitted when the code interpreter call is completed.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| item_id | string | The unique identifier of the code interpreter tool call item. | Yes | |
| output_index | integer | The index of the output item in the response for which the code interpreter call is completed. | Yes | |
| type | enum | The type of the event. Always response.code_interpreter_call.completed. Possible values: response.code_interpreter_call.completed |
Yes |
OpenAI.ResponseCodeInterpreterCallInProgressEvent
Emitted when a code interpreter call is in progress.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| item_id | string | The unique identifier of the code interpreter tool call item. | Yes | |
| output_index | integer | The index of the output item in the response for which the code interpreter call is in progress. | Yes | |
| type | enum | The type of the event. Always response.code_interpreter_call.in_progress. Possible values: response.code_interpreter_call.in_progress |
Yes |
OpenAI.ResponseCodeInterpreterCallInterpretingEvent
Emitted when the code interpreter is actively interpreting the code snippet.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| item_id | string | The unique identifier of the code interpreter tool call item. | Yes | |
| output_index | integer | The index of the output item in the response for which the code interpreter is interpreting code. | Yes | |
| type | enum | The type of the event. Always response.code_interpreter_call.interpreting. Possible values: response.code_interpreter_call.interpreting |
Yes |
OpenAI.ResponseCompletedEvent
Emitted when the model response is complete.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| response | object | Yes | ||
| └─ background | boolean | Whether to run the model response in the background. Learn more. |
No | False |
| └─ created_at | integer | Unix timestamp (in seconds) of when this Response was created. | No | |
| └─ error | OpenAI.ResponseError | An error object returned when the model fails to generate a Response. | No | |
| └─ id | string | Unique identifier for this Response. | No | |
| └─ incomplete_details | object | Details about why the response is incomplete. | No | |
| └─ reason | enum | The reason why the response is incomplete. Possible values: max_output_tokens, content_filter |
No | |
| └─ instructions | string or array | A system (or developer) message inserted into the model's context. When used along with previous_response_id, the instructions from a previous response will not be carried over to the next response. This makes it simple to swap out system (or developer) messages in new responses. |
No | |
| └─ max_output_tokens | integer | An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens. | No | |
| └─ max_tool_calls | integer | The maximum number of total calls to built-in tools that can be processed in a response. This maximum number applies across all built-in tool calls, not per individual tool. Any further attempts to call a tool by the model will be ignored. | No | |
| └─ metadata | object | Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard. Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters. |
No | |
| └─ object | enum | The object type of this resource - always set to response. Possible values: response |
No | |
| └─ output | array | An array of content items generated by the model. - The length and order of items in the output array is dependent on the model's response. - Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs. |
No | |
| └─ output_text | string | SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. Supported in the Python and JavaScript SDKs. |
No | |
| └─ parallel_tool_calls | boolean | Whether to allow the model to run tool calls in parallel. | No | True |
| └─ previous_response_id | string | The unique ID of the previous response to the model. Use this to create multi-turn conversations. |
No | |
| └─ prompt | OpenAI.Prompt | Reference to a prompt template and its variables. |
No | |
| └─ reasoning | OpenAI.Reasoning | reasoning models only Configuration options for reasoning models. |
No | |
| └─ status | enum | The status of the response generation. One of completed, failed, in_progress, cancelled, queued, or incomplete. Possible values: completed, failed, in_progress, cancelled, queued, incomplete |
No | |
| └─ temperature | number | What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both. |
No | |
| └─ text | object | Configuration options for a text response from the model. Can be plain text or structured JSON data. Learn more: Structured Outputs |
No | |
| └─ format | OpenAI.ResponseTextFormatConfiguration | No | ||
| └─ tool_choice | OpenAI.ToolChoiceOptions or OpenAI.ToolChoiceObject | How the model should select which tool (or tools) to use when generating a response. See the tools parameter to see how to specify which tools the model can call. |
No | |
| └─ tools | array | An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter. The two categories of tools you can provide the model are: - Built-in tools: Tools that are provided by OpenAI that extend the model's capabilities, like web search or file search. |
No | |
| └─ top_logprobs | integer | An integer between 0 and 20 specifying the number of most likely tokens to return at each token position, each with an associated log probability. | No | |
| └─ top_p | number | An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or temperature but not both. |
No | |
| └─ truncation | enum | The truncation strategy to use for the model response. - auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation. - disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error. Possible values: auto, disabled |
No | |
| └─ usage | OpenAI.ResponseUsage | Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used. |
No | |
| └─ user | string | A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse. | No | |
| type | enum | The type of the event. Always response.completed. Possible values: response.completed |
Yes |
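Lifecycle events like this one are usually handled by switching on the type field. The sketch below is a hedged illustration of that pattern; the events it receives are fabricated, and the summary strings are an arbitrary format chosen for the example.

```python
# Dispatching on the type field of streamed lifecycle events
# (response.created, response.completed, response.failed, error).
def summarize(event: dict) -> str:
    t = event["type"]
    if t == "response.completed":
        # The full Response object is nested under "response".
        return "completed:" + event["response"].get("id", "")
    if t == "response.failed":
        err = event["response"].get("error") or {}
        return "failed:" + str(err.get("code"))
    if t == "error":
        # OpenAI.ResponseErrorEvent carries code/message/param at the top level.
        return "error:" + str(event.get("code"))
    # Other event types (deltas, tool-call progress, ...) pass through.
    return t
```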
OpenAI.ResponseContentPartAddedEvent
Emitted when a new content part is added.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| content_index | integer | The index of the content part that was added. | Yes | |
| item_id | string | The ID of the output item that the content part was added to. | Yes | |
| output_index | integer | The index of the output item that the content part was added to. | Yes | |
| part | object | Yes | ||
| └─ type | OpenAI.ItemContentType | Multi-modal input and output contents. | No | |
| type | enum | The type of the event. Always response.content_part.added. Possible values: response.content_part.added |
Yes |
OpenAI.ResponseContentPartDoneEvent
Emitted when a content part is done.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| content_index | integer | The index of the content part that is done. | Yes | |
| item_id | string | The ID of the output item that the content part was added to. | Yes | |
| output_index | integer | The index of the output item that the content part was added to. | Yes | |
| part | object | Yes | ||
| └─ type | OpenAI.ItemContentType | Multi-modal input and output contents. | No | |
| type | enum | The type of the event. Always response.content_part.done. Possible values: response.content_part.done |
Yes |
OpenAI.ResponseCreatedEvent
An event that is emitted when a response is created.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| response | object | Yes | ||
| └─ background | boolean | Whether to run the model response in the background. Learn more. |
No | False |
| └─ created_at | integer | Unix timestamp (in seconds) of when this Response was created. | No | |
| └─ error | OpenAI.ResponseError | An error object returned when the model fails to generate a Response. | No | |
| └─ id | string | Unique identifier for this Response. | No | |
| └─ incomplete_details | object | Details about why the response is incomplete. | No | |
| └─ reason | enum | The reason why the response is incomplete. Possible values: max_output_tokens, content_filter |
No | |
| └─ instructions | string or array | A system (or developer) message inserted into the model's context. When used along with previous_response_id, the instructions from a previous response will not be carried over to the next response. This makes it simple to swap out system (or developer) messages in new responses. |
No | |
| └─ max_output_tokens | integer | An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens. | No | |
| └─ max_tool_calls | integer | The maximum number of total calls to built-in tools that can be processed in a response. This maximum number applies across all built-in tool calls, not per individual tool. Any further attempts to call a tool by the model will be ignored. | No | |
| └─ metadata | object | Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard. Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters. |
No | |
| └─ object | enum | The object type of this resource - always set to response. Possible values: response |
No | |
| └─ output | array | An array of content items generated by the model. - The length and order of items in the output array is dependent on the model's response. - Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs. |
No | |
| └─ output_text | string | SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. Supported in the Python and JavaScript SDKs. |
No | |
| └─ parallel_tool_calls | boolean | Whether to allow the model to run tool calls in parallel. | No | True |
| └─ previous_response_id | string | The unique ID of the previous response to the model. Use this to create multi-turn conversations. |
No | |
| └─ prompt | OpenAI.Prompt | Reference to a prompt template and its variables. |
No | |
| └─ reasoning | OpenAI.Reasoning | reasoning models only Configuration options for reasoning models. |
No | |
| └─ status | enum | The status of the response generation. One of completed, failed, in_progress, cancelled, queued, or incomplete. Possible values: completed, failed, in_progress, cancelled, queued, incomplete |
No | |
| └─ temperature | number | What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both. |
No | |
| └─ text | object | Configuration options for a text response from the model. Can be plain text or structured JSON data. Learn more: Structured Outputs |
No | |
| └─ format | OpenAI.ResponseTextFormatConfiguration | No | ||
| └─ tool_choice | OpenAI.ToolChoiceOptions or OpenAI.ToolChoiceObject | How the model should select which tool (or tools) to use when generating a response. See the tools parameter to see how to specify which tools the model can call. |
No | |
| └─ tools | array | An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter. The two categories of tools you can provide the model are: - Built-in tools: Tools that are provided by OpenAI that extend the model's capabilities, like web search or file search. |
No | |
| └─ top_logprobs | integer | An integer between 0 and 20 specifying the number of most likely tokens to return at each token position, each with an associated log probability. | No | |
| └─ top_p | number | An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or temperature but not both. |
No | |
| └─ truncation | enum | The truncation strategy to use for the model response. - auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation. - disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error. Possible values: auto, disabled |
No | |
| └─ usage | OpenAI.ResponseUsage | Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used. |
No | |
| └─ user | string | A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse. | No | |
| type | enum | The type of the event. Always response.created. Possible values: response.created |
Yes |
OpenAI.ResponseError
An error object returned when the model fails to generate a Response.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| code | OpenAI.ResponseErrorCode | The error code for the response. | Yes | |
| message | string | A human-readable description of the error. | Yes |
OpenAI.ResponseErrorCode
The error code for the response.
| Property | Value |
|---|---|
| Description | The error code for the response. |
| Type | string |
| Values | server_error, rate_limit_exceeded, invalid_prompt, vector_store_timeout, invalid_image, invalid_image_format, invalid_base64_image, invalid_image_url, image_too_large, image_too_small, image_parse_error, image_content_policy_violation, invalid_image_mode, image_file_too_large, unsupported_image_media_type, empty_image_file, failed_to_download_image, image_file_not_found |
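Callers often partition these codes into transient failures worth retrying and permanent ones. Which codes count as transient is an assumption in this sketch, not guidance from the reference itself.

```python
# Classifying OpenAI.ResponseErrorCode values. Treating only these three
# codes as retryable is an illustrative assumption.
RETRYABLE = {"server_error", "rate_limit_exceeded", "vector_store_timeout"}

def is_retryable(code: str) -> bool:
    # Image-related and prompt-validation codes indicate bad input,
    # so retrying the same request would fail again.
    return code in RETRYABLE
```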
OpenAI.ResponseErrorEvent
Emitted when an error occurs.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| code | string | The error code. | Yes | |
| message | string | The error message. | Yes | |
| param | string | The error parameter. | Yes | |
| type | enum | The type of the event. Always error. Possible values: error |
Yes |
OpenAI.ResponseFailedEvent
An event that is emitted when a response fails.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| response | object | Yes | ||
| └─ background | boolean | Whether to run the model response in the background. Learn more. |
No | False |
| └─ created_at | integer | Unix timestamp (in seconds) of when this Response was created. | No | |
| └─ error | OpenAI.ResponseError | An error object returned when the model fails to generate a Response. | No | |
| └─ id | string | Unique identifier for this Response. | No | |
| └─ incomplete_details | object | Details about why the response is incomplete. | No | |
| └─ reason | enum | The reason why the response is incomplete. Possible values: max_output_tokens, content_filter |
No | |
| └─ instructions | string or array | A system (or developer) message inserted into the model's context. When used along with previous_response_id, the instructions from a previous response will not be carried over to the next response. This makes it simple to swap out system (or developer) messages in new responses. |
No | |
| └─ max_output_tokens | integer | An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens. | No | |
| └─ max_tool_calls | integer | The maximum number of total calls to built-in tools that can be processed in a response. This maximum number applies across all built-in tool calls, not per individual tool. Any further attempts to call a tool by the model will be ignored. | No | |
| └─ metadata | object | Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard. Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters. |
No | |
| └─ object | enum | The object type of this resource - always set to response. Possible values: response |
No | |
| └─ output | array | An array of content items generated by the model. - The length and order of items in the output array is dependent on the model's response. - Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs. |
No | |
| └─ output_text | string | SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. Supported in the Python and JavaScript SDKs. |
No | |
| └─ parallel_tool_calls | boolean | Whether to allow the model to run tool calls in parallel. | No | True |
| └─ previous_response_id | string | The unique ID of the previous response to the model. Use this to create multi-turn conversations. |
No | |
| └─ prompt | OpenAI.Prompt | Reference to a prompt template and its variables. |
No | |
| └─ reasoning | OpenAI.Reasoning | reasoning models only Configuration options for reasoning models. |
No | |
| └─ status | enum | The status of the response generation. One of completed, failed, in_progress, cancelled, queued, or incomplete. Possible values: completed, failed, in_progress, cancelled, queued, incomplete |
No | |
| └─ temperature | number | What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both. |
No | |
| └─ text | object | Configuration options for a text response from the model. Can be plain text or structured JSON data. Learn more: Structured Outputs |
No | |
| └─ format | OpenAI.ResponseTextFormatConfiguration | No | ||
| └─ tool_choice | OpenAI.ToolChoiceOptions or OpenAI.ToolChoiceObject | How the model should select which tool (or tools) to use when generating a response. See the tools parameter to see how to specify which tools the model can call. |
No | |
| └─ tools | array | An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter. The two categories of tools you can provide the model are: - Built-in tools: Tools that are provided by OpenAI that extend the model's capabilities, like web search or file search. |
No | |
| └─ top_logprobs | integer | An integer between 0 and 20 specifying the number of most likely tokens to return at each token position, each with an associated log probability. | No | |
| └─ top_p | number | An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or temperature but not both. |
No | |
| └─ truncation | enum | The truncation strategy to use for the model response. - auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation. - disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error. Possible values: auto, disabled |
No | |
| └─ usage | OpenAI.ResponseUsage | Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used. |
No | |
| └─ user | string | A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse. | No | |
| type | enum | The type of the event. Always response.failed. Possible values: response.failed |
Yes |
OpenAI.ResponseFileSearchCallCompletedEvent
Emitted when a file search call is completed (results found).
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| item_id | string | The ID of the output item that initiated the file search call. | Yes | |
| output_index | integer | The index of the output item that initiated the file search call. | Yes | |
| type | enum | The type of the event. Always response.file_search_call.completed. Possible values: response.file_search_call.completed |
Yes |
OpenAI.ResponseFileSearchCallInProgressEvent
Emitted when a file search call is initiated.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| item_id | string | The ID of the output item that initiated the file search call. | Yes | |
| output_index | integer | The index of the output item that initiated the file search call. | Yes | |
| type | enum | The type of the event. Always response.file_search_call.in_progress. Possible values: response.file_search_call.in_progress |
Yes |
OpenAI.ResponseFileSearchCallSearchingEvent
Emitted when a file search is currently searching.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| item_id | string | The ID of the output item that initiated the file search call. | Yes | |
| output_index | integer | The index of the output item that the file search call is searching. | Yes | |
| type | enum | The type of the event. Always response.file_search_call.searching. Possible values: response.file_search_call.searching |
Yes |
OpenAI.ResponseFormat
Discriminator for OpenAI.ResponseFormat
This component uses the property type to discriminate between different types:
| Type Value | Schema |
|---|---|
| text | OpenAI.ResponseFormatText |
| json_object | OpenAI.ResponseFormatJsonObject |
| json_schema | OpenAI.ResponseFormatJsonSchema |
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| type | enum | Possible values: text, json_object, json_schema |
Yes |
OpenAI.ResponseFormatJsonObject
JSON object response format. An older method of generating JSON responses.
Using json_schema is recommended for models that support it. Note that the
model will not generate JSON without a system or user message instructing it
to do so.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| type | enum | The type of response format being defined. Always json_object. Possible values: json_object |
Yes |
OpenAI.ResponseFormatJsonSchema
JSON Schema response format. Used to generate structured JSON responses. Learn more about Structured Outputs.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| json_schema | object | Structured Outputs configuration options, including a JSON Schema. | Yes | |
| └─ description | string | A description of what the response format is for, used by the model to determine how to respond in the format. |
No | |
| └─ name | string | The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64. |
No | |
| └─ schema | OpenAI.ResponseFormatJsonSchemaSchema | The schema for the response format, described as a JSON Schema object. Learn how to build JSON schemas here. |
No | |
| └─ strict | boolean | Whether to enable strict schema adherence when generating the output. If set to true, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is true. To learn more, read the Structured Outputs guide. |
No | False |
| type | enum | The type of response format being defined. Always json_schema. Possible values: json_schema |
Yes |
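Putting the fields above together, a json_schema response format might look like the following hedged sketch. The city_info schema, its description, and its properties are made-up examples, not values from the reference.

```python
# A json_schema response format built from the fields in the table above
# (name, description, schema, strict). The schema content is illustrative.
response_format = {
    "type": "json_schema",
    "json_schema": {
        # Name: a-z, A-Z, 0-9, underscores and dashes; maximum length 64.
        "name": "city_info",
        "description": "Basic facts about a city.",
        # strict=True enforces exact schema adherence (JSON Schema subset).
        "strict": True,
        "schema": {
            "type": "object",
            "properties": {
                "name": {"type": "string"},
                "population": {"type": "integer"},
            },
            "required": ["name", "population"],
            "additionalProperties": False,
        },
    },
}
```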
OpenAI.ResponseFormatJsonSchemaSchema
The schema for the response format, described as a JSON Schema object. Learn how to build JSON schemas here.
Type: object
OpenAI.ResponseFormatText
Default response format. Used to generate text responses.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| type | enum | The type of response format being defined. Always text. Possible values: text |
Yes |
OpenAI.ResponseFunctionCallArgumentsDeltaEvent
Emitted when there is a partial function-call arguments delta.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| delta | string | The function-call arguments delta that is added. | Yes | |
| item_id | string | The ID of the output item that the function-call arguments delta is added to. | Yes | |
| obfuscation | string | A field of random characters introduced by stream obfuscation. Stream obfuscation is a mechanism that mitigates certain side-channel attacks. | Yes | |
| output_index | integer | The index of the output item that the function-call arguments delta is added to. | Yes | |
| type | enum | The type of the event. Always response.function_call_arguments.delta. Possible values: response.function_call_arguments.delta |
Yes |
OpenAI.ResponseFunctionCallArgumentsDoneEvent
Emitted when function-call arguments are finalized.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| arguments | string | The function-call arguments. | Yes | |
| item_id | string | The ID of the item. | Yes | |
| output_index | integer | The index of the output item. | Yes | |
| type | enum | The type of the event. Always response.function_call_arguments.done. Possible values: response.function_call_arguments.done |
Yes |
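Because each delta is only a fragment of a JSON string, the arguments should be concatenated across delta events and parsed only once the done event arrives. A hedged sketch with fabricated delta strings:

```python
import json

# Concatenating response.function_call_arguments.delta payloads and
# comparing against the finalized .done event. Fragments are fabricated.
deltas = ['{"loca', 'tion": "Par', 'is"}']
done_arguments = '{"location": "Paris"}'  # arguments field of the .done event

accumulated = "".join(deltas)
# Parsing a partial fragment would raise json.JSONDecodeError; parse only
# after the stream signals completion.
args = json.loads(accumulated)
```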
OpenAI.ResponseImageGenCallCompletedEvent
Emitted when an image generation tool call has completed and the final image is available.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| item_id | string | The unique identifier of the image generation item being processed. | Yes | |
| output_index | integer | The index of the output item in the response's output array. | Yes | |
| type | enum | The type of the event. Always 'response.image_generation_call.completed'. Possible values: response.image_generation_call.completed |
Yes |
OpenAI.ResponseImageGenCallGeneratingEvent
Emitted when an image generation tool call is actively generating an image (intermediate state).
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| item_id | string | The unique identifier of the image generation item being processed. | Yes | |
| output_index | integer | The index of the output item in the response's output array. | Yes | |
| type | enum | The type of the event. Always 'response.image_generation_call.generating'. Possible values: response.image_generation_call.generating |
Yes |
OpenAI.ResponseImageGenCallInProgressEvent
Emitted when an image generation tool call is in progress.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| item_id | string | The unique identifier of the image generation item being processed. | Yes | |
| output_index | integer | The index of the output item in the response's output array. | Yes | |
| type | enum | The type of the event. Always 'response.image_generation_call.in_progress'. Possible values: response.image_generation_call.in_progress |
Yes |
OpenAI.ResponseImageGenCallPartialImageEvent
Emitted when a partial image is available during image generation streaming.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| item_id | string | The unique identifier of the image generation item being processed. | Yes | |
| output_index | integer | The index of the output item in the response's output array. | Yes | |
| partial_image_b64 | string | Base64-encoded partial image data, suitable for rendering as an image. | Yes | |
| partial_image_index | integer | 0-based index for the partial image (backend is 1-based, but this is 0-based for the user). | Yes | |
| type | enum | The type of the event. Always 'response.image_generation_call.partial_image'. Possible values: response.image_generation_call.partial_image | Yes | |
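For clients that render progressive previews, handling this event comes down to base64-decoding `partial_image_b64`. The sketch below is a minimal illustration, not SDK code; the event dict mirrors the fields in the table above, and the sample payload (item ID, image bytes) is invented for the example.

```python
import base64

def handle_partial_image(event: dict) -> bytes:
    """Decode the base64 payload of a partial-image streaming event.

    Expects an event shaped like OpenAI.ResponseImageGenCallPartialImageEvent;
    a caller would typically write the bytes to disk or a preview widget.
    """
    assert event["type"] == "response.image_generation_call.partial_image"
    return base64.b64decode(event["partial_image_b64"])

# Illustrative event payload (not captured from a live stream).
sample_event = {
    "type": "response.image_generation_call.partial_image",
    "item_id": "ig_123",
    "output_index": 0,
    "partial_image_index": 0,  # 0-based for the user
    "partial_image_b64": base64.b64encode(b"\x89PNG\r\n").decode("ascii"),
}

image_bytes = handle_partial_image(sample_event)
```

Because partial images for the same `item_id` arrive in order of `partial_image_index`, later events can simply overwrite earlier previews.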
OpenAI.ResponseInProgressEvent
Emitted when the response is in progress.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| response | object | | Yes | |
| └─ background | boolean | Whether to run the model response in the background. | No | False |
| └─ created_at | integer | Unix timestamp (in seconds) of when this Response was created. | No | |
| └─ error | OpenAI.ResponseError | An error object returned when the model fails to generate a Response. | No | |
| └─ id | string | Unique identifier for this Response. | No | |
| └─ incomplete_details | object | Details about why the response is incomplete. | No | |
| └─ reason | enum | The reason why the response is incomplete. Possible values: max_output_tokens, content_filter | No | |
| └─ instructions | string or array | A system (or developer) message inserted into the model's context. When used along with previous_response_id, the instructions from a previous response are not carried over to the next response. This makes it simple to swap out system (or developer) messages in new responses. | No | |
| └─ max_output_tokens | integer | An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens. | No | |
| └─ max_tool_calls | integer | The maximum number of total calls to built-in tools that can be processed in a response. This maximum applies across all built-in tool calls, not per individual tool. Any further attempts by the model to call a tool are ignored. | No | |
| └─ metadata | object | Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via the API or the dashboard. Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters. | No | |
| └─ object | enum | The object type of this resource, always set to response. Possible values: response | No | |
| └─ output | array | An array of content items generated by the model. The length and order of items in the output array depend on the model's response. Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, consider using the output_text property where supported in SDKs. | No | |
| └─ output_text | string | SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. Supported in the Python and JavaScript SDKs. | No | |
| └─ parallel_tool_calls | boolean | Whether to allow the model to run tool calls in parallel. | No | True |
| └─ previous_response_id | string | The unique ID of the previous response to the model. Use this to create multi-turn conversations. | No | |
| └─ prompt | OpenAI.Prompt | Reference to a prompt template and its variables. | No | |
| └─ reasoning | OpenAI.Reasoning | Reasoning models only. Configuration options for reasoning models. | No | |
| └─ status | enum | The status of the response generation. One of completed, failed, in_progress, cancelled, queued, or incomplete. Possible values: completed, failed, in_progress, cancelled, queued, incomplete | No | |
| └─ temperature | number | What sampling temperature to use, between 0 and 2. Higher values like 0.8 make the output more random, while lower values like 0.2 make it more focused and deterministic. We generally recommend altering this or top_p but not both. | No | |
| └─ text | object | Configuration options for a text response from the model. Can be plain text or structured JSON data. Learn more: Structured Outputs | No | |
| └─ format | OpenAI.ResponseTextFormatConfiguration | | No | |
| └─ tool_choice | OpenAI.ToolChoiceOptions or OpenAI.ToolChoiceObject | How the model should select which tool (or tools) to use when generating a response. See the tools parameter for how to specify which tools the model can call. | No | |
| └─ tools | array | An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter. The two categories of tools you can provide the model are: built-in tools, which are provided by OpenAI and extend the model's capabilities, like web search or file search. | No | |
| └─ top_logprobs | integer | An integer between 0 and 20 specifying the number of most likely tokens to return at each token position, each with an associated log probability. | No | |
| └─ top_p | number | An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or temperature but not both. | No | |
| └─ truncation | enum | The truncation strategy to use for the model response. auto: if the context of this response and previous ones exceeds the model's context window size, the model truncates the response to fit the context window by dropping input items in the middle of the conversation. disabled (default): if a model response would exceed the context window size for a model, the request fails with a 400 error. Possible values: auto, disabled | No | |
| └─ usage | OpenAI.ResponseUsage | Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used. | No | |
| └─ user | string | A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse. | No | |
| type | enum | The type of the event. Always response.in_progress. Possible values: response.in_progress | Yes | |
OpenAI.ResponseIncompleteEvent
An event that is emitted when a response finishes as incomplete.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| response | object | | Yes | |
| └─ background | boolean | Whether to run the model response in the background. | No | False |
| └─ created_at | integer | Unix timestamp (in seconds) of when this Response was created. | No | |
| └─ error | OpenAI.ResponseError | An error object returned when the model fails to generate a Response. | No | |
| └─ id | string | Unique identifier for this Response. | No | |
| └─ incomplete_details | object | Details about why the response is incomplete. | No | |
| └─ reason | enum | The reason why the response is incomplete. Possible values: max_output_tokens, content_filter | No | |
| └─ instructions | string or array | A system (or developer) message inserted into the model's context. When used along with previous_response_id, the instructions from a previous response are not carried over to the next response. This makes it simple to swap out system (or developer) messages in new responses. | No | |
| └─ max_output_tokens | integer | An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens. | No | |
| └─ max_tool_calls | integer | The maximum number of total calls to built-in tools that can be processed in a response. This maximum applies across all built-in tool calls, not per individual tool. Any further attempts by the model to call a tool are ignored. | No | |
| └─ metadata | object | Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via the API or the dashboard. Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters. | No | |
| └─ object | enum | The object type of this resource, always set to response. Possible values: response | No | |
| └─ output | array | An array of content items generated by the model. The length and order of items in the output array depend on the model's response. Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, consider using the output_text property where supported in SDKs. | No | |
| └─ output_text | string | SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. Supported in the Python and JavaScript SDKs. | No | |
| └─ parallel_tool_calls | boolean | Whether to allow the model to run tool calls in parallel. | No | True |
| └─ previous_response_id | string | The unique ID of the previous response to the model. Use this to create multi-turn conversations. | No | |
| └─ prompt | OpenAI.Prompt | Reference to a prompt template and its variables. | No | |
| └─ reasoning | OpenAI.Reasoning | Reasoning models only. Configuration options for reasoning models. | No | |
| └─ status | enum | The status of the response generation. One of completed, failed, in_progress, cancelled, queued, or incomplete. Possible values: completed, failed, in_progress, cancelled, queued, incomplete | No | |
| └─ temperature | number | What sampling temperature to use, between 0 and 2. Higher values like 0.8 make the output more random, while lower values like 0.2 make it more focused and deterministic. We generally recommend altering this or top_p but not both. | No | |
| └─ text | object | Configuration options for a text response from the model. Can be plain text or structured JSON data. Learn more: Structured Outputs | No | |
| └─ format | OpenAI.ResponseTextFormatConfiguration | | No | |
| └─ tool_choice | OpenAI.ToolChoiceOptions or OpenAI.ToolChoiceObject | How the model should select which tool (or tools) to use when generating a response. See the tools parameter for how to specify which tools the model can call. | No | |
| └─ tools | array | An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter. The two categories of tools you can provide the model are: built-in tools, which are provided by OpenAI and extend the model's capabilities, like web search or file search. | No | |
| └─ top_logprobs | integer | An integer between 0 and 20 specifying the number of most likely tokens to return at each token position, each with an associated log probability. | No | |
| └─ top_p | number | An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or temperature but not both. | No | |
| └─ truncation | enum | The truncation strategy to use for the model response. auto: if the context of this response and previous ones exceeds the model's context window size, the model truncates the response to fit the context window by dropping input items in the middle of the conversation. disabled (default): if a model response would exceed the context window size for a model, the request fails with a 400 error. Possible values: auto, disabled | No | |
| └─ usage | OpenAI.ResponseUsage | Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used. | No | |
| └─ user | string | A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse. | No | |
| type | enum | The type of the event. Always response.incomplete. Possible values: response.incomplete | Yes | |
OpenAI.ResponseItemList
A list of Response items.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| data | array | A list of items used to generate this response. | Yes | |
| first_id | string | The ID of the first item in the list. | Yes | |
| has_more | boolean | Whether there are more items available. | Yes | |
| last_id | string | The ID of the last item in the list. | Yes | |
| object | enum | The type of object returned, must be list. Possible values: list | Yes | |
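A caller can walk a paginated listing of this shape by following `last_id` and `has_more`. The sketch below is a hypothetical helper, not part of any SDK; `fetch_page` stands in for whatever request function retrieves one ResponseItemList page for a given cursor.

```python
from typing import Callable, Iterator, Optional

def iterate_items(fetch_page: Callable[[Optional[str]], dict]) -> Iterator[dict]:
    """Walk an OpenAI.ResponseItemList-shaped listing page by page.

    fetch_page takes a cursor (the previous page's last_id, or None for
    the first page) and returns one ResponseItemList:
    {"object": "list", "data": [...], "first_id": ..., "last_id": ...,
     "has_more": bool}.
    """
    cursor = None
    while True:
        page = fetch_page(cursor)
        yield from page["data"]          # emit every item on this page
        if not page["has_more"]:
            break                         # no further pages to request
        cursor = page["last_id"]          # resume after the last item seen

# Stubbed two-page listing for illustration (invented IDs).
pages = {
    None: {"object": "list", "data": [{"id": "item_1"}, {"id": "item_2"}],
           "first_id": "item_1", "last_id": "item_2", "has_more": True},
    "item_2": {"object": "list", "data": [{"id": "item_3"}],
               "first_id": "item_3", "last_id": "item_3", "has_more": False},
}
items = list(iterate_items(lambda cursor: pages[cursor]))
```

Driving the loop from `has_more` rather than from an empty `data` array avoids one extra round trip on the final page.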
OpenAI.ResponseMCPCallArgumentsDeltaEvent
Emitted when there is a delta (partial update) to the arguments of an MCP tool call.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| delta | | The partial update to the arguments for the MCP tool call. | Yes | |
| item_id | string | The unique identifier of the MCP tool call item being processed. | Yes | |
| obfuscation | string | A field of random characters introduced by stream obfuscation. Stream obfuscation is a mechanism that mitigates certain side-channel attacks. | Yes | |
| output_index | integer | The index of the output item in the response's output array. | Yes | |
| type | enum | The type of the event. Always 'response.mcp_call.arguments_delta'. Possible values: response.mcp_call.arguments_delta | Yes | |
OpenAI.ResponseMCPCallArgumentsDoneEvent
Emitted when the arguments for an MCP tool call are finalized.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| arguments | | The finalized arguments for the MCP tool call. | Yes | |
| item_id | string | The unique identifier of the MCP tool call item being processed. | Yes | |
| output_index | integer | The index of the output item in the response's output array. | Yes | |
| type | enum | The type of the event. Always 'response.mcp_call.arguments_done'. Possible values: response.mcp_call.arguments_done | Yes | |
OpenAI.ResponseMCPCallCompletedEvent
Emitted when an MCP tool call has completed successfully.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| type | enum | The type of the event. Always 'response.mcp_call.completed'. Possible values: response.mcp_call.completed | Yes | |
OpenAI.ResponseMCPCallFailedEvent
Emitted when an MCP tool call has failed.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| type | enum | The type of the event. Always 'response.mcp_call.failed'. Possible values: response.mcp_call.failed | Yes | |
OpenAI.ResponseMCPCallInProgressEvent
Emitted when an MCP tool call is in progress.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| item_id | string | The unique identifier of the MCP tool call item being processed. | Yes | |
| output_index | integer | The index of the output item in the response's output array. | Yes | |
| type | enum | The type of the event. Always 'response.mcp_call.in_progress'. Possible values: response.mcp_call.in_progress | Yes | |
OpenAI.ResponseMCPListToolsCompletedEvent
Emitted when the list of available MCP tools has been successfully retrieved.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| type | enum | The type of the event. Always 'response.mcp_list_tools.completed'. Possible values: response.mcp_list_tools.completed | Yes | |
OpenAI.ResponseMCPListToolsFailedEvent
Emitted when the attempt to list available MCP tools has failed.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| type | enum | The type of the event. Always 'response.mcp_list_tools.failed'. Possible values: response.mcp_list_tools.failed | Yes | |
OpenAI.ResponseMCPListToolsInProgressEvent
Emitted when the system is in the process of retrieving the list of available MCP tools.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| type | enum | The type of the event. Always 'response.mcp_list_tools.in_progress'. Possible values: response.mcp_list_tools.in_progress | Yes | |
OpenAI.ResponseOutputItemAddedEvent
Emitted when a new output item is added.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| item | object | Content item used to generate a response. | Yes | |
| └─ id | string | | No | |
| └─ type | OpenAI.ItemType | | No | |
| output_index | integer | The index of the output item that was added. | Yes | |
| type | enum | The type of the event. Always response.output_item.added. Possible values: response.output_item.added | Yes | |
OpenAI.ResponseOutputItemDoneEvent
Emitted when an output item is marked done.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| item | object | Content item used to generate a response. | Yes | |
| └─ id | string | | No | |
| └─ type | OpenAI.ItemType | | No | |
| output_index | integer | The index of the output item that was marked done. | Yes | |
| type | enum | The type of the event. Always response.output_item.done. Possible values: response.output_item.done | Yes | |
OpenAI.ResponsePromptVariables
Optional map of values to substitute in for variables in your prompt. The substitution values can either be strings, or other Response input types like images or files.
Type: object
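As a hypothetical illustration of the two value kinds this map allows (plain strings and Response input items), a variables payload might look like the following. The variable names and the input_image item shape are assumptions for the example, not values taken from this specification.

```python
# A minimal sketch of an OpenAI.ResponsePromptVariables map.
# String values are substituted into the prompt template directly;
# non-string values use Response input types (input_image shown here
# as an assumed example of such an item).
prompt_variables = {
    "customer_name": "Jane Doe",          # plain string substitution
    "product_image": {                    # a Response image input item
        "type": "input_image",
        "image_url": "https://example.com/product.png",
    },
}
```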
OpenAI.ResponseQueuedEvent
Emitted when a response is queued and waiting to be processed.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| response | object | | Yes | |
| └─ background | boolean | Whether to run the model response in the background. | No | False |
| └─ created_at | integer | Unix timestamp (in seconds) of when this Response was created. | No | |
| └─ error | OpenAI.ResponseError | An error object returned when the model fails to generate a Response. | No | |
| └─ id | string | Unique identifier for this Response. | No | |
| └─ incomplete_details | object | Details about why the response is incomplete. | No | |
| └─ reason | enum | The reason why the response is incomplete. Possible values: max_output_tokens, content_filter | No | |
| └─ instructions | string or array | A system (or developer) message inserted into the model's context. When used along with previous_response_id, the instructions from a previous response are not carried over to the next response. This makes it simple to swap out system (or developer) messages in new responses. | No | |
| └─ max_output_tokens | integer | An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens. | No | |
| └─ max_tool_calls | integer | The maximum number of total calls to built-in tools that can be processed in a response. This maximum applies across all built-in tool calls, not per individual tool. Any further attempts by the model to call a tool are ignored. | No | |
| └─ metadata | object | Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via the API or the dashboard. Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters. | No | |
| └─ object | enum | The object type of this resource, always set to response. Possible values: response | No | |
| └─ output | array | An array of content items generated by the model. The length and order of items in the output array depend on the model's response. Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, consider using the output_text property where supported in SDKs. | No | |
| └─ output_text | string | SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. Supported in the Python and JavaScript SDKs. | No | |
| └─ parallel_tool_calls | boolean | Whether to allow the model to run tool calls in parallel. | No | True |
| └─ previous_response_id | string | The unique ID of the previous response to the model. Use this to create multi-turn conversations. | No | |
| └─ prompt | OpenAI.Prompt | Reference to a prompt template and its variables. | No | |
| └─ reasoning | OpenAI.Reasoning | Reasoning models only. Configuration options for reasoning models. | No | |
| └─ status | enum | The status of the response generation. One of completed, failed, in_progress, cancelled, queued, or incomplete. Possible values: completed, failed, in_progress, cancelled, queued, incomplete | No | |
| └─ temperature | number | What sampling temperature to use, between 0 and 2. Higher values like 0.8 make the output more random, while lower values like 0.2 make it more focused and deterministic. We generally recommend altering this or top_p but not both. | No | |
| └─ text | object | Configuration options for a text response from the model. Can be plain text or structured JSON data. Learn more: Structured Outputs | No | |
| └─ format | OpenAI.ResponseTextFormatConfiguration | | No | |
| └─ tool_choice | OpenAI.ToolChoiceOptions or OpenAI.ToolChoiceObject | How the model should select which tool (or tools) to use when generating a response. See the tools parameter for how to specify which tools the model can call. | No | |
| └─ tools | array | An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter. The two categories of tools you can provide the model are: built-in tools, which are provided by OpenAI and extend the model's capabilities, like web search or file search. | No | |
| └─ top_logprobs | integer | An integer between 0 and 20 specifying the number of most likely tokens to return at each token position, each with an associated log probability. | No | |
| └─ top_p | number | An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or temperature but not both. | No | |
| └─ truncation | enum | The truncation strategy to use for the model response. auto: if the context of this response and previous ones exceeds the model's context window size, the model truncates the response to fit the context window by dropping input items in the middle of the conversation. disabled (default): if a model response would exceed the context window size for a model, the request fails with a 400 error. Possible values: auto, disabled | No | |
| └─ usage | OpenAI.ResponseUsage | Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used. | No | |
| └─ user | string | A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse. | No | |
| type | enum | The type of the event. Always 'response.queued'. Possible values: response.queued | Yes | |
OpenAI.ResponseReasoningDeltaEvent
Emitted when there is a delta (partial update) to the reasoning content.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| content_index | integer | The index of the reasoning content part within the output item. | Yes | |
| delta | | The partial update to the reasoning content. | Yes | |
| item_id | string | The unique identifier of the item for which reasoning is being updated. | Yes | |
| obfuscation | string | A field of random characters introduced by stream obfuscation. Stream obfuscation is a mechanism that mitigates certain side-channel attacks. | Yes | |
| output_index | integer | The index of the output item in the response's output array. | Yes | |
| type | enum | The type of the event. Always 'response.reasoning.delta'. Possible values: response.reasoning.delta | Yes | |
OpenAI.ResponseReasoningDoneEvent
Emitted when the reasoning content is finalized for an item.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| content_index | integer | The index of the reasoning content part within the output item. | Yes | |
| item_id | string | The unique identifier of the item for which reasoning is finalized. | Yes | |
| output_index | integer | The index of the output item in the response's output array. | Yes | |
| text | string | The finalized reasoning text. | Yes | |
| type | enum | The type of the event. Always 'response.reasoning.done'. Possible values: response.reasoning.done | Yes | |
OpenAI.ResponseReasoningSummaryDeltaEvent
Emitted when there is a delta (partial update) to the reasoning summary content.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| delta | | The partial update to the reasoning summary content. | Yes | |
| item_id | string | The unique identifier of the item for which the reasoning summary is being updated. | Yes | |
| obfuscation | string | A field of random characters introduced by stream obfuscation. Stream obfuscation is a mechanism that mitigates certain side-channel attacks. | Yes | |
| output_index | integer | The index of the output item in the response's output array. | Yes | |
| summary_index | integer | The index of the summary part within the output item. | Yes | |
| type | enum | The type of the event. Always 'response.reasoning_summary.delta'. Possible values: response.reasoning_summary.delta | Yes | |
OpenAI.ResponseReasoningSummaryDoneEvent
Emitted when the reasoning summary content is finalized for an item.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| item_id | string | The unique identifier of the item for which the reasoning summary is finalized. | Yes | |
| output_index | integer | The index of the output item in the response's output array. | Yes | |
| summary_index | integer | The index of the summary part within the output item. | Yes | |
| text | string | The finalized reasoning summary text. | Yes | |
| type | enum | The type of the event. Always 'response.reasoning_summary.done'. Possible values: response.reasoning_summary.done | Yes | |
OpenAI.ResponseReasoningSummaryPartAddedEvent
Emitted when a new reasoning summary part is added.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| item_id | string | The ID of the item this summary part is associated with. | Yes | |
| output_index | integer | The index of the output item this summary part is associated with. | Yes | |
| part | object | | Yes | |
| └─ type | OpenAI.ReasoningItemSummaryPartType | | No | |
| summary_index | integer | The index of the summary part within the reasoning summary. | Yes | |
| type | enum | The type of the event. Always response.reasoning_summary_part.added. Possible values: response.reasoning_summary_part.added | Yes | |
OpenAI.ResponseReasoningSummaryPartDoneEvent
Emitted when a reasoning summary part is completed.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| item_id | string | The ID of the item this summary part is associated with. | Yes | |
| output_index | integer | The index of the output item this summary part is associated with. | Yes | |
| part | object | | Yes | |
| └─ type | OpenAI.ReasoningItemSummaryPartType | | No | |
| summary_index | integer | The index of the summary part within the reasoning summary. | Yes | |
| type | enum | The type of the event. Always response.reasoning_summary_part.done. Possible values: response.reasoning_summary_part.done | Yes | |
OpenAI.ResponseReasoningSummaryTextDeltaEvent
Emitted when a delta is added to a reasoning summary text.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| delta | string | The text delta that was added to the summary. | Yes | |
| item_id | string | The ID of the item this summary text delta is associated with. | Yes | |
| obfuscation | string | A field of random characters introduced by stream obfuscation. Stream obfuscation is a mechanism that mitigates certain side-channel attacks. | Yes | |
| output_index | integer | The index of the output item this summary text delta is associated with. | Yes | |
| summary_index | integer | The index of the summary part within the reasoning summary. | Yes | |
| type | enum | The type of the event. Always response.reasoning_summary_text.delta. Possible values: response.reasoning_summary_text.delta | Yes | |
OpenAI.ResponseReasoningSummaryTextDoneEvent
Emitted when a reasoning summary text is completed.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| item_id | string | The ID of the item this summary text is associated with. | Yes | |
| output_index | integer | The index of the output item this summary text is associated with. | Yes | |
| summary_index | integer | The index of the summary part within the reasoning summary. | Yes | |
| text | string | The full text of the completed reasoning summary. | Yes | |
| type | enum | The type of the event. Always response.reasoning_summary_text.done. Possible values: response.reasoning_summary_text.done | Yes | |
OpenAI.ResponseRefusalDeltaEvent
Emitted when there is a partial refusal text.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| content_index | integer | The index of the content part that the refusal text is added to. | Yes | |
| delta | string | The refusal text that is added. | Yes | |
| item_id | string | The ID of the output item that the refusal text is added to. | Yes | |
| obfuscation | string | A field of random characters introduced by stream obfuscation. Stream obfuscation is a mechanism that mitigates certain side-channel attacks. | Yes | |
| output_index | integer | The index of the output item that the refusal text is added to. | Yes | |
| type | enum | The type of the event. Always response.refusal.delta. Possible values: response.refusal.delta | Yes | |
OpenAI.ResponseRefusalDoneEvent
Emitted when refusal text is finalized.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| content_index | integer | The index of the content part in which the refusal text is finalized. | Yes | |
| item_id | string | The ID of the output item in which the refusal text is finalized. | Yes | |
| output_index | integer | The index of the output item in which the refusal text is finalized. | Yes | |
| refusal | string | The refusal text that is finalized. | Yes | |
| type | enum | The type of the event. Always response.refusal.done. Possible values: response.refusal.done | Yes | |
OpenAI.ResponseStreamEvent
Discriminator for OpenAI.ResponseStreamEvent
This component uses the property type to discriminate between different types:
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| sequence_number | integer | The sequence number for this event. | Yes | |
| type | OpenAI.ResponseStreamEventType | | Yes | |
OpenAI.ResponseStreamEventType
| Property | Value |
|---|---|
| Type | string |
| Values | response.audio.delta, response.audio.done, response.audio_transcript.delta, response.audio_transcript.done, response.code_interpreter_call_code.delta, response.code_interpreter_call_code.done, response.code_interpreter_call.completed, response.code_interpreter_call.in_progress, response.code_interpreter_call.interpreting, response.completed, response.content_part.added, response.content_part.done, response.created, error, response.file_search_call.completed, response.file_search_call.in_progress, response.file_search_call.searching, response.function_call_arguments.delta, response.function_call_arguments.done, response.in_progress, response.failed, response.incomplete, response.output_item.added, response.output_item.done, response.refusal.delta, response.refusal.done, response.output_text.annotation.added, response.output_text.delta, response.output_text.done, response.reasoning_summary_part.added, response.reasoning_summary_part.done, response.reasoning_summary_text.delta, response.reasoning_summary_text.done, response.web_search_call.completed, response.web_search_call.in_progress, response.web_search_call.searching, response.image_generation_call.completed, response.image_generation_call.generating, response.image_generation_call.in_progress, response.image_generation_call.partial_image, response.mcp_call.arguments_delta, response.mcp_call.arguments_done, response.mcp_call.completed, response.mcp_call.failed, response.mcp_call.in_progress, response.mcp_list_tools.completed, response.mcp_list_tools.failed, response.mcp_list_tools.in_progress, response.queued, response.reasoning.delta, response.reasoning.done, response.reasoning_summary.delta, response.reasoning_summary.done |
OpenAI.ResponseTextDeltaEvent
Emitted when there is an additional text delta.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| content_index | integer | The index of the content part that the text delta was added to. | Yes | |
| delta | string | The text delta that was added. | Yes | |
| item_id | string | The ID of the output item that the text delta was added to. | Yes | |
| obfuscation | string | A field of random characters introduced by stream obfuscation. Stream obfuscation is a mechanism that mitigates certain side-channel attacks. | Yes | |
| output_index | integer | The index of the output item that the text delta was added to. | Yes | |
| type | enum | The type of the event. Always response.output_text.delta. Possible values: response.output_text.delta | Yes | |
OpenAI.ResponseTextDoneEvent
Emitted when text content is finalized.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| content_index | integer | The index of the content part that the text content is finalized. | Yes | |
| item_id | string | The ID of the output item that the text content is finalized. | Yes | |
| output_index | integer | The index of the output item that the text content is finalized. | Yes | |
| text | string | The text content that is finalized. | Yes | |
| type | enum | The type of the event. Always response.output_text.done. Possible values: response.output_text.done | Yes | |
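The delta and done events above can be reassembled client-side. The sketch below is a hypothetical illustration of that flow; the event dicts are hand-written examples shaped after the tables above, not captured API output.

```python
# Sketch: accumulate response.output_text.delta events per
# (item_id, content_index) and compare against the final
# response.output_text.done event.

def accumulate_output_text(events):
    """Concatenate delta payloads; collect finalized texts."""
    buffers = {}
    final_texts = {}
    for event in events:
        key = (event["item_id"], event["content_index"])
        if event["type"] == "response.output_text.delta":
            buffers[key] = buffers.get(key, "") + event["delta"]
        elif event["type"] == "response.output_text.done":
            final_texts[key] = event["text"]
    return buffers, final_texts

events = [
    {"type": "response.output_text.delta", "item_id": "msg_1",
     "output_index": 0, "content_index": 0, "delta": "Hello, "},
    {"type": "response.output_text.delta", "item_id": "msg_1",
     "output_index": 0, "content_index": 0, "delta": "world."},
    {"type": "response.output_text.done", "item_id": "msg_1",
     "output_index": 0, "content_index": 0, "text": "Hello, world."},
]

buffers, finals = accumulate_output_text(events)
# The accumulated deltas should equal the finalized text.
assert buffers[("msg_1", 0)] == finals[("msg_1", 0)]
```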
OpenAI.ResponseTextFormatConfiguration
Discriminator for OpenAI.ResponseTextFormatConfiguration
This component uses the property type to discriminate between different types:
| Type Value | Schema |
|---|---|
| text | OpenAI.ResponseTextFormatConfigurationText |
| json_object | OpenAI.ResponseTextFormatConfigurationJsonObject |
| json_schema | OpenAI.ResponseTextFormatConfigurationJsonSchema |
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| type | OpenAI.ResponseTextFormatConfigurationType | An object specifying the format that the model must output. Configuring { "type": "json_schema" } enables Structured Outputs, which ensures the model will match your supplied JSON schema. Learn more in the Structured Outputs guide. The default format is { "type": "text" } with no additional options. Not recommended for gpt-4o and newer models: setting { "type": "json_object" } enables the older JSON mode, which ensures the message the model generates is valid JSON. Using json_schema is preferred for models that support it. | Yes | |
OpenAI.ResponseTextFormatConfigurationJsonObject
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| type | enum | Possible values: json_object | Yes | |
OpenAI.ResponseTextFormatConfigurationJsonSchema
JSON Schema response format. Used to generate structured JSON responses. Learn more about Structured Outputs.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| description | string | A description of what the response format is for, used by the model to determine how to respond in the format. | No | |
| name | string | The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64. | Yes | |
| schema | OpenAI.ResponseFormatJsonSchemaSchema | The schema for the response format, described as a JSON Schema object. Learn how to build JSON schemas here. | Yes | |
| strict | boolean | Whether to enable strict schema adherence when generating the output. If set to true, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is true. To learn more, read the Structured Outputs guide. | No | False |
| type | enum | The type of response format being defined. Always json_schema. Possible values: json_schema | Yes | |
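As an illustration of the fields above, here is a hypothetical json_schema format configuration. The schema itself (a structured "capital city" answer) and the name are invented for this example, not part of the API.

```python
import re

# A minimal sketch of a json_schema text format configuration,
# shaped after the fields documented above.
text_format = {
    "type": "json_schema",
    "name": "capital_answer",
    "description": "A structured answer naming a capital city.",
    "strict": True,
    "schema": {
        "type": "object",
        "properties": {
            "country": {"type": "string"},
            "capital": {"type": "string"},
        },
        "required": ["country", "capital"],
        "additionalProperties": False,
    },
}

# name must be at most 64 characters drawn from a-z, A-Z, 0-9,
# underscores, and dashes.
assert re.fullmatch(r"[A-Za-z0-9_-]{1,64}", text_format["name"])
```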
OpenAI.ResponseTextFormatConfigurationText
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| type | enum | Possible values: text | Yes | |
OpenAI.ResponseTextFormatConfigurationType
An object specifying the format that the model must output.
Configuring { "type": "json_schema" } enables Structured Outputs, which ensures the model will match your supplied JSON schema. Learn more in the Structured Outputs guide.
The default format is { "type": "text" } with no additional options.
Not recommended for gpt-4o and newer models:
Setting to { "type": "json_object" } enables the older JSON mode, which ensures the message the model generates is valid JSON. Using json_schema is preferred for models that support it.
| Property | Value |
|---|---|
| Description | An object specifying the format that the model must output. Configuring { "type": "json_schema" } enables Structured Outputs, which ensures the model will match your supplied JSON schema. Learn more in the Structured Outputs guide. The default format is { "type": "text" } with no additional options. Not recommended for gpt-4o and newer models: setting { "type": "json_object" } enables the older JSON mode, which ensures the message the model generates is valid JSON. Using json_schema is preferred for models that support it. |
| Type | string |
| Values | text, json_schema, json_object |
OpenAI.ResponseUsage
Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| input_tokens | integer | The number of input tokens. | Yes | |
| input_tokens_details | object | A detailed breakdown of the input tokens. | Yes | |
| └─ cached_tokens | integer | The number of tokens that were retrieved from the cache. More on prompt caching. | No | |
| output_tokens | integer | The number of output tokens. | Yes | |
| output_tokens_details | object | A detailed breakdown of the output tokens. | Yes | |
| └─ reasoning_tokens | integer | The number of reasoning tokens. | No | |
| total_tokens | integer | The total number of tokens used. | Yes |
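A hand-written usage payload can illustrate how these fields relate: total_tokens is the sum of input and output tokens, while cached and reasoning tokens are breakdowns of their respective counts. The numbers below are invented for illustration.

```python
# Example usage object shaped after the table above.
usage = {
    "input_tokens": 120,
    "input_tokens_details": {"cached_tokens": 100},
    "output_tokens": 45,
    "output_tokens_details": {"reasoning_tokens": 20},
    "total_tokens": 165,
}

# total is the sum of input and output tokens
assert usage["total_tokens"] == usage["input_tokens"] + usage["output_tokens"]
# cached and reasoning tokens are subsets of their parent counts
assert usage["input_tokens_details"]["cached_tokens"] <= usage["input_tokens"]
assert usage["output_tokens_details"]["reasoning_tokens"] <= usage["output_tokens"]
```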
OpenAI.ResponseWebSearchCallCompletedEvent
Note
web_search is not yet available via Azure OpenAI.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| item_id | string | Unique ID for the output item associated with the web search call. | Yes | |
| output_index | integer | The index of the output item that the web search call is associated with. | Yes | |
| type | enum | The type of the event. Always response.web_search_call.completed. Possible values: response.web_search_call.completed | Yes | |
OpenAI.ResponseWebSearchCallInProgressEvent
Note
web_search is not yet available via Azure OpenAI.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| item_id | string | Unique ID for the output item associated with the web search call. | Yes | |
| output_index | integer | The index of the output item that the web search call is associated with. | Yes | |
| type | enum | The type of the event. Always response.web_search_call.in_progress. Possible values: response.web_search_call.in_progress | Yes | |
OpenAI.ResponseWebSearchCallSearchingEvent
Note
web_search is not yet available via Azure OpenAI.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| item_id | string | Unique ID for the output item associated with the web search call. | Yes | |
| output_index | integer | The index of the output item that the web search call is associated with. | Yes | |
| type | enum | The type of the event. Always response.web_search_call.searching. Possible values: response.web_search_call.searching | Yes | |
OpenAI.ResponsesAssistantMessageItemParam
A message parameter item with the assistant role.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| content | array | The content associated with the message. | Yes | |
| role | enum | The role of the message, which is always assistant. Possible values: assistant | Yes | |
OpenAI.ResponsesAssistantMessageItemResource
A message resource item with the assistant role.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| content | array | The content associated with the message. | Yes | |
| role | enum | The role of the message, which is always assistant. Possible values: assistant | Yes | |
OpenAI.ResponsesDeveloperMessageItemParam
A message parameter item with the developer role.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| content | array | The content associated with the message. | Yes | |
| role | enum | The role of the message, which is always developer. Possible values: developer | Yes | |
OpenAI.ResponsesDeveloperMessageItemResource
A message resource item with the developer role.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| content | array | The content associated with the message. | Yes | |
| role | enum | The role of the message, which is always developer. Possible values: developer | Yes | |
OpenAI.ResponsesMessageItemParam
A response message item, representing a role and content, as provided in client request parameters.
Discriminator for OpenAI.ResponsesMessageItemParam
This component uses the property role to discriminate between different types:
| Type Value | Schema |
|---|---|
| user | OpenAI.ResponsesUserMessageItemParam |
| system | OpenAI.ResponsesSystemMessageItemParam |
| developer | OpenAI.ResponsesDeveloperMessageItemParam |
| assistant | OpenAI.ResponsesAssistantMessageItemParam |
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| role | object | The collection of valid roles for responses message items. | Yes | |
| type | enum | The type of the responses item, which is always message. Possible values: message | Yes | |
OpenAI.ResponsesMessageItemResource
A response message resource item, representing a role and content, as provided on service responses.
Discriminator for OpenAI.ResponsesMessageItemResource
This component uses the property role to discriminate between different types:
| Type Value | Schema |
|---|---|
| user | OpenAI.ResponsesUserMessageItemResource |
| system | OpenAI.ResponsesSystemMessageItemResource |
| developer | OpenAI.ResponsesDeveloperMessageItemResource |
| assistant | OpenAI.ResponsesAssistantMessageItemResource |
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| role | object | The collection of valid roles for responses message items. | Yes | |
| status | enum | The status of the item. One of in_progress, completed, or incomplete. Populated when items are returned via API. Possible values: in_progress, completed, incomplete | Yes | |
| type | enum | The type of the responses item, which is always message. Possible values: message | Yes | |
OpenAI.ResponsesMessageRole
The collection of valid roles for responses message items.
| Property | Value |
|---|---|
| Description | The collection of valid roles for responses message items. |
| Type | string |
| Values | system, developer, user, assistant |
OpenAI.ResponsesSystemMessageItemParam
A message parameter item with the system role.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| content | array | The content associated with the message. | Yes | |
| role | enum | The role of the message, which is always system. Possible values: system | Yes | |
OpenAI.ResponsesSystemMessageItemResource
A message resource item with the system role.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| content | array | The content associated with the message. | Yes | |
| role | enum | The role of the message, which is always system. Possible values: system | Yes | |
OpenAI.ResponsesUserMessageItemParam
A message parameter item with the user role.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| content | array | The content associated with the message. | Yes | |
| role | enum | The role of the message, which is always user. Possible values: user | Yes | |
OpenAI.ResponsesUserMessageItemResource
A message resource item with the user role.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| content | array | The content associated with the message. | Yes | |
| role | enum | The role of the message, which is always user. Possible values: user | Yes | |
OpenAI.RunGraderRequest
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| grader | object | A StringCheckGrader object that performs a string comparison between input and reference using a specified operation. | Yes | |
| └─ calculate_output | string | A formula to calculate the output based on grader results. | No | |
| └─ evaluation_metric | enum | The evaluation metric to use. One of fuzzy_match, bleu, gleu, meteor, rouge_1, rouge_2, rouge_3, rouge_4, rouge_5, or rouge_l. Possible values: fuzzy_match, bleu, gleu, meteor, rouge_1, rouge_2, rouge_3, rouge_4, rouge_5, rouge_l | No | |
| └─ graders | object | No | ||
| └─ image_tag | string | The image tag to use for the python script. | No | |
| └─ input | array | The input text. This may include template strings. | No | |
| └─ model | string | The model to use for the evaluation. | No | |
| └─ name | string | The name of the grader. | No | |
| └─ operation | enum | The string check operation to perform. One of eq, ne, like, or ilike. Possible values: eq, ne, like, ilike | No | |
| └─ range | array | The range of the score. Defaults to [0, 1]. | No | |
| └─ reference | string | The text being graded against. | No | |
| └─ sampling_params | The sampling parameters for the model. | No | ||
| └─ source | string | The source code of the python script. | No | |
| └─ type | enum | The object type, which is always multi. Possible values: multi | No | |
| item | | The dataset item provided to the grader. This will be used to populate the item namespace. See the guide for more details. | No | |
| model_sample | string | The model sample to be evaluated. This value will be used to populate the sample namespace. See the guide for more details. The output_json variable will be populated if the model sample is a valid JSON string. | Yes | |
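As an illustration, here is a hypothetical request body for running a string check grader against a model sample. The grader name, template strings, and sample values are invented placeholders; the field names follow the table above.

```python
# Sketch of a run-grader request body using a string_check grader.
run_grader_request = {
    "grader": {
        "type": "string_check",          # assumed grader type for StringCheckGrader
        "name": "exact_match",           # invented grader name
        "operation": "eq",               # one of eq, ne, like, ilike
        "input": "{{sample.output_text}}",
        "reference": "{{item.expected}}",
    },
    "item": {"expected": "Paris"},       # populates the item namespace
    "model_sample": "Paris",             # populates the sample namespace
}

assert run_grader_request["grader"]["operation"] in ("eq", "ne", "like", "ilike")
```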
OpenAI.RunGraderResponse
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| metadata | object | Yes | ||
| └─ errors | object | No | ||
| └─ formula_parse_error | boolean | No | ||
| └─ invalid_variable_error | boolean | No | ||
| └─ model_grader_parse_error | boolean | No | ||
| └─ model_grader_refusal_error | boolean | No | ||
| └─ model_grader_server_error | boolean | No | ||
| └─ model_grader_server_error_details | string | No | ||
| └─ other_error | boolean | No | ||
| └─ python_grader_runtime_error | boolean | No | ||
| └─ python_grader_runtime_error_details | string | No | ||
| └─ python_grader_server_error | boolean | No | ||
| └─ python_grader_server_error_type | string | No | ||
| └─ sample_parse_error | boolean | No | ||
| └─ truncated_observation_error | boolean | No | ||
| └─ unresponsive_reward_error | boolean | No | ||
| └─ execution_time | number | No | ||
| └─ name | string | No | ||
| └─ sampled_model_name | string | No | ||
| └─ scores | No | |||
| └─ token_usage | integer | No | ||
| └─ type | string | No | ||
| model_grader_token_usage_per_model | | | Yes | |
| reward | number | | Yes | |
| sub_rewards | | | Yes | |
OpenAI.StaticChunkingStrategy
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| chunk_overlap_tokens | integer | The number of tokens that overlap between chunks. The default value is 400. Note that the overlap must not exceed half of max_chunk_size_tokens. | Yes | |
| max_chunk_size_tokens | integer | The maximum number of tokens in each chunk. The default value is 800. The minimum value is 100 and the maximum value is 4096. | Yes | |
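The constraints above (chunk size between 100 and 4096 tokens, overlap at most half the chunk size) can be checked before sending a request. This is a local validation sketch, not part of the API.

```python
def validate_static_chunking(max_chunk_size_tokens=800, chunk_overlap_tokens=400):
    """Validate the documented constraints on a static chunking strategy.

    Defaults mirror the documented defaults (800 / 400).
    """
    if not 100 <= max_chunk_size_tokens <= 4096:
        raise ValueError("max_chunk_size_tokens must be between 100 and 4096")
    if chunk_overlap_tokens > max_chunk_size_tokens // 2:
        raise ValueError(
            "chunk_overlap_tokens must not exceed half of max_chunk_size_tokens")
    return {"max_chunk_size_tokens": max_chunk_size_tokens,
            "chunk_overlap_tokens": chunk_overlap_tokens}

# The documented defaults are valid: 400 is exactly half of 800.
assert validate_static_chunking() == {"max_chunk_size_tokens": 800,
                                      "chunk_overlap_tokens": 400}
```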
OpenAI.StaticChunkingStrategyRequestParam
Customize your own chunking strategy by setting chunk size and chunk overlap.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| static | OpenAI.StaticChunkingStrategy | Yes | ||
| type | enum | Always static. Possible values: static | Yes | |
OpenAI.StaticChunkingStrategyResponseParam
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| static | OpenAI.StaticChunkingStrategy | Yes | ||
| type | enum | Always static. Possible values: static | Yes | |
OpenAI.StopConfiguration
Not supported with latest reasoning models o3 and o4-mini.
Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence.
This schema accepts one of the following types:
- string
- array
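Both accepted shapes can be handled with a small normalizer. This is a client-side sketch of the documented constraint (a single string, or up to 4 sequences), not an API call.

```python
def normalize_stop(stop):
    """Coerce either accepted shape (string or array) into a list,
    enforcing the documented limit of 4 stop sequences."""
    sequences = [stop] if isinstance(stop, str) else list(stop)
    if len(sequences) > 4:
        raise ValueError("Up to 4 stop sequences are allowed")
    return sequences

assert normalize_stop("\n\n") == ["\n\n"]          # string form
assert normalize_stop(["END", "STOP"]) == ["END", "STOP"]  # array form
```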
OpenAI.Tool
Discriminator for OpenAI.Tool
This component uses the property type to discriminate between different types:
| Type Value | Schema |
|---|---|
| function | OpenAI.FunctionTool |
| file_search | OpenAI.FileSearchTool |
| computer_use_preview | OpenAI.ComputerUsePreviewTool |
| web_search_preview | OpenAI.WebSearchPreviewTool |
| code_interpreter | OpenAI.CodeInterpreterTool |
| image_generation | OpenAI.ImageGenTool |
| local_shell | OpenAI.LocalShellTool |
| mcp | OpenAI.MCPTool |
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| type | OpenAI.ToolType | A tool that can be used to generate a response. | Yes |
OpenAI.ToolChoiceObject
Discriminator for OpenAI.ToolChoiceObject
This component uses the property type to discriminate between different types:
| Type Value | Schema |
|---|---|
| file_search | OpenAI.ToolChoiceObjectFileSearch |
| computer_use_preview | OpenAI.ToolChoiceObjectComputer |
| web_search_preview | OpenAI.ToolChoiceObjectWebSearch |
| image_generation | OpenAI.ToolChoiceObjectImageGen |
| code_interpreter | OpenAI.ToolChoiceObjectCodeInterpreter |
| function | OpenAI.ToolChoiceObjectFunction |
| mcp | OpenAI.ToolChoiceObjectMCP |
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| type | OpenAI.ToolChoiceObjectType | Indicates that the model should use a built-in tool to generate a response. | Yes | |
OpenAI.ToolChoiceObjectCodeInterpreter
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| type | enum | Possible values: code_interpreter | Yes | |
OpenAI.ToolChoiceObjectComputer
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| type | enum | Possible values: computer_use_preview | Yes | |
OpenAI.ToolChoiceObjectFileSearch
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| type | enum | Possible values: file_search | Yes | |
OpenAI.ToolChoiceObjectFunction
Use this option to force the model to call a specific function.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| name | string | The name of the function to call. | Yes | |
| type | enum | For function calling, the type is always function. Possible values: function | Yes | |
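For illustration, a tool_choice value forcing a specific function call looks like the following. The function name "get_weather" is an invented placeholder.

```python
# Sketch of a tool_choice object that forces a specific function call,
# shaped after the fields in the table above.
tool_choice = {
    "type": "function",       # always "function" for this variant
    "name": "get_weather",    # hypothetical function name
}

assert tool_choice["type"] == "function"
```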
OpenAI.ToolChoiceObjectImageGen
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| type | enum | Possible values: image_generation | Yes | |
OpenAI.ToolChoiceObjectMCP
Use this option to force the model to call a specific tool on a remote MCP server.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| name | string | The name of the tool to call on the server. | No | |
| server_label | string | The label of the MCP server to use. | Yes | |
| type | enum | For MCP tools, the type is always mcp. Possible values: mcp | Yes | |
OpenAI.ToolChoiceObjectType
Indicates that the model should use a built-in tool to generate a response.
| Property | Value |
|---|---|
| Description | Indicates that the model should use a built-in tool to generate a response. |
| Type | string |
| Values | file_search, function, computer_use_preview, web_search_preview, image_generation, code_interpreter, mcp |
OpenAI.ToolChoiceObjectWebSearch
Note
web_search is not yet available via Azure OpenAI.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| type | enum | Possible values: web_search_preview | Yes | |
OpenAI.ToolChoiceOptions
Controls which (if any) tool is called by the model.
none means the model will not call any tool and instead generates a message.
auto means the model can pick between generating a message or calling one or
more tools.
required means the model must call one or more tools.
| Property | Value |
|---|---|
| Description | Controls which (if any) tool is called by the model. none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools. |
| Type | string |
| Values | none, auto, required |
OpenAI.ToolType
A tool that can be used to generate a response.
| Property | Value |
|---|---|
| Description | A tool that can be used to generate a response. |
| Type | string |
| Values | file_search, function, computer_use_preview, web_search_preview, mcp, code_interpreter, image_generation, local_shell |
OpenAI.TopLogProb
The top log probability of a token.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| bytes | array | Yes | ||
| logprob | number | Yes | ||
| token | string | Yes |
OpenAI.UpdateVectorStoreFileAttributesRequest
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| attributes | object | Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard. Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters, booleans, or numbers. | Yes | |
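The documented limits on the attributes map (at most 16 pairs, keys up to 64 characters, string values up to 512 characters, with booleans and numbers also allowed) can be checked locally before sending a request. This validator is a sketch, not part of the API.

```python
def validate_attributes(attributes):
    """Check an attributes map against the documented limits."""
    if len(attributes) > 16:
        raise ValueError("at most 16 key-value pairs are allowed")
    for key, value in attributes.items():
        if not isinstance(key, str) or len(key) > 64:
            raise ValueError(f"invalid key: {key!r}")
        if isinstance(value, str):
            if len(value) > 512:
                raise ValueError(f"string value too long for key {key!r}")
        elif not isinstance(value, (bool, int, float)):
            raise ValueError(f"unsupported value type for key {key!r}")
    return attributes

# Strings, numbers, and booleans are all valid value types.
assert validate_attributes({"author": "jane", "page_count": 12, "draft": False})
```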
OpenAI.UpdateVectorStoreRequest
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| expires_after | object | The expiration policy for a vector store. | No | |
| └─ anchor | enum | Anchor timestamp after which the expiration policy applies. Supported anchors: last_active_at. Possible values: last_active_at | No | |
| └─ days | integer | The number of days after the anchor time that the vector store will expire. | No | |
| metadata | object | Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard. Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters. | No | |
| name | string | The name of the vector store. | No |
OpenAI.ValidateGraderRequest
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| grader | object | A StringCheckGrader object that performs a string comparison between input and reference using a specified operation. | Yes | |
| └─ calculate_output | string | A formula to calculate the output based on grader results. | No | |
| └─ evaluation_metric | enum | The evaluation metric to use. One of fuzzy_match, bleu, gleu, meteor, rouge_1, rouge_2, rouge_3, rouge_4, rouge_5, or rouge_l. Possible values: fuzzy_match, bleu, gleu, meteor, rouge_1, rouge_2, rouge_3, rouge_4, rouge_5, rouge_l | No | |
| └─ graders | object | No | ||
| └─ image_tag | string | The image tag to use for the python script. | No | |
| └─ input | array | The input text. This may include template strings. | No | |
| └─ model | string | The model to use for the evaluation. | No | |
| └─ name | string | The name of the grader. | No | |
| └─ operation | enum | The string check operation to perform. One of eq, ne, like, or ilike. Possible values: eq, ne, like, ilike | No | |
| └─ range | array | The range of the score. Defaults to [0, 1]. | No | |
| └─ reference | string | The text being graded against. | No | |
| └─ sampling_params | The sampling parameters for the model. | No | ||
| └─ source | string | The source code of the python script. | No | |
| └─ type | enum | The object type, which is always multi. Possible values: multi | No | |
OpenAI.ValidateGraderResponse
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| grader | object | A StringCheckGrader object that performs a string comparison between input and reference using a specified operation. | No | |
| └─ calculate_output | string | A formula to calculate the output based on grader results. | No | |
| └─ evaluation_metric | enum | The evaluation metric to use. One of fuzzy_match, bleu, gleu, meteor, rouge_1, rouge_2, rouge_3, rouge_4, rouge_5, or rouge_l. Possible values: fuzzy_match, bleu, gleu, meteor, rouge_1, rouge_2, rouge_3, rouge_4, rouge_5, rouge_l | No | |
| └─ graders | object | No | ||
| └─ image_tag | string | The image tag to use for the python script. | No | |
| └─ input | array | The input text. This may include template strings. | No | |
| └─ model | string | The model to use for the evaluation. | No | |
| └─ name | string | The name of the grader. | No | |
| └─ operation | enum | The string check operation to perform. One of eq, ne, like, or ilike. Possible values: eq, ne, like, ilike | No | |
| └─ range | array | The range of the score. Defaults to [0, 1]. | No | |
| └─ reference | string | The text being graded against. | No | |
| └─ sampling_params | The sampling parameters for the model. | No | ||
| └─ source | string | The source code of the python script. | No | |
| └─ type | enum | The object type, which is always multi. Possible values: multi | No | |
OpenAI.VectorStoreExpirationAfter
The expiration policy for a vector store.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| anchor | enum | Anchor timestamp after which the expiration policy applies. Supported anchors: last_active_at. Possible values: last_active_at | Yes | |
| days | integer | The number of days after the anchor time that the vector store will expire. | Yes |
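The relationship between the policy and an expiry timestamp can be sketched as follows: the anchor timestamp (last_active_at, in Unix seconds) plus the configured number of days. The timestamps below are invented for illustration.

```python
SECONDS_PER_DAY = 86_400

def compute_expires_at(last_active_at, policy):
    """Derive an expiry timestamp from a vector store expiration policy.

    last_active_at is a Unix timestamp in seconds; policy mirrors
    OpenAI.VectorStoreExpirationAfter (anchor + days).
    """
    assert policy["anchor"] == "last_active_at"  # the only supported anchor
    return last_active_at + policy["days"] * SECONDS_PER_DAY

policy = {"anchor": "last_active_at", "days": 7}
# 7 days = 604,800 seconds after the anchor
assert compute_expires_at(1_700_000_000, policy) == 1_700_604_800
```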
OpenAI.VectorStoreFileAttributes
Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard. Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters, booleans, or numbers.
Type: object
OpenAI.VectorStoreFileBatchObject
A batch of files attached to a vector store.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| created_at | integer | The Unix timestamp (in seconds) for when the vector store files batch was created. | Yes | |
| file_counts | object | Yes | ||
| └─ cancelled | integer | The number of files that were cancelled. | No | |
| └─ completed | integer | The number of files that have been processed. | No | |
| └─ failed | integer | The number of files that have failed to process. | No | |
| └─ in_progress | integer | The number of files that are currently being processed. | No | |
| └─ total | integer | The total number of files. | No | |
| id | string | The identifier, which can be referenced in API endpoints. | Yes | |
| object | enum | The object type, which is always vector_store.files_batch. Possible values: vector_store.files_batch | Yes | |
| status | enum | The status of the vector store files batch, which can be either in_progress, completed, cancelled, or failed. Possible values: in_progress, completed, cancelled, failed | Yes | |
| vector_store_id | string | The ID of the vector store that the File is attached to. | Yes |
OpenAI.VectorStoreFileObject
A list of files attached to a vector store.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| attributes | object | Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard. Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters, booleans, or numbers. | No | |
| chunking_strategy | object | No | ||
| └─ type | enum | Possible values: static, other | No | |
| created_at | integer | The Unix timestamp (in seconds) for when the vector store file was created. | Yes | |
| id | string | The identifier, which can be referenced in API endpoints. | Yes | |
| last_error | object | The last error associated with this vector store file. Will be null if there are no errors. | Yes | |
| └─ code | enum | One of server_error, unsupported_file, or invalid_file. Possible values: server_error, unsupported_file, invalid_file | No | |
| └─ message | string | A human-readable description of the error. | No | |
| object | enum | The object type, which is always vector_store.file. Possible values: vector_store.file | Yes | |
| status | enum | The status of the vector store file, which can be either in_progress, completed, cancelled, or failed. The status completed indicates that the vector store file is ready for use. Possible values: in_progress, completed, cancelled, failed | Yes | |
| usage_bytes | integer | The total vector store usage in bytes. Note that this may be different from the original file size. | Yes | |
| vector_store_id | string | The ID of the vector store that the File is attached to. | Yes |
OpenAI.VectorStoreObject
A vector store is a collection of processed files that can be used by the file_search tool.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| created_at | integer | The Unix timestamp (in seconds) for when the vector store was created. | Yes | |
| expires_after | OpenAI.VectorStoreExpirationAfter | The expiration policy for a vector store. | No | |
| expires_at | integer | The Unix timestamp (in seconds) for when the vector store will expire. | No | |
| file_counts | object | Yes | ||
| └─ cancelled | integer | The number of files that were cancelled. | No | |
| └─ completed | integer | The number of files that have been successfully processed. | No | |
| └─ failed | integer | The number of files that have failed to process. | No | |
| └─ in_progress | integer | The number of files that are currently being processed. | No | |
| └─ total | integer | The total number of files. | No | |
| id | string | The identifier, which can be referenced in API endpoints. | Yes | |
| last_active_at | integer | The Unix timestamp (in seconds) for when the vector store was last active. | Yes | |
| metadata | object | Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard. Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters. | Yes | |
| name | string | The name of the vector store. | Yes | |
| object | enum | The object type, which is always vector_store. Possible values: vector_store | Yes | |
| status | enum | The status of the vector store, which can be either expired, in_progress, or completed. A status of completed indicates that the vector store is ready for use. Possible values: expired, in_progress, completed | Yes | |
| usage_bytes | integer | The total number of bytes used by the files in the vector store. | Yes |
OpenAI.VoiceIdsShared
| Property | Value |
|---|---|
| Type | string |
| Values | alloy, ash, ballad, coral, echo, fable, onyx, nova, sage, shimmer, verse |
OpenAI.WebSearchAction
Discriminator for OpenAI.WebSearchAction
This component uses the property type to discriminate between different types:
| Type Value | Schema |
|---|---|
| find | OpenAI.WebSearchActionFind |
| open_page | OpenAI.WebSearchActionOpenPage |
| search | OpenAI.WebSearchActionSearch |
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| type | OpenAI.WebSearchActionType | Yes |
OpenAI.WebSearchActionFind
Action type "find": Searches for a pattern within a loaded page.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| pattern | string | The pattern or text to search for within the page. | Yes | |
| type | enum | The action type. Possible values: find | Yes | |
| url | string | The URL of the page searched for the pattern. | Yes |
OpenAI.WebSearchActionOpenPage
Action type "open_page" - Opens a specific URL from search results.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| type | enum | The action type. Possible values: open_page | Yes | |
| url | string | The URL opened by the model. | Yes |
OpenAI.WebSearchActionSearch
Action type "search" - Performs a web search query.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| query | string | The search query. | Yes | |
| type | enum | The action type. Possible values: search | Yes | |
OpenAI.WebSearchActionType
| Property | Value |
|---|---|
| Type | string |
| Values | search, open_page, find |
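A client consuming web search tool call items can branch on the type discriminator described above. The following is a minimal sketch under the schemas in the preceding tables; the helper function and its summary strings are our own, not part of the API:

```python
# Sketch: dispatching on the "type" discriminator of a web search action.
# Field names come from the tables above: pattern/url for "find",
# url for "open_page", and query for "search".

def describe_web_search_action(action: dict) -> str:
    """Return a human-readable summary of a web search action."""
    kind = action["type"]
    if kind == "find":
        return f'found pattern {action["pattern"]!r} on {action["url"]}'
    if kind == "open_page":
        return f'opened {action["url"]}'
    if kind == "search":
        return f'searched for {action["query"]!r}'
    raise ValueError(f"unknown action type: {kind}")

print(describe_web_search_action({"type": "search", "query": "azure openai"}))
```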
OpenAI.WebSearchPreviewTool
Note
web_search is not yet available via Azure OpenAI.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| search_context_size | enum | High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default. Possible values: low, medium, high | No | |
| type | enum | The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11. Possible values: web_search_preview | Yes | |
| user_location | object | | No | |
| └─ type | OpenAI.LocationType | | No | |
OpenAI.WebSearchToolCallItemParam
Note
web_search is not yet available via Azure OpenAI.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| action | object | | Yes | |
| └─ type | OpenAI.WebSearchActionType | | No | |
| type | enum | Possible values: web_search_call | Yes | |
OpenAI.WebSearchToolCallItemResource
Note
web_search is not yet available via Azure OpenAI.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| action | object | | Yes | |
| └─ type | OpenAI.WebSearchActionType | | No | |
| status | enum | The status of the web search tool call. Possible values: in_progress, searching, completed, failed | Yes | |
| type | enum | Possible values: web_search_call | Yes | |
PineconeChatDataSource
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| parameters | object | The parameter information to control the use of the Pinecone data source. | Yes | |
| └─ allow_partial_result | boolean | If set to true, the system will allow partial search results to be used and the request will fail if all partial queries fail. If not specified or specified as false, the request will fail if any search query fails. | No | False |
| └─ authentication | object | | No | |
| └─ key | string | | No | |
| └─ type | enum | Possible values: api_key | No | |
| └─ embedding_dependency | object | A representation of a data vectorization source usable as an embedding resource with a data source. | No | |
| └─ type | AzureChatDataSourceVectorizationSourceType | The differentiating identifier for the concrete vectorization source. | No | |
| └─ environment | string | The environment name to use with Pinecone. | No | |
| └─ fields_mapping | object | Field mappings to apply to data used by the Pinecone data source. Note that content field mappings are required for Pinecone. | No | |
| └─ content_fields | array | | No | |
| └─ content_fields_separator | string | | No | |
| └─ filepath_field | string | | No | |
| └─ title_field | string | | No | |
| └─ url_field | string | | No | |
| └─ in_scope | boolean | Whether queries should be restricted to use of the indexed data. | No | |
| └─ include_contexts | array | The output context properties to include on the response. By default, citations and intent will be requested. | No | ['citations', 'intent'] |
| └─ index_name | string | The name of the Pinecone database index to use. | No | |
| └─ max_search_queries | integer | The maximum number of rewritten queries that should be sent to the search provider for a single user message. By default, the system will make an automatic determination. | No | |
| └─ strictness | integer | The configured strictness of the search relevance filtering. Higher strictness will increase precision but lower recall of the answer. | No | |
| └─ top_n_documents | integer | The configured number of documents to feature in the query. | No | |
| type | enum | The discriminated type identifier, which is always 'pinecone'. Possible values: pinecone | Yes | |
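Putting the table above together, a chat completions request body using a Pinecone data source might be assembled as below. This is a hedged sketch, not a definitive request: the environment, index name, key, and embedding deployment are all placeholders, and the embedding_dependency shape assumes the deployment-name vectorization source variant of the On Your Data feature:

```python
# Sketch of a chat completions request body with a Pinecone data source.
# All concrete names below are placeholders for illustration only.
pinecone_source = {
    "type": "pinecone",
    "parameters": {
        "environment": "my-pinecone-env",   # placeholder Pinecone environment
        "index_name": "my-index",           # placeholder index name
        "authentication": {"type": "api_key", "key": "<pinecone-api-key>"},
        # Content field mappings are required for Pinecone.
        "fields_mapping": {"content_fields": ["content"]},
        # Assumed: deployment-name vectorization source variant.
        "embedding_dependency": {
            "type": "deployment_name",
            "deployment_name": "my-embedding-deployment",  # placeholder
        },
        "in_scope": True,
        "top_n_documents": 5,
    },
}

request_body = {
    "messages": [{"role": "user", "content": "What do my documents say?"}],
    "data_sources": [pinecone_source],
}
```

The data_sources array is the Azure-specific extension to the chat completions request body described at the top of this reference.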
ResponseFormatJSONSchemaRequest
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| json_schema | object | JSON Schema for the response format | Yes | |
| type | enum | Type of response format. Possible values: json_schema | Yes | |
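As a sketch, a response_format payload of this shape might look like the following. The schema name and its fields are hypothetical, chosen only to illustrate the structure:

```python
# Sketch of a response_format payload requesting structured output
# via a JSON schema. The schema name and fields are hypothetical.
response_format = {
    "type": "json_schema",
    "json_schema": {
        "name": "weather_report",       # hypothetical schema name
        "strict": True,
        "schema": {
            "type": "object",
            "properties": {
                "city": {"type": "string"},
                "temperature_c": {"type": "number"},
            },
            "required": ["city", "temperature_c"],
            "additionalProperties": False,
        },
    },
}
```

This object would be passed as the response_format field of a chat completions request body.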
ResponseModalities
Output types that you would like the model to generate. Most models are capable of generating text, which is the default:
["text"]
The gpt-4o-audio-preview model can also be used to generate audio. To request that this model generate both text and audio responses, you can use:
["text", "audio"]
Array of: string
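For example, a request body combining modalities with the audio parameters described near the top of this reference might be built as follows; the model name and voice choice are illustrative:

```python
# Sketch of a request body asking an audio-capable model for both
# text and audio output. Model name and voice are illustrative.
request_body = {
    "model": "gpt-4o-audio-preview",
    "modalities": ["text", "audio"],
    # The audio object is required when "audio" appears in modalities.
    "audio": {"voice": "alloy", "format": "wav"},
    "messages": [{"role": "user", "content": "Say hello."}],
}
```

Omitting modalities (or passing `["text"]`) yields the default text-only behavior.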