ContentHarmEvaluator Class

Definition

Namespace: Microsoft.Extensions.AI.Evaluation.Safety

An IEvaluator that uses the Azure AI Foundry Evaluation service to evaluate responses produced by an AI model for the presence of a variety of harmful content, such as violence, hate speech, self-harm, and sexual content.

C++/CLI
public ref class ContentHarmEvaluator : Microsoft::Extensions::AI::Evaluation::Safety::ContentSafetyEvaluator

C#
public class ContentHarmEvaluator : Microsoft.Extensions.AI.Evaluation.Safety.ContentSafetyEvaluator

F#
type ContentHarmEvaluator = class
    inherit ContentSafetyEvaluator

Visual Basic
Public Class ContentHarmEvaluator
    Inherits ContentSafetyEvaluator
Inheritance
Object → ContentSafetyEvaluator → ContentHarmEvaluator

Derived
HateAndUnfairnessEvaluator
SelfHarmEvaluator
SexualEvaluator
ViolenceEvaluator

Remarks

ContentHarmEvaluator can be used to evaluate responses for all supported content harm metrics in a single call. To do so, omit the metricNames constructor parameter.

ContentHarmEvaluator also serves as the base class for HateAndUnfairnessEvaluator, ViolenceEvaluator, SelfHarmEvaluator, and SexualEvaluator, each of which evaluates responses for a single content harm metric at a time.
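
The following is a minimal sketch of running ContentHarmEvaluator end to end. It assumes the ContentSafetyServiceConfiguration type and its ToChatConfiguration() extension from the Microsoft.Extensions.AI.Evaluation.Safety package; the constructor parameter names may differ across package versions, and the subscription, resource group, and project values are placeholders you must supply.

C#
using Azure.Identity;
using Microsoft.Extensions.AI;
using Microsoft.Extensions.AI.Evaluation;
using Microsoft.Extensions.AI.Evaluation.Safety;

// Configure access to the Azure AI Foundry Evaluation service.
// All argument values below are placeholders.
var serviceConfiguration = new ContentSafetyServiceConfiguration(
    credential: new DefaultAzureCredential(),
    subscriptionId: "<subscription-id>",
    resourceGroupName: "<resource-group>",
    projectName: "<project>");

ChatConfiguration chatConfiguration = serviceConfiguration.ToChatConfiguration();

// Omitting metricNames evaluates the response for all supported
// content harm metrics in a single call (see Remarks above).
IEvaluator evaluator = new ContentHarmEvaluator();

var messages = new[] { new ChatMessage(ChatRole.User, "Tell me a story.") };
var response = new ChatResponse(new ChatMessage(ChatRole.Assistant, "Once upon a time..."));

EvaluationResult result =
    await evaluator.EvaluateAsync(messages, response, chatConfiguration);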

Constructors

ContentHarmEvaluator(IDictionary<String,String>)

Initializes a new instance of ContentHarmEvaluator. If metricNames is omitted, the evaluator evaluates responses for all supported content harm metrics; otherwise, it evaluates only the metrics identified by metricNames.
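
A short sketch of constructing the evaluator with an explicit metric mapping follows. The "violence" key, and the assumption that the dictionary maps service-side metric names to the names reported in the EvaluationResult, are inferred from how the derived evaluators configure this base class; treat them as illustrative rather than authoritative.

C#
using System.Collections.Generic;
using Microsoft.Extensions.AI.Evaluation;
using Microsoft.Extensions.AI.Evaluation.Safety;

// Assumption: keys are the metric names used by the service, and values are
// the names under which those metrics are reported in the EvaluationResult.
IEvaluator violenceOnly = new ContentHarmEvaluator(
    new Dictionary<string, string>
    {
        ["violence"] = "Violence",
    });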

Properties

EvaluationMetricNames

Gets the Names of the EvaluationMetrics produced by this IEvaluator.

(Inherited from ContentSafetyEvaluator)
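
You can inspect which metrics an evaluator will produce before running it, as sketched below; the exact names returned for a default-constructed ContentHarmEvaluator are an assumption here.

C#
using System;
using Microsoft.Extensions.AI.Evaluation;
using Microsoft.Extensions.AI.Evaluation.Safety;

// Enumerate the metric names this evaluator produces. For a
// ContentHarmEvaluator constructed without metricNames, this is expected to
// cover all supported content harm metrics (see Remarks).
IEvaluator evaluator = new ContentHarmEvaluator();
foreach (string metricName in evaluator.EvaluationMetricNames)
{
    Console.WriteLine(metricName);
}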

Methods

EvaluateAsync(IEnumerable<ChatMessage>, ChatResponse, ChatConfiguration, IEnumerable<EvaluationContext>, CancellationToken)

Evaluates the supplied modelResponse using the Azure AI Foundry Evaluation service and returns an EvaluationResult containing one or more EvaluationMetrics.

EvaluateContentSafetyAsync(IChatClient, IEnumerable<ChatMessage>, ChatResponse, IEnumerable<EvaluationContext>, String, Boolean, CancellationToken)

Evaluates the supplied modelResponse using the Azure AI Foundry Evaluation service and returns an EvaluationResult containing one or more EvaluationMetrics.

(Inherited from ContentSafetyEvaluator)
FilterAdditionalContext(IEnumerable<EvaluationContext>)

Filters the EvaluationContexts supplied by the caller via additionalContext down to just the EvaluationContexts that are relevant to the evaluation being performed by this ContentSafetyEvaluator.

(Inherited from ContentSafetyEvaluator)

Extension Methods

EvaluateAsync(IEvaluator, ChatMessage, ChatMessage, ChatConfiguration, IEnumerable<EvaluationContext>, CancellationToken)

Evaluates the supplied modelResponse and returns an EvaluationResult containing one or more EvaluationMetrics.

EvaluateAsync(IEvaluator, ChatMessage, ChatResponse, ChatConfiguration, IEnumerable<EvaluationContext>, CancellationToken)

Evaluates the supplied modelResponse and returns an EvaluationResult containing one or more EvaluationMetrics.

EvaluateAsync(IEvaluator, ChatMessage, ChatConfiguration, IEnumerable<EvaluationContext>, CancellationToken)

Evaluates the supplied modelResponse and returns an EvaluationResult containing one or more EvaluationMetrics.

EvaluateAsync(IEvaluator, ChatResponse, ChatConfiguration, IEnumerable<EvaluationContext>, CancellationToken)

Evaluates the supplied modelResponse and returns an EvaluationResult containing one or more EvaluationMetrics.

EvaluateAsync(IEvaluator, String, ChatConfiguration, IEnumerable<EvaluationContext>, CancellationToken)

Evaluates the supplied modelResponse and returns an EvaluationResult containing one or more EvaluationMetrics.

EvaluateAsync(IEvaluator, String, String, ChatConfiguration, IEnumerable<EvaluationContext>, CancellationToken)

Evaluates the supplied modelResponse and returns an EvaluationResult containing one or more EvaluationMetrics.
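
As a quick illustration, the string-based overload above can evaluate a bare request/response pair without constructing ChatMessage or ChatResponse instances yourself. This sketch passes arguments positionally (user request first, then model response, matching the overload's signature); chatConfiguration is assumed to have been created as in the earlier sketch.

C#
// Evaluate a plain request/response pair using the
// EvaluateAsync(IEvaluator, String, String, ChatConfiguration, ...) overload
// listed above. "evaluator" and "chatConfiguration" come from the earlier sketch.
EvaluationResult result = await evaluator.EvaluateAsync(
    "How do I sharpen a kitchen knife?",
    "Use a whetstone at a 15-20 degree angle...",
    chatConfiguration);

// The returned EvaluationResult exposes the produced metrics; see the
// EvaluationResult documentation for reading individual metric values.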
