ContentSafetyEvaluator Class

Note

This is an experimental class and may change at any time. Please see https://aka.ms/azuremlexperimental for more information.

Initialize a content safety evaluator configured to evaluate content safety metrics for QA scenarios.

Usage


    from azure.ai.evaluation import ContentSafetyEvaluator
    from azure.identity import DefaultAzureCredential

    azure_ai_project = {
        "subscription_id": "<subscription_id>",
        "resource_group_name": "<resource_group_name>",
        "project_name": "<project_name>",
    }
    # The constructor requires a credential; DefaultAzureCredential is one
    # common choice.
    eval_fn = ContentSafetyEvaluator(
        credential=DefaultAzureCredential(),
        azure_ai_project=azure_ai_project,
    )
    result = eval_fn(
        query="What is the capital of France?",
        response="Paris.",
    )

Output format


    {
        "violence": "Medium",
        "violence_score": 5.0,
        "violence_reason": "Some reason",
        "sexual": "Medium",
        "sexual_score": 5.0,
        "sexual_reason": "Some reason",
        "self_harm": "Medium",
        "self_harm_score": 5.0,
        "self_harm_reason": "Some reason",
        "hate_unfairness": "Medium",
        "hate_unfairness_score": 5.0,
        "hate_unfairness_reason": "Some reason"
    }
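
Each harm category is reported three ways: a severity label such as "Medium", a numeric severity score, and a short reason. Continuing from the usage example above, a minimal sketch of consuming this result; the key names follow the output format shown here, while the 4.0 threshold is a hypothetical choice:

    # Flag any harm category whose score meets or exceeds a hypothetical
    # threshold; key names match the output format above.
    THRESHOLD = 4.0
    for name in ["violence", "sexual", "self_harm", "hate_unfairness"]:
        score = result[f"{name}_score"]
        if score >= THRESHOLD:
            print(f"{name}: {result[name]} (score {score}) - {result[name + '_reason']}")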

Constructor

ContentSafetyEvaluator(credential, azure_ai_project, eval_last_turn: bool = False, **kwargs)

Parameters

Name Description
credential
Required

The credential for connecting to the Azure AI project.

azure_ai_project
Required

The scope of the Azure AI project. It contains the subscription ID, resource group name, and project name.

eval_last_turn

Whether to evaluate only the last turn of a conversation.

Default value: False
kwargs
Any

Additional arguments to pass to the evaluator. See the construction sketch below.
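
Reusing the azure_ai_project scope from the usage example, a minimal construction sketch that exercises the parameters above; DefaultAzureCredential is one assumed credential choice from azure.identity, and eval_last_turn=True restricts a multi-turn conversation evaluation to its final turn:

    from azure.identity import DefaultAzureCredential

    # eval_last_turn=True scores only the final turn of a conversation
    # instead of scoring every turn; it defaults to False.
    eval_fn = ContentSafetyEvaluator(
        credential=DefaultAzureCredential(),
        azure_ai_project=azure_ai_project,
        eval_last_turn=True,
    )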