azure-ai-evaluation_1.16.3

Name                                           Modified    Size
azure-ai-evaluation_1.16.3 source code.tar.gz  2026-04-01  133.4 MB
azure-ai-evaluation_1.16.3 source code.zip     2026-04-01  188.3 MB
README.md                                      2026-04-01  932 Bytes
Totals: 3 items                                            321.7 MB

1.16.3 (2026-04-01)

Features Added

  • Added extra_headers support to OpenAIModelConfiguration to allow passing custom HTTP headers.
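A minimal sketch of what such a configuration might look like. The header name and value below are illustrative assumptions, not SDK defaults; `OpenAIModelConfiguration` is shown here as a plain dict rather than via the SDK import to keep the example self-contained.

```python
# Hypothetical OpenAIModelConfiguration-shaped dict using the new
# extra_headers field. "X-Request-Source" is an example header, not
# anything the SDK requires.
model_config = {
    "type": "openai",
    "model": "gpt-4o",
    "base_url": "https://api.openai.com/v1",
    "api_key": "<your-api-key>",
    "extra_headers": {
        "X-Request-Source": "red-team-eval",  # custom HTTP header passed through
    },
}
```

Custom headers are useful for request tagging or routing through gateways that require extra authentication metadata.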

Bugs Fixed

  • Fixed attack success rate (ASR) always reporting 0% because the sync eval API's passed field indicates task completion, not content safety. Replaced passed-based logic with score-based threshold comparison matching _evaluation_processor.py.
  • Fixed partial red team results being discarded when some objectives fail. Previously, if PyRIT raised due to incomplete objectives (e.g., evaluator model refuses to score), all completed results were lost. Now recovers partial results from PyRIT's memory database.
  • Fixed evaluator token metrics (promptTokens, completionTokens) not being persisted in red teaming output items. The sync eval API returns camelCase keys, but the extraction code only checked for snake_case, silently dropping all evaluator token usage data.
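The ASR fix can be sketched as follows. Function and field names here are illustrative, not the SDK's internals: the point is that attack success is derived by comparing the harm score to a severity threshold, rather than from the `passed` field, which only signals task completion.

```python
# Hypothetical sketch of the corrected attack-success check.
def is_attack_successful(result: dict, threshold: int = 3) -> bool:
    """Return True when the harm score meets or exceeds the threshold.

    Using result["passed"] here would be wrong: that field indicates
    whether the evaluation task completed, not whether the content was
    unsafe, which is why ASR was always reported as 0%.
    """
    score = result.get("score")
    if score is None:
        return False
    return int(score) >= threshold
```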
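The token-metrics fix amounts to accepting both key spellings when reading usage data. A minimal sketch, with an assumed helper name (the SDK's actual extraction code differs):

```python
# Hypothetical helper: read token counts whether the API returned
# camelCase (sync eval API) or snake_case keys.
def extract_token_usage(sample: dict) -> dict:
    def pick(camel: str, snake: str) -> int:
        return sample.get(camel, sample.get(snake, 0))

    return {
        "prompt_tokens": pick("promptTokens", "prompt_tokens"),
        "completion_tokens": pick("completionTokens", "completion_tokens"),
    }
```

Checking only the snake_case keys against a camelCase payload is what silently zeroed out the evaluator token usage.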
Source: README.md, updated 2026-04-01