| Name | Modified | Size | Downloads / Week |
|---|---|---|---|
| director_ai-3.11.1.tar.gz.sigstore.json | 2026-03-27 | 9.1 kB | |
| director_ai-3.11.1-py3-none-any.whl.sigstore.json | 2026-03-27 | 9.0 kB | |
| sbom.json | 2026-03-27 | 72.6 kB | |
| director_ai-3.11.1-py3-none-any.whl | 2026-03-27 | 300.2 kB | |
| director_ai-3.11.1.tar.gz | 2026-03-27 | 511.8 kB | |
| README.md | 2026-03-27 | 1.3 kB | |
| v3.11.1 source code.tar.gz | 2026-03-27 | 14.2 MB | |
| v3.11.1 source code.zip | 2026-03-27 | 14.7 MB | |
| Totals: 8 items | | 29.9 MB | 0 |
Fixed
- NLI CUDA auto-detection: `_load_nli_model()` now auto-selects CUDA when `torch.cuda.is_available()` and device is `None`. Previously the model stayed on CPU unless `nli_device="cuda"` was passed explicitly. 6.8x latency improvement on L40S (169.5 ms → 24.9 ms).
- `director_assert()` crash: a float was passed to `HallucinationError`, which expected a `CoherenceScore` object. Any hallucination detection via `director_assert()` would raise `AttributeError` instead of `HallucinationError`.
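The device auto-selection fix can be sketched as a small helper. This is an illustrative reconstruction, not the actual `_load_nli_model()` internals; the helper name `resolve_nli_device` is hypothetical.

```python
def resolve_nli_device(device=None):
    """Pick CUDA automatically when available and no device was given.

    Illustrative sketch of the v3.11.1 behavior: an explicit device
    always wins; otherwise CUDA is selected when torch reports it.
    """
    if device is not None:
        return device  # explicit nli_device="cuda"/"cpu" is respected
    try:
        import torch
        return "cuda" if torch.cuda.is_available() else "cpu"
    except ImportError:
        return "cpu"  # no torch installed: stay on CPU
```

Before the fix, the `device is None` branch returned `"cpu"` unconditionally, which is why the model stayed on CPU unless `nli_device="cuda"` was passed.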
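The `director_assert()` fix amounts to wrapping the raw float in a `CoherenceScore` before raising. The class shapes and the threshold below are assumptions for illustration; only the names `director_assert`, `HallucinationError`, and `CoherenceScore` come from the changelog.

```python
from dataclasses import dataclass

@dataclass
class CoherenceScore:  # assumed shape; the real class likely has more fields
    score: float

class HallucinationError(Exception):
    def __init__(self, coherence: CoherenceScore):
        # Accessing .score here is what raised AttributeError when a raw
        # float was passed before the fix.
        super().__init__(f"hallucination detected (score={coherence.score:.2f})")
        self.coherence = coherence

def director_assert(score: float, threshold: float = 0.7) -> None:
    """Raise HallucinationError when coherence falls below threshold."""
    if score < threshold:
        # Fix: wrap the float in a CoherenceScore before raising,
        # instead of passing the bare float through.
        raise HallucinationError(CoherenceScore(score))
```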
Added
- 16 tests for `integrations/dspy.py` (coherence_check + director_assert)
- 15 tests for `integrations/semantic_kernel.py` (DirectorAIFilter init + async call)
- VerifiedScorer docs: `atomic=True`, `evidence_top_k`, `SourceSpan` dataclass, multi-span evidence
- Privacy policy page
- L40S GPU benchmark results (24.9 ms NLI median, 40.2 RPS)
- Rust vs Python signal benchmark (BM25 10.2x, trend_drop 20.7x)
Measured Numbers
| Metric | Value |
|---|---|
| L40S NLI GPU median | 24.9 ms (was 169.5 ms on CPU) |
| L40S NLI throughput | 40.2 RPS |
| Heuristic median | 0.088 ms |
| Rust BM25 (100 docs) | 10.8 µs (10.2x vs Python) |
| Rust trend_drop | 0.3 µs (20.7x vs Python) |
| AggreFact BA | 75.86% (29,320 samples) |
Full changelog: https://github.com/anulum/director-ai/compare/v3.11.0...v3.11.1