| Name | Modified | Size |
|---|---|---|
| README.md | 2026-02-26 | 1.2 kB |
| v1.1.0 source code.tar.gz | 2026-02-26 | 288.3 kB |
| v1.1.0 source code.zip | 2026-02-26 | 381.1 kB |
| Totals: 3 Items | | 670.6 kB |
## What's New

### Pluggable NLI Backends
- `NLIScorer(backend="minicheck")` — MiniCheck-DeBERTa-L as an alternative to the default DeBERTa
- Graceful fallback to a heuristic scorer when the backend package is not installed
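The graceful-fallback behaviour can be pictured with a small sketch. The factory name, the heuristic, and the import check below are illustrative assumptions, not director-ai's real internals:

```python
def make_nli_scorer(backend: str = "deberta"):
    """Return an NLI scoring function, degrading gracefully.

    Hypothetical sketch of the fallback pattern only; the real
    NLIScorer loads an actual NLI model for the chosen backend.
    """

    def score_heuristic(premise: str, hypothesis: str) -> float:
        # Crude lexical-overlap stand-in for a real NLI model.
        p = set(premise.lower().split())
        h = set(hypothesis.lower().split())
        return len(p & h) / max(len(h), 1)

    try:
        # The real implementation would load MiniCheck-DeBERTa-L (or the
        # default DeBERTa) here when the optional package is available.
        import minicheck  # noqa: F401  -- optional backend package
    except ImportError:
        return score_heuristic  # graceful fallback when not installed
    # Model loading is elided in this sketch; fall back regardless.
    return score_heuristic
```

The point of the pattern is that constructing the scorer never hard-fails on a missing optional dependency; callers always get a usable callable.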
### Native LLM Providers
- `CoherenceAgent(provider="openai")` — reads `OPENAI_API_KEY` from env
- `CoherenceAgent(provider="anthropic")` — reads `ANTHROPIC_API_KEY` from env
- Backward-compatible: `llm_api_url=` and the default mock still work
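The provider-to-env-var convention can be sketched as follows. The helper name and error handling are assumptions for illustration; only the env var names come from the release notes:

```python
import os

# Provider name -> environment variable it reads, per the notes above.
PROVIDER_ENV_VARS = {
    "openai": "OPENAI_API_KEY",
    "anthropic": "ANTHROPIC_API_KEY",
}


def resolve_api_key(provider: str) -> str:
    """Hypothetical helper: fetch the API key for a provider from env."""
    try:
        var = PROVIDER_ENV_VARS[provider]
    except KeyError:
        raise ValueError(f"unknown provider: {provider!r}") from None
    key = os.environ.get(var)
    if not key:
        raise RuntimeError(f"{var} is not set")
    return key
```

Failing fast with the missing variable's name makes misconfiguration obvious at agent construction time rather than at the first API call.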
### CLI Benchmark Runner
- `director-ai eval --dataset aggrefact --max-samples 100 --output results.json`
- Delegates to the benchmark suite and prints a comparison table
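A file produced by `--output results.json` could be inspected with a few lines of Python. The schema is not documented here, so this sketch deliberately assumes nothing beyond valid JSON:

```python
import json
from pathlib import Path


def summarize(path: str) -> dict:
    """Report the top-level shape of an eval results file.

    We avoid guessing the schema: for a JSON object, map each key to
    its value's type name; for a JSON array, report the record count.
    """
    data = json.loads(Path(path).read_text())
    if isinstance(data, dict):
        return {key: type(value).__name__ for key, value in data.items()}
    return {"records": len(data)}
```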
### Streaming Halt Callbacks
- `StreamingKernel(on_halt=callback)` — fires with the `StreamSession` on halt
- `SafetyKernel(on_halt=callback)` — fires with the score on halt
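The callback contract can be illustrated with a toy kernel. This is a sketch of the pattern only, not the real `StreamingKernel` or `SafetyKernel` implementation; the threshold logic is assumed:

```python
from typing import Callable, Optional


class ToyKernel:
    """Minimal stand-in showing the on_halt callback shape."""

    def __init__(
        self,
        threshold: float = 0.5,
        on_halt: Optional[Callable[[float], None]] = None,
    ):
        self.threshold = threshold
        self.on_halt = on_halt

    def review(self, score: float) -> bool:
        """Return True to continue; fire on_halt and return False on halt."""
        if score < self.threshold:
            if self.on_halt is not None:
                # SafetyKernel-style: the callback receives the score.
                self.on_halt(score)
            return False
        return True
```

Per the notes above, the streaming variant instead passes the `StreamSession` to the callback, letting the caller inspect the partial stream at the halt point.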
### SQLite Usage Dashboard
- `GET /v1/stats` — summary statistics
- `GET /v1/stats/hourly` — hourly breakdown
- `GET /v1/dashboard` — inline HTML dashboard
- All reviews are auto-recorded via the API server
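The auto-recording idea behind the stats endpoints can be sketched with stdlib `sqlite3`. The table name, columns, and summary fields below are assumptions, not director-ai's actual schema:

```python
import sqlite3
import time


def init_db(conn: sqlite3.Connection) -> None:
    # Hypothetical reviews table: timestamp, score, halted flag.
    conn.execute(
        "CREATE TABLE IF NOT EXISTS reviews ("
        " ts REAL NOT NULL,"
        " score REAL NOT NULL,"
        " halted INTEGER NOT NULL)"
    )


def record_review(conn: sqlite3.Connection, score: float, halted: bool) -> None:
    # Called once per review, mirroring the server's auto-recording.
    conn.execute(
        "INSERT INTO reviews VALUES (?, ?, ?)",
        (time.time(), score, int(halted)),
    )


def stats(conn: sqlite3.Connection) -> dict:
    # The kind of summary GET /v1/stats might serve.
    total, halts, avg = conn.execute(
        "SELECT COUNT(*), COALESCE(SUM(halted), 0), AVG(score) FROM reviews"
    ).fetchone()
    return {"total": total, "halted": halts, "avg_score": avg}
```

Recording into SQLite on every review keeps the dashboard endpoints cheap: each one is a single aggregate query over one table.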
## Fixes
- CI type check now excludes the benchmarks directory from mypy
- Docs workflow no longer requires the deleted `[research]` extra
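The mypy exclusion in the first fix is typically expressed in `pyproject.toml` roughly like this (a sketch; the repo's actual config may differ):

```toml
[tool.mypy]
# Skip the benchmarks directory during CI type checking.
# mypy's `exclude` takes a regex matched against file paths.
exclude = "^benchmarks/"
```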
**Full Changelog**: https://github.com/anulum/director-ai/compare/v1.0.0...v1.1.0