| Name | Modified | Size |
|---|---|---|
| README.md | 2026-03-22 | 1.1 kB |
| v1.2.0 source code.tar.gz | 2026-03-22 | 10.6 MB |
| v1.2.0 source code.zip | 2026-03-22 | 10.7 MB |
## What's New

### Features
- Cross-chunk context awareness for coreference resolution (#306)
  - Resolves pronouns and references across chunk boundaries (e.g., "She" → "Dr. Sarah Johnson")
  - New `context_window_chars` parameter on `extract()`
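The cross-chunk behaviour can be pictured with a small sketch. The helper below is illustrative, not langextract's internal chunker: it carries the tail of the previous text forward as context, so a chunk beginning with "She" still has "Dr. Sarah Johnson" in view.

```python
def chunks_with_context(text: str, chunk_size: int, context_window_chars: int):
    """Split text into chunks, pairing each with trailing context from earlier text."""
    pairs = []
    for start in range(0, len(text), chunk_size):
        context_start = max(0, start - context_window_chars)
        # Context is read-only lookback; only the chunk itself is extracted from.
        pairs.append((text[context_start:start], text[start:start + chunk_size]))
    return pairs

doc = "Dr. Sarah Johnson joined in 2020. She leads the oncology team."
pairs = chunks_with_context(doc, chunk_size=34, context_window_chars=40)
# The second chunk starts with "She" but its context window still contains
# "Dr. Sarah Johnson", so the reference can be resolved.
```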
### Bug Fixes
- Load builtin providers before resolution regardless of config path (#419)
  - Fixes `InferenceConfigError` when specifying a provider by name via `ModelConfig(provider='ollama')`
- Graceful handling of chunks with no extractable entities (#423)
  - `suppress_parse_errors` now defaults to `True` in `extract()`, so one unparseable chunk does not fail the entire document
  - Sanitizes the suppress-parse-error log path to exclude raw chunk text
- Send `keep_alive` at the top level for the Ollama API (#421)
- Support Enum/dataclass values in GCS batch cache hashing (#359)
- Handle non-Gemini model output parsing edge cases (#300)
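The effect of the new `suppress_parse_errors` default can be sketched with a minimal stand-in loop (the parser and `extract` function below are illustrative, not langextract internals): with suppression on, a chunk whose model output fails to parse is skipped and logged by index only, never with its raw text, instead of aborting the whole document.

```python
import logging

logger = logging.getLogger("extract")

def parse_chunk(chunk: str) -> list[str]:
    # Illustrative parser: rejects chunks with unbalanced braces.
    if chunk.count("{") != chunk.count("}"):
        raise ValueError("unparseable model output")
    return chunk.split()

def extract(chunks: list[str], suppress_parse_errors: bool = True) -> list[str]:
    results = []
    for i, chunk in enumerate(chunks):
        try:
            results.extend(parse_chunk(chunk))
        except ValueError:
            if not suppress_parse_errors:
                raise
            # Log only the chunk index, never the raw chunk text.
            logger.warning("skipping unparseable chunk %d", i)
    return results

print(extract(["alpha beta", "{broken", "gamma"]))  # ['alpha', 'beta', 'gamma']
```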
### Documentation
- Clarify that ungrounded extractions have `char_interval=None` (#420)
- Clarify best practices for few-shot examples (#302)
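In practice this means grounded and ungrounded results can be separated by checking the interval for `None`. The dataclass below is a simplified stand-in for langextract's extraction type, used only to show the filtering pattern:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Extraction:
    text: str
    char_interval: Optional[tuple[int, int]]  # None when not grounded in the source text

results = [
    Extraction("Dr. Sarah Johnson", (0, 17)),
    Extraction("oncology team lead", None),  # inferred, so no source span
]

grounded = [e for e in results if e.char_interval is not None]
print([e.text for e in grounded])  # ['Dr. Sarah Johnson']
```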
**Full Changelog**: https://github.com/google/langextract/compare/v1.1.1...v1.2.0