| Name | Modified | Size |
|---|---|---|
| README.md | 2026-02-17 | 1.2 kB |
| v0.2.81 source code.tar.gz | 2026-02-17 | 10.6 MB |
| v0.2.81 source code.zip | 2026-02-17 | 10.7 MB |
- Engine and API overhaul: ChatModule refactored into Engine/MLCEngine, with consolidated constructor/reload behavior, multi-model loading, a better worker lifecycle, and concurrency handling
- OpenAI API: mirrors the OpenAI chat/completions API, with stateful options, function calling, and embeddings support
- Conversation templates: unified conversation template schema with custom templates
- Expanded prebuilt model support: added support for more models (Llama 2/3/3.1/3.2, Mistral variants, Gemma 2, Qwen2/2.5/3, Phi family, including vision)
- Runtime and caching: WebGPU performance/reliability improvements (more GPU-side kernels, better OOM/deviceLost handling), wasm/prebuilt versioning updates, and IndexedDB caching support
- XGrammar integration: JSON-schema/grammar-constrained generation and XGrammar structural tags
- TVM-FFI integration: refactored for compatibility with more recent TVM commits and the TVM FFI
- Examples: ServiceWorkerEngine and updated Chrome extension demos, new RAG/document-chat examples, tool calls via structural tags
- CI: GitHub Actions for linting and pre-commit hooks
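
Several of the items above (the Engine/MLCEngine refactor, the OpenAI-compatible API, IndexedDB caching, and JSON-schema-constrained generation) can be sketched together in one snippet. This is a rough sketch, not a verified example: the model ID, the schema, and the exact shapes of `response_format` and `useIndexedDBCache` are assumptions based on a reading of the WebLLM docs.

```typescript
// Sketch combining several release items: creating an MLCEngine,
// enabling the IndexedDB model cache, and requesting JSON output
// constrained by a schema (XGrammar-backed). Model ID, schema, and
// option shapes are illustrative assumptions.

// A JSON Schema the engine will be asked to conform to.
export const citySchema = {
  type: "object",
  properties: {
    city: { type: "string" },
    population: { type: "number" },
  },
  required: ["city", "population"],
};

export async function demo(): Promise<string | null> {
  // Dynamic import so this file loads outside a browser; WebGPU is
  // still required to actually run the engine.
  // @ts-ignore -- resolved by a browser bundler, not by Node
  const { CreateMLCEngine, prebuiltAppConfig } = await import("@mlc-ai/web-llm");

  const engine = await CreateMLCEngine("Llama-3.1-8B-Instruct-q4f32_1-MLC", {
    // Cache model artifacts in IndexedDB instead of the Cache API.
    appConfig: { ...prebuiltAppConfig, useIndexedDBCache: true },
    initProgressCallback: (report: { text: string }) => console.log(report.text),
  });

  // OpenAI-style chat completion with schema-constrained JSON output.
  const reply = await engine.chat.completions.create({
    messages: [{ role: "user", content: "Name a large city as JSON." }],
    response_format: { type: "json_object", schema: JSON.stringify(citySchema) },
  });
  return reply.choices[0].message.content;
}
```

Keeping the library import dynamic and the WebGPU work inside `demo()` lets the module be type-checked and bundled in non-browser tooling while the engine itself only spins up in a page or worker.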
Full Changelog: https://github.com/mlc-ai/web-llm/compare/v0.2.0...v0.2.81