Name | Modified | Size |
---|---|---|
Performance Hot Fix (important) source code.tar.gz | 2025-08-09 | 1.9 MB |
Performance Hot Fix (important) source code.zip | 2025-08-09 | 2.1 MB |
README.md | 2025-08-09 | 897 Bytes |
What's Changed
The most important change: the default settings of temperature=0.1 and frequency_penalty=0.0 were reverted to temperature=0.2 and frequency_penalty=0.05, which perform much better with many models! (A sketch of setting these values explicitly follows the list below.)

* Support structured output in system prompt by @mertunsall in https://github.com/browser-use/browser-use/pull/2622
* Update groq chat models list by @mertunsall in https://github.com/browser-use/browser-use/pull/2627
* Hot fix/llm timeout by @mertunsall in https://github.com/browser-use/browser-use/pull/2628
* add gpt5 example by @mertunsall in https://github.com/browser-use/browser-use/pull/2630
* hot-fix/reasoning-models by @MagMueller in https://github.com/browser-use/browser-use/pull/2631
* Bump pyproject.toml to 0.5.11 by @mertunsall in https://github.com/browser-use/browser-use/pull/2629
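If you prefer to pin these sampling values yourself rather than rely on the library defaults, a minimal sketch is below. It assumes the `ChatOpenAI` wrapper in `browser_use.llm` accepts `temperature` and `frequency_penalty` keyword arguments and that `Agent` takes an `llm` parameter; the model name and task string are placeholders, so check your installed version's API before copying.

```python
# Sketch: explicitly set the sampling parameters that 0.5.11 reverts to
# (temperature=0.2, frequency_penalty=0.05) instead of relying on defaults.
# The import path and keyword arguments are assumptions about the 0.5.x API.
import asyncio

from browser_use import Agent
from browser_use.llm import ChatOpenAI  # assumed import path in 0.5.x


async def main():
    llm = ChatOpenAI(
        model="gpt-4o",          # placeholder model name
        temperature=0.2,         # restored default in 0.5.11
        frequency_penalty=0.05,  # restored default in 0.5.11
    )
    agent = Agent(task="Open the browser-use release page", llm=llm)
    await agent.run()


asyncio.run(main())
```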
Full Changelog: https://github.com/browser-use/browser-use/compare/0.5.10...0.5.11