| Name | Modified | Size |
|---|---|---|
| README.md | 2026-04-30 | 3.7 kB |
| v2.1.0-ce source code.tar.gz | 2026-04-30 | 9.1 MB |
| v2.1.0-ce source code.zip | 2026-04-30 | 9.6 MB |
## ✨ New Features
- AI Gateway & OpenAI-Compatible APIs
  - Audio Transcription: Added OpenAI-compatible `/v1/audio/transcriptions` support with multipart request rewriting and audio token usage counting.
  - Text/Image-to-Video: Added `/v1/videos`, `/v1/videos/{id}`, and `/v1/videos/{id}/content` APIs with provider adapters for OpenAI-compatible endpoints, LightX2V, MiniMax, and Seedance.
  - Model Routing: Added provider-aware model IDs, composite model ID parsing, upstream catalog support, session routing, fallback retry for chat completions, and per-upstream availability reporting.
  - Usage Limits: Added Redis-backed per-window usage limit checks for configured upstream policies.
  - API Key Auth: AI Gateway inference endpoints now require user/org API keys instead of normal login sessions.
- API Key Management
  - Added namespace-scoped API key management for users and organizations, including create, list, update, delete, built-in key retrieval, and built-in key refresh APIs.
  - Added user/org API key authentication context propagation for downstream services.
- Inference & Evaluation
  - Added configurable model architecture checks for inference, including admin APIs to view and update inference architecture rules.
  - Added SGLang-based Qwen3-Guard stream inference configuration and Docker assets.
  - Added AMD EvalScope evaluation configuration and Docker image support.
  - Updated vLLM and AMD vLLM inference images/configuration to v0.19.0.
- Repository, Tags & Skills
  - Added automatic industry tag scanning for model and dataset repositories using configured LLM prompts.
  - Added `source` tracking for repository tags and safer tag replacement/removal behavior.
  - Added skill `mirror_from_saas` routes and skill clone URL fields.
  - Added a dedicated skill tag category seed.
  - Improved `SKILL.md` validation and added broader validator tests.
## 🚀 Enhancements & Bug Fixes
- AI Gateway Reliability
  - Fixed async model cache writes mutating live model lists.
  - Fixed nil-user panic risk when listing CSGHub models.
  - Improved sensitive-check whitelist lookup behavior.
  - Made sensitive-check behavior configurable per LLM config where available.
  - Improved SGLang Guard stream trace/session header handling.
- Resource Scheduling
  - Added unavailable reasons to resource list responses.
  - Added cluster offline/unavailable status handling.
  - Prevented CPU-only workloads from being scheduled onto XPU nodes.
  - Added replica-aware resource checks for Spaces.
- Repository & LFS
  - Added repository size calculation trigger command.
  - Added LFS pointer download nil-URL protection.
  - Added LFS size checks before syncing files.
- Data Viewer
  - Added file-size checks and optimizations before converting preview files.
- Finetune & Runner
  - Fixed finetune jobs missing model and dataset revision data.
  - Fixed potential runner panic paths in service/workflow handling.
- Proxy & Networking
  - Set proxied `Host` headers without port where required.
  - Sanitized logged authorization headers in internal proxy logs.
## 🛠 Maintenance
- Upgraded vulnerable dependencies reported by Dependabot.
- Improved accounting metering retry limit configurability.
- Added and refreshed unit tests across AI Gateway, API keys, resource checks, tags, skills, LFS, and database stores.
- Improved CI/test stability and separated CI build cache behavior.
Full Changelog: https://github.com/OpenCSGs/csghub-server/compare/v2.0.0-ce...v2.1.0-ce