# DeepTutor v0.5.0 Release Notes
Release Date: 2026.01.15
We're thrilled to announce DeepTutor v0.5.0! This release delivers unified service configuration, flexible RAG pipeline selection, and major UI/UX improvements across multiple modules.
Stability Update: This release fixes multiple environment configuration and stability issues. We recommend that all users pull the latest version! Remember to update your .env file!
> [!TIP]
> Call for Issues: We welcome your feedback! If you encounter any bugs or have feature requests, please open an issue! If you would like to submit a PR, please check out our Contributing Guide.
## Quick Summary
- Configuration — Refactored config logic for smoother LLM/Embedding setup. Backend secrets stay hidden from frontend. Added more search providers.
- RAG Pipelines — Select different pipelines per KB: LlamaIndex (direct), LightRAG (graph), RAG-Anything (multimodal graph).
- Question Gen — Unified BaseAgent architecture with more intuitive UI.
- Home — Save chat history to notebooks.
- Sidebar — Drag-and-drop reordering + customizable top-left label.
- Misc — Various bug fixes and stability improvements.
## ✨ Highlights

### Unified Configuration System
Completely redesigned configuration management for LLM, Embedding, TTS, and Search services:
Key Features:
- Environment-based secrets: Store sensitive API keys in .env while managing configurations in the UI
- `{"use_env": "VAR_NAME"}` syntax: Reference environment variables without exposing them to the frontend (see the sketch after this list)
- Per-service active config: Each service (LLM, Embedding, TTS, Search) maintains its own active configuration
- Seamless provider switching: Add new providers in the frontend without touching backend secrets
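To make the `{"use_env": "VAR_NAME"}` idea concrete, here is a minimal sketch of how such a reference could be resolved on the backend. The `llm_config` payload and the `resolve_env_refs` helper are illustrative assumptions, not the actual code in `unified_config.py`:

```python
import os
from typing import Any

# Hypothetical config payload as the frontend might submit it: the API key is
# never embedded in the payload, only a reference to an environment variable.
llm_config = {
    "provider": "openai",
    "model": "gpt-4o-mini",
    "api_key": {"use_env": "OPENAI_API_KEY"},  # resolved on the backend only
}

def resolve_env_refs(value: Any) -> Any:
    """Recursively replace {"use_env": "VAR"} markers with the value of $VAR."""
    if isinstance(value, dict):
        if set(value) == {"use_env"}:
            return os.environ.get(value["use_env"], "")  # secret stays server-side
        return {k: resolve_env_refs(v) for k, v in value.items()}
    if isinstance(value, list):
        return [resolve_env_refs(v) for v in value]
    return value

# Resolved just before calling the provider; the plain key is never sent to the UI.
resolved = resolve_env_refs(llm_config)
```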
New Search Providers:

| Provider | Description |
|:---|:---|
| Tavily | AI-native search API |
| Exa | Neural search engine |
| Jina | Reader-based web search |
| Serper | Google SERP API |
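The refactored search layer (`src/services/search/`) puts these providers behind a common interface. The sketch below shows the general shape of such a design under assumed names; `SearchProvider`, `TavilySearch`, and `make_provider` are hypothetical, not the module's actual API:

```python
from abc import ABC, abstractmethod

class SearchProvider(ABC):
    """Hypothetical common interface behind which Tavily/Exa/Jina/Serper sit."""

    @abstractmethod
    def search(self, query: str, max_results: int = 5) -> list[dict]:
        """Return results as dicts with 'title', 'url', and 'snippet' keys."""

class TavilySearch(SearchProvider):
    def __init__(self, api_key: str):
        self.api_key = api_key

    def search(self, query: str, max_results: int = 5) -> list[dict]:
        # A real implementation would call Tavily's HTTP API here and
        # normalize the response into the shared result shape.
        raise NotImplementedError("illustrative stub")

PROVIDERS = {"tavily": TavilySearch}  # exa, jina, serper registered the same way

def make_provider(name: str, api_key: str) -> SearchProvider:
    return PROVIDERS[name](api_key)
```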
### RAG Pipeline Selection
Choose the optimal RAG pipeline for each knowledge base based on your speed/quality requirements:
| Pipeline | Index Type | Best For | Speed |
|---|---|---|---|
| LlamaIndex | Vector (Direct) | Quick setup, simple documents | Fastest |
| LightRAG | Knowledge Graph | General documents, text-heavy | Fast |
| RAG-Anything | Multimodal Graph | Academic papers, textbooks with figures/equations | Slower (thorough) |
### Question Generation Overhaul

Refactored the Question Generation module with a unified agent architecture:
Backend Changes:
- Migrated to BaseAgent pattern consistent with other modules
- New specialized agents: RetrieveAgent, GenerateAgent, RelevanceAnalyzer
- Single-pass generation with relevance classification (no iterative validation loops)
- Improved JSON parsing with markdown code block extraction (see the sketch below)
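The JSON-parsing improvement refers to the common technique of stripping a Markdown code fence before calling `json.loads`. A minimal sketch, assuming a standalone helper; the function name and fallback behavior are illustrative, not the exact implementation:

```python
import json
import re

def extract_json(llm_output: str) -> dict:
    """Parse JSON from an LLM reply that may wrap it in a ```json ... ``` fence."""
    # Prefer the content of a fenced code block if one is present;
    # otherwise try to parse the whole reply as-is.
    match = re.search(r"```(?:json)?\s*(.*?)\s*```", llm_output, re.DOTALL)
    payload = match.group(1) if match else llm_output.strip()
    return json.loads(payload)

print(extract_json('Here you go:\n```json\n{"question": "What is RAG?", "difficulty": "easy"}\n```'))
```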
Frontend Improvements:
- Real-time progress dashboard with stage indicators
- Log drawer for debugging generation process
- Cleaner question card layout with answer submission
- "Add to Notebook" integration
### Home Page Enhancements
Save Chat to Notebook:
- New "Save to Notebook" button in chat interface
- Automatically formats conversation as markdown
- Preserves user queries and assistant responses with role labels
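As a rough illustration of the saved format, a conversation could be rendered to Markdown like this; the message shape and role labels are assumptions, not necessarily the exact output DeepTutor writes:

```python
def chat_to_markdown(messages: list[dict], title: str = "Chat History") -> str:
    """Render a list of {"role": ..., "content": ...} messages as a Markdown note."""
    labels = {"user": "**User:**", "assistant": "**Assistant:**"}
    lines = [f"# {title}", ""]
    for msg in messages:
        lines.append(labels.get(msg["role"], f'**{msg["role"].title()}:**'))
        lines.append(msg["content"])
        lines.append("")  # blank line between turns
    return "\n".join(lines)

print(chat_to_markdown([
    {"role": "user", "content": "Explain LightRAG in one sentence."},
    {"role": "assistant", "content": "LightRAG builds a knowledge graph index for retrieval."},
]))
```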
### Sidebar Customization
Drag-and-Drop Navigation:
- Reorder sidebar items within groups by dragging
- Visual feedback during drag operations
- Persistent order saved to user settings
Customizable Description:
- Click to edit the sidebar description label
- Personalize your workspace identity
## 📦 What's Changed
### Core Infrastructure
- Added `src/services/config/unified_config.py` — Centralized configuration manager
- Added `src/api/routers/config.py` — Unified REST API for all service configs
- Refactored web search to support multiple providers (`src/services/search/`)
- Enhanced error handling with LLM error framework
### RAG System
- Implemented `LlamaIndexPipeline` with custom embedding adapter
- Implemented pure `LightRAGPipeline` with complete initialization
- Added pipeline selection during KB create/upload (PR #129)
- Factory pattern in `src/services/rag/factory.py` for pipeline management (see the sketch below)
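A rough sketch of the factory idea: a registry maps the per-KB pipeline setting to a class, so the rest of the code never branches on pipeline names. Everything here except the three pipeline class names is illustrative and simplified relative to `factory.py`:

```python
class LlamaIndexPipeline:
    """Direct vector indexing (fastest setup)."""

class LightRAGPipeline:
    """Knowledge-graph indexing for text-heavy documents."""

class RAGAnythingPipeline:
    """Multimodal graph indexing for papers with figures and equations."""

# Registry mapping the per-KB pipeline setting to its implementation class.
_PIPELINES = {
    "llamaindex": LlamaIndexPipeline,
    "lightrag": LightRAGPipeline,
    "raganything": RAGAnythingPipeline,
}

def create_pipeline(name: str, **kwargs):
    """Instantiate the pipeline that was selected when the KB was created."""
    try:
        return _PIPELINES[name.lower()](**kwargs)
    except KeyError:
        raise ValueError(f"Unknown RAG pipeline: {name!r}") from None

# Chosen once at KB create/upload time (PR #129) and reused for later queries.
pipeline = create_pipeline("lightrag")
```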
### Question Generation
- Refactored `AgentCoordinator` with specialized agents
- New `RetrieveAgent`, `GenerateAgent`, `RelevanceAnalyzer` in `src/agents/question/agents/`
- Removed iterative validation loops for faster generation
- Added `useQuestionReducer` hook for frontend state management
### Frontend Updates
- `web/app/settings/page.tsx` — Complete rebuild with unified config UI
- `web/app/question/page.tsx` — New dashboard with progress tracking
- `web/app/page.tsx` — Added "Save to Notebook" functionality
- `web/components/Sidebar.tsx` — Drag-and-drop + editable description
- `web/components/AddToNotebookModal.tsx` — Reusable notebook integration
## Merged Pull Requests
- feat(kb): allow selecting RAG provider during KB create/upload by @tusharkhatriofficial in https://github.com/HKUDS/DeepTutor/pull/129
- fix(docker): make Dockerfile portable by @scrrlt in https://github.com/HKUDS/DeepTutor/pull/128
- feat: pre-commit CI integration by @scrrlt in https://github.com/HKUDS/DeepTutor/pull/126
- feat: LlamaIndex pipeline implementation by @tusharkhatriofficial in https://github.com/HKUDS/DeepTutor/pull/98
- feat: web search providers (Tavily, Exa, Jina, Serper) by @Andres77872 in https://github.com/HKUDS/DeepTutor/pull/95
- feat: Azure OpenAI support enhancement by @scrrlt in https://github.com/HKUDS/DeepTutor/pull/87
- fix: modal vertical centering by @OlalalalaO in https://github.com/HKUDS/DeepTutor/pull/117
- work: LLM error framework by @scrrlt in https://github.com/HKUDS/DeepTutor/pull/118
## New Contributors
- @Andres77872 made their first contribution in https://github.com/HKUDS/DeepTutor/pull/95
- @OlalalalaO made their first contribution in https://github.com/HKUDS/DeepTutor/pull/117
**Full Changelog**: https://github.com/HKUDS/DeepTutor/compare/v0.4.1...v0.5.0