
🔧 DeepTutor v0.4.1 Release Notes

Release Date: 2026.01.09

A maintenance release focused on LLM Provider system optimization, Question Generation robustness, and Docker deployment fixes.

✨ Highlights

🔌 LLM Provider System Overhaul

Completely redesigned LLM provider management with persistent configuration:

Three Deployment Modes (`LLM_MODE` env var):

| Mode | Description |
|:---|:---|
| `hybrid` (default) | Use the active provider if available, otherwise fall back to env config |
| `api` | Cloud API providers only (OpenAI, Anthropic, etc.) |
| `local` | Local/self-hosted providers only (Ollama, LM Studio, etc.) |
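
As an illustration of how the `hybrid` fallback could work, here is a minimal sketch; `resolve_llm_config` and the `type` field are assumptions made for this example, not DeepTutor's actual internals:

```python
import os

# Hypothetical sketch of LLM_MODE-gated provider selection; the function
# name and dict fields are illustrative assumptions, not DeepTutor code.
def resolve_llm_config(active_provider: dict | None, env_config: dict) -> dict:
    mode = os.environ.get("LLM_MODE", "hybrid")
    if mode == "hybrid":
        # Prefer the persisted active provider, else fall back to env config.
        return active_provider or env_config
    if mode in ("api", "local"):
        # api/local modes only accept a provider of the matching type.
        if active_provider and active_provider.get("type") == mode:
            return active_provider
        raise RuntimeError(f"No {mode} provider configured")
    raise ValueError(f"Unknown LLM_MODE: {mode!r}")
```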

Provider Presets for quick setup:

```python
# API Providers
API_PROVIDER_PRESETS = {
    "openai": {"base_url": "https://api.openai.com/v1", "requires_key": True},
    "anthropic": {"base_url": "https://api.anthropic.com/v1", "requires_key": True},
    "deepseek": {"base_url": "https://api.deepseek.com", "requires_key": True},
    "openrouter": {"base_url": "https://openrouter.ai/api/v1", "requires_key": True},
}

# Local Providers
LOCAL_PROVIDER_PRESETS = {
    "ollama": {"base_url": "http://localhost:11434/v1", "requires_key": False},
    "lm_studio": {"base_url": "http://localhost:1234/v1", "requires_key": False},
    "vllm": {"base_url": "http://localhost:8000/v1", "requires_key": False},
    "llama_cpp": {"base_url": "http://localhost:8080/v1", "requires_key": False},
}
```
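
Since every preset exposes an OpenAI-compatible `base_url`, it can seed any OpenAI-compatible client. This is a hedged sketch rather than DeepTutor code; the model name is an assumption, and Ollama ignores the API key even though the SDK requires a non-empty string:

```python
from openai import OpenAI

# Point an OpenAI-compatible client at the Ollama preset defined above.
preset = LOCAL_PROVIDER_PRESETS["ollama"]
client = OpenAI(base_url=preset["base_url"], api_key="not-needed")

resp = client.chat.completions.create(
    model="llama3",  # assumed model name; use whatever model is pulled locally
    messages=[{"role": "user", "content": "Hello!"}],
)
print(resp.choices[0].message.content)
```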

New API Endpoints:

- `GET /api/llm-providers/mode/` - Get current LLM mode info
- `GET /api/llm-providers/presets/` - Get provider presets
- `POST /api/llm-providers/test/` - Test provider connection
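
A quick way to exercise these endpoints from Python; the backend address and the test payload shape are assumptions, not a documented schema:

```python
import requests

BASE = "http://localhost:8000"  # assumed backend address

# Inspect the current LLM mode and the bundled presets.
print(requests.get(f"{BASE}/api/llm-providers/mode/").json())
print(requests.get(f"{BASE}/api/llm-providers/presets/").json())

# Probe a provider before activating it. The body fields mirror the
# preset structure above but are an assumption about the request schema.
result = requests.post(
    f"{BASE}/api/llm-providers/test/",
    json={"base_url": "http://localhost:11434/v1", "api_key": ""},
)
print(result.status_code, result.json())
```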

🛡️ Question Generation Robustness (PR [#81])

Enhanced JSON parsing for LLM responses:

- Added `_extract_json_from_markdown()` to handle responses wrapped in ```` ```json ... ``` ```` fences (sketched below)
- Comprehensive error handling with detailed logging
- Graceful fallbacks when the LLM returns invalid JSON
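
A minimal sketch of what such a fence-stripping helper might look like; this illustrates the technique and is not DeepTutor's actual implementation:

```python
import json
import re

def _extract_json_from_markdown(text: str):
    """Pull a JSON payload out of an LLM response, tolerating markdown fences."""
    # Prefer an explicit ```json fence, accept any fence, else use raw text.
    match = re.search(r"```(?:json)?\s*(.*?)\s*```", text, re.DOTALL)
    candidate = match.group(1) if match else text.strip()
    try:
        return json.loads(candidate)
    except json.JSONDecodeError:
        # Graceful fallback: return None so callers can retry or skip
        # instead of crashing question generation.
        return None
```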

🐳 Docker Deployment Fixes

  • Fixed frontend startup script for proper NEXT_PUBLIC_API_BASE injection
  • Improved supervisor configuration for better service management
  • Environment variable handling improvements

🧹 Codebase Cleanup

Removed the `src/core` module; functionality migrated to `src/services` (logging now lives in `src/logging`):

| Old Import | New Import |
|:---|:---|
| `from src.core.core import load_config_with_main` | `from src.services.config import load_config_with_main` |
| `from src.core.llm_factory import llm_complete` | `from src.services.llm import complete` |
| `from src.core.prompt_manager import get_prompt_manager` | `from src.services.prompt import get_prompt_manager` |
| `from src.core.logging import get_logger` | `from src.logging import get_logger` |
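
Putting the new paths together (the import paths come from the table above; the usage lines are an assumption):

```python
# After migrating, imports come from src.services and src.logging.
from src.services.config import load_config_with_main
from src.services.llm import complete  # note: renamed from llm_complete
from src.services.prompt import get_prompt_manager
from src.logging import get_logger

logger = get_logger(__name__)  # assumed usage pattern
```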

📦 What's Changed

  • Merge pull request [#81] from tusharkhatriofficial/fix/question-generation-json-parsing
  • fix: Add comprehensive error handling and JSON parsing for question generation
  • fix: llm providers, frontend
  • fix: docker deployment

Full Changelog: https://github.com/HKUDS/DeepTutor/compare/v0.4.0...v0.4.1
