LLM Telegram Bot is a self-hosted Telegram chatbot that connects messaging interactions with large language models, typically served by a local backend such as Ollama. It provides a customizable AI assistant inside Telegram conversations, generating responses from user input and configurable parameters.

The bot keeps conversation memory, maintaining context across multiple messages so replies stay coherent. It supports multiple modes or personas, letting users switch between conversational styles or use cases, and exposes generation parameters such as temperature and token limits for control over response behavior. The architecture is modular, making it easy to extend or adapt for different workflows and integrations.
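The conversation-memory behavior described above can be sketched as a rolling per-chat message buffer. This is an illustrative sketch, not the project's actual API; the class and method names are assumptions:

```python
class ConversationMemory:
    """Rolling per-chat history, trimmed to a maximum number of messages."""

    def __init__(self, max_messages: int = 20):
        self.max_messages = max_messages
        self._history: dict[int, list[dict]] = {}  # chat_id -> message list

    def add(self, chat_id: int, role: str, content: str) -> None:
        messages = self._history.setdefault(chat_id, [])
        messages.append({"role": role, "content": content})
        # Drop the oldest messages so the prompt sent to the LLM stays bounded.
        if len(messages) > self.max_messages:
            del messages[: len(messages) - self.max_messages]

    def context(self, chat_id: int) -> list[dict]:
        """Messages to include in the next request to the LLM backend."""
        return list(self._history.get(chat_id, []))


memory = ConversationMemory(max_messages=4)
memory.add(1, "user", "Hello")
memory.add(1, "assistant", "Hi! How can I help?")
memory.add(1, "user", "What's Ollama?")
memory.add(1, "assistant", "A local LLM runtime.")
memory.add(1, "user", "Thanks")  # oldest message ("Hello") is dropped
```

A real bot would key the history by Telegram chat ID and feed `context()` to the backend on each turn; token-based trimming is a common refinement over a fixed message count.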
Features
- Context-aware conversations with memory
- Support for multiple AI personas or modes
- Adjustable generation parameters
- Integration with local LLM backends like Ollama
- Modular and extensible architecture
- Self-hosted deployment for privacy
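As a concrete illustration of the adjustable generation parameters, Ollama's `/api/chat` endpoint accepts sampling options such as `temperature` and `num_predict` (a response token limit) in the JSON request body. The helper below is a minimal sketch of building such a request, not the bot's actual code:

```python
import json


def build_chat_request(model: str, messages: list[dict],
                       temperature: float = 0.7,
                       max_tokens: int = 256) -> str:
    """Build the JSON body for a non-streaming Ollama /api/chat call."""
    payload = {
        "model": model,
        "messages": messages,
        "stream": False,
        "options": {
            "temperature": temperature,  # sampling randomness
            "num_predict": max_tokens,   # cap on generated tokens
        },
    }
    return json.dumps(payload)


body = build_chat_request("llama3", [{"role": "user", "content": "Hi"}],
                          temperature=0.2, max_tokens=128)
```

Exposing these knobs per chat or per persona is what gives users control over response behavior without restarting the bot.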