Jlama is a modern inference engine written entirely in Java that lets developers run large language models locally inside Java applications. Unlike frameworks that depend on external APIs or remote services, Jlama performs inference directly on the local machine using pre-trained model weights, so organizations can integrate generative AI features into their systems while keeping full control over data privacy and infrastructure.

The engine supports a wide range of open-source model architectures and formats, including Llama and Mistral variants and other transformer-based models. It provides tools for chat interactions and prompt completion, and can expose an OpenAI-compatible REST API for applications that expect standard LLM endpoints. The project focuses on performance and portability, using modern JVM features such as the Java Vector API to accelerate inference workloads.
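The Vector API acceleration mentioned above can be illustrated with a small SIMD kernel. This is a sketch of the technique, not Jlama's actual code; it uses the incubating `jdk.incubator.vector` module and must be compiled and run with `--add-modules jdk.incubator.vector`:

```java
import jdk.incubator.vector.FloatVector;
import jdk.incubator.vector.VectorOperators;
import jdk.incubator.vector.VectorSpecies;

public class DotProduct {
    // SPECIES_PREFERRED picks the widest vector shape the CPU supports.
    private static final VectorSpecies<Float> SPECIES = FloatVector.SPECIES_PREFERRED;

    static float dot(float[] a, float[] b) {
        FloatVector acc = FloatVector.zero(SPECIES);
        int i = 0;
        int upper = SPECIES.loopBound(a.length);
        // Process SPECIES.length() floats per iteration with a lanewise fused multiply-add.
        for (; i < upper; i += SPECIES.length()) {
            FloatVector va = FloatVector.fromArray(SPECIES, a, i);
            FloatVector vb = FloatVector.fromArray(SPECIES, b, i);
            acc = va.fma(vb, acc);
        }
        // Collapse the accumulator lanes, then handle the scalar tail.
        float sum = acc.reduceLanes(VectorOperators.ADD);
        for (; i < a.length; i++) sum += a[i] * b[i];
        return sum;
    }

    public static void main(String[] args) {
        float[] a = {1f, 2f, 3f, 4f, 5f};
        float[] b = {5f, 4f, 3f, 2f, 1f};
        System.out.println(dot(a, b)); // 5+8+9+8+5
    }
}
```

Dot products like this dominate transformer inference (they are the inner loop of every matrix multiplication), which is why vectorizing them pays off so directly.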
Features
- Native Java inference engine for large language models
- Support for multiple model architectures including Llama and Mistral
- Local execution without reliance on external AI APIs
- OpenAI-compatible REST API for application integration
- Prompt completion and conversational chat interfaces
- Performance optimizations using Java Vector API and modern JVM features
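Because the REST layer follows OpenAI's chat-completions conventions, any standard HTTP client can talk to it. The sketch below only constructs the request using `java.net.http` from the JDK; the port `8080`, the `/v1/chat/completions` path, and the model name are assumptions based on the standard OpenAI layout, so check your server's configuration before actually sending:

```java
import java.net.URI;
import java.net.http.HttpRequest;

public class JlamaChatRequest {
    public static void main(String[] args) {
        // A standard OpenAI-style chat-completions body; the model name is a placeholder.
        String body = """
                {
                  "model": "local-model",
                  "messages": [
                    {"role": "user", "content": "Say hello in one word."}
                  ]
                }""";

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8080/v1/chat/completions"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        // Sending requires a running Jlama server, e.g.:
        // HttpClient.newHttpClient().send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(request.method() + " " + request.uri());
    }
}
```

Because the endpoint is OpenAI-compatible, existing OpenAI client libraries can usually be pointed at it simply by overriding their base URL.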