mllm is an open-source inference engine for running multimodal large language models efficiently on mobile devices and edge computing environments. The framework targets high-performance AI inference on resource-constrained systems such as smartphones, embedded hardware, and other lightweight computing platforms. Implemented primarily in C and C++, it keeps external dependencies to a minimum while exploiting hardware-specific acceleration such as ARM NEON and x86 AVX2 instructions. The engine supports optimization techniques including quantization, pruning, and speculative decoding to improve inference speed and reduce computational overhead. It also provides tools to convert models from popular formats such as PyTorch checkpoints into an optimized runtime format that can be executed on supported hardware platforms.
## Features
- Lightweight multimodal LLM inference engine optimized for mobile and edge devices
- Support for ARM CPUs, x86 processors, and specialized accelerators such as Qualcomm NPUs
- Model conversion utilities for importing PyTorch and SafeTensors checkpoints
- Advanced optimization techniques including quantization, pruning, and speculative decoding
- Command-line and Android demonstration applications for running local inference
- Support for multimodal models that combine text and vision (image understanding) tasks