mllm is an open-source inference engine for running multimodal large language models efficiently on mobile devices and in edge computing environments. The framework targets high-performance AI inference on resource-constrained systems such as smartphones, embedded hardware, and lightweight computing platforms. Implemented primarily in C and C++, it operates with minimal external dependencies while exploiting hardware-specific acceleration such as ARM NEON and x86 AVX2 instructions. The engine supports several optimization techniques, including quantization, pruning, and speculative decoding, to improve performance while reducing computational overhead. It also provides tools to convert models from popular formats such as PyTorch checkpoints into optimized runtime formats that can be executed on supported hardware platforms.
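To make the quantization technique mentioned above concrete, here is a minimal, self-contained sketch of symmetric per-tensor int8 weight quantization, the general scheme engines like this use to shrink model weights. This is an illustrative example, not mllm's actual code; the function names (`compute_scale`, `quantize`, `dequantize`) are hypothetical.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <cstdint>
#include <vector>

// Symmetric per-tensor int8 quantization: one scale for the whole tensor,
// chosen so the largest-magnitude value maps to +/-127.
float compute_scale(const std::vector<float>& x) {
    float amax = 0.0f;
    for (float v : x) amax = std::max(amax, std::fabs(v));
    return amax > 0.0f ? amax / 127.0f : 1.0f;
}

// Round each value to the nearest int8 step and clamp to [-127, 127].
std::vector<int8_t> quantize(const std::vector<float>& x, float scale) {
    std::vector<int8_t> q(x.size());
    for (size_t i = 0; i < x.size(); ++i) {
        float r = std::round(x[i] / scale);
        q[i] = static_cast<int8_t>(std::clamp(r, -127.0f, 127.0f));
    }
    return q;
}

// Recover approximate floats; error is bounded by half a quantization step.
std::vector<float> dequantize(const std::vector<int8_t>& q, float scale) {
    std::vector<float> x(q.size());
    for (size_t i = 0; i < q.size(); ++i) x[i] = q[i] * scale;
    return x;
}
```

For example, quantizing `{0.5f, -1.27f, 0.01f, 1.27f}` yields a scale of 0.01 and the int8 values `{50, -127, 1, 127}`; dequantizing reconstructs the originals to within half a step. In a real engine the matrix multiply runs directly on the int8 values (e.g. via NEON or AVX2 integer instructions), applying the scale only at the output.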

Features

  • Lightweight multimodal LLM inference engine optimized for mobile and edge devices
  • Support for ARM CPUs, x86 processors, and specialized accelerators such as Qualcomm NPUs
  • Model conversion utilities for importing PyTorch and SafeTensors checkpoints
  • Advanced optimization techniques including quantization, pruning, and speculative decoding
  • Command-line and Android demonstration applications for running local inference
  • Support for multimodal models combining text and image understanding tasks
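Of the features above, speculative decoding is the least self-explanatory: a small draft model proposes several tokens cheaply, and the large target model verifies them in a single pass, keeping the matching prefix. The toy sketch below shows the greedy-decoding variant of that draft/verify loop with deterministic stand-in "models"; it is an illustration of the general technique, not mllm's implementation, and `Model` and `speculative_step` are hypothetical names.

```cpp
#include <vector>

// Toy deterministic "model": maps the last token to the next token.
// Stand-in for both the small draft model and the large target model.
using Model = int (*)(int);

// One speculative step: the draft proposes k tokens, the target checks them
// in order; matching tokens are accepted, and the target's own prediction
// replaces the first mismatch (greedy variant of speculative decoding).
std::vector<int> speculative_step(Model draft, Model target, int last, int k) {
    // Draft phase: k cheap sequential proposals.
    std::vector<int> proposed;
    int t = last;
    for (int i = 0; i < k; ++i) {
        t = draft(t);
        proposed.push_back(t);
    }

    // Verify phase: in a real engine this is a single batched forward pass
    // of the target model over all proposed positions.
    std::vector<int> accepted;
    t = last;
    for (int p : proposed) {
        int want = target(t);  // what the target would emit here
        if (want == p) {
            accepted.push_back(p);
            t = p;
        } else {
            accepted.push_back(want);  // correct the mismatch and stop
            break;
        }
    }
    return accepted;
}
```

When the draft agrees with the target most of the time, each verification pass yields several tokens for roughly the cost of one target step, which is why the technique helps on compute-limited mobile hardware.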

License

MIT License

Additional Project Details

Programming Language

C++

Related Categories

C++ Large Language Models (LLM)

Registered

2026-03-09