mllm is an open-source inference engine for running multimodal large language models efficiently on mobile devices and edge computing environments. The framework focuses on high-performance AI inference in resource-constrained systems such as smartphones, embedded hardware, and other lightweight computing platforms. Implemented primarily in C and C++ with minimal external dependencies, it takes advantage of hardware-specific acceleration such as ARM NEON and x86 AVX2 instructions. The system supports optimization techniques including quantization, pruning, and speculative decoding to improve performance while reducing computational overhead. It also provides tools to convert models from popular formats such as PyTorch checkpoints into optimized runtime formats that can be executed on supported hardware platforms.

Features

  • Lightweight multimodal LLM inference engine optimized for mobile and edge devices
  • Support for ARM CPUs, x86 processors, and specialized accelerators such as Qualcomm NPUs
  • Model conversion utilities for importing PyTorch and SafeTensors checkpoints
  • Advanced optimization techniques including quantization, pruning, and speculative decoding
  • Command-line and Android demonstration applications for running local inference
  • Support for multimodal models combining text and vision understanding tasks

License

MIT License


Additional Project Details

Programming Language

C++

Related Categories

C++ Large Language Models (LLM)

Registered

2026-03-09