Low-latency AI inference engine optimized for mobile devices
Cactus supports a wide range of AI tasks, including text generation, speech-to-text, vision processing, and retrieval-augmented workflows, through a unified API. A notable feature is its hybrid execution model, which can dynamically route requests between on-device processing and cloud services when additional compute is required.
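To make the hybrid execution idea concrete, here is a minimal Kotlin sketch of what such a routing policy could look like. All of the names and thresholds (`HybridRouter`, `InferenceTarget`, the token and battery limits) are illustrative assumptions, not part of the actual Cactus API:

```kotlin
// Hypothetical sketch of a hybrid execution router; the names and
// thresholds here are illustrative assumptions, not the real Cactus API.
enum class InferenceTarget { ON_DEVICE, CLOUD }

class HybridRouter(
    private val maxLocalTokens: Int = 2048,    // assumed capacity of the local model
    private val batteryFloorPercent: Int = 20  // assumed threshold before offloading
) {
    // Route a request based on connectivity and simple device heuristics.
    fun route(promptTokens: Int, batteryPercent: Int, online: Boolean): InferenceTarget =
        when {
            !online -> InferenceTarget.ON_DEVICE                    // offline: must run locally
            promptTokens > maxLocalTokens -> InferenceTarget.CLOUD  // too large for the local model
            batteryPercent < batteryFloorPercent -> InferenceTarget.CLOUD
            else -> InferenceTarget.ON_DEVICE
        }
}

fun main() {
    val router = HybridRouter()
    println(router.route(promptTokens = 512, batteryPercent = 80, online = true))   // ON_DEVICE
    println(router.route(promptTokens = 8192, batteryPercent = 80, online = true))  // CLOUD
}
```

The point of such a design is that routing is a runtime decision hidden behind the unified API: callers issue the same request whether it ends up on the device or in the cloud.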
Deep learning inference framework optimized for mobile platforms
Mobile AI Compute Engine (MACE) is a deep learning inference framework optimized for mobile heterogeneous computing on Android, iOS, Linux, and Windows devices. The runtime is optimized with NEON, OpenCL, and Hexagon, and the Winograd algorithm is used to speed up convolution operations. Initialization is also optimized to be faster.
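The Winograd speedup comes from trading multiplications for cheaper additions. As a concrete illustration, the 1-D F(2,3) variant below computes two outputs of a 3-tap filter with 4 multiplications instead of the direct method's 6. This is a self-contained sketch of the algorithm itself, not MACE code (convolution layers use a 2-D tiled form of the same idea):

```kotlin
// Winograd F(2,3) minimal filtering: two outputs of a 3-tap sliding dot
// product using 4 multiplications instead of the direct method's 6.
fun winogradF23(d: DoubleArray, g: DoubleArray): DoubleArray {
    require(d.size == 4 && g.size == 3)
    // Filter transform (can be precomputed once per filter).
    val g0 = g[0]
    val g1 = (g[0] + g[1] + g[2]) / 2
    val g2 = (g[0] - g[1] + g[2]) / 2
    val g3 = g[2]
    // Input transform followed by the 4 multiplications.
    val m1 = (d[0] - d[2]) * g0
    val m2 = (d[1] + d[2]) * g1
    val m3 = (d[2] - d[1]) * g2
    val m4 = (d[1] - d[3]) * g3
    // Output transform.
    return doubleArrayOf(m1 + m2 + m3, m2 - m3 - m4)
}

fun main() {
    val d = doubleArrayOf(1.0, 2.0, 3.0, 4.0)
    val g = doubleArrayOf(0.5, -1.0, 2.0)
    // Direct result for comparison:
    // [d0*g0 + d1*g1 + d2*g2, d1*g0 + d2*g1 + d3*g2] = [4.5, 6.0]
    println(winogradF23(d, g).toList()) // [4.5, 6.0]
}
```

At 2-D scale, the F(2x2, 3x3) form of the same trick reduces 36 multiplications per output tile to 16, which is where the practical speedup for 3x3 convolutions on mobile hardware comes from.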