Omi is an open-source AI wearable platform that captures spoken conversations and turns them into useful digital artifacts such as transcripts, summaries, and action items. It combines hardware, firmware, mobile applications, and backend services into a complete ecosystem for voice-driven interaction: users pair the wearable with a mobile phone to automatically record and transcribe meetings, conversations, and voice memos.

The repository includes firmware for the wearable hardware, a Flutter-based mobile companion app, backend services built with Python and FastAPI, and SDKs for developers. Together, these components process audio, perform speech recognition, and layer on AI features such as summaries and automated actions. Developers can extend the platform with plugins, integrations, and custom applications built on the provided SDKs and APIs, and the repository also supports experimental hardware implementations.
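As a rough sketch of what a plugin-style extension might look like, the Python function below consumes a finished conversation payload and pulls out the transcript, summary, and action items. The payload field names (`transcript_segments`, `structured`, `action_items`) are illustrative assumptions, not the actual Omi webhook schema:

```python
# Hypothetical sketch of a plugin handler for a transcribed conversation.
# The payload shape used here is an assumption for illustration only.

def handle_conversation(payload: dict) -> dict:
    """Return a compact record from a transcribed conversation payload."""
    segments = payload.get("transcript_segments", [])
    # Join per-speaker segments into one transcript string.
    transcript = " ".join(seg.get("text", "") for seg in segments)
    structured = payload.get("structured", {})
    return {
        "transcript": transcript,
        "summary": structured.get("overview", ""),
        "action_items": [
            item.get("description", "")
            for item in structured.get("action_items", [])
        ],
    }

example = {
    "transcript_segments": [
        {"speaker": "SPEAKER_0", "text": "Let's ship the beta Friday."},
        {"speaker": "SPEAKER_1", "text": "I'll draft the release notes."},
    ],
    "structured": {
        "overview": "Planning the beta release.",
        "action_items": [{"description": "Draft release notes"}],
    },
}
record = handle_conversation(example)
# record["action_items"] → ["Draft release notes"]
```

In a real integration this handler would sit behind an HTTP endpoint registered with the backend, but the exact registration mechanism depends on the plugin system's API.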
Features
- Automatic transcription of conversations, meetings, and voice memos
- AI-generated summaries and action items from captured speech
- Companion mobile app built with Flutter for device interaction
- Firmware for wearable hardware with Bluetooth audio streaming
- Plugin and integration system for extending functionality
- SDKs available for building apps and integrations
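To make the Bluetooth audio-streaming feature concrete, here is a minimal sketch of decoding and reassembling audio packets received from a wearable over BLE. The 4-byte header layout (little-endian uint16 sequence number, uint8 frame index, uint8 codec id) is an illustrative assumption, not the actual Omi firmware packet format:

```python
import struct

# Assumed packet header: <uint16 seq> <uint8 frame index> <uint8 codec id>,
# little-endian, followed by the raw encoded audio frame bytes.
HEADER = struct.Struct("<HBB")

def parse_audio_packet(packet: bytes) -> dict:
    """Split one BLE notification into header fields and audio payload."""
    seq, frame_idx, codec = HEADER.unpack_from(packet, 0)
    return {
        "seq": seq,
        "frame": frame_idx,
        "codec": codec,
        "audio": packet[HEADER.size:],
    }

def reassemble(packets: list[bytes]) -> bytes:
    """Order payloads by sequence number, dropping duplicate packets."""
    frames: dict[int, bytes] = {}
    for pkt in packets:
        parsed = parse_audio_packet(pkt)
        frames.setdefault(parsed["seq"], parsed["audio"])
    return b"".join(frames[seq] for seq in sorted(frames))

pkts = [
    HEADER.pack(1, 0, 1) + b"\x10\x11",
    HEADER.pack(0, 0, 1) + b"\x00\x01",
    HEADER.pack(1, 0, 1) + b"\x10\x11",  # duplicate, ignored
]
audio = reassemble(pkts)  # b"\x00\x01\x10\x11"
```

A real client would receive these packets as GATT notifications and hand the reassembled stream to a codec decoder before transcription; those details are device- and firmware-specific.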