The Omi project is an open-source AI wearable ecosystem developed by Based Hardware that combines hardware, software, and cloud infrastructure to create a persistent “second brain” for capturing and processing real-world interactions. The system continuously listens to conversations and monitors screen activity, converting this input in real time into structured data such as transcripts, summaries, and actionable insights. It runs across multiple environments, including wearable devices, mobile apps, and desktop applications, so it fits into a user’s daily workflow. At its core, Omi uses a pipeline of speech-to-text systems, large language models, and memory storage services to transform raw audio and context into outputs such as tasks and reminders. The architecture is modular and extensible, with APIs, SDKs, and plugin-like capabilities that let developers build custom applications.
Features
- Real-time transcription of conversations and screen activity
- Automatic generation of summaries and actionable tasks
- Persistent AI memory with searchable interaction history
- Cross-platform support across wearable devices, mobile, and desktop
- Modular architecture with APIs, SDKs, and extensibility
- Integration of speech-to-text, LLMs, and contextual data processing
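The pipeline described above — speech-to-text, then an LLM pass, then structured outputs like tasks — can be sketched as a chain of pluggable stages. This is an illustrative sketch only: the names here (`Interaction`, `stub_transcribe`, `stub_extract_tasks`) are hypothetical stand-ins for real STT and LLM services, not part of Omi’s actual API.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Interaction:
    """One captured unit of audio plus the data derived from it."""
    audio: bytes
    transcript: str = ""
    tasks: List[str] = field(default_factory=list)

def stub_transcribe(item: Interaction) -> Interaction:
    # Stand-in for a real speech-to-text service call.
    item.transcript = "remember to email the design doc to Sam"
    return item

def stub_extract_tasks(item: Interaction) -> Interaction:
    # Stand-in for an LLM call that turns a transcript into action items.
    marker = "remember to "
    if marker in item.transcript:
        item.tasks.append(item.transcript.split(marker, 1)[1])
    return item

class Pipeline:
    """Runs an interaction through an ordered list of processing stages."""
    def __init__(self, stages: List[Callable[[Interaction], Interaction]]):
        self.stages = stages

    def run(self, item: Interaction) -> Interaction:
        for stage in self.stages:
            item = stage(item)
        return item

pipeline = Pipeline([stub_transcribe, stub_extract_tasks])
result = pipeline.run(Interaction(audio=b"\x00"))
print(result.tasks)  # → ['email the design doc to Sam']
```

Keeping each stage a plain callable mirrors the modular design described above: swapping the stub transcriber for a real STT backend, or adding a summarization stage, changes only the list passed to `Pipeline`.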