Open Source ChromeOS Artificial Intelligence Software - Page 3

Artificial Intelligence Software for ChromeOS

  • 1
    edge-tts

    Use Microsoft Edge's online text-to-speech service from Python

    edge-tts is a Python module and command-line tool that gives you direct access to Microsoft Edge’s online text-to-speech service without needing the Edge browser, Windows, or any API key. It wraps the same cloud voices used by Edge, exposing them through a simple CLI (edge-tts, edge-playback) and a Python API, so you can script high-quality speech generation in your own applications. The tool lets you list available voices, specify locale and voice name, and generate audio files in common formats like MP3 or WAV. It also supports generating subtitle files (such as SRT or VTT) alongside the speech, which is handy for video narration, e-learning, or accessibility workflows. From the CLI you can adjust parameters such as speaking rate, volume, and pitch, giving you some control over prosody without diving into SSML. The library is asynchronous under the hood, which makes it efficient for batch jobs or web services that need to synthesize many utterances concurrently.
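
    For a quick sense of the API, here is a minimal Python sketch using the library's documented Communicate interface (the voice name and rate are illustrative; `edge-tts --list-voices` prints the full catalog):

    # Minimal edge-tts sketch: synthesize one utterance to MP3.
    # Voice and rate are examples; pick any voice from --list-voices.
    import asyncio

    import edge_tts

    async def main() -> None:
        communicate = edge_tts.Communicate(
            "Hello from edge-tts!",
            voice="en-US-AriaNeural",
            rate="+10%",
        )
        await communicate.save("hello.mp3")

    asyncio.run(main())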
    Downloads: 28 This Week
  • 2
    Clippy

    Clippy, now with some AI

    Clippy is an open-source desktop assistant that allows users to run modern large language models locally while presenting them through a nostalgic interface inspired by Microsoft’s classic Clippy assistant from the 1990s. The project serves as both a playful homage to the early days of personal computing and a practical demonstration of local AI inference. Clippy integrates with the llama.cpp runtime to run models directly on a user’s computer without requiring cloud-based AI services. It supports models in the GGUF format, which allows it to run many publicly available open-source LLMs efficiently on consumer hardware. Users interact with the system through a simple animated assistant interface that can answer questions, generate text, and perform conversational tasks. The application includes one-click installation support for several popular models such as Meta’s Llama, Google’s Gemma, and other open models.
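
    As an illustration of the kind of local GGUF inference Clippy performs through llama.cpp, here is a sketch using the llama-cpp-python bindings (a stand-in for Clippy's embedded runtime; the model path is a placeholder for any downloaded checkpoint):

    # Local GGUF chat completion via llama.cpp's Python bindings.
    from llama_cpp import Llama

    llm = Llama(model_path="models/gemma-2-2b-it.Q4_K_M.gguf", n_ctx=2048)

    out = llm.create_chat_completion(
        messages=[{"role": "user", "content": "It looks like you're writing a letter."}],
        max_tokens=128,
    )
    print(out["choices"][0]["message"]["content"])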
    Downloads: 27 This Week
  • 3
    HY-World 2.0

    A Multi-Modal World Model for Reconstructing, Generating, and Simulating

    HY-World 2.0 is a multi-modal world model framework for reconstructing, generating, and simulating navigable 3D worlds from diverse inputs. It accepts text prompts, single-view images, multi-view images, and videos, and produces 3D world representations rather than limiting output to flat video generation. For text and single-image inputs, it generates high-fidelity 3D Gaussian Splatting scenes through a multi-stage pipeline that includes panorama generation, trajectory planning, world expansion, and world composition. The system also improves reconstruction from multi-view images and video by upgrading its feed-forward 3D prediction components and its memory-aware view generation process. Another major part of the project is WorldLens, a rendering platform designed for interactive exploration with an engine-agnostic architecture, automatic image-based lighting, collision detection, and support for character interaction.
    Downloads: 26 This Week
  • 4
    Happy Coder

    Mobile and Web client for Codex and Claude Code, with realtime voice

    Happy is an open-source, cross-platform mobile and web client designed to bring powerful AI coding agents such as Claude Code and Codex to your fingertips no matter where you are. At its core, Happy wraps existing AI coding tools with a unified interface, providing real-time voice interactions, encrypted communication, and seamless device switching between desktop and mobile. You can start a coding session locally through the Happy CLI or connect from a phone or browser, allowing developers to inspect, interact with, and guide the AI as it generates, tests, or explains code. The project includes components like a dedicated backend server for encrypted sync, a rich front-end experience across web and native apps, and support for push notifications when your coding agent encounters permission requests or errors. Happy prioritizes security with end-to-end encryption so your code and interactions remain private and auditable.
    Downloads: 26 This Week
  • 5
    Llama Coder

    Open source Claude Artifacts – built with Llama 3.1 405B

    Llama Coder is an open-source tool that lets you generate small applications (often React or web apps) from a single natural-language prompt using the Llama 3 family of models. It’s framed as an open-source “Claude Artifacts”-style experience: you describe the app you want, the tool calls an LLM hosted on Together.ai, and you get back a runnable code artifact. The project includes a web interface where you can enter prompts, see generated code, and run or tweak the result directly in the browser. Technically, it is built using a modern TypeScript/Next.js stack and integrates with Together’s API, making it a good blueprint for building your own AI-powered developer tools. By focusing on small self-contained apps or components, it keeps scope manageable while still showcasing the power of code generation. Developers can fork the repo to plug in different models, change the UI, or integrate it into their own IDE-adjacent workflows.
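
    The core pattern is a single chat completion against a Together-hosted Llama model; a hedged sketch with the official together Python SDK (the model id is illustrative, and TOGETHER_API_KEY is assumed to be set):

    # Prompt-to-component generation, the pattern Llama Coder builds on.
    from together import Together

    client = Together()  # reads TOGETHER_API_KEY from the environment

    resp = client.chat.completions.create(
        model="meta-llama/Meta-Llama-3.1-405B-Instruct-Turbo",
        messages=[
            {"role": "system", "content": "Return one self-contained React component."},
            {"role": "user", "content": "Build a pomodoro timer with start, pause, and reset."},
        ],
    )
    print(resp.choices[0].message.content)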
    Downloads: 25 This Week
  • 6
    Qwen3-TTS

    Qwen3-TTS is an open-source series of TTS models

    Qwen3-TTS is an open-source text-to-speech (TTS) project built around the Qwen3 large language model family, focused on generating high-quality, natural-sounding speech from plain text input. It provides researchers and developers with tools to transform text into expressive, intelligible audio, supporting multiple languages and voice characteristics tuned for clarity and fluidity. The project includes pre-trained models and inference scripts that let users synthesize speech locally or integrate TTS into larger pipelines such as voice assistants, accessibility tools, or multimedia generation workflows. Because it’s part of the broader Qwen ecosystem, it benefits from the model’s understanding of linguistic nuances, enabling more accurate pronunciation, prosody, and contextual delivery than many traditional TTS systems. Developers can customize voice output parameters like speed, pitch, and volume, and combine the TTS stack with other AI components.
    Downloads: 25 This Week
  • 7
    VoxCPM2

    Tokenizer-Free TTS for Multilingual Speech Generation

    VoxCPM2 is an advanced open-source text-to-speech system that redefines speech synthesis by eliminating traditional tokenization and instead generating continuous speech representations through a diffusion-based autoregressive architecture. Built on top of the MiniCPM model family, it enables highly natural, expressive, and context-aware speech generation that adapts tone, emotion, and pacing directly from input text. The system is trained on massive multilingual datasets, enabling support for dozens of languages and dialects while maintaining high fidelity and realism in generated audio. VoxCPM2 stands out for its ability to perform voice cloning with minimal input, capturing not only the speaker’s timbre but also nuanced features such as rhythm, accent, and emotional delivery. It also introduces voice design capabilities, allowing users to generate entirely new voices from natural language descriptions without requiring reference audio.
    Downloads: 25 This Week
  • 8
    Huashu Design

    Huashu Design · HTML-native design skill for Claude Code

    Huashu-design is a framework focused on designing and optimizing conversational scripts, particularly for persuasive or structured communication scenarios such as sales, marketing, or customer interaction. The project emphasizes the creation of “huashu,” or structured dialogue patterns, that guide interactions toward specific goals. It provides methodologies and tools for organizing conversation flows, ensuring that responses are consistent, effective, and aligned with intended outcomes. The system is designed to be adaptable, allowing users to customize scripts for different domains or audiences. It also encourages iterative refinement, enabling continuous improvement of conversational strategies based on feedback and performance. The framework can be applied to both human-driven and AI-driven interactions, making it versatile across use cases. Overall, huashu-design offers a systematic approach to crafting and managing effective communication patterns.
    Downloads: 23 This Week
  • 9
    VGGFace2

    VGGFace2 Dataset for Face Recognition

    VGGFace2 is a large-scale face recognition dataset developed to support research on facial recognition across variations in pose, age, illumination, and identity. It consists of 3.31 million images covering 9,131 subjects, with an average of over 360 images per subject. The dataset was collected from Google Image Search, ensuring a wide diversity in ethnicity, profession, and real-world conditions. It is split into a training set with 8,631 identities and a test set with 500 identities, making it suitable for benchmarking and large-scale model training. Alongside the dataset, the repository provides pre-trained models based on ResNet-50 and SE-ResNet-50 architectures, trained with both MS-Celeb-1M pretraining and fine-tuning on VGGFace2. These models achieve strong verification performance on benchmarks such as IJB-B and include variants with lower-dimensional embeddings for compact feature representation. The project also includes preprocessing tools and face detection scripts.
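
    Assuming the dataset is extracted into the usual one-directory-per-identity layout (train/<identity_id>/*.jpg), a minimal PyTorch loading sketch looks like this:

    # Load VGGFace2 as an identity-labeled image dataset (layout assumed).
    import torch
    from torchvision import datasets, transforms

    preprocess = transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
    ])

    train_set = datasets.ImageFolder("VGGFace2/train", transform=preprocess)
    loader = torch.utils.data.DataLoader(train_set, batch_size=64, shuffle=True)

    images, identity_ids = next(iter(loader))
    print(images.shape, len(train_set.classes))  # expect 8631 training identities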
    Downloads: 22 This Week
  • 10
    gImageReader

    A graphical frontend to tesseract-ocr

    gImageReader is a simple Gtk/Qt front-end to tesseract. Features include:
    - Import PDF documents and images from disk, scanning devices, clipboard and screenshots
    - Process multiple images and documents in one go
    - Manual or automatic recognition area definition
    - Recognize to plain text or to hOCR documents
    - Recognized text displayed directly next to the image
    - Post-process the recognized text, including spellchecking
    - Generate PDF documents from hOCR documents
    Note: This page is only a mirror for the downloads. Development happens on GitHub at https://github.com/manisandro/gImageReader; release binaries are also posted there.
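
    Since gImageReader drives tesseract underneath, the equivalent batch recognition can be scripted directly; a sketch assuming tesseract and its eng language data are installed and on PATH:

    # Plain-text and hOCR recognition with the tesseract CLI from Python.
    import subprocess

    subprocess.run(["tesseract", "page.png", "page", "-l", "eng", "txt"], check=True)   # writes page.txt
    subprocess.run(["tesseract", "page.png", "page", "-l", "eng", "hocr"], check=True)  # writes page.hocr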
    Downloads: 102 This Week
  • 11
    Open-LLM-VTuber

    Open source AI VTuber platform with voice chat and Live2D avatars

    Open-LLM-VTuber is an open source platform designed to create AI-powered VTuber characters that can interact with users through voice and animated avatars. It enables hands-free conversations with large language models by combining speech recognition, language processing, and text-to-speech synthesis into a single system. Users can speak directly to the AI character, and the system can respond with a generated voice while animating a Live2D avatar to simulate a talking virtual personality. Open-LLM-VTuber is modular, allowing developers to swap or configure different language models, speech recognition engines, and voice synthesis systems depending on their needs. It can run locally and supports both offline and online AI services, giving users flexibility in how models and resources are used. Open-LLM-VTuber was originally inspired by the goal of recreating an AI VTuber experience using open source tools that work across multiple operating systems.
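
    The modular design boils down to a swappable ASR -> LLM -> TTS loop; the sketch below illustrates that shape with hypothetical stand-ins, not the project's actual interfaces:

    # One conversational turn through a pluggable VTuber pipeline (illustrative).
    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class VTuberPipeline:
        transcribe: Callable[[bytes], str]   # speech recognition backend
        respond: Callable[[str], str]        # language model backend
        synthesize: Callable[[str], bytes]   # text-to-speech backend

        def turn(self, mic_audio: bytes) -> bytes:
            text = self.transcribe(mic_audio)
            reply = self.respond(text)
            return self.synthesize(reply)    # audio to play while the avatar animates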
    Downloads: 21 This Week
  • 12
    TurboQuant+

    Implementation of TurboQuant (ICLR 2026)

    TurboQuant Plus is an extended and enhanced version of quantization tooling aimed at improving neural network efficiency through advanced compression and optimization strategies. It builds upon the concept of reducing model precision to accelerate inference while attempting to maintain or recover accuracy through refined techniques. The project explores additional enhancements such as improved calibration, adaptive quantization, and potentially hybrid precision approaches that combine multiple levels of compression. It is designed to be used in conjunction with modern machine learning workflows, particularly those involving large models that require optimization for deployment. TurboQuant Plus focuses on experimentation and performance tuning, allowing developers to test different configurations and evaluate trade-offs. Its architecture supports extensibility, enabling further development of quantization methods and integration with existing ML pipelines.
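
    As a generic illustration of what quantization tooling does (not TurboQuant's actual algorithm), here is a toy int8 round-trip with a calibrated scale:

    # Toy post-training quantization: calibrate a scale, store weights as int8.
    import numpy as np

    def quantize_int8(x: np.ndarray) -> tuple[np.ndarray, float]:
        scale = float(np.abs(x).max()) / 127.0            # calibration from observed range
        q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
        return q, scale

    def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
        return q.astype(np.float32) * scale

    w = np.random.randn(256, 256).astype(np.float32)
    q, s = quantize_int8(w)
    print("max abs error:", np.abs(w - dequantize(q, s)).max())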
    Downloads: 21 This Week
  • 13
    CVPR 2026

    Collection of CVPR 2026 Papers and Open Source Projects

    CVPR2026-Papers-with-Code is a community-maintained repository that collects research papers and corresponding open-source implementations from the CVPR 2026 conference and related computer vision research. The repository acts as a continuously updated catalog of cutting-edge research across a wide range of topics including computer vision, multimodal AI, generative models, diffusion systems, autonomous driving, medical imaging, and remote sensing. Each entry typically links to the research paper as well as the public code repository associated with the work, allowing researchers and developers to quickly access reproducible implementations. The project serves as a centralized index that makes it easier for practitioners to explore the latest advances presented at major computer vision conferences. In addition to the current CVPR cycle, the repository also references related lists covering earlier conferences such as ECCV and ICCV, creating a broader archive of vision research.
    Downloads: 20 This Week
  • 14
    DeepSeek-V3.2-Exp

    An experimental version of DeepSeek model

    DeepSeek-V3.2-Exp is an experimental release of the DeepSeek model family, intended as a stepping stone toward the next generation architecture. The key innovation in this version is DeepSeek Sparse Attention (DSA), a sparse attention mechanism that aims to optimize training and inference efficiency in long-context settings without degrading output quality. According to the authors, they aligned the training setup of V3.2-Exp with V3.1-Terminus so that benchmark results remain largely comparable, even though the internal attention mechanism changes. In public evaluations across a variety of reasoning, code, and question-answering benchmarks (e.g. MMLU, LiveCodeBench, AIME, Codeforces, etc.), V3.2-Exp shows performance very close to or in some cases matching that of V3.1-Terminus. The repository includes tools and kernels to support the new sparse architecture—for instance, CUDA kernels, logit indexers, and open-source modules like FlashMLA and DeepGEMM are invoked for performance.
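
    The released kernels are highly optimized, but the idea behind sparse attention can be illustrated with a toy top-k variant in PyTorch (a generic sketch of the concept, not DSA itself):

    # Each query attends only to its k highest-scoring keys.
    import torch
    import torch.nn.functional as F

    def topk_sparse_attention(q, k, v, top_k=64):
        scores = q @ k.transpose(-2, -1) / q.shape[-1] ** 0.5
        top = scores.topk(min(top_k, scores.shape[-1]), dim=-1)
        mask = torch.full_like(scores, float("-inf"))
        mask.scatter_(-1, top.indices, top.values)        # keep only top-k scores
        return F.softmax(mask, dim=-1) @ v

    q = torch.randn(1, 128, 64); k = torch.randn(1, 4096, 64); v = torch.randn(1, 4096, 64)
    print(topk_sparse_attention(q, k, v).shape)           # torch.Size([1, 128, 64])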
    Downloads: 20 This Week
  • 15
    MLC LLM

    Universal LLM Deployment Engine with ML Compilation

    MLC LLM is a machine learning compiler and deployment framework designed to enable efficient execution of large language models across a wide range of hardware platforms. The project focuses on compiling models into optimized runtimes that can run natively on devices such as GPUs, mobile processors, browsers, and edge hardware. By leveraging machine learning compilation techniques, MLC LLM produces high-performance inference engines that maintain consistent APIs across platforms. The system supports deployment on environments including Linux, macOS, Windows, iOS, Android, and web browsers while utilizing different acceleration technologies such as CUDA, Vulkan, Metal, and WebGPU. It also provides OpenAI-compatible APIs that allow developers to integrate locally deployed models into existing AI applications without major code changes.
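
    Because the served API is OpenAI-compatible, any OpenAI client can talk to a locally compiled model; the port and model id below are assumptions matching a typical local server:

    # Query a local MLC LLM server through the OpenAI client.
    from openai import OpenAI

    client = OpenAI(base_url="http://127.0.0.1:8000/v1", api_key="not-needed")

    resp = client.chat.completions.create(
        model="Llama-3-8B-Instruct-q4f16_1-MLC",  # whichever model the server loaded
        messages=[{"role": "user", "content": "Summarize ML compilation in one sentence."}],
    )
    print(resp.choices[0].message.content)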
    Downloads: 20 This Week
  • 16
    System Prompts and Models of AI Tools

    Full System Prompts, Internal Tools & AI Models

    System Prompts and Models of AI Tools is a large open-source repository that collects and documents system prompts, internal tools, and model configurations used by popular AI platforms. It aggregates prompts from tools like Claude, Cursor, Devin AI, Perplexity, and many others to provide insight into how modern AI agents are structured and guided. The repository serves as a valuable resource for developers, researchers, and AI enthusiasts interested in understanding prompt engineering and agent behavior. By exposing these system-level instructions, it highlights how AI tools are designed to reason, act, and interact with users. It also emphasizes transparency and security awareness, especially around prompt leaks and vulnerabilities. Overall, it acts as a comprehensive knowledge base for studying and experimenting with real-world AI system prompts.
    Downloads: 20 This Week
  • 17
    DINOv3

    Reference PyTorch implementation and models for DINOv3

    DINOv3 is the third-generation iteration of Meta’s self-supervised visual representation learning framework, building upon the ideas from DINO and DINOv2. It continues the paradigm of learning strong image representations without labels using teacher–student distillation, but introduces a simplified and more scalable training recipe that performs well across datasets and architectures. DINOv3 removes the need for complex augmentations or momentum encoders, streamlining the pipeline while maintaining or improving feature quality. The model supports multiple backbone architectures, including Vision Transformers (ViT), and can handle larger image resolutions with improved stability during training. The learned embeddings generalize robustly across tasks like classification, retrieval, and segmentation without fine-tuning, showing state-of-the-art transfer performance among self-supervised models.
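
    The distillation objective at the heart of the DINO family can be sketched in a few lines (a schematic, not DINOv3's training code): the student is trained to match a sharpened teacher distribution computed on another view of the same image.

    # Teacher-student self-distillation loss, schematically.
    import torch

    def distill_loss(student_logits, teacher_logits, temp_s=0.1, temp_t=0.04):
        targets = torch.softmax(teacher_logits / temp_t, dim=-1).detach()
        log_probs = torch.log_softmax(student_logits / temp_s, dim=-1)
        return -(targets * log_probs).sum(dim=-1).mean()

    s = torch.randn(32, 4096)  # student outputs for one view
    t = torch.randn(32, 4096)  # teacher outputs for another view
    print(distill_loss(s, t))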
    Downloads: 19 This Week
  • 18
    HunyuanWorld-Voyager

    RGBD video generation model conditioned on camera input

    HunyuanWorld-Voyager is a next-generation video diffusion framework developed by Tencent-Hunyuan for generating world-consistent 3D scene videos from a single input image. By leveraging user-defined camera paths, it enables immersive scene exploration and supports controllable video synthesis with high realism. The system jointly produces aligned RGB and depth video sequences, making it directly applicable to 3D reconstruction tasks. At its core, Voyager integrates a world-consistent video diffusion model with an efficient long-range world exploration engine powered by auto-regressive inference. To support training, the team built a scalable data engine that automatically curates large video datasets with camera pose estimation and metric depth prediction. As a result, Voyager delivers state-of-the-art performance on world exploration benchmarks while maintaining photometric, style, and 3D consistency.
    Downloads: 19 This Week
  • 19
    Paperclip

    Open-source orchestration for zero-human companies

    Paperclip is an open-source tool designed to help AI systems and developer tools access academic research papers through a standardized interface. The project implements a server based on the Model Context Protocol (MCP), a framework that allows large language models and AI agents to connect to external data sources and tools in a consistent way. By acting as a middleware layer, Paperclip aggregates multiple academic databases and exposes them through a single interface, allowing AI applications to search and retrieve scholarly papers without needing to integrate with each provider individually. The system supports repositories such as arXiv, OpenAlex, and the Open Science Framework, giving AI agents access to a large body of research literature. Instead of requiring separate APIs and authentication flows for each service, Paperclip provides unified search and retrieval capabilities that simplify integration into AI workflows.
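
    Connecting to an MCP server like this one follows the standard client pattern from the reference Python SDK; the launch command below is a hypothetical placeholder, so check the project's README for the real invocation and tool names:

    # List the tools an MCP server exposes over stdio.
    import asyncio
    from mcp import ClientSession, StdioServerParameters
    from mcp.client.stdio import stdio_client

    async def main() -> None:
        params = StdioServerParameters(command="paperclip")  # hypothetical command
        async with stdio_client(params) as (read, write):
            async with ClientSession(read, write) as session:
                await session.initialize()
                tools = await session.list_tools()
                print("exposed tools:", [t.name for t in tools.tools])

    asyncio.run(main())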
    Downloads: 19 This Week
  • 20
    SoniTranslate

    Synchronized Translation for Videos

    SoniTranslate is a video translation and dubbing system that produces synchronized target-language audio tracks for existing video content. It provides a web UI built with Gradio, allowing users to upload a video, choose source and target languages, and then run a pipeline that handles transcription, translation and re-synthesis of speech. Under the hood, it uses advanced speech and diarization models to separate speakers, align audio with timecodes and respect subtitle timing, which lets the generated dub track stay in sync with the original video structure. The project supports a wide range of languages for translation, spanning major world languages (English, Spanish, French, German, Chinese, Arabic, etc.) and many regional or less widely spoken languages, making it suitable for broad internationalization. It offers multiple usage modes, including a Colab notebook for cloud-based experimentation, a Hugging Face Space demo for quick trials, and instructions for local installation.
    Downloads: 19 This Week
  • 21
    dlib C++ Library
    Dlib is a C++ toolkit containing machine learning algorithms and tools for creating complex software in C++ to solve real world problems.
    Downloads: 82 This Week
  • 22
    HY-World 1.5

    A Systematic Framework for Interactive World Modeling

    HY-WorldPlay is a Hunyuan AI project focusing on immersive multimodal content generation and interaction within virtual worlds or simulated environments. It aims to empower AI agents with the capability to both understand and generate multimedia content — including text, audio, image, and potentially 3D or game-world elements — enabling lifelike dialogue, environmental interpretations, and responsive world behavior. The platform targets use cases in digital entertainment, game worlds, training simulators, and interactive storytelling, where AI agents need to adapt to real-time user inputs and changes in environment state. It blends advanced reasoning with multimodal synthesis, enabling agents to describe scenes, generate context-appropriate responses, and contribute to narrative or gameplay flows. The underlying framework typically supports large-context state tracking across extended interactions, blending temporal and spatial multimodal signals.
    Downloads: 18 This Week
  • 23
    Hunyuan3D-2.1

    From Images to High-Fidelity 3D Assets

    Hunyuan3D-2.1 is Tencent Hunyuan’s advanced 3D asset generation system that produces high-fidelity 3D models with Physically Based Rendering (PBR) textures. It is fully open-source, with released model weights and training and inference code. It improves on prior versions by using a PBR texture pipeline, enabling realistic material effects such as reflections and subsurface scattering, and by allowing community fine-tuning and extension. The system includes both a shape generation module (mesh geometry) and a texture synthesis module, and offers cross-platform support (macOS, Windows, Linux) via Python/PyTorch, including diffusers-style APIs.
    Downloads: 18 This Week
  • 24
    Qwen3.6

    Qwen3.6 is the large language model series developed by Qwen team

    The Qwen3.6 project is an open-source large language model series developed by Alibaba’s Qwen team, designed to deliver high-performance AI capabilities with a strong emphasis on real-world usability and developer productivity. It builds upon the advancements introduced in Qwen3.5, focusing on improving stability, responsiveness, and practical application in coding and agent-based workflows. The repository serves as a central hub for documentation, community discussion, and access to the latest model releases, rather than a standalone application. One of its defining goals is to enhance “agentic coding,” enabling the model to reason across entire codebases, handle multi-step development tasks, and assist with complex software engineering workflows. The architecture incorporates modern techniques such as mixture-of-experts and hybrid attention mechanisms, allowing it to scale efficiently while maintaining strong performance.
    Downloads: 18 This Week
  • 25
    WhisperJAV

    Uses Qwen3-ASR, local LLM, Whisper, TEN-VAD

    WhisperJAV is an open-source speech transcription pipeline designed specifically for generating subtitles for Japanese adult video content. The project addresses challenges that standard speech recognition models face when transcribing this type of audio, which often includes low signal-to-noise ratios and large numbers of non-verbal vocalizations. Traditional automatic speech recognition systems can misinterpret these sounds as words, leading to inaccurate transcripts. WhisperJAV introduces a specialized pipeline that separates text generation from timestamp alignment, allowing the system to generate transcripts and then align them with audio using forced alignment techniques. The framework supports several speech recognition models, including Qwen-based ASR systems and fine-tuned Whisper models trained on domain-specific dialogue.
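
    The timestamp side of that split rests on word-level timing; openai-whisper exposes the basic building block directly (a generic illustration, not WhisperJAV's own pipeline):

    # Word-level timestamps from openai-whisper.
    import whisper

    model = whisper.load_model("small")
    result = model.transcribe("scene.mp3", language="ja", word_timestamps=True)

    for segment in result["segments"]:
        for word in segment.get("words", []):
            print(f'{word["start"]:7.2f} {word["end"]:7.2f} {word["word"]}')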
    Downloads: 18 This Week