Showing 506 open source projects for "input"

  • 1
    OpenLIT

    OpenLIT is an open-source LLM Observability tool

    OpenLIT is an OpenTelemetry-native tool designed to help developers gain insight into the performance of their LLM applications in production. It automatically collects LLM input and output metadata and monitors GPU performance for self-hosted LLMs. OpenLIT makes integrating observability into GenAI projects effortless with just a single line of code. Whether you're working with popular LLM providers such as OpenAI and HuggingFace, or leveraging vector databases like ChromaDB, OpenLIT ensures your applications are monitored seamlessly, providing the insights needed to improve performance and reliability. ... A minimal integration sketch appears after this list.
    Downloads: 0 This Week
  • 2
    SageMaker TensorFlow Training Toolkit

    Toolkit for running TensorFlow training scripts on SageMaker

    ...A SageMaker Model contains references to a model.tar.gz file in S3 containing serialized model data, and a Docker image used to serve predictions with that model. A Batch Transform job runs offline inference using your TensorFlow Serving model. Input data in S3 is converted to HTTP requests, and responses are saved to an output bucket in S3. A batch transform sketch appears after this list.
    Downloads: 0 This Week
  • 3
    Open-AutoGLM

    An open phone agent model & framework

    Open-AutoGLM is an open-source framework and model designed to empower autonomous mobile intelligent assistants by enabling AI agents to understand and interact with phone screens in a multimodal manner, blending vision and language capabilities to control real devices. It aims to create an “AI phone agent” that can perceive on-screen content, reason about user goals, and execute sequences of taps, swipes, and text input via automated device-control interfaces like ADB, enabling hands-off completion of multi-step tasks such as navigating apps, filling forms, and more. Unlike traditional automation scripts that depend on brittle heuristics, Open-AutoGLM uses pretrained large language and vision-language models to interpret visual context and natural language instructions, giving the agent robust adaptability across apps and interfaces.
    Downloads: 1 This Week
  • 4
    Synthetic Data Vault (SDV)

    Synthetic Data Generation for tabular, relational and time series data

    The Synthetic Data Vault (SDV) is a synthetic data generation ecosystem of libraries that lets users learn single-table, multi-table and time series datasets and then generate new synthetic data with the same format and statistical properties as the original dataset. Synthetic data can then be used to supplement, augment and in some cases replace real data when training Machine Learning models. Additionally, it enables the testing of Machine Learning or other data dependent... A single-table modeling sketch appears after this list.
    Downloads: 1 This Week
  • 5
    Step1X-3D

    High-Fidelity and Controllable Generation of Textured 3D Assets

    ...It combines a hybrid architecture: a geometry generation stage using a VAE-DiT model to output a watertight 3D representation (e.g. TSDF surface), and a texture synthesis stage that conditions on geometry and optionally reference input (or prompts) to produce view-consistent textures using a diffusion-based texture module. The result is fully 3D assets — meshes + textures — which can be rendered from any viewpoint, textured consistently, and used in 3D applications. To achieve this, the project includes a massive curated dataset: among more than 5 million candidate 3D assets, it filters and standardizes to produce a high-quality 2 million–asset subset suitable for training.
    Downloads: 2 This Week
  • 6
    SDGym

    Benchmarking synthetic data generation methods

    ...You can use any of its synthesizers, datasets or metrics for benchmarking. You can also customize the process to include your own work. Select any of the publicly available datasets from the SDV project, or input your own data. Choose from any of the SDV synthesizers and baselines, or write your own custom machine learning model. In addition to performance and memory usage, you can also measure synthetic data quality and privacy through a variety of metrics. Install SDGym using pip or conda; we recommend using a virtual environment to avoid conflicts with other software on your device. A benchmark sketch appears after this list.
    Downloads: 2 This Week
  • 7
    Uncertainty Baselines

    High-quality implementations of standard and SOTA methods

    Uncertainty Baselines is a collection of strong, well-documented training pipelines that make it straightforward to evaluate predictive uncertainty in modern machine learning models. Rather than offering toy scripts, it provides end-to-end recipes—data input, model architectures, training loops, evaluation metrics, and logging—so results are comparable across runs and research groups. The library spans canonical modalities and tasks, from image classification and NLP to tabular problems, with baselines that cover both deterministic and probabilistic approaches. Techniques include deep ensembles, Monte Carlo dropout, temperature scaling, stochastic variational inference, heteroscedastic heads, and out-of-distribution detection workflows. ...
    Downloads: 0 This Week
  • 8
    HunyuanCustom

    Multimodal-Driven Architecture for Customized Video Generation

    HunyuanCustom is a multimodal video customization framework by Tencent Hunyuan, aimed at generating customized videos featuring particular subjects (people, characters) under flexible conditions, while maintaining subject/identity consistency. It supports conditioning via image, audio, video, and text, and can perform subject replacement in videos, generate avatars speaking given audio, or combine multiple subject images. The architecture builds on HunyuanVideo, with added modules for...
    Downloads: 0 This Week
  • 9
    segment-geospatial

    A Python package for segmenting geospatial data with the SAM

    The segment-geospatial package draws its inspiration from the segment-anything-eo repository authored by Aliaksandr Hancharenka. To facilitate the use of the Segment Anything Model (SAM) for geospatial data, I have developed the segment-anything-py and segment-geospatial Python packages, which are now available on PyPI and conda-forge. My primary objective is to simplify the process of leveraging SAM for geospatial data analysis by enabling users to achieve this with minimal coding effort. I... A mask-generation sketch appears after this list.
    Downloads: 0 This Week
  • 10
    Changelog CI

    Changelog CI is a GitHub Action that enables a project

    Changelog CI is a GitHub Action that enables a project to automatically generate changelogs. Changelog CI can be triggered on pull_request, workflow_dispatch, and any other events that can provide the required inputs. Changelog CI uses Python and the GitHub API to generate a changelog for a repository. First, it tries to get the latest release from the repository (if available). Then, it checks all the pull requests/commits merged after the last release using the GitHub API. After that, it...
    Downloads: 0 This Week
  • 11
    Trafilatura

    Python & command-line tool to gather text on the Web

    Trafilatura is a Python package and command-line tool designed to gather text on the Web. It includes discovery, extraction and text-processing components. Its main applications are web crawling, downloads, scraping, and extraction of main texts, metadata and comments. It aims to stay handy and modular: no database is required, and the output can be converted to various commonly used formats. Going from raw HTML to essential parts can alleviate many problems related to text quality, first by... A short extraction sketch appears after this list.
    Downloads: 0 This Week
  • 12
    pybaselines

    Library of algorithms for baseline correction of experimental data

    pybaselines is a Python library that provides many different algorithms for performing baseline correction on data from experimental techniques such as Raman, FTIR, NMR, XRD, XRF, PIXE, etc. The aim of the project is to provide a semi-unified API that allows quickly testing and comparing multiple baseline correction algorithms to find the best one for a set of data. pybaselines has 50+ baseline correction algorithms. These include popular algorithms, such as AsLS, airPLS, ModPoly, and SNIP, as... A baseline-correction sketch appears after this list.
    Downloads: 0 This Week
  • 13
    Kaleidoscope-SDK

    User toolkit for analyzing and interfacing with Large Language Models

    kaleidoscope-sdk is a Python module used to interact with large language models hosted via the Kaleidoscope service available at: https://github.com/VectorInstitute/kaleidoscope. It provides a simple interface to launch LLMs on an HPC cluster, ask them to perform basic tasks like text generation, and also retrieve intermediate information from inside the model, such as log probabilities and activations. Users must authenticate using their Vector Institute cluster credentials. This can...
    Downloads: 0 This Week
  • 14
    Qwen2-Audio

    Repo of Qwen2-Audio chat & pretrained large audio language model

    ...It is trained to accept various audio inputs (including speech, sounds, etc.) and perform both voice chat and audio analysis, producing textual responses. It supports two major modes: Voice Chat (interactive, voice-only input) and Audio Analysis (audio plus text instructions), with both base and instruction-tuned models. It is evaluated on many benchmarks (speech recognition, translation, sound classification, emotion, etc.), and pretrained models (e.g. 7B) are released via ModelScope and Hugging Face. Code and examples are provided for Hugging Face transformers, with usage via AutoProcessor and the model classes. ... An audio-analysis sketch appears after this list.
    Downloads: 1 This Week
  • 15
    AIMET

    AIMET is a library that provides advanced quantization and compression

    Qualcomm Innovation Center (QuIC) is at the forefront of enabling low-power inference at the edge through its pioneering model-efficiency research. QuIC has a mission to help migrate the ecosystem toward fixed-point inference. With this goal, QuIC presents the AI Model Efficiency Toolkit (AIMET) - a library that provides advanced quantization and compression techniques for trained neural network models. AIMET enables neural networks to run more efficiently on fixed-point AI hardware... A quantization-simulation sketch appears after this list.
    Downloads: 1 This Week
  • 16
    ML Sharp

    Sharp Monocular View Synthesis in Less Than a Second

    ML Sharp is a research code release that turns a single 2D photograph into a photorealistic 3D representation that can be rendered from nearby viewpoints. Instead of requiring multi-view input, it predicts the parameters of a 3D Gaussian scene representation directly from one image using a single forward pass through a neural network. The core idea is speed: the 3D representation is produced in under a second on a standard GPU, and then the resulting scene can be rendered in real time to generate new views interactively. ...
    Downloads: 0 This Week
  • 17
    EKS Best Practices

    A best practices guide for day 2 operations

    The Amazon EKS Best Practices Guide is a public repository containing comprehensive documentation and guidance for operating production-grade Kubernetes clusters on AWS’s managed service, Amazon EKS. Rather than a code library, it serves as a reference catalogue of patterns, anti-patterns, checklists and architectures across domains such as security, reliability, scalability, networking, cost optimization and hybrid cloud deployments. The repository is maintained by AWS but open to...
    Downloads: 0 This Week
  • 18
    MuJoCo Playground

    An open source library for GPU-accelerated robot learning

    ...The project includes classic control benchmarks from dm_control, advanced quadruped and bipedal locomotion systems, and dexterous as well as non-prehensile manipulation setups. It also offers optional vision-based training capabilities through integration with Madrona-MJX, allowing researchers to train policies directly from image input on GPUs. MuJoCo Playground supports both the MJX JAX implementation and the Warp physics engine, enabling flexible use across research pipelines. The environments are designed for fast training, compatibility with reinforcement learning libraries, and real-time trajectory visualization using rscope.
    Downloads: 0 This Week
  • 19
    4M

    4M: Massively Multimodal Masked Modeling

    ...The repository releases code and models for multiple variants (e.g., 4M-7 and 4M-21), emphasizing transfer to unseen tasks and modalities. Training/inference configs and issues discuss things like depth tokenizers, input masks for generation, and CUDA build questions, signaling active research iteration. The design leans into flexibility and steerability, so prompts and masks can shape behavior without bespoke heads per task. In short, 4M provides a unified recipe to pretrain large multimodal models that generalize broadly while remaining practical to fine-tune.
    Downloads: 0 This Week
  • 20
    CodeLlama

    Inference code for CodeLlama models

    Code Llama is a family of Llama-based code models optimized for programming tasks such as code generation, completion, and repair, with variants specialized for base coding, Python, and instruction following. The repo documents the sizes and capabilities (e.g., 7B, 13B, 34B) and highlights features like infilling and large input context to support real IDE workflows. It targets both general software synthesis and language-specific productivity, offering strong performance among open models at release time. Typical usage includes prompt-driven generation, function or class completion, and zero-shot adherence to natural-language instructions about code changes. ... A completion sketch appears after this list.
    Downloads: 0 This Week
  • 21
    Ring

    Ring is a reasoning MoE LLM provided and open-sourced by InclusionAI

    Ring is a reasoning Mixture-of-Experts (MoE) large language model (LLM) developed by inclusionAI. It is derived from Ling. Its design emphasizes reasoning, efficiency, and modular expert activation. In its “flash” variant (Ring-flash-2.0), it optimizes inference by activating only a subset of experts. It applies reinforcement learning and reasoning-optimization techniques. Its architecture and training approach are tuned to enable efficient and capable reasoning performance....
    Downloads: 0 This Week
  • 22
    Tarsier

    Vision utilities for web interaction agents

    ...In doing this, we provide a mapping between elements and IDs for an LLM to take actions upon (e.g. CLICK [23]). We define interactable elements as buttons, links, or input fields that are visible on the page; Tarsier can also tag all textual elements if you pass tag_text_elements=True. Furthermore, we've developed an OCR algorithm to convert a page screenshot into a whitespace-structured string (almost like ASCII art) that even an LLM without vision can understand. Since current vision-language models still lack the fine-grained representations needed for web interaction tasks, this is critical. A page-to-text sketch appears after this list.
    Downloads: 0 This Week
  • 23
    GLM-4.6V

    GLM-4.6V/4.5V/4.1V-Thinking, towards versatile multimodal reasoning

    GLM-4.6V represents the latest generation of the GLM-V family and marks a major step forward in multimodal AI by combining advanced vision-language understanding with native “tool-call” capabilities, long-context reasoning, and strong generalization across domains. Unlike many vision-language models that treat images and text separately or require intermediate conversions, GLM-4.6V accepts inputs such as images, screenshots or document pages directly as part of its reasoning pipeline — and...
    Downloads: 1 This Week
  • 24
    OpenAI-Compatible Edge-TTS API

    Free, high-quality text-to-speech API endpoint to replace OpenAI

    ...The project emulates the /v1/audio/speech endpoint used by OpenAI, so any client that can talk to the OpenAI TTS API can be redirected to this service with minimal changes. It exposes parameters for input text, voice selection, audio format, and playback speed, mirroring the OpenAI interface while mapping popular OpenAI voice names to equivalent Edge voices. Because it relies on Edge’s TTS, the audio generation itself is free, and the project essentially acts as a smart proxy that handles formatting and streaming. The server supports Server-Sent Events (SSE) for streaming audio, enabling low-latency playback in chat UIs and other interactive tools. ... A plain-HTTP request sketch appears after this list.
    Downloads: 1 This Week
  • 25
    GLM-TTS

    Controllable & emotion-expressive zero-shot TTS

    ...The system introduces a multi-reward reinforcement learning framework that jointly optimizes for voice similarity, emotional expressiveness, pronunciation, and intelligibility, yielding output that can rival commercial options in naturalness and expressiveness. GLM-TTS also supports phoneme-level control and hybrid text + phoneme input, giving developers precise control over pronunciation, which is critical for multilingual or polyphone-rich languages.
    Downloads: 0 This Week
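
The OpenLIT entry (1) above claims one-line instrumentation. Below is a minimal sketch of what that typically looks like with the openlit Python package; the OTLP endpoint and the instrumented OpenAI call are placeholders, and exact init parameters may differ between versions.

```python
# Hypothetical minimal OpenLIT instrumentation sketch (entry 1).
# The OTLP endpoint below is a placeholder; point it at your own collector.
import openlit
from openai import OpenAI

# One line turns on tracing/metrics for supported LLM and vector-DB clients.
openlit.init(otlp_endpoint="http://127.0.0.1:4318")

client = OpenAI()
# This call is now traced automatically: prompt/response metadata and
# token usage are exported via OpenTelemetry.
resp = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Say hello"}],
)
print(resp.choices[0].message.content)
```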
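
For the SageMaker TensorFlow Training Toolkit entry (2), here is a hedged sketch of the Batch Transform flow it describes, using the separate SageMaker Python SDK; the S3 paths, IAM role, and framework version are placeholders.

```python
# Hypothetical SageMaker Batch Transform sketch (entry 2).
# Bucket names, the IAM role, and the framework version are placeholders.
from sagemaker.tensorflow import TensorFlowModel

model = TensorFlowModel(
    model_data="s3://my-bucket/model/model.tar.gz",   # serialized model artifact
    role="arn:aws:iam::123456789012:role/SageMakerRole",
    framework_version="2.14",
)

# Batch Transform: S3 input records become HTTP requests to TF Serving,
# and the responses are written back to the output bucket.
transformer = model.transformer(
    instance_count=1,
    instance_type="ml.m5.xlarge",
    output_path="s3://my-bucket/predictions/",
)
transformer.transform("s3://my-bucket/input/", content_type="text/csv", split_type="Line")
transformer.wait()
```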
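
For the Synthetic Data Vault entry (4), a minimal single-table sketch, assuming the SDV 1.x API (SingleTableMetadata plus GaussianCopulaSynthesizer); the toy DataFrame is illustrative only.

```python
# Minimal single-table SDV sketch (entry 4); assumes the SDV 1.x API.
import pandas as pd
from sdv.metadata import SingleTableMetadata
from sdv.single_table import GaussianCopulaSynthesizer

real_data = pd.DataFrame({
    "age": [23, 45, 31, 52],
    "income": [38_000, 72_000, 54_000, 91_000],
})

# Describe the table so the synthesizer knows the column types.
metadata = SingleTableMetadata()
metadata.detect_from_dataframe(real_data)

# Learn the distribution, then sample new rows with the same format.
synthesizer = GaussianCopulaSynthesizer(metadata)
synthesizer.fit(real_data)
synthetic_data = synthesizer.sample(num_rows=100)
print(synthetic_data.head())
```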
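
For the SDGym entry (6), a sketch of a single-table benchmark run; the function and argument names follow recent SDGym releases and the dataset name is a public SDV demo dataset, so treat them as assumptions.

```python
# Hypothetical SDGym benchmark sketch (entry 6).
# Function/argument names follow recent SDGym releases and may vary by version.
from sdgym import benchmark_single_table

results = benchmark_single_table(
    synthesizers=["GaussianCopulaSynthesizer", "CTGANSynthesizer"],
    sdv_datasets=["fake_hotel_guests"],   # any publicly available SDV demo dataset
)
# The result is a table of quality/performance scores per synthesizer and dataset.
print(results)
```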
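
For the segment-geospatial entry (9), a sketch of automatic mask generation with the samgeo package; the raster file names are placeholders and the constructor arguments may differ by version.

```python
# Hypothetical segment-geospatial sketch (entry 9); file names are placeholders.
from samgeo import SamGeo

# Downloads the SAM checkpoint on first use if it is not already cached.
sam = SamGeo(model_type="vit_h")

# Run automatic mask generation on a GeoTIFF and write the mask raster.
sam.generate("aerial_image.tif", output="segmentation_mask.tif")
```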
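
For the Trafilatura entry (11), a short sketch of the Python API for downloading a page and extracting its main text; the URL is a placeholder.

```python
# Trafilatura extraction sketch (entry 11); the URL is a placeholder.
import trafilatura

downloaded = trafilatura.fetch_url("https://example.com/article")
if downloaded is not None:
    # Main text only; metadata and comments can be requested via options.
    text = trafilatura.extract(downloaded, include_comments=False)
    print(text)
```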
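
For the pybaselines entry (12), a sketch of AsLS baseline correction through the Baseline object; the synthetic signal and parameter values are illustrative only.

```python
# pybaselines AsLS sketch (entry 12); data and parameter values are illustrative.
import numpy as np
from pybaselines import Baseline

x = np.linspace(0, 1000, 1000)
signal = 100 * np.exp(-((x - 500) ** 2) / 500)            # a single peak
background = 0.05 * x + 20                                 # sloped baseline
y = signal + background + np.random.normal(0, 1, x.size)  # noisy measurement

fitter = Baseline(x_data=x)
baseline, params = fitter.asls(y, lam=1e7, p=0.02)         # AsLS baseline estimate
corrected = y - baseline                                   # baseline-corrected signal
```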
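
For the Qwen2-Audio entry (14), a trimmed audio-analysis sketch via Hugging Face transformers; the chat-template flow follows the project README, the audio path is a placeholder, and argument names may vary across transformers versions.

```python
# Hypothetical Qwen2-Audio analysis sketch (entry 14); the audio path is a placeholder.
import librosa
from transformers import AutoProcessor, Qwen2AudioForConditionalGeneration

model_id = "Qwen/Qwen2-Audio-7B-Instruct"
processor = AutoProcessor.from_pretrained(model_id)
model = Qwen2AudioForConditionalGeneration.from_pretrained(model_id, device_map="auto")

conversation = [
    {"role": "user", "content": [
        {"type": "audio", "audio_url": "sample.wav"},
        {"type": "text", "text": "What can you hear in this clip?"},
    ]},
]
prompt = processor.apply_chat_template(conversation, add_generation_prompt=True, tokenize=False)
audio, _ = librosa.load("sample.wav", sr=processor.feature_extractor.sampling_rate)

inputs = processor(text=prompt, audios=[audio], return_tensors="pt", padding=True)
output_ids = model.generate(**inputs, max_new_tokens=256)
output_ids = output_ids[:, inputs.input_ids.shape[1]:]   # drop the prompt tokens
print(processor.batch_decode(output_ids, skip_special_tokens=True)[0])
```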
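
For the AIMET entry (15), a sketch of quantization simulation on a PyTorch model; class and method names follow aimet_torch's long-standing API, but versions differ and the calibration callback is a stand-in.

```python
# Hypothetical AIMET quantization-simulation sketch (entry 15).
# Names follow aimet_torch's classic API; the calibration data is a stand-in.
import torch
from torchvision.models import resnet18
from aimet_torch.quantsim import QuantizationSimModel

model = resnet18(weights=None).eval()
dummy_input = torch.randn(1, 3, 224, 224)

sim = QuantizationSimModel(model, dummy_input=dummy_input,
                           default_param_bw=8, default_output_bw=8)

def calibrate(quant_model, _):
    # Run a few representative batches so activation ranges can be estimated.
    with torch.no_grad():
        quant_model(torch.randn(8, 3, 224, 224))

sim.compute_encodings(forward_pass_callback=calibrate, forward_pass_callback_args=None)
# sim.model can now be evaluated to estimate accuracy under fixed-point inference,
# and sim.export(...) writes the model plus encodings for deployment.
```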
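
For the CodeLlama entry (20), a completion sketch modeled on the repository's example scripts; the checkpoint and tokenizer paths are placeholders and the script assumes it is launched with torchrun.

```python
# Hypothetical Code Llama completion sketch (entry 20), following the repo's
# example scripts; checkpoint and tokenizer paths are placeholders.
from llama import Llama

generator = Llama.build(
    ckpt_dir="CodeLlama-7b/",                       # placeholder checkpoint directory
    tokenizer_path="CodeLlama-7b/tokenizer.model",  # placeholder tokenizer path
    max_seq_len=512,
    max_batch_size=2,
)

prompts = [
    "def fibonacci(n):",
    "# A function that reverses a linked list in Python\n",
]
results = generator.text_completion(prompts, max_gen_len=128, temperature=0.2, top_p=0.95)
for prompt, result in zip(prompts, results):
    print(prompt + result["generation"])
```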
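
For the Tarsier entry (22), a page-to-text sketch with Playwright; the class names follow the project README, while the OCR credentials file and URL are placeholders.

```python
# Hypothetical Tarsier sketch (entry 22); credentials file and URL are placeholders.
import asyncio
import json
from playwright.async_api import async_playwright
from tarsier import Tarsier, GoogleVisionOCRService

async def main():
    with open("google_vision_credentials.json") as f:
        credentials = json.load(f)
    tarsier = Tarsier(GoogleVisionOCRService(credentials))

    async with async_playwright() as p:
        browser = await p.chromium.launch()
        page = await browser.new_page()
        await page.goto("https://example.com")

        # page_text is a whitespace-structured string an LLM can read;
        # tag_to_xpath maps tag IDs (e.g. [23]) back to page elements.
        page_text, tag_to_xpath = await tarsier.page_to_text(page, tag_text_elements=True)
        print(page_text[:500])

asyncio.run(main())
```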
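
For the OpenAI-Compatible Edge-TTS API entry (24), a plain-HTTP sketch of the /v1/audio/speech request shape; the host, port, and API key are assumptions about a local deployment.

```python
# Sketch of calling an OpenAI-compatible /v1/audio/speech endpoint (entry 24).
# The base URL, port, and API key are assumptions about a local deployment.
import requests

response = requests.post(
    "http://localhost:5050/v1/audio/speech",
    headers={"Authorization": "Bearer your_api_key_here"},
    json={
        "model": "tts-1",          # accepted for compatibility; handled by the proxy
        "voice": "alloy",          # mapped to an equivalent Edge voice
        "input": "Hello from the Edge-TTS proxy.",
        "response_format": "mp3",
        "speed": 1.0,
    },
    timeout=60,
)
response.raise_for_status()
with open("speech.mp3", "wb") as f:
    f.write(response.content)
```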