Showing 156 open source projects for "ai code agent"

  • 1
    Image GPT

    Large-scale autoregressive pixel model for image generation by OpenAI

    Image-GPT is the official research code and models from OpenAI’s paper Generative Pretraining from Pixels. The project adapts GPT-2 to the image domain, showing that the same transformer architecture can model sequences of pixels without altering its fundamental structure. It provides scripts to download pretrained checkpoints of different model sizes (small, medium, large) trained on large-scale datasets and includes utilities for handling color quantization with a 9-bit palette....
    Downloads: 6 This Week
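The 9-bit palette mentioned above works by clustering RGB values into 512 centroids and replacing each pixel with the index of its nearest centroid, so an image becomes a short sequence of discrete tokens a transformer can model. A minimal nearest-centroid sketch in plain Python (the tiny 4-entry palette is illustrative, not the released 512-color one):

```python
# Nearest-centroid color quantization: map each (r, g, b) pixel to the index
# of its closest palette entry, as in Image-GPT's 9-bit (512-color) palette.
# The 4-entry palette below is illustrative, not the released one.

def quantize(pixels, palette):
    """Return, for each pixel, the index of the nearest palette color."""
    def sq_dist(p, c):
        return sum((a - b) ** 2 for a, b in zip(p, c))
    return [min(range(len(palette)), key=lambda i: sq_dist(p, palette[i]))
            for p in pixels]

palette = [(0, 0, 0), (255, 0, 0), (0, 255, 0), (255, 255, 255)]
pixels = [(250, 10, 5), (3, 240, 12), (200, 200, 210)]
indices = quantize(pixels, palette)   # each pixel snaps to red, green, white
```

The resulting index sequence is what the autoregressive model is trained on, exactly as a language model is trained on token IDs.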
  • 2
    MUSE

    A library for Multilingual Unsupervised or Supervised word Embeddings

    MUSE is a framework for learning multilingual word embeddings that live in a shared space, enabling bilingual lexicon induction, cross-lingual retrieval, and zero-shot transfer. It supports both supervised alignment with seed dictionaries and unsupervised alignment that starts without parallel data by using adversarial initialization followed by Procrustes refinement. The code can align pre-trained monolingual embeddings (such as fastText) across dozens of languages and provides standardized...
    Downloads: 0 This Week
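The Procrustes refinement mentioned above solves a classic problem: find the orthogonal map that best sends source embeddings onto their aligned targets. In hundreds of dimensions MUSE does this with an SVD; a closed-form 2-D, rotation-only sketch conveys the idea with nothing but the standard library (the toy vectors are illustrative):

```python
import math

def best_rotation(xs, ys):
    """Closed-form 2-D Procrustes: the rotation angle minimizing
    sum ||R(theta) x_i - y_i||^2 is atan2(sum of crosses, sum of dots)."""
    num = sum(x[0] * y[1] - x[1] * y[0] for x, y in zip(xs, ys))
    den = sum(x[0] * y[0] + x[1] * y[1] for x, y in zip(xs, ys))
    return math.atan2(num, den)

def rotate(v, theta):
    c, s = math.cos(theta), math.sin(theta)
    return (c * v[0] - s * v[1], s * v[0] + c * v[1])

# "Source" embeddings and their targets: the ys are the xs rotated 90 degrees,
# standing in for word pairs from a seed dictionary.
xs = [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
ys = [rotate(x, math.pi / 2) for x in xs]
theta = best_rotation(xs, ys)   # recovers ~pi/2
```

In MUSE the aligned pairs come either from a seed dictionary (supervised) or from the adversarially induced matching (unsupervised), and the solved map is applied to the whole source vocabulary.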
  • 3
    Improved GAN

    Code for the paper "Improved Techniques for Training GANs"

    Improved-GAN is the official code release from OpenAI accompanying the research paper Improved Techniques for Training GANs. It provides implementations of experiments conducted on datasets such as MNIST, SVHN, CIFAR-10, and ImageNet. The project focuses on demonstrating enhanced training methods for Generative Adversarial Networks, addressing stability and performance issues that were common in earlier GAN models. The repository includes training scripts, evaluation methods, and pretrained...
    Downloads: 2 This Week
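One of the paper's stabilization techniques is feature matching: rather than maximizing the discriminator's output directly, the generator is trained to match the mean of an intermediate discriminator feature over real versus generated batches. A minimal sketch of that objective on plain lists (the "features" here are stand-ins for discriminator activations):

```python
def feature_matching_loss(real_feats, fake_feats):
    """Squared L2 distance between the mean discriminator features of a
    real batch and a generated batch (the feature-matching objective)."""
    dim = len(real_feats[0])
    mean = lambda batch, j: sum(f[j] for f in batch) / len(batch)
    return sum((mean(real_feats, j) - mean(fake_feats, j)) ** 2
               for j in range(dim))

real = [[1.0, 2.0], [3.0, 4.0]]   # batch mean feature (2.0, 3.0)
fake = [[2.0, 2.0], [2.0, 2.0]]   # batch mean feature (2.0, 2.0)
loss = feature_matching_loss(real, fake)   # (0)^2 + (1)^2 = 1.0
```

Matching batch statistics gives the generator a smoother training signal than chasing the discriminator's scalar verdict, which is one of the instabilities the paper targets.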
  • 4
    InfoGAN

    Code for reproducing key results in the paper

    The InfoGAN repository contains the original implementation used to reproduce the results in the paper “InfoGAN: Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets”. InfoGAN is a variant of the GAN (Generative Adversarial Network) architecture that aims to learn disentangled and interpretable latent representations by maximizing the mutual information between a subset of the latent codes and the generated outputs. That extra incentive encourages the...
    Downloads: 0 This Week
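The "extra incentive" described above is, in practice, a variational lower bound on mutual information: an auxiliary head Q(c|x) tries to recover the sampled latent code from the generated image, and the bound reduces to a cross-entropy term. A tiny numerical sketch for a categorical code (Q's logits are made up for illustration):

```python
import math

def info_loss(code_index, q_logits):
    """Negative variational MI lower bound for one sample: the cross-entropy
    -log Q(c|x) of the true categorical code under Q's softmax output."""
    m = max(q_logits)                                   # stable log-sum-exp
    log_z = m + math.log(sum(math.exp(l - m) for l in q_logits))
    return log_z - q_logits[code_index]

# A Q head confident in the sampled code incurs low loss (tight bound)...
confident = info_loss(0, [5.0, 0.0, 0.0])
# ...while a uniform Q incurs -log(1/3), the worst case for 3 categories.
uniform = info_loss(0, [0.0, 0.0, 0.0])
```

Minimizing this term alongside the usual GAN losses is what pushes individual latent codes to control interpretable factors such as digit identity or stroke width.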
  • 5
    SG2Im

Code for "Image Generation from Scene Graphs", Johnson et al., CVPR 2018

    sg2im is a research codebase that learns to synthesize images from scene graphs—structured descriptions of objects and their relationships. Instead of conditioning on free-form text alone, it leverages graph structure to control layout and interactions, generating scenes that respect constraints like “person left of dog” or “cup on table.” The pipeline typically predicts object layouts (bounding boxes and masks) from the graph, then renders a realistic image conditioned on those layouts....
    Downloads: 0 This Week
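A scene graph of the kind described above is simply a set of objects plus (subject, predicate, object) triples. A minimal representation sketch in plain Python (the field names are illustrative, not sg2im's actual input schema):

```python
# Minimal scene-graph structure: objects plus (subject, predicate, object)
# triples, here encoding "person left of dog" and "cup on table".
scene_graph = {
    "objects": ["person", "dog", "cup", "table"],
    "triples": [
        (0, "left of", 1),   # person left of dog
        (2, "on", 3),        # cup on table
    ],
}

def relations_of(graph, obj_index):
    """All (predicate, other-object-name) pairs with obj_index as subject."""
    return [(pred, graph["objects"][o])
            for s, pred, o in graph["triples"] if s == obj_index]

rels = relations_of(scene_graph, 0)   # [("left of", "dog")]
```

The model consumes such a graph with a graph convolution network, predicts a layout of boxes and masks from it, and only then renders pixels, which is what lets the triples act as hard-ish layout constraints.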
  • 6
    Leanstral

    Open-source code agent designed for Lean 4

    Leanstral is an open-weight large language model developed by Mistral AI and specifically designed as a code agent for the Lean 4 proof assistant, enabling advanced interaction with formal mathematics and program verification systems. The model is built to understand and generate Lean 4 code, which is used to express complex mathematical constructs as well as formal software specifications. By focusing on theorem proving and formal reasoning, Leanstral represents a specialized direction within large language models, targeting domains that require strict correctness and logical rigor rather than general conversational tasks. ...
    Downloads: 0 This Week
  • 7
    Nemotron 3 Super

    Open language model developed by NVIDIA as part of Nemotron-3 family

    NVIDIA-Nemotron-3-Super-120B-A12B-FP8 is a large-scale open language model developed by NVIDIA as part of the Nemotron-3 family of generative AI systems designed for advanced reasoning, conversational interaction, and agent-based workflows. The model contains approximately 120 billion parameters, but employs a Mixture-of-Experts architecture that activates only a smaller subset of parameters during inference, improving computational efficiency while maintaining high capability. ...
    Downloads: 0 This Week
  • 8
    Nemotron 3

    Large language model developed and released by NVIDIA

    ...This configuration supports a massive context length of up to 1 million tokens, making it suitable for long-context reasoning, agentic tasks, extended dialogues, and applications like code generation or document summarization.
    Downloads: 0 This Week
  • 9
    DeepSeek-V3.1-Terminus

    685B model with improved agents and consistency

    ...It improves language consistency, reducing mixed Chinese-English outputs and eliminating abnormal characters, enhancing reliability in multilingual scenarios. The update also refines agentic capabilities, especially for the Code Agent and Search Agent, leading to better tool integration and query handling. Benchmarks show small but notable gains, such as raising MMLU-Pro from 84.8 to 85.0, GPQA-Diamond from 80.1 to 80.7, and SWE Verified from 66.0 to 68.4, along with significant improvements in agent benchmarks like BrowseComp (30.0 → 38.5) and Terminal-bench (31.3 → 36.7). ...
    Downloads: 0 This Week
  • 10
    MiniMax-M2.7

    Self-evolving AI model for agents, coding, and complex workflows

    MiniMax-M2.7 is a large-scale open-weight language model designed for advanced agent-based workflows, professional software engineering, and complex productivity tasks. With 229B parameters, it introduces a self-evolution framework in which the model actively improves its own capabilities by updating memory, generating skills, and iterating through reinforcement learning experiments. This process enables it to autonomously refine systems, achieving measurable performance gains such as a 30%...
    Downloads: 0 This Week
  • 11
    Qwen3.6-27B

    Dense multimodal Qwen model for coding, agents, and long context

    Qwen3.6-27B is an open-weight multimodal model built to deliver strong real-world coding, agent, and long-context performance in a dense 27B-parameter architecture. It combines a causal language model with a vision encoder and supports text, image, and video inputs, making it suitable for both software workflows and broader multimodal tasks. The model emphasizes stability and practical developer utility, with major improvements in agentic coding, frontend generation, and repository-level...
    Downloads: 0 This Week
  • 12
    gpt-oss-120b

    OpenAI’s open-weight 120B model optimized for reasoning and tooling

    ...Developers can control the reasoning level (low, medium, high) to balance speed and depth depending on the task. Released under the Apache 2.0 license, it enables both commercial and research applications. The model supports function calling, web browsing, and code execution, streamlining intelligent agent development.
    Downloads: 0 This Week
  • 13
    Kimi K2.6

    Multimodal agent model for coding, orchestration, and autonomy

    Kimi K2.6 is an open-source native multimodal agentic model built for advanced autonomous execution, long-horizon coding, and large-scale task orchestration. It is designed to handle complex end-to-end software workflows across multiple languages and domains, including front-end development, DevOps, performance optimization, and coding-driven design. Beyond coding, it can transform prompts and visual inputs into production-ready interfaces and lightweight full-stack outputs with structured...
    Downloads: 0 This Week
  • 14
    Qwen3.6-35B-A3B

    Open multimodal model for coding, agents, and long-context tasks

    Qwen3.6-35B-A3B is an open-weight multimodal model built for real-world coding, agent workflows, and long-context reasoning. It combines a causal language model with a vision encoder, supports text, image, and video inputs, and is optimized for frameworks such as Transformers, vLLM, SGLang, and KTransformers. The model emphasizes stability, responsiveness, and practical developer productivity, with major improvements in agentic coding, frontend generation, and repository-level reasoning. A...
    Downloads: 0 This Week
  • 15
    DeepSeek-V4-Pro

    Flagship MoE model for advanced reasoning, coding, and agents

    DeepSeek-V4-Pro is a flagship open-weight Mixture-of-Experts language model designed for high-performance reasoning, coding, and agent-based workflows at scale. It features approximately 1.6 trillion total parameters with around 49B activated during inference, enabling strong efficiency while maintaining frontier-level capability. The model supports an ultra-long context window of up to 1 million tokens, making it highly suitable for long-document reasoning, large codebases, and complex...
    Downloads: 0 This Week
  • 16
    Hunyuan-A13B-Instruct

    Efficient 13B MoE language model with long context and reasoning modes

    ...Open-source under a custom license, it's ideal for researchers and developers seeking scalable, high-context AI capabilities with optimized inference.
    Downloads: 0 This Week
  • 17
    Qwen2.5-VL-3B-Instruct

    Qwen2.5-VL-3B-Instruct: Multimodal model for chat, vision & video

    Qwen2.5-VL-3B-Instruct is a 3.75 billion parameter multimodal model by Qwen, designed to handle complex vision-language tasks in both image and video formats. As part of the Qwen2.5 series, it supports image-text-to-text generation with capabilities like chart reading, object localization, and structured data extraction. The model can serve as an intelligent visual agent capable of interacting with digital interfaces and understanding long-form videos by dynamically sampling resolution and...
    Downloads: 0 This Week
  • 18
    GLM-4.5-Air

    Compact hybrid reasoning language model for intelligent responses

    GLM-4.5-Air is a multilingual large language model with 106 billion total parameters and 12 billion active parameters, designed for conversational AI and intelligent agents. It is part of the GLM-4.5 family developed by Zhipu AI, offering hybrid reasoning capabilities via two modes: a thinking mode for complex reasoning and tool use, and a non-thinking mode for immediate responses. The model is optimized for efficiency and deployment, delivering strong results across 12 industry benchmarks,...
    Downloads: 0 This Week
  • 19
    Qwen3.6-35B-A3B-FP8

    FP8 Qwen model for efficient multimodal coding and agent tasks

    Qwen3.6-35B-A3B-FP8 is an FP8-quantized version of Qwen3.6 designed to deliver nearly the same performance as the original model while improving deployment efficiency. It is a multimodal open-weight model that combines a causal language model with a vision encoder, supporting text, image, and video inputs. Built for stability and real-world developer use, it emphasizes agentic coding, repository-level reasoning, and productive long-context workflows. A key capability is thinking...
    Downloads: 0 This Week
  • 20
    Devstral 2

    Agentic 123B coding model optimized for large-scale engineering

    Devstral 2 is a large-scale agentic language model purpose-built for software engineering tasks, excelling at codebase exploration, multi-file editing, and tool-driven automation. With 123B parameters and FP8 instruct tuning, it delivers strong instruction following for chat-based workflows, coding assistants, and autonomous developer agents. The model demonstrates outstanding performance on SWE-bench, validating its effectiveness in real-world engineering scenarios. It generalizes well...
    Downloads: 0 This Week
  • 21
    DeepSeek-V3.2

    High-efficiency reasoning and agentic intelligence model

    DeepSeek-V3.2 is a cutting-edge large language model developed by DeepSeek-AI, focused on achieving high reasoning accuracy and computational efficiency for agentic tasks. It introduces DeepSeek Sparse Attention (DSA), a new attention mechanism that dramatically reduces computational overhead while maintaining strong long-context performance. Built with a scalable reinforcement learning framework, it reaches near-GPT-5 levels of reasoning and outperforms comparable models like DeepSeek-V3.1...
    Downloads: 0 This Week
  • 22
    gpt-oss-20b

    OpenAI’s compact 20B open model for fast, agentic, and local use

    ...It is ideal for developers building lightweight AI agents or experimenting with fine-tuning on consumer-grade hardware.
    Downloads: 0 This Week
  • 23
    Mellum-4b-base

    JetBrains’ 4B parameter code model for completions

    Mellum-4b-base is JetBrains’ first open-source large language model designed and optimized for code-related tasks. Built with 4 billion parameters and a LLaMA-style architecture, it was trained on over 4.2 trillion tokens across multiple programming languages, including datasets such as The Stack, StarCoder, and CommitPack. With a context window of 8,192 tokens, it excels at code completion, fill-in-the-middle tasks, and intelligent code suggestions for professional developer tools and IDEs....
    Downloads: 0 This Week
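Fill-in-the-middle models like the one above are prompted with the code before and after a gap, delimited by special sentinel tokens, and generate the missing middle. The exact sentinel strings vary by model and are not documented in this listing, so the ones below are illustrative placeholders:

```python
def build_fim_prompt(prefix, suffix,
                     pre="<fim_prefix>", suf="<fim_suffix>", mid="<fim_middle>"):
    """Assemble a prefix-suffix-middle prompt; the model generates text after
    the middle sentinel. Sentinel strings here are illustrative placeholders,
    not Mellum's documented vocabulary."""
    return f"{pre}{prefix}{suf}{suffix}{mid}"

# The editor cursor sits between the function body and the call site below it.
prompt = build_fim_prompt(
    "def add(a, b):\n    return ",
    "\n\nprint(add(1, 2))",
)
```

This prefix/suffix/middle rearrangement is what lets a left-to-right model complete code at an arbitrary cursor position inside an IDE.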
  • 24
    Qwen2.5-VL-7B-Instruct

    Multimodal 7B model for image, video, and text understanding tasks

    Qwen2.5-VL-7B-Instruct is a multimodal vision-language model developed by the Qwen team, designed to handle text, images, and long videos with high precision. Fine-tuned from Qwen2.5-VL, this 7-billion-parameter model can interpret visual content such as charts, documents, and user interfaces, as well as recognize common objects. It supports complex tasks like visual question answering, localization with bounding boxes, and structured output generation from documents. The model is also...
    Downloads: 0 This Week
  • 25
    VaultGemma

    VaultGemma: 1B DP-trained Gemma variant for private NLP tasks

    VaultGemma is a sub-1B parameter variant of Google’s Gemma family that is pre-trained from scratch with Differential Privacy (DP), providing mathematically backed guarantees that its outputs do not reveal information about any single training example. Using DP-SGD with a privacy budget across a large English-language corpus (web documents, code, mathematics), it prioritizes privacy over raw utility. The model follows a Gemma-2–style architecture, outputs text from up to 1,024 input tokens,...
    Downloads: 0 This Week
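The DP-SGD training mentioned above rests on two steps: clip each example's gradient to a norm bound C, then add Gaussian noise scaled to C before averaging, so no single training example can dominate an update. A minimal pure-Python sketch of one aggregation step (gradients are short vectors here; real training clips full per-example gradient vectors):

```python
import math, random

def dp_sgd_step(per_example_grads, clip_norm, noise_multiplier, rng):
    """One DP-SGD aggregation: clip each per-example gradient to clip_norm,
    sum, add N(0, (noise_multiplier * clip_norm)^2) noise per coordinate,
    then average over the batch."""
    clipped = []
    for g in per_example_grads:
        norm = math.sqrt(sum(x * x for x in g))
        scale = min(1.0, clip_norm / norm) if norm > 0 else 1.0
        clipped.append([x * scale for x in g])
    n, dim = len(per_example_grads), len(per_example_grads[0])
    noisy_sum = [sum(c[j] for c in clipped)
                 + rng.gauss(0.0, noise_multiplier * clip_norm)
                 for j in range(dim)]
    return [s / n for s in noisy_sum]

rng = random.Random(0)
grads = [[3.0, 4.0], [0.1, 0.0]]   # first gradient has norm 5 -> gets clipped
update = dp_sgd_step(grads, clip_norm=1.0, noise_multiplier=0.0, rng=rng)
# With noise_multiplier=0 the first gradient is rescaled to (0.6, 0.8),
# so the averaged update is (0.35, 0.4).
```

The noise multiplier and batch size together determine the privacy budget (epsilon, delta); the sketch omits that accounting, which is the hard part of a real DP-SGD implementation.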