17 projects for "artificial intelligence php software" with 2 filters applied:

  • 1
    GLM-5

    From Vibe Coding to Agentic Engineering

    GLM-5 is a next-generation open-source large language model (LLM) developed by the Z.ai team under the zai-org organization that pushes the boundaries of reasoning, coding, and long-horizon agentic intelligence. Building on earlier GLM series models, GLM-5 dramatically scales the parameter count (to roughly 744 billion) and expands pre-training data to significantly improve performance on complex tasks such as multi-step reasoning, software engineering workflows, and agent orchestration...
    Downloads: 88 This Week
    See Project
  • 2
    MiniMax-M2.1

    MiniMax-M2.1, a state-of-the-art model for real-world development and agents.

    MiniMax-M2.1 is an open-source, state-of-the-art agentic language model released to democratize high-performance AI capabilities. It goes beyond a simple parameter upgrade, delivering major gains in coding, tool use, instruction following, and long-horizon planning. The model is designed to be transparent, controllable, and accessible, enabling developers to build autonomous systems without relying on closed platforms. MiniMax-M2.1 excels in real-world software engineering tasks, including...
    Downloads: 4 This Week
    See Project
  • 3
    Qwen3.6

    Qwen3.6 is a large language model series developed by the Qwen team

    The Qwen3.6 project is an open-source large language model series developed by Alibaba’s Qwen team, designed to deliver high-performance AI capabilities with a strong emphasis on real-world usability and developer productivity. It builds upon the advancements introduced in Qwen3.5, focusing on improving stability, responsiveness, and practical application in coding and agent-based workflows. The repository serves as a central hub for documentation, community discussion, and access to the...
    Downloads: 26 This Week
    See Project
  • 4
    Granite Code Models

    A Family of Open Foundation Models for Code Intelligence

    Granite Code Models are IBM’s open-source, decoder-only models tailored for code tasks such as fixing bugs, explaining and documenting code, and modernizing codebases. Trained on code from 116 programming languages, the family targets strong performance across diverse benchmarks while remaining accessible to the community. The repository introduces the model lineup, intended uses, and evaluation highlights, and it complements IBM’s broader Granite initiative spanning multiple modalities....
    Downloads: 2 This Week
    See Project
  • 5
    FlashMLA

    FlashMLA: Efficient Multi-head Latent Attention Kernels

    FlashMLA is a high-performance decoding kernel library designed especially for Multi-Head Latent Attention (MLA) workloads, targeting NVIDIA Hopper GPU architectures. It provides optimized kernels for MLA decoding, including support for variable-length sequences, helping reduce latency and increase throughput in model inference systems using that attention style. The library supports both BF16 and FP16 data types, and includes a paged KV cache implementation with a block size of 64 to...
    Downloads: 0 This Week
    See Project
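    The paged KV cache mentioned in the FlashMLA entry stores key/value tensors in fixed-size blocks and addresses them through a per-sequence block table. Below is a minimal sketch of that addressing scheme, assuming the block size of 64 stated in the description; the function and variable names are illustrative and are not FlashMLA's actual API.

    ```python
    BLOCK_SIZE = 64  # paged KV cache block size stated in the FlashMLA description

    def locate(token_pos: int, block_table: list[int]) -> tuple[int, int]:
        """Map a logical token position to (physical block id, offset within block)."""
        logical_block = token_pos // BLOCK_SIZE
        offset = token_pos % BLOCK_SIZE
        return block_table[logical_block], offset

    # A per-sequence block table maps logical blocks to arbitrary physical blocks,
    # so variable-length sequences can share one pre-allocated memory pool.
    block_table = [7, 2, 9]  # hypothetical physical block ids
    print(locate(130, block_table))  # token 130 -> logical block 2, offset 2 -> (9, 2)
    ```

    The indirection through the block table is what lets a kernel handle variable-length sequences without contiguous per-sequence allocations.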
  • 6
    IQuest-Coder-V1 Model Family

    New family of code large language models (LLMs)

    IQuest-Coder-V1 is a cutting-edge family of open-source large language models specifically engineered for code generation, deep code understanding, and autonomous software engineering tasks. These models range from tens of billions to smaller footprints and are trained on a novel code-flow multi-stage paradigm that captures how real software evolves over time — not just static code snapshots — giving them a deeper semantic understanding of programming logic. They support native long contexts...
    Downloads: 0 This Week
    See Project
  • 7
    MiniMax-M1

    Open-weight, large-scale hybrid-attention reasoning model

    MiniMax-M1 is presented as the world’s first open-weight, large-scale hybrid-attention reasoning model, designed to push the frontier of long-context, tool-using, and deeply “thinking” language models. It is built on the MiniMax-Text-01 foundation and keeps the same massive parameter budget, but reworks the attention and training setup for better reasoning and test-time compute scaling. Architecturally, it combines Mixture-of-Experts layers with lightning attention, enabling the model to...
    Downloads: 0 This Week
    See Project
  • 8
    Qwen2.5-Coder

    Qwen2.5-Coder is the code-specialized version of the Qwen2.5 large language model

    Qwen2.5-Coder, developed by QwenLM, is an advanced open-source code generation model designed for developers seeking powerful and diverse coding capabilities. It includes multiple model sizes—ranging from 0.5B to 32B parameters—providing solutions for a wide array of coding needs. The model supports over 92 programming languages and offers exceptional performance in generating code, debugging, and mathematical problem-solving. Qwen2.5-Coder, with its long context length of 128K tokens, is...
    Downloads: 11 This Week
    See Project
  • 9
    Leanstral

    Open-source code agent designed for Lean 4

    Leanstral is an open-weight large language model developed by Mistral AI and specifically designed as a code agent for the Lean 4 proof assistant, enabling advanced interaction with formal mathematics and program verification systems. The model is built to understand and generate Lean 4 code, which is used to express complex mathematical constructs as well as formal software specifications. By focusing on theorem proving and formal reasoning, Leanstral represents a specialized direction...
    Downloads: 0 This Week
    See Project
  • 10
    Mistral Small 4

    Model that fuses instruction following, reasoning, and agentic skills

    The Mistral Small 4 collection is a set of open-weight large language models developed by Mistral AI that aim to unify multiple capabilities, including instruction following, reasoning, and coding, within a single efficient architecture. These models are part of the broader Mistral Small family, which is designed to deliver strong performance across a wide range of everyday AI tasks while maintaining relatively low latency and efficient deployment requirements. The collection reflects an...
    Downloads: 0 This Week
    See Project
  • 11
    MiniMax-M2.7

    Self-evolving AI model for agents, coding, and complex workflows

    MiniMax-M2.7 is a large-scale open-weight language model designed for advanced agent-based workflows, professional software engineering, and complex productivity tasks. With 229B parameters, it introduces a self-evolution framework in which the model actively improves its own capabilities by updating memory, generating skills, and iterating through reinforcement learning experiments. This process enables it to autonomously refine systems, achieving measurable performance gains such as a 30%...
    Downloads: 0 This Week
    See Project
  • 12
    MiMo-V2.5-Pro

    Flagship MoE model for long-context agents and complex coding

    MiMo-V2.5-Pro is Xiaomi’s flagship Mixture-of-Experts (MoE) model built for the most demanding agentic, software engineering, and long-horizon reasoning tasks. It features approximately 1.02 trillion total parameters with 42B activated per inference, balancing extreme capability with efficient execution. The model supports a 1 million token context window, enabling it to maintain coherence across long workflows involving thousands of tool calls and multi-step reasoning chains....
    Downloads: 0 This Week
    See Project
  • 13
    MiMo-V2.5

    Omnimodal AI model for agents, coding, and long-context tasks

    MiMo-V2.5 is a native omnimodal large language model developed by Xiaomi, designed for advanced agentic workflows, multimodal reasoning, and long-context processing. Built on a Mixture-of-Experts architecture with approximately 309B total parameters and around 15B activated per inference, it balances high capability with efficient execution. The model natively processes text, images, video, and audio within a unified system, enabling cross-modal understanding and complex task execution in a...
    Downloads: 0 This Week
    See Project
  • 14
    Devstral Small 2

    Lightweight 24B agentic coding model with vision and long context

    Devstral Small 2 is a compact agentic language model designed for software engineering workflows, excelling at tool usage, codebase exploration, and multi-file editing. With 24B parameters and FP8 instruct tuning, it delivers strong instruction following while remaining lightweight enough for local and on-device deployment. The model achieves competitive performance on SWE-bench, validating its effectiveness for real-world coding and automation tasks. It introduces vision capabilities,...
    Downloads: 0 This Week
    See Project
  • 15
    Devstral 2

    Agentic 123B coding model optimized for large-scale engineering

    Devstral 2 is a large-scale agentic language model purpose-built for software engineering tasks, excelling at codebase exploration, multi-file editing, and tool-driven automation. With 123B parameters and FP8 instruct tuning, it delivers strong instruction following for chat-based workflows, coding assistants, and autonomous developer agents. The model demonstrates outstanding performance on SWE-bench, validating its effectiveness in real-world engineering scenarios. It generalizes well...
    Downloads: 0 This Week
    See Project
  • 16
    Qwen3.6-27B

    Dense multimodal Qwen model for coding, agents, and long context

    Qwen3.6-27B is an open-weight multimodal model built to deliver strong real-world coding, agent, and long-context performance in a dense 27B-parameter architecture. It combines a causal language model with a vision encoder and supports text, image, and video inputs, making it suitable for both software workflows and broader multimodal tasks. The model emphasizes stability and practical developer utility, with major improvements in agentic coding, frontend generation, and repository-level...
    Downloads: 0 This Week
    See Project
  • 17
    Kimi K2.6

    Multimodal agent model for coding, orchestration, and autonomy

    Kimi K2.6 is an open-source native multimodal agentic model built for advanced autonomous execution, long-horizon coding, and large-scale task orchestration. It is designed to handle complex end-to-end software workflows across multiple languages and domains, including front-end development, DevOps, performance optimization, and coding-driven design. Beyond coding, it can transform prompts and visual inputs into production-ready interfaces and lightweight full-stack outputs with structured...
    Downloads: 0 This Week
    See Project