Showing 23 open source projects for "org..json"

  • 1
    GLM-5

    From Vibe Coding to Agentic Engineering

    GLM-5 is a next-generation open-source large language model (LLM) developed by the Z.ai team under the zai-org organization that pushes the boundaries of reasoning, coding, and long-horizon agentic intelligence. Building on earlier GLM-series models, GLM-5 dramatically scales the parameter count (to roughly 744 billion) and expands pre-training data, significantly improving performance on complex tasks such as multi-step reasoning, software engineering workflows, and agent orchestration compared to predecessors like GLM-4.5. ...
    Downloads: 161 This Week
  • 2
    AlphaFold 3

    AlphaFold 3 inference pipeline

    ...The system is designed for scientific research applications in structural biology, biochemistry, and bioinformatics, enabling accurate modeling of proteins, ligands, and covalent modifications. Users can perform local predictions via Docker containers, integrating AlphaFold 3’s inference process with provided JSON input configurations. The software includes flexible options for running both data preprocessing and GPU-accelerated inference, allowing users to adapt to available computational resources.
    Downloads: 9 This Week
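
    For the AlphaFold 3 entry above, here is a minimal sketch of the JSON input configuration the inference pipeline consumes, written out from Python. The job name and sequence are placeholders, and the field names follow the published input format, so verify them against the repository's input documentation for the release you run.

      import json

      # Minimal single-protein AlphaFold 3 job description (illustrative values).
      job = {
          "name": "example_monomer",  # placeholder job name
          "modelSeeds": [1],
          "sequences": [
              {"protein": {"id": "A", "sequence": "MVLSPADKTNVKAAW"}}  # placeholder sequence
          ],
          "dialect": "alphafold3",
          "version": 1,
      }

      with open("af_input/example_monomer.json", "w") as f:
          json.dump(job, f, indent=2)

    The Docker workflow then mounts the directory holding files like this and points the bundled run script at it for data preprocessing and GPU-accelerated inference.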
  • 3
    Qwen-2.5-VL

    Qwen2.5-VL is the multimodal large language model series of the Qwen family

    ...Trained on a comprehensive dataset of up to 18 trillion tokens, Qwen2.5 models exhibit significant improvements in instruction following, long-text generation (exceeding 8,000 tokens), and structured data comprehension, such as tables and JSON formats. They support context lengths up to 128,000 tokens and offer multilingual capabilities in over 29 languages, including Chinese, English, French, Spanish, and more. The models are open-source under the Apache 2.0 license, with resources and documentation available on platforms like Hugging Face and ModelScope.
    Downloads: 4 This Week
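
    A short usage sketch for the Qwen2.5-VL entry above, following the pattern from the Hugging Face model cards. It assumes a transformers build with Qwen2.5-VL support plus the qwen-vl-utils helper package; the image URL and prompt are placeholders.

      from transformers import Qwen2_5_VLForConditionalGeneration, AutoProcessor
      from qwen_vl_utils import process_vision_info  # helper package published with the models

      model_id = "Qwen/Qwen2.5-VL-7B-Instruct"
      model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
          model_id, torch_dtype="auto", device_map="auto"
      )
      processor = AutoProcessor.from_pretrained(model_id)

      # One user turn mixing an image (placeholder URL) with a text instruction.
      messages = [{
          "role": "user",
          "content": [
              {"type": "image", "image": "https://example.com/chart.png"},
              {"type": "text", "text": "Summarize this chart as JSON."},
          ],
      }]

      text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
      images, videos = process_vision_info(messages)
      inputs = processor(text=[text], images=images, videos=videos,
                         padding=True, return_tensors="pt").to(model.device)
      out = model.generate(**inputs, max_new_tokens=256)
      # Decode only the newly generated tokens, skipping the prompt.
      print(processor.batch_decode(out[:, inputs.input_ids.shape[1]:],
                                   skip_special_tokens=True)[0])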
  • 4
    Profile Data

    Analyze computation-communication overlap in V3/R1

    ...The profiling data targets insights into computation-communication overlap, pipeline scheduling (e.g. DualPipe), and how MoE / EP / parallelism strategies interact in real systems. The repository contains JSON trace files such as train.json, prefill.json, and decode.json, along with associated assets. Users can load them into tools like Chrome tracing to inspect GPU idle times, overlapping operations, and scheduling alignment. The goal is to bring transparency to internal efficiency tradeoffs, enabling researchers to reproduce, analyze, or improve on DeepSeek’s parallelism strategies. ...
    Downloads: 0 This Week
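
    Beyond opening the traces in Chrome tracing, the files can be inspected programmatically. A sketch, assuming the traces follow the Chrome trace-event format with complete ("ph": "X") events carrying microsecond ts/dur fields; the idle-gap arithmetic is an illustration, not DeepSeek's own analysis tooling.

      import json
      from collections import defaultdict

      with open("train.json") as f:  # one of the released trace files
          trace = json.load(f)
      events = trace["traceEvents"] if isinstance(trace, dict) else trace

      # Group complete events ("ph" == "X") by (pid, tid) track; ts/dur are microseconds.
      tracks = defaultdict(list)
      for e in events:
          if e.get("ph") == "X" and "dur" in e:
              tracks[(e.get("pid"), e.get("tid"))].append((e["ts"], e["ts"] + e["dur"]))

      # Sum gaps between consecutive spans on each track as a rough idle-time measure.
      for track, spans in sorted(tracks.items(), key=lambda kv: str(kv[0])):
          spans.sort()
          idle = sum(max(0, start - prev_end)
                     for (_, prev_end), (start, _) in zip(spans, spans[1:]))
          print(track, f"idle ~ {idle / 1e3:.1f} ms across {len(spans)} spans")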
  • 5
    ChatGPT Retrieval Plugin

    The ChatGPT Retrieval Plugin lets you easily find personal documents

    ...The repo provides code for ingestion pipelines (embedding documents), APIs for querying, local server components, and privacy / PII detection modules. It also contains plugin manifest files (OpenAPI spec, plugin JSON) so that the retrieval backend can be registered in a plugin ecosystem. Because retrieval is often needed to make LLMs “know what’s in your docs” without leaking everything, this plugin aims to be a secure, flexible building block for retrieval-augmented generation (RAG) systems.
    Downloads: 0 This Week
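
    A sketch of calling the plugin's query endpoint from the entry above, assuming a locally running server; the URL, bearer token, query, and top_k value are placeholders, and the request/response shapes follow the repo's OpenAPI spec, so check it for the exact field names.

      import requests

      resp = requests.post(
          "http://localhost:8000/query",  # placeholder local server
          headers={"Authorization": "Bearer <your-bearer-token>"},
          json={"queries": [{"query": "What did we decide about Q3 pricing?", "top_k": 3}]},
          timeout=30,
      )
      resp.raise_for_status()

      # Each query comes back with its own ranked list of document chunks.
      for query_result in resp.json()["results"]:
          for chunk in query_result["results"]:
              print(chunk["score"], chunk["text"][:80])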
  • 6
    Anthropic SDK TypeScript

    Access to Anthropic's safety-first language model APIs

    ...Example usage shows how to instantiate the Anthropic client, call client.messages.create(...), and obtain responses. It supports streaming endpoints as well. Because TypeScript provides type safety, the SDK helps avoid common errors when constructing and parsing JSON payloads. The repo also includes documentation (an API spec in api.md) and examples, e.g. for streaming.
    Downloads: 1 This Week
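
    The entry above covers the TypeScript SDK; since the sketches in this list use Python, the call below goes through Anthropic's Python SDK, whose messages.create surface mirrors the TypeScript one. The model name is illustrative, so substitute a current one.

      import anthropic

      client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

      message = client.messages.create(
          model="claude-sonnet-4-20250514",  # illustrative model name
          max_tokens=256,
          messages=[{"role": "user", "content": "Summarize retrieval-augmented generation in two sentences."}],
      )
      print(message.content[0].text)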
  • 7
    HunyuanOCR

    OCR expert VLM powered by Hunyuan's native multimodal architecture

    HunyuanOCR is an open-source, end-to-end OCR (optical character recognition) Vision-Language Model (VLM) developed by Tencent-Hunyuan. It’s designed to unify the entire OCR pipeline (detection, recognition, layout parsing, information extraction, translation, and even subtitle or structured output generation) into a single model inference pass instead of a cascade of separate tools. Despite being fairly lightweight (about 1 billion parameters), it delivers state-of-the-art performance across a...
    Downloads: 1 This Week
  • 8
    Protenix

    A trainable PyTorch reproduction of AlphaFold 3

    Protenix is an open-source, trainable PyTorch reimplementation of AlphaFold 3, developed by ByteDance with the goal of democratizing high-accuracy protein structure prediction for computational biology and drug-discovery research. Protenix provides a complete pipeline for turning protein sequences (with optional MSA / sequence alignment) or structural inputs (e.g. PDB/CIF) into full 3D atomic-level structure predictions. It supports both “full” models and lightweight variants such as...
    Downloads: 0 This Week
  • 9
    Qwen2.5

    Open source large language model by Alibaba

    ...Trained on a comprehensive dataset of up to 18 trillion tokens, Qwen2.5 models exhibit significant improvements in instruction following, long-text generation (exceeding 8,000 tokens), and structured data comprehension, such as tables and JSON formats. They support context lengths up to 128,000 tokens and offer multilingual capabilities in over 29 languages, including Chinese, English, French, Spanish, and more. The models are open-source under the Apache 2.0 license, with resources and documentation available on platforms like Hugging Face and ModelScope. This is a full ZIP snapshot of the Qwen2.5 code.
    Downloads: 23 This Week
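
    A minimal generation sketch for the Qwen2.5 entry above, using the standard transformers chat-template pattern from the model cards and exercising the structured-JSON behavior the description mentions; the 7B checkpoint and prompt are arbitrary choices.

      from transformers import AutoModelForCausalLM, AutoTokenizer

      model_id = "Qwen/Qwen2.5-7B-Instruct"  # any size in the series works the same way
      tokenizer = AutoTokenizer.from_pretrained(model_id)
      model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

      messages = [
          {"role": "system", "content": "Respond with valid JSON only."},
          {"role": "user", "content": 'List three prime numbers as {"primes": [...]}.'},
      ]
      text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
      inputs = tokenizer([text], return_tensors="pt").to(model.device)

      out = model.generate(**inputs, max_new_tokens=64)
      # Decode only the newly generated tokens, skipping the prompt.
      print(tokenizer.decode(out[0][inputs.input_ids.shape[1]:], skip_special_tokens=True))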
  • 10
    Hermes 4

    Hermes 4 FP8: hybrid reasoning Llama-3.1-405B model by Nous Research

    ...Post-training improvements include a vastly expanded corpus with ~60B tokens, boosting performance across math, code, STEM, logic, creativity, and structured outputs. The model is designed for schema adherence, producing valid JSON and repairing malformed outputs, making it highly suitable for tool use and function calling. Hermes 4 is engineered for superior steerability with reduced refusal rates, aligning responses to user values while preserving assistant quality. It achieves state-of-the-art results on RefusalBench, outperforming both closed and open models in balancing helpfulness with adaptability.
    Downloads: 0 This Week
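
    In practice, the schema-adherence claim above means downstream code can parse the model's output directly and only fall back to a repair pass when parsing fails. A model-agnostic sketch; generate() is a hypothetical stand-in for whatever client serves the model.

      import json

      def generate(prompt: str) -> str:
          """Hypothetical stub: replace with a real call to the model."""
          raise NotImplementedError

      def structured_call(prompt: str, max_repairs: int = 1) -> dict:
          # Ask for JSON, try to parse, and let the model repair its own output on failure.
          raw = generate(prompt + "\nRespond with valid JSON only.")
          for _ in range(max_repairs + 1):
              try:
                  return json.loads(raw)
              except json.JSONDecodeError as err:
                  raw = generate(f"Repair this so it parses as strict JSON ({err}):\n{raw}")
          raise ValueError("no parseable JSON after repair attempts")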
  • 11
    Qwen2.5-VL-3B-Instruct

    Qwen2.5-VL-3B-Instruct: Multimodal model for chat, vision & video

    ...It uses a SwiGLU and RMSNorm-enhanced ViT architecture and introduces mRoPE updates for robust temporal and spatial understanding. The model supports flexible image input (file path, URL, base64) and outputs structured responses like bounding boxes or JSON, making it highly versatile in commercial and research settings. It excels in a wide range of benchmarks such as DocVQA, InfoVQA, and AndroidWorld control tasks.
    Downloads: 0 This Week
  • 12
    Ministral 3 8B Instruct 2512

    Compact 8B multimodal instruct model optimized for edge deployment

    ...Its multilingual support covers dozens of major languages, allowing it to work across diverse global environments and applications. The model adheres reliably to system prompts, supports native function calling, and outputs clean JSON, giving it strong tool-use behavior; see the function-calling sketch below.
    Downloads: 0 This Week
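
    A sketch of the native function calling mentioned above; the same request shape applies to the other Ministral 3 variants in this list. It assumes the model is served behind an OpenAI-compatible endpoint such as vLLM, and the URL, model name, and tool definition are all placeholders.

      import json
      import requests

      resp = requests.post(
          "http://localhost:8000/v1/chat/completions",  # placeholder vLLM-style endpoint
          json={
              "model": "mistralai/Ministral-3-8B-Instruct-2512",  # illustrative name
              "messages": [{"role": "user", "content": "What's the weather in Paris?"}],
              "tools": [{
                  "type": "function",
                  "function": {
                      "name": "get_weather",
                      "description": "Look up current weather for a city",
                      "parameters": {
                          "type": "object",
                          "properties": {"city": {"type": "string"}},
                          "required": ["city"],
                      },
                  },
              }],
          },
          timeout=60,
      )
      call = resp.json()["choices"][0]["message"]["tool_calls"][0]["function"]
      print(call["name"], json.loads(call["arguments"]))  # arguments arrive as clean JSON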
  • 13
    Ministral 3 8B Reasoning 2512

    Efficient 8B multimodal model tuned for advanced reasoning tasks

    ...Despite its reasoning-focused training, the model remains edge-optimized and can run locally on a single 24GB GPU in BF16, or under 12GB when quantized. It supports dozens of languages, adheres reliably to system prompts, and provides native function calling and structured JSON output—key capabilities for agentic and automation workflows. The model also includes a 256k context window, allowing it to handle long documents and extended reasoning chains.
    Downloads: 0 This Week
  • 14
    Ministral 3 14B Reasoning 2512

    High-precision 14B multimodal model built for advanced reasoning tasks

    ...Despite its scale, the model is engineered for practical deployment and can run locally on 32GB of VRAM in BF16 or under 24GB when quantized. It maintains robust system-prompt adherence, supports dozens of languages, and provides native function calling with clean JSON output for agentic workflows. The model's architecture also delivers a 256k context window, unlocking large-document analysis and long-form reasoning.
    Downloads: 0 This Week
  • 15
    Ministral 3 3B Instruct 2512

    Ultra-efficient 3B multimodal instruct model built for edge deployment

    ...It supports dozens of languages across major global regions, making it well-suited for multilingual and embedded applications. The model also provides function calling, clean JSON output, and stable tool-use behavior, enabling it to serve as a small but effective agentic system.
    Downloads: 0 This Week
  • 16
    Ministral 3 14B Instruct 2512

    Efficient 14B multimodal instruct model with edge deployment and FP8

    ...Its multilingual support spans dozens of major languages, making it suitable for global, multilingual, and localized AI applications. The model’s architecture provides native function calling, structured JSON outputs, and reliable tool-use behavior essential for agentic automation. Overall, it delivers a powerful blend of...
    Downloads: 0 This Week
  • 17
    Ministral 3 3B Reasoning 2512

    Compact 3B-param multimodal model for efficient on-device reasoning

    ...It supports dozens of languages, allowing it to function across global and multilingual contexts. The model retains strong system-prompt adherence, supports function-calling with structured JSON output, and offers a large 256k token context window for extended context reasoning.
    Downloads: 0 This Week
  • 18
    QwQ-32B

    QwQ-32B is a reasoning-focused language model for complex tasks

    QwQ-32B is a 32.8 billion parameter reasoning-optimized language model developed by Qwen as part of the Qwen2.5 family, designed to outperform conventional instruction-tuned models on complex tasks. Built with RoPE positional encoding, SwiGLU activations, RMSNorm, and Attention QKV bias, it excels in multi-turn conversation and long-form reasoning. It supports an extended context length of up to 131,072 tokens and incorporates supervised fine-tuning and reinforcement learning for enhanced...
    Downloads: 0 This Week
  • 19
    Qwen2.5-VL-7B-Instruct

    Multimodal 7B model for image, video, and text understanding tasks

    Qwen2.5-VL-7B-Instruct is a multimodal vision-language model developed by the Qwen team, designed to handle text, images, and long videos with high precision. Fine-tuned from Qwen2.5-VL, this 7-billion-parameter model can interpret visual content such as charts, documents, and user interfaces, as well as recognize common objects. It supports complex tasks like visual question answering, localization with bounding boxes, and structured output generation from documents. The model is also...
    Downloads: 0 This Week
  • 20
    Qwen2.5-14B-Instruct

    Powerful 14B LLM with strong instruction and long-text handling

    ...Qwen2.5-14B-Instruct is built on a transformer backbone with RoPE, SwiGLU, RMSNorm, and attention QKV bias. It’s resilient to varied prompt styles and is especially effective for JSON and tabular data generation. The model is instruction-tuned and supports chat templating, making it ideal for chatbot and assistant use cases.
    Downloads: 0 This Week
  • 21
    Mistral Large 3 675B Instruct 2512 Eagle

    Speculative-decoding accelerator for the 675B Mistral Large 3

    Mistral Large 3 675B Instruct 2512 Eagle is the dedicated speculative-decoding draft model for the full Mistral Large 3 Instruct system, designed to significantly speed up generation while preserving high output quality. It works alongside the primary 675B instruct model, enabling faster response times by predicting several tokens ahead using Mistral’s Eagle speculative method. Built on the same frontier-scale multimodal Mixture-of-Experts architecture, it complements a system featuring 41B...
    Downloads: 0 This Week
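
    As rough intuition for the entry above: speculative decoding lets a small draft model propose several tokens that the large target model then verifies, so one expensive pass can yield multiple tokens. The toy below shows only the generic greedy draft-and-verify loop, not Mistral's Eagle method (which drafts at the feature level); draft_next and target_next are hypothetical stubs.

      def speculative_step(prefix, draft_next, target_next, k=4):
          # 1) The cheap draft model proposes k tokens autoregressively.
          ctx = list(prefix)
          proposal = []
          for _ in range(k):
              tok = draft_next(ctx)
              proposal.append(tok)
              ctx.append(tok)

          # 2) The target model checks the proposals and keeps the longest agreeing
          #    prefix (greedy acceptance), then emits its own next token. A real
          #    system batches these k checks into a single target forward pass.
          ctx = list(prefix)
          accepted = []
          for tok in proposal:
              if target_next(ctx) != tok:
                  break
              accepted.append(tok)
              ctx.append(tok)
          accepted.append(target_next(ctx))
          return accepted  # up to k+1 tokens per target pass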
  • 22
    Mistral Large 3 675B Instruct 2512 NVFP4

    Quantized 675B multimodal instruct model optimized for NVFP4

    Mistral Large 3 675B Instruct 2512 NVFP4 is a frontier-scale multimodal Mixture-of-Experts model featuring 675B total parameters and 41B active parameters, trained from scratch on 3,000 H200 GPUs. This NVFP4 checkpoint is a post-training activation-quantized version of the original instruct model, created through a collaboration between Mistral AI, vLLM, and Red Hat using llm-compressor. It retains the same instruction-tuned behavior as the FP8 model, making it ideal for production...
    Downloads: 0 This Week
  • 23
    Mistral Large 3 675B Instruct 2512

    Frontier-scale 675B multimodal instruct MoE model for enterprise AI

    Mistral Large 3 675B Instruct 2512 is a state-of-the-art multimodal granular Mixture-of-Experts model featuring 675B total parameters and 41B active parameters, trained from scratch on 3,000 H200 GPUs. As the instruct-tuned FP8 variant, it is optimized for reliable instruction following, agentic workflows, production-grade assistants, and long-context enterprise tasks. It incorporates a massive 673B-parameter language MoE backbone and a 2.5B-parameter vision encoder, enabling rich multimodal...
    Downloads: 0 This Week