Showing 21 open source projects for "math"

  • 1
    DeepSeek Math

    Pushing the Limits of Mathematical Reasoning in Open Language Models

    DeepSeek-Math is DeepSeek’s specialized model (with accompanying datasets and evaluation code) focused on mathematical reasoning, symbolic manipulation, proof steps, and advanced quantitative problem solving. The repository is likely to include fine-tuning routines, task datasets (e.g. MATH, GSM8K, ARB), demonstration notebooks, prompt templates, and evaluation results on math benchmarks.
    Downloads: 1 This Week
  • 2
    Qwen2.5-Math

    A series of math-specific large language models of our Qwen2 series

    Qwen2.5-Math is a series of mathematics-specialized large language models in the Qwen2 family, released by Alibaba’s QwenLM. It includes base models (1.5B / 7B / 72B parameters), instruction-tuned versions, and a reward model (RM) to improve alignment. Unlike its predecessor Qwen2-Math, Qwen2.5-Math supports both Chain-of-Thought (CoT) reasoning and Tool-Integrated Reasoning (TIR) for solving math problems, and works in both Chinese and English.
    Downloads: 1 This Week
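
    A minimal sketch of the CoT mode via Hugging Face transformers. The model id and the step-by-step system prompt follow common Qwen conventions but are assumptions here; check the repository's usage notes.

      # Sketch of Chain-of-Thought (CoT) inference with an instruction-tuned
      # Qwen2.5-Math model. Model id and system prompt are assumptions based
      # on Qwen conventions, not confirmed from the repository.
      from transformers import AutoModelForCausalLM, AutoTokenizer

      model_id = "Qwen/Qwen2.5-Math-7B-Instruct"  # assumed Hugging Face id
      tokenizer = AutoTokenizer.from_pretrained(model_id)
      model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

      messages = [
          {"role": "system",
           "content": "Please reason step by step, and put your final answer "
                      "within \\boxed{}."},
          {"role": "user", "content": "Solve for x: 2x + 7 = 21."},
      ]
      input_ids = tokenizer.apply_chat_template(
          messages, add_generation_prompt=True, return_tensors="pt"
      ).to(model.device)
      output = model.generate(input_ids, max_new_tokens=512)
      print(tokenizer.decode(output[0][input_ids.shape[-1]:],
                             skip_special_tokens=True))

    TIR mode would additionally let the model emit code for a tool to execute mid-solution; the prompt and parsing for that mode are model-specific.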
  • 3
    DeepSeekMath-V2

    Towards self-verifiable mathematical reasoning

    DeepSeekMath-V2 is a large-scale open-source AI model designed specifically for advanced mathematical reasoning, theorem proving, and rigorous proof verification. It’s built by DeepSeek as a successor to their earlier math-specialist models. Unlike general-purpose LLMs that might generate plausible-looking math but sometimes hallucinate or mishandle rigorous logic, Math-V2 is engineered to not only generate solutions but also self-verify them, meaning it examines the derivations, checks logical consistency, and flags or corrects mistakes, producing proofs + verification rather than just a final answer. ...
    Downloads: 11 This Week
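
    The generate-then-self-verify loop described above can be pictured with a small stand-alone sketch; propose_proof and verify_proof below are toy stand-ins for the prover and verifier roles, not DeepSeek's API.

      # Toy sketch of a generate-then-verify loop (not DeepSeek's API):
      # draft a derivation, check it step by step, and revise until the
      # verifier passes or a revision budget runs out.
      from dataclasses import dataclass, field

      @dataclass
      class VerifierReport:
          all_steps_valid: bool
          flagged_steps: list = field(default_factory=list)

      def propose_proof(problem: str, feedback=None) -> str:
          # Stand-in for the model drafting (or revising) a derivation.
          return f"proof of {problem}" + (" [revised]" if feedback else "")

      def verify_proof(problem: str, proof: str) -> VerifierReport:
          # Stand-in for the self-verification pass over each step.
          if "[revised]" in proof:
              return VerifierReport(all_steps_valid=True)
          return VerifierReport(False, ["step 2: unjustified inequality"])

      def solve_with_verification(problem: str, max_rounds: int = 3) -> str:
          proof = propose_proof(problem)
          for _ in range(max_rounds):
              report = verify_proof(problem, proof)
              if report.all_steps_valid:
                  return proof  # proof plus verification, not just an answer
              proof = propose_proof(problem, feedback=report.flagged_steps)
          return proof          # best attempt within the budget

      print(solve_with_verification("AM-GM for two variables"))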
  • 4
    DeepSeek LLM

    DeepSeek LLM: Let there be answers

    The DeepSeek-LLM repository hosts the code, model files, evaluations, and documentation for DeepSeek’s LLM series (notably the 67B Chat variant). Its tagline is “Let there be answers.” The repo includes an “evaluation” folder (with results like math benchmark scores) and code artifacts (e.g. pre-commit config) that support model development and deployment. According to the evaluation files, DeepSeek LLM 67B Chat achieves strong performance on math benchmarks under both chain-of-thought (CoT) and tool-assisted reasoning modes. The model is trained from scratch, reportedly on a vast multilingual + code + reasoning dataset, and competes with other open or open-weight models. ...
    Downloads: 4 This Week
  • 5
    LTX-2

    Python inference and LoRA trainer package for the LTX-2 audio–video model

    ...It is architected to give developers low-level control over rendering pipelines, GPU resource management, shader orchestration, and cross-platform abstractions so they can craft visually compelling experiences without starting from scratch. Beyond basic rendering scaffolding, LTX-2 includes optimized math libraries, resource loaders, utilities for texture and buffer handling, and integration points for native event loops and input systems. The framework targets both interactive graphical applications and media-rich experiences, making it a solid foundation for games, creative tools, or visualization systems that demand both performance and flexibility. ...
    Downloads: 26 This Week
  • 6
    DeepSeek V2

    Strong, Economical, and Efficient Mixture-of-Experts Language Model

    ...This version likely includes architectural improvements, training enhancements, and expanded dataset coverage compared to V1. The repository includes model weight artifacts, evaluation benchmarks across a broad suite (e.g. reasoning, math, multilingual), configuration files, and possibly tokenization / inference scripts. The V2 model is expected to support more advanced features like better context window handling, more efficient inference, better performance on challenging tasks, and stronger alignment with human feedback. Because DeepSeek is pushing open-weight competition, this V2 iteration is meant to solidify its position in benchmark rankings and in developer adoption. ...
    Downloads: 5 This Week
  • 7
    Tencent-Hunyuan-Large

    Open-source large language model family from Tencent Hunyuan

    ...It aims to provide competitive capability with efficient deployment and inference, offering FP8 quantization support that reduces memory usage by roughly 50% while maintaining precision, and achieving strong benchmark performance on tasks such as MMLU, MATH, CMMLU, and C-Eval.
    Downloads: 0 This Week
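
    The ~50% memory figure follows directly from bytes per parameter, since weight memory is roughly parameter count times bytes per parameter. A quick back-of-the-envelope check (the parameter count below is a placeholder, not a Hunyuan-Large spec):

      # FP8 stores 1 byte per weight vs. 2 bytes for FP16/BF16, so weight
      # memory halves. The parameter count is a placeholder value.
      def weight_gib(n_params: float, bytes_per_param: int) -> float:
          return n_params * bytes_per_param / 2**30

      n = 70e9  # placeholder parameter count
      print(f"BF16: {weight_gib(n, 2):.0f} GiB, FP8: {weight_gib(n, 1):.0f} GiB")
      # -> FP8 needs ~50% of the BF16 footprint, matching the claim above.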
  • 8
    Qwen3-VL

    Qwen3-VL, the multimodal large language model series by Alibaba Cloud

    Qwen3-VL is the latest multimodal large language model series from Alibaba Cloud’s Qwen team, designed to integrate advanced vision and language understanding. It represents a major upgrade in the Qwen lineup, with stronger text generation, deeper visual reasoning, and expanded multimodal comprehension. The model supports dense and Mixture-of-Experts (MoE) architectures, making it scalable from edge devices to cloud deployments, and is available in both instruction-tuned and...
    Downloads: 4 This Week
  • 9
    Kimi K2

    Kimi K2 is the large language model series developed by Moonshot AI

    ...The model family includes variants like a foundational base model that researchers can fine-tune for specific use cases and an instruct-optimized variant primed for general-purpose chat and agent-style interactions, offering flexibility for both experimentation and deployment. With its high-dimensional attention mechanisms and expert routing, Kimi-K2 excels across benchmarks in live coding, math reasoning, and problem solving.
    Downloads: 0 This Week
  • 10
    DeepSeek Prover V2

    Advancing Formal Mathematical Reasoning via Reinforcement Learning

    DeepSeek-Prover-V2 is DeepSeek’s specialized model for formal theorem proving, particularly targeting proofs in Lean 4. The repository describes how they use recursive proof decomposition by prompting DeepSeek-V3 to break complex theorems into subgoals, synthesize proof sketches, and then combine them to bootstrap training data. They then fine-tune via reinforcement learning with binary correct/incorrect feedback to integrate informal reasoning with formal proof behavior. The repo releases...
    Downloads: 0 This Week
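
    The subgoal-decomposition idea can be seen in miniature in Lean 4, where `have` introduces intermediate goals that are proved separately and then combined. This toy example assumes Mathlib and is not taken from the repository:

      -- Toy Lean 4 illustration (assumes Mathlib; not from the repo):
      -- `have` splits the goal into subgoals, each proved independently,
      -- then combined, mirroring the decomposition strategy in miniature.
      import Mathlib

      theorem sq_add_sq_nonneg (a b : ℤ) : 0 ≤ a * a + b * b := by
        have h1 : 0 ≤ a * a := mul_self_nonneg a  -- subgoal 1
        have h2 : 0 ≤ b * b := mul_self_nonneg b  -- subgoal 2
        exact add_nonneg h1 h2                    -- combine the subgoals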
  • 11
    MiniMax-M1

    Open-weight, large-scale hybrid-attention reasoning model

    MiniMax-M1 is presented as the world’s first open-weight, large-scale hybrid-attention reasoning model, designed to push the frontier of long-context, tool-using, and deeply “thinking” language models. It is built on the MiniMax-Text-01 foundation and keeps the same massive parameter budget, but reworks the attention and training setup for better reasoning and test-time compute scaling. Architecturally, it combines Mixture-of-Experts layers with lightning attention, enabling the model to...
    Downloads: 0 This Week
  • 12
    PRM800K

    800,000 step-level correctness labels on LLM solutions to MATH problems

    PRM800K is a process supervision dataset accompanying the paper Let’s Verify Step by Step, providing 800,000 step-level correctness labels on model-generated solutions to problems from the MATH dataset. The repository releases the raw labels and the labeler instructions used in two project phases, enabling researchers to study how human raters graded intermediate reasoning. Data are stored as newline-delimited JSONL files tracked with Git LFS, where each line is a full solution sample that can contain many step-level labels and rich metadata such as labeler UUIDs, timestamps, generation identifiers, and quality-control flags. ...
    Downloads: 4 This Week
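
    A minimal sketch of scanning one of the JSONL files after fetching it through Git LFS. The file path and field names below are illustrative guesses; the exact schema differs between the two labeling phases, so check the released files.

      # Count step-level ratings in a PRM800K JSONL file (one full solution
      # sample per line). Path and field names are assumptions, not a
      # documented schema.
      import json

      path = "prm800k/data/phase2_train.jsonl"  # assumed location
      ratings = {-1: 0, 0: 0, 1: 0}             # negative / neutral / positive
      with open(path) as f:
          for line in f:
              sample = json.loads(line)
              for step in sample.get("label", {}).get("steps", []):
                  for completion in step.get("completions", []) or []:
                      r = completion.get("rating")
                      if r in ratings:
                          ratings[r] += 1
      print(ratings)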
  • 13
    LLaMA.go

    llama.go is like llama.cpp in pure Golang

    llama.go is like llama.cpp in pure Golang. The code of the project is based on the legendary ggml.cpp framework of Georgi Gerganov, written in C++ with the same attitude to performance and elegance. Both models store FP32 weights, so you'll need at least 32 GB of RAM (not VRAM or GPU RAM) for LLaMA-7B, and double that to 64 GB for LLaMA-13B.
    Downloads: 1 This Week
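
    The RAM guidance above is simple arithmetic: FP32 weights take 4 bytes per parameter. Using the usual rounded parameter counts:

      # 4 bytes per FP32 parameter; parameter counts are the standard
      # rounded figures for the two models.
      for name, n_params in [("LLaMA-7B", 6.7e9), ("LLaMA-13B", 13e9)]:
          print(f"{name}: ~{n_params * 4 / 2**30:.0f} GiB of FP32 weights")
      # -> ~25 GiB and ~48 GiB, hence the 32 GB / 64 GB RAM guidance above.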
  • 14
    Ministral 3 8B Reasoning 2512

    Efficient 8B multimodal model tuned for advanced reasoning tasks.

    ...It combines an 8.4B-parameter language model with a 0.4B vision encoder, enabling it to process both text and images for advanced reasoning tasks. This version is specifically post-trained for reasoning, making it well-suited for math, coding, and STEM applications requiring multi-step logic and problem-solving. Despite its reasoning-focused training, the model remains edge-optimized and can run locally on a single 24GB GPU in BF16, or under 12GB when quantized. It supports dozens of languages, adheres reliably to system prompts, and provides native function calling and structured JSON output—key capabilities for agentic and automation workflows. ...
    Downloads: 0 This Week
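
    A sketch of a quantized local load along the lines described above, using 4-bit quantization through bitsandbytes. The model id is a guess at the Hugging Face path, and the model is treated as a text-only causal LM for simplicity (the vision pathway is ignored); check the official release for the exact loading recipe.

      # Quantized local load sketch. Model id is an assumption; the real
      # release may need a multimodal model class rather than a causal LM.
      from transformers import (AutoModelForCausalLM, AutoTokenizer,
                                BitsAndBytesConfig)

      model_id = "mistralai/Ministral-3-8B-Reasoning-2512"  # assumed id
      quant = BitsAndBytesConfig(load_in_4bit=True)  # well under 12 GB

      tokenizer = AutoTokenizer.from_pretrained(model_id)
      model = AutoModelForCausalLM.from_pretrained(
          model_id,
          quantization_config=quant,
          device_map="auto",
      )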
  • 15
    Ministral 3 14B Reasoning 2512

    High-precision 14B multimodal model built for advanced reasoning tasks

    ...It pairs a 13.5B-parameter language model with a 0.4B vision encoder, enabling strong multimodal reasoning across both text and images. This version is specifically post-trained for reasoning tasks, making it highly effective for math, coding, STEM workloads, and complex multi-step problem-solving. Despite its scale, the model is engineered for practical deployment and can run locally on 32GB of VRAM in BF16 or under 24GB when quantized. It maintains robust system-prompt adherence, supports dozens of languages, and provides native function calling with clean JSON output for agentic workflows. ...
    Downloads: 0 This Week
  • 16
    Ministral 3 3B Reasoning 2512

    Compact 3B-param multimodal model for efficient on-device reasoning

    ...It pairs a 3.4B-parameter language model with a 0.4B-parameter vision encoder, enabling it to understand both text and image inputs. This reasoning-tuned variant is optimized for tasks like math, coding, and other STEM-related problem solving, making it suitable for applications that require logical reasoning, analysis, or structured thinking. Despite its modest size, the model is designed for edge deployment and can run locally, fitting in ~16 GB of VRAM in BF16 or under 8 GB of RAM/VRAM when quantized. It supports dozens of languages, allowing it to function across global and multilingual contexts. ...
    Downloads: 0 This Week
  • 17
    Qwen2-7B-Instruct

    Instruction-tuned 7B language model for chat and complex tasks

    Qwen2-7B-Instruct is a 7.62-billion-parameter instruction-tuned language model from the Qwen2 series developed by Alibaba's Qwen team. Built on a transformer architecture with SwiGLU activation and group query attention, it is optimized for chat, reasoning, coding, multilingual tasks, and extended context understanding up to 131,072 tokens. The model was pretrained on a large-scale dataset and aligned via supervised fine-tuning and direct preference optimization. It shows strong performance...
    Downloads: 0 This Week
  • 18
    QwQ-32B

    QwQ-32B is a reasoning-focused language model for complex tasks

    ...The model is capable of structured thinking and delivers competitive performance against top models like DeepSeek-R1 and o1-mini. Recommended usage involves prompts starting with <think>\n, non-greedy sampling strategies, and support for standardized outputs on math and multiple-choice tasks. For long input handling, it supports YaRN (Yet another RoPE extensioN) for context scaling.
    Downloads: 0 This Week
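
    The recommended usage above translates into roughly the following; the decoding values are illustrative sampling settings, not confirmed official ones.

      # Sketch of the recommended usage: make sure the assistant turn opens
      # with "<think>\n" and decode with non-greedy sampling. Sampling
      # values are illustrative.
      from transformers import AutoModelForCausalLM, AutoTokenizer

      model_id = "Qwen/QwQ-32B"
      tokenizer = AutoTokenizer.from_pretrained(model_id)
      model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

      messages = [{"role": "user", "content": "How many primes are below 30?"}]
      text = tokenizer.apply_chat_template(
          messages, tokenize=False, add_generation_prompt=True
      )
      if not text.endswith("<think>\n"):  # enforce the <think> opener
          text += "<think>\n"

      inputs = tokenizer(text, return_tensors="pt").to(model.device)
      output = model.generate(
          **inputs,
          do_sample=True,   # non-greedy sampling, as recommended
          temperature=0.6,  # illustrative values
          top_p=0.95,
          max_new_tokens=1024,
      )
      print(tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:],
                             skip_special_tokens=True))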
  • 19
    Hermes 4

    Hermes 4 FP8: hybrid reasoning Llama-3.1-405B model by Nous Research

    ...It introduces a hybrid reasoning mode with explicit <think> segments, enabling the model to deliberate deeply when needed and switch to faster responses when desired. Post-training improvements include a vastly expanded corpus with ~60B tokens, boosting performance across math, code, STEM, logic, creativity, and structured outputs. The model is designed for schema adherence, producing valid JSON and repairing malformed outputs, making it highly suitable for tool use and function calling. Hermes 4 is engineered for superior steerability with reduced refusal rates, aligning responses to user values while preserving assistant quality. ...
    Downloads: 0 This Week
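
    The schema-adherence claim is straightforward to exercise: ask for JSON matching a schema, then validate the reply and route malformed output back for repair. A generic sketch; the prompt wording is an assumption, not Hermes 4's documented schema format.

      # Generic structured-output check: valid JSON passes through,
      # malformed output is flagged so the caller can request a repair.
      import json

      schema_prompt = (
          "Answer only with JSON matching this schema: "
          '{"name": "string", "base_model": "string"}'
      )

      def parse_or_flag(reply: str) -> dict:
          try:
              return json.loads(reply)
          except json.JSONDecodeError:
              return {"error": "malformed JSON; request a repair"}

      print(parse_or_flag('{"name": "Hermes 4", "base_model": "Llama-3.1-405B"}'))
      print(parse_or_flag("not json"))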
  • 20
    Qwen2.5-14B-Instruct

    Powerful 14B LLM with strong instruction and long-text handling

    Qwen2.5-14B-Instruct is a powerful instruction-tuned language model developed by the Qwen team, based on the Qwen2.5 architecture. It features 14.7 billion parameters and is optimized for tasks like dialogue, long-form generation, and structured output. The model supports context lengths up to 128K tokens and can generate up to 8K tokens, making it suitable for long-context applications. It demonstrates improved performance in coding, mathematics, and multilingual understanding across over...
    Downloads: 0 This Week
  • 21
    VaultGemma

    VaultGemma: 1B DP-trained Gemma variant for private NLP tasks

    VaultGemma is a sub-1B parameter variant of Google’s Gemma family that is pre-trained from scratch with Differential Privacy (DP), providing mathematically backed guarantees that its outputs do not reveal information about any single training example. Using DP-SGD with a privacy budget across a large English-language corpus (web documents, code, mathematics), it prioritizes privacy over raw utility. The model follows a Gemma-2–style architecture, outputs text from up to 1,024 input tokens,...
    Downloads: 0 This Week
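
    The "mathematically backed guarantee" here is the standard (ε, δ)-differential-privacy bound, which DP-SGD enforces during training; the specific privacy budget VaultGemma was trained with is not stated in this listing. In the usual form, a training mechanism M is (ε, δ)-DP if, for all datasets D and D′ differing in a single training example and every set S of possible trained models,

      Pr[ M(D) ∈ S ]  ≤  exp(ε) · Pr[ M(D′) ∈ S ]  +  δ

    Small ε and δ mean no single example can noticeably change what the trained model might look like, which is why the model's outputs cannot reveal much about any individual training record.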