Showing 646 open source projects for "compute"

  • 1

    pagerank

    pagerank

    A simple Django app to compute the PageRank of webpages within a given URL.
    Downloads: 0 This Week
    Last Update:
    See Project
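The PageRank computation the entry above describes can be sketched with a short power iteration. This is a generic illustration, not the project's actual Django code; the graph, damping factor, and iteration count are illustrative choices.

```python
# Minimal PageRank by power iteration over an adjacency dict.
def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping page -> list of outbound pages."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outs in links.items():
            if outs:
                share = damping * rank[page] / len(outs)
                for out in outs:
                    new_rank[out] += share
            else:  # dangling page: spread its rank evenly
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
        rank = new_rank
    return rank
```

Pages with more inbound links accumulate higher rank; the scores always sum to one.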
  • 2

    Aircraft Trajectory Predictor

    Compute Aircraft Trajectories

    Compute Aircraft (jet) Trajectories based on the Eurocontrol BADA Aircraft Performance Model
    Downloads: 0 This Week
    Last Update:
    See Project
  • 3
    DeepSeek-V3.2-Speciale

    DeepSeek-V3.2-Speciale

    High-compute ultra-reasoning model surpassing GPT-5

    DeepSeek-V3.2-Speciale is the high-compute, ultra-reasoning variant of DeepSeek-V3.2, designed specifically to push the boundaries of mathematical, logical, and algorithmic intelligence. It builds on the DeepSeek Sparse Attention (DSA) framework, delivering dramatically improved long-context efficiency while preserving full model quality. Unlike the standard version, Speciale is tuned exclusively for deep reasoning and therefore does not support tool-calling, focusing its full capacity on pure cognitive performance. ...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 4

    QFLIB

    C++ routines for local and global computations with integer-valued quadratic forms

    These routines compute local densities (at all places) for integer-valued quadratic forms and check representability of all numbers up to a given (multiplicatively defined) bound. These computations produce the explicit (sharp) lower bound for the constant in the asymptotic expression for the "representation numbers" r_Q(m) described in the 2004 Duke Math.
    Downloads: 0 This Week
    Last Update:
    See Project
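The representation numbers r_Q(m) mentioned above count the integer vectors x with Q(x) = m. A brute-force sketch for the example form Q(x, y) = x² + y² shows what is being counted; QFLIB itself derives sharp analytic bounds for these counts rather than enumerating them, and does so for general integer-valued forms.

```python
# Brute-force r_Q(m) = #{(x, y) : x^2 + y^2 == m} for the example
# form Q(x, y) = x^2 + y^2 (illustrative only; not QFLIB's method).
def representation_number(m):
    """Count integer pairs (x, y) with x^2 + y^2 == m."""
    bound = int(m ** 0.5) + 1
    return sum(1 for x in range(-bound, bound + 1)
                 for y in range(-bound, bound + 1)
                 if x * x + y * y == m)
```

For instance, m = 1 has the four representations (±1, 0) and (0, ±1), while m = 3 has none.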
  • 5

    LFPsim

    Simulation scripts to reconstruct Local Field Potentials (LFP).

    LFPsim - Simulation scripts to compute Local Field Potentials (LFP) from cable compartmental models of neurons and networks implemented in the NEURON simulation environment. LFPsim works reliably on biophysically detailed multi-compartmental neurons with ion channels in some or all compartments. Last updated: 12 March 2016. Developed by Harilal Parasuram & Shyam Diwakar, Computational Neuroscience & Neurophysiology Lab, School of Biotechnology, Amrita University, India.
    Downloads: 0 This Week
    Last Update:
    See Project
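The simplest building block behind LFP reconstruction is the point-source approximation for the extracellular potential of a current source in a homogeneous medium. The sketch below shows only that textbook formula; LFPsim itself applies point- and line-source schemes across full compartmental NEURON models, and the unit choices here are illustrative.

```python
import math

# Point-source approximation: V = I / (4 * pi * sigma * r),
# for a current source I in a medium of conductivity sigma at
# distance r (a hedged sketch, not LFPsim's implementation).
def lfp_point_source(current, distance, sigma=0.3):
    """Extracellular potential of a point current source."""
    return current / (4.0 * math.pi * sigma * distance)
```

The potential falls off as 1/r: doubling the electrode distance halves the contribution of a given source.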
  • 6
    A program written in Java to compute Julia sets, density graphs, and other such graphs for complex valued functions.
    Downloads: 0 This Week
    Last Update:
    See Project
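The project above is written in Java; the core escape-time iteration it relies on for Julia sets can be sketched in a few lines of Python. The iteration z → z² + c is standard; the escape radius and iteration cap are conventional choices.

```python
# Escape-time iteration for Julia sets: count steps until |z|
# exceeds the escape radius (max_iter means "did not escape").
def julia_escape_count(z, c, max_iter=100, radius=2.0):
    for i in range(max_iter):
        if abs(z) > radius:
            return i
        z = z * z + c
    return max_iter
```

Coloring each starting point z by its escape count produces the familiar fractal images; density graphs apply the same machinery with different scoring.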
  • 7
    A group of Stackless Python libraries to compute game mechanics for a turn based strategy game.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 8

    model-based-tracking-or-model-fitting

    Originates from model-based pattern tracking and/or homography RANSAC

    - This algorithm works on point features. It performs a random search (RANSAC) in the neighborhood of the previous four corner points, collects candidate sets of four new corners, and computes a homography estimate for each set. The set that gives the best estimate is chosen as the result.
    Downloads: 0 This Week
    Last Update:
    See Project
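The sample-score-keep-best loop described above is the essence of RANSAC. The sketch below illustrates that structure on a deliberately simpler model (a 2D translation between point sets with outliers) rather than the project's four-corner homography, which needs a linear solver; the loop shape is the same.

```python
import random

# RANSAC structure demo: estimate the translation mapping
# src[i] -> dst[i] while tolerating outlier correspondences.
def ransac_translation(src, dst, trials=200, tol=1.0, seed=0):
    rng = random.Random(seed)
    best, best_inliers = None, -1
    for _ in range(trials):
        i = rng.randrange(len(src))          # minimal sample: one pair
        dx = dst[i][0] - src[i][0]
        dy = dst[i][1] - src[i][1]
        inliers = sum(1 for (sx, sy), (tx, ty) in zip(src, dst)
                      if abs(sx + dx - tx) <= tol
                      and abs(sy + dy - ty) <= tol)
        if inliers > best_inliers:           # keep the best-scoring model
            best, best_inliers = (dx, dy), inliers
    return best
```

For a homography the minimal sample is four point pairs instead of one, and the model is fit by solving the resulting linear system, but the scoring and selection are identical.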
  • 9
    DeepSeek-V4-Pro

    DeepSeek-V4-Pro

    Flagship MoE model for advanced reasoning, coding, and agents

    ...The model supports an ultra-long context window of up to 1 million tokens, making it highly suitable for long-document reasoning, large codebases, and complex multi-step tasks. Architecturally, it introduces optimizations to reduce compute and memory costs while improving stability across long sequences. DeepSeek-V4-Pro is positioned as the high-end variant of the V4 family, outperforming most open-source models in areas such as agentic coding, STEM reasoning, and world knowledge, and approaching the performance of leading closed-source systems. It also supports advanced reasoning modes and tool-based workflows, enabling autonomous task execution.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 10

    epsiram

    RAM machine for Android

    ...There are a few pseudo-instructions for creating pseudo-programs. These can help you understand what an instruction set really means and how computer processing works. You can sort numbers from the input tape, compute prime numbers, and write results to the output tape. By adding in a loop you can emulate a multiply instruction. Direct and indirect addressing are included. And you can now do it all on your Android phone during a bus ride instead of in a boring applet. I know, for the mainstream user it is an incredibly useless app. Here it is as it is.
    Downloads: 0 This Week
    Last Update:
    See Project
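The "multiply by adding in a loop" idea above is easy to demonstrate with a tiny register-machine interpreter. The instruction names and encoding here are illustrative, not epsiram's actual instruction set.

```python
# Tiny register-machine sketch: no MUL instruction, so
# multiplication is supplied by repeated addition in a loop.
def run(program, registers):
    """Interpret (op, *args) instructions over a register dict."""
    pc = 0
    while pc < len(program):
        op, *args = program[pc]
        if op == "ADD":          # r[a] += r[b]
            registers[args[0]] += registers[args[1]]
        elif op == "DEC":        # r[a] -= 1
            registers[args[0]] -= 1
        elif op == "JNZ":        # jump to args[1] if r[a] != 0
            if registers[args[0]] != 0:
                pc = args[1]
                continue
        pc += 1
    return registers

# acc += b, repeated a times: computes a * b without MUL.
MULTIPLY = [
    ("ADD", "acc", "b"),
    ("DEC", "a"),
    ("JNZ", "a", 0),
]
```

Running MULTIPLY with a = 6, b = 7 leaves 42 in the accumulator, the same trick the app's tape programs use.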
  • 11

    hashdeep

    Compute, compare, or audit hashes.

    Computes, matches, and audits hashes recursively. Supports the MD5, SHA-1, SHA-256, Tiger, and Whirlpool algorithms.
    Downloads: 0 This Week
    Last Update:
    See Project
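Recursive hashing of a directory tree, the core of what hashdeep does, can be sketched with the standard library alone. This shows only the "compute" mode; hashdeep additionally matches and audits against a known-hash list, and supports Tiger and Whirlpool, which hashlib may not provide.

```python
import hashlib
import os

# Walk a directory tree and hash every file (hashdeep-style
# compute mode, standard library only).
def hash_tree(root, algorithm="sha256"):
    """Return {relative_path: hex_digest} for files under root."""
    digests = {}
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in sorted(filenames):
            path = os.path.join(dirpath, name)
            h = hashlib.new(algorithm)
            with open(path, "rb") as f:
                for chunk in iter(lambda: f.read(65536), b""):
                    h.update(chunk)   # stream in chunks for large files
            digests[os.path.relpath(path, root)] = h.hexdigest()
    return digests
```

Auditing then reduces to comparing this mapping against a previously saved one.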
  • 12
    SwarmNet is a framework for distributed computing applications. Its goal is to make the creation of applications that perform complex calculations in a distributed fashion as simple as possible.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 13
    ...When recursion is combined with pointers, it creates a powerful method for performing iterative calculations without using loops. In this program, a pointer is used to keep track of the current number being processed, while the recursive function repeatedly updates this value to compute the final sum. In real-world programming, recursion is used in various applications such as file
    Downloads: 0 This Week
    Last Update:
    See Project
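The recursion-plus-pointer pattern described above can be mimicked in Python, which has no raw pointers; a one-element list stands in as the mutable "current number" that the recursive calls update in place. The function name and shape are illustrative, not the project's code.

```python
# Recursion-with-pointer sketch: sum 1..n without a loop,
# advancing a shared mutable "current number" each call.
def sum_to(n, current=None, total=0):
    if current is None:
        current = [1]         # "pointer" to the number in progress
    if current[0] > n:
        return total
    total += current[0]
    current[0] += 1           # advance via the shared reference
    return sum_to(n, current, total)
```

In C the same idea would pass an `int *` that each recursive call dereferences and increments.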
  • 14
    BLEURT-20-D12

    BLEURT-20-D12

    Custom BLEURT model for evaluating text similarity using PyTorch

    ...Unlike standard BLEURT models from TensorFlow, this version is built from a custom PyTorch transformer library. It requires installing the model-specific library from GitHub to function properly. Once set up, it can be used to compute similarity scores with minimal code. BLEURT-20-D12 enables more flexible deployment in PyTorch-based workflows for evaluating language generation outputs.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 15
    ...trimAl implements a series of automated algorithms that trim the alignment, searching for optimum thresholds based on inherent characteristics of the input alignment, so that the signal-to-noise ratio after the trimming phase is increased. Among trimAl's additional features, it allows getting the complementary alignment (the columns that were trimmed), computing statistics from the alignment, selecting the output file format, getting a summary of trimAl's trimming in HTML and SVG formats, and many other options.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 16
    ZAYA1-8B

    ZAYA1-8B

    Efficient MoE reasoning model for coding and math workloads

    ...The model contains 8.4B total parameters with around 760M active during inference, allowing it to achieve strong reasoning, mathematics, and coding performance while remaining lightweight enough for efficient local or on-device deployment. ZAYA1-8B is optimized for long-form reasoning and test-time compute workflows, making it particularly effective for mathematical problem solving, coding tasks, and advanced reasoning chains. It introduces architectural innovations such as Compressed Convolutional Attention, a novel MLP-based expert router, and learned residual scaling to improve routing stability and inference efficiency. The model was trained entirely on AMD infrastructure and refined through supervised fine-tuning and multi-stage reinforcement learning focused on reasoning and coding.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 17
    CLIP-ViT-bigG-14-laion2B-39B-b160k

    CLIP-ViT-bigG-14-laion2B-39B-b160k

    CLIP ViT-bigG/14: Zero-shot image-text model trained on LAION-2B

    CLIP-ViT-bigG-14-laion2B-39B-b160k is a powerful vision-language model trained on the English subset of the LAION-5B dataset using the OpenCLIP framework. Developed by LAION and trained by Mitchell Wortsman on Stability AI’s compute infrastructure, it pairs a ViT-bigG/14 vision transformer with a text encoder to perform contrastive learning on image-text pairs. This model excels at zero-shot image classification, image-to-text and text-to-image retrieval, and can be adapted for tasks such as image captioning or generation guidance. It achieves an impressive 80.1% top-1 accuracy on ImageNet-1k without any fine-tuning, showcasing its robustness in open-domain settings. ...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 18
    granite-timeseries-ttm-r2

    granite-timeseries-ttm-r2

    Tiny pre-trained IBM model for multivariate time series forecasting

    ...The get_model() utility makes it easy to auto-select the best TTM model for specific context and prediction lengths. These models significantly outperform benchmarks like Chronos, GPT4TS, and Moirai while demanding a fraction of the compute.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 19
    wav2vec2-large-xlsr-53-russian

    wav2vec2-large-xlsr-53-russian

    Russian ASR model fine-tuned on Common Voice and CSS10 datasets

    ...The model supports both PyTorch and JAX and is compatible with the Hugging Face Transformers and HuggingSound libraries. It is ideal for Russian voice transcription tasks in research, accessibility, and interface development. The training was made possible with compute support from OVHcloud, and the training scripts are publicly available for replication.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 20
    Mistral Large 3 675B Base 2512

    Mistral Large 3 675B Base 2512

    Frontier-scale 675B multimodal base model for custom AI training

    Mistral Large 3 675B Base 2512 is the foundational, pre-trained version of the Mistral Large 3 family, built as a frontier-scale multimodal Mixture-of-Experts model with 41B active parameters and a total size of 675B. It is trained from scratch using 3000 H200 GPUs, making it one of the most advanced and compute-intensive open-weight models available. As the base version, it is not fine-tuned for instruction following or reasoning, making it ideal for teams planning their own domain-specific finetuning or custom training pipelines. The model is engineered for reliability, long-context comprehension, and stable performance across many enterprise, scientific, and knowledge-intensive workloads. ...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 21
    VaultGemma

    VaultGemma

    VaultGemma: 1B DP-trained Gemma variant for private NLP tasks

    ...The model follows a Gemma-2–style architecture, outputs text from up to 1,024 input tokens, and is intended to be instruction-tuned for downstream language understanding and generation tasks. Training ran on TPU v6e using JAX and Pathways with privacy-preserving algorithms (DP-SGD, truncated Poisson subsampling) and DP scaling laws to balance compute and privacy budgets. Benchmarks on the 1B pre-trained checkpoint show expected utility trade-offs (e.g., HellaSwag 10-shot 39.09, BoolQ 0-shot 62.04, PIQA 0-shot 68.00), reflecting its privacy-first design.
    Downloads: 0 This Week
    Last Update:
    See Project