Showing 14 open source projects for "budget"

  • 1
    Aden Hive

    Outcome-driven agent development framework that evolves

    ...Once deployed, agents can capture failure data, evolve automatically to meet their success criteria, and redeploy without constant manual intervention, delivering continual improvement over time. The framework also includes human-in-the-loop nodes, credential management, cost and budget controls, and real-time observability so teams can monitor execution and intervene as needed. Hive is designed for production environments and supports a wide range of large language models, local models, and business system connectivity.
    Downloads: 11 This Week
    See Project
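    Hive's actual API is not shown on this page, so the following is only a sketch of the outcome-driven pattern described above: retry an agent step against an explicit success criterion, capture failure data, enforce a cost budget, and hand off to a human-in-the-loop step when either limit is hit. Every name in it is hypothetical.

```python
# Hypothetical sketch, not Aden Hive's API: an outcome-driven agent loop
# with failure capture, a cost budget, and a human-in-the-loop fallback.
import random

def execute_step(task, failures):
    """Stand-in for an LLM call; returns (result, cost in USD)."""
    return random.random(), 0.02

def escalate_to_human(task, failures):
    """Stand-in for a human-in-the-loop node."""
    print(f"Escalating {task!r} after {len(failures)} failed attempts")

def run_agent(task, success, max_cost_usd=0.10, max_attempts=5):
    spent, failures = 0.0, []
    for _ in range(max_attempts):
        result, cost = execute_step(task, failures)
        spent += cost
        if spent > max_cost_usd:        # cost/budget control
            break
        if success(result):             # explicit success criterion met
            return result
        failures.append(result)         # captured failure data feeds retries
    return escalate_to_human(task, failures)

run_agent("summarize report", success=lambda r: r > 0.8)
```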
  • 2
    Claude Cognitive

    Persistent context and multi-instance coordination

    ...It introduces an attention-based context router that prioritizes files and content relevant to the current development discussion — tagging them as HOT, WARM, or COLD based on recency and keyword activation — so Claude Code doesn’t waste token budget rereading irrelevant code. This context routing dramatically reduces redundant token usage and accelerates large codebase interactions by focusing only on what truly matters to the current task. Additionally, Claude-Cognitive includes a pool coordinator to share state across multiple Claude Code instances, preserving what’s been learned or completed and preventing repetitive debugging or redundant exploration.
    Downloads: 0 This Week
    See Project
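    The router itself is not reproduced here; the code below is only a minimal sketch of the HOT/WARM/COLD idea described above, scoring files by recency and keyword activation so that cold files never consume token budget. The scoring weights and thresholds are invented for illustration.

```python
# Hypothetical sketch of the HOT/WARM/COLD tiering, not Claude-Cognitive's
# code: score each file by recency and keyword overlap with the current
# discussion, then tier it so COLD files are skipped entirely.
import time

def route(files, query_keywords, now=None):
    now = now or time.time()
    tiers = {}
    for path, meta in files.items():
        recency = 1.0 / (1.0 + (now - meta["last_touched"]) / 3600)  # decays per hour
        activation = len(query_keywords & meta["keywords"]) / max(len(query_keywords), 1)
        score = 0.5 * recency + 0.5 * activation   # illustrative weights
        tiers[path] = "HOT" if score > 0.6 else "WARM" if score > 0.3 else "COLD"
    return tiers

files = {
    "router.py":  {"last_touched": time.time() - 600,    "keywords": {"context", "router"}},
    "legacy.sql": {"last_touched": time.time() - 86_400, "keywords": {"schema"}},
}
print(route(files, {"context", "router", "tokens"}))
```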
  • 3
    autoresearch-mlx

    Apple Silicon (MLX) port of Karpathy's autoresearch

    autoresearch-mlx is an Apple Silicon–optimized implementation of the autoresearch framework that enables autonomous AI research loops to run natively on MLX without requiring PyTorch or CUDA dependencies. It maintains the core autoresearch structure, where an AI agent iteratively edits a training script, executes experiments under a fixed time budget, and evaluates results based on a defined metric such as validation bits per byte. The system is tailored for Apple hardware, leveraging unified memory and MLX capabilities to achieve efficient training on Mac devices. It includes a minimal and focused project structure consisting of data preparation utilities, a modifiable training file, and a program specification that governs the agent’s behavior. ...
    Downloads: 4 This Week
    See Project
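    As a minimal sketch of the loop described above (illustrative, not the project's actual code, and assuming the training script prints validation bits per byte as its last output line): the agent edits train.py, the run is bounded by a fixed wall-clock budget, and the edit is retained only if the metric improves.

```python
# Illustrative autoresearch-style loop: edit, run under a fixed time budget,
# keep the change only if validation bits per byte (lower is better) drops.
import shutil
import subprocess

TIME_BUDGET_S = 600   # fixed time budget per experiment

def propose_edit(path):
    """Stand-in for the LLM agent that rewrites the training script."""

def run_experiment(script="train.py"):
    """Run one experiment; assume the script prints validation bpb last."""
    out = subprocess.run(["python", script], capture_output=True,
                         text=True, timeout=TIME_BUDGET_S)
    return float(out.stdout.strip().splitlines()[-1])

best_bpb = run_experiment()
for _ in range(20):
    shutil.copy("train.py", "train.py.bak")   # checkpoint before editing
    propose_edit("train.py")
    bpb = run_experiment()
    if bpb >= best_bpb:                       # worse or equal: discard the edit
        shutil.copy("train.py.bak", "train.py")
    else:                                     # better: retain it
        best_bpb = bpb
```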
  • 4
    autoresearch for AMD

    AI agents running research on single-GPU nanochat training

    ...The system is built around a minimal structure that includes a data preparation module, a training script that can be modified, and a program specification that guides the agent’s decision-making process. During each iteration, the agent edits the training code, runs an experiment within a fixed time budget, evaluates performance metrics, and decides whether to retain or discard the changes. This loop allows the system to explore a wide range of architectural and hyperparameter configurations without human intervention. The framework emphasizes simplicity and reproducibility, ensuring that experiments are comparable and results are traceable over time.
    Downloads: 1 This Week
    See Project
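    The entry stresses that experiments stay comparable and results traceable over time; one minimal way to get that property (an illustration, not the project's code) is an append-only JSON-lines experiment log recording each retain/discard decision.

```python
# Illustrative append-only experiment log: one JSON line per iteration keeps
# every retained and discarded edit comparable and traceable over time.
import json
import time

def log_experiment(path, iteration, metric, retained):
    record = {
        "iteration": iteration,
        "val_metric": metric,
        "retained": retained,
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%S"),
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")

log_experiment("experiments.jsonl", iteration=1, metric=1.42, retained=True)
```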
  • 5
    autoresearch-win-rtx

    AI agents running research on single-GPU nanochat training

    ...The system revolves around a small set of core files, including a training script that is continuously modified by an AI agent, along with supporting utilities for data preparation and evaluation. Experiments are executed within a fixed time budget, ensuring consistent benchmarking across iterations and allowing the agent to focus on incremental improvements. The framework is designed to be lightweight and accessible, making it suitable for developers and researchers working on desktop hardware. It also supports modern GPU acceleration features through PyTorch, enabling efficient experimentation even on limited resources.
    Downloads: 2 This Week
    See Project
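    A minimal, illustrative way to enforce the fixed time budget mentioned above, on Windows or elsewhere, is a subprocess timeout that kills over-budget runs; this is not the project's actual code, and train.py is a placeholder name.

```python
# Illustrative fixed-time-budget execution: the experiment subprocess is
# killed once the wall-clock budget expires, keeping benchmarks consistent.
import subprocess

def run_with_budget(cmd, budget_s):
    try:
        return subprocess.run(cmd, timeout=budget_s,
                              capture_output=True, text=True)
    except subprocess.TimeoutExpired:
        return None   # over-budget runs are treated as failed experiments

result = run_with_budget(["python", "train.py"], budget_s=900)
print("finished within budget" if result else "killed: time budget exceeded")
```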
  • 6
    Robyn

    Experimental, AI/ML-powered, and open-source Marketing Mix Modeling

    Robyn is an open-source, AI/ML-powered Marketing Mix Modeling (MMM) toolkit developed by Meta Marketing Science under the “facebookexperimental” GitHub umbrella. Its goal is to democratize rigorous MMM: what traditionally required expert statisticians and expensive consulting becomes accessible to any company with data. Robyn takes in historical data (spends on different marketing channels, conversions, or revenue, and optional context or organic-media variables) and uses a combination of...
    Downloads: 8 This Week
    See Project
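    Robyn itself is an R package, so the Python below is only a toy illustration of the MMM idea it automates: apply a geometric adstock transform to channel spend, then fit a ridge regression from transformed spend to revenue. All numbers are made up.

```python
# Toy MMM illustration (not Robyn's code): geometric adstock + ridge fit.
import numpy as np
from numpy.linalg import solve

def adstock(spend, decay=0.5):
    """Geometric adstock: carry over a fraction of yesterday's effect."""
    out = np.zeros_like(spend, dtype=float)
    for t, s in enumerate(spend):
        out[t] = s + (decay * out[t - 1] if t else 0.0)
    return out

spend_tv  = np.array([100, 0, 0, 120, 0], dtype=float)
spend_web = np.array([40, 40, 40, 40, 40], dtype=float)
revenue   = np.array([210, 95, 70, 245, 110], dtype=float)

X = np.column_stack([adstock(spend_tv), adstock(spend_web)])
lam = 1.0                                            # ridge penalty
beta = solve(X.T @ X + lam * np.eye(2), X.T @ revenue)
print(dict(zip(["tv", "web"], beta.round(2))))       # per-channel effect estimates
```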
  • 7
    FLAML

    A fast library for AutoML and tuning

    ...It supports both classical machine learning models and deep neural networks, and it is easy to customize or extend. Users can pick a level of customization along a smooth range: minimal (just a computational resource budget; see the sketch below), medium (e.g., a scikit-style learner, search space, and metric), or full (arbitrary training and evaluation code). It supports fast automatic tuning and can handle complex constraints, guidance, and early stopping. FLAML is powered by a new, cost-effective hyperparameter optimization and learner selection method invented by Microsoft Research.
    Downloads: 5 This Week
    See Project
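    Minimal customization really is just a resource budget; the sketch below follows FLAML's documented entry point, with sklearn's iris standing in as a toy dataset.

```python
# FLAML with minimal customization: the only knob set here is the time budget.
from flaml import AutoML
from sklearn.datasets import load_iris

X, y = load_iris(return_X_y=True)
automl = AutoML()
automl.fit(X_train=X, y_train=y, task="classification",
           time_budget=60)                  # seconds of compute budget
print(automl.best_estimator, automl.best_config)
```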
  • 8
    Opacus

    Training PyTorch models with differential privacy

    ...It supports training with minimal code changes required on the client side, has little impact on training performance, and lets the client track, online, the privacy budget expended at any given moment (see the sketch below). Vectorized per-sample gradient computation is 10x faster than microbatching. Opacus supports most types of PyTorch models and can be used with minimal modification to the original neural network. It is an open source, modular API for differential privacy research, and everyone is welcome to contribute. ML practitioners will find it a gentle introduction to training a model with differential privacy, as it requires minimal code changes. ...
    Downloads: 0 This Week
    See Project
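    The sketch below follows Opacus's standard setup: wrap an ordinary model, optimizer, and data loader with the PrivacyEngine, train as usual, then query the privacy budget spent so far. The toy model, data, and noise settings are arbitrary.

```python
# Opacus setup: make_private wraps the usual PyTorch objects, and
# get_epsilon reports the privacy budget expended at any moment.
import torch
from torch.utils.data import DataLoader, TensorDataset
from opacus import PrivacyEngine

model = torch.nn.Linear(10, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.05)
data_loader = DataLoader(TensorDataset(torch.randn(64, 10),
                                       torch.randint(0, 2, (64,))),
                         batch_size=8)

privacy_engine = PrivacyEngine()
model, optimizer, data_loader = privacy_engine.make_private(
    module=model, optimizer=optimizer, data_loader=data_loader,
    noise_multiplier=1.1,
    max_grad_norm=1.0,        # per-sample gradient clipping bound
)

criterion = torch.nn.CrossEntropyLoss()
for x, y in data_loader:                   # train exactly as usual
    optimizer.zero_grad()
    criterion(model(x), y).backward()      # Opacus handles per-sample grads
    optimizer.step()

print("epsilon spent:", privacy_engine.get_epsilon(delta=1e-5))
```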
  • 9
    VibeThinker

    Diversity-driven optimization and large-model reasoning ability

    VibeThinker is a compact but high-capability open-source language model released by WeiboAI (Sina AI Lab). It contains about 1.5 billion parameters, far smaller than many “frontier” models, yet it is explicitly optimized for reasoning, mathematics, and code generation tasks rather than general open-domain chat. The innovation lies in its training methodology: the team uses what they call the Spectrum-to-Signal Principle (SSP), where a first stage emphasizes diversity of reasoning paths (the...
    Downloads: 1 This Week
    See Project
  • 10
    MiniMax-M1

    Open-weight, large-scale hybrid-attention reasoning model

    MiniMax-M1 is presented as the world’s first open-weight, large-scale hybrid-attention reasoning model, designed to push the frontier of long-context, tool-using, and deeply “thinking” language models. It is built on the MiniMax-Text-01 foundation and keeps the same massive parameter budget, but reworks the attention and training setup for better reasoning and test-time compute scaling. Architecturally, it combines Mixture-of-Experts layers with lightning attention, enabling the model to support a native context length of 1 million tokens while using far fewer FLOPs than comparable reasoning models for very long generations. ...
    Downloads: 0 This Week
    See Project
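    Purely as an illustration of the hybrid-attention layout, the sketch below interleaves linear-time lightning-attention blocks with a periodic full softmax-attention block; the 1-in-8 ratio is an assumption carried over from the MiniMax-01 technical report, not something stated on this page.

```python
# Illustrative only: hybrid attention alternates linear-time "lightning"
# attention layers with an occasional full softmax attention layer.
# The 1-in-8 interleave ratio is assumed from the MiniMax-01 report.
def layer_types(n_layers, softmax_every=8):
    return ["softmax" if (i + 1) % softmax_every == 0 else "lightning"
            for i in range(n_layers)]

print(layer_types(16))   # seven lightning layers per softmax layer
```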
  • 11
    finetuner

    Task-oriented finetuning for better embeddings on neural search

    Fine-tuning is an effective way to improve performance on neural search tasks. However, setting up and performing fine-tuning can be very time-consuming and resource-intensive. Jina AI’s Finetuner makes fine-tuning easier and faster by streamlining the workflow and handling all the complexity and infrastructure in the cloud. With Finetuner, you can easily enhance the performance of pre-trained models, making them production-ready without extensive labeling or expensive hardware. Create...
    Downloads: 0 This Week
    See Project
  • 12
    Auto-PyTorch

    Automatic architecture search and hyperparameter optimization

    While early AutoML frameworks focused on optimizing traditional ML pipelines and their hyperparameters, another trend in AutoML is to focus on neural architecture search. To bring the best of these two worlds together, we developed Auto-PyTorch, which jointly and robustly optimizes the network architecture and the training hyperparameters to enable fully automated deep learning (AutoDL). Auto-PyTorch is mainly developed to support tabular data (classification, regression) and time series...
    Downloads: 4 This Week
    See Project
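    The sketch below follows the search call from Auto-PyTorch's README, where a single API call jointly searches network architectures and training hyperparameters under an overall wall-time budget; the dataset and time limits here are placeholders.

```python
# Auto-PyTorch's tabular entry point: one search call covers both the
# architecture and the training hyperparameters, under a time budget.
from autoPyTorch.api.tabular_classification import TabularClassificationTask
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

api = TabularClassificationTask()
api.search(
    X_train=X_train, y_train=y_train,
    X_test=X_test, y_test=y_test,
    optimize_metric="accuracy",
    total_walltime_limit=300,        # overall budget, in seconds
    func_eval_time_limit_secs=50,    # per-configuration budget
)
print(api.score(api.predict(X_test), y_test))
```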
  • 13
    BudgetML

    Deploy an ML inference service on a budget in 10 lines of code

    Deploy an ML inference service on a budget in less than 10 lines of code. BudgetML is perfect for practitioners who want to deploy their models to an endpoint quickly, without wasting time, money, and effort on figuring out how to do it end to end. We built BudgetML because it is hard to find a simple way to get a model into production fast and cheaply. Deploying from scratch involves learning too many different concepts (SSL certificate generation, Docker, REST, Uvicorn/Gunicorn, backend servers, and so on) that are simply not within the scope of a typical data scientist; a sketch of that boilerplate follows below. ...
    Downloads: 0 This Week
    See Project
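    BudgetML's own API is not reproduced here; for contrast, this is the kind of bare REST endpoint (FastAPI/Uvicorn shown as one common choice) that the project wraps up for you, along with SSL, Docker, and server provisioning. The model call is a stand-in.

```python
# Not BudgetML's API: the bare FastAPI/Uvicorn endpoint boilerplate that
# BudgetML abstracts away when serving a trained model.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Input(BaseModel):
    features: list[float]

@app.post("/predict")
def predict(inp: Input):
    # stand-in for a real model; BudgetML would serve your trained artifact
    return {"prediction": sum(inp.features)}

# run with: uvicorn main:app --host 0.0.0.0 --port 8000
```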
  • 14
    EfficientNet Keras

    Implementation of the EfficientNet model in Keras and TensorFlow Keras

    This repository contains a Keras (and TensorFlow Keras) reimplementation of EfficientNet, a lightweight convolutional neural network architecture achieving state-of-the-art accuracy with an order of magnitude fewer parameters and FLOPS, on both ImageNet and five other commonly used transfer learning datasets. Convolutional Neural Networks (ConvNets) are commonly developed at a fixed resource budget, and then scaled up for better accuracy if more resources are available. In this paper, we systematically study model scaling and identify that carefully balancing network depth, width, and resolution can lead to better performance. Based on this observation, we propose a new scaling method that uniformly scales all dimensions of depth/width/resolution using a simple yet highly effective compound coefficient. ...
    Downloads: 0 This Week
    See Project
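    The compound scaling rule is concrete enough to compute directly: depth, width, and resolution each grow with a single coefficient phi via constants alpha, beta, and gamma found by grid search, chosen so that alpha * beta^2 * gamma^2 is roughly 2 and FLOPS approximately double per unit of phi. The sketch below uses the constants from the EfficientNet paper.

```python
# Compound scaling from the EfficientNet paper: one coefficient phi scales
# depth, width, and resolution together (alpha * beta**2 * gamma**2 ~ 2).
ALPHA, BETA, GAMMA = 1.2, 1.1, 1.15   # grid-searched constants from the paper

def compound_scale(phi):
    return {
        "depth_multiplier":      ALPHA ** phi,
        "width_multiplier":      BETA ** phi,
        "resolution_multiplier": GAMMA ** phi,
    }

for phi in (0, 1, 2):   # roughly the B0, B1, B2 operating points
    print(phi, {k: round(v, 3) for k, v in compound_scale(phi).items()})
```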