Showing 308 open source projects for "tuning"

  • 1
    TensorFlow Documentation

    An end-to-end platform for machine learning. TensorFlow makes it easy to create ML models that can run in any environment. Learn how to use the intuitive APIs through interactive code samples.
    Downloads: 1 This Week
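
    As a small taste of those APIs, here is a minimal Keras sketch (the toy dataset and layer choice are illustrative, not from the project page):

    ```python
    import tensorflow as tf

    # Toy data for y = 2x + 1.
    xs = tf.constant([[0.0], [1.0], [2.0], [3.0]])
    ys = tf.constant([[1.0], [3.0], [5.0], [7.0]])

    # A single dense unit suffices for a linear fit.
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(1,)),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="sgd", loss="mse")
    model.fit(xs, ys, epochs=200, verbose=0)

    print(model.predict(tf.constant([[4.0]])))  # should be close to 9.0
    ```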
  • 2
    sktime

    A unified framework for machine learning with time series

    ... interface for distinct but related time series learning tasks. It features dedicated time series algorithms and tools for composite model building such as pipelining, ensembling, tuning, and reduction, empowering users to apply an algorithm designed for one task to another.
    Downloads: 1 This Week
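
    To illustrate the tuning piece, a minimal sketch using ForecastingGridSearchCV (module paths have moved between sktime releases, so treat the imports as approximate):

    ```python
    from sktime.datasets import load_airline
    from sktime.forecasting.model_selection import (
        ForecastingGridSearchCV,
        SlidingWindowSplitter,
    )
    from sktime.forecasting.naive import NaiveForecaster

    y = load_airline()
    # Grid-search the forecaster's strategy over sliding-window splits.
    gscv = ForecastingGridSearchCV(
        forecaster=NaiveForecaster(),
        cv=SlidingWindowSplitter(window_length=36, fh=[1, 2, 3]),
        param_grid={"strategy": ["last", "mean", "drift"]},
    )
    gscv.fit(y)
    print(gscv.best_params_)
    ```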
  • 3
    Petals

    Run 100B+ language models at home, BitTorrent-style

    Run large language models like BLOOM-176B collaboratively — you load a small part of the model, then team up with people serving the other parts to run inference or fine-tuning. Single-batch inference runs at ≈ 1 sec per step (token) — up to 10x faster than offloading, enough for chatbots and other interactive apps. Parallel inference reaches hundreds of tokens/sec. Beyond classic language model APIs — you can employ any fine-tuning...
    Downloads: 0 This Week
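
    A minimal inference sketch based on the Petals README (the model name is illustrative, and a connection to public swarm peers is assumed):

    ```python
    from transformers import AutoTokenizer
    from petals import AutoDistributedModelForCausalLM

    model_name = "bigscience/bloom"  # illustrative; any model served by the swarm
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    # Only a small part of the model loads locally; the remaining
    # transformer blocks are executed by other peers in the swarm.
    model = AutoDistributedModelForCausalLM.from_pretrained(model_name)

    inputs = tokenizer("A cat sat on", return_tensors="pt")["input_ids"]
    outputs = model.generate(inputs, max_new_tokens=5)
    print(tokenizer.decode(outputs[0]))
    ```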
  • 4
    FLAML

    A fast library for AutoML and tuning

    ... their desired customizability from a smooth range: minimal customization (computational resource budget), medium customization (e.g., scikit-style learner, search space, and metric), or full customization (arbitrary training and evaluation code). It supports fast automatic tuning, capable of handling complex constraints/guidance/early stopping. FLAML is powered by a new, cost-effective hyperparameter optimization and learner selection method invented by Microsoft Research.
    Downloads: 0 This Week
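
    The minimal-customization end of that range looks roughly like this (dataset and time budget are illustrative):

    ```python
    from flaml import AutoML
    from sklearn.datasets import load_iris

    X, y = load_iris(return_X_y=True)

    automl = AutoML()
    # Minimal customization: just a task and a compute budget in seconds.
    automl.fit(X, y, task="classification", time_budget=60)
    print(automl.best_estimator, automl.best_config)
    ```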
  • 5
    Katib

    Automated Machine Learning on Kubernetes

    Katib is a Kubernetes-native project for automated machine learning (AutoML). Katib supports Hyperparameter Tuning, Early Stopping and Neural Architecture Search. Katib is a project that is agnostic to machine learning (ML) frameworks. It can tune hyperparameters of applications written in any language of the users’ choice and natively supports many ML frameworks, such as TensorFlow, Apache MXNet, PyTorch, XGBoost, and others. Katib can perform training jobs using any Kubernetes Custom...
    Downloads: 0 This Week
  • 6
    ScikitLearn.jl

    Julia implementation of the scikit-learn API

    The scikit-learn Python library has proven very popular with machine learning researchers and data scientists in the last five years. It provides a uniform interface for training and using models, as well as a set of tools for chaining (pipelines), evaluating, and tuning model hyperparameters. ScikitLearn.jl brings these capabilities to Julia. Its primary goal is to integrate both Julia- and Python-defined models together into the scikit-learn framework.
    Downloads: 0 This Week
  • 7
    uvicorn-gunicorn-fastapi

    Docker image with Uvicorn managed by Gunicorn

    Docker image with Uvicorn managed by Gunicorn for high-performance FastAPI web applications in Python with performance auto-tuning. Optionally with Alpine Linux.
    Downloads: 0 This Week
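
    Per the image's README, it serves whatever FastAPI app it finds at /app/main.py; a minimal sketch of that module (the route itself is illustrative):

    ```python
    # app/main.py — the module the Docker image loads by default.
    from fastapi import FastAPI

    app = FastAPI()

    @app.get("/")
    def read_root():
        # The image auto-tunes the Gunicorn worker count at startup.
        return {"message": "Hello World"}
    ```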
  • 8
    promptfoo

    Evaluate and compare LLM outputs, catch regressions, improve prompts

    Ensure high-quality LLM outputs with automatic evals. Use a representative sample of user inputs to reduce subjectivity when tuning prompts. Use built-in metrics, LLM-graded evals, or define your own custom metrics. Compare prompts and model outputs side-by-side, or integrate the library into your existing test/CI workflow. Use OpenAI, Anthropic, and open-source models like Llama and Vicuna, or integrate custom API providers for any LLM API.
    Downloads: 0 This Week
  • 9
    OpenCLIP

    An open source implementation of CLIP

    The goal of this repository is to enable training models with contrastive image-text supervision and to investigate their properties such as robustness to distribution shift. Our starting point is an implementation of CLIP that matches the accuracy of the original CLIP models when trained on the same dataset. Specifically, a ResNet-50 model trained with our codebase on OpenAI's 15 million image subset of YFCC achieves 32.7% top-1 accuracy on ImageNet. OpenAI's CLIP model reaches 31.3% when...
    Downloads: 1 This Week
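
    A minimal zero-shot classification sketch in the style of the OpenCLIP README (the image file and candidate captions are illustrative):

    ```python
    import torch
    from PIL import Image
    import open_clip

    model, _, preprocess = open_clip.create_model_and_transforms(
        "ViT-B-32", pretrained="laion2b_s34b_b79k"
    )
    tokenizer = open_clip.get_tokenizer("ViT-B-32")

    image = preprocess(Image.open("cat.jpg")).unsqueeze(0)  # hypothetical file
    text = tokenizer(["a photo of a cat", "a photo of a dog"])

    with torch.no_grad():
        image_features = model.encode_image(image)
        text_features = model.encode_text(text)
        image_features /= image_features.norm(dim=-1, keepdim=True)
        text_features /= text_features.norm(dim=-1, keepdim=True)
        # Cosine similarity -> probabilities over candidate captions.
        probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)

    print(probs)
    ```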
  • 10
    MLJ

    A Julia machine learning framework

    MLJ (Machine Learning in Julia) is a toolbox written in Julia providing a common interface and meta-algorithms for selecting, tuning, evaluating, composing, and comparing about 200 machine learning models written in Julia and other languages. The functionality of MLJ is distributed over several repositories illustrated in the dependency chart below. These repositories live at the JuliaAI umbrella organization.
    Downloads: 0 This Week
  • 11
    Chinese-LLaMA-Alpaca-2 v2.0

    Chinese LLaMA & Alpaca large language model + local CPU/GPU training

    This project has open-sourced the Chinese LLaMA model and the instruction fine-tuned Chinese Alpaca large model to further promote open research on large models in the Chinese NLP community. Based on the original LLaMA, these models expand the Chinese vocabulary and use Chinese data for secondary pre-training, which further improves basic semantic understanding of Chinese. At the same time, the Chinese Alpaca model further uses Chinese instruction data for fine-tuning, which...
    Downloads: 0 This Week
  • 12
    NGBoost

    Natural Gradient Boosting for Probabilistic Prediction

    ngboost is a Python library that implements Natural Gradient Boosting, as described in "NGBoost: Natural Gradient Boosting for Probabilistic Prediction". It is built on top of Scikit-Learn and is designed to be scalable and modular with respect to the choice of proper scoring rule, distribution, and base learner. A didactic introduction to the methodology underlying NGBoost is available in this slide deck.
    Downloads: 0 This Week
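
    A minimal probabilistic regression sketch (the dataset and split are illustrative; Normal is NGBoost's default predictive distribution):

    ```python
    from ngboost import NGBRegressor
    from sklearn.datasets import load_diabetes
    from sklearn.model_selection import train_test_split

    X, y = load_diabetes(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    ngb = NGBRegressor().fit(X_train, y_train)
    point = ngb.predict(X_test)    # point estimates
    dist = ngb.pred_dist(X_test)   # full predictive distributions
    # For the default Normal distribution, params holds per-point loc/scale.
    print(point[:3], dist.params["loc"][:3], dist.params["scale"][:3])
    ```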
  • 13
    MLJ.jl

    A Julia machine learning framework

    MLJ (Machine Learning in Julia) is a toolbox written in Julia providing a common interface and meta-algorithms for selecting, tuning, evaluating, composing, and comparing about 200 machine learning models written in Julia and other languages. The functionality of MLJ is distributed over several repositories illustrated in the dependency chart below. These repositories live at the JuliaAI umbrella organization.
    Downloads: 0 This Week
  • 14
    NeMo Curator

    Scalable data pre-processing and curation toolkit for LLMs

    NeMo Curator is a Python library specifically designed for fast and scalable dataset preparation and curation for large language model (LLM) use cases such as foundation model pretraining, domain-adaptive pretraining (DAPT), supervised fine-tuning (SFT) and parameter-efficient fine-tuning (PEFT). It greatly accelerates data curation by leveraging GPUs with Dask and RAPIDS, resulting in significant time savings. The library provides a customizable and modular interface, simplifying pipeline...
    Downloads: 0 This Week
  • 15
    Chinese-LLaMA-Alpaca 2

    Chinese LLaMA-2 & Alpaca-2 Large Model Phase II Project

    This project is developed based on the commercially available large model Llama-2 released by Meta. It is the second phase of the Chinese LLaMA & Alpaca large model project. The Chinese LLaMA-2 base model and the Alpaca-2 instruction fine-tuned large model are open-sourced. These models expand and optimize the Chinese vocabulary on the basis of the original Llama-2, use large-scale Chinese data for incremental pre-training, and further improve basic Chinese semantic and instruction understanding...
    Downloads: 0 This Week
  • 16
    ModelScope

    Bring the notion of Model-as-a-Service to life

    ... unified experience to explore state-of-the-art models spanning across domains such as CV, NLP, Speech, Multi-Modality, and Scientific-computation. Model contributors of different areas can integrate models into the ModelScope ecosystem through the layered APIs, allowing easy and unified access to their models. Once integrated, model inference, fine-tuning, and evaluations can be done with only a few lines of code.
    Downloads: 0 This Week
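
    Those "few lines of code" look roughly like this pipeline sketch (the task and model names follow ModelScope's documentation and may change):

    ```python
    from modelscope.pipelines import pipeline

    # Build an inference pipeline by task name; ModelScope resolves and
    # downloads the named model from its hub.
    word_segmentation = pipeline(
        "word-segmentation",
        model="damo/nlp_structbert_word-segmentation_chinese-base",
    )
    print(word_segmentation("今天天气不错，适合出去游玩"))
    ```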
  • 17
    Open Source Vizier

    Python-based research interface for blackbox optimization

    Open Source (OSS) Vizier is a Python-based interface for blackbox optimization and research, based on Google’s original internal Vizier, one of the first hyperparameter tuning services designed to work at scale. It allows a user to set up an OSS Vizier Server, which can host black-box optimization algorithms to serve multiple clients simultaneously in a fault-tolerant manner to tune their objective functions. It defines abstractions and utilities for implementing new optimization algorithms...
    Downloads: 0 This Week
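
    A hedged sketch of the client loop, loosely following the OSS Vizier README (the owner/study names, algorithm string, and objective are illustrative):

    ```python
    from vizier.service import clients
    from vizier.service import pyvizier as vz

    study_config = vz.StudyConfig(algorithm="GAUSSIAN_PROCESS_BANDIT")
    study_config.search_space.root.add_float_param("x", 0.0, 10.0)
    study_config.metric_information.append(
        vz.MetricInformation("loss", goal=vz.ObjectiveMetricGoal.MINIMIZE)
    )

    study = clients.Study.from_study_config(
        study_config, owner="demo_user", study_id="demo_study"
    )
    for _ in range(10):
        for suggestion in study.suggest(count=1):
            x = suggestion.parameters["x"]
            # Report the objective back to the server to guide the search.
            suggestion.complete(vz.Measurement(metrics={"loss": (x - 3.0) ** 2}))
    ```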
  • 18
    ants

    ants is a high-performance and low-cost goroutine pool in Go

    Library ants implements a goroutine pool with fixed capacity, managing and recycling a massive number of goroutines automatically and allowing developers to limit the number of goroutines in their concurrent programs. It purges overdue goroutines periodically and provides abundant APIs: submitting tasks, getting the number of running goroutines, tuning pool capacity dynamically, releasing the pool, and rebooting the pool. It handles panics gracefully to prevent programs...
    Downloads: 0 This Week
  • 19
    Ray

    A unified framework for scalable computing

    Modern workloads like deep learning and hyperparameter tuning are compute-intensive and require distributed or parallel execution. Ray makes it effortless to parallelize single machine code — go from a single CPU to multi-core, multi-GPU or multi-node with minimal code changes. Accelerate your PyTorch and TensorFlow workloads with a more resource-efficient and flexible distributed execution framework powered by Ray. Accelerate your hyperparameter search workloads with Ray Tune. Find the best...
    Downloads: 0 This Week
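
    For the tuning piece, a minimal Ray Tune sketch using the Tuner API from Ray 2.x (the search space and objective are illustrative):

    ```python
    from ray import tune

    def objective(config):
        # A function trainable can simply return its final metrics as a dict.
        return {"score": (config["x"] - 3.0) ** 2}

    tuner = tune.Tuner(
        objective,
        param_space={"x": tune.uniform(0.0, 10.0)},
        tune_config=tune.TuneConfig(metric="score", mode="min", num_samples=20),
    )
    results = tuner.fit()
    print(results.get_best_result().config)
    ```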
  • 20
    auto-sklearn

    Automated machine learning with scikit-learn

    auto-sklearn is an automated machine learning toolkit and a drop-in replacement for a scikit-learn estimator. auto-sklearn frees a machine learning user from algorithm selection and hyperparameter tuning. It leverages recent advances in Bayesian optimization, meta-learning and ensemble construction. Auto-sklearn 2.0 includes the latest research on automatically configuring the AutoML system itself and contains a multitude of improvements which speed up fitting the AutoML system. auto-sklearn...
    Downloads: 0 This Week
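
    The drop-in-replacement claim in practice (the dataset and time budget are illustrative):

    ```python
    import autosklearn.classification
    from sklearn.datasets import load_digits
    from sklearn.model_selection import train_test_split

    X, y = load_digits(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

    # Same fit/score interface as any scikit-learn estimator; the search
    # over algorithms and hyperparameters happens inside fit().
    automl = autosklearn.classification.AutoSklearnClassifier(
        time_left_for_this_task=120  # seconds for the whole search
    )
    automl.fit(X_train, y_train)
    print(automl.score(X_test, y_test))
    ```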
  • 21
    Guidance

    A guidance language for controlling large language models

    Guidance is an efficient programming paradigm for steering language models. With Guidance, you can control how output is structured and get high-quality output for your use case—while reducing latency and cost vs. conventional prompting or fine-tuning. It allows users to constrain generation (e.g. with regex and CFGs) as well as to interleave control (conditionals, loops, tool use) and generation seamlessly.
    Downloads: 0 This Week
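
    A minimal sketch of constrained generation, assuming the post-0.1 Guidance API (the gpt2 backend is illustrative; any transformers-compatible model works):

    ```python
    from guidance import models, select, gen

    lm = models.Transformers("gpt2")

    # Constrain one span to a fixed choice, then generate freely up to a
    # stop string, interleaving control and generation in one program.
    lm += "The best day of the week is " + select(["Saturday", "Sunday"])
    lm += ". Why? " + gen(max_tokens=20, stop=".")
    print(str(lm))
    ```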
  • 22
    Backtrack Sampler

    An easy-to-understand framework for LLM samplers

    Backtrack Sampler is a framework designed for experimenting with custom sampling strategies for large language models (LLMs), enabling the ability to rewind and revise generated tokens. It allows developers to create and test their own token generation strategies by providing a base structure for manipulating logits and probabilities, making it a flexible tool for those interested in fine-tuning the behavior of LLMs.
    Downloads: 0 This Week
  • 23
    Lightning Bolts

    Toolbox of models, callbacks, and datasets for AI/ML researchers

    The Bolts package provides a variety of components to extend PyTorch Lightning, such as callbacks and datasets, for applied research and production. Torch ORT converts your model into an optimized ONNX graph, speeding up training and inference when using NVIDIA or AMD GPUs. We can introduce sparsity during fine-tuning with SparseML, which ultimately allows us to leverage the DeepSparse engine to see performance improvements at inference time.
    Downloads: 0 This Week
  • 24
    Advanced Solutions Lab

    This repo contains notebooks for the Advanced Solutions Lab

    This repository contains Jupyter notebooks meant to be run on Vertex AI. This is maintained by Google Cloud’s Advanced Solutions Lab (ASL) team. Vertex AI is the next-generation AI Platform on the Google Cloud Platform. The material covered in this repo will take a software engineer with no exposure to machine learning to an advanced level.
    Downloads: 0 This Week
  • 25
    Chronos Forecasting

    Pretrained (Language) Models for Probabilistic Time Series Forecasting

    Chronos is a family of pretrained time series forecasting models based on language model architectures. A time series is transformed into a sequence of tokens via scaling and quantization, and a language model is trained on these tokens using the cross-entropy loss. Once trained, probabilistic forecasts are obtained by sampling multiple future trajectories given the historical context. Chronos models have been trained on a large corpus of publicly available time series data, as well as...
    Downloads: 0 This Week
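
    A minimal forecasting sketch in the style of the Chronos README (the checkpoint name and context series are illustrative):

    ```python
    import numpy as np
    import torch
    from chronos import ChronosPipeline

    pipeline = ChronosPipeline.from_pretrained(
        "amazon/chronos-t5-small",
        device_map="cpu",
        torch_dtype=torch.float32,
    )

    # Historical context; forecasts are sampled future trajectories with
    # shape (num_series, num_samples, prediction_length).
    context = torch.tensor([112.0, 118.0, 132.0, 129.0, 121.0, 135.0, 148.0])
    forecast = pipeline.predict(context, prediction_length=4)

    low, median, high = np.quantile(forecast[0].numpy(), [0.1, 0.5, 0.9], axis=0)
    print(median)
    ```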