Showing 86 open source projects for "tuning"

  • 1
    TensorFlow Documentation

    An end-to-end platform for machine learning. TensorFlow makes it easy to create ML models that can run in any environment. Learn how to use the intuitive APIs through interactive code samples.
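
    A minimal sketch of the kind of model the platform's Keras API builds (layer sizes, data, and training settings here are illustrative, not from the project):

    import numpy as np
    import tensorflow as tf

    # Build and compile a small feed-forward classifier.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu", input_shape=(20,)),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])

    # Train on random stand-in data; substitute a real dataset in practice.
    x, y = np.random.rand(256, 20), np.random.randint(0, 10, 256)
    model.fit(x, y, epochs=3, batch_size=32)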
  • 2
    Petals

    Run 100B+ language models at home, BitTorrent-style

    Run large language models like BLOOM-176B collaboratively — you load a small part of the model, then team up with people serving the other parts to run inference or fine-tuning. Single-batch inference runs at ≈ 1 sec per step (token) — up to 10x faster than offloading, enough for chatbots and other interactive apps. Parallel inference reaches hundreds of tokens/sec. Beyond classic language model APIs — you can employ any fine-tuning...
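
    A minimal sketch of collaborative inference, assuming the public swarm is reachable; the model name here is illustrative and may no longer be served:

    from transformers import AutoTokenizer
    from petals import AutoDistributedModelForCausalLM

    model_name = "bigscience/bloom-7b1-petals"  # assumed; check current swarms
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    # Loads a small slice of the model locally and joins peers serving the rest.
    model = AutoDistributedModelForCausalLM.from_pretrained(model_name)

    inputs = tokenizer("A cat sat on", return_tensors="pt")["input_ids"]
    outputs = model.generate(inputs, max_new_tokens=5)
    print(tokenizer.decode(outputs[0]))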
  • 3
    FLAML

    A fast library for AutoML and tuning

    ... their desired customizability from a smooth range: minimal customization (computational resource budget), medium customization (e.g., scikit-style learner, search space, and metric), or full customization (arbitrary training and evaluation code). It supports fast automatic tuning, capable of handling complex constraints/guidance/early stopping. FLAML is powered by a new, cost-effective hyperparameter optimization and learner selection method invented by Microsoft Research.
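
    At the minimal-customization end of that range, usage reduces to a task type and a compute budget; a sketch with an illustrative dataset and time budget:

    from flaml import AutoML
    from sklearn.datasets import load_iris

    X, y = load_iris(return_X_y=True)
    automl = AutoML()
    # Minimal customization: only the task and a budget in seconds.
    automl.fit(X, y, task="classification", time_budget=30)
    print(automl.best_estimator, automl.best_config)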
  • 4
    DeepEval
    DeepEval is a simple-to-use, open-source LLM evaluation framework for evaluating and testing large language model systems. It is similar to Pytest but specialized for unit testing LLM outputs. DeepEval incorporates the latest research to evaluate LLM outputs based on metrics such as G-Eval, hallucination, answer relevancy, RAGAS, etc., which use LLMs and various other NLP models that run locally on your machine for evaluation. Whether your application is implemented via RAG or fine-tuning...
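
    A minimal Pytest-style sketch (the metric, threshold, and strings are illustrative, and the metric assumes a judge model is configured):

    from deepeval import assert_test
    from deepeval.metrics import AnswerRelevancyMetric
    from deepeval.test_case import LLMTestCase

    def test_answer_relevancy():
        test_case = LLMTestCase(
            input="What is the return policy?",
            # In practice, actual_output comes from your LLM application.
            actual_output="Items can be returned within 30 days.",
        )
        # Fails the test if relevancy scores below the threshold.
        assert_test(test_case, [AnswerRelevancyMetric(threshold=0.7)])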
  • 5
    MLJAR Studio

    Python package for AutoML on Tabular Data with Feature Engineering

    ... the machine learning models, and perform hyper-parameter tuning to find the best model. It is no black box, as you can see exactly how the ML pipeline is constructed (with a detailed Markdown report for each ML model).
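
    A minimal sketch, assuming the underlying mljar-supervised package (the mode and dataset are illustrative):

    from sklearn.datasets import load_breast_cancer
    from sklearn.model_selection import train_test_split
    from supervised.automl import AutoML

    X, y = load_breast_cancer(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # "Explain" mode favors interpretable models and detailed Markdown reports.
    automl = AutoML(mode="Explain")
    automl.fit(X_train, y_train)
    print(automl.predict(X_test)[:5])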
  • 6
    Adapters

    A Unified Library for Parameter-Efficient Learning

    Adapters is an add-on library to HuggingFace's Transformers, integrating 10+ adapter methods into 20+ state-of-the-art Transformer models with minimal coding overhead for training and inference. It provides a unified interface for efficient fine-tuning and modular transfer learning, supporting a myriad of features like full-precision or quantized training (e.g. Q-LoRA, Q-Bottleneck Adapters, or Q-PrefixTuning), adapter merging via task arithmetics or the composition of multiple adapters...
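
    A minimal sketch of adding and training a bottleneck adapter (the base model, adapter name, and label count are illustrative):

    from adapters import AutoAdapterModel

    model = AutoAdapterModel.from_pretrained("roberta-base")
    # Add a bottleneck adapter plus a matching classification head.
    model.add_adapter("sentiment", config="seq_bn")
    model.add_classification_head("sentiment", num_labels=2)
    # Freeze the base model so only the adapter weights are trained.
    model.train_adapter("sentiment")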
  • 7
    uvicorn-gunicorn-fastapi

    Docker image with Uvicorn managed by Gunicorn

    Docker image with Uvicorn managed by Gunicorn for high-performance FastAPI web applications in Python with performance auto-tuning. Optionally with Alpine Linux.
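
    The image serves a FastAPI application it finds at /app/main.py, typically built from a Dockerfile along the lines of FROM tiangolo/uvicorn-gunicorn-fastapi:python3.11 plus COPY ./app /app (tag and paths assumed); a minimal sketch of that module:

    # /app/main.py -- served by Gunicorn-managed Uvicorn workers,
    # with worker count auto-tuned to the container's CPU resources.
    from fastapi import FastAPI

    app = FastAPI()

    @app.get("/")
    def read_root():
        return {"status": "ok"}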
  • 8
    MLE-Agent

    Intelligent companion for seamless AI engineering and research

    MLE-Agent is designed as a pairing LLM agent for machine learning engineers and researchers, serving as a library for managing machine learning experiments, tracking metrics, and deploying models.
  • 9
    MindNLP

    Easy-to-use and high-performance NLP and LLM framework

    MindNLP is a natural language processing library built on the MindSpore framework, providing tools and models for various NLP tasks.
  • 10
    UniEM

    Unified embedding model

    UniEM is a unified embedding model designed to create high-quality text embeddings for various natural language processing tasks.
  • 11
    DOLMA

    Data and tools for generating and inspecting OLMo pre-training data

    DOLMA provides the data and tools used to generate and inspect OLMo pre-training data; it is a framework designed to manage large-scale datasets for training and fine-tuning language models efficiently.
  • 12
    LightAutoML

    Fast and customizable framework for automatic ML model creation

    LightAutoML is an automated machine learning (AutoML) framework optimized for efficient model training and hyperparameter tuning, focusing on both tabular and text data.
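
    A minimal tabular sketch (the task, timeout, and dataset are illustrative):

    from lightautoml.automl.presets.tabular_presets import TabularAutoML
    from lightautoml.tasks import Task
    from sklearn.datasets import load_breast_cancer

    # A pandas DataFrame of features plus a "target" column.
    df = load_breast_cancer(as_frame=True).frame

    # Budget the entire search, including hyperparameter tuning, to 300 s.
    automl = TabularAutoML(task=Task("binary"), timeout=300)
    oof_preds = automl.fit_predict(df, roles={"target": "target"})
    print(oof_preds.data[:5])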
  • 13
    NGBoost

    Natural Gradient Boosting for Probabilistic Prediction

    ngboost is a Python library that implements Natural Gradient Boosting, as described in "NGBoost: Natural Gradient Boosting for Probabilistic Prediction". It is built on top of scikit-learn and is designed to be scalable and modular with respect to the choice of proper scoring rule, distribution, and base learner. A didactic introduction to the methodology underlying NGBoost is available in the project's slide deck.
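
    A minimal sketch of the probabilistic interface (dataset and defaults are illustrative):

    from ngboost import NGBRegressor
    from sklearn.datasets import load_diabetes
    from sklearn.model_selection import train_test_split

    X, y = load_diabetes(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    ngb = NGBRegressor().fit(X_train, y_train)
    point = ngb.predict(X_test)    # point predictions
    dist = ngb.pred_dist(X_test)   # full predictive distributions
    print(point[:3])
    print({name: values[:3] for name, values in dist.params.items()})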
  • 14
    NeMo Curator

    Scalable data preprocessing and curation toolkit for LLMs

    NeMo Curator is a Python library specifically designed for fast and scalable dataset preparation and curation for large language model (LLM) use-cases such as foundation model pretraining, domain-adaptive pretraining (DAPT), supervised fine-tuning (SFT) and parameter-efficient fine-tuning (PEFT). It greatly accelerates data curation by leveraging GPUs with Dask and RAPIDS, resulting in significant time savings. The library provides a customizable and modular interface, simplifying pipeline...
  • 15
    Chinese-LLaMA-Alpaca 2

    Chinese LLaMA-2 & Alpaca-2 Large Model Phase II Project

    This project builds on Llama-2, the commercially usable large model released by Meta, and is the second phase of the Chinese LLaMA & Alpaca large model project. It open-sources the Chinese LLaMA-2 base model and the Alpaca-2 instruction fine-tuned large model. These models expand and optimize the Chinese vocabulary on the basis of the original Llama-2, use large-scale Chinese data for incremental pre-training, and further improve basic semantic and command understanding...
  • 16
    Open Source Vizier

    Python-based research interface for blackbox optimization

    Open Source (OSS) Vizier is a Python-based interface for blackbox optimization and research, based on Google’s original internal Vizier, one of the first hyperparameter tuning services designed to work at scale. It allows a user to set up an OSS Vizier Server, which can host black-box optimization algorithms to serve multiple clients simultaneously in a fault-tolerant manner to tune their objective functions. It defines abstractions and utilities for implementing new optimization algorithms...
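
    A minimal client-side sketch (the algorithm name, study identifiers, and toy quadratic objective are illustrative):

    from vizier.service import clients
    from vizier.service import pyvizier as vz

    # Define the search space and the objective direction.
    study_config = vz.StudyConfig(algorithm="GAUSSIAN_PROCESS_BANDIT")
    study_config.search_space.root.add_float_param("x", -2.0, 2.0)
    study_config.metric_information.append(
        vz.MetricInformation("loss", goal=vz.ObjectiveMetricGoal.MINIMIZE))

    study = clients.Study.from_study_config(
        study_config, owner="demo", study_id="quadratic")

    for _ in range(10):
        for suggestion in study.suggest(count=1):
            x = suggestion.parameters["x"]
            # Report the measured objective back to the server.
            suggestion.complete(vz.Measurement(metrics={"loss": (x - 0.3) ** 2}))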
  • 17
    Guidance

    A guidance language for controlling large language models

    Guidance is an efficient programming paradigm for steering language models. With Guidance, you can control how output is structured and get high-quality output for your use case—while reducing latency and cost vs. conventional prompting or fine-tuning. It allows users to constrain generation (e.g. with regex and CFGs) as well as to interleave control (conditionals, loops, tool use) and generation seamlessly.
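
    A minimal sketch of constrained generation (the backend model and regex are illustrative):

    from guidance import models, gen

    # Load a local Transformers backend; other backends work similarly.
    lm = models.Transformers("gpt2")

    # Constrain the completion to digits, so the output structure is guaranteed.
    lm += "Q: How many legs does a spider have?\nA: " + gen(
        name="answer", regex=r"[0-9]+", max_tokens=4)
    print(lm["answer"])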
  • 18
    Backtrack Sampler

    An easy-to-understand framework for LLM samplers

    Backtrack Sampler is a framework for experimenting with custom sampling strategies for large language models (LLMs), enabling generated tokens to be rewound and revised. It allows developers to create and test their own token generation strategies by providing a base structure for manipulating logits and probabilities, making it a flexible tool for those interested in fine-tuning the behavior of LLMs.
  • 19
    Lightning Bolts

    Toolbox of models, callbacks, and datasets for AI/ML researchers

    The Bolts package provides a variety of components to extend PyTorch Lightning, such as callbacks and datasets, for applied research and production. Torch ORT converts your model into an optimized ONNX graph, speeding up training and inference when using NVIDIA or AMD GPUs. Sparsity can be introduced during fine-tuning with SparseML, which ultimately allows the DeepSparse engine to be leveraged for performance improvements at inference time.
  • 20
    Advanced Solutions Lab

    This repo contains notebooks for the Advanced Solutions Lab

    This repository contains Jupyter notebooks meant to be run on Vertex AI. This is maintained by Google Cloud’s Advanced Solutions Lab (ASL) team. Vertex AI is the next-generation AI Platform on the Google Cloud Platform. The material covered in this repo will take a software engineer with no exposure to machine learning to an advanced level.
  • 21
    HDBSCAN

    A high performance implementation of HDBSCAN clustering

    HDBSCAN (Hierarchical Density-Based Spatial Clustering of Applications with Noise) performs DBSCAN over varying epsilon values and integrates the results to find a clustering that gives the best stability over epsilon. This allows HDBSCAN to find clusters of varying densities (unlike DBSCAN) and makes it more robust to parameter selection. In practice this means that HDBSCAN returns a good clustering straight away with little or no parameter tuning -- and the primary parameter, minimum cluster size...
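
    A minimal sketch (the synthetic data and minimum cluster size are illustrative):

    import hdbscan
    from sklearn.datasets import make_blobs

    X, _ = make_blobs(n_samples=500, centers=4, random_state=0)

    # min_cluster_size is typically the only parameter worth setting.
    clusterer = hdbscan.HDBSCAN(min_cluster_size=15)
    labels = clusterer.fit_predict(X)  # label -1 marks noise points
    print(set(labels))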
  • 22
    Transformer Reinforcement Learning X

    A repo for distributed training of language models with Reinforcement Learning

    trlX is a distributed training framework designed from the ground up to focus on fine-tuning large language models with reinforcement learning using either a provided reward function or a reward-labeled dataset. Training support for Hugging Face models is provided by Accelerate-backed trainers, allowing users to fine-tune causal and T5-based language models of up to 20B parameters, such as facebook/opt-6.7b, EleutherAI/gpt-neox-20b, and google/flan-t5-xxl. For models beyond 20B parameters, trlX...
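
    A minimal reward-function sketch (the base model and toy reward are illustrative; real use would call a learned reward model):

    import trlx

    # Score each sampled continuation; here, a toy reward counting "cats".
    trainer = trlx.train(
        "gpt2",
        reward_fn=lambda samples, **kwargs: [s.count("cats") for s in samples],
    )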
  • 23
    sktime

    A unified framework for machine learning with time series

    ... interface for distinct but related time series learning tasks. It features dedicated time series algorithms and tools for composite model building such as pipelining, ensembling, tuning, and reduction, empowering users to apply an algorithm designed for one task to another.
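
    A minimal forecasting sketch in the scikit-learn-style interface (the forecaster choice and horizon are illustrative):

    from sktime.datasets import load_airline
    from sktime.forecasting.theta import ThetaForecaster

    y = load_airline()                   # monthly univariate series
    forecaster = ThetaForecaster(sp=12)  # seasonal periodicity of 12
    forecaster.fit(y)
    y_pred = forecaster.predict(fh=[1, 2, 3])  # forecast 1-3 steps ahead
    print(y_pred)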
  • 24
    unit-minions

    AI R&D Efficiency Improvement Research: Do-It-Yourself Training LoRA

    "AI R&D Efficiency Improvement Research: Do-It-Yourself Training LoRA", including Llama (Alpaca LoRA) model, ChatGLM (ChatGLM Tuning) related Lora training. Training content: user story generation, test code generation, code-assisted generation, text to SQL, text generation code.
  • 25
    Nixtla Neural Forecast

    Scalable and user-friendly neural forecasting algorithms

    NeuralForecast offers a large collection of neural forecasting models focusing on their performance, usability, and robustness. The models range from classic networks like RNNs to the latest transformers: MLP, LSTM, GRU, RNN, TCN, TimesNet, BiTCN, DeepAR, NBEATS, NBEATSx, NHITS, TiDE, DeepNPTS, TSMixer, TSMixerx, MLPMultivariate, DLinear, NLinear, TFT, Informer, AutoFormer, FedFormer, PatchTST, iTransformer, StemGNN, and TimeLLM. There is a shared belief in Neural forecasting methods'...
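
    A minimal sketch with one of the listed models (the horizon, input size, and training steps are illustrative):

    from neuralforecast import NeuralForecast
    from neuralforecast.models import NBEATS
    from neuralforecast.utils import AirPassengersDF

    # Long-format data with unique_id / ds / y columns.
    nf = NeuralForecast(models=[NBEATS(input_size=24, h=12, max_steps=100)],
                        freq="M")
    nf.fit(df=AirPassengersDF)
    print(nf.predict().head())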