Showing 37 open source projects for "optimize"

  • 1
    Codeflash

    Optimize your code automatically with AI

    Codeflash is a general-purpose optimizer for Python that uses advanced large language models (LLMs) to automatically generate, test, and benchmark multiple optimization ideas, then creates merge-ready pull requests with the best improvements for your code. Optimize an entire existing codebase by running codeflash --all, or optimize a Python workflow such as python myscript.py end-to-end by running codeflash optimize myscript.py. Installing Codeflash as a GitHub Action automates optimizing all future code you write by checking the performance of new code on every pull request. ...
    Downloads: 1 This Week
    Last Update:
    See Project
  • 2
    TensorFlow Model Optimization Toolkit

    A toolkit to optimize ML models for deployment for Keras & TensorFlow

    ...Among many uses, the toolkit supports techniques used to reduce latency and inference costs for cloud and edge devices (e.g. mobile, IoT). Deploy models to edge devices with restrictions on processing, memory, power consumption, network usage, and model storage space. Enable execution on and optimize for existing hardware or new special purpose accelerators. Choose the model and optimization tool depending on your task. In many cases, pre-optimized models can improve the efficiency of your application. Try the post-training tools to optimize an already-trained TensorFlow model. Use training-time optimization tools and learn about the techniques. A hedged training-time pruning sketch follows this entry.
    Downloads: 1 This Week
    Last Update:
    See Project
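As a rough illustration of the training-time tools mentioned above, the sketch below prunes a placeholder Keras model with tensorflow-model-optimization; the layer sizes and sparsity schedule are illustrative assumptions, not values from the project.

```python
# Hedged sketch: training-time pruning with tensorflow-model-optimization.
# The model architecture and schedule values are placeholders.
import tensorflow as tf
import tensorflow_model_optimization as tfmot

model = tf.keras.Sequential([
    tf.keras.Input(shape=(784,)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10),
])

# Wrap the model so weights are progressively zeroed out during training.
schedule = tfmot.sparsity.keras.PolynomialDecay(
    initial_sparsity=0.0, final_sparsity=0.5, begin_step=0, end_step=1000)
pruned = tfmot.sparsity.keras.prune_low_magnitude(model, pruning_schedule=schedule)

pruned.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"])

# Training then needs the pruning callback, e.g.:
# pruned.fit(x_train, y_train,
#            callbacks=[tfmot.sparsity.keras.UpdatePruningStep()])
```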
  • 3
    tvm

    Open deep learning compiler stack for CPU, GPU, etc.

    Apache TVM is an open source machine learning compiler framework for CPUs, GPUs, and machine learning accelerators. It aims to enable machine learning engineers to optimize and run computations efficiently on any hardware backend. The vision of the Apache TVM Project is to host a diverse community of experts and practitioners in machine learning, compilers, and systems architecture to build an accessible, extensible, and automated open-source framework that optimizes current and emerging machine learning models for any hardware platform. A hedged Relay compilation sketch follows this entry. ...
    Downloads: 0 This Week
    Last Update:
    See Project
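The sketch below is one hedged way to compile a model with TVM's Relay frontend; the ONNX file name, input name, and shape are placeholder assumptions, and the frontend API differs somewhat across TVM releases (newer versions also offer Relax).

```python
# Hedged sketch: import an ONNX model into Relay and compile it for a CPU.
import onnx
import tvm
from tvm import relay
from tvm.contrib import graph_executor

onnx_model = onnx.load("model.onnx")  # hypothetical file
mod, params = relay.frontend.from_onnx(
    onnx_model, shape={"input": (1, 3, 224, 224)})

# Compile for a plain CPU target; "cuda" or other targets work the same way.
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target="llvm", params=params)

# Run the compiled module through the graph executor.
dev = tvm.cpu()
module = graph_executor.GraphModule(lib["default"](dev))
```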
  • 4
    Label Studio

    Label Studio is a multi-type data labeling and annotation tool

    ...Build custom UIs or use pre-built labeling templates. Detect objects on images, with bounding boxes, polygons, circles, and keypoints supported. Partition images into multiple segments. Use ML models to pre-label and optimize the process. Label Studio is an open-source data labeling tool. It lets you label data types like audio, text, images, videos, and time series with a simple and straightforward UI, and export to various model formats. It can be used to prepare raw data or improve existing training data to get more accurate ML models. The frontend part of the Label Studio app lives in the frontend/ folder and is written in React JSX. A hedged Python SDK sketch follows this entry. ...
    Downloads: 12 This Week
    Last Update:
    See Project
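As a hedged illustration, the sketch below drives Label Studio from Python through the separate label-studio-sdk client; the server URL, API key, project title, and labeling config are placeholder assumptions, and the SDK surface may differ between releases.

```python
# Hedged sketch: create a project and import tasks via label-studio-sdk.
from label_studio_sdk import Client

ls = Client(url="http://localhost:8080", api_key="YOUR_API_KEY")
ls.check_connection()

# A simple text-classification project with two placeholder tasks.
project = ls.start_project(
    title="Sentiment labeling",
    label_config="""
    <View>
      <Text name="text" value="$text"/>
      <Choices name="sentiment" toName="text">
        <Choice value="Positive"/>
        <Choice value="Negative"/>
      </Choices>
    </View>
    """,
)
project.import_tasks([{"text": "Great product"}, {"text": "Not worth it"}])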
  • 5
    AdalFlow

    The library to build & auto-optimize LLM applications

    AdalFlow is a framework for building AI-powered automation workflows, enabling users to design and execute intelligent automation pipelines with minimal coding.
    Downloads: 2 This Week
    Last Update:
    See Project
  • 6
    LMDeploy

    LMDeploy is a toolkit for compressing, deploying, and serving LLMs

    LMDeploy is a toolkit designed for compressing, deploying, and serving large language models (LLMs). It offers tools and workflows to optimize LLMs for production environments, ensuring efficient performance and scalability. LMDeploy supports various model architectures and provides deployment solutions across different platforms. A hedged pipeline sketch follows this entry.
    Downloads: 3 This Week
    Last Update:
    See Project
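A minimal sketch of LMDeploy's high-level pipeline API, assuming the lmdeploy package is installed and the model id shown (a placeholder) is downloadable or cached locally.

```python
# Hedged sketch: batch inference through LMDeploy's pipeline interface.
from lmdeploy import pipeline

pipe = pipeline("internlm/internlm2-chat-7b")  # any supported model id
responses = pipe([
    "What is the capital of France?",
    "Summarize the benefits of model quantization.",
])
for response in responses:
    print(response.text)
```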
  • 7
    Anthropic's Original Performance

    Anthropic's original performance take-home, now open for you to try

    Anthropic's Original Performance repository contains the publicly released version of a performance challenge originally used by Anthropic as part of their technical interview process, offering developers the opportunity to optimize and benchmark low-level code against simulated models. The project sets up a baseline performance problem where participants work to reduce simulated “clock cycles” required to run a given workload, effectively challenging them to engineer faster code under constraints. This take-home includes starter code, tests, and tools to debug performance, aiming to measure how effectively one can apply algorithmic improvements and optimizations. ...
    Downloads: 13 This Week
    Last Update:
    See Project
  • 8
    langrocks

    Tools like web browser, computer access and code runner for LLMs

    Langrocks is a toolkit that gives LLM applications access to tools such as a web browser, computer access, and a code runner, enabling developers to create, test, and optimize LLM-driven workflows.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 9
    ChatDev

    Create Customized Software using Natural Language Idea

    ...It allows multiple AI agents to take on roles such as product managers, developers, and testers to collaboratively generate, refine, and evaluate software code. This project explores how AI can be leveraged to automate and optimize development workflows.
    Downloads: 4 This Week
    Last Update:
    See Project
  • 10
    BioNeMo

    BioNeMo Framework: For building and adapting AI models

    BioNeMo is an AI-powered framework developed by NVIDIA for protein and molecular generation using deep learning models. It provides researchers and developers with tools to design, analyze, and optimize biological molecules, aiding in drug discovery and synthetic biology applications.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 11
    DeepSeek-V3.2-Exp

    An experimental version of DeepSeek model

    DeepSeek-V3.2-Exp is an experimental release of the DeepSeek model family, intended as a stepping stone toward the next generation architecture. The key innovation in this version is DeepSeek Sparse Attention (DSA), a sparse attention mechanism that aims to optimize training and inference efficiency in long-context settings without degrading output quality. According to the authors, they aligned the training setup of V3.2-Exp with V3.1-Terminus so that benchmark results remain largely comparable, even though the internal attention mechanism changes. In public evaluations across a variety of reasoning, code, and question-answering benchmarks (e.g. ...
    Downloads: 14 This Week
    Last Update:
    See Project
  • 12
    Intel Extension for Transformers

    Build your chatbot within minutes on your favorite device

    ...It offers state-of-the-art compression techniques for Large Language Models (LLMs) and provides tools to build chatbots within minutes on various devices. The extension aims to optimize the performance of Transformer-based models, making them more efficient and accessible. A hedged 4-bit loading sketch follows this entry.
    Downloads: 0 This Week
    Last Update:
    See Project
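The sketch below is a hedged illustration of weight-only 4-bit loading through the extension's Transformers-style API; the model id is a placeholder and the exact import path and flags may vary between releases.

```python
# Hedged sketch: load a causal LM with 4-bit weight-only quantization.
from transformers import AutoTokenizer
from intel_extension_for_transformers.transformers import AutoModelForCausalLM

model_name = "Intel/neural-chat-7b-v3-1"  # placeholder model id
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, load_in_4bit=True)

inputs = tokenizer("Build me a chatbot that", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```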
  • 13
    BitNet

    Inference framework for 1-bit LLMs

    BitNet (bitnet.cpp) is a high-performance inference framework designed to optimize the execution of 1-bit large language models, making them more efficient for edge devices and local deployment. The framework offers significant speedups and energy reductions, achieving up to 6.17x faster performance on x86 CPUs and up to 70% energy savings, and can run models such as BitNet b1.58 100B with impressive efficiency.
    Downloads: 6 This Week
    Last Update:
    See Project
  • 14
    AgentUniverse

    agentUniverse is an LLM multi-agent framework

    AgentUniverse is a multi-agent AI framework that enables coordination between multiple intelligent agents for complex task execution and automation.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 15
    Optax

    Optax is a gradient processing and optimization library for JAX

    Optax is a gradient processing and optimization library for JAX. It is designed to facilitate research by providing building blocks that can be recombined in custom ways in order to optimize parametric models such as, but not limited to, deep neural networks. We favor focusing on small composable building blocks that can be effectively combined into custom solutions. Others may build upon these basic components in more complicated abstractions. Whenever reasonable, implementations prioritize readability and structuring code to match standard equations, over code reuse. A minimal update-loop sketch follows this entry.
    Downloads: 0 This Week
    Last Update:
    See Project
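A minimal sketch of the Optax update loop on a toy quadratic objective, using only standard Optax and JAX calls; the learning rate and parameter shape are arbitrary.

```python
# Minimal Optax update loop: minimize a quadratic with Adam.
import jax
import jax.numpy as jnp
import optax

def loss_fn(params):
    return jnp.sum((params - 3.0) ** 2)  # minimum at params == 3

params = jnp.zeros(5)
optimizer = optax.adam(learning_rate=0.1)
opt_state = optimizer.init(params)

for _ in range(200):
    grads = jax.grad(loss_fn)(params)
    updates, opt_state = optimizer.update(grads, opt_state)
    params = optax.apply_updates(params, updates)

print(loss_fn(params))  # should be close to 0
```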
  • 16
    xFormers

    Hackable and optimized Transformers building blocks

    xformers is a modular, performance-oriented library of transformer building blocks, designed to allow researchers and engineers to compose, experiment, and optimize transformer architectures more flexibly than monolithic frameworks. It abstracts components like attention layers, feedforward modules, normalization, and positional encoding, so you can mix and match or swap optimized kernels easily. One of its key goals is efficient attention: it supports dense, sparse, low-rank, and approximate attention mechanisms (e.g. ...). A hedged memory-efficient attention sketch follows this entry.
    Downloads: 1 This Week
    Last Update:
    See Project
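The sketch below shows xformers' memory-efficient attention as a drop-in kernel; the tensor shapes are illustrative and a CUDA device with half-precision tensors is assumed for the optimized kernels.

```python
# Hedged sketch: memory-efficient attention from xformers.ops.
import torch
import xformers.ops as xops

batch, seq_len, heads, head_dim = 2, 1024, 8, 64
q = torch.randn(batch, seq_len, heads, head_dim,
                device="cuda", dtype=torch.float16)
k = torch.randn_like(q)
v = torch.randn_like(q)

# Drop-in replacement for softmax(QK^T / sqrt(d)) V that avoids
# materializing the full attention matrix.
out = xops.memory_efficient_attention(q, k, v)
print(out.shape)  # (batch, seq_len, heads, head_dim)
```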
  • 17
    OpenAdapt

    Open Source Generative Process Automation

    OpenAdapt is the open source software adapter between Large Multimodal Models (LMMs) and traditional desktop and web Graphical User Interfaces (GUIs). OpenAdapt learns to automate your desktop and web workflows by observing your demonstrations. Spend less time on repetitive tasks and more on work that truly matters. Boost team productivity in HR operations. Automate candidate sourcing using LinkedIn Recruiter, LinkedIn Talent Solutions, GetProspect, Reply.io, outreach.io, Gmail/Outlook, and...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 18
    Core ML Tools

    Core ML tools contain supporting tools for Core ML model conversion

    Use Core ML Tools (coremltools) to convert machine learning models from third-party libraries to the Core ML format. This Python package contains the supporting tools for converting models from training libraries. Core ML is an Apple framework to integrate machine learning models into your app. Core ML provides a unified representation for all models. Your app uses Core ML APIs and user data to make predictions, and to fine-tune models, all on the user’s device. Core ML optimizes on-device... A hedged PyTorch-to-Core ML conversion sketch follows this entry.
    Downloads: 0 This Week
    Last Update:
    See Project
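A hedged sketch of converting a traced PyTorch model to Core ML with coremltools; the torchvision model and input shape are stand-ins, not part of the project description.

```python
# Hedged sketch: trace a PyTorch model and convert it with coremltools.
import torch
import torchvision
import coremltools as ct

model = torchvision.models.mobilenet_v2(weights=None).eval()
example_input = torch.rand(1, 3, 224, 224)
traced = torch.jit.trace(model, example_input)

mlmodel = ct.convert(
    traced,
    convert_to="mlprogram",
    inputs=[ct.TensorType(shape=example_input.shape)],
)
mlmodel.save("MobileNetV2.mlpackage")
```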
  • 19
    TPOT

    A Python Automated Machine Learning tool that optimizes ML

    Consider TPOT your Data Science Assistant. TPOT is a Python Automated Machine Learning tool that optimizes machine learning pipelines using genetic programming. TPOT stands for Tree-based Pipeline Optimization Tool. A minimal usage sketch follows this entry.
    Downloads: 0 This Week
    Last Update:
    See Project
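A minimal sketch of the classic TPOT API on a bundled scikit-learn dataset; the generation and population sizes are kept small purely for illustration, and newer TPOT releases may expose a different estimator surface.

```python
# Minimal TPOT run: genetic search over pipelines, then export the winner.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from tpot import TPOTClassifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

tpot = TPOTClassifier(generations=3, population_size=20,
                      verbosity=2, random_state=42)
tpot.fit(X_train, y_train)
print(tpot.score(X_test, y_test))

# Export the best pipeline found by the search as plain Python code.
tpot.export("best_pipeline.py")
```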
  • 20
    OpenDAN

    OpenDAN is an open source Personal AI OS

    OpenDAN is an open-source Personal AI OS that consolidates various AI modules in one place for your personal use. The goal of OpenDAN (Open and Do Anything Now with AI) is to create a Personal AI OS, which provides a runtime environment for various AI modules as well as protocols for interoperability between them. With OpenDAN, users can securely collaborate with various AI modules using their private data to create powerful personal AI agents, such as butlers, lawyers, doctors, teachers,...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 21
    OpenVINO Training Extensions

    Trainable models and NN optimization tools

    ...When ote_cli is installed in the virtual environment, you can use the ote command-line interface to perform various actions for templates related to the chosen task type, such as running, training, evaluating, and exporting. ote train trains a model (a particular model template) on a dataset and saves the results in two files. ote optimize optimizes a pre-trained model using NNCF or POT, depending on the model format: NNCF optimization is used for trained snapshots in a framework-specific format, while POT optimization is used for models exported in the OpenVINO IR format.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 22
    Rhino

    On-device Speech-to-Intent engine powered by deep learning

    ...Develop voice features with a few lines of code using intuitive and cross-platform SDKs. Deliver voice AI everywhere: on-device, mobile, web browsers, on-premise, or cloud. Measure adoption, learn, and iterate. Continuously re-design and re-train to optimize engagement. Building accurate, responsive, and private voice technology is difficult. We learned the hard way, so you don’t have to. Picovoice heavily invests in R&D to offer superior voice AI that surpasses even Big Tech in accuracy and efficiency. Picovoice researchers do not follow recent frameworks and techniques but build them. A hedged Python SDK sketch follows this entry.
    Downloads: 1 This Week
    Last Update:
    See Project
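A hedged sketch of Rhino's Python SDK (pvrhino); the AccessKey and .rhn context file are placeholders, and audio must be supplied as 16 kHz, 16-bit PCM frames of length rhino.frame_length.

```python
# Hedged sketch: on-device speech-to-intent with the pvrhino package.
import pvrhino

rhino = pvrhino.create(access_key="YOUR_ACCESS_KEY",
                       context_path="smart_lighting.rhn")  # placeholder context

def on_audio_frame(pcm_frame):
    # Feed one frame of PCM samples at a time; once a full utterance has
    # been heard, is_finalized becomes True and the inference can be read.
    is_finalized = rhino.process(pcm_frame)
    if is_finalized:
        inference = rhino.get_inference()
        if inference.is_understood:
            print(inference.intent, inference.slots)
```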
  • 23
    PaSa

    An advanced paper search agent powered by large language models

    PaSa is an open-source “paper search agent” built around large language models (LLMs), designed to automate the process of academic literature retrieval with human-like decision making. Instead of simply translating a query into keywords and returning a flat list of matching papers, PaSa uses a dual-agent architecture (Crawler + Selector) that can iteratively search, read, analyze, and filter academic publications — simulating how a researcher might dig through citation networks, expand...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 24
    Chinese-LLaMA-Alpaca 2

    Chinese LLaMA-2 & Alpaca-2 Large Model Phase II Project

    ...It is the second phase of the Chinese LLaMA&Alpaca large model project. The Chinese LLaMA-2 base model and the Alpaca-2 instruction fine-tuned large model are open-sourced. These models expand and optimize the Chinese vocabulary on the basis of the original Llama-2 and use large-scale Chinese data for incremental pre-training, further improving fundamental Chinese semantics and instruction understanding and yielding significant performance improvements. The related models support FlashAttention-2 training and a 4K context that can be extended up to 18K+ through the NTK method.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 25
    SuperAGI

    A dev-first open source autonomous AI agent framework

    ...Connect to multiple Vector DBs to enhance your agent’s performance. Each agent is unique; use different models of your choice. Get insights into your agent’s performance and optimize accordingly. Control token usage to manage costs effectively. Enable your agents to learn and adapt by storing their memory. Get notified when agents get stuck in a loop, and provide proactive resolution. Read and store files generated by Agents.
    Downloads: 0 This Week
    Last Update:
    See Project