Showing 68 open source projects for "optimize"

  • 1
    Codeflash

    Optimize your code automatically with AI

    Codeflash is a general-purpose optimizer for Python that uses advanced large language models (LLMs) to automatically generate, test, and benchmark multiple optimization ideas, then creates merge-ready pull requests with the best improvements for your code. Optimize an entire existing codebase by running codeflash --all, automate optimization of all future code by installing Codeflash as a GitHub Action, or optimize a Python workflow (python myscript.py) end to end by running codeflash optimize myscript.py. New code in a pull request can also be optimized automatically through GitHub Actions. ...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 2
    TensorFlow Model Optimization Toolkit

    A toolkit to optimize ML models for deployment for Keras & TensorFlow

    ...Among many uses, the toolkit supports techniques that reduce latency and inference cost for cloud and edge devices (e.g. mobile, IoT). Deploy models to edge devices with restrictions on processing, memory, power consumption, network usage, and model storage space. Enable execution on, and optimize for, existing hardware or new special-purpose accelerators. Choose the model and optimization tool depending on your task. In many cases, pre-optimized models can improve the efficiency of your application. Try the post-training tools to optimize an already-trained TensorFlow model, or use training-time tools such as pruning and quantization-aware training (a short pruning sketch follows this entry).
    Downloads: 1 This Week
    Last Update:
    See Project
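    A short sketch of the kind of training-time optimization the toolkit provides, here magnitude pruning of a Keras model with tensorflow_model_optimization; the layer sizes and the 50% sparsity schedule are illustrative assumptions, not values taken from this listing:

        import tensorflow as tf
        import tensorflow_model_optimization as tfmot

        # Small illustrative Keras model (architecture chosen arbitrarily for the sketch).
        model = tf.keras.Sequential([
            tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),
            tf.keras.layers.Dense(10),
        ])

        # Wrap the model so low-magnitude weights are zeroed out during training.
        pruned_model = tfmot.sparsity.keras.prune_low_magnitude(
            model,
            pruning_schedule=tfmot.sparsity.keras.ConstantSparsity(0.5, begin_step=0),
        )
        pruned_model.compile(
            optimizer="adam",
            loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
            metrics=["accuracy"],
        )

        # Training must include the pruning callback so the sparsity schedule is applied:
        # pruned_model.fit(x_train, y_train, epochs=2,
        #                  callbacks=[tfmot.sparsity.keras.UpdatePruningStep()])

        # Strip the pruning wrappers before export to get the smaller, deploy-ready model.
        export_model = tfmot.sparsity.keras.strip_pruning(pruned_model)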
  • 3
    MongoDB Lens

    MongoDB Lens: Full Featured MCP Server for MongoDB Databases

    MongoDB Lens is a local Model Context Protocol (MCP) server offering full-featured access to MongoDB databases using natural language via large language models (LLMs). It enables users to perform queries, run aggregations, optimize performance, and more through conversational interactions.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 4
    tvm

    Open deep learning compiler stack for CPU, GPU, etc.

    Apache TVM is an open source machine learning compiler framework for CPUs, GPUs, and machine learning accelerators. It aims to enable machine learning engineers to optimize and run computations efficiently on any hardware backend (a compile-and-run sketch follows this entry). The vision of the Apache TVM Project is to host a diverse community of experts and practitioners in machine learning, compilers, and systems architecture to build an accessible, extensible, and automated open-source framework that optimizes current and emerging machine learning models for any hardware platform. ...
    Downloads: 0 This Week
    Last Update:
    See Project
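    A minimal compile-and-run sketch using TVM's Relay front end; the ONNX file name, input name, and input shape are placeholder assumptions, and newer TVM releases may prefer the Relax front end:

        import numpy as np
        import onnx
        import tvm
        from tvm import relay
        from tvm.contrib import graph_executor

        # Import a trained model (placeholder path and input shape).
        onnx_model = onnx.load("model.onnx")
        mod, params = relay.frontend.from_onnx(onnx_model, shape={"input": (1, 3, 224, 224)})

        # Compile for a local CPU target with aggressive graph-level optimization.
        with tvm.transform.PassContext(opt_level=3):
            lib = relay.build(mod, target="llvm", params=params)

        # Run the compiled module on random data.
        dev = tvm.cpu()
        module = graph_executor.GraphModule(lib["default"](dev))
        module.set_input("input", np.random.rand(1, 3, 224, 224).astype("float32"))
        module.run()
        out = module.get_output(0).numpy()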
  • 5
    Label Studio

    Label Studio is a multi-type data labeling and annotation tool

    ...Build custom UIs or use pre-built labeling templates. Detect objects in images; bounding boxes, polygons, circles, and keypoints are supported. Partition images into multiple segments. Use ML models to pre-label data and speed up the process. Label Studio is an open-source data labeling tool. It lets you label data types like audio, text, images, videos, and time series with a simple and straightforward UI and export to various model formats. It can be used to prepare raw data or improve existing training data to get more accurate ML models. The frontend part of the Label Studio app lives in the frontend/ folder and is written in React JSX. ...
    Downloads: 15 This Week
    Last Update:
    See Project
  • 6
    TensorRT

    C++ library for high performance inference on NVIDIA GPUs

    ...It includes a deep learning inference optimizer and runtime that delivers low latency and high throughput for deep learning inference applications. TensorRT-based applications perform up to 40X faster than CPU-only platforms during inference. With TensorRT, you can optimize neural network models trained in all major frameworks, calibrate for lower precision with high accuracy, and deploy to hyperscale data centers, embedded systems, or automotive platforms (a builder-API sketch follows this entry). TensorRT is built on CUDA®, NVIDIA’s parallel programming model, and enables you to optimize inference by leveraging libraries, development tools, and technologies in CUDA-X™ for artificial intelligence, autonomous machines, high-performance computing, and graphics. ...
    Downloads: 17 This Week
    Last Update:
    See Project
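    A rough sketch of building an engine from an ONNX model with TensorRT's Python API; the file names are placeholders and the exact builder calls vary across TensorRT versions:

        import tensorrt as trt

        logger = trt.Logger(trt.Logger.WARNING)
        builder = trt.Builder(logger)
        network = builder.create_network(
            1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
        parser = trt.OnnxParser(network, logger)

        # Parse a trained model that was exported to ONNX (placeholder path).
        with open("model.onnx", "rb") as f:
            if not parser.parse(f.read()):
                raise RuntimeError("ONNX parse failed")

        # Let TensorRT optimize the network, allowing reduced FP16 precision.
        config = builder.create_builder_config()
        config.set_flag(trt.BuilderFlag.FP16)
        serialized_engine = builder.build_serialized_network(network, config)

        # The serialized engine can be saved and deserialized later for inference.
        with open("model.engine", "wb") as f:
            f.write(serialized_engine)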
  • 7
    TorchServe

    Serve, optimize and scale PyTorch models in production

    ...REST and gRPC support for batched inference. Export your model for optimized inference: TorchScript out of the box, plus ORT, IPEX, TensorRT, and FasterTransformer. Performance guide: built-in support to optimize, benchmark, and profile PyTorch and TorchServe performance. Expressive handlers: a handler architecture that makes it trivial to support inference for your use case, with many handlers supported out of the box. Out-of-the-box support for system-level metrics with Prometheus exports, custom metrics, and PyTorch profiler support.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 8
    LangDB AI Gateway

    Govern, secure, and optimize your AI traffic

    ...Developed by the LangDB team, AI Gateway acts as an intermediary between clients and backend LLMs, providing advanced features like caching, rate limiting, prompt management, and observability. It helps teams secure and optimize their LLM deployments, whether using local models or external APIs like OpenAI or Anthropic. With native support for multi-tenant environments and low-latency inference routing, AI Gateway is an essential tool for companies building production-grade generative AI services.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 9
    AdalFlow

    The library to build & auto-optimize LLM applications

    AdalFlow is a library for building and auto-optimizing LLM applications, enabling users to design, evaluate, and automatically tune LLM task pipelines with minimal code.
    Downloads: 2 This Week
    Last Update:
    See Project
  • 10
    LMDeploy

    LMDeploy is a toolkit for compressing, deploying, and serving LLMs

    LMDeploy is a toolkit designed for compressing, deploying, and serving large language models (LLMs). It offers tools and workflows to optimize LLMs for production environments, ensuring efficient performance and scalability. LMDeploy supports various model architectures and provides deployment solutions across different platforms.
    Downloads: 3 This Week
    Last Update:
    See Project
  • 11
    ONNX Runtime

    ONNX Runtime: cross-platform, high performance ML inferencing

    ONNX Runtime is a cross-platform inference and training machine-learning accelerator. ONNX Runtime inference can enable faster customer experiences and lower costs, supporting models from deep learning frameworks such as PyTorch and TensorFlow/Keras as well as classical machine learning libraries such as scikit-learn, LightGBM, XGBoost, etc. (a minimal inference sketch follows this entry). ONNX Runtime is compatible with different hardware, drivers, and operating systems, and provides optimal performance by leveraging hardware accelerators...
    Downloads: 49 This Week
    Last Update:
    See Project
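    A minimal inference sketch with the onnxruntime Python package; the model path, input shape, and execution provider are placeholder assumptions:

        import numpy as np
        import onnxruntime as ort

        # Load a model; ONNX Runtime dispatches to an execution provider
        # (CPU here; e.g. CUDA if that provider is installed and requested).
        session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])

        # Inspect the expected input, then run inference on random data.
        input_name = session.get_inputs()[0].name
        x = np.random.rand(1, 3, 224, 224).astype(np.float32)
        outputs = session.run(None, {input_name: x})
        print(outputs[0].shape)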
  • 12
    Anthropic's Original Performance

    Anthropic's original performance take-home, now open for you to try

    Anthropic's Original Performance repository contains the publicly released version of a performance challenge originally used by Anthropic as part of their technical interview process, offering developers the opportunity to optimize and benchmark low-level code against simulated models. The project sets up a baseline performance problem where participants work to reduce the simulated “clock cycles” required to run a given workload, effectively challenging them to engineer faster code under constraints. This take-home includes starter code, tests, and tools to debug performance, aiming to measure how well one can apply algorithmic improvements and optimizations. ...
    Downloads: 13 This Week
    Last Update:
    See Project
  • 13
    langrocks

    Tools like web browser, computer access and code runner for LLMs

    Langrocks is a toolkit that gives LLM applications access to tools such as a web browser, computer access, and a code runner, so language-model agents can browse the web, operate a computer, and execute code.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 14
    Chat2DB

    AI-driven database tool and SQL client

    ...If you don't know SQL well, you can get instant answers without writing SQL. Generate high-performance SQL for complicated queries using natural language, correct errors, and get AI suggestions to optimize SQL query performance. Developers can write complex SQL queries quickly and accurately with the help of the AI SQL editor, saving time and improving development efficiency. Just enter the names of the tables and columns, and Chat2DB will automatically configure the type, password, and comment, saving you 90% of the time. It imports and exports data in multiple formats (CSV, XLSX, XLS, SQL) to facilitate exchange, backup, and migration. ...
    Downloads: 10 This Week
    Last Update:
    See Project
  • 15
    BioNeMo

    BioNeMo Framework: For building and adapting AI models

    BioNeMo is an AI-powered framework developed by NVIDIA for protein and molecular generation using deep learning models. It provides researchers and developers with tools to design, analyze, and optimize biological molecules, aiding in drug discovery and synthetic biology applications.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 16
    DeepSeek-V3.2-Exp

    An experimental version of DeepSeek model

    DeepSeek-V3.2-Exp is an experimental release of the DeepSeek model family, intended as a stepping stone toward the next generation architecture. The key innovation in this version is DeepSeek Sparse Attention (DSA), a sparse attention mechanism that aims to optimize training and inference efficiency in long-context settings without degrading output quality. According to the authors, they aligned the training setup of V3.2-Exp with V3.1-Terminus so that benchmark results remain largely comparable, even though the internal attention mechanism changes. In public evaluations across a variety of reasoning, code, and question-answering benchmarks (e.g. ...
    Downloads: 16 This Week
    Last Update:
    See Project
  • 17
    Model Explorer

    A modern model graph visualizer and debugger

    Model Explorer is a visual tool for exploring, debugging, and optimizing ML models deployed on edge devices. Developed by Google AI Edge, it offers a browser-based interface to inspect layer-wise performance, memory usage, and inference timing of TensorFlow Lite and other supported models. It’s a powerful utility for developers optimizing models for constrained environments.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 18
    MCP Image Compression

    A high-performance image compression microservice based on MCP

    The MCP Image Compression server is a high-performance microservice based on the Model Context Protocol architecture. It focuses on providing fast and high-quality image compression capabilities to help developers optimize image resources for websites and applications, improving loading speed and user experience.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 19
    Plandex

    AI driven development in your terminal

    Plandex is an AI-driven development tool that runs in your terminal, helping developers plan and carry out coding tasks with LLM assistance.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 20
    Presubmit

    Context-aware AI reviewer for Pull Requests

    Optimize your code review process with Presubmit's AI Code Reviewer: it catches bugs, suggests improvements, and provides a meaningful summary - all before human reviewers take their first look.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 21
    ChatDev

    Create Customized Software using Natural Language Idea

    ...It allows multiple AI agents to take on roles such as product managers, developers, and testers to collaboratively generate, refine, and evaluate software code. This project explores how AI can be leveraged to automate and optimize development workflows.
    Downloads: 2 This Week
    Last Update:
    See Project
  • 22
    Intel Extension for Transformers

    Build your chatbot within minutes on your favorite device

    ...It offers state-of-the-art compression techniques for Large Language Models (LLMs) and provides tools to build chatbots within minutes on various devices. The extension aims to optimize the performance of Transformer-based models, making them more efficient and accessible.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 23
    Kodus

    AI code reviews, just like your senior dev would do

    Kodus is an open-source platform for AI-powered code review. It analyzes pull requests and flags bugs and suggested improvements much as a senior developer would, and it integrates with existing development workflows. ...
    Downloads: 3 This Week
    Last Update:
    See Project
  • 24
    BitNet

    Inference framework for 1-bit LLMs

    BitNet (bitnet.cpp) is a high-performance inference framework designed to optimize the execution of 1-bit large language models, making them more efficient for edge devices and local deployment. The framework offers significant speedups and energy reductions, achieving up to 6.17x faster performance on x86 CPUs and 70% energy savings, and it can run models such as BitNet b1.58 100B with impressive efficiency.
    Downloads: 7 This Week
    Last Update:
    See Project
  • 25
    Matter AI

    Matter AI is an open-source AI Code Reviewer Agent

    Matter AI is an open-source AI code reviewer agent that reviews pull requests automatically, catching issues and suggesting improvements before human reviewers take a look.
    Downloads: 0 This Week
    Last Update:
    See Project