Showing 10 open source projects for "optimize"

  • 1
    TensorRT

    C++ library for high performance inference on NVIDIA GPUs

    TensorRT includes a deep learning inference optimizer and runtime that delivers low latency and high throughput for inference applications. TensorRT-based applications perform up to 40X faster than CPU-only platforms during inference. With TensorRT, you can optimize neural network models trained in all major frameworks, calibrate them for lower precision with high accuracy, and deploy them to hyperscale data centers, embedded devices, or automotive platforms. TensorRT is built on CUDA®, NVIDIA’s parallel programming model, and lets you optimize inference by leveraging libraries, development tools, and technologies in CUDA-X™ for artificial intelligence, autonomous machines, high-performance computing, and graphics. ...
    Downloads: 18 This Week
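
    A minimal sketch of the typical TensorRT workflow in Python, building a serialized FP16 engine from an ONNX file. The file names, input model, and TensorRT 8.x-style API calls are assumptions for illustration, not part of the listing above:

        import tensorrt as trt

        logger = trt.Logger(trt.Logger.WARNING)
        builder = trt.Builder(logger)
        # Explicit-batch network definition, populated by parsing an ONNX model.
        network = builder.create_network(
            1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
        parser = trt.OnnxParser(network, logger)
        with open("model.onnx", "rb") as f:          # hypothetical exported model
            if not parser.parse(f.read()):
                raise RuntimeError("ONNX parse failed")

        config = builder.create_builder_config()
        config.set_flag(trt.BuilderFlag.FP16)        # build with reduced precision
        engine_bytes = builder.build_serialized_network(network, config)

        with open("model.plan", "wb") as f:          # deployable engine for the runtime
            f.write(engine_bytes)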
  • 2
    ONNX Runtime

    ONNX Runtime: cross-platform, high performance ML inferencing

    ONNX Runtime is a cross-platform inference and training machine-learning accelerator. ONNX Runtime inference can enable faster customer experiences and lower costs, supporting models from deep learning frameworks such as PyTorch and TensorFlow/Keras as well as classical machine learning libraries such as scikit-learn, LightGBM, XGBoost, etc. ONNX Runtime is compatible with different hardware, drivers, and operating systems, and provides optimal performance by leveraging hardware accelerators...
    Downloads: 46 This Week
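
    A minimal sketch of running an exported model with ONNX Runtime's Python API. The model file and input shape are hypothetical; other providers such as CUDAExecutionProvider can be listed ahead of the CPU fallback when available:

        import numpy as np
        import onnxruntime as ort

        # Create an inference session; ORT selects kernels for the given providers.
        session = ort.InferenceSession("model.onnx",
                                       providers=["CPUExecutionProvider"])

        input_name = session.get_inputs()[0].name
        x = np.random.rand(1, 3, 224, 224).astype(np.float32)   # example image-shaped batch
        outputs = session.run(None, {input_name: x})             # None = return all outputs
        print(outputs[0].shape)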
  • 3
    BitNet

    Inference framework for 1-bit LLMs

    BitNet (bitnet.cpp) is a high-performance inference framework designed to optimize the execution of 1-bit large language models, making them more efficient for edge devices and local deployment. The framework offers significant speedups and energy reductions, achieving up to 6.17x faster performance on x86 CPUs and up to 70% energy savings, making it possible to run models such as BitNet b1.58 100B with impressive efficiency.
    Downloads: 6 This Week
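
    The following is a conceptual NumPy illustration of the 1.58-bit (ternary) weight scheme the framework targets; it is not bitnet.cpp's actual API or kernels, just a sketch of absmean-style quantization to {-1, 0, +1} with a per-tensor scale:

        import numpy as np

        def ternary_quantize(w):
            """Quantize weights to {-1, 0, +1} with a per-tensor absmean scale."""
            scale = np.mean(np.abs(w)) + 1e-8
            q = np.clip(np.round(w / scale), -1, 1).astype(np.int8)
            return q, scale

        def ternary_matmul(x, q, scale):
            # With ternary weights the matmul reduces to additions/subtractions,
            # rescaled once at the end; dedicated low-bit kernels exploit exactly this.
            return (x @ q.astype(x.dtype)) * scale

        w = np.random.randn(256, 128).astype(np.float32)   # toy weight matrix
        q, s = ternary_quantize(w)
        x = np.random.randn(4, 256).astype(np.float32)     # toy activations
        y = ternary_matmul(x, q, s)                         # approximates x @ w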
  • 4
    Bolt NLP

    Bolt is a deep learning library with high performance

    Bolt is a high-performance deep learning inference framework developed by Huawei Noah's Ark Lab. It is designed to optimize and accelerate the deployment of deep learning models across various hardware platforms. As a light-weight, universal deployment tool for all kinds of neural networks, Bolt aims to automate the deployment pipeline and achieve extreme acceleration. Bolt has been widely deployed and used in many departments at Huawei, such as the 2012 Laboratory, CBG, and Huawei's product lines. ...
    Downloads: 0 This Week
  • 5
    MegEngine

    Easy-to-use deep learning framework with 3 key features

    MegEngine is a fast, scalable, and easy-to-use deep learning framework with three key features. You can represent quantization, dynamic shapes, image pre-processing, and even derivation in a single model. After training, simply put everything into your model and run inference on any platform with ease; because training and inference share the same core, speed and precision stay consistent. During training, GPU memory usage can go down to one third at the cost of only one additional line, which enables the DTR...
    Downloads: 1 This Week
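
    A rough training-step sketch with MegEngine; the single mge.dtr.enable() call is one way to read the "one additional line" claim above, though that entry point and the module names here are assumptions about MegEngine's Python API rather than something stated in the listing:

        import numpy as np
        import megengine as mge
        import megengine.module as M
        from megengine.autodiff import GradManager
        from megengine.optimizer import SGD

        mge.dtr.enable()   # assumed one-liner enabling dynamic tensor rematerialization

        model = M.Linear(128, 10)
        opt = SGD(model.parameters(), lr=0.01)
        gm = GradManager().attach(model.parameters())

        x = mge.tensor(np.random.rand(32, 128).astype("float32"))
        target = mge.tensor(np.random.rand(32, 10).astype("float32"))

        with gm:                                   # record ops for autodiff
            pred = model(x)
            loss = ((pred - target) ** 2).mean()   # simple squared-error loss
            gm.backward(loss)
        opt.step()
        opt.clear_grad()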
  • 6
    LightSeq

    A High Performance Library for Sequence Processing and Generation

    LightSeq is a high-performance library focused on efficient inference and training for deep learning models, especially large language models (LLMs) and transformer-based architectures. Its goal is to optimize both memory usage and computational throughput, enabling faster training or inference on limited hardware while maintaining model quality. LightSeq provides optimized CUDA kernels, quantization strategies, and runtime optimizations tailored for transformer operations — which are often bottlenecks in conventional frameworks — thereby reducing memory footprint, improving speed, and making deployment of large-scale models more accessible. ...
    Downloads: 0 This Week
  • 7
    SINGA

    A distributed deep learning platform

    Apache SINGA is an Apache Top-Level Project focusing on distributed training of deep learning and machine learning models. Various example deep learning models are provided in the SINGA repo on GitHub and on Google Colab. SINGA supports data-parallel training across multiple GPUs (on a single node or across different nodes). SINGA supports various popular optimizers, including stochastic gradient descent with momentum, Adam, RMSProp, and AdaGrad. SINGA records the computation graph and...
    Downloads: 0 This Week
  • 8
    NOMAD is a C++ code that implements the MADS algorithm (Mesh Adaptive Direct Search) for difficult blackbox optimization problems. Such problems occur when the functions to optimize are costly computer simulations with no derivatives.
    Downloads: 0 This Week
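
    NOMAD itself is C++, but the kind of problem it targets can be sketched generically: a costly, derivative-free objective polled by a mesh/pattern search that shrinks its step when no improvement is found. The toy Python below illustrates that idea only; it is not NOMAD's API or the full MADS algorithm:

        import numpy as np

        def blackbox(x):
            # Stand-in for a costly simulation with no usable derivatives.
            return (x[0] - 1.0) ** 2 + 3.0 * abs(x[1] + 2.0)

        def pattern_search(f, x0, step=1.0, tol=1e-6, max_evals=500):
            """Toy coordinate poll: try +/- step on each axis, shrink on failure."""
            x = np.asarray(x0, dtype=float)
            fx, evals = f(x), 1
            while step > tol and evals < max_evals:
                improved = False
                for i in range(x.size):
                    for d in (step, -step):
                        trial = x.copy()
                        trial[i] += d
                        ft = f(trial)
                        evals += 1
                        if ft < fx:
                            x, fx, improved = trial, ft, True
                if not improved:
                    step *= 0.5   # refine the mesh, as MADS-style methods do
            return x, fx

        print(pattern_search(blackbox, [5.0, 5.0]))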
  • 9
    NNVM

    Open deep learning compiler stack for CPU and GPU

    The vision of the Apache NNVM Project is to host a diverse community of experts and practitioners in machine learning, compilers, and systems architecture to build an accessible, extensible, and automated open-source framework that optimizes current and emerging machine learning models for any hardware platform. It compiles deep learning models into minimal deployable modules and provides infrastructure to automatically generate and optimize models on more backends with better performance. Compilation and minimal runtimes commonly unlock ML workloads on existing hardware, and tensor operators can be automatically generated and optimized for additional backends. Need support for block sparsity, quantization (1, 2, 4, 8-bit integers, posit), random forests/classical ML, memory planning, MISRA-C compatibility, Python prototyping, or all of the above? ...
    Downloads: 0 This Week
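
    The NNVM stack evolved into what is now Apache TVM; assuming that Relay-style API, the compile-to-deployable-module flow described above looks roughly like the sketch below. The model file, input name, and shapes are placeholders:

        import numpy as np
        import onnx
        import tvm
        from tvm import relay
        from tvm.contrib import graph_executor

        onnx_model = onnx.load("model.onnx")                    # hypothetical model
        shape_dict = {"input": (1, 3, 224, 224)}                # assumed input name/shape
        mod, params = relay.frontend.from_onnx(onnx_model, shape_dict)

        # Graph- and operator-level optimizations, then code generation for the target.
        with tvm.transform.PassContext(opt_level=3):
            lib = relay.build(mod, target="llvm", params=params)

        dev = tvm.cpu()
        module = graph_executor.GraphModule(lib["default"](dev))
        module.set_input("input", np.random.rand(1, 3, 224, 224).astype("float32"))
        module.run()
        out = module.get_output(0).numpy()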
  • 10
    The Athlete is a virtual robot living in the ODE (Open Dynamics Engine) world. An evolutionary algorithm based on DR-EA-M is implemented to optimize its performance. This involves a genetic algorithm, a network of neurons, and the robot's morphology. Everything is written in Java.
    Downloads: 0 This Week
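
    The project itself is written in Java, but the evolutionary loop it describes can be sketched in a few lines of Python. The fitness function below is a stand-in for evaluating the simulated robot in ODE, and every name here is illustrative rather than taken from the project:

        import random

        def fitness(genome):
            # Placeholder for "how far did the robot walk in the ODE simulation".
            return -sum((g - 0.5) ** 2 for g in genome)

        def evolve(pop_size=20, genome_len=8, generations=50, mut_rate=0.1):
            pop = [[random.random() for _ in range(genome_len)] for _ in range(pop_size)]
            for _ in range(generations):
                pop.sort(key=fitness, reverse=True)       # rank by fitness
                parents = pop[: pop_size // 2]            # truncation selection
                children = []
                while len(parents) + len(children) < pop_size:
                    a, b = random.sample(parents, 2)
                    cut = random.randrange(1, genome_len)
                    child = a[:cut] + b[cut:]             # one-point crossover
                    child = [g + random.gauss(0, 0.1) if random.random() < mut_rate else g
                             for g in child]              # Gaussian mutation
                    children.append(child)
                pop = parents + children
            return max(pop, key=fitness)

        best = evolve()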