Showing 62 open source projects for "build-essential"

  • 1
    XiaoZhi AI Chatbot

    Build your own AI friend

    xiaozhi-esp32 is an open-source project that guides users in building their own AI-powered conversational companion using the ESP32 microcontroller. The project provides detailed instructions on assembling the hardware, setting up the software, and integrating AI models to enable natural language interactions. This DIY approach offers an accessible entry point into AI and hardware development.
    Downloads: 204 This Week
    Last Update:
    See Project
  • 2
    CV-CUDA

    CV-CUDA™ is an open-source, GPU-accelerated library

    CV-CUDA is an open-source project that enables building efficient cloud-scale Artificial Intelligence (AI) imaging and computer vision (CV) applications. It uses graphics processing unit (GPU) acceleration to help developers build highly efficient pre- and post-processing pipelines. CV-CUDA originated as a collaborative effort between NVIDIA and ByteDance.
    Downloads: 8 This Week
    Last Update:
    See Project
  • 3
    Simd Library

    C++ image processing and machine learning library using SIMD

    ...The Simd Library has a C API and also contains useful C++ classes and functions that facilitate access to the C API. The library supports dynamic and static linking, 32-bit and 64-bit Windows and Linux, the MSVS, G++, and Clang compilers, MSVS projects, and CMake build systems.
    Downloads: 4 This Week
    Last Update:
    See Project
  • 4
    mlpack

    mlpack: a scalable C++ machine learning library

    ...Written in C++ and built on the Armadillo linear algebra library, the ensmallen numerical optimization library, and parts of Boost, mlpack aims to provide fast, extensible implementations of cutting-edge machine learning algorithms. It uses CMake as its build system and allows several flexible build configuration options. You can consult any of the CMake tutorials for further documentation, but the project's build tutorial should be enough to get mlpack built and installed. A minimal usage sketch follows this entry.
    Downloads: 0 This Week
    Last Update:
    See Project
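
    As a rough illustration of the kind of C++ API mlpack exposes once it is built and installed, the sketch below fits and queries a linear regression model. It assumes mlpack 4's single <mlpack.hpp> header and Armadillo types; the data values are made up.

        // Hypothetical sketch; assumes mlpack 4.x and Armadillo are installed.
        #include <mlpack.hpp>

        int main()
        {
          // Three 2-dimensional training points (one per column) and their responses.
          arma::mat predictors = {{1.0, 2.0, 3.0},
                                  {1.0, 4.0, 9.0}};
          arma::rowvec responses = {1.0, 2.0, 3.0};

          // Fit an ordinary least-squares model.
          mlpack::LinearRegression lr(predictors, responses);

          // Predict on the training points themselves.
          arma::rowvec predictions;
          lr.Predict(predictors, predictions);
          predictions.print("predictions");
          return 0;
        }
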
  • 5
    Torch-TensorRT

    PyTorch/TorchScript/FX compiler for NVIDIA GPUs using TensorRT

    Torch-TensorRT is a compiler for PyTorch/TorchScript, targeting NVIDIA GPUs via NVIDIA’s TensorRT Deep Learning Optimizer and Runtime. Unlike PyTorch’s Just-In-Time (JIT) compiler, Torch-TensorRT is an Ahead-of-Time (AOT) compiler, meaning that before you deploy your TorchScript code, you go through an explicit compile step to convert a standard TorchScript program into a module targeting a TensorRT engine. Torch-TensorRT operates as a PyTorch extension and compiles modules that integrate...
    Downloads: 11 This Week
    Last Update:
    See Project
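
    To make the ahead-of-time compile step above concrete, here is a rough sketch of the C++ frontend; the model path and input shape are placeholders, and the exact names and signatures (torch_tensorrt::ts::CompileSpec, torch_tensorrt::ts::compile) should be treated as assumptions to verify against the version in use.

        // Hypothetical sketch; assumes LibTorch, TensorRT, and Torch-TensorRT are installed.
        #include <torch/script.h>
        #include "torch_tensorrt/torch_tensorrt.h"
        #include <vector>

        int main()
        {
          // Load an already scripted TorchScript module (placeholder path).
          torch::jit::Module mod = torch::jit::load("model.ts");
          mod.to(torch::kCUDA);
          mod.eval();

          // Describe the expected input shape, then run the explicit AOT compile step.
          std::vector<torch_tensorrt::Input> inputs = {
              torch_tensorrt::Input(std::vector<int64_t>{1, 3, 224, 224})};
          torch_tensorrt::ts::CompileSpec spec(inputs);
          torch::jit::Module trt_mod = torch_tensorrt::ts::compile(mod, spec);

          // The result is still a torch::jit::Module, now backed by a TensorRT engine.
          auto out = trt_mod.forward({torch::randn({1, 3, 224, 224}, torch::kCUDA)});
          return 0;
        }
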
  • 6
    OpenCV

    Open Source Computer Vision Library

    OpenCV (Open Source Computer Vision Library) is a comprehensive open-source library for computer vision, machine learning, and image processing. It enables developers to build real-time vision applications ranging from facial recognition to object tracking. OpenCV supports a wide range of programming languages including C++, Python, and Java, and is optimized for both CPU and GPU operations.
    Downloads: 28 This Week
    Last Update:
    See Project
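
    As a small example of the kind of building block OpenCV provides for vision pipelines, the C++ snippet below loads an image, converts it to grayscale, and runs Canny edge detection; the file names and thresholds are arbitrary placeholders.

        // Minimal OpenCV C++ sketch: grayscale conversion plus Canny edge detection.
        #include <opencv2/opencv.hpp>

        int main()
        {
          cv::Mat img = cv::imread("input.jpg");      // placeholder input path
          if (img.empty())
            return 1;                                 // bail out if the file is missing

          cv::Mat gray, edges;
          cv::cvtColor(img, gray, cv::COLOR_BGR2GRAY);
          cv::Canny(gray, edges, 50.0, 150.0);        // thresholds chosen arbitrarily

          cv::imwrite("edges.png", edges);            // placeholder output path
          return 0;
        }
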
  • 7
    pytorch-cpp

    C++ Implementation of PyTorch Tutorials for Everyone

    ...Section 1 to 3) The interactive tutorials currently run on the LibTorch nightly version. LibTorch only supports 64-bit Windows, and an x64 generator needs to be specified. All required script module files for pre-trained models/weights are created during the build, which requires an installed python3 with PyTorch and torchvision. You can choose to build only the tutorials in one of the categories basics, intermediate, advanced, or popular. You can also build and run the tutorials (on CPU) in a Docker container using the provided Dockerfile and docker-compose.yml files.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 8
    FAY

    Framework for building AI-powered interactive digital humans and agents

    Fay is an open source framework designed to build and deploy interactive digital humans powered by large language models. It acts as a middleware layer that connects digital character technologies with conversational AI systems and business applications. Fay supports various types of digital humans, including 2.5D and 3D avatars, and can be integrated with applications running on mobile devices, PCs, web platforms, and embedded systems.
    Downloads: 6 This Week
    Last Update:
    See Project
  • 9
    ONNX Runtime

    ONNX Runtime: cross-platform, high performance ML inferencing

    ONNX Runtime is a cross-platform inference and training machine-learning accelerator. ONNX Runtime inference can enable faster customer experiences and lower costs, supporting models from deep learning frameworks such as PyTorch and TensorFlow/Keras as well as classical machine learning libraries such as scikit-learn, LightGBM, XGBoost, etc. ONNX Runtime is compatible with different hardware, drivers, and operating systems, and provides optimal performance by leveraging hardware accelerators...
    Downloads: 50 This Week
    Last Update:
    See Project
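
    A minimal sketch of running a model with the ONNX Runtime C++ API follows; the model path, tensor shape, and the tensor names "input" and "output" are assumptions that depend on the exported model.

        // Hypothetical ONNX Runtime C++ inference sketch; names and shapes are placeholders.
        #include <onnxruntime_cxx_api.h>
        #include <vector>

        int main()
        {
          Ort::Env env(ORT_LOGGING_LEVEL_WARNING, "demo");
          Ort::SessionOptions opts;
          Ort::Session session(env, "model.onnx", opts);        // placeholder model path

          // Build a dummy NCHW float input; the real shape comes from the model.
          std::vector<int64_t> shape{1, 3, 224, 224};
          std::vector<float> data(1 * 3 * 224 * 224, 0.0f);
          auto mem = Ort::MemoryInfo::CreateCpu(OrtArenaAllocator, OrtMemTypeDefault);
          Ort::Value input = Ort::Value::CreateTensor<float>(
              mem, data.data(), data.size(), shape.data(), shape.size());

          // Input and output tensor names are model-specific assumptions here.
          const char* in_names[]  = {"input"};
          const char* out_names[] = {"output"};
          auto outputs = session.Run(Ort::RunOptions{nullptr},
                                     in_names, &input, 1, out_names, 1);
          float* scores = outputs[0].GetTensorMutableData<float>();
          (void)scores;                                          // use the results here
          return 0;
        }
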
  • 10
    AsmJit

    Low-latency machine code generation

    ...The library supports multiple architectures, including x86 and x64, making it versatile for cross-platform development. It is commonly used in applications such as emulators, compilers, and high-performance computing systems where runtime optimization is essential. asmjit emphasizes low latency and efficiency, ensuring that generated code executes quickly without significant overhead. Its modular design allows developers to integrate it into various systems with minimal friction. Overall, asmjit bridges the gap between high-level programming and low-level execution by enabling efficient runtime code generation.
    Downloads: 0 This Week
    Last Update:
    See Project
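
    The sketch below shows the basic runtime code-generation flow the description refers to, closely following the pattern in AsmJit's documentation (JitRuntime, CodeHolder, x86::Assembler); details such as the CodeHolder initialization call differ slightly between AsmJit versions.

        // Minimal AsmJit sketch: JIT-assemble a function that returns 42, then call it.
        #include <asmjit/asmjit.h>
        #include <cstdio>

        using namespace asmjit;

        typedef int (*Func)();

        int main()
        {
          JitRuntime rt;                       // owns the executable memory
          CodeHolder code;
          code.init(rt.environment());         // newer API; older releases use rt.codeInfo()

          x86::Assembler a(&code);             // emit raw x86-64 machine code
          a.mov(x86::eax, 42);
          a.ret();

          Func fn = nullptr;
          if (rt.add(&fn, &code) != kErrorOk)  // relocate and make the code executable
            return 1;

          std::printf("%d\n", fn());           // prints 42
          rt.release(fn);
          return 0;
        }
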
  • 11
    OpenMLDB

    OpenMLDB is an open-source machine learning database

    ...It prioritizes feature engineering using SQL and offers a feature platform that provides consistent features for training and inference. Real-time features are essential for many machine learning applications, such as real-time personalized recommendations and risk analytics. However, a feature engineering script developed by data scientists (a Python script in most cases) usually cannot be directly deployed into production for online inference, because it cannot meet engineering requirements such as low latency, high throughput, and high availability.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 12
    dlib

    Toolkit for making machine learning and data analysis applications

    Dlib is a modern C++ toolkit containing machine learning algorithms and tools for creating complex software in C++ to solve real world problems. It is used in both industry and academia in a wide range of domains including robotics, embedded devices, mobile phones, and large high performance computing environments. Dlib's open source licensing allows you to use it in any application, free of charge. It has good unit test coverage; the ratio of unit test lines of code to library lines of code is...
    Downloads: 4 This Week
    Last Update:
    See Project
  • 13
    DALI

    A GPU-accelerated library containing highly optimized building blocks

    The NVIDIA Data Loading Library (DALI) is a library for data loading and pre-processing to accelerate deep learning applications. It provides a collection of highly optimized building blocks for loading and processing image, video and audio data. It can be used as a portable drop-in replacement for built-in data loaders and data iterators in popular deep learning frameworks. Deep learning applications require complex, multi-stage data processing pipelines that include loading, decoding,...
    Downloads: 1 This Week
    Last Update:
    See Project
  • 14
    ONNX

    Open standard for machine learning interoperability

    ONNX is an open format built to represent machine learning models. ONNX defines a common set of operators - the building blocks of machine learning and deep learning models - and a common file format to enable AI developers to use models with a variety of frameworks, tools, runtimes, and compilers. Open Neural Network Exchange (ONNX) is an open ecosystem that empowers AI developers to choose the right tools as their project evolves. ONNX provides an open source format for AI models, both...
    Downloads: 9 This Week
    Last Update:
    See Project
  • 15
    Compute Library

    The Compute Library is a set of computer vision and machine learning functions

    The Compute Library is a set of computer vision and machine learning functions optimized for both Arm CPUs and GPUs using SIMD technologies. The library provides superior performance to other open-source alternatives and immediate support for new Arm® technologies e.g. SVE2.
    Downloads: 4 This Week
    Last Update:
    See Project
  • 16
    Vespa

    The open big data serving engine

    ...You can even combine both approaches efficiently in the same query, something no other engine can do. Recommendation, personalization, and targeting involve evaluating recommender models over content items to select the best ones. Vespa lets you build applications that do this online, typically combining fast vector search and filtering with evaluation of machine-learned models over the items. This makes it possible to make recommendations specifically for each user or situation, using completely up-to-date information.
    Downloads: 8 This Week
    Last Update:
    See Project
  • 17
    ggml

    Tensor library for machine learning

    ...It is widely used as a foundational component in projects that run large language models locally, including tools that perform inference for transformer-based models. The library also implements optimization algorithms and computation graph functionality so developers can build training and inference workflows directly on top of its tensor operations.
    Downloads: 4 This Week
    Last Update:
    See Project
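
    To illustrate the computation-graph workflow described above, here is a hedged sketch using ggml's C API from C++. The function names follow recent ggml releases (ggml_new_graph, ggml_graph_compute_with_ctx) and may differ in other versions; the tensor sizes and fill values are arbitrary.

        // Hypothetical ggml sketch: build and evaluate a tiny matmul graph on the CPU.
        #include "ggml.h"

        int main()
        {
          // Reserve a fixed memory arena for tensors, the graph, and work buffers.
          struct ggml_init_params params = { /*mem_size=*/16 * 1024 * 1024,
                                             /*mem_buffer=*/NULL,
                                             /*no_alloc=*/false };
          struct ggml_context * ctx = ggml_init(params);

          // a is 4x3 (ne0=4, ne1=3), b is 4x2; they share the inner dimension 4.
          struct ggml_tensor * a = ggml_new_tensor_2d(ctx, GGML_TYPE_F32, 4, 3);
          struct ggml_tensor * b = ggml_new_tensor_2d(ctx, GGML_TYPE_F32, 4, 2);
          ggml_set_f32(a, 1.0f);
          ggml_set_f32(b, 2.0f);

          // Operations only define the graph; evaluation happens when it is computed.
          struct ggml_tensor * c = ggml_mul_mat(ctx, a, b);
          struct ggml_cgraph * gf = ggml_new_graph(ctx);
          ggml_build_forward_expand(gf, c);
          ggml_graph_compute_with_ctx(ctx, gf, /*n_threads=*/4);

          ggml_free(ctx);
          return 0;
        }
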
  • 18
    Zvec

    A lightweight, lightning-fast, in-process vector database

    ...Zvec excels at approximate nearest neighbor search and retrieval tasks that power features like semantic search, recommendation systems, and retrieval-augmented generation (RAG) setups. Its performance benchmarks show it achieving high queries-per-second and fast index build times compared to similar tools. Because it runs in-process, developers can embed it in native apps, microservices, or edge computing scenarios where traditional server-based vector databases might be overkill.
    Downloads: 3 This Week
    Last Update:
    See Project
  • 19
    RunAnywhere

    Production-ready toolkit to run AI locally

    ...By running models entirely on device, the platform eliminates network latency and protects user data because information does not leave the device. The SDK supports popular open-source models such as Llama, Mistral, and Qwen, enabling developers to build AI-powered features such as chat interfaces and voice assistants with minimal external dependencies. It also includes integrated pipelines that combine speech-to-text, large language models, and text-to-speech into a complete conversational system.
    Downloads: 3 This Week
    Last Update:
    See Project
  • 20
    ArrayFire

    ArrayFire, a general-purpose GPU library

    ...Data structures in ArrayFire are smartly managed to avoid costly memory transfers and to take advantage of each performance feature provided by the underlying hardware. The community of ArrayFire developers invites you to build with us if you're interested and able to write top performing tensor functions. Together we can fulfill The ArrayFire Mission under an excellent Code of Conduct that promotes a respectful and friendly building experience. Rigorous benchmarks and tests ensuring top performance and numerical accuracy. Cross-platform compatibility with support for CUDA, OpenCL, and native CPU on Windows, Mac, and Linux. ...
    Downloads: 2 This Week
    Last Update:
    See Project
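
    As a small illustration of ArrayFire's array-centric style, the sketch below creates a random matrix on whatever backend is active (CUDA, OpenCL, or CPU) and multiplies it with its transpose; the sizes are arbitrary.

        // Minimal ArrayFire sketch: backend-agnostic matrix multiply and reduction.
        #include <arrayfire.h>

        int main()
        {
          af::info();                              // print the active backend and device

          af::array a = af::randu(512, 512);       // uniform random 512x512 matrix
          af::array b = af::matmul(a, a.T());      // A * A^T on the active backend
          af::array s = af::sum(b, /*dim=*/0);     // column sums, still on the device

          af_print(s(af::seq(0, 4)));              // bring the first few values to host
          return 0;
        }
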
  • 21
    rwkv.cpp

    INT4/INT5/INT8 and FP16 inference on CPU for RWKV language model

    Besides the usual FP32, it supports FP16, quantized INT4, INT5 and INT8 inference. This project is focused on CPU, but cuBLAS is also supported. RWKV is a novel large language model architecture, with the largest model in the family having 14B parameters. In contrast to Transformer with O(n^2) attention, RWKV requires only state from the previous step to calculate logits. This makes RWKV very CPU-friendly on large context lengths.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 22
    CUTLASS

    CUDA Templates for Linear Algebra Subroutines

    CUTLASS is a collection of CUDA C++ template abstractions for implementing high-performance matrix-multiplication (GEMM) and related computations at all levels and scales within CUDA. It incorporates strategies for hierarchical decomposition and data movement similar to those used to implement cuBLAS and cuDNN. CUTLASS decomposes these "moving parts" into reusable, modular software components abstracted by C++ template classes. These thread-wide, warp-wide, block-wide, and device-wide...
    Downloads: 1 This Week
    Last Update:
    See Project
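
    To show what the device-wide template abstraction looks like in practice, here is a hedged sketch modeled on CUTLASS's basic single-precision, column-major GEMM example; it must be compiled with nvcc, the problem size and scalars are arbitrary, and error handling is minimal.

        // Hypothetical CUTLASS sketch: C = alpha * A * B + beta * C via the device-wide Gemm template.
        #include <cuda_runtime.h>
        #include "cutlass/gemm/device/gemm.h"

        int main()
        {
          int M = 128, N = 128, K = 128;
          float alpha = 1.0f, beta = 0.0f;

          // Allocate and zero device matrices (column-major leading dimensions M, K, M).
          float *A, *B, *C;
          cudaMalloc(&A, sizeof(float) * M * K);
          cudaMalloc(&B, sizeof(float) * K * N);
          cudaMalloc(&C, sizeof(float) * M * N);
          cudaMemset(A, 0, sizeof(float) * M * K);
          cudaMemset(B, 0, sizeof(float) * K * N);
          cudaMemset(C, 0, sizeof(float) * M * N);

          using ColumnMajor = cutlass::layout::ColumnMajor;
          using Gemm = cutlass::gemm::device::Gemm<float, ColumnMajor,   // A
                                                   float, ColumnMajor,   // B
                                                   float, ColumnMajor>;  // C

          Gemm gemm_op;
          Gemm::Arguments args({M, N, K},        // problem size (M x N x K)
                               {A, M}, {B, K},   // A and B with their leading dimensions
                               {C, M}, {C, M},   // source and destination C
                               {alpha, beta});   // epilogue scalars
          cutlass::Status status = gemm_op(args);

          cudaFree(A); cudaFree(B); cudaFree(C);
          return status == cutlass::Status::kSuccess ? 0 : 1;
        }
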
  • 23
    MIVisionX

    Set of comprehensive computer vision & machine intelligence libraries

    The MIVisionX toolkit is a set of comprehensive computer vision and machine intelligence libraries, utilities, and applications bundled into a single toolkit. AMD MIVisionX delivers a highly optimized open-source implementation of the Khronos OpenVX™ and OpenVX™ Extensions, along with a Convolution Neural Net Model Compiler & Optimizer supporting ONNX and Khronos NNEF™ exchange formats. The toolkit allows for rapid prototyping and deployment of optimized computer vision and machine learning...
    Downloads: 2 This Week
    Last Update:
    See Project
  • 24
    TensorRT Backend For ONNX

    ONNX-TensorRT: TensorRT backend for ONNX

    ...For building within Docker, we recommend using and setting up the Docker containers as instructed in the main TensorRT repository. Note that this project has a dependency on CUDA. By default, the build will look in /usr/local/cuda for the CUDA toolkit installation. If your CUDA path is different, override the default path. ONNX models can be converted to serialized TensorRT engines using the onnx2trt executable.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 25
    AWS IoT FleetWise Edge

    AWS IoT FleetWise Edge Agent

    Easily collect, transform, and transfer vehicle data to the cloud in near-real-time. AWS IoT FleetWise makes it easy and cost-effective for automakers to collect, transform, and transfer vehicle data to the cloud in near-real-time and use it to build applications with analytics and machine learning that improve vehicle quality, safety, and autonomy. Train autonomous vehicles (AVs) and advanced driver assistance systems (ADAS) with camera data collected from a fleet of production vehicles. Improve electric vehicle (EV) battery range estimates with crowdsourced environmental data, such as weather and driving conditions, from nearby vehicles. ...
    Downloads: 1 This Week
    Last Update:
    See Project