Showing 29 open source projects for "intel"

  • 1
    Intel Extension for PyTorch

    A Python package for extending the official PyTorch

    Intel® Extension for PyTorch* extends PyTorch* with up-to-date features and optimizations for an extra performance boost on Intel hardware. Optimizations take advantage of Intel® Advanced Vector Extensions 512 (Intel® AVX-512) Vector Neural Network Instructions (VNNI) and Intel® Advanced Matrix Extensions (Intel® AMX) on Intel CPUs, as well as Intel Xe Matrix Extensions (XMX) AI engines on Intel discrete GPUs. A brief usage sketch follows this entry.
    Downloads: 1 This Week
    Last Update:
    See Project
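    A minimal CPU-inference sketch (not part of the project listing), assuming the intel_extension_for_pytorch and torchvision packages are installed; the ResNet-50 model and input shape are only illustrative:

        import torch
        import torchvision.models as models
        import intel_extension_for_pytorch as ipex

        model = models.resnet50(weights=None).eval()
        data = torch.rand(1, 3, 224, 224)

        # ipex.optimize applies operator fusion and memory-layout changes for Intel hardware;
        # bfloat16 enables the AMX/AVX-512 BF16 paths on CPUs that support them.
        model = ipex.optimize(model, dtype=torch.bfloat16)

        with torch.no_grad(), torch.cpu.amp.autocast():
            output = model(data)
        print(output.shape)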
  • 2
    oneDNN

    oneAPI Deep Neural Network Library (oneDNN)

    This software was previously known as Intel(R) Math Kernel Library for Deep Neural Networks (Intel(R) MKL-DNN) and Deep Neural Network Library (DNNL). oneAPI Deep Neural Network Library (oneDNN) is an open-source cross-platform performance library of basic building blocks for deep learning applications. oneDNN is part of oneAPI. The library is optimized for Intel(R) Architecture Processors, Intel Processor Graphics and Xe Architecture graphics. oneDNN has experimental support for the following architectures: Arm* 64-bit Architecture (AArch64), NVIDIA* GPU, OpenPOWER* Power ISA (PPC64), IBMz* (s390x), and RISC-V. oneDNN is intended for deep learning applications and framework developers interested in improving application performance on Intel CPUs and GPUs. ...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 3
    OpenVINO

    OpenVINO™ Toolkit repository

    ...Boost deep learning performance in computer vision, automatic speech recognition, natural language processing and other common tasks. Use models trained with popular frameworks like TensorFlow, PyTorch and more. Reduce resource demands and efficiently deploy on a range of Intel® platforms from edge to cloud. This open-source version includes several components: namely Model Optimizer, OpenVINO™ Runtime, Post-Training Optimization Tool, as well as CPU, GPU, MYRIAD, multi-device and heterogeneous plugins to accelerate deep learning inferencing on Intel® CPUs and Intel® Processor Graphics. It supports pre-trained models from the Open Model Zoo, along with 100+ open source and public models in popular formats such as TensorFlow, ONNX, PaddlePaddle, MXNet, Caffe, and Kaldi. A brief inference sketch follows this entry.
    Downloads: 22 This Week
    Last Update:
    See Project
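    A minimal inference sketch (not part of the project listing), assuming a recent openvino Python package and an IR model converted beforehand; "model.xml" and the input shape are placeholders:

        import numpy as np
        import openvino as ov

        core = ov.Core()
        model = core.read_model("model.xml")          # placeholder path to an IR model
        compiled = core.compile_model(model, device_name="CPU")

        input_tensor = np.random.rand(1, 3, 224, 224).astype(np.float32)
        # Run a single synchronous inference and pick the first output.
        result = compiled([input_tensor])[compiled.output(0)]
        print(result.shape)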
  • 4
    Neural Speed

    An innovative library for efficient LLM inference

    neural-speed is an innovative library developed by Intel to enhance the efficiency of Large Language Model (LLM) inference through low-bit quantization techniques. By reducing the precision of model weights and activations, neural-speed aims to accelerate inference while maintaining model accuracy, making it suitable for deployment in resource-constrained environments.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 5
    Constellation

    Constellation is the first Confidential Kubernetes

    ...It allows developers to run Kubernetes-native applications in secure enclaves across multiple machines, ensuring end-to-end encryption and trusted execution for workloads. Built on top of Kubernetes and using technologies like Intel SGX and Gramine, Constellation guarantees that not even infrastructure operators can access data or code, making it ideal for privacy-sensitive workloads and multi-party computation scenarios.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 6
    Jan.ai

    Open source alternative to ChatGPT that runs 100% offline

    Jan.ai is an open-source, privacy-focused AI assistant that serves as an alternative to ChatGPT, running completely locally on your device. It allows you to download and run LLMs (large language models) offline while also offering optional integration with cloud-based model providers, giving you full control over your data and AI interactions. Download and run LLMs (Llama, Gemma, Qwen, GPT-oss etc.) from HuggingFace. Connect to GPT models via OpenAI, Claude models via Anthropic, Mistral,... A brief local-API sketch follows this entry.
    Downloads: 36 This Week
    Last Update:
    See Project
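    A hypothetical sketch of talking to Jan's local, OpenAI-compatible API server with the openai Python client; the base URL, port, and model id are assumptions, so check the local API settings in Jan before using them:

        from openai import OpenAI

        # Jan runs locally, so the API key is a placeholder; adjust base_url to match Jan's settings.
        client = OpenAI(base_url="http://localhost:1337/v1", api_key="not-needed-locally")

        reply = client.chat.completions.create(
            model="llama3.2-3b-instruct",  # placeholder id for a model downloaded inside Jan
            messages=[{"role": "user", "content": "Summarize what Jan.ai does in one sentence."}],
        )
        print(reply.choices[0].message.content)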
  • 7
    NNCF

    Neural Network Compression Framework for enhanced OpenVINO

    NNCF (Neural Network Compression Framework) is an optimization toolkit for deep learning models, designed to apply quantization, pruning, and other techniques to improve inference efficiency. A brief post-training quantization sketch follows this entry.
    Downloads: 0 This Week
    Last Update:
    See Project
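    A minimal post-training quantization sketch (not part of the project listing), assuming the nncf, torch, and torchvision packages; the model and the tiny random calibration set are only illustrative:

        import nncf
        import torch
        import torchvision.models as models
        from torch.utils.data import DataLoader, TensorDataset

        model = models.resnet18(weights=None).eval()

        # A small calibration loader stands in for real data; the transform function
        # extracts the model input from each batch.
        calib_loader = DataLoader(TensorDataset(torch.rand(8, 3, 224, 224)), batch_size=1)
        calibration_dataset = nncf.Dataset(calib_loader, lambda batch: batch[0])

        quantized_model = nncf.quantize(model, calibration_dataset)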
  • 8
    CTranslate2

    Fast inference engine for Transformer models

    ...The model serialization and computation support weights with reduced precision: 16-bit floating points (FP16), 16-bit integers (INT16), and 8-bit integers (INT8). The project supports x86-64 and AArch64/ARM64 processors and integrates multiple backends that are optimized for these platforms: Intel MKL, oneDNN, OpenBLAS, Ruy, and Apple Accelerate. A brief translation sketch follows this entry.
    Downloads: 2 This Week
    Last Update:
    See Project
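    A minimal translation sketch (not part of the project listing), assuming the ctranslate2 package and a model directory produced by the project's conversion tools; the directory name and tokens are placeholders:

        import ctranslate2

        translator = ctranslate2.Translator("ende_ctranslate2/", device="cpu")

        # translate_batch expects pre-tokenized input, e.g. SentencePiece pieces.
        results = translator.translate_batch([["▁Hello", "▁world", "!"]])
        print(results[0].hypotheses[0])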
  • 9
    GoCV

    Go package for computer vision using OpenCV 4 and beyond

    GoCV gives programmers who use the Go programming language access to the OpenCV 4 computer vision library. The GoCV package supports the latest releases of Go and OpenCV v4.5.4 on Linux, macOS, and Windows. Our mission is to make the Go language a “first-class” client compatible with the latest developments in the OpenCV ecosystem. Computer Vision (CV) is the ability of computers to process visual information and perform tasks normally performed by humans. CV software...
    Downloads: 1 This Week
    Last Update:
    See Project
  • 10
    OpenFace Face Recognition

    Face recognition with deep neural networks

    ...Torch allows the network to be executed on a CPU or with CUDA. This research was supported by the National Science Foundation (NSF) under grant number CNS-1518865. Additional support was provided by the Intel Corporation, Google, Vodafone, NVIDIA, and the Conklin Kistler family fund. Any opinions, findings, conclusions or recommendations expressed in this material are those of the authors and should not be attributed to their employers or funding sources. Accuracies from research papers have just begun to surpass human accuracies on some benchmarks. ...
    Downloads: 1 This Week
    Last Update:
    See Project
  • 11
    AWS Deep Learning Containers

    A set of Docker images for training and serving models in TensorFlow

    AWS Deep Learning Containers (DLCs) are a set of Docker images for training and serving models in TensorFlow, TensorFlow 2, PyTorch, and MXNet. Deep Learning Containers provide optimized environments with TensorFlow and MXNet, Nvidia CUDA (for GPU instances), and Intel MKL (for CPU instances) libraries, and are available in the Amazon Elastic Container Registry (Amazon ECR). The AWS DLCs are used in Amazon SageMaker as the default vehicles for SageMaker jobs such as training, inference, and transforms. They have also been tested for machine learning workloads on the Amazon EC2, Amazon ECS, and Amazon EKS services. A brief image-lookup sketch follows this entry. ...
    Downloads: 1 This Week
    Last Update:
    See Project
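    A minimal image-lookup sketch (not part of the project listing), assuming the sagemaker Python SDK; the framework version, Python version, and instance type are illustrative and vary by region and over time:

        from sagemaker import image_uris

        image_uri = image_uris.retrieve(
            framework="pytorch",
            region="us-east-1",
            version="2.1",
            py_version="py310",
            image_scope="training",
            instance_type="ml.c5.xlarge",  # a CPU instance resolves to the Intel MKL-enabled CPU image
        )
        print(image_uri)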
  • 12
    Armadillo

    fast C++ library for linear algebra & scientific computing

    * Fast C++ library for linear algebra (matrix maths) and scientific computing
    * Easy to use functions and syntax, deliberately similar to Matlab / Octave
    * Uses template meta-programming techniques to increase efficiency
    * Provides user-friendly wrappers for OpenBLAS, Intel MKL, LAPACK, ATLAS, ARPACK, SuperLU and FFTW libraries
    * Useful for machine learning, pattern recognition, signal processing, bioinformatics, statistics, finance, etc.
    * Downloads: http://arma.sourceforge.net/download.html
    * Documentation: http://arma.sourceforge.net/docs.html
    * Bug reports: http://arma.sourceforge.net/faq.html
    * Git repo: https://gitlab.com/conradsnicta/armadillo-code
    Downloads: 2,249 This Week
    Last Update:
    See Project
  • 13
    Dead Deer 3.14.56.2025

    3D modeler, 3D game maker, 3D demo maker

    ...Android .NED Player (install the APK and "open with" from a file manager), plus an APK generator for Android. Supports Direct3D9 (SM3), Direct3D10 (SM4), Direct3D11 (SM5), Direct3D12 (SM5), OpenGL and GLSL, OpenGL ES 2/3, Apple Metal, Retina and UHD displays; Intel x86/64, ARMv7/ARM64 and RISC-V; Linux (Ubuntu/wxWidgets(Gtk3)); iOS/iPadOS (with Xcode, GLES20/Metal); Windows Phone; Windows VR (Steam/Oculus); WebAsm/WebGL; UWP Windows/XBOX; SDL2; Linux ARM 32/64 and RISC-V; OpenXR (Quest?/Pico).
    Downloads: 350 This Week
    Last Update:
    See Project
  • 14
    LlamaChat

    Chat with your favourite LLaMA models in a native macOS app

    Chat with your favourite LLaMA models, right on your Mac. LlamaChat is a macOS app that allows you to chat with LLaMA, Alpaca, and GPT4All models all running locally on your Mac.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 15
    SageMaker MXNet Inference Toolkit

    Toolkit for allowing inference and serving with MXNet in SageMaker

    ...AWS Deep Learning Containers (DLCs) are a set of Docker images for training and serving models in TensorFlow, TensorFlow 2, PyTorch, and MXNet. Deep Learning Containers provide optimized environments with TensorFlow and MXNet, Nvidia CUDA (for GPU instances), and Intel MKL (for CPU instances) libraries, and are available in the Amazon Elastic Container Registry (Amazon ECR). The AWS DLCs are used in Amazon SageMaker as the default vehicles for SageMaker jobs such as training, inference, and transforms. They have also been tested for machine learning workloads on the Amazon EC2, Amazon ECS, and Amazon EKS services.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 16
    Flashlight library

    A C++ standalone library for machine learning

    ...Each component can be incrementally built by specifying the correct build options. Flashlight is most easily built and installed with vcpkg. Both the CUDA and CPU backends are supported with vcpkg. For either backend, first install Intel MKL. Flashlight app binaries are also built for the selected features and are installed into the vcpkg install tree's tools directory.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 17
    libfacedetection

    Library for face detection in images

    ...You can compile the source code under Windows, Linux, ARM and any platform with a C++ compiler. SIMD instructions are used to speed up the detection. You can enable AVX2 if you use an Intel CPU, or NEON for ARM. The model file has also been provided in the directory ./models/. The files examples/detect-image.cpp and examples/detect-camera.cpp show how to use the library. The library was trained by libfacedetection.train. You can copy the files in the directory src/ into your project and compile them like the other files in your project. ...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 18
    YOLO ROS

    YOLO ROS: Real-Time Object Detection for ROS

    ...The YOLO packages have been tested under ROS Noetic and Ubuntu 20.04. We also provide branches that work under ROS Melodic, ROS Foxy and ROS2. Darknet on the CPU is fast (approximately 1.5 seconds on an Intel Core i7-6700HQ CPU @ 2.60GHz × 8), but it is roughly 500 times faster on a GPU. You will need an NVIDIA GPU and a CUDA installation. The CMakeLists.txt file automatically detects whether CUDA is installed. CUDA is a parallel computing platform and application programming interface (API) model created by NVIDIA.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 19
    NLP Architect

    A model library for exploring state-of-the-art deep learning

    NLP Architect is an open-source Python library for exploring state-of-the-art deep learning topologies and techniques for optimizing Natural Language Processing and Natural Language Understanding neural networks. The library includes our past and ongoing NLP research and development efforts as part of Intel AI Lab. NLP Architect is designed to be flexible for adding new models, neural network components, and data handling methods, and for easy training and running of models. NLP Architect is a model-oriented library designed to showcase novel and different neural network optimizations. The library contains NLP/NLU-related models per task, different neural network topologies (which are used in models), procedures for simplifying workflows in the library, pre-defined data processors, dataset loaders, and miscellaneous utilities. ...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 20
    exchange-core

    exchange-core

    Ultra-fast matching engine written in Java based on LMAX Disruptor

    ...Designed for high scalability and pauseless 24/7 operation under high-load conditions, providing low-latency responses. A single order book configuration can process 5M operations per second on 10-year-old hardware (Intel® Xeon® X5690) with moderate latency degradation. HFT optimized: the priority metric is the mean latency of a limit-order-move operation (currently ~0.5µs); a cancel operation takes ~0.7µs and placing a new order ~1.0µs. Disk journaling and journal replay support, state snapshots (serialization) and restore operations, LZ4 compression. Lock-free and contention-free order matching and risk control algorithms. ...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 21
    nGraph

    nGraph

    nGraph has moved to OpenVINO

    ...We've also seen performance boosts running workloads that are not included on the list of validated workloads, thanks to nGraph's powerful subgraph pattern matching. Additionally, we have integrated nGraph with PlaidML to provide deep learning performance acceleration on Intel, NVIDIA, and AMD GPUs. nGraph Compiler aims to accelerate the development of AI workloads using any deep learning framework and their deployment to a variety of hardware targets. We strongly believe in providing freedom, performance, and ease of use to AI developers. Our documentation has extensive information about how to use the nGraph Compiler stack to create an nGraph computational graph, integrate custom frameworks, and interact with supported backends.
    Downloads: 2 This Week
    Last Update:
    See Project
  • 22
    Intel neon

    Intel neon

    Intel® Nervana™ reference deep learning framework

    ...Designed for ease of use and extensibility. See the new features in our latest release. We want to highlight that neon v2.0.0+ has been optimized for much better performance on CPUs by enabling the Intel Math Kernel Library (MKL). The DNN (Deep Neural Networks) component of MKL that is used by neon is provided free of charge and downloaded automatically as part of the neon installation. The GPU backend is selected by default if a compatible GPU resource is found on the system. The Intel Math Kernel Library takes advantage of the parallelization and vectorization capabilities of Intel Xeon and Xeon Phi systems. A brief backend-selection sketch follows this entry. ...
    Downloads: 3 This Week
    Last Update:
    See Project
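    A brief backend-selection sketch, written from memory of neon's gen_backend API and therefore an assumption rather than a verified snippet; it picks the MKL-accelerated CPU backend instead of the default GPU backend:

        from neon.backends import gen_backend

        # Select the MKL CPU backend explicitly; batch_size is an arbitrary example value.
        be = gen_backend(backend='mkl', batch_size=128)
        print(be)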
  • 23

    LightPCC

    Parallel pairwise correlation computation on Intel Xeon Phi clusters

    The first parallel and distributed library for pairwise correlation/dependence computation on Intel Xeon Phi clusters. This library is written in C++ template classes and achieves high speed by exploring the SIMD-instruction-level and thread-level parallelism within Xeon Phis as well as accelerator-level parallelism among multiple Xeon Phis. To facilitate balanced workload distribution, we have proposed a general framework for symmetric all-pairs computation by building provable bijective functions between job identifier and coordinate space for the first time.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 24

    Accelerated Feature Extraction Tool

    A fast GPU accelerated feature extraction software for speech analysis

    ...The tool is specially designed to process very large audio data sets. It uses GPU acceleration if a compatible GPU is available (both CUDA and OpenCL are supported, on NVIDIA, AMD, and Intel GPUs). The CPU SSE intrinsic instruction set is used when no compatible GPU is present. The output files are stored in HTK format. The software is developed at the Department of Cybernetics at the University of West Bohemia in Pilsen.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 25
    OpenJCV

    The Java parallel to the popular Intel computer vision library, OpenCV. OpenJCV = Open Java Computer Vision.
    Downloads: 0 This Week
    Last Update:
    See Project