Showing 36 open source projects for "intel-opencl-icd"

  • 1
    Intel Extension for PyTorch

    A Python package for extending the official PyTorch

    Intel® Extension for PyTorch* extends PyTorch* with up-to-date feature optimizations for an extra performance boost on Intel hardware. Optimizations take advantage of Intel® Advanced Vector Extensions 512 (Intel® AVX-512) Vector Neural Network Instructions (VNNI) and Intel® Advanced Matrix Extensions (Intel® AMX) on Intel CPUs, as well as Intel Xe Matrix Extensions (XMX) AI engines on Intel discrete GPUs. Moreover, Intel® Extension for PyTorch* provides easy GPU acceleration for Intel discrete...
    Downloads: 0 This Week
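    As a companion to the entry above, a minimal, illustrative sketch of a typical Intel® Extension for PyTorch CPU-inference flow; the torchvision model and the bfloat16 dtype are assumptions for the example, not requirements of the library.

        import torch
        import intel_extension_for_pytorch as ipex   # assumes the extension is installed
        from torchvision import models               # torchvision model used only for illustration

        model = models.resnet50(weights=None).eval()
        data = torch.rand(1, 3, 224, 224)

        # ipex.optimize applies operator fusion and layout/dtype optimizations;
        # bfloat16 is one path that can exercise AVX-512/AMX on supporting CPUs.
        model = ipex.optimize(model, dtype=torch.bfloat16)

        with torch.no_grad(), torch.cpu.amp.autocast(dtype=torch.bfloat16):
            output = model(data)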
  • 2
    whisper.cpp

    Port of OpenAI's Whisper model in C/C++

    High-performance inference of OpenAI's Whisper automatic speech recognition (ASR) model. Supported platforms: macOS (Intel and Arm), iOS, Android, Linux / FreeBSD, WebAssembly, Windows (MSVC and MinGW), and Raspberry Pi.
    Downloads: 35 This Week
  • 3
    OpenVINO

    OpenVINO™ Toolkit repository

    OpenVINO™ is an open-source toolkit for optimizing and deploying AI inference. Boost deep learning performance in computer vision, automatic speech recognition, natural language processing and other common tasks. Use models trained with popular frameworks like TensorFlow, PyTorch and more. Reduce resource demands and efficiently deploy on a range of Intel® platforms from edge to cloud. This open-source version includes several components, namely the Model Optimizer, OpenVINO™ Runtime, Post-Training...
    Downloads: 21 This Week
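    A minimal sketch of OpenVINO's Python inference flow as described above; the model path and input shape are placeholders, assuming an IR model already produced by the conversion tooling.

        import numpy as np
        from openvino.runtime import Core

        core = Core()
        model = core.read_model("model.xml")          # placeholder path to an IR model
        compiled = core.compile_model(model, "CPU")   # "GPU" would target an Intel GPU instead

        input_tensor = np.random.rand(1, 3, 224, 224).astype(np.float32)  # placeholder input
        result = compiled([input_tensor])[compiled.output(0)]
        print(result.shape)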
  • 4
    CTranslate2

    Fast inference engine for Transformer models

    ... optimizations: layer fusion, padding removal, batch reordering, in-place operations, caching mechanism, etc. The model serialization and computation support weights with reduced precision: 16-bit floating points (FP16), 16-bit integers (INT16), and 8-bit integers (INT8). The project supports x86-64 and AArch64/ARM64 processors and integrates multiple backends that are optimized for these platforms: Intel MKL, oneDNN, OpenBLAS, Ruy, and Apple Accelerate.
    Downloads: 6 This Week
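    An illustrative sketch of the CTranslate2 Python API with reduced-precision weights; the model directory and the SentencePiece-style tokens are placeholders and assume a model already converted to the CTranslate2 format.

        import ctranslate2

        # Placeholder directory containing a converted Transformer model.
        translator = ctranslate2.Translator("ende_ctranslate2/", device="cpu",
                                            compute_type="int8")  # run with INT8 weights

        # translate_batch expects pre-tokenized input (the tokens here are illustrative).
        results = translator.translate_batch([["▁Hello", "▁world"]])
        print(results[0].hypotheses[0])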
  • 5
    OpenFace Face Recognition

    Face recognition with deep neural networks

    OpenFace is a Python and Torch implementation of face recognition with deep neural networks and is based on the CVPR 2015 paper FaceNet: A Unified Embedding for Face Recognition and Clustering by Florian Schroff, Dmitry Kalenichenko, and James Philbin at Google. Torch allows the network to be executed on a CPU or with CUDA. This research was supported by the National Science Foundation (NSF) under grant number CNS-1518865. Additional support was provided by the Intel Corporation, Google...
    Downloads: 6 This Week
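    A rough sketch in the style of OpenFace's demo scripts (detect, align, then embed a face); the dlib landmark model, the Torch network file, and the image path are all placeholders.

        import cv2
        import openface

        align = openface.AlignDlib("shape_predictor_68_face_landmarks.dat")  # placeholder dlib model
        net = openface.TorchNeuralNet("nn4.small2.v1.t7", imgDim=96)         # placeholder Torch network

        bgr = cv2.imread("face.jpg")                                         # placeholder image
        rgb = cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB)

        bb = align.getLargestFaceBoundingBox(rgb)
        aligned = align.align(96, rgb, bb,
                              landmarkIndices=openface.AlignDlib.OUTER_EYES_AND_NOSE)
        rep = net.forward(aligned)   # 128-dimensional face embedding
        print(rep.shape)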
  • 6
    ArrayFire

    ArrayFire, a general purpose GPU library

    ... interested and able to write top performing tensor functions. Together we can fulfill The ArrayFire Mission under an excellent Code of Conduct that promotes a respectful and friendly building experience. Rigorous benchmarks and tests ensuring top performance and numerical accuracy. Cross-platform compatibility with support for CUDA, OpenCL, and native CPU on Windows, Mac, and Linux. Built-in visualization functions through Forge.
    Downloads: 1 This Week
  • 7
    oneDNN

    oneAPI Deep Neural Network Library (oneDNN)

    This software was previously known as Intel(R) Math Kernel Library for Deep Neural Networks (Intel(R) MKL-DNN) and Deep Neural Network Library (DNNL). oneAPI Deep Neural Network Library (oneDNN) is an open-source cross-platform performance library of basic building blocks for deep learning applications. oneDNN is part of oneAPI. The library is optimized for Intel(R) Architecture Processors, Intel Processor Graphics and Xe Architecture graphics. oneDNN has experimental support for the following...
    Downloads: 0 This Week
  • 8
    Compute Library

    The Compute Library is a set of computer vision and machine learning

    The Compute Library is a set of computer vision and machine learning functions optimized for both Arm CPUs and GPUs using SIMD technologies. The library provides superior performance to other open-source alternatives and immediate support for new Arm® technologies e.g. SVE2.
    Downloads: 0 This Week
  • 9
    NNCF

    Neural Network Compression Framework for enhanced OpenVINO

    NNCF (Neural Network Compression Framework) is an optimization toolkit for deep learning models, designed to apply quantization, pruning, and other techniques to improve inference efficiency.
    Downloads: 0 This Week
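    A rough sketch of NNCF post-training quantization applied to an OpenVINO model; the model path and the random calibration data are placeholders (real calibration data should be representative inputs).

        import numpy as np
        import nncf
        from openvino.runtime import Core

        core = Core()
        model = core.read_model("model.xml")   # placeholder path to an IR model

        # Placeholder calibration samples; in practice use a few hundred real inputs.
        samples = [np.random.rand(1, 3, 224, 224).astype(np.float32) for _ in range(100)]
        calibration_dataset = nncf.Dataset(samples)

        quantized_model = nncf.quantize(model, calibration_dataset)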
  • 10
    Gorgonia

    Gorgonia is a library that helps facilitate machine learning in Go

    Gorgonia is a library that helps facilitate machine learning in Go. Write and evaluate mathematical equations involving multidimensional arrays easily. If this sounds like Theano or TensorFlow, it's because the idea is quite similar. Specifically, the library is pretty low-level, like Theano, but has higher goals like TensorFlow. The primary goal for Gorgonia is to be a highly performant machine...
    Downloads: 0 This Week
  • 11
    LlamaChat

    Chat with your favourite LLaMA models in a native macOS app

    Chat with your favourite LLaMA models, right on your Mac. LlamaChat is a macOS app that allows you to chat with LLaMA, Alpaca, and GPT4All models all running locally on your Mac.
    Downloads: 0 This Week
  • 12
    AWS Deep Learning Containers

    A set of Docker images for training and serving models in TensorFlow

    AWS Deep Learning Containers (DLCs) are a set of Docker images for training and serving models in TensorFlow, TensorFlow 2, PyTorch, and MXNet. Deep Learning Containers provide optimized environments with TensorFlow and MXNet, Nvidia CUDA (for GPU instances), and Intel MKL (for CPU instances) libraries and are available in the Amazon Elastic Container Registry (Amazon ECR). The AWS DLCs are used in Amazon SageMaker as the default vehicles for your SageMaker jobs such as training, inference...
    Downloads: 0 This Week
  • 13
    GoCV

    Go package for computer vision using OpenCV 4 and beyond

    GoCV gives programmers who use the Go programming language access to the OpenCV 4 computer vision library. The GoCV package supports the latest releases of Go and OpenCV v4.5.4 on Linux, macOS, and Windows. Our mission is to make the Go language a “first-class” client compatible with the latest developments in the OpenCV ecosystem. Computer Vision (CV) is the ability of computers to process visual information and perform tasks normally performed by humans. CV software...
    Downloads: 0 This Week
  • 14
    Armadillo

    fast C++ library for linear algebra & scientific computing

    * Fast C++ library for linear algebra (matrix maths) and scientific computing
    * Easy to use functions and syntax, deliberately similar to Matlab / Octave
    * Uses template meta-programming techniques to increase efficiency
    * Provides user-friendly wrappers for OpenBLAS, Intel MKL, LAPACK, ATLAS, ARPACK, SuperLU and FFTW libraries
    * Useful for machine learning, pattern recognition, signal processing, bioinformatics, statistics, finance, etc.
    * Downloads: http://arma.sourceforge.net...
    Downloads: 3,018 This Week
  • 15
    Dead Deer 3.14.35.2025

    3D modeler, 3D game maker, 3D demo maker

    ... Android .NED Player (install APK and "open with" with file managers), APK generator for Android. Support for: Direct3D9 (SM3), Direct3D10 (SM4), Direct3D11 (SM5), Direct3D12 (SM5), OpenGL and GLSL, OpenGLES 2/3, Apple METAL, Retina, UHD. Intel x86/64, ARMv7/ARM64, RISCV. Linux (Ubuntu/wxWidgets(Gtk3)). iOS/iPadOS (with XCode) (GLES20/METAL), Windows Phone, Windows VR (Steam/Oculus), WebAsm/WebGL, UWP Windows/XBOX, SDL2 Linux ARM 32/64 RISCV, OpenXR (Quest?/Pico). 3.14.35.2025
    Downloads: 120 This Week
  • 16
    VAMS

    Virtual Assistant Maintenance System

    Virtual Assistant Maintenance System, also known as VAMS, is an AI software application that helps users with some computer maintenance issues. Application requirements: Operating System: Windows 8.1/10/11; Processor: Intel Core i5 or equivalent; RAM: 4GB or higher; Free Disk Space: 500MB.
    Downloads: 5 This Week
  • 17
    Bandicoot

    fast C++ library for GPU linear algebra & scientific computing

    * Fast GPU linear algebra library (matrix maths) for the C++ language, aiming towards a good balance between speed and ease of use
    * Provides high-level syntax and functionality deliberately similar to Matlab
    * Provides an API that is aiming to be compatible with Armadillo for easy transition between CPU and GPU linear algebra code
    * Useful for algorithm development directly in C++, or quick conversion of research code into production environments
    * Distributed under the permissive...
    Downloads: 4 This Week
  • 18
    SageMaker MXNet Inference Toolkit

    Toolkit for allowing inference and serving with MXNet in SageMaker

    ... Containers provide optimized environments with TensorFlow and MXNet, Nvidia CUDA (for GPU instances), and Intel MKL (for CPU instances) libraries and are available in the Amazon Elastic Container Registry (Amazon ECR). The AWS DLCs are used in Amazon SageMaker as the default vehicles for your SageMaker jobs such as training, inference, transforms etc. They've been tested for machine learning workloads on Amazon EC2, Amazon ECS and Amazon EKS services as well.
    Downloads: 0 This Week
  • 19
    MACE

    Deep learning inference framework optimized for mobile platforms

    Mobile AI Compute Engine (or MACE for short) is a deep learning inference framework optimized for mobile heterogeneous computing on Android, iOS, Linux and Windows devices. The runtime is optimized with NEON, OpenCL and Hexagon, and the Winograd algorithm is introduced to speed up convolution operations. The initialization is also optimized to be faster. Chip-dependent power options like big.LITTLE scheduling and Adreno GPU hints are included as advanced APIs. UI responsiveness guarantee is sometimes...
    Downloads: 0 This Week
  • 20
    libfacedetection

    Library for face detection in images

    This is an open source library for CNN-based face detection in images. The CNN model has been converted to static variables in C source files. The source code does not depend on any other libraries. What you need is just a C++ compiler. You can compile the source code under Windows, Linux, ARM and any platform with a C++ compiler. SIMD instructions are used to speed up the detection. You can enable AVX2 if you use an Intel CPU, or NEON for ARM. The model file has also been provided in directory...
    Downloads: 1 This Week
  • 21
    YOLO ROS

    YOLO ROS: Real-Time Object Detection for ROS

    ... 20.04. We also provide branches that work under ROS Melodic, ROS Foxy and ROS2. Darknet on the CPU is fast (approximately 1.5 seconds on an Intel Core i7-6700HQ CPU @ 2.60GHz × 8) but it's like 500 times faster on GPU! You'll have to have an Nvidia GPU and you'll have to install CUDA. The CMakeLists.txt file automatically detects if you have CUDA installed or not. CUDA is a parallel computing platform and application programming interface (API) model created by Nvidia.
    Downloads: 0 This Week
  • 22
    NLP Architect

    A model library for exploring state-of-the-art deep learning

    NLP Architect is an open-source Python library for exploring state-of-the-art deep learning topologies and techniques for optimizing Natural Language Processing and Natural Language Understanding neural networks. The library includes our past and ongoing NLP research and development efforts as part of Intel AI Lab. NLP Architect is designed to be flexible for adding new models, neural network components, data handling methods, and for easy training and running models. NLP Architect is a model...
    Downloads: 0 This Week
  • 23
    exchange-core

    Ultra-fast matching engine written in Java based on LMAX Disruptor

    Exchange-core is an open-source market exchange core based on LMAX Disruptor, Eclipse Collections (ex. Goldman Sachs GS Collections), Real Logic Agrona, OpenHFT Chronicle-Wire, LZ4 Java, and Adaptive Radix Trees. Designed for high scalability and pauseless 24/7 operation under high-load conditions and providing low-latency responses. A single order book configuration is capable of processing 5M operations per second on 10-year-old hardware (Intel® Xeon® X5690) with moderate latency degradation. HFT...
    Downloads: 0 This Week
  • 24
    nGraph

    nGraph has moved to OpenVINO

    Frameworks using nGraph Compiler stack to execute workloads have shown up to 45X performance boost when compared to native framework implementations. We've also seen performance boosts running workloads that are not included on the list of Validated workloads, thanks to nGraph's powerful subgraph pattern matching. Additionally, we have integrated nGraph with PlaidML to provide deep learning performance acceleration on Intel, nVidia, & AMD GPUs. nGraph Compiler aims to accelerate developing AI...
    Downloads: 0 This Week
  • 25
    brainCL_chung
    brainCL chung is a small program with a DLL that computes 3- to 4-layer neural networks, bulk-training them to map input data to output data, using OpenCL (CPU or GPU) acceleration; written in easy, fast FreeBASIC.
    Downloads: 0 This Week