Showing 35 open source projects for "intel-opencl-icd"

  • 1
    Intel Extension for PyTorch

    A Python package for extending the official PyTorch

    Intel® Extension for PyTorch* extends PyTorch* with up-to-date features and optimizations for an extra performance boost on Intel hardware. Optimizations take advantage of Intel® Advanced Vector Extensions 512 (Intel® AVX-512) Vector Neural Network Instructions (VNNI) and Intel® Advanced Matrix Extensions (Intel® AMX) on Intel CPUs, as well as Intel Xe Matrix Extensions (XMX) AI engines on Intel discrete GPUs. Moreover, Intel® Extension for PyTorch* provides easy GPU acceleration for Intel discrete...
    Downloads: 1 This Week
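    As a quick orientation (not taken from the listing), a minimal sketch of the typical inference flow, assuming the intel_extension_for_pytorch package is installed; the toy model and input are illustrative only:

        # Minimal sketch: apply Intel optimizations to an eval-mode PyTorch model.
        import torch
        import intel_extension_for_pytorch as ipex

        model = torch.nn.Linear(128, 64).eval()   # illustrative toy model
        data = torch.randn(32, 128)               # illustrative input batch

        # ipex.optimize swaps in Intel-optimized operators/graph passes;
        # afterwards the model is used like any other torch module.
        optimized_model = ipex.optimize(model)

        with torch.no_grad():
            output = optimized_model(data)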
  • 2
    OpenVINO

    OpenVINO™ Toolkit repository

    OpenVINO™ is an open-source toolkit for optimizing and deploying AI inference. Boost deep learning performance in computer vision, automatic speech recognition, natural language processing and other common tasks. Use models trained with popular frameworks like TensorFlow, PyTorch and more. Reduce resource demands and efficiently deploy on a range of Intel® platforms from edge to cloud. This open-source version includes several components, namely the Model Optimizer, OpenVINO™ Runtime, Post-Training...
    Downloads: 20 This Week
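    For orientation, a minimal Python inference sketch; assumptions: the openvino package is installed, "model.xml" is a placeholder IR path, and the model has a single static input:

        # Minimal sketch: load an IR model and run one inference on CPU.
        import numpy as np
        from openvino.runtime import Core

        core = Core()
        model = core.read_model("model.xml")           # placeholder model path
        compiled = core.compile_model(model, "CPU")    # "GPU" would target an Intel GPU via its OpenCL driver

        input_shape = list(compiled.input(0).shape)    # assumes a single static input
        data = np.random.rand(*input_shape).astype(np.float32)
        result = compiled([data])[compiled.output(0)]
        print(result.shape)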
  • 3
    whisper.cpp

    Port of OpenAI's Whisper model in C/C++

    High-performance inference of OpenAI's Whisper automatic speech recognition (ASR) model. Supported platforms: macOS (Intel and Arm), iOS, Android, Linux / FreeBSD, WebAssembly, Windows (MSVC and MinGW), Raspberry Pi.
    Downloads: 18 This Week
  • 4
    OpenFace Face Recognition

    Face recognition with deep neural networks

    OpenFace is a Python and Torch implementation of face recognition with deep neural networks and is based on the CVPR 2015 paper FaceNet: A Unified Embedding for Face Recognition and Clustering by Florian Schroff, Dmitry Kalenichenko, and James Philbin at Google. Torch allows the network to be executed on a CPU or with CUDA. This research was supported by the National Science Foundation (NSF) under grant number CNS-1518865. Additional support was provided by the Intel Corporation, Google...
    Downloads: 6 This Week
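    A rough sketch of the align-then-embed flow, modeled on the project's demo scripts; the model file names are placeholders and the helper names should be verified against the OpenFace documentation:

        # Rough sketch: detect, align, and embed one face (paths are placeholders).
        import cv2
        import openface

        align = openface.AlignDlib("shape_predictor_68_face_landmarks.dat")  # dlib landmark model
        net = openface.TorchNeuralNet("nn4.small2.v1.t7", imgDim=96)         # pretrained Torch network

        bgr = cv2.imread("person.jpg")
        rgb = cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB)

        bb = align.getLargestFaceBoundingBox(rgb)
        aligned = align.align(96, rgb, bb,
                              landmarkIndices=openface.AlignDlib.OUTER_EYES_AND_NOSE)
        embedding = net.forward(aligned)   # 128-dimensional face representation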
  • 5
    ArrayFire

    ArrayFire, a general purpose GPU library

    ... interested and able to write top performing tensor functions. Together we can fulfill The ArrayFire Mission under an excellent Code of Conduct that promotes a respectful and friendly building experience. Rigorous benchmarks and tests ensuring top performance and numerical accuracy. Cross-platform compatibility with support for CUDA, OpenCL, and native CPU on Windows, Mac, and Linux. Built-in visualization functions through Forge.
    Downloads: 2 This Week
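    Given the search term, a small sketch that selects the OpenCL backend through ArrayFire's Python bindings; this assumes the arrayfire package plus a working OpenCL ICD (e.g. intel-opencl-icd) and is an illustration, not the library's primary C++ usage:

        # Sketch: pick the OpenCL backend and do a small device-side computation.
        import arrayfire as af

        af.set_backend('opencl')            # requires an installed OpenCL runtime/ICD
        af.info()                           # print the selected backend and device

        a = af.randu(512, 512)              # random matrix allocated on the device
        b = af.matmul(a, af.transpose(a))
        print(af.sum(b))                    # reduce to a host-side scalar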
  • 6
    CTranslate2

    Fast inference engine for Transformer models

    ... optimizations: layer fusion, padding removal, batch reordering, in-place operations, caching mechanism, etc. The model serialization and computation support weights with reduced precision: 16-bit floating points (FP16), 16-bit integers (INT16), and 8-bit integers (INT8). The project supports x86-64 and AArch64/ARM64 processors and integrates multiple backends that are optimized for these platforms: Intel MKL, oneDNN, OpenBLAS, Ruy, and Apple Accelerate.
    Downloads: 1 This Week
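    For orientation, a minimal translation sketch via the Python API; the model directory is a placeholder (a model must first be converted with the project's converters) and the tokens are illustrative pre-tokenized input:

        # Minimal sketch: load a converted model and translate one tokenized sentence.
        import ctranslate2

        translator = ctranslate2.Translator("ct2_model_dir", device="cpu")  # placeholder path
        results = translator.translate_batch([["▁Hello", "▁world"]])        # pre-tokenized input
        print(results[0].hypotheses[0])                                     # best hypothesis (tokens)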
  • 7
    GoCV

    Go package for computer vision using OpenCV 4 and beyond

    GoCV gives programmers who use the Go programming language access to the OpenCV 4 computer vision library. The GoCV package supports the latest releases of Go and OpenCV v4.5.4 on Linux, macOS, and Windows. Our mission is to make the Go language a “first-class” client compatible with the latest developments in the OpenCV ecosystem. Computer Vision (CV) is the ability of computers to process visual information and perform tasks normally performed by humans. CV software...
    Downloads: 1 This Week
  • 8
    oneDNN

    oneAPI Deep Neural Network Library (oneDNN)

    This software was previously known as Intel(R) Math Kernel Library for Deep Neural Networks (Intel(R) MKL-DNN) and Deep Neural Network Library (DNNL). oneAPI Deep Neural Network Library (oneDNN) is an open-source cross-platform performance library of basic building blocks for deep learning applications. oneDNN is part of oneAPI. The library is optimized for Intel(R) Architecture Processors, Intel Processor Graphics and Xe Architecture graphics. oneDNN has experimental support for the following...
    Downloads: 0 This Week
  • 9
    Compute Library

    The Compute Library is a set of computer vision and machine learning functions

    The Compute Library is a set of computer vision and machine learning functions optimized for both Arm CPUs and GPUs using SIMD technologies. The library provides superior performance to other open-source alternatives and immediate support for new Arm® technologies e.g. SVE2.
    Downloads: 0 This Week
  • 10
    Gorgonia

    Gorgonia is a library that helps facilitate machine learning in Go

    Gorgonia is a library that helps facilitate machine learning in Go. Write and evaluate mathematical equations involving multidimensional arrays easily. If this sounds like Theano or TensorFlow, it's because the idea is quite similar. Specifically, the library is pretty low-level, like Theano, but has higher goals like TensorFlow. The primary goal for Gorgonia is to be a highly performant machine...
    Downloads: 0 This Week
  • 11
    LlamaChat

    Chat with your favourite LLaMA models in a native macOS app

    Chat with your favourite LLaMA models, right on your Mac. LlamaChat is a macOS app that allows you to chat with LLaMA, Alpaca, and GPT4All models all running locally on your Mac.
    Downloads: 0 This Week
  • 12
    AWS Deep Learning Containers

    A set of Docker images for training and serving models in TensorFlow

    AWS Deep Learning Containers (DLCs) are a set of Docker images for training and serving models in TensorFlow, TensorFlow 2, PyTorch, and MXNet. Deep Learning Containers provide optimized environments with TensorFlow and MXNet, Nvidia CUDA (for GPU instances), and Intel MKL (for CPU instances) libraries and are available in the Amazon Elastic Container Registry (Amazon ECR). The AWS DLCs are used in Amazon SageMaker as the default vehicles for your SageMaker jobs such as training, inference...
    Downloads: 0 This Week
  • 13
    Armadillo

    fast C++ library for linear algebra & scientific computing

    * Fast C++ library for linear algebra (matrix maths) and scientific computing
    * Easy to use functions and syntax, deliberately similar to Matlab / Octave
    * Uses template meta-programming techniques to increase efficiency
    * Provides user-friendly wrappers for OpenBLAS, Intel MKL, LAPACK, ATLAS, ARPACK, SuperLU and FFTW libraries
    * Useful for machine learning, pattern recognition, signal processing, bioinformatics, statistics, finance, etc.
    * Downloads: http://arma.sourceforge.net...
    Downloads: 1,526 This Week
  • 14
    SageMaker MXNet Inference Toolkit

    Toolkit for allowing inference and serving with MXNet in SageMaker

    ... Containers provide optimized environments with TensorFlow and MXNet, Nvidia CUDA (for GPU instances), and Intel MKL (for CPU instances) libraries and are available in the Amazon Elastic Container Registry (Amazon ECR). The AWS DLCs are used in Amazon SageMaker as the default vehicles for your SageMaker jobs such as training, inference, transforms etc. They've been tested for machine learning workloads on Amazon EC2, Amazon ECS and Amazon EKS services as well.
    Downloads: 0 This Week
  • 15
    Dead Deer 3.14.6.2024

    3D modeler, 3D game maker, 3D demo maker

    .... Android .NED Player (install the APK and "open with" from file managers), APK generator for Android. Support for: Direct3D9 (SM3), Direct3D10 (SM4), Direct3D11 (SM5), Direct3D12 (SM5), OpenGL and GLSL, OpenGL ES 2/3, Apple METAL, Retina, UHD. Intel x86/64, ARMv7/ARM64, RISC-V. Linux (Ubuntu/wxWidgets (Gtk3)), iOS/iPadOS (with Xcode) (GLES20/METAL), Windows Phone, Windows VR (Steam/Oculus), WebAsm/WebGL, UWP Windows/XBOX, SDL2, Linux ARM 32/64, RISC-V, OpenXR (Quest?/Pico). 3.14.6.2024
    Downloads: 43 This Week
  • 16
    VAMS

    Virtual Assistant Maintenance System

    Virtual Assistant Maintenance System, also known as VAMS, is an AI software application that helps users with some computer maintenance issues. Application requirements: Operating system: Windows 8.1/10/11; Processor: Intel Core i5 or equivalent; RAM: 4GB or higher; Free disk space: 500MB.
    Downloads: 7 This Week
  • 17
    Bandicoot

    fast C++ library for GPU linear algebra & scientific computing

    * Fast GPU linear algebra library (matrix maths) for the C++ language, aiming towards a good balance between speed and ease of use
    * Provides high-level syntax and functionality deliberately similar to Matlab
    * Provides an API that aims to be compatible with Armadillo for easy transition between CPU and GPU linear algebra code
    * Useful for algorithm development directly in C++, or quick conversion of research code into production environments
    * Distributed under the permissive...
    Downloads: 7 This Week
  • 18
    MACE

    Deep learning inference framework optimized for mobile platforms

    Mobile AI Compute Engine (or MACE for short) is a deep learning inference framework optimized for mobile heterogeneous computing on Android, iOS, Linux and Windows devices. Runtime is optimized with NEON, OpenCL and Hexagon, and Winograd algorithm is introduced to speed up convolution operations. The initialization is also optimized to be faster. Chip-dependent power options like big.LITTLE scheduling, Adreno GPU hints are included as advanced APIs. UI responsiveness guarantee is sometimes...
    Downloads: 1 This Week
  • 19
    libfacedetection

    Library for face detection in images

    This is an open source library for CNN-based face detection in images. The CNN model has been converted to static variables in C source files. The source code does not depend on any other libraries. What you need is just a C++ compiler. You can compile the source code under Windows, Linux, ARM and any platform with a C++ compiler. SIMD instructions are used to speed up the detection. You can enable AVX2 if you use Intel CPU or NEON for ARM. The model file has also been provided in directory...
    Downloads: 2 This Week
  • 20
    YOLO ROS

    YOLO ROS: Real-Time Object Detection for ROS

    ... 20.04. We also provide branches that work under ROS Melodic, ROS Foxy and ROS2. Darknet on the CPU is fast (approximately 1.5 seconds on an Intel Core i7-6700HQ CPU @ 2.60GHz × 8) but it's like 500 times faster on GPU! You'll have to have an Nvidia GPU and you'll have to install CUDA. The CMakeLists.txt file automatically detects if you have CUDA installed or not. CUDA is a parallel computing platform and application programming interface (API) model created by Nvidia.
    Downloads: 0 This Week
  • 21
    NLP Architect

    A model library for exploring state-of-the-art deep learning

    NLP Architect is an open-source Python library for exploring state-of-the-art deep learning topologies and techniques for optimizing Natural Language Processing and Natural Language Understanding neural networks. The library includes our past and ongoing NLP research and development efforts as part of Intel AI Lab. NLP Architect is designed to be flexible for adding new models, neural network components, data handling methods, and for easy training and running models. NLP Architect is a model...
    Downloads: 0 This Week
  • 22
    exchange-core

    Ultra-fast matching engine written in Java based on LMAX Disruptor

    Exchange-core is an open-source market exchange core based on LMAX Disruptor, Eclipse Collections (ex. Goldman Sachs GS Collections), Real Logic Agrona, OpenHFT Chronicle-Wire, LZ4 Java, and Adaptive Radix Trees. Designed for high scalability and pauseless 24/7 operation under high-load conditions while providing low-latency responses. A single order book configuration is capable of processing 5M operations per second on 10-year-old hardware (Intel® Xeon® X5690) with moderate latency degradation. HFT...
    Downloads: 1 This Week
  • 23
    nGraph

    nGraph has moved to OpenVINO

    Frameworks using nGraph Compiler stack to execute workloads have shown up to 45X performance boost when compared to native framework implementations. We've also seen performance boosts running workloads that are not included on the list of Validated workloads, thanks to nGraph's powerful subgraph pattern matching. Additionally, we have integrated nGraph with PlaidML to provide deep learning performance acceleration on Intel, nVidia, & AMD GPUs. nGraph Compiler aims to accelerate developing AI...
    Downloads: 0 This Week
  • 24
    brainCL_chung
    brainCL chung is a small program, with a DLL, that computes 3- to 4-layer neural networks, using bulk training to learn a mapping from bulk input data to bulk output data, with OpenCL (CPU or GPU) acceleration. It is written in easy, fast FreeBASIC.
    Downloads: 0 This Week
  • 25
    Intel neon

    Intel® Nervana™ reference deep learning framework

    neon is Intel's reference deep learning framework committed to best performance on all hardware. Designed for ease of use and extensibility. See the new features in our latest release. We want to highlight that neon v2.0.0+ has been optimized for much better performance on CPUs by enabling Intel Math Kernel Library (MKL). The DNN (Deep Neural Networks) component of MKL that is used by neon is provided free of charge and downloaded automatically as part of the neon installation. The gpu backend...
    Downloads: 0 This Week