Showing 146 open source projects for "cpu-g"

  • 1
    SPM - Monitoring system

    Monitoring Tool for your IT Environment

    SPM Monitoring System is a complete solution for efficient monitoring and alerting: an all-in-one monitoring solution for IT environments that offers comprehensive features to ensure high availability, stability, and optimal performance of your infrastructure. With SPM Monitoring System, you can monitor your network, servers, applications, and services with ease, and receive timely alerts when issues arise. Host Availability Monitoring. Agent-Based Monitoring: CPU...
    Downloads: 26 This Week
    Last Update:
    See Project
  • 2
    Exadel CompreFace

    Leading free and open-source face recognition system

    ... to easily control who has access to your Face Recognition Services. CompreFace is delivered as a docker-compose config and supports different models that work on CPU and GPU. Our solution is based on state-of-the-art methods and libraries like FaceNet and InsightFace. Official website: https://exadel.com/solutions/compreface/ GitHub link: https://github.com/exadel-inc/CompreFace
    Downloads: 12 This Week
    Last Update:
    See Project
  • 3
    DxTonia

    A tool for detecting the presence of leg dystonia from videos.

    An open-source diagnostic tool for detecting the presence of leg dystonia from videos. The software provides an automated, artificial-intelligence-based solution for detecting leg dystonia in children with cerebral palsy (CP). It has been verified on two datasets and achieved higher accuracy than routine clinical care. Stay tuned for the improved version. Note: it currently only works on a Windows PC with an Nvidia GPU; a macOS and Linux CPU-only version will be released soon. https...
    Downloads: 10 This Week
    Last Update:
    See Project
  • 4
    Eva AI

    Eva is an A.I. assistant that helps users multi-task.

    ... speech recognition accuracy, greatly decreased CPU resource consumption, and fixed all critical bugs. Note: Eva depends on the Windows 10/11 ecosystem; for any issues, consult the user manual. Commands customisation tutorial: https://github.com/CSharpTeoMan911/Eva/wiki/Commands-customisation Donations: https://www.paypal.com/donate/?hosted_button_id=V5H8D2XRGRPHU You can check details about the technology at: https://github.com/CSharpTeoMan911/Eva
    Downloads: 6 This Week
    Last Update:
    See Project
  • 5
    Bandicoot

    fast C++ library for GPU linear algebra & scientific computing

    * Fast GPU linear algebra library (matrix maths) for the C++ language, aiming towards a good balance between speed and ease of use * Provides high-level syntax and functionality deliberately similar to Matlab * Provides an API that aims to be compatible with Armadillo for easy transition between CPU and GPU linear algebra code * Useful for algorithm development directly in C++, or for quick conversion of research code into production environments * Distributed under the permissive Apache...
    Downloads: 4 This Week
    Last Update:
    See Project
  • 6
    BlindOS 1.02

    Debian 12 BlindOS environment with the Ollama llama2 AI chat assistant

    A complete Debian 12 live-RAM operating system with the BlindOS interface by <amigojapan>, with the Ollama llama2 LLM AI chat assistant included (Ollama can auto-detect GPU hardware). Non-free and non-free-firmware packages are installed, along with all Wi-Fi packages. The .iso file is under 6 GB. Xfce4 desktop, amd64 architecture. Running the live-RAM .iso requires at least a 2-core CPU and 8 GB of RAM; the more hardware power, the faster the system will run.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 7
    Eventer

    Rapid, unbiased, reproducible analysis of synaptic events

    ... procedures. The software is coded in MATLAB, but has been compiled as standalone applications for Windows, Mac and Linux. Please visit the official Eventer website for more info: https://eventerneuro.netlify.app/ While the paper is in preparation, please cite as: Winchester, G., Liu, S., Steele, O.G., Aziz, W. and Penn, A.C. (2020) Eventer. Software for the detection of spontaneous synaptic events measured by electrophysiology or imaging. http://doi.org/10.5281/zenodo.3991676
    Downloads: 0 This Week
    Last Update:
    See Project
  • 8
    pipeless

    A computer vision framework to create and deploy apps in minutes

    ... frames and Pipeless takes care of everything else. You can easily use industry-standard models, such as YOLO, or load your custom model in one of the supported inference runtimes. Pipeless ships some of the most popular inference runtimes, such as the ONNX Runtime, allowing you to run inference with high performance on CPU or GPU out-of-the-box. You can deploy your Pipeless application with a single command to edge and IoT devices or the cloud.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 9
    min(DALL·E)

    min(DALL·E) is a fast, minimal port of DALL·E Mini to PyTorch

    This is a fast, minimal port of Boris Dayma's DALL·E Mini (with mega weights). It has been stripped down for inference and converted to PyTorch. The only third-party dependencies are numpy, requests, pillow and torch. The required models will be downloaded to models_root if they are not already there. Set the dtype to torch.float16 to save GPU memory; if you have an Ampere-architecture GPU you can use torch.bfloat16. Set the device to either "cuda" or "cpu" (a usage sketch follows this entry). Once everything has finished...
    Downloads: 0 This Week
    Last Update:
    See Project
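The description above names the main knobs (models_root, dtype, device). Here is a minimal usage sketch assuming the MinDalle constructor and generate_image arguments as described in the project README; exact parameter names may differ between releases, so treat them as assumptions.

```python
import torch
from min_dalle import MinDalle

# Pick the device as described above: "cuda" if a GPU is available, else "cpu".
device = "cuda" if torch.cuda.is_available() else "cpu"

model = MinDalle(
    models_root="./pretrained",                                  # models download here if missing
    dtype=torch.float16 if device == "cuda" else torch.float32,  # float16 saves GPU memory; bfloat16 works on Ampere GPUs
    device=device,
    is_mega=True,      # use the mega weights
    is_reusable=True,  # keep the model in memory between calls
)

image = model.generate_image(
    text="a comfy chair that looks like an avocado",
    seed=-1,
    grid_size=1,
)
image.save("avocado_chair.png")  # generate_image returns a PIL image
```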
  • 10
    Fairseq

    Facebook AI Research Sequence-to-Sequence Toolkit written in Python

    Fairseq(-py) is a sequence modeling toolkit that allows researchers and developers to train custom models for translation, summarization, language modeling and other text generation tasks. We provide reference implementations of various sequence modeling papers. Recent work by Microsoft and Google has shown that data parallel training can be made significantly more efficient by sharding the model parameters and optimizer state across data parallel workers. These ideas are encapsulated in the...
    Downloads: 3 This Week
    Last Update:
    See Project
  • 11
    StudioGAN

    StudioGAN is a PyTorch library providing implementations of networks

    ...-Transformer), and Diffusion models (LSGM++, CLD-SGM, ADM-G-U). StudioGAN is a self-contained library that provides 7 GAN architectures, 9 conditioning methods, 4 adversarial losses, 13 regularization modules, 6 augmentation modules, 8 evaluation metrics, and 5 evaluation backbones. Among these configurations, we formulate 30 GANs as representatives. Each modularized option is managed through a configuration system that works through a YAML file.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 12

    KoboldAI

    Your gateway to GPT writing

    ... the multiple gameplay styles. This makes KoboldAI a writing assistant, a game, and a platform for much more. The way you play and how good the AI will be depends on the model or service you decide to use. Whether you want to use the free, fast power of Google Colab, your own high-end graphics card, an online service you have an API key for (like OpenAI or InferKit), or would rather just run it more slowly on your CPU, you will be able to find a way to use KoboldAI that works for you.
    Downloads: 489 This Week
    Last Update:
    See Project
  • 13
    VoiceSmith

    [WIP] VoiceSmith makes training text to speech models easy

    VoiceSmith makes it possible to train and infer on both single-speaker and multispeaker models without any coding experience. It fine-tunes a pretty solid text-to-speech pipeline, based on a modified version of DelightfulTTS and UnivNet, on your dataset. Both models were pretrained on a proprietary 5,000-speaker dataset. It also provides some tools for dataset preprocessing, like automatic text normalization. It requires Windows (only CPU is currently supported) or any Linux-based operating system. If you want to run...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 14
    SageMaker MXNet Inference Toolkit

    Toolkit for allowing inference and serving with MXNet in SageMaker

    ... Containers provide optimized environments with TensorFlow and MXNet, Nvidia CUDA (for GPU instances), and Intel MKL (for CPU instances) libraries, and are available in the Amazon Elastic Container Registry (Amazon ECR). The AWS DLCs are used in Amazon SageMaker as the default vehicles for your SageMaker jobs such as training, inference, and transforms. They've been tested for machine learning workloads on Amazon EC2, Amazon ECS and Amazon EKS as well. (A rough deployment sketch with the SageMaker Python SDK follows this entry.)
    Downloads: 0 This Week
    Last Update:
    See Project
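As a rough illustration of how these containers are usually driven, here is a hedged sketch using the SageMaker Python SDK's MXNetModel; the S3 path, IAM role, entry point, and framework/Python versions are placeholders and assumptions, not values taken from the project.

```python
from sagemaker.mxnet import MXNetModel

# Hypothetical artifact location and IAM role; replace with your own.
model = MXNetModel(
    model_data="s3://my-bucket/model/model.tar.gz",
    role="arn:aws:iam::123456789012:role/MySageMakerRole",
    entry_point="inference.py",  # user script implementing the toolkit's model-loading/serving hooks
    framework_version="1.9.0",   # assumed MXNet container version
    py_version="py38",
)

# Deploy to a CPU instance; choosing a GPU instance type (e.g. ml.g4dn.xlarge) selects the CUDA container.
predictor = model.deploy(initial_instance_count=1, instance_type="ml.c5.xlarge")

print(predictor.predict([[0.1, 0.2, 0.3]]))
predictor.delete_endpoint()
```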
  • 15
    Flashlight library

    A C++ standalone library for machine learning

    ... domains. Flashlight can be broken down into several components as described above. Each component can be built incrementally by specifying the correct build options. Flashlight is most easily built and installed with vcpkg; both the CUDA and CPU backends are supported with vcpkg. For either backend, first install Intel MKL. Flashlight app binaries are also built for the selected features and are installed into the vcpkg install tree's tools directory.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 16
    TensorFlowOnSpark

    TensorFlowOnSpark brings TensorFlow programs to Apache Spark clusters

    By combining salient features from the TensorFlow deep learning framework with Apache Spark and Apache Hadoop, TensorFlowOnSpark enables distributed deep learning on a cluster of GPU and CPU servers. It enables both distributed TensorFlow training and inference on Spark clusters, with the goal of minimizing the amount of code changes required to run existing TensorFlow programs on a shared grid. (A minimal driver-side sketch follows this entry.)
    Downloads: 0 This Week
    Last Update:
    See Project
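A minimal driver-side sketch, assuming TensorFlowOnSpark's TFCluster API (argument order and names may vary slightly between releases); the map function body is a placeholder.

```python
from pyspark import SparkConf, SparkContext
from tensorflowonspark import TFCluster


def main_fun(args, ctx):
    # Runs once per Spark executor; ctx describes this worker's role in the
    # TensorFlow cluster (job name, task index, cluster spec).
    import tensorflow as tf
    print("worker", ctx.task_index, "running TensorFlow", tf.__version__)
    # ... build and train a model here ...


if __name__ == "__main__":
    sc = SparkContext(conf=SparkConf().setAppName("tfos_sketch"))
    num_executors = 2

    cluster = TFCluster.run(
        sc, main_fun, None, num_executors,
        num_ps=0, tensorboard=False,
        input_mode=TFCluster.InputMode.TENSORFLOW,  # each worker reads its own data
    )
    cluster.shutdown()
```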
  • 17
    Hugging Face Transformer

    CPU/GPU inference server for Hugging Face transformer models

    Optimize and deploy Hugging Face Transformer models in production with a single command line. At Lefebvre Dalloz we run semantic search engines in production in the legal domain; in non-marketing language, it's a re-ranker, and we based ours on Transformer. In that setup, latency is key to providing a good user experience, and relevancy inference is done online for hundreds of snippets per user query (a plain re-ranking sketch, separate from the project's own tooling, follows this entry). Most tutorials on Transformer deployment in production are built on PyTorch and FastAPI....
    Downloads: 0 This Week
    Last Update:
    See Project
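The project wraps this kind of workload behind its own conversion and serving tooling; purely as background, here is a plain Hugging Face sketch of the re-ranking inference described above (not the project's CLI), with the cross-encoder model name chosen only as an illustrative assumption.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Runs on GPU when available, otherwise CPU.
device = "cuda" if torch.cuda.is_available() else "cpu"
model_id = "cross-encoder/ms-marco-MiniLM-L-6-v2"  # assumed example re-ranker

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id).to(device).eval()

query = "limitation period for contract claims"
snippets = [
    "Contract claims must generally be brought within six years.",
    "The appeal was dismissed on purely procedural grounds.",
]

# Score each (query, snippet) pair and sort snippets by relevance.
inputs = tokenizer([query] * len(snippets), snippets,
                   padding=True, truncation=True, return_tensors="pt").to(device)
with torch.no_grad():
    scores = model(**inputs).logits.squeeze(-1).tolist()

for snippet, score in sorted(zip(snippets, scores), key=lambda p: -p[1]):
    print(f"{score:+.2f}  {snippet}")
```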
  • 18
    MACE

    Deep learning inference framework optimized for mobile platforms

    Mobile AI Compute Engine (or MACE for short) is a deep learning inference framework optimized for mobile heterogeneous computing on Android, iOS, Linux and Windows devices. The runtime is optimized with NEON, OpenCL and Hexagon, and the Winograd algorithm is used to speed up convolution operations. Initialization has also been optimized to be faster. Chip-dependent power options such as big.LITTLE scheduling and Adreno GPU hints are included as advanced APIs. UI responsiveness guarantee is sometimes...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 19
    KoGPT

    KakaoBrain KoGPT (Korean Generative Pre-trained Transformer)

    KoGPT is a Korean language model based on OpenAI’s GPT architecture, designed for various natural language processing (NLP) tasks such as text generation, summarization, and dialogue systems.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 20
    gpt-2-simple

    Python package to easily retrain OpenAI's GPT-2 text-generating model

    A simple Python package that wraps existing model fine-tuning and generation scripts for OpenAI's GPT-2 text generation model (specifically the "small" 124M and "medium" 355M hyperparameter versions). Additionally, this package allows easier generation of text: generating to a file for easy curation, and allowing prefixes to force the text to start with a given phrase (a short usage sketch follows this entry). For finetuning, it is strongly recommended to use a GPU, although you can generate using a CPU (albeit much more slowly...
    Downloads: 6 This Week
    Last Update:
    See Project
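A short sketch of the fine-tune-then-generate flow described above, following the package README as I recall it (the training file name is a placeholder, and fine-tuning will be very slow without a GPU):

```python
import gpt_2_simple as gpt2

model_name = "124M"                        # the "small" model mentioned above
gpt2.download_gpt2(model_name=model_name)  # fetched into ./models/124M

sess = gpt2.start_tf_sess()

# Fine-tune on a plain-text corpus (placeholder file name).
gpt2.finetune(sess, "corpus.txt", model_name=model_name, steps=1000)

# Generate to the console with a forced prefix, or write samples to a file for curation.
gpt2.generate(sess, prefix="Once upon a time", length=100)
gpt2.generate_to_file(sess, destination_path="samples.txt", nsamples=5)
```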
  • 21

    Geometric Theorem Proving

    Decide simple geometric statements

    Parses geometric statements in a human-readable language from command-line arguments or stdin and decides their truth using Gröbner bases (the sketch after this entry illustrates the underlying idea).
    Downloads: 0 This Week
    Last Update:
    See Project
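The project's input language isn't reproduced here, but the Gröbner-basis approach it names is standard: translate the hypotheses and the conclusion into polynomials in the coordinates, then check that the conclusion reduces to zero modulo a Gröbner basis of the hypothesis ideal. A small self-contained SymPy sketch (not the project's own code), proving that the midpoint of the hypotenuse of a right triangle is equidistant from all three vertices:

```python
from sympy import symbols, groebner, expand

x, y, a, b = symbols("x y a b")

# Right triangle with the right angle at A = (0, 0), B = (a, 0), C = (0, b).
# Hypotheses: M = (x, y) is the midpoint of the hypotenuse BC.
hypotheses = [2 * x - a, 2 * y - b]

# Conclusions: M is equidistant from A, B and C (squared distances are equal).
conclusions = [
    (x**2 + y**2) - ((x - a) ** 2 + y**2),  # |MA|^2 - |MB|^2
    (x**2 + y**2) - (x**2 + (y - b) ** 2),  # |MA|^2 - |MC|^2
]

# Groebner basis of the ideal generated by the hypotheses.
G = groebner(hypotheses, x, y, a, b, order="lex")

for g in conclusions:
    _, remainder = G.reduce(expand(g))
    print(remainder == 0)  # True: the conclusion lies in the hypothesis ideal
```

In general, geometric theorems also carry non-degeneracy conditions (for example a ≠ 0), which this style of prover typically handles by working over the field of rational functions in the free parameters or by saturating the ideal.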
  • 22
    SVL Simulator

    A ROS/ROS2 Multi-robot Simulator for Autonomous Vehicles

    LG Electronics America R&D Lab has developed an HDRP Unity-based multi-robot simulator for autonomous vehicle developers. We provide an out-of-the-box solution which can meet the needs of developers wishing to focus on testing their autonomous vehicle algorithms. It currently has integration with The Autoware Foundation's Autoware.auto and Baidu's Apollo platforms, can generate HD maps, and can be immediately used for testing and validation of a whole system with little need for custom...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 23
    libfacedetection

    Library for face detection in images

    This is an open source library for CNN-based face detection in images. The CNN model has been converted to static variables in C source files. The source code does not depend on any other libraries; what you need is just a C++ compiler. You can compile the source code under Windows, Linux, ARM and any platform with a C++ compiler. SIMD instructions are used to speed up the detection; you can enable AVX2 if you use an Intel CPU, or NEON for ARM. The model file has also been provided in directory...
    Downloads: 1 This Week
    Last Update:
    See Project
  • 24
    Tez

    Tez is a super-simple and lightweight Trainer for PyTorch

    Tez is a super-simple and lightweight Trainer for PyTorch. It also comes with many utils that you can use to tackle over 90% of deep learning projects in PyTorch. tez (तेज़ / تیز) means sharp, fast & active. This is a simple, to-the-point library to make your PyTorch training easy. This library is currently in an early stage, so there might be breaking changes. Currently, tez supports CPU, single-GPU, multi-GPU, and TPU training; more is coming soon. Using tez is super easy. We don't want you...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 25
    FARM

    Fast & easy transfer learning for NLP

    ... of language models to your task and domain language. AMP optimizers (~35% faster) and parallel preprocessing (16 CPU cores => ~16x faster). Modular design of language models and prediction heads. Switch between heads or combine them for multitask learning. Full compatibility with Hugging Face Transformers' models and model hub. Smooth upgrading to newer language models. Integration of custom datasets via the Processor class. Powerful experiment tracking & execution. (A rough inference sketch follows this entry.)
    Downloads: 0 This Week
    Last Update:
    See Project
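As a rough sketch of FARM's inference entry point, the following assumes the Inferencer API (load, inference_from_dicts) as I recall it from the project README; the model name, task type, and exact argument names are assumptions and may not match your FARM version.

```python
from farm.infer import Inferencer

# Assumed example: a Hugging Face sentiment model loaded through FARM's Inferencer.
model = Inferencer.load(
    "distilbert-base-uncased-finetuned-sst-2-english",
    task_type="text_classification",
    gpu=False,  # set True to run on GPU
)

result = model.inference_from_dicts(
    dicts=[{"text": "FARM makes transfer learning for NLP fast and easy."}]
)
print(result)
```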