Showing 33 open source projects for "cpu"

  • 1
    Keras

    Python-based neural networks API

    Python Deep Learning library
    Downloads: 11 This Week
    Last Update:
    See Project
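
    A minimal sketch of the Keras API described above (illustrative only, not taken from the project page): build, compile, and fit a tiny model on dummy data.

        import numpy as np
        from tensorflow import keras

        # Small dense classifier on random data, just to show the API shape.
        model = keras.Sequential([
            keras.Input(shape=(16,)),
            keras.layers.Dense(32, activation="relu"),
            keras.layers.Dense(3, activation="softmax"),
        ])
        model.compile(optimizer="adam",
                      loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])

        x = np.random.rand(128, 16).astype("float32")  # dummy features
        y = np.random.randint(0, 3, size=(128,))       # dummy labels
        model.fit(x, y, epochs=1, batch_size=32, verbose=0)
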
  • 2
    fastdup

    An unsupervised and free tool for image and video dataset analysis

    fastdup is a powerful free tool designed to rapidly extract valuable insights from your image and video datasets. It helps you increase the quality of your dataset images and labels and reduce your data operations costs at scale.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 3
    TensorLy

    Tensor Learning in Python

    ...It lets you easily perform tensor decomposition, tensor learning, and tensor algebra. Its backend system lets computation run seamlessly with NumPy, PyTorch, JAX, TensorFlow, CuPy, or Paddle, and methods scale on CPU or GPU.
    Downloads: 0 This Week
    Last Update:
    See Project
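
    A short sketch of the backend-switching and decomposition workflow mentioned above; the tensor shape and rank are arbitrary, and the NumPy backend is assumed.

        import numpy as np
        import tensorly as tl
        from tensorly.decomposition import parafac

        tl.set_backend("numpy")            # could also be "pytorch", "jax", ...
        tensor = tl.tensor(np.random.rand(8, 8, 8))

        cp = parafac(tensor, rank=3)       # CP decomposition: weights + factors
        approx = tl.cp_to_tensor(cp)       # reconstruct from the factors
        rel_error = tl.norm(tensor - approx) / tl.norm(tensor)
        print(f"relative reconstruction error: {rel_error:.3f}")
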
  • 4
    TensorFlow Model Garden

    Models and examples built with TensorFlow

    ...A flexible and lightweight library that users can easily use or fork when writing customized training loop code in TensorFlow 2.x. It seamlessly integrates with tf.distribute and supports running on different device types (CPU, GPU, and TPU).
    Downloads: 4 This Week
    Last Update:
    See Project
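
    The entry above refers to training loops that integrate with tf.distribute; the sketch below shows the generic tf.distribute pattern (not a Model Garden-specific API), where the same code runs on CPU, a single GPU, or several GPUs.

        import tensorflow as tf

        # MirroredStrategy uses all visible GPUs and falls back to CPU if none.
        strategy = tf.distribute.MirroredStrategy()
        with strategy.scope():
            model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
            model.compile(optimizer="sgd", loss="mse")

        x = tf.random.normal((256, 4))
        y = tf.random.normal((256, 1))
        model.fit(x, y, epochs=1, batch_size=32, verbose=0)
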
  • 5
    Triton Inference Server

    The Triton Inference Server provides an optimized cloud

    ...Triton enables teams to deploy any AI model from multiple deep learning and machine learning frameworks, including TensorRT, TensorFlow, PyTorch, ONNX, OpenVINO, Python, RAPIDS FIL, and more. Triton supports inference across cloud, data center, edge, and embedded devices on NVIDIA GPUs, x86 and Arm CPUs, or AWS Inferentia. Triton delivers optimized performance for many query types, including real-time, batched, ensembles, and audio/video streaming. It provides a Backend API for adding custom backends and pre/post-processing operations, supports model pipelines via Ensembling or Business Logic Scripting (BLS), and exposes HTTP/REST and GRPC inference protocols based on the community-developed KServe protocol. ...
    Downloads: 5 This Week
    Last Update:
    See Project
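
    A hedged sketch of calling a Triton server over its HTTP protocol with the tritonclient package; the model name and tensor names ("my_model", "INPUT0", "OUTPUT0") are placeholders that must match the deployed model's configuration.

        import numpy as np
        import tritonclient.http as httpclient

        client = httpclient.InferenceServerClient(url="localhost:8000")

        data = np.random.rand(1, 3, 224, 224).astype(np.float32)
        inputs = [httpclient.InferInput("INPUT0", list(data.shape), "FP32")]
        inputs[0].set_data_from_numpy(data)
        outputs = [httpclient.InferRequestedOutput("OUTPUT0")]

        result = client.infer(model_name="my_model", inputs=inputs, outputs=outputs)
        print(result.as_numpy("OUTPUT0").shape)
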
  • 6
    PennyLane

    A cross-platform Python library for differentiable programming

    ...Supports hybrid quantum-classical models and is compatible with existing machine learning libraries. Quantum circuits can be set up to interface with NumPy, PyTorch, JAX, or TensorFlow, allowing hybrid CPU-GPU-QPU computations. The same quantum circuit model can be run on different devices. Install plugins to run your computational circuits on more devices, including Strawberry Fields, Amazon Braket, Qiskit and IBM Q, Google Cirq, Rigetti Forest, and the Microsoft QDK.
    Downloads: 3 This Week
    Last Update:
    See Project
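
    A minimal sketch of the hybrid-programming model described above: a two-qubit circuit on the built-in simulator whose gradient is available through PennyLane's autodiff interface.

        import pennylane as qml
        from pennylane import numpy as np

        dev = qml.device("default.qubit", wires=2)

        @qml.qnode(dev)
        def circuit(theta):
            qml.RX(theta, wires=0)
            qml.CNOT(wires=[0, 1])
            return qml.expval(qml.PauliZ(1))

        theta = np.array(0.3, requires_grad=True)
        print(circuit(theta))            # expectation value
        print(qml.grad(circuit)(theta))  # gradient w.r.t. theta
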
  • 7
    whisper-timestamped

    Multilingual Automatic Speech Recognition with word-level timestamps

    Multilingual Automatic Speech Recognition with word-level timestamps and confidence. Whisper is a set of multi-lingual, robust speech recognition models trained by OpenAI that achieve state-of-the-art results in many languages. Whisper models were trained to predict approximate timestamps on speech segments (most of the time with 1-second accuracy), but they cannot originally predict word timestamps. This repository proposes an implementation to predict word timestamps and provide a more...
    Downloads: 1 This Week
    Last Update:
    See Project
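
    A sketch of the transcription call on CPU, following the project's README-style API; "audio.wav" is a placeholder path.

        import json
        import whisper_timestamped as whisper

        audio = whisper.load_audio("audio.wav")           # placeholder file
        model = whisper.load_model("tiny", device="cpu")

        result = whisper.transcribe(model, audio, language="en")
        # Segments include word-level timestamps and confidence scores.
        print(json.dumps(result, indent=2, ensure_ascii=False))
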
  • 8
    SSD in PyTorch 1.0

    High quality, fast, modular reference implementation of SSD in PyTorch

    This repository implements SSD (Single Shot MultiBox Detector). The implementation is heavily influenced by the projects ssd.pytorch, pytorch-ssd, and maskrcnn-benchmark. This repository aims to be the code base for research based on SSD. Multi-GPU training and inference: we use DistributedDataParallel, so you can train or test with an arbitrary number of GPUs, and the training schema changes accordingly. Add your own modules without pain: we abstract the backbone, Detector, BoxHead, BoxPredictor, etc. You can...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 9
    DocTR

    Library for OCR-related tasks powered by Deep Learning

    DocTR provides an easy and powerful way to extract valuable information from your documents. Seamlessly process documents for Natural Language Understanding tasks: we provide OCR predictors to parse textual information (localize and identify each word) from your documents. Robust two-stage (detection + recognition) OCR predictors with pretrained parameters. User-friendly: three lines of code to load a document and extract text with a predictor. State-of-the-art performance on public document...
    Downloads: 1 This Week
    Last Update:
    See Project
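
    The "three lines of code" workflow mentioned above, roughly as in the project documentation; the PDF path is a placeholder.

        from doctr.io import DocumentFile
        from doctr.models import ocr_predictor

        model = ocr_predictor(pretrained=True)               # detection + recognition
        doc = DocumentFile.from_pdf("path/to/your/doc.pdf")  # placeholder path
        result = model(doc)
        print(result.export())                               # nested dict of pages/blocks/words
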
  • 10
    AIMET

    AIMET is a library that provides advanced quantization and compression

    ...Quantized inference is significantly faster than floating point inference. For example, models that we’ve run on the Qualcomm® Hexagon™ DSP rather than on the Qualcomm® Kryo™ CPU have resulted in a 5x to 15x speedup. Plus, an 8-bit model also has a 4x smaller memory footprint relative to a 32-bit model. However, often when quantizing a machine learning model (e.g., from 32-bit floating point to an 8-bit fixed point value), the model accuracy is sacrificed.
    Downloads: 3 This Week
    Last Update:
    See Project
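
    To make the float32-to-8-bit trade-off above concrete, here is a plain NumPy sketch of affine quantization (not the AIMET API): values are mapped to uint8 with a scale and zero point, which is where both the roughly 4x memory saving and the accuracy loss come from.

        import numpy as np

        weights = np.random.randn(1024).astype(np.float32)

        qmin, qmax = 0, 255
        scale = (weights.max() - weights.min()) / (qmax - qmin)
        zero_point = int(round(qmin - weights.min() / scale))

        q = np.clip(np.round(weights / scale) + zero_point, qmin, qmax).astype(np.uint8)
        dq = (q.astype(np.float32) - zero_point) * scale

        print("memory ratio:", weights.nbytes / q.nbytes)     # ~4x smaller
        print("max abs error:", np.abs(weights - dq).max())   # quantization noise
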
  • 11
    Core ML Tools

    Core ML tools contain supporting tools for Core ML model conversion

    ...Your app uses Core ML APIs and user data to make predictions, and to fine-tune models, all on the user’s device. Core ML optimizes on-device performance by leveraging the CPU, GPU, and Neural Engine while minimizing its memory footprint and power consumption. Running a model strictly on the user’s device removes any need for a network connection, which helps keep the user’s data private and your app responsive.
    Downloads: 0 This Week
    Last Update:
    See Project
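
    A hedged sketch of the conversion path described above: trace a small PyTorch model and convert it with coremltools (the model and file name are placeholders).

        import torch
        import coremltools as ct

        model = torch.nn.Sequential(torch.nn.Linear(4, 2), torch.nn.ReLU()).eval()
        example = torch.rand(1, 4)
        traced = torch.jit.trace(model, example)

        mlmodel = ct.convert(
            traced,
            convert_to="mlprogram",
            inputs=[ct.TensorType(name="input", shape=example.shape)],
        )
        mlmodel.save("TinyModel.mlpackage")   # placeholder filename
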
  • 12
    tvm

    Open deep learning compiler stack for CPU, GPU, etc.

    Apache TVM is an open source machine learning compiler framework for CPUs, GPUs, and machine learning accelerators. It aims to enable machine learning engineers to optimize and run computations efficiently on any hardware backend. The vision of the Apache TVM Project is to host a diverse community of experts and practitioners in machine learning, compilers, and systems architecture to build an accessible, extensible, and automated open-source framework that optimizes current and emerging...
    Downloads: 0 This Week
    Last Update:
    See Project
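
    A sketch of one common TVM flow, assuming an ONNX model file and the LLVM CPU target; the file name and input name/shape are placeholders, and newer TVM releases also offer other frontends and APIs.

        import numpy as np
        import onnx
        import tvm
        from tvm import relay
        from tvm.contrib import graph_executor

        onnx_model = onnx.load("model.onnx")  # placeholder model file
        mod, params = relay.frontend.from_onnx(
            onnx_model, shape={"input": (1, 3, 224, 224)})

        with tvm.transform.PassContext(opt_level=3):
            lib = relay.build(mod, target="llvm", params=params)  # compile for CPU

        runtime = graph_executor.GraphModule(lib["default"](tvm.cpu()))
        runtime.set_input("input", np.random.rand(1, 3, 224, 224).astype("float32"))
        runtime.run()
        print(runtime.get_output(0).numpy().shape)
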
  • 13
    PyTorch Geometric

    Geometric deep learning extension library for PyTorch

    ...In addition, it consists of an easy-to-use mini-batch loader for many small graphs and single giant graphs, a large number of common benchmark datasets (based on simple interfaces to create your own), and helpful transforms, both for learning on arbitrary graphs as well as on 3D meshes or point clouds. We have outsourced a lot of PyTorch Geometric's functionality to other packages, which need to be installed separately. These packages come with their own CPU and GPU kernel implementations based on C++/CUDA extensions. We do not recommend installing as the root user into your system Python; please set up an Anaconda/Miniconda environment or create a Docker image. We provide pip wheels for all major OS/PyTorch/CUDA combinations.
    Downloads: 0 This Week
    Last Update:
    See Project
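
    A minimal sketch of the graph data structure and a single graph convolution, assuming torch and torch_geometric are installed.

        import torch
        from torch_geometric.data import Data
        from torch_geometric.nn import GCNConv

        # 3 nodes with 4 features each; edges 0-1 and 1-2 listed in both directions.
        x = torch.rand(3, 4)
        edge_index = torch.tensor([[0, 1, 1, 2],
                                   [1, 0, 2, 1]], dtype=torch.long)
        data = Data(x=x, edge_index=edge_index)

        conv = GCNConv(in_channels=4, out_channels=8)
        out = conv(data.x, data.edge_index)
        print(out.shape)  # torch.Size([3, 8])
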
  • 14
    Audiomentations

    A Python library for audio data augmentation

    A Python library for audio data augmentation. Inspired by albumentations. Useful for deep learning. Runs on CPU. Supports mono and multichannel audio. Can be integrated into training pipelines in, e.g., TensorFlow/Keras or PyTorch. Has helped people get world-class results in Kaggle competitions. Is used by companies making next-generation audio products. Mix in another sound, e.g. a background noise. Useful if your original sound is clean and you want to simulate an environment where background noise is present. ...
    Downloads: 0 This Week
    Last Update:
    See Project
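
    A short sketch of composing a few of the CPU-only transforms mentioned above and applying them to a dummy mono signal (16 kHz); parameters are arbitrary.

        import numpy as np
        from audiomentations import AddGaussianNoise, Compose, PitchShift, Shift, TimeStretch

        augment = Compose([
            AddGaussianNoise(min_amplitude=0.001, max_amplitude=0.015, p=0.5),
            TimeStretch(min_rate=0.8, max_rate=1.25, p=0.5),
            PitchShift(min_semitones=-4, max_semitones=4, p=0.5),
            Shift(p=0.5),
        ])

        samples = np.random.uniform(-0.2, 0.2, size=32000).astype(np.float32)  # 2 s at 16 kHz
        augmented = augment(samples=samples, sample_rate=16000)
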
  • 15
    DGL

    Python package built to ease deep learning on graph

    ...We also want to make the combination of graph-based modules and tensor-based modules (PyTorch or MXNet) as smooth as possible. DGL provides a powerful graph object that can reside on either CPU or GPU. It bundles structural data as well as features for better control. We provide a variety of functions for computing with graph objects, including efficient and customizable message-passing primitives for Graph Neural Networks.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 16
    Jittor

    Jittor is a high-performance deep learning framework

    Jittor is a high-performance deep learning framework based on JIT compiling and meta-operators. The whole framework and meta-operators are compiled just in time. A powerful op compiler and tuner are integrated into Jittor, allowing it to generate high-performance code specialized for your model. Jittor also contains a wealth of high-performance model libraries, including image recognition, detection, segmentation, generation, differentiable rendering, geometric learning, reinforcement...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 17
    Ray

    A unified framework for scalable computing

    Modern workloads like deep learning and hyperparameter tuning are compute-intensive and require distributed or parallel execution. Ray makes it effortless to parallelize single machine code — go from a single CPU to multi-core, multi-GPU or multi-node with minimal code changes. Accelerate your PyTorch and Tensorflow workload with a more resource-efficient and flexible distributed execution framework powered by Ray. Accelerate your hyperparameter search workloads with Ray Tune. Find the best model and reduce training costs by using the latest optimization algorithms. ...
    Downloads: 0 This Week
    Last Update:
    See Project
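
    A minimal sketch of the single-machine-to-parallel step described above: a plain function becomes a remote task fanned out across local CPU cores.

        import ray

        ray.init()  # starts a local Ray instance using the available CPUs

        @ray.remote
        def square(x):
            return x * x

        futures = [square.remote(i) for i in range(8)]
        print(ray.get(futures))  # [0, 1, 4, 9, 16, 25, 36, 49]
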
  • 18
    TensorFlow Addons

    Useful extra functionality for TensorFlow 2.x maintained by SIG-addons

    TensorFlow Addons is a repository of contributions that conform to well-established API patterns but implement new functionality not available in core TensorFlow. TensorFlow natively supports a large number of operators, layers, metrics, losses, and optimizers. However, in a fast-moving field like ML, there are many interesting new developments that cannot be integrated into core TensorFlow (because their broad applicability is not yet clear, or it is mostly used by a smaller subset of the...
    Downloads: 1 This Week
    Last Update:
    See Project
  • 19
    Implicit

    Fast Python collaborative filtering for implicit feedback datasets

    This project provides fast Python implementations of several popular recommendation algorithms for implicit feedback datasets. All models have multi-threaded training routines, using Cython and OpenMP to fit the models in parallel across all available CPU cores. In addition, the ALS and BPR models both have custom CUDA kernels, enabling fitting on compatible GPUs. This library also supports using approximate nearest neighbour libraries such as Annoy, NMSLIB, and Faiss to speed up making recommendations.
    Downloads: 0 This Week
    Last Update:
    See Project
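
    A sketch of fitting ALS and generating recommendations, assuming the current (0.5+) API where the interaction matrix is users by items; the data here is random and purely illustrative.

        import numpy as np
        import scipy.sparse as sparse
        from implicit.als import AlternatingLeastSquares

        # Random sparse user-item interactions (100 users, 50 items).
        user_items = sparse.random(100, 50, density=0.05, format="csr", dtype=np.float32)

        model = AlternatingLeastSquares(factors=32, iterations=10)
        model.fit(user_items)

        user_id = 0
        ids, scores = model.recommend(user_id, user_items[user_id], N=5)
        print(ids, scores)
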
  • 20
    NanoDet-Plus

    Lightweight anchor-free object detection model

    Super fast, high-accuracy, lightweight anchor-free object detection model. Real-time on mobile devices. NanoDet is an FCOS-style one-stage anchor-free object detection model which uses Generalized Focal Loss as its classification and regression loss. In NanoDet-Plus, we propose a novel label assignment strategy with a simple assign guidance module (AGM) and a dynamic soft label assigner (DSLA) to solve the optimal label assignment problem in lightweight model training. We also introduce a...
    Downloads: 8 This Week
    Last Update:
    See Project
  • 21
    GFPGAN

    GFPGAN aims at developing Practical Algorithms

    ...Online demo: Baseten.co (backed by GPU, returns the whole image). We provide a clean version of GFPGAN, which can run without CUDA extensions, so it can run on Windows or in CPU mode. GFPGAN aims at developing a Practical Algorithm for Real-world Face Restoration. It leverages rich and diverse priors encapsulated in a pretrained face GAN (e.g., StyleGAN2) for blind face restoration. The V1.3 model produces more natural restoration results and better results on very low-quality / high-quality inputs.
    Downloads: 135 This Week
    Last Update:
    See Project
  • 22
    KoboldAI

    Your gateway to GPT writing

    ...Whether you want to use the free, fast power of Google Colab, your own high-end graphics card, an online service you have an API key for (like OpenAI or InferKit), or would rather just run it more slowly on your CPU, you will be able to find a way to use KoboldAI that works for you.
    Downloads: 208 This Week
    Last Update:
    See Project
  • 23
    TensorFlowOnSpark

    TensorFlowOnSpark brings TensorFlow programs to Apache Spark clusters

    By combining salient features from the TensorFlow deep learning framework with Apache Spark and Apache Hadoop, TensorFlowOnSpark enables distributed deep learning on a cluster of GPU and CPU servers. It enables both distributed TensorFlow training and inferencing on Spark clusters, with a goal to minimize the amount of code changes required to run existing TensorFlow programs on a shared grid.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 24
    Hugging Face Transformer

    CPU/GPU inference server for Hugging Face transformer models

    Optimize and deploy Hugging Face Transformer models in production with a single command line. At Lefebvre Dalloz we run semantic search engines in production in the legal domain; in non-marketing language, it's a re-ranker, and we based ours on Transformers. In that setup, latency is key to providing a good user experience, and relevancy inference is done online for hundreds of snippets per user query. Most tutorials on Transformer deployment in production are built over Pytorch and FastAPI....
    Downloads: 0 This Week
    Last Update:
    See Project
  • 25
    Tez

    Tez is a super-simple and lightweight Trainer for PyTorch

    ...This is a simple, to-the-point library to make your PyTorch training easy. This library is currently in an early stage, so there might be breaking changes. Currently, Tez supports CPU, single-GPU, multi-GPU, and TPU training. More coming soon! Using Tez is super easy. We don't want you to be far away from PyTorch, so you do everything on your own and just use Tez to make a few things simpler.
    Downloads: 0 This Week
    Last Update:
    See Project