Showing 14 open source projects for "nvidia"

  • 1
    TensorRT Node for ComfyUI

    Enables the best performance on NVIDIA RTX Graphics Cards

    ...This is particularly attractive for power users who run many generations or who host ComfyUI on dedicated hardware and want to squeeze out every bit of GPU performance. In short, it’s about taking ComfyUI from “it runs” to “it runs fast” on NVIDIA GPUs.
    Downloads: 1 This Week
    Last Update:
    See Project
  • 2
    CuPy

    A NumPy-compatible array library accelerated by CUDA

    CuPy is an open-source implementation of a NumPy-compatible multi-dimensional array accelerated with NVIDIA CUDA. It consists of cupy.ndarray, a core multi-dimensional array class, and many functions that operate on it. CuPy offers GPU-accelerated computing with Python, using CUDA-related libraries to fully utilize the GPU architecture. According to benchmarks, it can even speed up some operations by more than 100x. CuPy is highly compatible with NumPy, serving as a drop-in replacement in most cases; a minimal usage sketch follows below. ...
    Downloads: 3 This Week
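    A minimal sketch of the drop-in NumPy-style usage described above, assuming CuPy and a CUDA-capable GPU are installed; the array size is illustrative.

        import numpy as np
        import cupy as cp

        # Allocate an array on the GPU and compute with the familiar NumPy-style API.
        x_gpu = cp.arange(10_000_000, dtype=cp.float32)
        norm_gpu = cp.sqrt(cp.sum(x_gpu ** 2))  # executed on the GPU via CUDA kernels

        # Move the scalar result back to host memory for use with plain NumPy/Python code.
        norm_cpu = cp.asnumpy(norm_gpu)
        print(float(norm_cpu), np.float32(norm_cpu))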
  • 3
    Transformers4Rec

    Transformers4Rec is a flexible and efficient library

    Transformers4Rec is an advanced recommendation system library that leverages Transformer models for sequential and session-based recommendations. The library works as a bridge between natural language processing (NLP) and recommender systems (RecSys) by integrating with one of the most popular NLP frameworks, Hugging Face Transformers (HF). Transformers4Rec makes state-of-the-art transformer architectures available for RecSys researchers and industry practitioners. Traditional recommendation...
    Downloads: 2 This Week
  • 4
    AWS Deep Learning Containers

    A set of Docker images for training and serving models in TensorFlow

    AWS Deep Learning Containers (DLCs) are a set of Docker images for training and serving models in TensorFlow, TensorFlow 2, PyTorch, and MXNet. Deep Learning Containers provide optimized environments with TensorFlow and MXNet, NVIDIA CUDA (for GPU instances), and Intel MKL (for CPU instances) libraries, and are available in the Amazon Elastic Container Registry (Amazon ECR). The AWS DLCs are used in Amazon SageMaker as the default vehicles for your SageMaker jobs such as training, inference, and transforms, and they have also been tested for machine learning workloads on Amazon EC2, Amazon ECS, and Amazon EKS. A brief SageMaker usage sketch follows below. ...
    Downloads: 2 This Week
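    A hedged sketch of how a SageMaker training job picks up one of these containers through the SageMaker Python SDK; the entry-point script, IAM role, S3 path, instance type, and framework/Python versions are illustrative assumptions.

        from sagemaker.pytorch import PyTorch  # SageMaker Python SDK estimator

        # The framework_version / py_version pair selects a matching Deep Learning
        # Container image from Amazon ECR; the values below are assumptions.
        estimator = PyTorch(
            entry_point="train.py",                               # hypothetical training script
            role="arn:aws:iam::123456789012:role/SageMakerRole",  # placeholder IAM role
            instance_count=1,
            instance_type="ml.g4dn.xlarge",                       # GPU instance, so the CUDA-enabled DLC is used
            framework_version="1.13",
            py_version="py39",
        )
        estimator.fit({"training": "s3://my-bucket/train/"})      # placeholder S3 input channel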
  • 5
    DeepPavlov

    A library for deep learning end-to-end dialog systems and chatbots

    DeepPavlov makes it easy for beginners and experts to create dialogue systems. The best place to start is with the user-friendly tutorials, which provide a quick and convenient introduction to using DeepPavlov through complete, end-to-end examples; no installation is needed. Guides explain the concepts and components of DeepPavlov, with step-by-step instructions to install, configure, and extend the framework for your use case; a minimal Python usage sketch follows below. DeepPavlov is an open-source framework for chatbots and virtual...
    Downloads: 1 This Week
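    A minimal sketch of DeepPavlov's Python API, assuming the package is installed; the specific config name comes from the project's question-answering examples and is an assumption that may vary by version.

        from deeppavlov import build_model, configs

        # Download pretrained weights (if needed) and build a ready-to-use QA pipeline.
        model = build_model(configs.squad.squad, download=True)

        contexts = ["DeepPavlov is an open-source framework for dialogue systems."]
        questions = ["What is DeepPavlov?"]
        print(model(contexts, questions))  # answer format depends on the chosen config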
  • 6
    AlphaTensor

    AI discovers faster, efficient algorithms for matrix multiplication

    AlphaTensor, developed by Google DeepMind, is the research codebase accompanying the 2022 Nature publication “Discovering faster matrix multiplication algorithms with reinforcement learning.” The project demonstrates how reinforcement learning can be used to automatically discover efficient algorithms for matrix multiplication — a fundamental operation in computer science and numerical computation. The repository is organized into four main components: algorithms, benchmarking,...
    Downloads: 6 This Week
  • 7
    GFPGAN

    GFPGAN aims at developing Practical Algorithms

    GFPGAN aims at developing Practical Algorithms for Real-world Face Restoration. There is a Colab demo for GFPGAN (and another Colab demo for the original paper model), plus online demos on Hugging Face (returns only the cropped face), Replicate.ai (may require signing in; returns the whole image), and Baseten.co (backed by a GPU; returns the whole image). We provide a clean version of GFPGAN that can run without CUDA extensions, so it also works on Windows or in CPU mode; a hedged Python usage sketch follows below. ...
    Downloads: 70 This Week
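    A hedged Python sketch using the GFPGANer helper class; the weights path and constructor arguments mirror the repository's inference script but are assumptions that may differ between releases.

        import cv2
        from gfpgan import GFPGANer

        # Paths and settings below are illustrative assumptions.
        restorer = GFPGANer(
            model_path="experiments/pretrained_models/GFPGANv1.3.pth",
            upscale=2,
            arch="clean",            # the CUDA-extension-free "clean" architecture
            channel_multiplier=2,
        )

        img = cv2.imread("inputs/whole_imgs/example.jpg", cv2.IMREAD_COLOR)
        cropped_faces, restored_faces, restored_img = restorer.enhance(
            img, has_aligned=False, only_center_face=False, paste_back=True
        )
        cv2.imwrite("results/example_restored.jpg", restored_img)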
  • 8
    Fairseq

    Facebook AI Research Sequence-to-Sequence Toolkit written in Python

    Fairseq(-py) is a sequence modeling toolkit that allows researchers and developers to train custom models for translation, summarization, language modeling, and other text generation tasks. We provide reference implementations of various sequence modeling papers; a brief torch.hub usage sketch follows below. Recent work by Microsoft and Google has shown that data-parallel training can be made significantly more efficient by sharding the model parameters and optimizer state across data-parallel workers. These ideas are encapsulated in the...
    Downloads: 0 This Week
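    A brief sketch of loading a pretrained fairseq translation model through torch.hub, following the pattern in the project's documentation; the exact model identifier and tokenizer/BPE arguments are assumptions that vary by release.

        import torch

        # Model name and arguments are illustrative; check the fairseq docs for current identifiers.
        en2de = torch.hub.load(
            "pytorch/fairseq",
            "transformer.wmt19.en-de.single_model",
            tokenizer="moses",
            bpe="fastbpe",
        )
        en2de.eval()
        print(en2de.translate("Machine learning is fun!"))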
  • 9
    SageMaker MXNet Inference Toolkit

    Toolkit for allowing inference and serving with MXNet in SageMaker

    ...AWS Deep Learning Containers (DLCs) are a set of Docker images for training and serving models in TensorFlow, TensorFlow 2, PyTorch, and MXNet. Deep Learning Containers provide optimized environments with TensorFlow and MXNet, NVIDIA CUDA (for GPU instances), and Intel MKL (for CPU instances) libraries, and are available in the Amazon Elastic Container Registry (Amazon ECR). The AWS DLCs are used in Amazon SageMaker as the default vehicles for your SageMaker jobs such as training, inference, and transforms, and they have also been tested for machine learning workloads on Amazon EC2, Amazon ECS, and Amazon EKS.
    Downloads: 0 This Week
  • 10
    Deep Daze

    Simple command line tool for text to image generation

    A simple command-line tool for text-to-image generation using OpenAI's CLIP and SIREN (an implicit neural representation network). In true deep learning fashion, more layers will yield better results: the default is 16, but it can be increased to 32 depending on your resources (a minimal Python usage sketch follows below). A technique first devised and shared by Mario Klingemann lets you prime the generator network with a starting image before it is steered towards the text. Simply specify the path to the image you wish to use, and...
    Downloads: 0 This Week
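    A minimal sketch of the Python interface, assuming the deep-daze package is installed on a machine with a CUDA GPU; the prompt, layer count, and optional starting image path are illustrative assumptions mirroring the command-line options.

        from deep_daze import Imagine

        # More layers generally yields better results, at the cost of GPU memory.
        imagine = Imagine(
            text="a penguin standing on an iceberg",
            num_layers=32,                    # default is 16; raise it if resources allow
            start_image_path="./prime.jpg",   # optional: prime the generator with an image
        )
        imagine()  # runs the generation loop and periodically saves output images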
  • 11
    PyText

    A natural language modeling framework based on PyTorch

    ...We use PyText at Facebook to iterate quickly on new modeling ideas and then seamlessly ship them at scale. Distributed training support is built on the new C10d backend in PyTorch 1.0. Mixed-precision training is supported through APEX (trains faster with less GPU memory on NVIDIA Tensor Cores). Extensible components allow easy creation of new models and tasks.
    Downloads: 0 This Week
  • 12
    SageMaker Chainer Containers

    Docker container for running Chainer scripts to train and host Chainer

    SageMaker Chainer Containers is an open-source library for making the Chainer framework run on Amazon SageMaker. This repository also contains Dockerfiles which install this library, Chainer, and dependencies for building SageMaker Chainer images. Amazon SageMaker utilizes Docker containers to run all training jobs & inference endpoints. The Docker images are built from the Dockerfiles specified in Docker/. The Dockerfiles are grouped by Chainer version and separated by Python...
    Downloads: 0 This Week
  • 13
    NVIDIA CUDA and MPI Python wrappers. These wrappers are written in pure C; no SWIG or Boost is necessary. The CUDA wrapper exposes the CUDA runtime and driver APIs.
    Downloads: 0 This Week
  • 14
    OpenGL apps running inside a VM use VMGL to obtain graphics hardware acceleration. VMGL supports VMware, Xen PV and HVM, QEMU, and KVM VMs; X11-based OSes such as Linux, FreeBSD, and OpenSolaris; and ATI, NVIDIA, and Intel GPUs.
    Downloads: 0 This Week