Showing 3 open source projects for "sha3-256"

  • 1
    DALL-E in Pytorch

    Implementation / replication of DALL-E, OpenAI's Text to Image Transformer, in Pytorch

    ... the pretrained VAE offered by the authors of Taming Transformers! Currently only the VAE with a codebook size of 1024 is offered, with the hope that it may train a little faster than OpenAI's, which has a codebook size of 8192. In contrast to OpenAI's VAE, it also has an extra layer of downsampling, so the image sequence length is 256 instead of 1024; since attention cost grows quadratically with sequence length, this works out to a (1024/256)^2 = 16x reduction in training cost. A minimal usage sketch follows below.
    Downloads: 0 This Week
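    The repository exposes the discrete VAE and the DALL-E transformer as plain PyTorch modules. Below is a minimal training-step sketch based on the project's README; the dalle_pytorch import path and the DiscreteVAE/DALLE constructor arguments are assumptions tied to an early release and may differ in newer versions.

        import torch
        from dalle_pytorch import DiscreteVAE, DALLE

        # Discrete VAE that turns an image into a grid of codebook tokens
        vae = DiscreteVAE(
            image_size = 256,
            num_layers = 3,       # each layer halves the spatial resolution
            num_tokens = 1024,    # codebook size
            codebook_dim = 512,
            hidden_dim = 64,
        )

        # DALL-E transformer trained on (text tokens, image tokens) sequences
        dalle = DALLE(
            dim = 512,
            vae = vae,                 # supplies the image tokenizer
            num_text_tokens = 10000,   # text vocabulary size
            text_seq_len = 256,
            depth = 6,
            heads = 8,
        )

        text = torch.randint(0, 10000, (1, 256))   # dummy caption tokens
        images = torch.randn(1, 3, 256, 256)       # dummy image batch

        loss = dalle(text, images, return_loss = True)
        loss.backward()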
  • 2
    BytePS

    A high performance and generic framework for distributed DNN training

    BytePS is a high-performance, general-purpose distributed training framework. It supports TensorFlow, Keras, PyTorch, and MXNet, and can run over either TCP or RDMA networks. BytePS outperforms existing open-source distributed training frameworks by a large margin: on BERT-large training, for example, it achieves ~90% scaling efficiency with 256 GPUs, much higher than Horovod+NCCL, and in certain scenarios it can double the training speed compared with Horovod+NCCL. A minimal PyTorch sketch follows below.
    Downloads: 0 This Week
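    BytePS deliberately mirrors Horovod's API, so existing Horovod scripts port over largely by swapping imports. Below is a minimal PyTorch sketch patterned on the project's benchmark examples; the byteps.torch helper names are assumptions taken from those examples.

        import torch
        import byteps.torch as bps

        bps.init()                               # set up workers and parameter servers
        torch.cuda.set_device(bps.local_rank())  # one GPU per process

        model = torch.nn.Linear(1024, 1024).cuda()
        optimizer = torch.optim.SGD(model.parameters(), lr=0.01 * bps.size())

        # Route gradient push/pull through the BytePS communication core
        optimizer = bps.DistributedOptimizer(
            optimizer, named_parameters=model.named_parameters()
        )

        # Start every worker from identical weights and optimizer state
        bps.broadcast_parameters(model.state_dict(), root_rank=0)
        bps.broadcast_optimizer_state(optimizer, root_rank=0)

        for _ in range(10):
            x = torch.randn(32, 1024).cuda()
            loss = model(x).pow(2).mean()
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()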
  • 3
    PyTorch pretrained BigGAN

    PyTorch implementation of BigGAN with pretrained weights

    An op-for-op PyTorch reimplementation of DeepMind's BigGAN, released with the paper Large Scale GAN Training for High Fidelity Natural Image Synthesis, complete with DeepMind's pre-trained weights. Pretrained 128x128, 256x256, and 512x512 models are provided, along with the scripts used to download and convert them from the original TensorFlow Hub checkpoints. A minimal sampling sketch follows below.
    Downloads: 2 This Week
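    Sampling from a pretrained generator takes only a few lines. The sketch below follows the repository's README; the pytorch_pretrained_biggan helper names and signatures are assumptions about that package.

        import torch
        from pytorch_pretrained_biggan import (
            BigGAN, one_hot_from_names, truncated_noise_sample, convert_to_images
        )

        # Downloads and caches the 256x256 generator on first use
        model = BigGAN.from_pretrained('biggan-deep-256')

        truncation = 0.4
        class_vector = one_hot_from_names(['soap bubble'], batch_size=1)
        noise_vector = truncated_noise_sample(truncation=truncation, batch_size=1)

        class_vector = torch.from_numpy(class_vector)
        noise_vector = torch.from_numpy(noise_vector)

        with torch.no_grad():
            output = model(noise_vector, class_vector, truncation)

        # Map the generator's [-1, 1] output tensor to PIL images
        images = convert_to_images(output.cpu())
        images[0].save('soap_bubble.png')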