Showing 12 open source projects for "s-parameters"

  • 1
    GPT-NeoX

    Implementation of model parallel autoregressive transformers on GPUs

    ... recommend Mesh Transformer JAX. If you are not looking to train models with billions of parameters from scratch, this is likely the wrong library to use. For generic inference needs, we recommend you use the Hugging Face transformers library instead, which supports GPT-NeoX models (a minimal inference sketch follows this entry).
    Downloads: 6 This Week
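
    Following up on the GPT-NeoX entry above: for generic inference, the description points to the Hugging Face transformers library, which supports GPT-NeoX checkpoints. Below is a minimal, hedged sketch of that route. It assumes the publicly hosted EleutherAI/gpt-neox-20b checkpoint (which is large; any smaller causal-LM checkpoint can be substituted) and illustrates the transformers API, not code from the GPT-NeoX repository itself.

    ```python
    # Hedged sketch: generic inference through Hugging Face transformers,
    # as the GPT-NeoX description recommends for non-training use.
    # "EleutherAI/gpt-neox-20b" is the public 20B checkpoint on the HF Hub;
    # it needs tens of GB of memory, so swap in a smaller model if needed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_name = "EleutherAI/gpt-neox-20b"
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)

    inputs = tokenizer("GPT-NeoX is a library for", return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=40, do_sample=True)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))
    ```
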
  • 2
    ChatGPT UI

    A ChatGPT web client that supports multiple users and databases

    A ChatGPT web client that supports multiple users, multiple database connections for persistent data storage, and i18n. Provides Docker images and quick deployment scripts. Supports the GPT-4 model; you can select the model under "Model Parameters" in the front end. The GPT-4 model requires whitelist access from OpenAI. Added web search capability to generate more relevant and up-to-date answers from ChatGPT! This feature is off by default; you can turn it on in `Chat->Settings` in the admin...
    Downloads: 2 This Week
  • 3
    Megatron

    Ongoing research training transformer models at scale

    ... and training sophisticated natural language processing models with billions and trillions of parameters.
    Downloads: 0 This Week
  • 4
    GPTel

    A no-frills ChatGPT client for Emacs

    ... to the ChatGPT buffer. It will ask you for the key if you skipped the previous step. Run it with a prefix argument to start a new session. In the gptel buffer, send your prompt with M-x gptel-send, bound to C-c RET. Set chat parameters (GPT model, directives, etc.) for the session by calling gptel-send with a prefix argument.
    Downloads: 0 This Week
  • 5
    Video Diffusion - Pytorch

    Implementation of Video Diffusion Models

    Implementation of Video Diffusion Models, Jonathan Ho's new paper extending DDPMs to video generation, in PyTorch. It uses a special space-time factored U-Net, extending generation from 2D images to 3D videos (a minimal usage sketch follows this entry). 14k for difficult moving MNIST (converging much faster and better than NUWA) - wip. Any new developments for text-to-video synthesis will be centralized at...
    Downloads: 0 This Week
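
    A minimal usage sketch for the Video Diffusion - Pytorch entry above, following the train-then-sample pattern its README describes (a space-time factored 3D U-Net wrapped by a Gaussian diffusion object). The class and argument names here (Unet3D, GaussianDiffusion, image_size, num_frames, timesteps) are recalled from the project README and may differ between versions; treat this as an assumption-laden illustration rather than canonical usage.

    ```python
    # Hedged sketch of video-diffusion-pytorch usage; names such as
    # Unet3D, GaussianDiffusion, image_size, num_frames and timesteps
    # are assumed from the README and may vary by release.
    import torch
    from video_diffusion_pytorch import Unet3D, GaussianDiffusion

    model = Unet3D(dim=64, dim_mults=(1, 2, 4, 8))

    diffusion = GaussianDiffusion(
        model,
        image_size=32,   # spatial resolution of each frame
        num_frames=5,    # frames per clip
        timesteps=1000,  # diffusion steps
    )

    videos = torch.randn(2, 3, 5, 32, 32)  # (batch, channels, frames, height, width)
    loss = diffusion(videos)               # training loss on random toy data
    loss.backward()

    # After real training, draw samples:
    sampled = diffusion.sample(batch_size=2)
    ```
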
  • 6
    Aphantasia

    CLIP + FFT/DWT/RGB = text to image/video

    ... (including multi-language from SBERT), continuous mode to process phrase lists (e.g. illustrating lyrics), and pan/zoom motion with smooth interpolation. Direct RGB pixel optimization (very stable), depth-based 3D look (courtesy of deKxi, based on AdaBins), and complex queries: text and/or image as main prompts, with separate text prompts for style and for topics to subtract (avoid). Starting/resuming the process from saved parameters or from an image.
    Downloads: 0 This Week
  • 7
    CPT

    CPT: A Pre-Trained Unbalanced Transformer

    ... initialize the new version of the models from the old checkpoints with vocabulary alignment. Token embeddings found in the old checkpoints are copied, and other newly added parameters are randomly initialized. We further train the new CPT and Chinese BART for 50K steps with batch size 2048, maximum sequence length 1024, peak learning rate 2e-5, and warmup ratio 0.1. Aiming to unify both NLU and NLG tasks, we propose a novel Chinese Pre-trained Unbalanced Transformer (CPT).
    Downloads: 0 This Week
  • 8
    ruDALL-E

    Generate images from texts. In Russian

    ... in Russian and other languages; you can even combine different languages within a single query. This neural network was developed and trained by Sber AI researchers in close collaboration with scientists from the Artificial Intelligence Research Institute, using joint datasets from Sber AI and SberDevices. A Russian text-to-image model that generates images from text. The architecture is the same as ruDALL-E XL, with even more parameters in the new version.
    Downloads: 0 This Week
  • 9
    Insert Text

    Extends the media pool with an effect for outputting text in the image

    This addon extends the media manager with the effect Bild. The effect can be used, for example, to display a copyright notice, creation date, image title, etc. on images. The values set for the effect are treated as defaults and can be changed individually for each image, if necessary, via the effect parameters in the metadata. The text source of the effect can be selected here: input for the field Textausgabe, or any meta field from the media pool. A text area can also be selected from the media...
    Downloads: 0 This Week
  • 10
    GPT Neo

    An implementation of model parallel GPT-2 and GPT-3-style models

    An implementation of model & data parallel GPT-3-like models using the mesh-tensorflow library. If you're just here to play with our pre-trained models, we strongly recommend you try out the Hugging Face Transformers integration (a minimal generation sketch follows this entry). Training and inference are officially supported on TPU and should work on GPU as well. This repository will be (mostly) archived as we move focus to our GPU-specific repo, GPT-NeoX. NB: while GPT-Neo can technically run a training step at 200B+ parameters, it is very...
    Downloads: 11 This Week
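
    As the GPT Neo entry above notes, pretrained checkpoints are easiest to use through the Hugging Face Transformers integration. The sketch below uses the text-generation pipeline with EleutherAI/gpt-neo-1.3B, one of the publicly hosted checkpoints (125M and 2.7B variants also exist); it illustrates the Transformers API rather than the mesh-tensorflow training code in this repository.

    ```python
    # Hedged sketch: running a pretrained GPT-Neo checkpoint via the
    # Hugging Face Transformers integration mentioned in the description.
    from transformers import pipeline

    generator = pipeline("text-generation", model="EleutherAI/gpt-neo-1.3B")
    result = generator(
        "Mesh TensorFlow makes model parallelism",
        max_new_tokens=40,
        do_sample=True,
        temperature=0.8,
    )
    print(result[0]["generated_text"])
    ```
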
  • 11
    gpt2-client

    Easy-to-use TensorFlow Wrapper for GPT-2 117M, 345M, 774M, etc.

    GPT-2 is a natural language processing model developed by OpenAI for text generation. It is the successor to the GPT (Generative Pre-trained Transformer) model, trained on 40GB of text from the internet. It features the Transformer architecture introduced by the Attention Is All You Need paper in 2017. The model has 4 versions - 124M, 345M, 774M, and 1558M - that differ in the amount of training data fed to them and the number of parameters they contain (a hedged usage sketch follows this entry). Finally, gpt2-client...
    Downloads: 0 This Week
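
    A rough sketch of the wrapper-style workflow the gpt2-client entry above describes. The class and method names below (GPT2Client, load_model, generate) are recalled from the project's README and should be treated as assumptions; check the repository for the exact, current signatures.

    ```python
    # Hedged sketch of the gpt2-client wrapper API; GPT2Client, load_model
    # and generate are assumed from the README and may not match the
    # current release exactly.
    from gpt2_client import GPT2Client

    gpt2 = GPT2Client("117M")              # other sizes: 345M, 774M, 1558M
    gpt2.load_model(force_download=False)  # download weights or reuse a cached copy

    gpt2.generate(interactive=True)        # prompt is read from stdin
    gpt2.generate(n_samples=4)             # print several unconditional samples
    ```
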
  • 12
    Grenade

    Deep Learning in Haskell

    ... the layers of the network but also the shapes of data that are passed between the layers. To perform backpropagation, one can call the eponymous function, which takes a network, appropriate input, and target data, and returns the backpropagated gradients for the network. The shapes of the gradients are appropriate for each layer and may be trivial for layers like Relu, which have no learnable parameters.
    Downloads: 0 This Week