Showing 13 open source projects for "encoder"

  • 1
    Pytorch-toolbelt

    PyTorch extensions for fast R&D prototyping and Kaggle farming

    ...Extras for the Catalyst library (visualization of batch predictions, additional metrics). By design, both the encoder and the decoder produce a list of tensors, from fine (high-resolution, indexed 0) to coarse (low-resolution) feature maps. Access to all intermediate feature maps is useful if you want to apply deep supervision losses on them or build an encoder-decoder for an object detection task, as sketched below.
    Downloads: 0 This Week
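As a minimal illustration of the fine-to-coarse convention above (generic PyTorch, not the pytorch-toolbelt API; every name here is hypothetical), an encoder can return all intermediate feature maps and a deep supervision loss can be applied to each of them:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyEncoder(nn.Module):
    # Toy encoder: returns feature maps from fine (index 0) to coarse.
    def __init__(self):
        super().__init__()
        self.stages = nn.ModuleList([
            nn.Conv2d(3, 16, 3, stride=2, padding=1),   # 1/2 resolution
            nn.Conv2d(16, 32, 3, stride=2, padding=1),  # 1/4 resolution
            nn.Conv2d(32, 64, 3, stride=2, padding=1),  # 1/8 resolution
        ])

    def forward(self, x):
        feature_maps = []
        for stage in self.stages:
            x = F.relu(stage(x))
            feature_maps.append(x)
        return feature_maps

def deep_supervision_loss(feature_maps, mask, heads):
    # Apply a small prediction head at every scale and compare against
    # the target mask resized to that scale.
    loss = 0.0
    for fmap, head in zip(feature_maps, heads):
        logits = head(fmap)
        scaled = F.interpolate(mask, size=logits.shape[-2:], mode="nearest")
        loss = loss + F.binary_cross_entropy_with_logits(logits, scaled)
    return loss

encoder = TinyEncoder()
heads = nn.ModuleList([nn.Conv2d(c, 1, 1) for c in (16, 32, 64)])
maps = encoder(torch.randn(2, 3, 64, 64))
mask = torch.randint(0, 2, (2, 1, 64, 64)).float()
print(deep_supervision_loss(maps, mask, heads))
```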
  • 2
    Bootstrap Your Own Latent (BYOL)

    Usable Implementation of "Bootstrap Your Own Latent" self-supervised

    Practical implementation of an astoundingly simple method for self-supervised learning that achieves a new state of the art (surpassing SimCLR) without contrastive learning or the need to designate negative pairs. This repository offers a module with which one can wrap any image-based neural network (residual network, discriminator, policy network) to immediately start benefiting from unlabelled image data, as sketched below. There is now new evidence that batch normalization is key to making this technique...
    Downloads: 9 This Week
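The wrapping workflow looks roughly like this (adapted from the repository's README; keyword arguments may differ between versions):

```python
import torch
from torchvision import models
from byol_pytorch import BYOL

resnet = models.resnet50()

learner = BYOL(
    resnet,
    image_size=256,
    hidden_layer='avgpool'  # layer whose activations serve as the embedding
)

opt = torch.optim.Adam(learner.parameters(), lr=3e-4)

images = torch.randn(4, 3, 256, 256)  # stand-in for a batch of unlabelled images
loss = learner(images)                # BYOL loss, no labels or negative pairs
opt.zero_grad()
loss.backward()
opt.step()
learner.update_moving_average()       # update the target network after each step
```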
  • 3
    x-transformers

    A simple but complete full-attention transformer

    A simple but complete full-attention transformer with a set of promising experimental features from various papers. One paper proposes adding learned memory key/values prior to attending; the authors were able to remove the feedforwards altogether and attain performance similar to the original transformer. I have found that keeping the feedforwards and adding the memory key/values leads to even better performance. Another proposes adding learned tokens, akin to CLS tokens, named memory tokens, that are passed through...
    Downloads: 6 This Week
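Both features are exposed as keyword arguments (shown here following the project's README; names may vary across versions):

```python
import torch
from x_transformers import TransformerWrapper, Decoder

model = TransformerWrapper(
    num_tokens=20000,
    max_seq_len=1024,
    num_memory_tokens=20,       # learned memory tokens, akin to CLS tokens
    attn_layers=Decoder(
        dim=512,
        depth=6,
        heads=8,
        attn_num_mem_kv=16      # learned memory key/values per attention layer
    )
)

tokens = torch.randint(0, 20000, (1, 1024))
logits = model(tokens)          # (1, 1024, 20000)
```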
  • 4
    mTRF-Toolbox

    A MATLAB package for modelling multivariate stimulus-response data

    mTRF-Toolbox is a MATLAB package for modelling multivariate stimulus-response data, suitable for neurophysiological data such as MEG, EEG, sEEG, ECoG and EMG. It can be used to model the functional relationship between neuronal populations and dynamic sensory inputs such as natural scenes and sounds, or build neural decoders for reconstructing stimulus features and developing real-time applications such as brain-computer interfaces (BCIs). Toolbox Paper: ...
    Downloads: 19 This Week
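The toolbox itself is MATLAB, but the core forward-model idea can be sketched in a few lines of Python (illustrative only; the function name and toy data are invented): a temporal response function estimated by ridge regression over time-lagged copies of the stimulus.

```python
import numpy as np

def lag_matrix(stimulus, n_lags):
    # Column `lag` holds the stimulus delayed by `lag` samples.
    T = len(stimulus)
    X = np.zeros((T, n_lags))
    for lag in range(n_lags):
        X[lag:, lag] = stimulus[:T - lag]
    return X

rng = np.random.default_rng(0)
stim = rng.standard_normal(1000)              # e.g. a speech envelope
true_trf = np.array([0.0, 0.5, 1.0, 0.5])     # ground-truth response kernel
resp = np.convolve(stim, true_trf)[:1000] + 0.1 * rng.standard_normal(1000)

X = lag_matrix(stim, 4)
lam = 1.0                                     # ridge regularisation strength
trf = np.linalg.solve(X.T @ X + lam * np.eye(4), X.T @ resp)
print(trf)                                    # approximately recovers true_trf
```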
  • 5
    MetaTransformer

    Meta-Transformer for Unified Multimodal Learning

    We're thrilled to present OneLLM, a framework that ensembles Meta-Transformer with multimodal large language models. It performs multimodal joint training, supports additional modalities including fMRI, depth, and normal maps, and demonstrates very impressive performance on 25 benchmarks.
    Downloads: 0 This Week
  • 6
    OpenNMT-tf

    Neural machine translation and sequence learning using TensorFlow

    ...Models are described with code to allow training custom architectures and overriding default behavior. For example, the instance below defines a sequence-to-sequence model with 2 concatenated input features, a self-attentional encoder, and an attentional RNN decoder sharing its input and output embeddings. Sequence-to-sequence models can be trained with guided alignment, and alignment information is returned as part of the translation API.
    Downloads: 0 This Week
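The example reads roughly as follows (paraphrased from the OpenNMT-tf documentation; exact class names and signatures depend on the installed version):

```python
import tensorflow_addons as tfa
import opennmt as onmt

model = onmt.models.SequenceToSequence(
    source_inputter=onmt.inputters.ParallelInputter(
        [onmt.inputters.WordEmbedder(embedding_size=256),
         onmt.inputters.WordEmbedder(embedding_size=256)],
        reducer=onmt.layers.ConcatReducer()),   # 2 concatenated input features
    target_inputter=onmt.inputters.WordEmbedder(embedding_size=512),
    encoder=onmt.encoders.SelfAttentionEncoder(num_layers=6),
    decoder=onmt.decoders.AttentionalRNNDecoder(
        num_layers=4,
        num_units=512,
        attention_mechanism_class=tfa.seq2seq.LuongAttention),
    share_embeddings=onmt.models.EmbeddingsSharingLevel.TARGET)
```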
  • 7
    Deep learning time series forecasting

    Deep learning PyTorch library for time series forecasting

    ...Historically, this repository provided open-source benchmarks and code for flash flood and river flow forecasting. Full transformer (SimpleTransformer in model_dict): the full original transformer with all 8 encoder and decoder blocks. It requires passing the target in at inference, as sketched below.
    Downloads: 0 This Week
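What "passing the target in at inference" means for a full encoder-decoder transformer can be shown with plain PyTorch (generic code, not this library's API): the decoder must be fed a target sequence, so forecasts are produced autoregressively from the last known value.

```python
import torch
import torch.nn as nn

model = nn.Transformer(d_model=32, nhead=4,
                       num_encoder_layers=2, num_decoder_layers=2,
                       batch_first=True)
proj_in = nn.Linear(1, 32)    # lift scalar observations to model width
proj_out = nn.Linear(32, 1)   # project decoder states back to scalars

history = torch.randn(1, 48, 1)           # 48 past observations
src = proj_in(history)

decoded = history[:, -1:, :]              # seed decoding with the last value
for _ in range(12):                       # forecast 12 steps ahead
    tgt = proj_in(decoded)
    mask = model.generate_square_subsequent_mask(tgt.size(1))
    out = model(src, tgt, tgt_mask=mask)
    next_step = proj_out(out[:, -1:, :])
    decoded = torch.cat([decoded, next_step], dim=1)

forecast = decoded[:, 1:, :]              # the 12 generated steps
```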
  • 8
    Reformer PyTorch

    Reformer, the efficient Transformer, in Pytorch

    This is a PyTorch implementation of Reformer. It includes LSH attention, reversible networks, and chunking. It has been validated on an auto-regressive task (enwik8).
    Downloads: 8 This Week
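Basic usage follows the repository's README (keyword arguments shown here may differ between releases):

```python
import torch
from reformer_pytorch import ReformerLM

model = ReformerLM(
    num_tokens=20000,
    dim=512,
    depth=6,
    heads=8,
    max_seq_len=1024,
    lsh_dropout=0.1,   # dropout applied inside the LSH attention
    causal=True        # auto-regressive, as in the enwik8 validation task
)

x = torch.randint(0, 20000, (1, 1024))
logits = model(x)      # (1, 1024, 20000)
```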
  • 9
    AliceMind

    ALIbaba's Collection of Encoder-decoders from MinD

    This repository provides pre-trained encoder-decoder models and related optimization techniques developed by Alibaba's MinD (Machine IntelligeNce of Damo) Lab, including pre-trained models for natural language understanding (NLU). We extend BERT to a new model, StructBERT, by incorporating language structures into pre-training. Specifically, we pre-train StructBERT with two auxiliary tasks that make the most of the sequential order of words and sentences, leveraging language structures at the word and sentence levels, respectively. ...
    Downloads: 1 This Week
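The word-level auxiliary task can be sketched conceptually in Python (illustrative only, not the AliceMind code): shuffle a random trigram of input token ids and keep the original tokens as the reconstruction target.

```python
import random

def shuffle_trigram(token_ids, seed=None):
    # Pick a random trigram, remember its original order, then shuffle it;
    # the model is trained to restore the original order.
    rng = random.Random(seed)
    start = rng.randrange(0, len(token_ids) - 2)
    span = token_ids[start:start + 3]
    targets = list(span)
    rng.shuffle(span)
    corrupted = token_ids[:start] + span + token_ids[start + 3:]
    return corrupted, start, targets

tokens = [101, 2023, 2003, 1037, 7099, 6251, 102]
print(shuffle_trigram(tokens, seed=0))
```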
  • 10
    TTS

    Deep learning for text to speech

    ...Notebooks for extensive model benchmarking. Modular (but not too much) code base enabling easy testing of new ideas. Text2Spec models (Tacotron, Tacotron2, Glow-TTS, SpeedySpeech). Speaker encoder to compute speaker embeddings efficiently. Vocoder models (MelGAN, Multiband-MelGAN, GAN-TTS, ParallelWaveGAN, WaveGrad, WaveRNN). If you are only interested in synthesizing speech with the released TTS models, installing from PyPI is the easiest option.
    Downloads: 3 This Week
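Synthesis with a released model can look like this (recent releases of the PyPI package expose an API along these lines; model names and the interface change over time):

```python
# pip install TTS
from TTS.api import TTS

tts = TTS(model_name="tts_models/en/ljspeech/tacotron2-DDC")
tts.tts_to_file(text="Deep learning for text to speech.",
                file_path="output.wav")
```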
  • 11
    ALAE

    Adversarial Latent Autoencoders

    ...The project implements the architecture introduced in the CVPR research paper on Adversarial Latent Autoencoders, which focuses on improving generative modeling by learning latent representations aligned with adversarial training objectives. Unlike traditional GANs that directly generate images from random noise, ALAE uses an encoder-decoder architecture that maps images into a structured latent space and then reconstructs them through adversarial training. This design allows the model to learn interpretable latent representations that can be manipulated to control generated image attributes.
    Downloads: 0 This Week
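The distinguishing trick can be sketched in a few lines of PyTorch (a toy illustration, not the paper's architecture): reciprocity is enforced in latent space, i.e. the encoder is trained to recover the latent code that generated an image, alongside the usual adversarial loss.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

latent_dim = 64
G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                  nn.Linear(256, 28 * 28))       # generator: latent -> image
E = nn.Sequential(nn.Linear(28 * 28, 256), nn.ReLU(),
                  nn.Linear(256, latent_dim))    # encoder: image -> latent

w = torch.randn(16, latent_dim)
fake = G(w)
latent_recon_loss = F.mse_loss(E(fake), w)       # reciprocity in latent space
latent_recon_loss.backward()
# In the full model this term is optimized jointly with an adversarial
# real/fake discriminator loss on the images themselves.
```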
  • 12
    Texar

    Toolkit for Machine Learning, Natural Language Processing

    Texar is a toolkit aiming to support a broad set of machine learning tasks, especially natural language processing and text generation. Texar provides a library of easy-to-use ML modules and functionalities for composing arbitrary models and algorithms. The tool is designed for both researchers and practitioners for fast prototyping and experimentation. Texar was originally developed, and is actively contributed to, by Petuum and CMU in collaboration with other institutes. A mirror of this...
    Downloads: 0 This Week
  • 13
    CakeChat

    CakeChat: Emotional Generative Dialog System

    ...The code is flexible and allows conditioning the model's responses on an arbitrary categorical variable. For example, you can train your own persona-based neural conversational model or create an emotional chatting machine. Hierarchical Recurrent Encoder-Decoder (HRED) architecture for handling deep dialog context, sketched below. Multilayer RNN with GRU cells; the first layer of the utterance-level encoder is always bidirectional. By default, the CuDNNGRU implementation is used for ~25% faster inference. The thought vector is fed into the decoder at each decoding step. The decoder can be conditioned on any categorical label, for example an emotion label or persona id. ...
    Downloads: 0 This Week
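A minimal HRED sketch in PyTorch (illustrative only, not the CakeChat code): a bidirectional utterance-level encoder, a context GRU over the utterance vectors, and a decoder that receives the resulting thought vector at every step.

```python
import torch
import torch.nn as nn

emb = nn.Embedding(1000, 64)
utt_enc = nn.GRU(64, 128, bidirectional=True, batch_first=True)
ctx_enc = nn.GRU(256, 256, batch_first=True)
dec_cell = nn.GRUCell(64 + 256, 256)            # token embedding + thought vector
out_proj = nn.Linear(256, 1000)

dialog = torch.randint(0, 1000, (1, 3, 10))     # 3 utterances of 10 tokens each

# Utterance-level encoder (bidirectional): one vector per utterance.
utt_vecs = []
for t in range(dialog.size(1)):
    _, h = utt_enc(emb(dialog[:, t]))           # h: (2, batch, 128)
    utt_vecs.append(torch.cat([h[0], h[1]], dim=-1))

# Context-level encoder over the utterance vectors yields the thought vector.
_, thought = ctx_enc(torch.stack(utt_vecs, dim=1))
thought = thought.squeeze(0)                    # (batch, 256)

# Greedy decoding, feeding the thought vector in at every step.
token = torch.zeros(1, dtype=torch.long)        # start-of-sequence id
state = thought
for _ in range(5):
    inp = torch.cat([emb(token), thought], dim=-1)
    state = dec_cell(inp, state)
    token = out_proj(state).argmax(dim=-1)
```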