Multilingual BERT model trained on 104 Wikipedia languages
Multilingual RoBERTa model trained on text in 100 languages for NLP tasks
Zero-shot image-text matching with ViT-B/32 Transformer encoder
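A minimal usage sketch for a checkpoint of this kind, assuming the Hugging Face transformers API and the openai/clip-vit-base-patch32 model ID (both assumptions, not taken from the entry above):

```python
# Hedged sketch: zero-shot image-text matching with a ViT-B/32 CLIP encoder.
# The checkpoint name "openai/clip-vit-base-patch32" is an assumption.
import requests
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(
    text=["a photo of a cat", "a photo of a dog"],
    images=image,
    return_tensors="pt",
    padding=True,
)
outputs = model(**inputs)
# Softmax over the image-text similarity logits gives match probabilities.
probs = outputs.logits_per_image.softmax(dim=1)
print(probs)
```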
Image captioning model trained on COCO using BLIP base architecture
Summarization model fine-tuned on CNN/DailyMail articles
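A sketch of how such a summarizer might be called, assuming facebook/bart-large-cnn as the CNN/DailyMail fine-tune (the specific checkpoint is an assumption):

```python
# Hedged sketch: abstractive summarization of a news article.
# "facebook/bart-large-cnn" is an assumed CNN/DailyMail checkpoint.
from transformers import pipeline

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")
article = (
    "The city council voted on Tuesday to expand the bike-share program, "
    "adding 500 bicycles and 40 new docking stations across downtown. "
    "Officials said ridership has doubled since the program launched."
)
print(summarizer(article, max_length=60, min_length=15, do_sample=False))
```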
Base Vision Transformer pretrained on ImageNet-21k at 224x224
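For an ImageNet-21k checkpoint a plausible use is feature extraction, since the 21k-pretrained backbone exposes patch embeddings rather than a fine-tuned classifier head; google/vit-base-patch16-224-in21k is an assumed model ID:

```python
# Hedged sketch: extracting ViT features from a 224x224 image.
# "google/vit-base-patch16-224-in21k" is an assumed checkpoint name.
import requests
from PIL import Image
from transformers import ViTImageProcessor, ViTModel

processor = ViTImageProcessor.from_pretrained("google/vit-base-patch16-224-in21k")
model = ViTModel.from_pretrained("google/vit-base-patch16-224-in21k")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(images=image, return_tensors="pt")
outputs = model(**inputs)
# 197 tokens = 1 [CLS] token + 196 patches (224/16 = 14, 14*14 = 196).
print(outputs.last_hidden_state.shape)  # torch.Size([1, 197, 768])
```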
3B parameter ESM-2 model for protein sequence understanding
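A sketch of extracting per-residue embeddings with ESM-2 through transformers; facebook/esm2_t36_3B_UR50D is an assumed ID for the 3B model, and a smaller ESM-2 checkpoint can stand in for a quick local test:

```python
# Hedged sketch: per-residue protein embeddings from ESM-2.
# "facebook/esm2_t36_3B_UR50D" is an assumed checkpoint name for the
# 3B-parameter model; swap in a smaller ESM-2 checkpoint to test cheaply.
import torch
from transformers import AutoTokenizer, EsmModel

checkpoint = "facebook/esm2_t36_3B_UR50D"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = EsmModel.from_pretrained(checkpoint)

sequence = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"
inputs = tokenizer(sequence, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
# One embedding per residue (plus special tokens at the ends).
print(outputs.last_hidden_state.shape)
```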
Sentiment analysis model fine-tuned on SST-2 with DistilBERT
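Such a classifier is typically called through the pipeline API; the checkpoint name distilbert-base-uncased-finetuned-sst-2-english below is an assumption:

```python
# Hedged sketch: SST-2 sentiment classification with DistilBERT.
# The checkpoint name below is an assumption.
from transformers import pipeline

classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)
print(classifier("This catalog is surprisingly easy to navigate."))
# e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
```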
Improved DeBERTa model with ELECTRA-style pretraining
Multilingual sentence embeddings for search and similarity tasks
Transformer model for image classification with patch-based input
RoBERTa model for English sentiment analysis on Twitter data
Protein language model trained for sequence understanding and downstream tasks
Transformer model pretrained efficiently by detecting fake vs. real tokens
Zero-shot image-text model for classification and similarity tasks
Compact, efficient model for sentence embeddings and semantic search
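A sketch of encoding and comparing sentences for semantic search, assuming the sentence-transformers library and the all-MiniLM-L6-v2 checkpoint (both assumptions):

```python
# Hedged sketch: sentence embeddings and cosine similarity for search.
# "sentence-transformers/all-MiniLM-L6-v2" is an assumed compact checkpoint.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
query = "How do I reset my password?"
docs = ["Steps to recover account access", "Our office hours and location"]

query_emb = model.encode(query, convert_to_tensor=True)
doc_embs = model.encode(docs, convert_to_tensor=True)
# Higher cosine similarity = closer semantic match to the query.
print(util.cos_sim(query_emb, doc_embs))
```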
124M-parameter GPT-2 English language model for text generation
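Sampling from GPT-2 takes a few lines with the pipeline API; the checkpoint name gpt2 is an assumption:

```python
# Hedged sketch: open-ended text generation with GPT-2 (124M).
# The checkpoint name "gpt2" is an assumption.
from transformers import pipeline, set_seed

set_seed(42)  # make the sample reproducible
generator = pipeline("text-generation", model="gpt2")
print(generator("Once upon a time,", max_new_tokens=25, num_return_sequences=1))
```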
Custom BLEURT model for evaluating text similarity using PyTorch
Robust BERT-based model for English with improved MLM training
Flexible text-to-text transformer model for multilingual NLP tasks
BERT-based Chinese language model for fill-mask and NLP tasks
Multimodal Transformer for document image understanding and layout
ClinicalBERT model trained on MIMIC notes for clinical NLP tasks
Lightweight T5-Small text-to-text transformer for NLP tasks
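A sketch of T5-Small's text-to-text interface with a task prefix, assuming the t5-small checkpoint and the original T5 prompt format:

```python
# Hedged sketch: text-to-text generation with T5-Small.
# "t5-small" is an assumed checkpoint name; the "translate English to
# German:" prefix follows the original T5 task format.
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

inputs = tokenizer(
    "translate English to German: The house is wonderful.",
    return_tensors="pt",
)
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```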
Lightweight sentence embedding model for semantic search