Showing 233 open source projects for "tensorflow"

  • 1
    bert-base-multilingual-cased

    Multilingual BERT model trained on 104 Wikipedia languages

    ... languages. It supports sequence classification, token classification, question answering, and more. Built with a shared vocabulary of 110,000 tokens, it is compatible with both PyTorch and TensorFlow.
  • 2
    xlm-roberta-base

    Multilingual RoBERTa trained on 100 languages for NLP tasks

    ... and classification tasks, offering strong performance on benchmarks across languages. It supports use in PyTorch, TensorFlow, JAX, and ONNX, and is best utilized when fine-tuned for downstream applications such as sentiment analysis, named entity recognition, or question answering.
  • 3
    clip-vit-base-patch32

    Zero-shot image-text matching with ViT-B/32 Transformer encoder

    ... multiple frameworks including PyTorch, TensorFlow, and JAX. It is primarily intended for research and robustness evaluation in computer vision, not for commercial deployment. Like other CLIP models, it performs well across a wide range of benchmarks but exhibits known limitations in fine-grained classification and demographic bias. Despite strong generalization, OpenAI discourages its use in facial recognition or unconstrained real-world applications without in-domain testing.
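
    The zero-shot image-text matching workflow described above can be sketched with the Transformers CLIP classes; the Hub ID "openai/clip-vit-base-patch32", the image path, and the candidate labels below are illustrative assumptions.

    import torch
    from PIL import Image
    from transformers import CLIPModel, CLIPProcessor

    model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")       # assumed Hub ID
    processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

    image = Image.open("example.jpg")                                       # placeholder image path
    labels = ["a photo of a cat", "a photo of a dog", "a photo of a car"]

    inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
    with torch.no_grad():
        outputs = model(**inputs)

    # logits_per_image holds image-text similarity scores; softmax turns them into label probabilities.
    probs = outputs.logits_per_image.softmax(dim=-1)
    print(dict(zip(labels, probs[0].tolist())))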
  • 4
    blip-image-captioning-base

    Image captioning model trained on COCO using BLIP base architecture

    ... web-sourced noisy image-caption data using synthetic caption generation and noise filtering. BLIP's unified architecture is designed for both vision-language understanding and generation, showing strong generalization even in zero-shot settings. The model can be easily deployed using Hugging Face Transformers in PyTorch or TensorFlow, with support for GPU acceleration and half-precision inference.
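
    A rough sketch of the Transformers-based deployment mentioned above, assuming the checkpoint is the Hub model "Salesforce/blip-image-captioning-base"; the image path is a placeholder.

    from PIL import Image
    from transformers import BlipProcessor, BlipForConditionalGeneration

    model_id = "Salesforce/blip-image-captioning-base"   # assumed Hub ID
    processor = BlipProcessor.from_pretrained(model_id)
    model = BlipForConditionalGeneration.from_pretrained(model_id)

    image = Image.open("example.jpg").convert("RGB")     # placeholder image path
    inputs = processor(images=image, return_tensors="pt")

    # Generate a caption and decode the token IDs back to text.
    output_ids = model.generate(**inputs, max_new_tokens=30)
    print(processor.decode(output_ids[0], skip_special_tokens=True))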
  • 5
    bart-large-cnn

    Summarization model fine-tuned on CNN/DailyMail articles

    ... in generating concise, coherent, and human-readable summaries from longer texts. Its architecture allows it to model both language understanding and generation tasks effectively. The model supports usage in PyTorch, TensorFlow, and JAX, and is integrated with the Hugging Face pipeline API for simple deployment. Due to its size and performance, it's widely used in real-world summarization applications such as news aggregation, legal document condensing, and content creation.
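
    A minimal sketch of the pipeline usage mentioned above, assuming the checkpoint is the Hub model "facebook/bart-large-cnn" and using a toy article string:

    from transformers import pipeline

    summarizer = pipeline("summarization", model="facebook/bart-large-cnn")  # assumed Hub ID

    article = (
        "The tower is 324 metres tall, about the same height as an 81-storey "
        "building, and was the tallest man-made structure in the world for 41 years."
    )

    # max_length / min_length bound the length of the generated summary (in tokens).
    summary = summarizer(article, max_length=60, min_length=10, do_sample=False)
    print(summary[0]["summary_text"])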
  • 6
    vit-base-patch16-224-in21k

    Base Vision Transformer pretrained on ImageNet-21k at 224x224

    ... fine-tuned heads, it provides strong image representations useful for transfer learning and feature extraction. The model is compatible with PyTorch, TensorFlow, and JAX, and includes a pretrained pooler that facilitates downstream use cases. It is typically used by adding a linear classification head on top of the [CLS] token's output. The ViT architecture demonstrated that transformers, when scaled and trained properly, can match or exceed convolutional models in image recognition.
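
    The transfer-learning pattern described above (a linear head on the [CLS] output) might look roughly like this; the Hub ID "google/vit-base-patch16-224-in21k", the image path, and the 10-class head are illustrative assumptions.

    import torch
    from PIL import Image
    from transformers import ViTImageProcessor, ViTModel

    model_id = "google/vit-base-patch16-224-in21k"                      # assumed Hub ID
    processor = ViTImageProcessor.from_pretrained(model_id)
    backbone = ViTModel.from_pretrained(model_id)
    classifier = torch.nn.Linear(backbone.config.hidden_size, 10)       # hypothetical 10-class head

    image = Image.open("example.jpg").convert("RGB")                    # placeholder image path
    inputs = processor(images=image, return_tensors="pt")

    with torch.no_grad():
        features = backbone(**inputs)

    # Take the [CLS] token representation and classify it with the (still untrained) linear head.
    cls_embedding = features.last_hidden_state[:, 0]
    logits = classifier(cls_embedding)
    print(logits.shape)  # torch.Size([1, 10])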
  • 7
    esm2_t36_3B_UR50D

    3B parameter ESM-2 model for protein sequence understanding

    ... acid sequences as input and generates embeddings or masked predictions, enabling fine-tuning for specific biological applications. Larger checkpoints like this one tend to yield better performance but require more compute resources. The model is compatible with PyTorch and TensorFlow, and Meta provides demo notebooks to help with fine-tuning and application. Its capabilities support advanced bioinformatics research and computational biology workflows.
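
    A sketch of extracting per-residue embeddings with Transformers, assuming the checkpoint is the Hub model "facebook/esm2_t36_3B_UR50D" (smaller ESM-2 checkpoints work the same way and need far less memory); the sequence is an arbitrary example.

    import torch
    from transformers import AutoTokenizer, EsmModel

    model_id = "facebook/esm2_t36_3B_UR50D"              # assumed Hub ID; ~3B parameters
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = EsmModel.from_pretrained(model_id)

    sequence = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"       # illustrative amino-acid sequence
    inputs = tokenizer(sequence, return_tensors="pt")

    with torch.no_grad():
        outputs = model(**inputs)

    # One embedding vector per residue (plus special tokens).
    print(outputs.last_hidden_state.shape)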
  • 8
    distilbert-base-uncased-finetuned-sst-2

    Sentiment analysis model fine-tuned on SST-2 with DistilBERT

    distilbert-base-uncased-finetuned-sst-2-english is a lightweight sentiment classification model fine-tuned from DistilBERT on the SST-2 dataset. Developed by Hugging Face, it performs binary sentiment analysis (positive/negative) with high accuracy, achieving 91.3% on the dev set. It offers a smaller and faster alternative to BERT while retaining competitive performance (BERT scores ~92.7%). The model uses an uncased vocabulary and supports PyTorch, TensorFlow, ONNX, and Rust for broad...
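
    A minimal sentiment-analysis sketch, assuming the checkpoint is the Hub model "distilbert-base-uncased-finetuned-sst-2-english":

    from transformers import pipeline

    classifier = pipeline(
        "sentiment-analysis",
        model="distilbert-base-uncased-finetuned-sst-2-english",  # assumed Hub ID
    )

    print(classifier(["I love this library!", "This was a waste of time."]))
    # e.g. [{'label': 'POSITIVE', 'score': ...}, {'label': 'NEGATIVE', 'score': ...}]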
  • 9
    deberta-v3-base

    Improved DeBERTa model with ELECTRA-style pretraining

    ... parameters. DeBERTa-v3-base was trained on 160GB of text data, the same used for DeBERTa-v2, ensuring robust language understanding. It achieves state-of-the-art results on several NLU benchmarks, including SQuAD 2.0 and MNLI, outperforming prior models like RoBERTa-base and ELECTRA-base. The model is compatible with Hugging Face Transformers, PyTorch, TensorFlow, and Rust, and is widely used in text classification and fill-mask tasks.
  • 10
    paraphrase-multilingual-mpnet-base-v2

    Multilingual sentence embeddings for search and similarity tasks

    ... the MPNet framework, it offers multilingual support with strong performance across a wide range of languages. It can be used via the sentence-transformers library for streamlined access or directly through Hugging Face Transformers with custom pooling operations. The model is compatible with multiple formats, including PyTorch, TensorFlow, ONNX, and OpenVINO. With over 3 million downloads per month, it’s widely adopted in both research and production environments.
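
    The sentence-transformers route mentioned above might look like this, assuming the checkpoint is the Hub model "sentence-transformers/paraphrase-multilingual-mpnet-base-v2":

    from sentence_transformers import SentenceTransformer, util

    model = SentenceTransformer(
        "sentence-transformers/paraphrase-multilingual-mpnet-base-v2"  # assumed Hub ID
    )

    sentences = ["How do I reset my password?", "¿Cómo restablezco mi contraseña?"]
    embeddings = model.encode(sentences, convert_to_tensor=True)

    # Cosine similarity between the English sentence and its Spanish paraphrase.
    print(util.cos_sim(embeddings[0], embeddings[1]))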
  • 11
    vit-base-patch16-224

    Transformer model for image classification with patch-based input.

    ...-quality representations that can be adapted to downstream visual tasks with minimal additional training. This model has 86.6 million parameters and is available in PyTorch, TensorFlow, and JAX implementations. While the model card was written by Hugging Face, the weights were originally converted from JAX to PyTorch by the community.
  • 12
    twitter-roberta-base-sentiment-latest

    RoBERTa model for English sentiment analysis on Twitter data

    .... This updated version improves performance over earlier Twitter sentiment models. It supports both PyTorch and TensorFlow and includes example pipelines for quick implementation. With strong classification accuracy and ease of use, it’s ideal for social media monitoring, brand sentiment tracking, and public opinion research.
  • 13
    esm2_t30_150M_UR50D

    Protein language model trained for sequence understanding and tasks

    ... unlabeled protein data. This particular checkpoint uses 30 layers and is available in both PyTorch and TensorFlow, facilitating integration into various protein modeling pipelines. It is designed to help researchers extract meaningful representations from protein sequences and accelerate downstream discoveries in computational biology.
  • 14
    electra-base-discriminator

    Transformer model trained to detect fake vs real tokens efficiently

    ... classification, question answering (e.g., SQuAD), and sequence labeling. It can be fine-tuned for various downstream NLP tasks and supports multiple frameworks including PyTorch, TensorFlow, JAX, and Rust. ELECTRA models have demonstrated state-of-the-art performance on several benchmarks while training faster than comparable models.
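
    A sketch of the replaced-token-detection head described above, assuming the checkpoint is the Hub model "google/electra-base-discriminator"; the sentence contains a deliberately implausible token ("purple") for the discriminator to flag.

    import torch
    from transformers import AutoTokenizer, ElectraForPreTraining

    model_id = "google/electra-base-discriminator"       # assumed Hub ID
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = ElectraForPreTraining.from_pretrained(model_id)

    sentence = "The quick brown fox purple over the lazy dog"
    inputs = tokenizer(sentence, return_tensors="pt")

    with torch.no_grad():
        logits = model(**inputs).logits

    # Positive logits mean the discriminator believes a token was replaced (fake).
    tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
    for token, score in zip(tokens, logits[0].tolist()):
        print(f"{token:>10}  {'fake' if score > 0 else 'real'}")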
  • 15
    clip-vit-large-patch14

    Zero-shot image-text model for classification and similarity tasks

    ... inference in PyTorch, TensorFlow, and JAX. Despite its versatility, CLIP is not recommended for real-world deployment without thorough testing due to known performance variability, bias, and fairness issues. It particularly struggles with fine-grained visual classification, object counting, and biased associations in demographic evaluations. Its primary purpose is research in robustness, generalization, and interdisciplinary applications across computer vision and language understanding.
  • 16
    all-MiniLM-L6-v2

    Compact, efficient model for sentence embeddings and semantic search

    ... larger models in embedding quality relative to size and is available in PyTorch, TensorFlow, ONNX, and other formats. all-MiniLM-L6-v2 can be used with the sentence-transformers library or directly via Hugging Face Transformers with custom pooling. Text longer than 256 tokens is truncated, making it ideal for short-form text processing. Released under the Apache 2.0 license, the model is widely adopted across academic and commercial applications for its balance of performance and efficiency.
  • 17
    GPT-2

    GPT-2 is a 124M parameter English language model for text generation

    ... sequences up to 1024 tokens. It’s the smallest of the GPT-2 family with 124 million parameters and can be used with Hugging Face's Transformers in PyTorch, TensorFlow, and JAX. Though widely used, it reflects biases from its training data and is not suitable for factual tasks or sensitive deployments without further scrutiny. Despite limitations, GPT-2 remains a foundational model for generative NLP tasks and research.
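
    A minimal generation sketch with the Transformers pipeline, assuming the checkpoint is the Hub model "gpt2"; the prompt and length cap are arbitrary.

    from transformers import pipeline

    generator = pipeline("text-generation", model="gpt2")  # assumed Hub ID

    result = generator(
        "Open source machine learning frameworks are",
        max_new_tokens=30,        # cap the continuation length
        num_return_sequences=1,
    )
    print(result[0]["generated_text"])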
  • 18
    BLEURT-20-D12

    Custom BLEURT model for evaluating text similarity using PyTorch

    BLEURT-20-D12 is a PyTorch implementation of BLEURT, a model designed to assess the semantic similarity between two text sequences. It serves as an automatic evaluation metric for natural language generation tasks like summarization and translation. The model predicts a score indicating how similar a candidate sentence is to a reference sentence, with higher scores indicating greater semantic overlap. Unlike standard BLEURT models from TensorFlow, this version is built from a custom PyTorch...
  • 19
    roberta-base

    Robust BERT-based model for English with improved MLM training

    roberta-base is a robustly optimized variant of BERT, pretrained on a significantly larger corpus of English text using dynamic masked language modeling. Developed by Facebook AI, RoBERTa improves on BERT by removing the Next Sentence Prediction objective, using longer training, larger batches, and more data, including BookCorpus, English Wikipedia, CC-News, OpenWebText, and Stories. It captures contextual representations of language by masking 15% of input tokens and predicting them....
  • 20
    t5-base

    Flexible text-to-text transformer model for multilingual NLP tasks

    t5-base is a pre-trained transformer model from Google’s T5 (Text-To-Text Transfer Transformer) family that reframes all NLP tasks into a unified text-to-text format. With 220 million parameters, it can handle a wide range of tasks, including translation, summarization, question answering, and classification. Unlike traditional models like BERT, which output class labels or spans, T5 always generates text outputs. It was trained on the C4 dataset, along with a variety of supervised NLP...
  • 21
    bert-base-chinese

    BERT-based Chinese language model for fill-mask and NLP tasks

    bert-base-chinese is a pre-trained language model developed by Google and hosted by Hugging Face, based on the original BERT architecture but tailored for Chinese. It supports fill-mask tasks and is pretrained on Chinese text using word piece tokenization and random masking strategies, following the standard BERT training procedure. With 12 hidden layers and a vocabulary size of 21,128 tokens, it has approximately 103 million parameters. The model is effective for a range of downstream NLP...
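
    A minimal fill-mask sketch, assuming the checkpoint is the Hub model "bert-base-chinese"; the sentence is an arbitrary example where [MASK] should resolve to 首 ("capital").

    from transformers import pipeline

    fill_mask = pipeline("fill-mask", model="bert-base-chinese")  # assumed Hub ID

    # Print the top three predictions for the masked character.
    for prediction in fill_mask("巴黎是法国的[MASK]都。")[:3]:
        print(prediction["token_str"], round(prediction["score"], 3))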
  • 22
    layoutlm-base-uncased

    Multimodal Transformer for document image understanding and layout

    layoutlm-base-uncased is a multimodal transformer model developed by Microsoft for document image understanding tasks. It incorporates both text and layout (position) features to effectively process structured documents like forms, invoices, and receipts. This base version has 113 million parameters and is pre-trained on 11 million documents from the IIT-CDIP dataset. LayoutLM enables better performance in tasks where the spatial arrangement of text plays a crucial role. The model uses a...
  • 23
    Bio_ClinicalBERT

    ClinicalBERT model trained on MIMIC notes for clinical NLP tasks

    Bio_ClinicalBERT is a domain-specific language model tailored for clinical natural language processing (NLP), extending BioBERT with additional training on clinical notes. It was initialized from BioBERT-Base v1.0 and further pre-trained on all clinical notes from the MIMIC-III database (~880M words), which includes ICU patient records. The training focused on improving performance in tasks like named entity recognition and natural language inference within the healthcare domain. Notes were...
  • 24
    t5-small

    T5-Small: Lightweight text-to-text transformer for NLP tasks

    .... It was pretrained on the C4 dataset using both unsupervised denoising and supervised learning on tasks like sentiment analysis, NLI, and QA. Despite its size, it performs competitively across 24 NLP benchmarks, making it a strong candidate for prototyping and fine-tuning. T5-Small is compatible with major deep learning frameworks including PyTorch, TensorFlow, JAX, and ONNX. The model is open-source under the Apache 2.0 license and has wide support across Hugging Face's ecosystem.
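
    A text-to-text sketch, assuming the checkpoint is the Hub model "t5-small"; T5 expects a task prefix in the input string, here English-to-German translation.

    from transformers import T5ForConditionalGeneration, T5Tokenizer

    tokenizer = T5Tokenizer.from_pretrained("t5-small")              # assumed Hub ID
    model = T5ForConditionalGeneration.from_pretrained("t5-small")

    inputs = tokenizer(
        "translate English to German: The house is wonderful.",
        return_tensors="pt",
    )
    output_ids = model.generate(**inputs, max_new_tokens=40)
    print(tokenizer.decode(output_ids[0], skip_special_tokens=True))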
  • 25
    paraphrase-MiniLM-L6-v2

    Lightweight sentence embedding model for semantic search

    paraphrase-MiniLM-L6-v2 is a sentence-transformers model that encodes sentences and paragraphs into 384-dimensional dense vectors. It is specifically optimized for semantic similarity tasks such as paraphrase mining, clustering, and semantic search. The model is built on a lightweight MiniLM architecture, making it both fast and efficient for large-scale inference. It supports integration via both the sentence-transformers and transformers libraries, with built-in pooling strategies like...