Showing 244 open source projects for "tensorflow"

  • 1
    clip-vit-large-patch14

    Zero-shot image-text model for classification and similarity tasks

    ... inference in PyTorch, TensorFlow, and JAX. Despite its versatility, CLIP is not recommended for real-world deployment without thorough testing due to known performance variability, bias, and fairness issues. It particularly struggles with fine-grained visual classification, object counting, and biased associations in demographic evaluations. Its primary purpose is research in robustness, generalization, and interdisciplinary applications across computer vision and language understanding.
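    For illustration, a minimal zero-shot classification sketch using Hugging Face Transformers; the openai/clip-vit-large-patch14 checkpoint id, the local image path, and the candidate labels are assumptions for illustration, not part of the listing:

        # Hedged sketch: score an image against free-form text labels with CLIP.
        from PIL import Image
        import torch
        from transformers import CLIPModel, CLIPProcessor

        model = CLIPModel.from_pretrained("openai/clip-vit-large-patch14")
        processor = CLIPProcessor.from_pretrained("openai/clip-vit-large-patch14")

        image = Image.open("photo.jpg")  # assumed local image
        labels = ["a photo of a cat", "a photo of a dog"]
        inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
        with torch.no_grad():
            logits = model(**inputs).logits_per_image  # image-text similarity scores
        print(dict(zip(labels, logits.softmax(dim=-1)[0].tolist())))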
  • 2
    all-MiniLM-L6-v2

    Compact, efficient model for sentence embeddings and semantic search

    ... larger models in embedding quality relative to size and is available in PyTorch, TensorFlow, ONNX, and other formats. all-MiniLM-L6-v2 can be used with the sentence-transformers library or directly via Hugging Face Transformers with custom pooling. Text longer than 256 tokens is truncated, making it ideal for short-form text processing. Released under the Apache 2.0 license, the model is widely adopted across academic and commercial applications for its balance of performance and efficiency.
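    A minimal embedding sketch via the sentence-transformers library; the example sentences are placeholders:

        # Hedged sketch: encode short texts into 384-dim vectors with all-MiniLM-L6-v2.
        from sentence_transformers import SentenceTransformer

        model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
        sentences = ["Semantic search finds meaning.", "Vector search retrieves similar text."]
        embeddings = model.encode(sentences)  # numpy array of shape (2, 384)
        print(embeddings.shape)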
  • 3
    GPT-2

    GPT-2 is a 124M-parameter English language model for text generation

    ... sequences up to 1024 tokens. It’s the smallest of the GPT-2 family with 124 million parameters and can be used with Hugging Face's Transformers in PyTorch, TensorFlow, and JAX. Though widely used, it reflects biases from its training data and is not suitable for factual tasks or sensitive deployments without further scrutiny. Despite limitations, GPT-2 remains a foundational model for generative NLP tasks and research.
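    A minimal generation sketch with the Transformers pipeline; the prompt and sampling settings are placeholders:

        # Hedged sketch: sample a short continuation from the 124M gpt2 checkpoint.
        from transformers import pipeline, set_seed

        generator = pipeline("text-generation", model="gpt2")
        set_seed(42)  # make sampling reproducible
        out = generator("Once upon a time,", max_new_tokens=30, num_return_sequences=1)
        print(out[0]["generated_text"])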
  • 4
    BLEURT-20-D12

    Custom BLEURT model for evaluating text similarity using PyTorch

    BLEURT-20-D12 is a PyTorch implementation of BLEURT, a model designed to assess the semantic similarity between two text sequences. It serves as an automatic evaluation metric for natural language generation tasks like summarization and translation. The model predicts a score indicating how similar a candidate sentence is to a reference sentence, with higher scores indicating greater semantic overlap. Unlike standard BLEURT models from TensorFlow, this version is built from a custom PyTorch...
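    A hedged scoring sketch, assuming the community bleurt-pytorch package and the lucadiliello/BLEURT-20-D12 checkpoint id on Hugging Face; both are assumptions, so check the project page for the exact names and API:

        # Assumed bleurt-pytorch API; verify against the project docs.
        import torch
        from bleurt_pytorch import BleurtForSequenceClassification, BleurtTokenizer

        name = "lucadiliello/BLEURT-20-D12"  # assumed checkpoint id
        tokenizer = BleurtTokenizer.from_pretrained(name)
        model = BleurtForSequenceClassification.from_pretrained(name).eval()

        references = ["a bird chirps by the window"]   # placeholder texts
        candidates = ["a bird sings near the window"]
        with torch.no_grad():
            inputs = tokenizer(references, candidates, padding="longest", return_tensors="pt")
            scores = model(**inputs).logits.flatten().tolist()  # higher = more similar
        print(scores)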
  • 5
    roberta-base

    Robust BERT-based model for English with improved MLM training

    roberta-base is a robustly optimized variant of BERT, pretrained on a significantly larger corpus of English text using dynamic masked language modeling. Developed by Facebook AI, RoBERTa improves on BERT by removing the Next Sentence Prediction objective, using longer training, larger batches, and more data, including BookCorpus, English Wikipedia, CC-News, OpenWebText, and Stories. It captures contextual representations of language by masking 15% of input tokens and predicting them....
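    A minimal fill-mask sketch; note that RoBERTa's mask token is <mask>, and the example sentence is a placeholder:

        # Hedged sketch: predict the masked token with roberta-base.
        from transformers import pipeline

        unmasker = pipeline("fill-mask", model="roberta-base")
        for pred in unmasker("The goal of life is <mask>.")[:3]:
            print(pred["token_str"], round(pred["score"], 3))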
  • 6
    t5-base

    Flexible text-to-text transformer model for multilingual NLP tasks

    t5-base is a pre-trained transformer model from Google’s T5 (Text-To-Text Transfer Transformer) family that reframes all NLP tasks into a unified text-to-text format. With 220 million parameters, it can handle a wide range of tasks, including translation, summarization, question answering, and classification. Unlike traditional models like BERT, which output class labels or spans, T5 always generates text outputs. It was trained on the C4 dataset, along with a variety of supervised NLP...
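    A minimal text-to-text sketch; the translation prefix follows T5's task-prefix convention, and the input sentence is a placeholder:

        # Hedged sketch: T5 treats every task as text in, text out.
        from transformers import T5ForConditionalGeneration, T5Tokenizer

        tokenizer = T5Tokenizer.from_pretrained("t5-base")
        model = T5ForConditionalGeneration.from_pretrained("t5-base")

        inputs = tokenizer("translate English to German: The house is wonderful.",
                           return_tensors="pt")
        outputs = model.generate(**inputs, max_new_tokens=40)
        print(tokenizer.decode(outputs[0], skip_special_tokens=True))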
  • 7
    bert-base-chinese

    BERT-based Chinese language model for fill-mask and NLP tasks

    bert-base-chinese is a pre-trained language model developed by Google and hosted by Hugging Face, based on the original BERT architecture but tailored for Chinese. It supports fill-mask tasks and is pretrained on Chinese text using word piece tokenization and random masking strategies, following the standard BERT training procedure. With 12 hidden layers and a vocabulary size of 21,128 tokens, it has approximately 103 million parameters. The model is effective for a range of downstream NLP...
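    A minimal fill-mask sketch; BERT's mask token is [MASK], and the Chinese example sentence is a placeholder:

        # Hedged sketch: predict the masked character with bert-base-chinese.
        from transformers import pipeline

        unmasker = pipeline("fill-mask", model="bert-base-chinese")
        print(unmasker("北京是中国的[MASK]都。")[0]["token_str"])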
  • 8
    layoutlm-base-uncased

    Multimodal Transformer for document image understanding and layout

    layoutlm-base-uncased is a multimodal transformer model developed by Microsoft for document image understanding tasks. It incorporates both text and layout (position) features to effectively process structured documents like forms, invoices, and receipts. This base version has 113 million parameters and is pre-trained on 11 million documents from the IIT-CDIP dataset. LayoutLM enables better performance in tasks where the spatial arrangement of text plays a crucial role. The model uses a...
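    A minimal forward-pass sketch showing how word boxes accompany tokens; the words and their 0-1000-normalized bounding boxes are fabricated stand-ins for real OCR output:

        # Hedged sketch: LayoutLM consumes token ids plus one bbox per token.
        import torch
        from transformers import LayoutLMModel, LayoutLMTokenizer

        tokenizer = LayoutLMTokenizer.from_pretrained("microsoft/layoutlm-base-uncased")
        model = LayoutLMModel.from_pretrained("microsoft/layoutlm-base-uncased")

        words = ["Invoice", "Total", "$42.00"]                 # assumed OCR words
        word_boxes = [[60, 50, 200, 80], [60, 400, 140, 430],  # assumed normalized
                      [150, 400, 260, 430]]                    # (0-1000) coordinates

        token_boxes = []
        for word, box in zip(words, word_boxes):
            token_boxes.extend([box] * len(tokenizer.tokenize(word)))
        token_boxes = [[0, 0, 0, 0]] + token_boxes + [[1000, 1000, 1000, 1000]]  # [CLS]/[SEP]

        encoding = tokenizer(" ".join(words), return_tensors="pt")
        outputs = model(**encoding, bbox=torch.tensor([token_boxes]))
        print(outputs.last_hidden_state.shape)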
  • 9
    Bio_ClinicalBERT

    ClinicalBERT model trained on MIMIC notes for clinical NLP tasks

    Bio_ClinicalBERT is a domain-specific language model tailored for clinical natural language processing (NLP), extending BioBERT with additional training on clinical notes. It was initialized from BioBERT-Base v1.0 and further pre-trained on all clinical notes from the MIMIC-III database (~880M words), which includes ICU patient records. The training focused on improving performance in tasks like named entity recognition and natural language inference within the healthcare domain. Notes were...
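    A minimal loading sketch; emilyalsentzer/Bio_ClinicalBERT is the checkpoint id given on the model card, and the clinical sentence is a placeholder:

        # Hedged sketch: pull contextual embeddings for a clinical note snippet.
        import torch
        from transformers import AutoModel, AutoTokenizer

        tokenizer = AutoTokenizer.from_pretrained("emilyalsentzer/Bio_ClinicalBERT")
        model = AutoModel.from_pretrained("emilyalsentzer/Bio_ClinicalBERT")

        inputs = tokenizer("Patient denies chest pain or dyspnea.", return_tensors="pt")
        with torch.no_grad():
            hidden = model(**inputs).last_hidden_state  # (1, seq_len, 768)
        print(hidden.shape)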
  • 10
    t5-small

    T5-Small: Lightweight text-to-text transformer for NLP tasks

    .... It was pretrained on the C4 dataset using both unsupervised denoising and supervised learning on tasks like sentiment analysis, NLI, and QA. Despite its size, it performs competitively across 24 NLP benchmarks, making it a strong candidate for prototyping and fine-tuning. T5-Small is compatible with major deep learning frameworks including PyTorch, TensorFlow, JAX, and ONNX. The model is open-source under the Apache 2.0 license and has wide support across Hugging Face's ecosystem.
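    A minimal summarization sketch; for T5 checkpoints the pipeline applies the "summarize:" task prefix itself, and the input passage is a placeholder:

        # Hedged sketch: quick summarization with the small T5 checkpoint.
        from transformers import pipeline

        summarizer = pipeline("summarization", model="t5-small")
        text = ("The tower is 324 metres tall, about the same height as an "
                "81-storey building, and was the tallest man-made structure "
                "in the world for 41 years after its completion.")
        print(summarizer(text, max_length=30, min_length=5)[0]["summary_text"])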
  • 11
    paraphrase-MiniLM-L6-v2

    Lightweight sentence embedding model for semantic search

    paraphrase-MiniLM-L6-v2 is a sentence-transformers model that encodes sentences and paragraphs into 384-dimensional dense vectors. It is specifically optimized for semantic similarity tasks such as paraphrase mining, clustering, and semantic search. The model is built on a lightweight MiniLM architecture, making it both fast and efficient for large-scale inference. It supports integration via both the sentence-transformers and transformers libraries, with built-in pooling strategies like...
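    A minimal similarity sketch with sentence-transformers; the sentence pair is a placeholder:

        # Hedged sketch: cosine similarity between two paraphrase candidates.
        from sentence_transformers import SentenceTransformer, util

        model = SentenceTransformer("sentence-transformers/paraphrase-MiniLM-L6-v2")
        emb = model.encode(["How old are you?", "What is your age?"],
                           convert_to_tensor=True)
        print(util.cos_sim(emb[0], emb[1]).item())  # near 1.0 for paraphrases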
  • 12
    xlm-roberta-large

    Large multilingual RoBERTa model trained on 100 languages

    xlm-roberta-large is a multilingual transformer model pre-trained by Facebook AI on 2.5TB of filtered CommonCrawl data covering 100 languages. It is a large-sized version of XLM-RoBERTa, built on the RoBERTa architecture with enhanced multilingual capabilities. The model was trained using the masked language modeling (MLM) objective, where 15% of tokens are masked and predicted, enabling bidirectional context understanding. Unlike autoregressive models, it processes input holistically,...
  • 13
    bert-base-cased

    English BERT model using cased text for sentence-level tasks

    bert-base-cased is a foundational transformer model pretrained on English using masked language modeling (MLM) and next sentence prediction (NSP). It is case-sensitive, treating "English" and "english" as distinct, making it suitable for tasks where casing matters. The model uses a bidirectional attention mechanism to deeply understand sentence structure, trained on BookCorpus and English Wikipedia. With 109M parameters and WordPiece tokenization (30K vocab size), it captures rich contextual...
  • 14
    opt-125m

    Compact GPT-style language model for open text generation and research

    opt-125m is the smallest model in Meta AI’s OPT (Open Pre-trained Transformer) family—an open-source suite of decoder-only language models ranging from 125M to 175B parameters. It’s trained using causal language modeling (CLM), following similar architecture and objectives to GPT-3. The model was trained on 180B tokens from a diverse mix of datasets including BookCorpus, Common Crawl, Reddit, and more. OPT models aim to democratize access to large language models for responsible and...
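    A minimal causal-generation sketch; the facebook/opt-125m checkpoint id, prompt, and sampling settings are placeholders for illustration:

        # Hedged sketch: sample a continuation from the smallest OPT model.
        import torch
        from transformers import AutoModelForCausalLM, AutoTokenizer

        tokenizer = AutoTokenizer.from_pretrained("facebook/opt-125m")
        model = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")

        inputs = tokenizer("Open models let researchers", return_tensors="pt")
        out = model.generate(**inputs, max_new_tokens=20, do_sample=True, top_p=0.9)
        print(tokenizer.decode(out[0], skip_special_tokens=True))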
  • 15
    roberta-large

    Large MLM-based English model optimized from BERT architecture

    RoBERTa-large is a robustly optimized transformer model for English, trained by Facebook AI using a masked language modeling (MLM) objective. Unlike BERT, RoBERTa was trained on 160GB of data from BookCorpus, English Wikipedia, CC-News, OpenWebText, and Stories, with dynamic masking applied during training. It uses a byte-level BPE tokenizer and was trained with a sequence length of 512 and a batch size of 8K across 1024 V100 GPUs. RoBERTa improves performance across multiple NLP tasks by...
  • 16
    clip-vit-base-patch32

    Zero-shot image-text classification with ViT-B/32 encoder.

    clip-vit-base-patch32 is a zero-shot image classification model from OpenAI based on the CLIP (Contrastive Language–Image Pretraining) framework. It uses a Vision Transformer with base size and 32x32 patches (ViT-B/32) as the image encoder and a masked self-attention transformer as the text encoder. These components are jointly trained using contrastive loss to align images and text in a shared embedding space. The model excels in generalizing across tasks without additional fine-tuning by...
  • 17
    bert-base-uncased

    BERT-base-uncased is a foundational English model for NLP tasks

    BERT-base-uncased is a 110-million-parameter English language model developed by Google, pretrained using masked language modeling and next sentence prediction on BookCorpus and English Wikipedia. It is case-insensitive and tokenizes text using WordPiece, enabling it to learn contextual relationships between words in a sentence bidirectionally. The model excels at feature extraction for downstream NLP tasks like sentence classification, named entity recognition, and question answering when...
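    A minimal feature-extraction sketch; the input sentence is a placeholder, and the [CLS] vector is one common (assumed) choice of sentence feature:

        # Hedged sketch: use bert-base-uncased as a frozen feature extractor.
        import torch
        from transformers import AutoModel, AutoTokenizer

        tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
        model = AutoModel.from_pretrained("bert-base-uncased")

        inputs = tokenizer("BERT embeddings feed downstream classifiers.",
                           return_tensors="pt")
        with torch.no_grad():
            cls_vec = model(**inputs).last_hidden_state[:, 0]  # [CLS] token, (1, 768)
        print(cls_vec.shape)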
  • 18
    context_menu

    A Python library to create and deploy cross-platform native context menus

    context_menu was created due to the lack of an intuitive, easy-to-use cross-platform context menu library. It allows you to create your own context menu entries and control their behavior seamlessly in native Python code. The library is fully documented and used by over 80,000 developers worldwide.
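    A hedged sketch of the registration flow based on the library's documented pattern; the menu name, callback, and activation type are illustrative assumptions, so verify names against the project's docs:

        # Assumed context_menu API; check the library documentation before use.
        from context_menu import menus

        def greet(filenames, params):
            # callback receives the selected file paths
            print(filenames)

        cm = menus.ContextMenu("My Tools", type="FILES")  # assumed activation type
        cm.add_items([menus.ContextCommand("Greet", python=greet)])
        cm.compile()  # registers the entries with the OS shell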
  • 19
    Monk Computer Vision

    A low-code unified framework for computer vision and deep learning

    Monk is an open source, low-code programming environment that reduces the cognitive load faced by entry-level programmers while catering to the needs of expert deep learning engineers. The set comprises three open source libraries. Monk Classification (https://monkai.org) is a unified wrapper over major deep learning frameworks, with a core focus at the intersection of computer vision and deep learning algorithms. Monk Object Detection...