Showing 214 open source projects for "tensorflow"

  • 1
    roberta-base

    Robust BERT-based model for English with improved MLM training

    roberta-base is a robustly optimized variant of BERT, pretrained on a significantly larger corpus of English text using dynamic masked language modeling. Developed by Facebook AI, RoBERTa improves on BERT by removing the Next Sentence Prediction objective, using longer training, larger batches, and more data, including BookCorpus, English Wikipedia, CC-News, OpenWebText, and Stories. It captures contextual representations of language by masking 15% of input tokens and predicting them....
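
    A minimal fill-mask sketch with the Hugging Face transformers pipeline; the hub ID "roberta-base" and RoBERTa's <mask> token format are assumptions based on the description above.

```python
from transformers import pipeline

# RoBERTa uses <mask> as its mask token (BERT uses [MASK]).
unmasker = pipeline("fill-mask", model="roberta-base")

# Print the top predicted fillers with their scores.
for candidate in unmasker("The goal of life is <mask>."):
    print(candidate["token_str"], round(candidate["score"], 3))
```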
  • 2
    t5-base

    Flexible text-to-text transformer model for multilingual NLP tasks

    t5-base is a pre-trained transformer model from Google’s T5 (Text-To-Text Transfer Transformer) family that reframes all NLP tasks into a unified text-to-text format. With 220 million parameters, it can handle a wide range of tasks, including translation, summarization, question answering, and classification. Unlike traditional models like BERT, which output class labels or spans, T5 always generates text outputs. It was trained on the C4 dataset, along with a variety of supervised NLP...
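
    A short text-to-text sketch, assuming the transformers library and the "t5-base" checkpoint; the translation prefix is one of the task prefixes T5 was trained with.

```python
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base")

# T5 frames every task as text-to-text; the prefix selects the behaviour.
inputs = tokenizer("translate English to German: The house is wonderful.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```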
  • 3
    bert-base-chinese

    BERT-based Chinese language model for fill-mask and NLP tasks

    bert-base-chinese is a pre-trained language model developed by Google and hosted by Hugging Face, based on the original BERT architecture but tailored for Chinese. It supports fill-mask tasks and is pretrained on Chinese text using WordPiece tokenization and random masking strategies, following the standard BERT training procedure. With 12 hidden layers and a vocabulary size of 21,128 tokens, it has approximately 103 million parameters. The model is effective for a range of downstream NLP...
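
    A hedged fill-mask example, assuming the "bert-base-chinese" hub ID and BERT's standard [MASK] token.

```python
from transformers import pipeline

# bert-base-chinese tokenizes largely character by character and uses [MASK].
fill = pipeline("fill-mask", model="bert-base-chinese")

for candidate in fill("巴黎是[MASK]国的首都。"):
    print(candidate["token_str"], round(candidate["score"], 3))
```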
  • 4
    layoutlm-base-uncased

    Multimodal Transformer for document image understanding and layout

    layoutlm-base-uncased is a multimodal transformer model developed by Microsoft for document image understanding tasks. It incorporates both text and layout (position) features to effectively process structured documents like forms, invoices, and receipts. This base version has 113 million parameters and is pre-trained on 11 million documents from the IIT-CDIP dataset. LayoutLM enables better performance in tasks where the spatial arrangement of text plays a crucial role. The model uses a...
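
    A rough sketch of feeding text plus layout to the model, assuming the "microsoft/layoutlm-base-uncased" checkpoint and the LayoutLM classes in transformers; the words and bounding boxes below are made-up values on the 0-1000 coordinate grid LayoutLM expects.

```python
import torch
from transformers import LayoutLMTokenizer, LayoutLMModel

tokenizer = LayoutLMTokenizer.from_pretrained("microsoft/layoutlm-base-uncased")
model = LayoutLMModel.from_pretrained("microsoft/layoutlm-base-uncased")

# Words extracted from a document image (e.g. by OCR) and their boxes, normalised to 0-1000.
words = ["Invoice", "Total:", "$120.00"]
word_boxes = [[60, 50, 200, 80], [60, 700, 160, 730], [180, 700, 300, 730]]

tokens, boxes = [], []
for word, box in zip(words, word_boxes):
    word_tokens = tokenizer.tokenize(word)
    tokens.extend(word_tokens)
    boxes.extend([box] * len(word_tokens))  # repeat the word's box for each sub-token

# Add [CLS]/[SEP] with their conventional boxes.
input_ids = tokenizer.convert_tokens_to_ids([tokenizer.cls_token] + tokens + [tokenizer.sep_token])
boxes = [[0, 0, 0, 0]] + boxes + [[1000, 1000, 1000, 1000]]

outputs = model(input_ids=torch.tensor([input_ids]), bbox=torch.tensor([boxes]))
print(outputs.last_hidden_state.shape)  # (1, sequence_length, 768)
```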
  • 5
    Bio_ClinicalBERT

    ClinicalBERT model trained on MIMIC notes for clinical NLP tasks

    Bio_ClinicalBERT is a domain-specific language model tailored for clinical natural language processing (NLP), extending BioBERT with additional training on clinical notes. It was initialized from BioBERT-Base v1.0 and further pre-trained on all clinical notes from the MIMIC-III database (~880M words), which includes ICU patient records. The training focused on improving performance in tasks like named entity recognition and natural language inference within the healthcare domain. Notes were...
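
    A minimal feature-extraction sketch, assuming the model is published under the "emilyalsentzer/Bio_ClinicalBERT" hub ID; the example note and the mean pooling are illustrative choices, not part of the model card.

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("emilyalsentzer/Bio_ClinicalBERT")
model = AutoModel.from_pretrained("emilyalsentzer/Bio_ClinicalBERT")

note = "Patient admitted with acute dyspnea; started on IV furosemide."
inputs = tokenizer(note, return_tensors="pt")

with torch.no_grad():
    hidden = model(**inputs).last_hidden_state  # (1, seq_len, 768)

# Mean-pool token embeddings into one note-level vector for downstream clinical tasks.
note_embedding = hidden.mean(dim=1)
print(note_embedding.shape)
```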
  • 6
    t5-small

    T5-Small: Lightweight text-to-text transformer for NLP tasks

    t5-small is the lightweight, roughly 60-million-parameter member of Google's T5 text-to-text transformer family. It was pretrained on the C4 dataset using both unsupervised denoising and supervised learning on tasks like sentiment analysis, NLI, and QA. Despite its size, it performs competitively across 24 NLP benchmarks, making it a strong candidate for prototyping and fine-tuning. T5-Small is compatible with major deep learning frameworks including PyTorch, TensorFlow, JAX, and ONNX. The model is open-source under the Apache 2.0 license and has wide support across Hugging Face's ecosystem.
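
    An illustrative summarization call via the transformers pipeline, assuming the "t5-small" checkpoint; the pipeline adds T5's "summarize:" task prefix automatically.

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="t5-small")

article = (
    "The tower is 324 metres tall, about the same height as an 81-storey building, "
    "and was the tallest man-made structure in the world when it was completed."
)
# Length limits here are arbitrary illustrative values.
print(summarizer(article, max_length=30, min_length=5)[0]["summary_text"])
```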
  • 7
    paraphrase-MiniLM-L6-v2

    Lightweight sentence embedding model for semantic search

    paraphrase-MiniLM-L6-v2 is a sentence-transformers model that encodes sentences and paragraphs into 384-dimensional dense vectors. It is specifically optimized for semantic similarity tasks such as paraphrase mining, clustering, and semantic search. The model is built on a lightweight MiniLM architecture, making it both fast and efficient for large-scale inference. It supports integration via both the sentence-transformers and transformers libraries, with built-in pooling strategies like...
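
    A short semantic-similarity sketch with the sentence-transformers library, assuming the "sentence-transformers/paraphrase-MiniLM-L6-v2" hub ID.

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("sentence-transformers/paraphrase-MiniLM-L6-v2")

sentences = [
    "A man is eating food.",
    "Someone is having a meal.",
    "The weather is cold today.",
]
embeddings = model.encode(sentences)           # shape (3, 384)
scores = util.cos_sim(embeddings, embeddings)  # pairwise cosine similarity
print(scores)
```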
  • 8
    xlm-roberta-large

    Large multilingual RoBERTa model trained on 100 languages

    xlm-roberta-large is a multilingual transformer model pre-trained by Facebook AI on 2.5TB of filtered CommonCrawl data covering 100 languages. It is a large-sized version of XLM-RoBERTa, built on the RoBERTa architecture with enhanced multilingual capabilities. The model was trained using the masked language modeling (MLM) objective, where 15% of tokens are masked and predicted, enabling bidirectional context understanding. Unlike autoregressive models, it processes input holistically,...
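
    A hedged multilingual fill-mask example, assuming the "xlm-roberta-large" checkpoint and its <mask> token; because XLM-R shares one vocabulary across its 100 languages, the same pipeline handles any of them.

```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="xlm-roberta-large")

# The same model completes masked tokens in different languages.
print(unmasker("Bonjour, je suis un modèle <mask>.")[0]["token_str"])
print(unmasker("Hello, I am a <mask> model.")[0]["token_str"])
```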
  • 9
    bert-base-cased

    English BERT model using cased text for sentence-level tasks

    bert-base-cased is a foundational transformer model pretrained on English using masked language modeling (MLM) and next sentence prediction (NSP). It is case-sensitive, treating "English" and "english" as distinct, making it suitable for tasks where casing matters. The model uses a bidirectional attention mechanism to deeply understand sentence structure, trained on BookCorpus and English Wikipedia. With 109M parameters and WordPiece tokenization (30K vocab size), it captures rich contextual...
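
    A small sketch of why casing matters: the cased tokenizer does not lower-case its input, so capitalised and lower-case forms can map to different sub-word sequences (the exact splits depend on the vocabulary).

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")

# Unlike bert-base-uncased, the input is not lower-cased before tokenization,
# so these two calls may produce different token sequences.
print(tokenizer.tokenize("English is spoken here."))
print(tokenizer.tokenize("english is spoken here."))
```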
  • 10
    opt-125m

    Compact GPT-style language model for open text generation and research

    opt-125m is the smallest model in Meta AI’s OPT (Open Pre-trained Transformer) family—an open-source suite of decoder-only language models ranging from 125M to 175B parameters. It’s trained using causal language modeling (CLM), following similar architecture and objectives to GPT-3. The model was trained on 180B tokens from a diverse mix of datasets including BookCorpus, Common Crawl, Reddit, and more. OPT models aim to democratize access to large language models for responsible and...
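
    A minimal text-generation sketch, assuming the "facebook/opt-125m" hub ID; the prompt and sampling parameters are arbitrary illustrative values.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("facebook/opt-125m")
model = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")

inputs = tokenizer("Open-source language models are useful because", return_tensors="pt")

# Sample a short continuation from the causal language model.
outputs = model.generate(**inputs, max_new_tokens=30, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```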
  • 11
    roberta-large

    Large MLM-based English model optimized from BERT architecture

    RoBERTa-large is a robustly optimized transformer model for English, trained by Facebook AI using a masked language modeling (MLM) objective. Unlike BERT, RoBERTa was trained on 160GB of data from BookCorpus, English Wikipedia, CC-News, OpenWebText, and Stories, with dynamic masking applied during training. It uses a byte-level BPE tokenizer and was trained with a sequence length of 512 and a batch size of 8K across 1024 V100 GPUs. RoBERTa improves performance across multiple NLP tasks by...
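
    A short feature-extraction sketch with transformers, assuming the "roberta-large" checkpoint; taking the <s> position as a sentence vector is a common convention rather than a requirement.

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("roberta-large")
model = AutoModel.from_pretrained("roberta-large")

inputs = tokenizer("RoBERTa produces contextual embeddings.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# First position corresponds to RoBERTa's <s> token; hidden size is 1024 for the large model.
sentence_vector = outputs.last_hidden_state[:, 0, :]
print(sentence_vector.shape)  # (1, 1024)
```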
  • 12
    clip-vit-base-patch32

    Zero-shot image-text classification with ViT-B/32 encoder.

    clip-vit-base-patch32 is a zero-shot image classification model from OpenAI based on the CLIP (Contrastive Language–Image Pretraining) framework. It uses a Vision Transformer with base size and 32x32 patches (ViT-B/32) as the image encoder and a masked self-attention transformer as the text encoder. These components are jointly trained using contrastive loss to align images and text in a shared embedding space. The model excels in generalizing across tasks without additional fine-tuning by...
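
    A hedged zero-shot classification sketch, assuming the "openai/clip-vit-base-patch32" hub ID; the image URL and candidate labels are placeholders to swap for your own.

```python
import requests
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Placeholder image: replace with any RGB image you want to classify.
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

labels = ["a photo of a cat", "a photo of a dog"]
inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)

outputs = model(**inputs)
probs = outputs.logits_per_image.softmax(dim=1)  # image-text similarity as probabilities
print(dict(zip(labels, probs[0].tolist())))
```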
  • 13
    bert-base-uncased

    BERT-base-uncased is a foundational English model for NLP tasks

    BERT-base-uncased is a 110-million-parameter English language model developed by Google, pretrained using masked language modeling and next sentence prediction on BookCorpus and English Wikipedia. It is case-insensitive and tokenizes text using WordPiece, enabling it to learn contextual relationships between words in a sentence bidirectionally. The model excels at feature extraction for downstream NLP tasks like sentence classification, named entity recognition, and question answering when...
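
    A minimal sketch of attaching a (still untrained) classification head for fine-tuning, assuming the "bert-base-uncased" checkpoint; the head's weights are randomly initialised and need fine-tuning on labelled data before the logits are meaningful.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
# Pretrained encoder plus a new, randomly initialised 2-way classification head.
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

inputs = tokenizer("This library makes fine-tuning straightforward.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.shape)  # (1, 2)
```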
  • 14
    Monk Computer Vision

    A low code unified framework for computer vision and deep learning

    Monk is an open source, low-code programming environment designed to reduce the cognitive load on entry-level programmers while still catering to the needs of expert deep learning engineers. The set comprises three libraries: Monk Classification (https://monkai.org), a unified wrapper over major deep learning frameworks with a core focus on the intersection of computer vision and deep learning algorithms; Monk Object Detection -...