Showing 45 open source projects for "python programming language"

  • 1
    LaMDA-pytorch

    Open-source pre-training implementation of Google's LaMDA in PyTorch

    Open-source pre-training implementation of Google's LaMDA research paper in PyTorch. The totally not sentient AI. This repository covers the 2B parameter implementation of the pre-training architecture, as that is likely what most can afford to train. Google's 2022 blog post describes LaMDA in detail, and an earlier 2021 post covers the original model.
    Downloads: 0 This Week
    See Project
  • 2
    GPT Neo

    An implementation of model parallel GPT-2 and GPT-3-style models

    An implementation of model & data parallel GPT3-like models using the mesh-tensorflow library. If you're just here to play with our pre-trained models, we strongly recommend you try out the HuggingFace Transformer integration. Training and inference are officially supported on TPU and should work on GPU as well. This repository will be (mostly) archived as we move focus to our GPU-specific repo, GPT-NeoX. NB, while neo can technically run a training step at 200B+ parameters, it is very...
    Downloads: 8 This Week
    See Project
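
    A minimal sketch of the HuggingFace Transformers integration recommended above, assuming the transformers package is installed (the 1.3B checkpoint size and the prompt are illustrative choices):

      from transformers import pipeline

      # Load one of EleutherAI's published GPT-Neo checkpoints.
      generator = pipeline("text-generation", model="EleutherAI/gpt-neo-1.3B")

      # Generate a continuation of the prompt.
      result = generator("The Python programming language is", max_new_tokens=40)
      print(result[0]["generated_text"])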
  • 3
    starcoder

    Code generation model trained on 80+ languages with FIM support

    StarCoder is a 15.5B parameter language model developed by BigCode for code generation tasks across more than 80 programming languages. It is trained on 1 trillion tokens from the permissively licensed dataset The Stack v1.2, using the Fill-in-the-Middle (FIM) objective and Multi-Query Attention to enhance performance. With an extended context window of 8192 tokens and pretraining in bfloat16, StarCoder can generate, complete, or refactor code in various languages, with English as the primary...
    Downloads: 0 This Week
    See Project
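
    A minimal sketch of the Fill-in-the-Middle prompting described above, assuming the transformers package and access to the gated bigcode/starcoder checkpoint (the FIM special tokens follow the published model card; the snippet to complete is illustrative):

      from transformers import AutoModelForCausalLM, AutoTokenizer

      checkpoint = "bigcode/starcoder"  # gated: accept the license on Hugging Face first
      tokenizer = AutoTokenizer.from_pretrained(checkpoint)
      model = AutoModelForCausalLM.from_pretrained(checkpoint, device_map="auto")

      # FIM prompt: the model generates the code that belongs between prefix and suffix.
      prompt = ("<fim_prefix>def fib(n):\n<fim_suffix>\n"
                "    return fib(n - 1) + fib(n - 2)<fim_middle>")
      inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
      outputs = model.generate(**inputs, max_new_tokens=32)
      print(tokenizer.decode(outputs[0]))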
  • 4
    phi-2

    Small, high-performing language model for QA, chat, and code tasks

    Phi-2 is a 2.7 billion parameter Transformer model developed by Microsoft, designed for natural language processing and code generation tasks. It was trained on a filtered dataset of high-quality web content and synthetic NLP texts created by GPT-3.5, totaling 1.4 trillion tokens. Phi-2 excels in benchmarks for common sense, language understanding, and logical reasoning, outperforming most models under 13B parameters despite not being instruction-tuned or aligned via RLHF. It performs best...
    Downloads: 0 This Week
    See Project
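
    A minimal usage sketch via Hugging Face Transformers; the "Instruct:/Output:" QA prompt format follows Microsoft's model card, and the question is illustrative:

      from transformers import AutoModelForCausalLM, AutoTokenizer

      tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-2")
      # Older transformers releases may additionally need trust_remote_code=True.
      model = AutoModelForCausalLM.from_pretrained("microsoft/phi-2", torch_dtype="auto")

      prompt = "Instruct: Explain what a Python generator is.\nOutput:"
      inputs = tokenizer(prompt, return_tensors="pt")
      outputs = model.generate(**inputs, max_new_tokens=80)
      print(tokenizer.decode(outputs[0], skip_special_tokens=True))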
  • 5
    GPT-2

    GPT-2 is a 124M parameter English language model for text generation

    GPT-2 is a pretrained transformer-based language model developed by OpenAI for generating natural language text. Trained on 40GB of internet data from outbound Reddit links (excluding Wikipedia), it uses causal language modeling to predict the next token in a sequence. The model was trained without human labels and learns representations of English that support text generation, feature extraction, and fine-tuning. GPT-2 uses a byte-level BPE tokenizer with a vocabulary of 50,257 and handles...
    Downloads: 0 This Week
    See Project
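
    A minimal sketch showing the byte-level BPE vocabulary and basic generation, assuming the transformers package (the prompt is illustrative):

      from transformers import GPT2LMHeadModel, GPT2Tokenizer

      tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
      model = GPT2LMHeadModel.from_pretrained("gpt2")

      print(len(tokenizer))  # 50257, the vocabulary size noted above

      inputs = tokenizer("Hello, I'm a language model,", return_tensors="pt")
      outputs = model.generate(**inputs, max_new_tokens=30, do_sample=True,
                               pad_token_id=tokenizer.eos_token_id)
      print(tokenizer.decode(outputs[0], skip_special_tokens=True))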
  • 6
    ERNIE-4.5-300B-A47B-FP8-Paddle

    ERNIE 4.5 MoE model in FP8 for efficient high-performance inference

    ERNIE-4.5-300B-A47B-FP8-Paddle is a quantized version of Baidu’s MoE large language model, post-trained for text generation tasks and optimized for FP8 precision. This variant retains the original’s 300 billion total parameters with 47 billion active per token, enabling powerful language understanding while dramatically improving inference efficiency. Built using PaddlePaddle, it supports multi-GPU distributed deployment and leverages advanced routing strategies and expert parallelism...
    Downloads: 0 This Week
    See Project
  • 7
    stable-diffusion-v1-4

    Text-to-image diffusion model for high-quality image generation

    stable-diffusion-v1-4 is a high-performance text-to-image latent diffusion model developed by CompVis. It generates photo-realistic images from natural language prompts using a pretrained CLIP ViT-L/14 text encoder and a UNet-based denoising architecture. This version builds on v1-2, fine-tuned over 225,000 steps at 512×512 resolution on the “laion-aesthetics v2 5+” dataset, with 10% text-conditioning dropout for improved classifier-free guidance. It is optimized for use with Hugging Face’s...
    Downloads: 0 This Week
    See Project
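
    A minimal text-to-image sketch with Hugging Face's diffusers library, assuming a CUDA GPU (drop the .to("cuda") and the float16 cast to run on CPU; the prompt is illustrative):

      import torch
      from diffusers import StableDiffusionPipeline

      pipe = StableDiffusionPipeline.from_pretrained(
          "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
      )
      pipe = pipe.to("cuda")

      # One denoising run turns the text prompt into a 512x512 image.
      image = pipe("a photograph of an astronaut riding a horse").images[0]
      image.save("astronaut.png")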
  • 8
    Llama-2-7b-chat-hf

    Dialogue-optimized 7B language model for safe and helpful chatting

    Llama-2-7b-chat-hf is a fine-tuned large language model developed by Meta, designed specifically for dialogue use cases. With 7 billion parameters and built on an optimized transformer architecture, it uses supervised fine-tuning and reinforcement learning with human feedback (RLHF) to enhance helpfulness, coherence, and safety. It outperforms most open-source chat models and rivals proprietary systems like ChatGPT in human evaluations. Trained on 2 trillion tokens of public text and over 1...
    Downloads: 0 This Week
    See Project
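
    A minimal dialogue sketch via Hugging Face Transformers, assuming access to the gated checkpoint (the tokenizer's chat template produces Llama 2's [INST] ... [/INST] wrapping; the question is illustrative):

      import torch
      from transformers import AutoModelForCausalLM, AutoTokenizer

      model_id = "meta-llama/Llama-2-7b-chat-hf"  # gated: accept Meta's license first
      tokenizer = AutoTokenizer.from_pretrained(model_id)
      model = AutoModelForCausalLM.from_pretrained(
          model_id, torch_dtype=torch.float16, device_map="auto"
      )

      messages = [{"role": "user", "content": "Give me one tip for readable Python."}]
      inputs = tokenizer.apply_chat_template(messages, return_tensors="pt").to(model.device)
      outputs = model.generate(inputs, max_new_tokens=120)
      print(tokenizer.decode(outputs[0], skip_special_tokens=True))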
  • 9
    Llama-2-7b

    7B-parameter foundational LLM by Meta for text generation tasks

    Llama-2-7B is a foundational large language model developed by Meta as part of the Llama 2 family, designed for general-purpose text generation in English. It has 7 billion parameters and uses an optimized transformer-based, autoregressive architecture. Trained on 2 trillion tokens of publicly available data, it serves as the base for fine-tuned models like Llama-2-Chat. The model is pretrained only, meaning it is not optimized for dialogue but can be adapted for various natural language...
    Downloads: 0 This Week
    See Project
  • 10
    chatglm-6b

    Bilingual 6.2B parameter chatbot optimized for Chinese and English

    ChatGLM-6B is a 6.2 billion parameter bilingual language model developed by THUDM, based on the General Language Model (GLM) framework. It is optimized for natural and fluent dialogue in both Chinese and English, supporting applications in conversational AI, question answering, and assistance. Trained on approximately 1 trillion tokens, the model benefits from supervised fine-tuning, feedback self-training, and reinforcement learning with human feedback to align its outputs with human...
    Downloads: 0 This Week
    See Project
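
    A minimal bilingual chat sketch following the project's published usage, assuming a CUDA GPU with roughly 13 GB of memory for the half-precision weights:

      from transformers import AutoModel, AutoTokenizer

      # ChatGLM ships custom modeling code, hence trust_remote_code=True.
      tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True)
      model = AutoModel.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True).half().cuda()
      model = model.eval()

      # The chat() helper keeps the running dialogue history.
      response, history = model.chat(tokenizer, "你好", history=[])
      print(response)
      response, history = model.chat(tokenizer, "用一句话介绍 Python。", history=history)
      print(response)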
  • 11
    gpt-oss-20b

    OpenAI’s compact 20B open model for fast, agentic, and local use

    GPT-OSS-20B is OpenAI’s smaller, open-weight language model optimized for low-latency, agentic tasks, and local deployment. With 21B total parameters and 3.6B active parameters (MoE), it fits within 16GB of memory thanks to native MXFP4 quantization. Designed for high-performance reasoning, it supports Harmony response format, function calling, web browsing, and code execution. Like its larger sibling (gpt-oss-120b), it offers adjustable reasoning depth and full chain-of-thought visibility...
    Downloads: 0 This Week
    See Project
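
    A minimal chat sketch via the Transformers pipeline, along the lines of the published model card (device_map="auto" assumes enough memory for the 16 GB MXFP4 weights; the prompt is illustrative):

      from transformers import pipeline

      pipe = pipeline("text-generation", model="openai/gpt-oss-20b",
                      torch_dtype="auto", device_map="auto")

      messages = [{"role": "user", "content": "Write a haiku about Python."}]
      outputs = pipe(messages, max_new_tokens=256)
      # The pipeline returns the full chat; the last message is the model's reply.
      print(outputs[0]["generated_text"][-1])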
  • 12
    Nanonets-OCR-s

    State-of-the-art image-to-markdown OCR model

    Nanonets-OCR-s is an advanced image-to-markdown OCR model that transforms documents into structured and semantically rich markdown. It goes beyond basic text extraction by intelligently recognizing content types and applying meaningful tags, making the output ideal for Large Language Models (LLMs) and automated workflows. The model expertly converts mathematical equations into LaTeX syntax, distinguishing between inline and display modes for accuracy. It also generates descriptive <img> tags...
    Downloads: 0 This Week
    See Project
  • 13
    whisper-large-v3

    High-accuracy multilingual speech recognition and translation model

    ... input and better support for Cantonese, achieving up to 20% error reduction over Whisper-large-v2. It handles zero-shot transcription and translation, performs language detection automatically, and supports features like word-level timestamps and long-form audio processing. The model integrates well with Hugging Face Transformers and supports optimizations such as batching, SDPA, and Flash Attention 2.
    Downloads: 0 This Week
    See Project
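
    A minimal transcription sketch with the Hugging Face Transformers pipeline mentioned above (speech.mp3 is a placeholder for any local audio file):

      from transformers import pipeline

      asr = pipeline("automatic-speech-recognition", model="openai/whisper-large-v3")

      # Transcribe a file; return_timestamps adds the word-level stamps noted above.
      result = asr("speech.mp3", return_timestamps="word")
      print(result["text"])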
  • 14
    Llama-3.1-8B-Instruct

    Multilingual 8B-parameter chat-optimized LLM fine-tuned by Meta

    Llama-3.1-8B-Instruct is a multilingual, instruction-tuned language model developed by Meta, designed for high-quality dialogue generation across eight languages: English, Spanish, French, German, Italian, Portuguese, Hindi, and Thai. It uses a transformer-based, autoregressive architecture with Grouped-Query Attention and supports a 128k token context window. The model was fine-tuned using a combination of supervised fine-tuning (SFT), reinforcement learning with human feedback (RLHF...
    Downloads: 0 This Week
    See Project
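
    A minimal multilingual chat sketch via the Transformers pipeline, assuming access to the gated checkpoint (the Spanish prompt just illustrates one of the eight supported languages):

      from transformers import pipeline

      pipe = pipeline("text-generation", model="meta-llama/Llama-3.1-8B-Instruct",
                      torch_dtype="auto", device_map="auto")

      messages = [
          {"role": "system", "content": "Eres un asistente útil y conciso."},
          {"role": "user", "content": "Resume en una frase qué es Python."},
      ]
      print(pipe(messages, max_new_tokens=60)[0]["generated_text"][-1]["content"])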
  • 15
    Meta-Llama-3-8B-Instruct

    Instruction-tuned 8B LLM by Meta for helpful, safe English dialogue

    Meta-Llama-3-8B-Instruct is an instruction-tuned large language model from Meta’s Llama 3 family, optimized for safe and helpful English dialogue. It uses an autoregressive transformer architecture with Grouped-Query Attention (GQA) and supports an 8k token context length. Fine-tuned using supervised learning and reinforcement learning with human feedback (RLHF), the model achieves strong results on benchmarks like MMLU, GSM8K, and HumanEval. Trained on over 15 trillion tokens of publicly...
    Downloads: 0 This Week
    See Project
  • 16
    Mistral-7B-Instruct-v0.2

    Instruction-tuned 7B model for chat and task-oriented text generation

    Mistral-7B-Instruct-v0.2 is a fine-tuned version of the Mistral-7B-v0.2 language model, designed specifically for following instructions in a conversational format. It supports a 32k token context window, enabling more detailed and longer interactions compared to its predecessor. The model is trained to respond to user prompts formatted with [INST] and [/INST] tags, and it performs well in instruction-following tasks like Q&A, summarization, and explanations. It can be used via the official...
    Downloads: 0 This Week
    See Project
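
    A minimal sketch of the [INST] prompt format described above, via Hugging Face Transformers (the tokenizer's chat template emits the tags, so they need not be typed by hand; the request is illustrative):

      from transformers import AutoModelForCausalLM, AutoTokenizer

      model_id = "mistralai/Mistral-7B-Instruct-v0.2"
      tokenizer = AutoTokenizer.from_pretrained(model_id)
      model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

      # Renders to: <s>[INST] Summarize PEP 8 in two sentences. [/INST]
      messages = [{"role": "user", "content": "Summarize PEP 8 in two sentences."}]
      inputs = tokenizer.apply_chat_template(messages, return_tensors="pt").to(model.device)
      outputs = model.generate(inputs, max_new_tokens=120)
      print(tokenizer.decode(outputs[0], skip_special_tokens=True))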
  • 17
    whisper-large-v3-turbo

    Whisper-large-v3-turbo delivers fast, multilingual speech recognition

    Whisper-large-v3-turbo is a high-performance automatic speech recognition (ASR) and translation model developed by OpenAI, based on a pruned version of Whisper large-v3. It reduces decoding layers from 32 to 4, offering significantly faster inference with only minor degradation in accuracy. Trained on over 5 million hours of multilingual data, it handles speech transcription, translation, and language identification across 99 languages. It supports advanced decoding strategies like beam search...
    Downloads: 0 This Week
    See Project
  • 18
    Llama-3.3-70B-Instruct

    Llama-3.3-70B-Instruct is a multilingual AI optimized for helpful chat

    Llama-3.3-70B-Instruct is Meta's large, instruction-tuned language model designed for safe, multilingual, assistant-style conversations and text generation. With 70 billion parameters, it supports English, Spanish, French, German, Italian, Portuguese, Hindi, and Thai, offering state-of-the-art performance across a wide range of benchmarks including MMLU, HumanEval, and GPQA. The model is built on a transformer architecture with grouped-query attention, trained on over 15 trillion tokens...
    Downloads: 0 This Week
    See Project
  • 19
    Llama-2-70b-chat-hf

    Llama-2-70B-Chat is Meta’s largest fine-tuned open-source chat LLM

    Llama-2-70B-Chat is Meta’s largest fine-tuned large language model, optimized for dialogue and aligned using supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF). It features 70 billion parameters and uses a transformer architecture with grouped-query attention (GQA) to improve inference scalability. Trained on 2 trillion tokens from publicly available sources and over a million human-annotated examples, the model outperforms most open-source chat models and rivals...
    Downloads: 0 This Week
    See Project
  • 20
    Llama-2-7b-hf

    Llama-2-7B is a 7B-parameter transformer model for text generation

    Llama-2-7B is a foundational large language model developed by Meta as part of the Llama 2 family, designed for general-purpose text generation tasks. It is a 7 billion parameter auto-regressive transformer trained on 2 trillion tokens from publicly available sources, using an optimized architecture without Grouped-Query Attention (GQA). This model is the pretrained version, intended for research and commercial use in English, and can be adapted for downstream applications such as summarization...
    Downloads: 0 This Week
    See Project