Showing 60 open source projects for "compiler python linux"

  • 1. stable-diffusion-3.5-large: advanced MMDiT text-to-image model for high-quality visual generation

    Stable Diffusion 3.5 Large is a multimodal diffusion transformer (MMDiT) developed by Stability AI, designed for generating high-quality images from text prompts. It integrates three pretrained text encoders—OpenCLIP-ViT/G, CLIP-ViT/L, and T5-XXL—with QK-normalization for improved training stability and prompt understanding. This model excels in handling typography, detailed scenes, and creative compositions while maintaining resource efficiency. It supports inference via ComfyUI, Hugging...
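The QK-normalization mentioned above can be illustrated with a small sketch: queries and keys are projected onto the unit sphere before the dot product, which bounds attention logits and stabilizes training. This is a toy NumPy single-head version, not Stability AI's implementation; all names and shapes here are illustrative assumptions.

```python
import numpy as np

def l2_normalize(x, eps=1e-6):
    # Normalize the last dimension to unit length.
    return x / (np.linalg.norm(x, axis=-1, keepdims=True) + eps)

def qk_norm_attention(q, k, v, scale):
    # QK-normalization: with unit-norm queries and keys, each logit
    # is a cosine similarity, so logits are bounded in [-scale, scale].
    q, k = l2_normalize(q), l2_normalize(k)
    logits = (q @ k.T) * scale
    weights = np.exp(logits - logits.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax rows
    return weights @ v

rng = np.random.default_rng(0)
q, k, v = (rng.normal(size=(4, 8)) for _ in range(3))
out = qk_norm_attention(q, k, v, scale=10.0)
print(out.shape)  # (4, 8)
```

The bounded logits are what prevents the attention-entropy collapse that can destabilize large-model training at high learning rates.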
  • 2. chatglm-6b: bilingual 6.2B-parameter chatbot optimized for Chinese and English

    ChatGLM-6B is a 6.2 billion parameter bilingual language model developed by THUDM, based on the General Language Model (GLM) framework. It is optimized for natural and fluent dialogue in both Chinese and English, supporting applications in conversational AI, question answering, and assistance. Trained on approximately 1 trillion tokens, the model benefits from supervised fine-tuning, feedback self-training, and reinforcement learning with human feedback to align its outputs with human...
  • 3. Mistral-7B-Instruct-v0.2: instruction-tuned 7B model for chat and task-oriented text generation

    Mistral-7B-Instruct-v0.2 is a fine-tuned version of the Mistral-7B-v0.2 language model, designed specifically for following instructions in a conversational format. It supports a 32k token context window, enabling more detailed and longer interactions compared to its predecessor. The model is trained to respond to user prompts formatted with [INST] and [/INST] tags, and it performs well in instruction-following tasks like Q&A, summarization, and explanations. It can be used via the official...
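The [INST]/[/INST] tag format can be sketched as a small formatting helper. This is a hand-rolled approximation for illustration; in practice the tokenizer's own chat template (`tokenizer.apply_chat_template`) is the authoritative source, and the exact spacing here is an assumption.

```python
def format_mistral_prompt(messages):
    """Render alternating user/assistant turns into the [INST] tag
    format Mistral-7B-Instruct is trained on (illustrative sketch)."""
    out = "<s>"
    for i in range(0, len(messages), 2):
        out += f"[INST] {messages[i]} [/INST]"       # user turn
        if i + 1 < len(messages):
            out += f" {messages[i + 1]}</s>"         # assistant turn
    return out

prompt = format_mistral_prompt(["What is 2+2?", "4.", "And times 3?"])
print(prompt)
# <s>[INST] What is 2+2? [/INST] 4.</s>[INST] And times 3? [/INST]
```

Note that the final user turn is left open (no closing `</s>`), since the model generates the assistant reply from that point.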
  • 4. ⓍTTS-v2: multilingual voice-cloning TTS model with 6-second sample support

    ... rate. It's ideal for both inference and fine-tuning, with APIs and command-line tools available. The model powers Coqui Studio and the Coqui API, and can be run locally using Python or through Hugging Face Spaces. Licensed under the Coqui Public Model License, it balances open access with responsible use of generative voice technology.
  • 5. GPT-2: 124M-parameter English language model for text generation

    GPT-2 is a pretrained transformer-based language model developed by OpenAI for generating natural language text. Trained on 40GB of internet data from outbound Reddit links (excluding Wikipedia), it uses causal language modeling to predict the next token in a sequence. The model was trained without human labels and learns representations of English that support text generation, feature extraction, and fine-tuning. GPT-2 uses a byte-level BPE tokenizer with a vocabulary of 50,257 and handles...
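Causal language modeling, as described above, means the model only sees tokens to the left and predicts a distribution over the next token. A toy sketch makes the decoding loop concrete; here a hand-built bigram table stands in for GPT-2's transformer, and the vocabulary and probabilities are invented for illustration.

```python
# Toy stand-in for a causal LM: a next-token distribution per context.
BIGRAM = {
    "<s>": {"the": 0.6, "a": 0.4},
    "the": {"cat": 0.5, "dog": 0.5},
    "a":   {"cat": 0.7, "dog": 0.3},
    "cat": {"sat": 1.0},
    "dog": {"ran": 1.0},
    "sat": {"</s>": 1.0},
    "ran": {"</s>": 1.0},
}

def greedy_generate(start="<s>", max_len=10):
    # Autoregressive decoding: repeatedly take the most likely
    # next token until an end marker or the length limit.
    tokens = [start]
    while tokens[-1] in BIGRAM and len(tokens) < max_len:
        dist = BIGRAM[tokens[-1]]
        tokens.append(max(dist, key=dist.get))
    return tokens

print(greedy_generate())  # ['<s>', 'the', 'cat', 'sat', '</s>']
```

GPT-2 itself does the same loop over a 50,257-entry byte-level BPE vocabulary, with the transformer supplying the next-token distribution.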
  • 6. whisper-large-v3-turbo: fast multilingual speech recognition and translation

    Whisper-large-v3-turbo is a high-performance automatic speech recognition (ASR) and translation model developed by OpenAI, based on a pruned version of Whisper large-v3. It reduces decoding layers from 32 to 4, offering significantly faster inference with only minor degradation in accuracy. Trained on over 5 million hours of multilingual data, it handles speech transcription, translation, and language identification across 99 languages. It supports advanced decoding strategies like beam...
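The beam-search decoding mentioned above can be sketched in a few lines: instead of keeping one hypothesis, the decoder keeps the `beam_width` best partial transcripts by cumulative log-probability. The toy next-token table below is invented for illustration and stands in for Whisper's decoder.

```python
import math

# Toy next-token log-probabilities per context (illustrative only).
STEP = {
    "<s>":   {"hi": math.log(0.6), "yo": math.log(0.4)},
    "hi":    {"there": math.log(0.9), "all": math.log(0.1)},
    "yo":    {"there": math.log(1.0)},
    "there": {"</s>": math.log(1.0)},
    "all":   {"</s>": math.log(1.0)},
}

def beam_search(beam_width=2, max_len=6):
    # Each hypothesis is (cumulative log-prob, token list).
    beams = [(0.0, ["<s>"])]
    for _ in range(max_len):
        candidates = []
        for score, seq in beams:
            nxt = STEP.get(seq[-1])
            if nxt is None:                      # finished hypothesis
                candidates.append((score, seq))
                continue
            for tok, lp in nxt.items():
                candidates.append((score + lp, seq + [tok]))
        # Keep only the best beam_width hypotheses.
        beams = sorted(candidates, key=lambda c: c[0], reverse=True)[:beam_width]
    return beams[0][1]

print(beam_search())  # ['<s>', 'hi', 'there', '</s>']
```

Wider beams trade decoding speed for a better chance of finding the globally most probable transcript, which is exactly the latency/accuracy knob turbo models aim to keep cheap.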
  • 7. Llama-3.3-70B-Instruct: multilingual instruction-tuned model optimized for helpful chat

    Llama-3.3-70B-Instruct is Meta's large, instruction-tuned language model designed for safe, multilingual, assistant-style conversations and text generation. With 70 billion parameters, it supports English, Spanish, French, German, Italian, Portuguese, Hindi, and Thai, offering state-of-the-art performance across a wide range of benchmarks including MMLU, HumanEval, and GPQA. The model is built on a transformer architecture with grouped-query attention, trained on over 15 trillion tokens and...
  • 8. Llama-2-70b-chat-hf: Meta's largest fine-tuned open-source chat LLM

    Llama-2-70B-Chat is Meta’s largest fine-tuned large language model, optimized for dialogue and aligned using supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF). It features 70 billion parameters and uses a transformer architecture with grouped-query attention (GQA) to improve inference scalability. Trained on 2 trillion tokens from publicly available sources and over a million human-annotated examples, the model outperforms most open-source chat models and...
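The grouped-query attention (GQA) that both Llama chat models use can be sketched directly: many query heads share a smaller set of key/value heads, shrinking the KV cache and improving inference scalability. This is a minimal NumPy illustration, not Meta's implementation; head counts and shapes are assumptions.

```python
import numpy as np

def grouped_query_attention(q, k, v, n_kv_heads):
    """GQA sketch. Shapes: q is (n_q_heads, seq, d);
    k and v are (n_kv_heads, seq, d) with n_kv_heads < n_q_heads."""
    n_q_heads, seq, d = q.shape
    group = n_q_heads // n_kv_heads
    # Each group of query heads reuses one KV head.
    k = np.repeat(k, group, axis=0)
    v = np.repeat(v, group, axis=0)
    logits = q @ k.transpose(0, 2, 1) / np.sqrt(d)
    w = np.exp(logits - logits.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)  # softmax over keys
    return w @ v                        # (n_q_heads, seq, d)

rng = np.random.default_rng(1)
q = rng.normal(size=(8, 4, 16))   # 8 query heads
k = rng.normal(size=(2, 4, 16))   # only 2 KV heads to cache
v = rng.normal(size=(2, 4, 16))
out = grouped_query_attention(q, k, v, n_kv_heads=2)
print(out.shape)  # (8, 4, 16)
```

With 8 query heads but only 2 KV heads, the KV cache is 4x smaller than full multi-head attention at the same query resolution.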
  • 9. bge-m3: multilingual, multi-function text embedding model

    BGE-M3 is an advanced text embedding model developed by BAAI that excels in multi-functionality, multi-linguality, and multi-granularity. It supports dense retrieval, sparse retrieval (lexical weighting), and multi-vector retrieval (ColBERT-style), making it ideal for hybrid systems in retrieval-augmented generation (RAG). The model handles over 100 languages and supports long-text inputs up to 8192 tokens, offering flexibility across short queries and full documents. BGE-M3 was trained...
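The hybrid retrieval described above combines a dense cosine-similarity signal with a sparse lexical-weight signal. A minimal sketch of that fusion, with toy vectors and an assumed mixing weight `alpha` (the real model's weights and fusion choice are up to the pipeline):

```python
import math

def dense_score(a, b):
    # Cosine similarity between dense embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def sparse_score(q_weights, d_weights):
    # Lexical matching: sum of query weight * document weight
    # over shared tokens (the "sparse retrieval" signal).
    return sum(w * d_weights[t] for t, w in q_weights.items() if t in d_weights)

def hybrid_score(q_dense, d_dense, q_sparse, d_sparse, alpha=0.5):
    # Weighted fusion of dense and sparse signals; alpha is a
    # tunable assumption, not a value prescribed by BGE-M3.
    return alpha * dense_score(q_dense, d_dense) + (1 - alpha) * sparse_score(q_sparse, d_sparse)

s = hybrid_score([1.0, 0.0], [1.0, 0.0], {"rust": 0.8}, {"rust": 0.5, "go": 0.2})
print(round(s, 2))  # 0.7
```

In a RAG system the dense term captures paraphrase-level similarity while the sparse term rewards exact keyword overlap, which is why the combination tends to beat either signal alone.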
  • 10. Llama-2-7b-hf: 7B-parameter transformer model for text generation

    Llama-2-7B is a foundational large language model developed by Meta as part of the Llama 2 family, designed for general-purpose text generation tasks. It is a 7 billion parameter auto-regressive transformer trained on 2 trillion tokens from publicly available sources, using an optimized architecture without Grouped-Query Attention (GQA). This model is the pretrained version, intended for research and commercial use in English, and can be adapted for downstream applications such as...