366 programs for "compiler python linux" with 2 filters applied:

  • 1
    grok-1

    Grok-1 is a 314B-parameter open-weight language model by xAI

    Grok-1 is a large-scale language model released by xAI, featuring 314 billion parameters and made available under the Apache 2.0 license. It is designed for text generation and was trained for advanced language understanding and reasoning capabilities. Grok-1 is currently distributed as open weights, with inference support requiring multi-GPU hardware due to its size. The model can be downloaded from Hugging Face and run using the accompanying Python code in the official GitHub repository...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 2
    mozg is a flexible, fast neural-network library. It lets you create, train, and use multi-layer perceptrons (MLPs), the most popular kind of artificial neural network (ANN). The work is based on a library originally written by Alexy Filin.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 3
    A method for representing abstract information as a hierarchical network of simple nodes. Each node can represent both a value and an attribute, and the network can accumulate, classify, and learn information.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 4
    Cellicone is a project to develop an artificial life organism with the components needed to make it comparable to biological life as we know it, ranging from proteins to cells to organs to limbs, and many steps in between.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 5
    ERNIE-4.5-300B-A47B-FP8-Paddle

    ERNIE 4.5 MoE model in FP8 for efficient high-performance inference

    .... It is especially well-suited for production environments requiring high throughput and lower memory use, while maintaining high reasoning and generation quality. The model can be used with FastDeploy and integrates cleanly with Python APIs for prompt-based generation workflows. It supports long context lengths (up to 131,072 tokens) and includes both Chinese and English prompt templates for web search applications.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 6
    starcoder

    Code generation model trained on 80+ languages with FIM support

    StarCoder is a 15.5B parameter language model developed by BigCode for code generation tasks across more than 80 programming languages. It is trained on 1 trillion tokens from the permissively licensed dataset The Stack v1.2, using the Fill-in-the-Middle (FIM) objective and Multi-Query Attention to enhance performance. With an extended context window of 8192 tokens and pretraining in bfloat16, StarCoder can generate, complete, or refactor code in various languages, with English as the...
    Downloads: 0 This Week
    Last Update:
    See Project
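As the entry notes, StarCoder is distributed for use with standard Hugging Face tooling. A minimal completion sketch might look like the following; the `bigcode/starcoder` repository id is an assumption, and the checkpoint is gated, so you must accept the model license on Hugging Face first:

```python
def complete_code(prompt: str, max_new_tokens: int = 64) -> str:
    """Generate a code completion with StarCoder via Hugging Face Transformers."""
    # Imports are inside the function because loading the 15.5B checkpoint
    # downloads many gigabytes of weights.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("bigcode/starcoder")
    model = AutoModelForCausalLM.from_pretrained("bigcode/starcoder", device_map="auto")
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

# complete_code("def fibonacci(n):")  # requires multi-GB download and GPU memory
```

Fill-in-the-Middle prompting uses the model's special FIM tokens instead of a plain prefix; see the model card for the exact token names.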
  • 7
    DeepSWE-Preview

    State-of-the-art RL-trained coding agent for complex SWE tasks

    DeepSWE-Preview is a 32.8B parameter open-source coding agent trained solely with reinforcement learning (RL) to perform complex software engineering (SWE) tasks. Built on top of Qwen3-32B, it achieves 59% accuracy on the SWE-Bench-Verified benchmark—currently the highest among open-weight models. The model navigates and edits large codebases using tools like a file editor, bash execution, and search, within the R2E-Gym environment. Its training emphasizes sparse reward signals, test-time...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 8
    mms-300m-1130-forced-aligner

    CTC-based forced aligner for audio-text in 158 languages

    ... to the TorchAudio forced alignment API. Users can integrate it easily through the Python package ctc-forced-aligner, and it supports GPU acceleration via PyTorch. The alignment pipeline includes audio processing, emission generation, tokenization, and span detection, making it suitable for speech analysis, transcription syncing, and dataset creation. This model is especially useful for researchers and developers working with low-resource languages or building multilingual speech systems.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 9
    yolo-world-mirror

    Mirror of Ultralytics YOLO-World model weights for object detection

    ... descriptions. These weights are compatible with Ultralytics’ tooling and documentation, making it easier for developers to deploy or fine-tune the model. The mirror allows users to work with YOLO-World models through a centralized platform without downloading from alternate sources. It enables flexible integration with Ultralytics’ Python API or CLI tools for real-time and high-performance object detection tasks.
    Downloads: 0 This Week
    Last Update:
    See Project
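The entry mentions integration through Ultralytics' Python API. A sketch of open-vocabulary detection with the `YOLOWorld` class follows; the weight file name `yolov8s-world.pt` is an assumption, and Ultralytics downloads it on first use:

```python
def detect(image_path: str, classes: list[str]):
    """Open-vocabulary object detection with YOLO-World via the Ultralytics API."""
    # Import inside the function: ultralytics fetches weights on first use.
    from ultralytics import YOLOWorld

    model = YOLOWorld("yolov8s-world.pt")  # weight file name is an assumption
    model.set_classes(classes)             # restrict detection to a custom vocabulary
    return model.predict(image_path)       # list of Results with boxes and labels

# detect("street.jpg", ["person", "bicycle"])
```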
  • 10
    segmentation-3.0

    Speaker segmentation model for 10s audio chunks with powerset labels

    segmentation-3.0 is a voice activity and speaker segmentation model from the pyannote.audio framework, designed to analyze 10-second mono audio sampled at 16kHz. It outputs a (num_frames, num_classes) matrix using a powerset encoding that includes non-speech, individual speakers, and overlapping speech for up to three speakers. Trained with pyannote.audio 3.0.0 on a rich blend of datasets—including AISHELL, DIHARD, VoxConverse, and more—it enables downstream tasks like voice activity...
    Downloads: 0 This Week
    Last Update:
    See Project
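A sketch of using this model through pyannote.audio's voice-activity-detection pipeline is below. The model is gated on Hugging Face, so an access token is required after accepting the terms; the hyperparameter names follow pyannote.audio 3.x conventions and should be checked against the model card:

```python
def detect_speech(audio_path: str, hf_token: str):
    """Voice activity detection built on the pyannote segmentation-3.0 model."""
    # Imports are inside the function: the model must be downloaded and is gated.
    from pyannote.audio import Model
    from pyannote.audio.pipelines import VoiceActivityDetection

    model = Model.from_pretrained("pyannote/segmentation-3.0", use_auth_token=hf_token)
    pipeline = VoiceActivityDetection(segmentation=model)
    # Post-processing hyperparameters (names per pyannote.audio 3.x docs).
    pipeline.instantiate({"min_duration_on": 0.0, "min_duration_off": 0.0})
    return pipeline(audio_path)  # pyannote Annotation of speech regions

# detect_speech("meeting.wav", hf_token="hf_...")
```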
  • 11
    MiniMax-M1

    Open-weight, large-scale hybrid-attention reasoning model

    MiniMax-M1 is the world’s first open-weight, large-scale hybrid-attention reasoning model designed for long-context and complex reasoning tasks. Powered by a hybrid Mixture-of-Experts (MoE) architecture combined with a lightning attention mechanism, it efficiently supports context lengths up to 1 million tokens—eight times larger than many contemporary models. MiniMax-M1 significantly reduces computational overhead at generation time, consuming only about 25% FLOPs compared to comparable...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 12
    ERNIE-4.5-0.3B-Base-PT

    Compact 360M text model with high efficiency and fine-tuning support

    ... with support for SFT, LoRA, and DPO training methods, making it highly adaptable. Compatible with the Hugging Face Transformers library, the model can be easily used in Python for inference or deployed via FastDeploy. This variant emphasizes portability and accessibility, enabling fast deployment even on less powerful hardware. Ideal for developers seeking a smaller model for prototyping, educational use, or lightweight production tasks.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 13
    Nanonets-OCR-s

    State-of-the-art image-to-markdown OCR model

    Nanonets-OCR-s is an advanced image-to-markdown OCR model that transforms documents into structured and semantically rich markdown. It goes beyond basic text extraction by intelligently recognizing content types and applying meaningful tags, making the output ideal for Large Language Models (LLMs) and automated workflows. The model expertly converts mathematical equations into LaTeX syntax, distinguishing between inline and display modes for accuracy. It also generates descriptive <img> tags...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 14
    FLUX.1-dev

    Powerful 12B parameter model for top-tier text-to-image creation

    FLUX.1-dev is a powerful 12-billion parameter rectified flow transformer designed for generating high-quality images from text prompts. It delivers cutting-edge output quality, just slightly below the flagship FLUX.1 [pro] model, and matches or exceeds many closed-source competitors in prompt adherence. The model is trained using guidance distillation, making it more efficient and accessible for developers and artists alike. FLUX.1-dev is openly available with weights provided to support...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 15
    stable-diffusion-v1-4

    Text-to-image diffusion model for high-quality image generation

    stable-diffusion-v1-4 is a high-performance text-to-image latent diffusion model developed by CompVis. It generates photo-realistic images from natural language prompts using a pretrained CLIP ViT-L/14 text encoder and a UNet-based denoising architecture. This version builds on v1-2, fine-tuned over 225,000 steps at 512×512 resolution on the “laion-aesthetics v2 5+” dataset, with 10% text-conditioning dropout for improved classifier-free guidance. It is optimized for use with Hugging Face’s...
    Downloads: 0 This Week
    Last Update:
    See Project
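Since the entry notes the model is optimized for Hugging Face tooling, here is a minimal text-to-image sketch with the diffusers library; the `CompVis/stable-diffusion-v1-4` repository id is assumed, and a CUDA GPU is assumed for reasonable speed:

```python
def generate_image(prompt: str, out_path: str = "out.png") -> str:
    """Text-to-image with Stable Diffusion v1-4 via Hugging Face diffusers."""
    # Imports inside the function: the checkpoint is a multi-GB download.
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
    ).to("cuda")  # CPU works too, but is much slower
    image = pipe(prompt).images[0]  # PIL image at 512x512 by default
    image.save(out_path)
    return out_path

# generate_image("a watercolor painting of a lighthouse at dawn")
```

The same `from_pretrained` pattern applies to the SDXL and SD 2.1 entries below, with their respective pipeline classes and repository ids.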
  • 16
    stable-diffusion-xl-base-1.0

    Advanced base model for high-quality text-to-image generation

    stable-diffusion-xl-base-1.0 is a next-generation latent diffusion model developed by Stability AI for producing highly detailed images from text prompts. It forms the core of the SDXL pipeline and can be used on its own or paired with a refinement model for enhanced results. This base model utilizes two pretrained text encoders—OpenCLIP-ViT/G and CLIP-ViT/L—for richer text understanding and improved image quality. The model supports two-stage generation, where the base model creates initial...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 17
    stable-diffusion-3-medium

    Efficient text-to-image model with enhanced quality and typography

    Stable Diffusion 3 Medium is a next-generation text-to-image model by Stability AI, designed using a Multimodal Diffusion Transformer (MMDiT) architecture. It offers notable improvements in image quality, prompt comprehension, typography, and computational efficiency over previous versions. The model integrates three fixed, pretrained text encoders—OpenCLIP-ViT/G, CLIP-ViT/L, and T5-XXL—to interpret complex prompts more effectively. Trained on 1 billion synthetic and filtered public images,...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 18
    Kokoro-82M

    Lightweight, fast, and high-quality open TTS model with 82M params

    Kokoro-82M is an open-weight, lightweight text-to-speech (TTS) model featuring 82 million parameters, developed to deliver high-quality voice synthesis with exceptional efficiency. Despite its compact size, Kokoro rivals the output quality of much larger models while remaining significantly faster and cheaper to run. Built on StyleTTS2 and ISTFTNet architectures, it uses a decoder-only setup without diffusion, enabling rapid audio generation with low computational overhead. Kokoro supports...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 19
    whisper-large-v3

    High-accuracy multilingual speech recognition and translation model

    Whisper-large-v3 is OpenAI’s most advanced multilingual automatic speech recognition (ASR) and speech translation model, featuring 1.54 billion parameters and trained on 5 million hours of labeled and pseudo-labeled audio. Built on a Transformer-based encoder-decoder architecture, it supports 99 languages and delivers significant improvements in transcription accuracy, robustness to noise, and handling of diverse accents. Compared to previous versions, v3 introduces a 128 Mel bin spectrogram...
    Downloads: 0 This Week
    Last Update:
    See Project
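A minimal transcription sketch with the Transformers ASR pipeline follows; the `openai/whisper-large-v3` repository id is assumed, and the 30-second chunking parameter is one common way to handle long audio:

```python
def transcribe(audio_path: str) -> str:
    """Transcribe an audio file with Whisper large-v3 via Transformers."""
    # Import inside the function: the 1.54B checkpoint is a large download.
    from transformers import pipeline

    asr = pipeline(
        "automatic-speech-recognition",
        model="openai/whisper-large-v3",
        chunk_length_s=30,  # process long audio in 30-second chunks
    )
    return asr(audio_path)["text"]

# transcribe("interview.mp3")
```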
  • 20
    Llama-2-7b-chat-hf

    Dialogue-optimized 7B language model for safe and helpful chatting

    Llama-2-7b-chat-hf is a fine-tuned large language model developed by Meta, designed specifically for dialogue use cases. With 7 billion parameters and built on an optimized transformer architecture, it uses supervised fine-tuning and reinforcement learning with human feedback (RLHF) to enhance helpfulness, coherence, and safety. It outperforms most open-source chat models and rivals proprietary systems like ChatGPT in human evaluations. Trained on 2 trillion tokens of public text and over 1...
    Downloads: 0 This Week
    Last Update:
    See Project
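A one-turn chat sketch using the tokenizer's built-in chat template is shown below. The repository is gated, so you must request access under Meta's license first; `meta-llama/Llama-2-7b-chat-hf` is the assumed repository id:

```python
def chat(user_message: str) -> str:
    """Single-turn dialogue with Llama-2-7b-chat via Transformers."""
    # Imports inside the function: the checkpoint is gated and multi-GB.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "meta-llama/Llama-2-7b-chat-hf"  # gated: requires accepting Meta's license
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id, torch_dtype=torch.float16, device_map="auto"
    )
    # apply_chat_template wraps the message in Llama 2's [INST] prompt format.
    messages = [{"role": "user", "content": user_message}]
    inputs = tokenizer.apply_chat_template(messages, return_tensors="pt").to(model.device)
    out = model.generate(inputs, max_new_tokens=128)
    # Decode only the newly generated tokens, not the echoed prompt.
    return tokenizer.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True)

# chat("Explain list comprehensions in one sentence.")
```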
  • 21
    Llama-2-7b

    7B-parameter foundational LLM by Meta for text generation tasks

    Llama-2-7B is a foundational large language model developed by Meta as part of the Llama 2 family, designed for general-purpose text generation in English. It has 7 billion parameters and uses an optimized transformer-based, autoregressive architecture. Trained on 2 trillion tokens of publicly available data, it serves as the base for fine-tuned models like Llama-2-Chat. The model is pretrained only, meaning it is not optimized for dialogue but can be adapted for various natural language...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 22
    Llama-3.1-8B-Instruct

    Multilingual 8B-parameter chat-optimized LLM fine-tuned by Meta

    Llama-3.1-8B-Instruct is a multilingual, instruction-tuned language model developed by Meta, designed for high-quality dialogue generation across eight languages, including English, Spanish, French, German, Italian, Portuguese, Hindi, and Thai. It uses a transformer-based, autoregressive architecture with Grouped-Query Attention and supports a 128k token context window. The model was fine-tuned using a combination of supervised fine-tuning (SFT), reinforcement learning with human feedback...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 23
    Meta-Llama-3-8B-Instruct

    Instruction-tuned 8B LLM by Meta for helpful, safe English dialogue

    Meta-Llama-3-8B-Instruct is an instruction-tuned large language model from Meta’s Llama 3 family, optimized for safe and helpful English dialogue. It uses an autoregressive transformer architecture with Grouped-Query Attention (GQA) and supports an 8k token context length. Fine-tuned using supervised learning and reinforcement learning with human feedback (RLHF), the model achieves strong results on benchmarks like MMLU, GSM8K, and HumanEval. Trained on over 15 trillion tokens of publicly...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 24
    FLUX.1-schnell

    12B-parameter image generator using fast rectified flow transformers

    FLUX.1-schnell is a 12 billion parameter text-to-image model developed by Black Forest Labs, designed for high-quality image generation using rectified flow transformers. It produces competitive visual results with strong prompt adherence, rivaling closed-source models in just 1 to 4 inference steps. Trained using latent adversarial diffusion distillation, the model is optimized for both quality and speed. It is released under the Apache 2.0 license, allowing commercial, scientific, and...
    Downloads: 0 This Week
    Last Update:
    See Project
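Because schnell is distilled for 1-4 inference steps, a diffusers sketch differs from the Stable Diffusion pattern mainly in step count and guidance. The `black-forest-labs/FLUX.1-schnell` repository id is assumed:

```python
def generate(prompt: str, out_path: str = "flux.png") -> str:
    """Few-step text-to-image with FLUX.1-schnell via diffusers' FluxPipeline."""
    # Imports inside the function: the 12B checkpoint is a very large download.
    import torch
    from diffusers import FluxPipeline

    pipe = FluxPipeline.from_pretrained(
        "black-forest-labs/FLUX.1-schnell", torch_dtype=torch.bfloat16
    )
    pipe.enable_model_cpu_offload()  # trades speed for lower GPU memory use
    # schnell is timestep-distilled: few steps, guidance effectively disabled.
    image = pipe(prompt, num_inference_steps=4, guidance_scale=0.0).images[0]
    image.save(out_path)
    return out_path

# generate("a macro photo of a dew-covered spider web")
```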
  • 25
    stable-diffusion-2-1

    Latent diffusion model for high-quality text-to-image generation

    Stable Diffusion 2.1 is a text-to-image generation model developed by Stability AI, building on the 768-v architecture with additional fine-tuning for improved safety and image quality. It uses a latent diffusion framework that operates in a compressed image space, enabling faster and more efficient image synthesis while preserving detail. The model is conditioned on text prompts via the OpenCLIP-ViT/H encoder and supports generation at resolutions up to 768×768. Released under the...
    Downloads: 0 This Week
    Last Update:
    See Project