Showing 64 open source projects for "raspberry-gpio-python"

  • 1
    Qwen2.5-Coder

    Qwen2.5-Coder is the code version of Qwen2.5, the large language model series developed by QwenLM

    Qwen2.5-Coder, developed by QwenLM, is an advanced open-source code generation model designed for developers seeking powerful and diverse coding capabilities. It is released in multiple sizes, ranging from 0.5B to 32B parameters, to cover a wide array of coding needs. The model supports 92 programming languages and performs strongly at code generation, debugging, and mathematical problem-solving. Qwen2.5-Coder, with its long context length of 128K tokens, is... A usage sketch follows this entry.
    Downloads: 5 This Week
    See Project
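    A minimal quick-start sketch using the standard Hugging Face transformers chat API; the Qwen/Qwen2.5-Coder-7B-Instruct checkpoint name and the prompt are assumptions, so substitute whichever size fits your hardware:

        # Hedged sketch: model ID and prompt are illustrative, not project-official.
        from transformers import AutoModelForCausalLM, AutoTokenizer

        model_id = "Qwen/Qwen2.5-Coder-7B-Instruct"  # assumed checkpoint name
        tokenizer = AutoTokenizer.from_pretrained(model_id)
        model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

        messages = [{"role": "user", "content": "Write a Python function that checks whether a number is prime."}]
        inputs = tokenizer.apply_chat_template(
            messages, add_generation_prompt=True, return_tensors="pt"
        ).to(model.device)
        outputs = model.generate(inputs, max_new_tokens=256)
        # Decode only the newly generated tokens, not the prompt.
        print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))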
  • 2
    CSM (Conversational Speech Model)

    A Conversational Speech Generation Model

    The CSM (Conversational Speech Model) is a speech generation model developed by Sesame AI that creates RVQ audio codes from text and audio inputs. It uses a Llama backbone and a smaller audio decoder to produce audio codes for realistic speech synthesis. The model has been fine-tuned for interactive voice demos and is hosted on platforms like Hugging Face for testing. CSM offers a flexible setup and is compatible with CUDA-enabled GPUs for efficient execution.
    Downloads: 2 This Week
    See Project
  • 3
    ChatGLM2-6B

    An Open Bilingual (Chinese-English) Chat LLM

    ChatGLM2-6B is an advanced open-source bilingual dialogue model developed by THUDM. It is the second iteration of the ChatGLM series, designed to offer enhanced performance while maintaining the strengths of its predecessor, including smooth conversation flow and low deployment barriers. The model is fine-tuned for both Chinese and English, making it a versatile tool for multilingual applications. ChatGLM2-6B aims to push the boundaries of natural language understanding and... A quick-start sketch follows this entry.
    Downloads: 1 This Week
    See Project
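    A quick-start sketch mirroring the project's widely documented transformers usage, assuming the THUDM/chatglm2-6b checkpoint on Hugging Face and a CUDA GPU:

        # ChatGLM2-6B ships custom modeling code, hence trust_remote_code=True.
        from transformers import AutoModel, AutoTokenizer

        tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm2-6b", trust_remote_code=True)
        model = AutoModel.from_pretrained("THUDM/chatglm2-6b", trust_remote_code=True).half().cuda()
        model = model.eval()

        # The custom model class exposes a chat() helper that manages dialogue history.
        response, history = model.chat(tokenizer, "Hello, introduce yourself briefly.", history=[])
        print(response)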
  • 4
    Janus-Pro

    Janus-Series: Unified Multimodal Understanding and Generation Models

    Janus is a cutting-edge, unified multimodal model designed to advance both multimodal understanding and generation. It features a decoupled visual encoding approach that lets it handle visual understanding separately from generation, yielding greater flexibility and performance. Built on a single transformer architecture, Janus surpasses previous unified models and rivals specialized task-specific models in handling diverse multimodal inputs and generating high-quality...
    Downloads: 1 This Week
    See Project
  • 5
    GLM-4-32B-0414

    Open Multilingual Multimodal Chat LMs

    GLM-4-32B-0414 is a powerful open-source large language model featuring 32 billion parameters, designed to deliver performance comparable to leading models like OpenAI’s GPT series. It supports multilingual and multimodal chat capabilities with an extensive 32K token context length, making it ideal for dialogue, reasoning, and complex task completion. The model is pre-trained on 15 trillion tokens of high-quality data, including substantial synthetic reasoning datasets, and further enhanced...
    Downloads: 0 This Week
    See Project
  • 6
    LaMDA-pytorch

    Open-source pre-training implementation of Google's LaMDA in PyTorch

    Open-source pre-training implementation of Google's LaMDA research paper in PyTorch. The totally not sentient AI. This repository covers the 2B-parameter variant of the pre-training architecture, as that is likely what most can afford to train. Google's 2022 blog post details LaMDA, and their earlier 2021 post introduces the model.
    Downloads: 2 This Week
    See Project
  • 7
    GPT Neo

    An implementation of model parallel GPT-2 and GPT-3-style models

    An implementation of model- and data-parallel GPT-3-like models using the mesh-tensorflow library. If you're just here to play with our pre-trained models, we strongly recommend you try out the Hugging Face Transformers integration (a sketch follows this entry). Training and inference are officially supported on TPU and should work on GPU as well. This repository will be (mostly) archived as we move focus to our GPU-specific repo, GPT-NeoX. NB: while Neo can technically run a training step at 200B+ parameters, it is very...
    Downloads: 6 This Week
    See Project
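    The Hugging Face integration mentioned above boils down to a one-line pipeline; a minimal sketch, assuming the EleutherAI/gpt-neo-2.7B checkpoint:

        # Text generation with a pre-trained GPT-Neo checkpoint via transformers.
        from transformers import pipeline

        generator = pipeline("text-generation", model="EleutherAI/gpt-neo-2.7B")
        result = generator("EleutherAI has", max_length=50, do_sample=True, temperature=0.9)
        print(result[0]["generated_text"])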
  • 8
    Dia-1.6B

    Dia-1.6B generates lifelike English dialogue and vocal expressions

    ... on enterprise GPUs, though CPU and quantized versions are planned. The format supports [S1]/[S2] tags to differentiate speakers and integrates easily into Python workflows. While not tuned to a specific voice, user-provided audio can guide output style. Licensed under Apache 2.0, Dia is intended for research and educational use, with explicit restrictions on misuse like identity mimicry or deceptive content.
    Downloads: 0 This Week
    See Project
  • 9
    grok-1

    Grok-1 is a 314B-parameter open-weight language model by xAI

    Grok-1 is a large-scale language model released by xAI, featuring 314 billion parameters and made available under the Apache 2.0 license. It is designed for text generation and was trained for advanced language understanding and reasoning capabilities. Grok-1 is currently distributed as open weights, with inference support requiring multi-GPU hardware due to its size. The model can be downloaded from Hugging Face and run using the accompanying Python code in the official GitHub repository...
    Downloads: 0 This Week
    See Project
  • 10
    ERNIE-4.5-300B-A47B-FP8-Paddle

    ERNIE 4.5 MoE model in FP8 for efficient high-performance inference

    .... It is especially well-suited for production environments requiring high throughput and lower memory use, while maintaining high reasoning and generation quality. The model can be used with FastDeploy and integrates cleanly with Python APIs for prompt-based generation workflows. It supports long context lengths (up to 131,072 tokens) and includes both Chinese and English prompt templates for web search applications.
    Downloads: 0 This Week
    See Project
  • 11
    starcoder

    Code generation model trained on 80+ languages with FIM support

    StarCoder is a 15.5B parameter language model developed by BigCode for code generation tasks across more than 80 programming languages. It is trained on 1 trillion tokens from the permissively licensed dataset The Stack v1.2, using the Fill-in-the-Middle (FIM) objective and Multi-Query Attention to enhance performance. With an extended context window of 8192 tokens and pretraining in bfloat16, StarCoder can generate, complete, or refactor code in various languages, with English as the... A FIM sketch follows this entry.
    Downloads: 0 This Week
    See Project
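    A hedged sketch of the Fill-in-the-Middle prompt format, assuming the gated bigcode/starcoder checkpoint (accept its license on Hugging Face first) and the FIM sentinel tokens from the model card:

        from transformers import AutoModelForCausalLM, AutoTokenizer

        checkpoint = "bigcode/starcoder"
        tokenizer = AutoTokenizer.from_pretrained(checkpoint)
        model = AutoModelForCausalLM.from_pretrained(checkpoint, device_map="auto")

        # FIM: the model fills in the span between <fim_prefix> and <fim_suffix>.
        prompt = (
            "<fim_prefix>def fibonacci(n):\n    "
            "<fim_suffix>\n    return fibonacci(n - 1) + fibonacci(n - 2)<fim_middle>"
        )
        inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
        outputs = model.generate(inputs.input_ids, max_new_tokens=48)
        print(tokenizer.decode(outputs[0]))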
  • 12
    DeepSWE-Preview

    State-of-the-art RL-trained coding agent for complex SWE tasks

    DeepSWE-Preview is a 32.8B parameter open-source coding agent trained solely with reinforcement learning (RL) to perform complex software engineering (SWE) tasks. Built on top of Qwen3-32B, it achieves 59% accuracy on the SWE-Bench-Verified benchmark—currently the highest among open-weight models. The model navigates and edits large codebases using tools like a file editor, bash execution, and search, within the R2E-Gym environment. Its training emphasizes sparse reward signals, test-time...
    Downloads: 0 This Week
    See Project
  • 13
    segmentation-3.0

    Speaker segmentation model for 10s audio chunks with powerset labels

    segmentation-3.0 is a voice activity and speaker segmentation model from the pyannote.audio framework, designed to analyze 10-second mono audio sampled at 16kHz. It outputs a (num_frames, num_classes) matrix using a powerset encoding that includes non-speech, individual speakers, and overlapping speech for up to three speakers. Trained with pyannote.audio 3.0.0 on a rich blend of datasets (including AISHELL, DIHARD, VoxConverse, and more), it enables downstream tasks like voice activity... A usage sketch follows this entry.
    Downloads: 0 This Week
    See Project
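    A hedged sketch of wrapping the model in a pyannote.audio voice activity detection pipeline; the model is gated, so the Hugging Face token and the audio file name below are placeholders, and the hyperparameter names are assumptions based on the pyannote.audio 3.x API:

        from pyannote.audio import Model
        from pyannote.audio.pipelines import VoiceActivityDetection

        # Gated model: request access on Hugging Face and pass a real token.
        model = Model.from_pretrained("pyannote/segmentation-3.0", use_auth_token="YOUR_HF_TOKEN")

        pipeline = VoiceActivityDetection(segmentation=model)
        # With powerset labels, only the duration thresholds remain tunable.
        pipeline.instantiate({"min_duration_on": 0.0, "min_duration_off": 0.0})

        speech = pipeline("audio.wav")  # processed internally as 10s chunks of 16kHz mono
        for segment in speech.get_timeline():
            print(f"speech {segment.start:.1f}s - {segment.end:.1f}s")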
  • 14
    MiniMax-M1

    Open-weight, large-scale hybrid-attention reasoning model

    MiniMax-M1 is the world’s first open-weight, large-scale hybrid-attention reasoning model designed for long-context and complex reasoning tasks. Powered by a hybrid Mixture-of-Experts (MoE) architecture combined with a lightning attention mechanism, it efficiently supports context lengths of up to 1 million tokens, eight times larger than many contemporary models. MiniMax-M1 significantly reduces computational overhead at generation time, consuming only about 25% of the FLOPs compared to comparable...
    Downloads: 0 This Week
    See Project
  • 15
    Nanonets-OCR-s

    State-of-the-art image-to-markdown OCR model

    Nanonets-OCR-s is an advanced image-to-markdown OCR model that transforms documents into structured and semantically rich markdown. It goes beyond basic text extraction by intelligently recognizing content types and applying meaningful tags, making the output ideal for Large Language Models (LLMs) and automated workflows. The model expertly converts mathematical equations into LaTeX syntax, distinguishing between inline and display modes for accuracy. It also generates descriptive <img> tags...
    Downloads: 0 This Week
    See Project
  • 16
    FLUX.1-dev

    Powerful 12B parameter model for top-tier text-to-image creation

    FLUX.1-dev is a powerful 12-billion parameter rectified flow transformer designed for generating high-quality images from text prompts. It delivers cutting-edge output quality, just slightly below the flagship FLUX.1 [pro] model, and matches or exceeds many closed-source competitors in prompt adherence. The model is trained using guidance distillation, making it more efficient and accessible for developers and artists alike. FLUX.1-dev is openly available with weights provided to support... A diffusers sketch follows this entry.
    Downloads: 0 This Week
    See Project
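    A minimal text-to-image sketch with the diffusers FluxPipeline, assuming access to the gated black-forest-labs/FLUX.1-dev weights; the prompt and sampler settings are illustrative:

        import torch
        from diffusers import FluxPipeline

        pipe = FluxPipeline.from_pretrained(
            "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
        )
        pipe.enable_model_cpu_offload()  # trade speed for VRAM on smaller GPUs

        image = pipe(
            "a tiny astronaut hatching from an egg on the moon",
            guidance_scale=3.5,
            num_inference_steps=50,
        ).images[0]
        image.save("flux-dev.png")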
  • 17
    stable-diffusion-v1-4

    Text-to-image diffusion model for high-quality image generation

    stable-diffusion-v1-4 is a high-performance text-to-image latent diffusion model developed by CompVis. It generates photo-realistic images from natural language prompts using a pretrained CLIP ViT-L/14 text encoder and a UNet-based denoising architecture. This version builds on v1-2, fine-tuned over 225,000 steps at 512×512 resolution on the “laion-aesthetics v2 5+” dataset, with 10% text-conditioning dropout for improved classifier-free guidance. It is optimized for use with Hugging Face’s... A usage sketch follows this entry.
    Downloads: 0 This Week
    See Project
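    A minimal diffusers sketch, assuming the CompVis/stable-diffusion-v1-4 checkpoint and a CUDA GPU; the prompt is illustrative:

        import torch
        from diffusers import StableDiffusionPipeline

        pipe = StableDiffusionPipeline.from_pretrained(
            "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
        ).to("cuda")

        image = pipe("a photograph of an astronaut riding a horse").images[0]
        image.save("astronaut.png")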
  • 18
    stable-diffusion-xl-base-1.0

    Advanced base model for high-quality text-to-image generation

    stable-diffusion-xl-base-1.0 is a next-generation latent diffusion model developed by Stability AI for producing highly detailed images from text prompts. It forms the core of the SDXL pipeline and can be used on its own or paired with a refinement model for enhanced results. This base model uses two pretrained text encoders, OpenCLIP-ViT/G and CLIP-ViT/L, for richer text understanding and improved image quality. The model supports two-stage generation, where the base model creates initial... A two-stage sketch follows this entry.
    Downloads: 0 This Week
    See Project
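    A hedged sketch of the two-stage base-plus-refiner flow in diffusers, handing base latents to the stabilityai/stable-diffusion-xl-refiner-1.0 model; the 0.8 denoising split and the prompt are illustrative choices:

        import torch
        from diffusers import DiffusionPipeline

        base = DiffusionPipeline.from_pretrained(
            "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
        ).to("cuda")
        refiner = DiffusionPipeline.from_pretrained(
            "stabilityai/stable-diffusion-xl-refiner-1.0", torch_dtype=torch.float16
        ).to("cuda")

        prompt = "a majestic lion jumping from a big stone at night"
        # Stage 1: base model denoises the first 80% and hands off latents.
        latents = base(prompt=prompt, denoising_end=0.8, output_type="latent").images
        # Stage 2: refiner finishes the remaining 20% for fine detail.
        image = refiner(prompt=prompt, denoising_start=0.8, image=latents).images[0]
        image.save("sdxl.png")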
  • 19
    stable-diffusion-3-medium

    Efficient text-to-image model with enhanced quality and typography

    Stable Diffusion 3 Medium is a next-generation text-to-image model by Stability AI, designed using a Multimodal Diffusion Transformer (MMDiT) architecture. It offers notable improvements in image quality, prompt comprehension, typography, and computational efficiency over previous versions. The model integrates three fixed, pretrained text encoders—OpenCLIP-ViT/G, CLIP-ViT/L, and T5-XXL—to interpret complex prompts more effectively. Trained on 1 billion synthetic and filtered public images,...
    Downloads: 0 This Week
    See Project
  • 20
    Kokoro-82M

    Lightweight, fast, and high-quality open TTS model with 82M params

    Kokoro-82M is an open-weight, lightweight text-to-speech (TTS) model featuring 82 million parameters, developed to deliver high-quality voice synthesis with exceptional efficiency. Despite its compact size, Kokoro rivals the output quality of much larger models while remaining significantly faster and cheaper to run. Built on StyleTTS2 and ISTFTNet architectures, it uses a decoder-only setup without diffusion, enabling rapid audio generation with low computational overhead. Kokoro supports...
    Downloads: 0 This Week
    See Project
  • 21
    whisper-large-v3

    High-accuracy multilingual speech recognition and translation model

    Whisper-large-v3 is OpenAI’s most advanced multilingual automatic speech recognition (ASR) and speech translation model, featuring 1.54 billion parameters and trained on 5 million hours of labeled and pseudo-labeled audio. Built on a Transformer-based encoder-decoder architecture, it supports 99 languages and delivers significant improvements in transcription accuracy, robustness to noise, and handling of diverse accents. Compared to previous versions, v3 introduces a 128 Mel bin spectrogram... A transcription sketch follows this entry.
    Downloads: 0 This Week
    See Project
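    A minimal transcription sketch using the transformers ASR pipeline; the audio file name is a placeholder:

        import torch
        from transformers import pipeline

        asr = pipeline(
            "automatic-speech-recognition",
            model="openai/whisper-large-v3",
            torch_dtype=torch.float16,
            device="cuda:0",
        )
        result = asr("speech.mp3")  # also accepts raw audio arrays
        print(result["text"])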
  • 22
    Llama-2-7b-chat-hf

    Dialogue-optimized 7B language model for safe and helpful chatting

    Llama-2-7b-chat-hf is a fine-tuned large language model developed by Meta, designed specifically for dialogue use cases. With 7 billion parameters and built on an optimized transformer architecture, it uses supervised fine-tuning and reinforcement learning with human feedback (RLHF) to enhance helpfulness, coherence, and safety. It outperforms most open-source chat models and rivals proprietary systems like ChatGPT in human evaluations. Trained on 2 trillion tokens of public text and over 1...
    Downloads: 0 This Week
    See Project
  • 23
    Llama-2-7b

    7B-parameter foundational LLM by Meta for text generation tasks

    Llama-2-7B is a foundational large language model developed by Meta as part of the Llama 2 family, designed for general-purpose text generation in English. It has 7 billion parameters and uses an optimized transformer-based, autoregressive architecture. Trained on 2 trillion tokens of publicly available data, it serves as the base for fine-tuned models like Llama-2-Chat. The model is pretrained only, meaning it is not optimized for dialogue but can be adapted for various natural language...
    Downloads: 0 This Week
    See Project
  • 24
    Llama-3.1-8B-Instruct

    Multilingual 8B-parameter chat-optimized LLM fine-tuned by Meta

    Llama-3.1-8B-Instruct is a multilingual, instruction-tuned language model developed by Meta, designed for high-quality dialogue generation across eight languages: English, Spanish, French, German, Italian, Portuguese, Hindi, and Thai. It uses a transformer-based, autoregressive architecture with Grouped-Query Attention and supports a 128k token context window. The model was fine-tuned using a combination of supervised fine-tuning (SFT), reinforcement learning with human feedback... A chat sketch follows this entry.
    Downloads: 0 This Week
    See Project
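    A hedged chat sketch via the transformers text-generation pipeline, assuming access to the gated meta-llama/Llama-3.1-8B-Instruct repository; the system and user messages are illustrative:

        import torch
        from transformers import pipeline

        chat = pipeline(
            "text-generation",
            model="meta-llama/Llama-3.1-8B-Instruct",
            torch_dtype=torch.bfloat16,
            device_map="auto",
        )
        messages = [
            {"role": "system", "content": "You are a concise assistant."},
            {"role": "user", "content": "Explain grouped-query attention in two sentences."},
        ]
        out = chat(messages, max_new_tokens=128)
        # The pipeline returns the whole conversation; print the last (assistant) turn.
        print(out[0]["generated_text"][-1]["content"])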
  • 25
    Meta-Llama-3-8B-Instruct

    Instruction-tuned 8B LLM by Meta for helpful, safe English dialogue

    Meta-Llama-3-8B-Instruct is an instruction-tuned large language model from Meta’s Llama 3 family, optimized for safe and helpful English dialogue. It uses an autoregressive transformer architecture with Grouped-Query Attention (GQA) and supports an 8k token context length. Fine-tuned using supervised learning and reinforcement learning with human feedback (RLHF), the model achieves strong results on benchmarks like MMLU, GSM8K, and HumanEval. Trained on over 15 trillion tokens of publicly...
    Downloads: 0 This Week
    See Project