Search Results for "python programming language" - Page 113

Showing 2829 open source projects for "python programming language"

  • 1
PAGEB is an easy-to-use tool for creating video games. With PAGEB you do not need to know any programming language or master complex game development techniques to create high-quality games. All you need is a good game idea and the time to develop it.
    Downloads: 0 This Week
  • 2
    DeepSeek-R1-0528

    DeepSeek-R1-0528 is a powerful reasoning-focused LLM with 64K context

    DeepSeek-R1-0528 is an upgraded large language model developed by DeepSeek AI, designed to improve deep reasoning, inference, and programming capabilities. With a context length of up to 64K tokens and 685 billion parameters, it introduces enhanced algorithmic optimizations and expanded token usage per task. Compared to previous versions, it significantly improves benchmark scores in math (e.g., AIME 2025: 87.5%), logic, and coding tasks like LiveCodeBench and SWE Verified. It supports system...
    Downloads: 0 This Week
  • 3
    ERNIE-4.5-300B-A47B-FP8-Paddle

    ERNIE 4.5 MoE model in FP8 for efficient high-performance inference

    ERNIE-4.5-300B-A47B-FP8-Paddle is a quantized version of Baidu’s MoE large language model, post-trained for text generation tasks and optimized for FP8 precision. This variant retains the original’s 300 billion total parameters with 47 billion active per token, enabling powerful language understanding while dramatically improving inference efficiency. Built using PaddlePaddle, it supports multi-GPU distributed deployment and leverages advanced routing strategies and expert parallelism...
    Downloads: 0 This Week
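A quick back-of-the-envelope calculation suggests why multi-GPU distributed deployment is needed even at FP8 precision. This is a rough sketch assuming one byte per weight; per-block scale factors and activation memory are ignored, so the real footprint is somewhat larger.

```python
# Rough weight-memory estimate for a 300B-parameter model stored in FP8.
# Assumption: 1 byte per parameter; scales and activations are ignored.
total_params = 300e9          # total parameters (only 47B are active per token)
bytes_per_param = 1           # FP8 = 8 bits
weight_gb = total_params * bytes_per_param / 1e9
print(f"~{weight_gb:.0f} GB of weights")  # ~300 GB, far beyond a single GPU
```

Note that the 47B active parameters reduce per-token compute, not weight storage: all 300B parameters must still be resident in GPU memory.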
  • 4
    Devstral

    Agentic 24B LLM optimized for coding tasks with 128k context support

    ..., compatible with frameworks like vLLM, Transformers, llama.cpp, and Ollama. It is licensed under Apache 2.0 and is fully open for commercial and non-commercial use. Its Tekken tokenizer allows a 131k vocabulary size for high flexibility in programming languages and natural language inputs. Devstral is the preferred backend for OpenHands, where it acts as the reasoning engine for autonomous code agents.
    Downloads: 0 This Week
  • 5
    DeepSeek-V3-0324

    Advanced multilingual LLM with enhanced reasoning and code generation

    DeepSeek-V3-0324 is a powerful large language model by DeepSeek AI that significantly enhances performance over its predecessor, especially in reasoning, programming, and Chinese language tasks. It achieves major benchmark improvements, such as +5.3 on MMLU-Pro and +19.8 on AIME, and delivers more executable, aesthetically improved front-end code. Its Chinese writing and search-answering capabilities have also been refined, generating more fluent, contextually aware long-form outputs. Key...
    Downloads: 0 This Week
  • 6
    cdd

    Lexical, grammatical and syntactic analyzer of C÷÷.

cdd is the lexical, grammatical, and syntactic analyzer for the C÷÷ dialect. C÷÷ is not a programming language in its own right, but a set of variations on the C language that cdd analyzes, converts to C code, and compiles by calling g++.
    Downloads: 0 This Week
  • 7

    Control-Freak

    Visual Automation & UI Authoring

This is a general-purpose tool for managing, programming, and automating machines, services, apps, or loose code snippets. It comes with a visual block language and a visual GUI designer that let you interconnect and automate all sorts of devices. It has built-in support for TCP, UDP, Serial, MQTT, SSH, Arduino/Raspberry Pi, or access to your custom API via HTTP. Public homepage: http://pearls-media.com/control-freak/ On GitHub: https://github.com/net...
    Downloads: 0 This Week
  • 8
SrcComDoc extracts documentation written in source code comments and formats and highlights it according to the chosen documentation format. The basic SrcComDoc syntax is independent of both the source language and the documentation language.
    Downloads: 0 This Week
  • 9
    A public repository of open source scripts and small programs related to linguistics and language.
    Downloads: 0 This Week
  • 10
    stable-diffusion-v1-4

    Text-to-image diffusion model for high-quality image generation

    stable-diffusion-v1-4 is a high-performance text-to-image latent diffusion model developed by CompVis. It generates photo-realistic images from natural language prompts using a pretrained CLIP ViT-L/14 text encoder and a UNet-based denoising architecture. This version builds on v1-2, fine-tuned over 225,000 steps at 512×512 resolution on the “laion-aesthetics v2 5+” dataset, with 10% text-conditioning dropout for improved classifier-free guidance. It is optimized for use with Hugging Face’s...
    Downloads: 0 This Week
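The classifier-free guidance mentioned above works by blending a text-conditioned and an unconditional noise prediction; the 10% text-conditioning dropout during training is what gives the model the unconditional branch. A minimal sketch of the blending step, with toy arrays standing in for real UNet outputs:

```python
import numpy as np

# Classifier-free guidance: push the final prediction away from the
# unconditional output and toward the text-conditioned one.
def cfg_combine(eps_uncond, eps_cond, guidance_scale):
    return eps_uncond + guidance_scale * (eps_cond - eps_uncond)

eps_u = np.zeros(4)   # toy stand-in for the unconditional noise prediction
eps_c = np.ones(4)    # toy stand-in for the text-conditioned prediction
print(cfg_combine(eps_u, eps_c, 7.5))  # [7.5 7.5 7.5 7.5]
```

A guidance scale of 1.0 recovers the conditional prediction unchanged; larger values trade sample diversity for prompt adherence.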
  • 11
    Llama-2-7b-chat-hf

    Dialogue-optimized 7B language model for safe and helpful chatting

    Llama-2-7b-chat-hf is a fine-tuned large language model developed by Meta, designed specifically for dialogue use cases. With 7 billion parameters and built on an optimized transformer architecture, it uses supervised fine-tuning and reinforcement learning with human feedback (RLHF) to enhance helpfulness, coherence, and safety. It outperforms most open-source chat models and rivals proprietary systems like ChatGPT in human evaluations. Trained on 2 trillion tokens of public text and over 1...
    Downloads: 0 This Week
  • 12
    Llama-2-7b

    7B-parameter foundational LLM by Meta for text generation tasks

    Llama-2-7B is a foundational large language model developed by Meta as part of the Llama 2 family, designed for general-purpose text generation in English. It has 7 billion parameters and uses an optimized transformer-based, autoregressive architecture. Trained on 2 trillion tokens of publicly available data, it serves as the base for fine-tuned models like Llama-2-Chat. The model is pretrained only, meaning it is not optimized for dialogue but can be adapted for various natural language...
    Downloads: 0 This Week
  • 13
    chatglm-6b

    Bilingual 6.2B parameter chatbot optimized for Chinese and English

    ChatGLM-6B is a 6.2 billion parameter bilingual language model developed by THUDM, based on the General Language Model (GLM) framework. It is optimized for natural and fluent dialogue in both Chinese and English, supporting applications in conversational AI, question answering, and assistance. Trained on approximately 1 trillion tokens, the model benefits from supervised fine-tuning, feedback self-training, and reinforcement learning with human feedback to align its outputs with human...
    Downloads: 0 This Week
  • 14
    OWLOS

    Open source network operating system for managing IoT devices.

OWLOS is an open source network operating system for managing IoT devices. It doesn't require internet access or additional servers, is ready to connect sensors, actuators, LCDs, DHT sensors, steppers, and other devices, and doesn't require programming skills. It offers a built-in user interface (use a web browser to access and manage your OWLOS nodes), a built-in RESTful server, and a built-in MQTT client. It can act as a WiFi access point and station at the same time, in any combination, and can be used autonomously...
    Downloads: 0 This Week
  • 15
    gpt-oss-20b

    OpenAI’s compact 20B open model for fast, agentic, and local use

GPT-OSS-20B is OpenAI’s smaller, open-weight language model optimized for low-latency agentic tasks and local deployment. With 21B total parameters and 3.6B active parameters (MoE), it fits within 16GB of memory thanks to native MXFP4 quantization. Designed for high-performance reasoning, it supports the Harmony response format, function calling, web browsing, and code execution. Like its larger sibling (gpt-oss-120b), it offers adjustable reasoning depth and full chain-of-thought visibility...
    Downloads: 0 This Week
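The 16GB figure above can be sanity-checked with rough arithmetic. This sketch assumes MXFP4 averages about 4.25 bits per parameter once shared per-block scale factors are counted; activations and the KV cache are ignored.

```python
# Rough weight-memory estimate for gpt-oss-20b under MXFP4 quantization.
# Assumption: ~4.25 bits per parameter (4-bit values plus per-block scales).
total_params = 21e9
bits_per_param = 4.25
weight_gb = total_params * bits_per_param / 8 / 1e9
print(f"~{weight_gb:.1f} GB of weights")  # ~11.2 GB, comfortably under 16 GB
```

The remaining headroom in a 16GB budget goes to activations, the KV cache, and runtime overhead.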
  • 16
    ERNIE-4.5-0.3B-Base-PT

    Compact 360M text model with high efficiency and fine-tuning support

    ERNIE-4.5-0.3B-Base-PT is a compact, fully dense transformer model with 360 million parameters, optimized for general-purpose text generation tasks. It belongs to the ERNIE 4.5 series by Baidu and leverages advanced pretraining techniques without relying on a Mixture-of-Experts (MoE) structure. The model features 18 transformer layers, 16 attention heads, and a maximum context length of 131,072 tokens, offering strong language understanding for its size. It can be fine-tuned using ERNIEKit...
    Downloads: 0 This Week
  • 17
    Nanonets-OCR-s

    State-of-the-art image-to-markdown OCR model

    Nanonets-OCR-s is an advanced image-to-markdown OCR model that transforms documents into structured and semantically rich markdown. It goes beyond basic text extraction by intelligently recognizing content types and applying meaningful tags, making the output ideal for Large Language Models (LLMs) and automated workflows. The model expertly converts mathematical equations into LaTeX syntax, distinguishing between inline and display modes for accuracy. It also generates descriptive <img> tags...
    Downloads: 0 This Week
  • 18
    whisper-large-v3

    High-accuracy multilingual speech recognition and translation model

    ... input and better support for Cantonese, achieving up to 20% error reduction over Whisper-large-v2. It handles zero-shot transcription and translation, performs language detection automatically, and supports features like word-level timestamps and long-form audio processing. The model integrates well with Hugging Face Transformers and supports optimizations such as batching, SDPA, and Flash Attention 2.
    Downloads: 0 This Week
  • 19
    Llama-3.1-8B-Instruct

    Multilingual 8B-parameter chat-optimized LLM fine-tuned by Meta

    Llama-3.1-8B-Instruct is a multilingual, instruction-tuned language model developed by Meta, designed for high-quality dialogue generation across eight languages, including English, Spanish, French, German, Italian, Portuguese, Hindi, and Thai. It uses a transformer-based, autoregressive architecture with Grouped-Query Attention and supports a 128k token context window. The model was fine-tuned using a combination of supervised fine-tuning (SFT), reinforcement learning with human feedback (RLHF...
    Downloads: 0 This Week
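The Grouped-Query Attention mentioned above shrinks the KV cache by letting several query heads share one key/value head. A minimal NumPy sketch of the mechanism, with toy shapes and head counts chosen for illustration:

```python
import numpy as np

# Grouped-Query Attention (GQA): n_q_heads query heads share
# n_kv_heads key/value heads (n_q_heads must be a multiple of n_kv_heads).
def gqa(x, wq, wk, wv, n_q_heads, n_kv_heads):
    seq, d_model = x.shape
    head_dim = d_model // n_q_heads
    group = n_q_heads // n_kv_heads          # query heads per KV head

    q = (x @ wq).reshape(seq, n_q_heads, head_dim)
    k = (x @ wk).reshape(seq, n_kv_heads, head_dim)
    v = (x @ wv).reshape(seq, n_kv_heads, head_dim)

    out = np.empty_like(q)
    for h in range(n_q_heads):
        kv = h // group                      # KV head shared by this query head
        scores = q[:, h] @ k[:, kv].T / np.sqrt(head_dim)
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)
        out[:, h] = weights @ v[:, kv]
    return out.reshape(seq, d_model)

rng = np.random.default_rng(0)
d, nq, nkv = 64, 8, 2                        # toy sizes, not Llama's real config
x = rng.normal(size=(5, d))
wq = rng.normal(size=(d, d))
wk = rng.normal(size=(d, d // (nq // nkv)))  # K/V projections are 4x smaller
wv = rng.normal(size=(d, d // (nq // nkv)))
out = gqa(x, wq, wk, wv, nq, nkv)
print(out.shape)  # (5, 64)
```

With 8 query heads sharing 2 KV heads, the cached K and V tensors are a quarter of the multi-head-attention size, which is what makes long context windows like 128k tokens cheaper to serve.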
  • 20
    Meta-Llama-3-8B-Instruct

    Instruction-tuned 8B LLM by Meta for helpful, safe English dialogue

    Meta-Llama-3-8B-Instruct is an instruction-tuned large language model from Meta’s Llama 3 family, optimized for safe and helpful English dialogue. It uses an autoregressive transformer architecture with Grouped-Query Attention (GQA) and supports an 8k token context length. Fine-tuned using supervised learning and reinforcement learning with human feedback (RLHF), the model achieves strong results on benchmarks like MMLU, GSM8K, and HumanEval. Trained on over 15 trillion tokens of publicly...
    Downloads: 0 This Week
  • 21
    Mistral-7B-Instruct-v0.2

    Instruction-tuned 7B model for chat and task-oriented text generation

    Mistral-7B-Instruct-v0.2 is a fine-tuned version of the Mistral-7B-v0.2 language model, designed specifically for following instructions in a conversational format. It supports a 32k token context window, enabling more detailed and longer interactions compared to its predecessor. The model is trained to respond to user prompts formatted with [INST] and [/INST] tags, and it performs well in instruction-following tasks like Q&A, summarization, and explanations. It can be used via the official...
    Downloads: 0 This Week
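The [INST]/[/INST] prompt format described above can be illustrated with a small helper. This is a hand-rolled sketch: the exact special tokens are an assumption, and in practice the tokenizer's built-in chat template should be used instead.

```python
# Hypothetical helper mimicking the Mistral instruction format:
# user turns are wrapped in [INST] ... [/INST]; assistant turns are
# appended as plain text closed with an end-of-sequence token.
def build_prompt(messages):
    prompt = "<s>"
    for msg in messages:
        if msg["role"] == "user":
            prompt += f"[INST] {msg['content']} [/INST]"
        else:  # assistant
            prompt += f" {msg['content']}</s>"
    return prompt

print(build_prompt([
    {"role": "user", "content": "What is RLHF?"},
    {"role": "assistant", "content": "Reinforcement learning from human feedback."},
    {"role": "user", "content": "Give a one-line example."},
]))
```

The model then generates the next assistant turn after the final [/INST] tag.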
  • 22
    ⓍTTS-v2

    Multilingual voice cloning TTS model with 6-second sample support

    ⓍTTS-v2 (XTTS-v2) by Coqui is a powerful multilingual text-to-speech model capable of cloning voices from a short 6-second audio sample. It supports 17 languages and enables high-quality voice generation with emotion, style transfer, and cross-language synthesis. The model introduces major improvements over ⓍTTS-v1, including better prosody, stability, and support for Hungarian and Korean. ⓍTTS-v2 allows interpolation between multiple voice references and generates speech at a 24kHz sampling...
    Downloads: 0 This Week
  • 23
    whisper-large-v3-turbo

    Whisper-large-v3-turbo delivers fast, multilingual speech recognition

    Whisper-large-v3-turbo is a high-performance automatic speech recognition (ASR) and translation model developed by OpenAI, based on a pruned version of Whisper large-v3. It reduces decoding layers from 32 to 4, offering significantly faster inference with only minor degradation in accuracy. Trained on over 5 million hours of multilingual data, it handles speech transcription, translation, and language identification across 99 languages. It supports advanced decoding strategies like beam search...
    Downloads: 0 This Week
  • 24
    Llama-3.3-70B-Instruct

    Llama-3.3-70B-Instruct is a multilingual AI optimized for helpful chat

    Llama-3.3-70B-Instruct is Meta's large, instruction-tuned language model designed for safe, multilingual, assistant-style conversations and text generation. With 70 billion parameters, it supports English, Spanish, French, German, Italian, Portuguese, Hindi, and Thai, offering state-of-the-art performance across a wide range of benchmarks including MMLU, HumanEval, and GPQA. The model is built on a transformer architecture with grouped-query attention, trained on over 15 trillion tokens...
    Downloads: 0 This Week
  • 25
    Llama-2-70b-chat-hf

    Llama-2-70B-Chat is Meta’s largest fine-tuned open-source chat LLM

    Llama-2-70B-Chat is Meta’s largest fine-tuned large language model, optimized for dialogue and aligned using supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF). It features 70 billion parameters and uses a transformer architecture with grouped-query attention (GQA) to improve inference scalability. Trained on 2 trillion tokens from publicly available sources and over a million human-annotated examples, the model outperforms most open-source chat models and rivals...
    Downloads: 0 This Week