Showing 23 open source projects for "delphi code source"

  • 1
    AlphaGenome

    Programmatic access to the AlphaGenome model

    The AlphaGenome API provides access to AlphaGenome, Google DeepMind’s unifying model for deciphering the regulatory code within DNA sequences. This repository contains client-side code, examples, and documentation to help you use the AlphaGenome API. AlphaGenome offers multimodal predictions, encompassing diverse functional outputs such as gene expression, splicing patterns, chromatin features, and contact maps. The model analyzes DNA sequences of up to 1 million base pairs in length and can...
    Downloads: 3 This Week
    See Project
  • 2
    BioEmu

    Inference code for scalable emulation of protein equilibrium ensembles

    Biomolecular Emulator (BioEmu for short) is a model that samples from the approximated equilibrium distribution of structures for a protein monomer, given its amino acid sequence. By default, unphysical structures (steric clashes or chain discontinuities) will be filtered out, so you will typically get fewer samples in the output than requested. The difference can be very large if your protein has large disordered regions, which are very likely to produce clashes. BioEmu outputs structures...
    Downloads: 0 This Week
    See Project
  • 3
    VALL-E

    PyTorch implementation of VALL-E (Zero-Shot Text-To-Speech)

    We introduce a language modeling approach for text to speech synthesis (TTS). Specifically, we train a neural codec language model (called VALL-E) using discrete codes derived from an off-the-shelf neural audio codec model, and regard TTS as a conditional language modeling task rather than continuous signal regression as in previous work. During the pre-training stage, we scale up the TTS training data to 60K hours of English speech which is hundreds of times larger than existing systems....
    Downloads: 12 This Week
    See Project
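
A toy PyTorch sketch can make this entry's "TTS as conditional language modeling" idea concrete: a decoder-only transformer predicts the next discrete codec token given a text prompt and the acoustic tokens so far. Every name, vocabulary size, and dimension below is an illustrative assumption, not this repository's API.

```python
# Toy sketch of neural-codec language modeling (illustrative only).
import torch
import torch.nn as nn

TEXT_VOCAB, CODEC_VOCAB, DIM = 256, 1024, 512  # assumed toy sizes

class ToyCodecLM(nn.Module):
    """Decoder-only LM over text tokens followed by audio-codec tokens."""
    def __init__(self):
        super().__init__()
        self.text_emb = nn.Embedding(TEXT_VOCAB, DIM)
        self.codec_emb = nn.Embedding(CODEC_VOCAB, DIM)
        layer = nn.TransformerEncoderLayer(DIM, nhead=8, batch_first=True)
        self.decoder = nn.TransformerEncoder(layer, num_layers=4)
        self.head = nn.Linear(DIM, CODEC_VOCAB)

    def forward(self, text_ids, codec_ids):
        # Concatenate text (condition) and codec (continuation) embeddings.
        x = torch.cat([self.text_emb(text_ids), self.codec_emb(codec_ids)], dim=1)
        # Causal mask: each position attends only to earlier positions.
        n = x.size(1)
        mask = torch.triu(torch.full((n, n), float("-inf")), diagonal=1)
        h = self.decoder(x, mask=mask)
        # Predict the next codec token only at the acoustic positions.
        return self.head(h[:, text_ids.size(1):])

model = ToyCodecLM()
logits = model(torch.randint(0, TEXT_VOCAB, (1, 12)),    # text prompt ids
               torch.randint(0, CODEC_VOCAB, (1, 30)))   # codec history ids
print(logits.shape)  # torch.Size([1, 30, 1024])
```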
  • 4
    LLaMA.go

    llama.go is like llama.cpp in pure Golang

    llama.go is like llama.cpp in pure Golang. The project's code is based on Georgi Gerganov's legendary ggml.cpp framework, written in C++ with the same attitude toward performance and elegance. The models store FP32 weights, so you'll need at least 32 GB of RAM (not VRAM or GPU RAM) for LLaMA-7B and 64 GB for LLaMA-13B.
    Downloads: 0 This Week
    See Project
  • 5
    Qwen2.5

    Open source large language model by Alibaba

    ... (exceeding 8,000 tokens), and structured data comprehension, such as tables and JSON formats. They support context lengths up to 128,000 tokens and offer multilingual capabilities in over 29 languages, including Chinese, English, French, Spanish, and more. The models are open-source under the Apache 2.0 license, with resources and documentation available on platforms like Hugging Face and ModelScope. This is a full ZIP snapshot of the Qwen2.5 code.
    Downloads: 64 This Week
    See Project
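
Since the entry notes Qwen2.5 is distributed through Hugging Face, a minimal, hedged sketch of chat-style generation with transformers follows; the repo id and generation settings are assumptions, and any size from the collection should work the same way.

```python
# Hedged sketch: chat generation with a Qwen2.5 instruct checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen2.5-7B-Instruct"  # assumed Hugging Face repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

messages = [{"role": "user", "content": "Summarize the table: a=1, b=2"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

out = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(out[0][input_ids.shape[-1]:], skip_special_tokens=True))
```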
  • 6
    Qwen

    Qwen (通义千问) chat and pretrained large language models by Alibaba Cloud

    Qwen is a series of large language models developed by Alibaba Cloud, consisting of various pretrained versions like Qwen-1.8B, Qwen-7B, Qwen-14B, and Qwen-72B. These models, which range from smaller to larger configurations, are designed for a wide range of natural language processing tasks. They are openly available for research and commercial use, with Qwen's code and model weights shared on GitHub. Qwen's capabilities include text generation, comprehension, and conversation, making...
    Downloads: 14 This Week
    See Project
  • 7
    Warlock-Studio

    AI suite for image and video upscaling and enhancement (v4.0.1)

    Warlock-Studio is a powerful, open-source desktop application for Windows that integrates state-of-the-art AI models for video and image enhancement. This suite provides a unified, high-performance interface for upscaling, restoration, and frame interpolation, making advanced enhancement workflows accessible and efficient. Version 4.0.1 continues this evolution with enhanced AI architecture and improved code stability for even better performance and reliability. Note: The SuperResolution-10...
    Downloads: 11 This Week
    See Project
  • 8
    Grok-1

    Open-source, high-performance Mixture-of-Experts large language model

    Grok-1 is a 314-billion-parameter Mixture-of-Experts (MoE) large language model developed by xAI. Designed to optimize computational efficiency, it activates only 25% of its weights for each input token. In March 2024, xAI released Grok-1's model weights and architecture under the Apache 2.0 license, making them openly accessible to developers. The accompanying GitHub repository provides JAX example code for loading and running the model. Due to its substantial size, utilizing Grok-1 requires...
    Downloads: 8 This Week
    See Project
  • 9
    Qwen2.5-Coder

    Qwen2.5-Coder is the code-specialized version of the Qwen2.5 large language model

    Qwen2.5-Coder, developed by QwenLM, is an advanced open-source code generation model designed for developers seeking powerful and diverse coding capabilities. It includes multiple model sizes—ranging from 0.5B to 32B parameters—providing solutions for a wide array of coding needs. The model supports over 92 programming languages and offers exceptional performance in generating code, debugging, and mathematical problem-solving. Qwen2.5-Coder, with its long context length of 128K tokens, is ideal...
    Downloads: 5 This Week
    See Project
  • 10
    GLM-4-32B-0414

    Open Multilingual Multimodal Chat LMs

    GLM-4-32B-0414 is a powerful open-source large language model featuring 32 billion parameters, designed to deliver performance comparable to leading models like OpenAI’s GPT series. It supports multilingual and multimodal chat capabilities with an extensive 32K token context length, making it ideal for dialogue, reasoning, and complex task completion. The model is pre-trained on 15 trillion tokens of high-quality data, including substantial synthetic reasoning datasets, and further enhanced...
    Downloads: 6 This Week
    See Project
  • 11
    ChatGLM2-6B

    An open-source bilingual (Chinese-English) chat LLM

    ChatGLM2-6B is an advanced open-source bilingual dialogue model developed by THUDM. It is the second iteration of the ChatGLM series, designed to offer enhanced performance while maintaining the strengths of its predecessor, including smooth conversation flow and low deployment barriers. The model is fine-tuned for both Chinese and English languages, making it a versatile tool for various multilingual applications. ChatGLM2-6B aims to push the boundaries of natural language understanding...
    Downloads: 1 This Week
    See Project
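
A short sketch of talking to ChatGLM2-6B follows, based on the usage pattern published in the project's README; `trust_remote_code=True` loads the repository's custom `chat` helper, and the half-precision GPU setup is an assumption about your hardware.

```python
# Hedged sketch following the ChatGLM2-6B README usage pattern.
from transformers import AutoModel, AutoTokenizer

model_id = "THUDM/chatglm2-6b"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
# .half().cuda() assumes a CUDA GPU; use .float() on CPU instead.
model = AutoModel.from_pretrained(model_id, trust_remote_code=True).half().cuda()
model = model.eval()

# The repo's custom chat() helper tracks the dialogue history for you.
response, history = model.chat(tokenizer, "Hello, who are you?", history=[])
print(response)
```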
  • 12
    starcoder

    Code generation model trained on 80+ languages with FIM support

    StarCoder is a 15.5B parameter language model developed by BigCode for code generation tasks across more than 80 programming languages. It is trained on 1 trillion tokens from the permissively licensed dataset The Stack v1.2, using the Fill-in-the-Middle (FIM) objective and Multi-Query Attention to enhance performance. With an extended context window of 8192 tokens and pretraining in bfloat16, StarCoder can generate, complete, or refactor code in various languages, with English as the primary...
    Downloads: 0 This Week
    See Project
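
The entry highlights StarCoder's Fill-in-the-Middle (FIM) objective, so a hedged transformers sketch follows. The `<fim_prefix>`/`<fim_suffix>`/`<fim_middle>` sentinels match the published StarCoder tokenizer, but treat the exact strings, and the gated repo id, as assumptions to verify.

```python
# Hedged sketch: FIM prompting, where the model fills the gap between
# a prefix and a suffix instead of only completing left-to-right.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "bigcode/starcoder"  # gated; requires accepting the license
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = (
    "<fim_prefix>def average(xs):\n    "
    "<fim_suffix>\n    return total / len(xs)<fim_middle>"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```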
  • 13
    SmolLM3

    Multilingual 3B LLM optimized for reasoning, math, and long contexts

    SmolLM3 is a 3.08B parameter decoder-only language model designed by Hugging Face to deliver high performance in reasoning, math, and multilingual understanding. Trained on 11.2T tokens with a curriculum of web, code, and mathematical data, it uses advanced features like GQA and NoPE. The model supports extended context lengths up to 128k tokens via YaRN extrapolation, making it highly suitable for long-context applications. It outperforms or rivals larger models like Qwen3 and LLaMA3...
    Downloads: 0 This Week
    See Project
  • 14
    DeepSeek-V3-0324

    Advanced multilingual LLM with enhanced reasoning and code generation

    DeepSeek-V3-0324 is a powerful large language model by DeepSeek AI that significantly enhances performance over its predecessor, especially in reasoning, programming, and Chinese language tasks. It achieves major benchmark improvements, such as +5.3 on MMLU-Pro and +19.8 on AIME, and delivers more executable, aesthetically improved front-end code. Its Chinese writing and search-answering capabilities have also been refined, generating more fluent, contextually aware long-form outputs. Key...
    Downloads: 0 This Week
    See Project
  • 15
    Devstral

    Agentic 24B LLM optimized for coding tasks with 128k context support

    Devstral-Small-2505 is a 23.6B parameter language model fine-tuned by Mistral AI and All Hands AI, built specifically for agentic software engineering tasks. Based on Mistral-Small-3.1, it supports a 128k context window and excels in exploring codebases, editing multiple files, and tool usage. The model achieves state-of-the-art open-source performance on SWE-Bench Verified with a score of 46.8%, surpassing much larger models. Devstral is designed for local and production-level deployments...
    Downloads: 0 This Week
    See Project
  • 16
    Meta-Llama-3-8B

    Versatile 8B language model optimized for helpful, safe dialogue

    ...-following, reasoning, and code generation. It significantly outperforms previous Llama 2 models and many open-source alternatives on benchmarks like MMLU, GSM8K, and HumanEval. Designed with safety in mind, Llama 3 includes alignment via supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF), and is backed by tools like Llama Guard 2 and Code Shield. The model is intended for commercial and research use, especially for building assistant-like chatbots.
    Downloads: 0 This Week
    See Project
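
A hedged sketch of plain text completion with the base (non-instruct) checkpoint via the transformers pipeline API follows; the gated repo id and the dtype/device settings are assumptions.

```python
# Hedged sketch: text completion with the base Llama 3 8B checkpoint.
import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="meta-llama/Meta-Llama-3-8B",  # gated behind Meta's license
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
# The base model continues text; use the -Instruct variant for chat.
print(pipe("The three key steps of RLHF are", max_new_tokens=64)[0]["generated_text"])
```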
  • 17
    gemma-7b

    Compact, state-of-the-art LLM by Google for text generation tasks

    Gemma-7B is a lightweight, open-source, decoder-only language model developed by Google, built using the same research and technology behind the Gemini family. With 8.5 billion parameters and an 8192-token context window, it is optimized for English text generation tasks like question answering, summarization, reasoning, and creative writing. Trained on 6 trillion tokens including web documents, code, and mathematical texts, Gemma-7B provides competitive performance across a wide range of NLP...
    Downloads: 0 This Week
    See Project
  • 18
    OmniGen2

    Multimodal generation AI model for image and text generation

    OmniGen2 is a powerful, efficient open-source multimodal generation model designed for diverse AI tasks involving both images and text. It improves on its predecessor by introducing separate decoding pathways for text and image, along with unshared parameters and a decoupled image tokenizer, enhancing flexibility and performance. Built on a strong Qwen-VL-2.5 foundation, OmniGen2 excels in visual understanding, high-quality text-to-image generation, and instruction-guided image editing. It also...
    Downloads: 0 This Week
    See Project
  • 19
    OpenVLA 7B

    Vision-language-action model for robot control via images and text

    ... supports real-world robotics tasks, with robust generalization to environments seen in pretraining. Its actions include delta values for position, orientation, and gripper status, and can be un-normalized based on robot-specific statistics. OpenVLA is MIT-licensed, fully open-source, and designed collaboratively by Stanford, Berkeley, Google DeepMind, and TRI. Deployment is facilitated via Python and Hugging Face tools, with flash attention support for efficient inference.
    Downloads: 0 This Week
    See Project
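
To illustrate the un-normalization step the entry mentions, here is a minimal sketch assuming the common convention that predicted actions are normalized to [-1, 1]; the helper and the per-dimension statistics are hypothetical, not the repository's API.

```python
# Hypothetical sketch: mapping normalized actions back to a robot's
# native ranges using per-dimension (low, high) statistics.
import numpy as np

def unnormalize(action, low, high):
    """Map an action from [-1, 1] to [low, high] per dimension (assumed convention)."""
    action = np.asarray(action, dtype=np.float64)
    return 0.5 * (action + 1.0) * (np.asarray(high) - np.asarray(low)) + np.asarray(low)

# 7-DoF example: xyz position deltas, rpy orientation deltas, gripper.
norm_action = np.array([0.1, -0.2, 0.0, 0.05, 0.0, -0.1, 1.0])
low = np.array([-0.05] * 6 + [0.0])   # assumed robot-specific stats
high = np.array([0.05] * 6 + [1.0])
print(unnormalize(norm_action, low, high))
```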
  • 20
    DeepSWE-Preview

    State-of-the-art RL-trained coding agent for complex SWE tasks

    DeepSWE-Preview is a 32.8B parameter open-source coding agent trained solely with reinforcement learning (RL) to perform complex software engineering (SWE) tasks. Built on top of Qwen3-32B, it achieves 59% accuracy on the SWE-Bench-Verified benchmark—currently the highest among open-weight models. The model navigates and edits large codebases using tools like a file editor, bash execution, and search, within the R2E-Gym environment. Its training emphasizes sparse reward signals, test-time...
    Downloads: 0 This Week
    See Project
  • 21
    wav2vec2-large-xlsr-53-portuguese

    Portuguese ASR model fine-tuned on XLSR-53 for 16kHz audio input

    ... Voice test data, demonstrating high accuracy for a single-language ASR model. Inference can be done using HuggingSound or via a custom PyTorch script using Hugging Face Transformers and Librosa. Training scripts and evaluation methods are open source and available on GitHub. It is released under the Apache 2.0 license and intended for ASR tasks in Brazilian Portuguese.
    Downloads: 0 This Week
    See Project
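
A hedged sketch of the "custom PyTorch script" route the entry mentions, using Hugging Face Transformers and Librosa at the model's 16 kHz input rate; the model id is an assumption.

```python
# Hedged sketch: greedy CTC transcription of Brazilian Portuguese audio.
import librosa
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

model_id = "jonatasgrosman/wav2vec2-large-xlsr-53-portuguese"  # assumed id
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# The model expects 16 kHz mono input, so resample on load.
speech, _ = librosa.load("audio.wav", sr=16_000)
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(inputs.input_values).logits

pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids))
```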
  • 22
    FLUX.1-schnell

    12B-parameter image generator using fast rectified flow transformers

    FLUX.1-schnell is a 12 billion parameter text-to-image model developed by Black Forest Labs, designed for high-quality image generation using rectified flow transformers. It produces competitive visual results with strong prompt adherence, rivaling closed-source models in just 1 to 4 inference steps. Trained using latent adversarial diffusion distillation, the model is optimized for both quality and speed. It is released under the Apache 2.0 license, allowing commercial, scientific...
    Downloads: 0 This Week
    See Project
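
A hedged sketch using the FluxPipeline class from diffusers, mirroring the entry's 1-to-4-step claim; the offload call and generation settings are assumptions tuned for limited VRAM.

```python
# Hedged sketch: few-step text-to-image generation with FLUX.1-schnell.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-schnell", torch_dtype=torch.bfloat16
)
pipe.enable_model_cpu_offload()  # trade speed for lower VRAM use

image = pipe(
    "a lighthouse at dusk, photorealistic",
    num_inference_steps=4,   # schnell is distilled for 1-4 steps
    guidance_scale=0.0,      # schnell runs without classifier-free guidance
).images[0]
image.save("lighthouse.png")
```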
  • 23
    Mistral-7B-v0.1

    Efficient 7B parameter LLM outperforming Llama 2 13B in benchmarks

    Mistral-7B-v0.1 is a pretrained 7-billion parameter transformer language model developed by Mistral AI, designed to deliver high performance with optimized compute efficiency. It outperforms Llama 2 13B on all evaluated benchmarks despite its smaller size. The architecture integrates Grouped-Query Attention (GQA) and Sliding-Window Attention, enabling efficient inference and improved long-context performance. Mistral-7B uses a byte-fallback BPE tokenizer for better multilingual and code...
    Downloads: 0 This Week
    See Project
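
To make the Sliding-Window Attention claim concrete, here is a toy mask construction: each query position may attend to itself and the previous W-1 positions only. Purely illustrative, not Mistral's implementation.

```python
# Toy sketch of a sliding-window causal attention mask.
import torch

def sliding_window_mask(seq_len: int, window: int) -> torch.Tensor:
    """True where attention is allowed: causal and at most `window` back."""
    i = torch.arange(seq_len).unsqueeze(1)  # query positions
    j = torch.arange(seq_len).unsqueeze(0)  # key positions
    return (j <= i) & (j > i - window)

print(sliding_window_mask(6, 3).int())
# Each row has at most `window` ones, so attention cost grows with
# seq_len * window rather than seq_len**2, helping long-context inference.
```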