Showing 409 open source projects for "python compile source code"

  • 1
    Marvin

    A batteries-included library for building AI-powered software

    Meet Marvin: a batteries-included library for building AI-powered software. Marvin's job is to integrate AI directly into your codebase by making it look and feel like any other function. Marvin introduces a new concept called AI Functions. These functions differ from conventional ones in that they don't rely on source code, but instead generate their outputs on demand through AI (a hedged usage sketch follows this entry). With AI functions, you don't have to write complex code for tasks like extracting entities from web pages, scoring...
    Downloads: 2 This Week
    Last Update:
    See Project
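    A minimal sketch of the AI-function idea described above, assuming Marvin 2.x or later (the `@marvin.fn` decorator) and an OpenAI API key in the environment; the function name and docstring are illustrative only:

```python
# Hedged sketch: assumes Marvin 2.x+ and OPENAI_API_KEY set in the environment.
import marvin


@marvin.fn
def extract_cities(text: str) -> list[str]:
    """Return the names of all cities mentioned in `text`."""


# The function body is never executed; Marvin generates the result from the
# signature and docstring via an LLM call at runtime.
print(extract_cities("I flew from Paris to Tokyo via Dubai."))
```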
  • 2
    Diffusers

    State-of-the-art diffusion models for image and audio generation

    ... lines of code. Interchangeable noise schedulers let you trade diffusion speed against output quality, and pretrained models can be used as building blocks and combined with schedulers to create your own end-to-end diffusion systems (a hedged usage sketch follows this entry). We recommend installing Diffusers in a virtual environment from PyPI or Conda. For more details about installing PyTorch and Flax, please refer to their official documentation.
    Downloads: 2 This Week
    Last Update:
    See Project
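    A hedged sketch of the few-lines-of-code usage and scheduler swapping mentioned above, assuming `diffusers` and `torch` are installed and a CUDA GPU is available; the checkpoint id is only an example:

```python
# Hedged sketch: model id and scheduler choice are illustrative, not prescriptive.
import torch
from diffusers import DiffusionPipeline, EulerDiscreteScheduler

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
)
# Interchangeable noise schedulers trade generation speed against quality.
pipe.scheduler = EulerDiscreteScheduler.from_config(pipe.scheduler.config)
pipe = pipe.to("cuda")

image = pipe("an astronaut riding a horse on the moon").images[0]
image.save("astronaut.png")
```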
  • 3
    OpenLIT

    OpenLIT is an open-source LLM Observability tool

    OpenLIT is an OpenTelemetry-native tool designed to help developers gain insights into the performance of their LLM applications in production. It automatically collects LLM input and output metadata and monitors GPU performance for self-hosted LLMs. OpenLIT makes integrating observability into GenAI projects effortless with just a single line of code. Whether you're working with popular LLM providers such as OpenAI and HuggingFace, or leveraging vector databases like ChromaDB, OpenLIT ensures...
    Downloads: 2 This Week
    Last Update:
    See Project
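    A hedged sketch of the single-line instrumentation described above, assuming the `openlit` Python SDK and a local OpenTelemetry collector; the endpoint URL and model name are assumptions:

```python
# Hedged sketch: openlit.init() auto-instruments supported LLM clients; the
# OTLP endpoint below is an assumed local OpenTelemetry collector.
import openlit
from openai import OpenAI

openlit.init(otlp_endpoint="http://127.0.0.1:4318")  # the single line of instrumentation

client = OpenAI()
resp = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello from an instrumented app!"}],
)
print(resp.choices[0].message.content)
```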
  • 4
    Xorbits Inference

    Replace OpenAI GPT with another LLM in your app

    Replace OpenAI GPT with another LLM in your app by changing a single line of code (a hedged sketch of that swap follows this entry). Xinference gives you the freedom to use any LLM you need. With Xinference, you're empowered to run inference with any open-source language models, speech recognition models, and multimodal models, whether in the cloud, on-premises, or even on your laptop. Xorbits Inference (Xinference) is a powerful and versatile library designed to serve language, speech recognition, and multimodal models. With Xorbits Inference...
    Downloads: 2 This Week
    Last Update:
    See Project
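    A hedged sketch of the one-line swap described above: point an OpenAI-compatible client at a local Xinference endpoint instead of api.openai.com. The port and model name are assumptions, and the model must already be launched on the Xinference server:

```python
# Hedged sketch: base_url, port, and model name are assumptions; deploy the
# model in Xinference first, then reuse your existing OpenAI-client code.
from openai import OpenAI

client = OpenAI(
    base_url="http://127.0.0.1:9997/v1",  # the single changed line vs. stock OpenAI usage
    api_key="not-used-locally",
)
resp = client.chat.completions.create(
    model="qwen2.5-instruct",             # whichever model you launched in Xinference
    messages=[{"role": "user", "content": "Summarize Xinference in one sentence."}],
)
print(resp.choices[0].message.content)
```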
  • 5
    Llama 2 LLM

    Inference Llama 2 in one file of pure C

    ... the C runtime at interactive speeds on a laptop. You can also export and run Meta’s Llama-2 models (currently practical up to 7B due to fp32 inference and memory limits), plus try chat/Code Llama variants with proper tokenizers. A quantized int8 path (runq.c) reduces checkpoint size (e.g., 26GB→6.7GB for 7B) and speeds up inference (e.g., ~3× vs fp32 in author’s notes), with modest quality tradeoffs.
    Downloads: 2 This Week
    Last Update:
    See Project
  • 6
    AWS MCP Servers

    Helping you get the most out of AWS, wherever you use MCP

    AWS MCP Servers are a collection of remotely hosted, fully-managed Model Context Protocol (MCP) servers by AWS, providing AI applications with real-time access to AWS documentation, API references, best practices, and infrastructure-management capabilities via natural-language workflows. An MCP Server is a lightweight program that exposes specific capabilities through the standardized Model Context Protocol. Host applications (such as chatbots, IDEs, and other AI tools) have MCP clients that...
    Downloads: 2 This Week
    Last Update:
    See Project
  • 7
    IDA Pro MCP

    MCP Server for IDA Pro

    The IDA Pro MCP Server is a Model Context Protocol (MCP) server designed to integrate with IDA Pro, a popular disassembler and debugger. It enables AI assistants to interact with IDA Pro, facilitating tasks such as code analysis and reverse engineering.
    Downloads: 1 This Week
    Last Update:
    See Project
  • 8
    PyCaret

    An open-source, low-code machine learning library in Python

    PyCaret is an open-source, low-code machine learning library in Python that automates machine learning workflows. It is an end-to-end machine learning and model management tool that speeds up the experiment cycle exponentially and makes you more productive. Compared with other open-source machine learning libraries, PyCaret is an alternative low-code library that can replace hundreds of lines of code with only a few lines (a hedged usage sketch follows this entry). This makes experiments exponentially fast and efficient...
    Downloads: 1 This Week
    Last Update:
    See Project
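    A hedged sketch of the low-code workflow described above, assuming PyCaret 3.x and one of its bundled sample datasets:

```python
# Hedged sketch: dataset and target column come from PyCaret's bundled examples;
# compare_models() trains and cross-validates many estimators automatically.
from pycaret.datasets import get_data
from pycaret.classification import setup, compare_models, predict_model

data = get_data("juice")                       # sample dataset shipped with PyCaret
setup(data, target="Purchase", session_id=123)

best = compare_models()                        # rank candidate models by CV score
print(predict_model(best).head())              # predictions on the hold-out split
```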
  • 9
    FastMCP

    The fast, Pythonic way to build Model Context Protocol servers

    FastMCP is a Pythonic framework designed to simplify the creation of MCP servers. It allows developers to build servers that provide context and tools to Large Language Models (LLMs) using clean and intuitive Python code, streamlining the integration between AI models and external resources (a hedged minimal server sketch follows this entry).
    Downloads: 1 This Week
    Last Update:
    See Project
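    The hedged minimal server sketch referenced above, assuming the standalone `fastmcp` package; the server name and tool are illustrative:

```python
# Hedged sketch: a tiny MCP server exposing one tool over the default
# stdio transport; details may differ between fastmcp versions.
from fastmcp import FastMCP

mcp = FastMCP("Demo Server")


@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two numbers."""
    return a + b


if __name__ == "__main__":
    mcp.run()
```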
  • 10
    RA.Aid

    Develop software autonomously

    RA.Aid is an AI-powered assistant designed to enhance the efficiency of software development workflows. It integrates seamlessly with various development environments, providing intelligent code suggestions, automated documentation generation, and real-time error detection. By leveraging advanced machine learning models, RA.Aid aims to reduce development time and improve code quality.
    Downloads: 1 This Week
    Last Update:
    See Project
  • 11
    ChatDev

    Create Customized Software using Natural Language Idea

    ChatDev is an AI-powered development tool designed to simulate the software development lifecycle using multi-agent collaboration. It allows multiple AI agents to take on roles such as product managers, developers, and testers to collaboratively generate, refine, and evaluate software code. This project explores how AI can be leveraged to automate and optimize development workflows.
    Downloads: 1 This Week
    Last Update:
    See Project
  • 12
    gpt-engineer

    Full stack AI software engineer

    gpt-engineer is an open-source platform designed to help developers automate the software development process using natural language. The platform allows users to specify software requirements in plain language, and the AI generates and executes the corresponding code. It can also handle improvements and iterative development, giving users more control over the software they’re building. Built with a terminal-based interface, gpt-engineer is customizable, enabling developers to experiment...
    Downloads: 1 This Week
    Last Update:
    See Project
  • 13
    Klavis AI

    Open Source MCP integration for AI applications

    Klavis AI is an open-source platform that simplifies the integration of Model Context Protocols (MCPs) into AI applications. It provides hosted, secure MCP servers with built-in OAuth support, eliminating the need for complex authentication management and client-side code. Klavis AI enables developers to connect AI agents to various tools and services efficiently, facilitating scalable and secure AI deployments.
    Downloads: 1 This Week
    Last Update:
    See Project
  • 14
    ChatGPT Academic

    ChatGPT extension for scientific research work

    A ChatGPT extension for scientific research work, with a specially optimized experience for polishing academic papers. It supports custom shortcut buttons, custom function plug-ins, Markdown table display, dual rendering of TeX formulas, and complete code display. It also adds local Python/C++/Go project-tree analysis, project source-code self-translation, batch summarization of PDF and Word documents, and full-text translation of PDF papers. All buttons are dynamically...
    Downloads: 1 This Week
    Last Update:
    See Project
  • 15
    marimo

    A reactive notebook for Python

    marimo is an open-source reactive notebook for Python: reproducible, git-friendly, executable as a script, and shareable as an app. marimo notebooks are extremely interactive, designed for collaboration, deployable as scripts or apps, and fit for the modern Pythonista (a hedged sketch of the notebook file format follows this entry). Run one cell and marimo reacts by automatically running the affected cells, eliminating the error-prone chore of managing notebook state. marimo's reactive UI elements, like data frame GUIs and plots...
    Downloads: 1 This Week
    Last Update:
    See Project
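    The hedged sketch referenced above of what a marimo notebook file looks like: notebooks are stored as plain Python, so they are git-friendly and runnable as scripts. Cell contents are illustrative and the generated-file layout may differ slightly between marimo versions:

```python
# Hedged sketch of a marimo notebook file; changing the slider re-runs only
# the cells that depend on it.
import marimo

app = marimo.App()


@app.cell
def _():
    import marimo as mo
    return (mo,)


@app.cell
def _(mo):
    n = mo.ui.slider(1, 20, value=5, label="n")
    n
    return (n,)


@app.cell
def _(n):
    squares = [i * i for i in range(n.value)]
    squares
    return (squares,)


if __name__ == "__main__":
    app.run()
```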
  • 16
    PandasAI

    PandasAI is a Python library that integrates generative AI

    PandasAI is a Python library that adds Generative AI capabilities to pandas, the popular data analysis and manipulation tool. It is designed to be used in conjunction with pandas, not as a replacement for it. PandasAI makes pandas (and other widely used data-analysis libraries) conversational, allowing you to ask questions about your data in natural language (a hedged usage sketch follows this entry). For example, you can ask PandasAI to find all the rows in a DataFrame where the value of a column is greater than 5, and it will return...
    Downloads: 1 This Week
    Last Update:
    See Project
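    The hedged usage sketch referenced above; the PandasAI API has changed across major versions, so the `SmartDataframe` class and LLM wiring shown here are assumptions based on the 2.x series:

```python
# Hedged sketch: assumes pandasai 2.x and an OpenAI API key; class and module
# names differ in other versions of the library.
import pandas as pd
from pandasai import SmartDataframe
from pandasai.llm import OpenAI

df = pd.DataFrame({"country": ["US", "DE", "JP"], "sales": [3, 7, 12]})

sdf = SmartDataframe(df, config={"llm": OpenAI(api_token="YOUR_OPENAI_KEY")})
print(sdf.chat("Which rows have sales greater than 5?"))
```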
  • 17
    LLM Action

    Technical principles related to large models

    LLM-Action is a knowledge and tutorial repository that shares principles, techniques, and real-world experience related to large language models (LLMs), focusing on LLM engineering, deployment, optimization, inference, compression, and tooling. It organizes content into domains such as training, inference, compression, alignment, evaluation, pipelines, and applications.
    Downloads: 1 This Week
    Last Update:
    See Project
  • 18
    BioEmu

    Inference code for scalable emulation of protein equilibrium ensembles

    Biomolecular Emulator (BioEmu for short) is a model that samples from the approximated equilibrium distribution of structures for a protein monomer, given its amino acid sequence. By default, unphysical structures (steric clashes or chain discontinuities) will be filtered out, so you will typically get fewer samples in the output than requested. The difference can be very large if your protein has large disordered regions, which are very likely to produce clashes. BioEmu outputs structures...
    Downloads: 1 This Week
    Last Update:
    See Project
  • 19
    LLMStack

    No-code multi-agent framework to build LLM Agents, workflows

    LLMStack is a no-code platform for building generative AI agents, workflows and chatbots, connecting them to your data and business processes. Build tailor-made generative AI agents, applications and chatbots that cater to your unique needs by chaining multiple LLMs. Seamlessly integrate your own data, internal tools and GPT-powered models without any coding experience using LLMStack's no-code builder. Trigger your AI chains from Slack or Discord. Deploy to the cloud or on-premise.
    Downloads: 1 This Week
    Last Update:
    See Project
  • 20
    Omnara

    Talk to Your AI Agents from Anywhere

    Omnara is an open-source agent control platform that empowers developers to turn autonomous AI tools (e.g., Claude Code, Cursor, GitHub Copilot) into collaborative teammates by offering real-time dashboards, push notifications, and remote guidance across terminals, web, and mobile. Omnara transforms your AI agents (Claude Code, Codex CLI, n8n, and more) from silent workers into communicative teammates. Get real-time visibility into what your agents are doing, and respond to their questions...
    Downloads: 1 This Week
    Last Update:
    See Project
  • 21
    imodelsX

    Interpretable prompting and models for NLP

    Interpretable prompting and models for NLP (using large language models). Capabilities include generating a prompt that explains patterns in data, explaining the difference between two distributions, finding a natural-language prompt using input gradients, fitting a better linear model by using an LLM to extract embeddings, fitting better decision trees by using an LLM to expand features, and fine-tuning a single linear layer on top of LLM embeddings. Use these just like a scikit-learn model. During training, they fit better...
    Downloads: 1 This Week
    Last Update:
    See Project
  • 22
    AWS Neuron

    Powering Amazon custom machine learning chips

    ... continue to use the same ML frameworks you use today and migrate your software onto Inf1 instances with minimal code changes and without lock-in to vendor-specific solutions (a hedged compile sketch follows this entry). Neuron is pre-integrated into popular machine learning frameworks like TensorFlow, MXNet, and PyTorch to provide a seamless training-to-inference workflow. It includes a compiler, a runtime driver, and debugging and profiling utilities with a TensorBoard plugin for visualization.
    Downloads: 0 This Week
    Last Update:
    See Project
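    The hedged compile sketch referenced above for Inf1, assuming the `torch-neuron` package; package and API names differ for the newer `torch-neuronx` (Trn1/Inf2) releases:

```python
# Hedged sketch: trace/compile a TorchVision model for an Inf1 NeuronCore;
# exact package names vary across Neuron SDK generations.
import torch
import torch_neuron  # registers the torch.neuron namespace
import torchvision.models as models

model = models.resnet50(pretrained=True).eval()
example = torch.rand(1, 3, 224, 224)

# Compile the model for Inferentia with minimal changes to the PyTorch code.
model_neuron = torch.neuron.trace(model, example_inputs=[example])
model_neuron.save("resnet50_neuron.pt")
```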
  • 23
    Avalanche

    End-to-End Library for Continual Learning based on PyTorch

    Avalanche is an end-to-end Continual Learning library based on PyTorch, born within ContinualAI with the goal of providing a shared, collaborative, open-source (MIT-licensed) codebase for fast prototyping, training, and reproducible evaluation of continual learning algorithms (a hedged training-loop sketch follows this entry). Avalanche can help Continual Learning researchers in several ways. Its benchmarks module maintains a uniform API for data handling, mostly generating a stream of data from one or more datasets. It contains all the major CL...
    Downloads: 1 This Week
    Last Update:
    See Project
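    The hedged training-loop sketch referenced above, assuming a recent Avalanche release; import paths have moved between versions (e.g. `avalanche.training.supervised` vs. the older `avalanche.training.strategies`):

```python
# Hedged sketch: Split-MNIST benchmark with the Naive strategy; the
# hyperparameters are illustrative only.
from torch.nn import CrossEntropyLoss
from torch.optim import SGD

from avalanche.benchmarks.classic import SplitMNIST
from avalanche.models import SimpleMLP
from avalanche.training.supervised import Naive

benchmark = SplitMNIST(n_experiences=5)
model = SimpleMLP(num_classes=benchmark.n_classes)

strategy = Naive(
    model,
    SGD(model.parameters(), lr=0.001, momentum=0.9),
    CrossEntropyLoss(),
    train_mb_size=128,
    train_epochs=1,
    eval_mb_size=128,
)

# Train on the stream of experiences, evaluating on the full test stream.
for experience in benchmark.train_stream:
    strategy.train(experience)
    strategy.eval(benchmark.test_stream)
```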
  • 24
    Qwen3 Embedding

    Designed for text embedding and ranking tasks

    ... instructions along with queries) and flexible embedding/vector dimension definitions. It is meant for tasks such as text retrieval, classification, clustering, bitext mining, and code retrieval.
    Downloads: 1 This Week
    Last Update:
    See Project
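    A hedged sketch of computing embeddings for retrieval with the model described above, assuming the `sentence-transformers` library and the `Qwen/Qwen3-Embedding-0.6B` checkpoint name; consult the model card for the recommended query/document prompt formatting:

```python
# Hedged sketch: model name and prompt handling are assumptions; normalized
# embeddings make cosine similarity a simple dot product.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("Qwen/Qwen3-Embedding-0.6B")

queries = ["How do I compile Python source code?"]
documents = [
    "Use the built-in compile() function or the py_compile module.",
    "Bananas are a good source of potassium.",
]

q_emb = model.encode(queries, normalize_embeddings=True)
d_emb = model.encode(documents, normalize_embeddings=True)

print(q_emb @ d_emb.T)  # similarity of each query to each document
```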
  • 25
    H2O LLM Studio

    Framework and no-code GUI for fine-tuning LLMs

    Welcome to H2O LLM Studio, a framework and no-code GUI designed for fine-tuning state-of-the-art large language models (LLMs). You can also use H2O LLM Studio with the command line interface (CLI) and specify the configuration file that contains all the experiment parameters. To finetune using H2O LLM Studio with CLI, activate the pipenv environment by running make shell. With H2O LLM Studio, training your large language model is easy and intuitive. First, upload your dataset and then start...
    Downloads: 1 This Week
    Last Update:
    See Project