Open Source Python Artificial Intelligence Software - Page 12

Python Artificial Intelligence Software

Browse free open source Python Artificial Intelligence Software and projects below. Use the toggles on the left to filter open source Python Artificial Intelligence Software by OS, license, language, programming language, and project status.

  • 1
    Screen Translate

    An OCR translator tool made by utilizing Tesseract & opencv-python

    STL is an easy-to-use, lightweight OCR translator tool that can be used to translate your screen. Made with Python by utilizing Tesseract and opencv-python. For a full view of the project, check the GitHub repository: https://github.com/Dadangdut33/Screen-Translate. Requirements: Tesseract (https://github.com/UB-Mannheim/tesseract/wiki), needed for the OCR; install it with all the language packs. LibreTranslate (optional, for offline translation support). An internet connection is required for translation if not using LibreTranslate. Setup tutorial: https://github.com/Dadangdut33/Screen-Translate#installation-and-setup
    Downloads: 40 This Week
    Last Update:
    See Project
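    A minimal sketch of the capture-and-OCR step such a tool performs, assuming Pillow, opencv-python, and pytesseract are installed and Tesseract is on the PATH; the screen region and language code are illustrative, and this is not the project's own code.

```python
# Illustrative capture-and-OCR sketch (not the project's own code).
# Requires a local Tesseract install, plus: pip install pillow opencv-python pytesseract
import cv2
import numpy as np
import pytesseract
from PIL import ImageGrab

# Illustrative screen region to translate: (left, top, right, bottom) in pixels.
region = (100, 100, 800, 400)
screenshot = ImageGrab.grab(bbox=region)

# Light preprocessing with OpenCV before OCR: grayscale + Otsu thresholding.
gray = cv2.cvtColor(np.array(screenshot), cv2.COLOR_RGB2GRAY)
_, binarized = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# OCR with an installed Tesseract language pack (e.g. "eng", "jpn").
text = pytesseract.image_to_string(binarized, lang="eng")
print(text)
# The recognized text would then be passed to a translation backend
# (LibreTranslate locally, or an online service), as described above.
```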
  • 2
    A.I.G

    Full-stack AI Red Teaming platform

    AI-Infra-Guard is a powerful open-source security platform from Tencent’s Zhuque Lab designed to assess the safety and resilience of AI infrastructures, codebases, and components through automated scanning and evaluation tools. It brings together AI infrastructure vulnerability scanning, MCP server risk analysis, and jailbreak evaluation into a unified workflow so that enterprises and individuals can identify critical security issues without relying on external services. Users can deploy it via Docker or scripts to get a modern web UI that guides them through tasks like scanning third-party frameworks for known CVEs and experimenting with prompt security against attack vectors. The tool provides both a visual interface and a comprehensive API, making integration with internal security systems or CI/CD pipelines practical for ongoing risk management.
    Downloads: 5 This Week
    Last Update:
    See Project
  • 3
    AI Marketing Skills

    Open-source AI marketing skills for Claude Code

    AI Marketing Skills is a comprehensive open-source framework designed to transform AI agents into fully operational marketing and sales systems by equipping them with structured, reusable “skills” that automate real business workflows. Instead of simple prompts, the project provides complete operational modules that include scripts, scoring systems, and decision-making logic, allowing AI tools like Claude Code to execute complex marketing tasks end-to-end. The system is organized into multiple domains such as growth experimentation, sales pipeline generation, content production, outbound marketing, SEO optimization, and financial analysis, effectively covering the entire revenue lifecycle of a business. Each skill functions as an executable capability that can be invoked on demand, enabling users to perform tasks like running A/B tests, generating high-quality content, or analyzing conversion funnels with minimal manual effort.
    Downloads: 5 This Week
    Last Update:
    See Project
  • 4
    AlphaFold 3

    AlphaFold 3 inference pipeline

    AlphaFold 3, developed by Google DeepMind, is an advanced deep learning system for predicting biomolecular structures and interactions with exceptional accuracy. This repository provides the complete inference pipeline for running AlphaFold 3, though access to the model parameters is restricted and must be obtained directly from Google under specific terms of use. The system is designed for scientific research applications in structural biology, biochemistry, and bioinformatics, enabling accurate modeling of proteins, ligands, and covalent modifications. Users can perform local predictions via Docker containers, integrating AlphaFold 3’s inference process with provided JSON input configurations. The software includes flexible options for running both data preprocessing and GPU-accelerated inference, allowing users to adapt to available computational resources.
    Downloads: 5 This Week
    Last Update:
    See Project
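    A rough sketch of preparing a single-chain input for local inference: the JSON field names mirror the input format as documented in the repository but should be verified there, the sequence and paths are placeholders, and the Docker invocation in the comments is approximate.

```python
# Illustrative AlphaFold 3 input preparation (verify field names against the repo docs).
import json

fold_input = {
    "name": "example_prediction",
    "modelSeeds": [1],
    "sequences": [
        {"protein": {"id": "A", "sequence": "MVLSPADKTNVKAAW"}}  # placeholder sequence
    ],
    "dialect": "alphafold3",
    "version": 1,
}

with open("fold_input.json", "w") as f:
    json.dump(fold_input, f, indent=2)

# Inference then runs inside the provided Docker image, roughly:
#   docker run --gpus all -v $PWD:/root/af_input ... alphafold3 \
#       python run_alphafold.py --json_path=/root/af_input/fold_input.json ...
# Exact flags and mounted paths are documented in the repository; the model
# parameters must be obtained separately from Google DeepMind.
```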
  • 5
    AppAgent

    Multimodal Agents as Smartphone Users, an LLM-based multimodal agent

    AppAgent is an open-source multimodal agent framework designed to enable large language models to operate smartphone applications through natural interactions with graphical user interfaces. The system allows an AI agent to interpret visual information from the screen and translate natural language instructions into actions such as tapping, swiping, and navigating between application screens. Instead of requiring backend access to application APIs, the framework interacts with apps the same way a human user would, making it compatible with a wide variety of mobile applications. AppAgent combines vision capabilities with language reasoning to understand interface elements and determine which actions are required to accomplish a task. The system also includes mechanisms for exploration and learning, allowing the agent to analyze user interface layouts and build structured knowledge about how different apps function.
    Downloads: 5 This Week
    Last Update:
    See Project
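    Because the agent acts through taps and swipes rather than app APIs, the low-level actions it issues look roughly like the following adb-based sketch; the helper names are invented for illustration, this is not AppAgent's actual interface, and it assumes adb plus a connected device or emulator.

```python
# Illustrative only -- not AppAgent's interface. Sends tap/swipe input events
# of the kind described above to an Android device via adb.
import subprocess

def adb_input(*args: str) -> None:
    """Run an `adb shell input ...` command on the connected device."""
    subprocess.run(["adb", "shell", "input", *args], check=True)

def tap(x: int, y: int) -> None:
    adb_input("tap", str(x), str(y))

def swipe(x1: int, y1: int, x2: int, y2: int, duration_ms: int = 300) -> None:
    adb_input("swipe", str(x1), str(y1), str(x2), str(y2), str(duration_ms))

# An LLM-driven agent would pick these coordinates after inspecting a
# screenshot of the current UI, for example:
tap(540, 1600)              # tap a button near the bottom of the screen
swipe(540, 1500, 540, 500)  # scroll down
```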
  • 6
    Astron Agent

    Enterprise platform for building and orchestrating AI agent workflows

    Astron Agent is an enterprise-grade platform designed for building and managing intelligent AI agent workflows in production environments. It provides a development environment that combines workflow orchestration, model management, and integration with various AI tools and services. Astron Agent enables organizations to design complex agent-driven processes that coordinate models, automation tools, and enterprise systems. It also integrates robotic process automation capabilities so agents can execute tasks across digital systems instead of only generating responses. Astron Agent supports scalable and high-availability deployments, allowing teams to run reliable AI agent infrastructure in distributed environments. It includes collaboration features that allow teams to develop, manage, and operate AI applications together. With its extensible architecture and enterprise-focused design, it aims to help organizations build production-ready intelligent agent solutions.
    Downloads: 5 This Week
    Last Update:
    See Project
  • 7
    AtomAI

    Deep and Machine Learning for Microscopy

    AtomAI is a PyTorch-based package for deep and machine learning analysis of microscopy data that doesn't require any advanced knowledge of Python or machine learning. The intended audience is domain scientists with a basic understanding of how to use NumPy and Matplotlib. It was developed by Maxim Ziatdinov at Oak Ridge National Lab. The purpose of AtomAI is to provide an environment that bridges instrument-specific libraries and general physical analysis by enabling the seamless deployment of machine learning algorithms, including deep convolutional neural networks, invariant variational autoencoders, and decomposition/unmixing techniques for image and hyperspectral data analysis. Ultimately, it aims to combine the power and flexibility of the PyTorch deep learning framework with the simplicity and intuitive nature of packages such as scikit-learn, with a focus on scientific data.
    Downloads: 5 This Week
    Last Update:
    See Project
  • 8
    BioEmu

    Inference code for scalable emulation of protein equilibrium ensembles

    Biomolecular Emulator (BioEmu for short) is a model that samples from the approximated equilibrium distribution of structures for a protein monomer, given its amino acid sequence. By default, unphysical structures (steric clashes or chain discontinuities) will be filtered out, so you will typically get fewer samples in the output than requested. The difference can be very large if your protein has large disordered regions, which are very likely to produce clashes. BioEmu outputs structures in backbone frame representation. To reconstruct the side-chains, several tools are available. As an example, we interface with HPacker to conduct side-chain reconstruction and also provide basic tooling for running a short molecular dynamics (MD) equilibration.
    Downloads: 5 This Week
    Last Update:
    See Project
  • 9
    CS-Ebook

    Curated list of classic, high-quality computer science books

    CS-Ebook is a curated repository that compiles high-quality and classic computer science books across a wide range of software-related fields. It focuses on depth over volume, selecting only well-regarded titles that support structured learning and long-term skill development. It spans core areas such as computer fundamentals, programming languages, software engineering, mathematics, data science, and artificial intelligence, making it suitable for learners at different stages. Rather than hosting files, the project serves as a discovery guide, helping users identify essential reading materials and build a strong technical foundation. CS-Ebook is actively maintained and updated to reflect relevant and modern resources while preserving foundational texts. Its organized structure allows users to navigate topics efficiently and follow a progressive learning path. Contributions are encouraged, ensuring the list evolves with community input and continues to highlight valuable resources.
    Downloads: 5 This Week
    Last Update:
    See Project
  • 10
    Claude Scientific Skills

    A set of ready to use Agent Skills for research, science, engineering

    Claude Scientific Skills is a large open source collection of ready-to-use scientific capabilities that extend AI coding agents into full research assistants. The project provides more than 170 curated skills covering domains such as genomics, drug discovery, medical imaging, physics, and advanced data analysis. Each skill bundles documentation, examples, and tool integrations so agents can reliably execute complex multi-step scientific workflows. The framework follows the open Agent Skills standard and works with multiple AI development environments including Claude Code, Cursor, and Codex. Its primary goal is to reduce the friction of scientific computing by giving AI agents structured access to specialized libraries, databases, and research pipelines. Overall, the repository acts as a modular capability layer that transforms general AI agents into domain-aware computational scientists.
    Downloads: 5 This Week
    Last Update:
    See Project
  • 11
    CogVLM

    A state-of-the-art open visual language model

    CogVLM is an open-source visual–language model suite—and its GUI-oriented sibling CogAgent—aimed at image understanding, grounding, and multi-turn dialogue, with optional agent actions on real UI screenshots. The flagship CogVLM-17B combines ~10B visual parameters with ~7B language parameters and supports 490×490 inputs; CogAgent-18B extends this to 1120×1120 and adds plan/next-action outputs plus grounded operation coordinates for GUI tasks. The repo provides multiple ways to run models (CLI, web demo, and OpenAI-Vision–style APIs), along with quantization options that reduce VRAM needs (e.g., 4-bit). It includes checkpoints for chat, base, and grounding variants, plus recipes for model-parallel inference and LoRA fine-tuning. The documentation covers task prompts for general dialogue, visual grounding (box→caption, caption→box, caption+boxes), and GUI agent workflows that produce structured actions with bounding boxes.
    Downloads: 5 This Week
    Last Update:
    See Project
  • 12
    ControlNet

    Let us control diffusion models

    ControlNet is a neural network architecture designed to add conditional control to text-to-image diffusion models. Rather than training from scratch, ControlNet “locks” the weights of a pre-trained diffusion model and introduces a parallel trainable branch that learns additional conditions—like edges, depth maps, segmentation, human pose, scribbles, or other guidance signals. This allows the system to control where and how the model should focus during generation, enabling users to steer layout, structure, and content more precisely than prompt text alone. The project includes many trained model variants that accept different types of conditioning (e.g., canny edge input, normal maps, skeletal pose) and produce improved fidelity in stable diffusion outputs. It is widely adopted in the community as a go-to tool for semi-automatic image generation workflows, especially when users want structure plus creative freedom.
    Downloads: 5 This Week
    Last Update:
    See Project
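    As an illustration of the conditioning idea (not the repository's own training or demo scripts), here is a minimal sketch using the community diffusers integration with a Canny edge map as the control signal; model IDs, thresholds, and file names are illustrative.

```python
# Canny-edge conditioning via the diffusers integration (illustrative settings).
# Requires: pip install diffusers transformers accelerate opencv-python
import cv2
import numpy as np
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

# Turn an input image into the edge map that will guide generation.
image = np.array(Image.open("input.png").convert("RGB"))
edges = cv2.Canny(image, 100, 200)
edge_map = Image.fromarray(np.stack([edges] * 3, axis=-1))

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

# The locked base model handles the text prompt; the trainable branch follows the edges.
result = pipe("a futuristic city at dusk", image=edge_map, num_inference_steps=30)
result.images[0].save("controlled.png")
```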
  • 13
    DFlash

    Block Diffusion for Ultra-Fast Speculative Decoding

    DFlash is an open-source framework for ultra-fast speculative decoding using a lightweight block diffusion model to draft text in parallel with a target large language model, dramatically improving inference speed without sacrificing generation quality. It acts as a “drafter” that proposes likely continuations which the main model then verifies, enabling significant throughput gains compared to traditional autoregressive decoding methods that generate token by token. This approach has been shown to deliver lossless acceleration on models like Qwen3-8B by combining block diffusion techniques with efficient batching, making it ideal for applications where latency and cost matter. The project includes support for multiple draft models, example integration code, and scripts to benchmark performance, and it is structured to work with popular model serving stacks like SGLang and the Hugging Face Transformers ecosystem.
    Downloads: 5 This Week
    Last Update:
    See Project
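    A toy illustration of the draft-then-verify loop that speculative decoding relies on, using greedy decoding so acceptance is an exact token match. This is not DFlash's code or API, and a real system (DFlash included) verifies the whole draft block in a single batched forward pass of the target model rather than one token at a time.

```python
# Toy draft-then-verify step for speculative decoding (not DFlash's implementation).
# Both "models" are stand-in callables mapping a token prefix to the next token.
from typing import Callable, List

def speculative_step(
    prefix: List[int],
    draft_next: Callable[[List[int]], int],   # cheap drafter (a block-diffusion model in DFlash)
    target_next: Callable[[List[int]], int],  # expensive target LLM
    block_size: int = 4,
) -> List[int]:
    # 1) The drafter proposes a block of candidate tokens in one go.
    draft, ctx = [], list(prefix)
    for _ in range(block_size):
        tok = draft_next(ctx)
        draft.append(tok)
        ctx.append(tok)

    # 2) The target verifies: keep the longest prefix it agrees with, then append
    #    its own next token. The output equals what the target alone would produce,
    #    which is why the speedup is described as lossless.
    accepted, ctx = [], list(prefix)
    for tok in draft:
        expected = target_next(ctx)
        if expected != tok:
            accepted.append(expected)
            break
        accepted.append(tok)
        ctx.append(tok)
    else:
        accepted.append(target_next(ctx))
    return prefix + accepted
```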
  • 14
    DeepSeek VL

    Towards Real-World Vision-Language Understanding

    DeepSeek-VL is DeepSeek’s initial vision-language model that anchors their multimodal stack. It enables understanding and generation across visual and textual modalities—meaning it can process an image + a prompt, answer questions about images, caption, classify, or reason about visuals in context. The model is likely used internally as the visual encoder backbone for agent use cases, to ground perception in downstream tasks (e.g. answering questions about a screenshot). The repository includes model weights (or pointers to them), evaluation metrics on standard vision + language benchmarks, and configuration or architecture files. It also supports inference tools for forwarding image + prompt through the model to produce text output. DeepSeek-VL is a predecessor to their newer VL2 model, and presumably shares core design philosophy but with earlier scaling, fewer enhancements, or capability tradeoffs.
    Downloads: 5 This Week
    Last Update:
    See Project
  • 15
    DeepSeek-OCR 2

    Visual Causal Flow

    DeepSeek-OCR-2 is the second-generation optical character recognition system developed to improve document understanding by introducing a “visual causal flow” mechanism, enabling the encoder to reorder visual tokens in a way that better reflects semantic structure rather than strict raster scan order. It is designed to handle complex layouts and noisy documents by giving the model causal reasoning capabilities that mimic human visual scanning behavior, enhancing OCR performance on documents with rich spatial structure. The repository provides model code and inference scripts that let researchers and developers run and benchmark the system on both images and PDFs, with support for batch evaluation and optimized pipelines leveraging vLLM and transformers.
    Downloads: 5 This Week
    Last Update:
    See Project
  • 16
    Desloppify

    Agent harness to make your slop code well-engineered and beautiful

    Desloppify is a utility-focused project aimed at improving the quality, structure, and clarity of generated or written text by removing redundancy, noise, and unnecessary verbosity. It is designed to “clean up” outputs, particularly those produced by AI systems, making them more concise, readable, and professional. The system likely applies heuristics or transformation rules to identify repetitive patterns, filler content, and stylistic inconsistencies. This makes it especially useful in workflows where AI-generated text needs to be refined before publication or use in production. It may also support integration into pipelines, allowing automatic post-processing of outputs. The project reflects a growing need to manage and optimize AI-generated content rather than simply produce it. Overall, desloppify acts as a refinement layer that enhances clarity and usability of textual outputs.
    Downloads: 5 This Week
    Last Update:
    See Project
  • 17
    Freqtrade

    Free, open source crypto trading bot

    Freqtrade is a free and open-source crypto trading bot written in Python. It is designed to support all major exchanges and be controlled via Telegram or a WebUI. It contains backtesting, plotting, and money management tools as well as strategy optimization by machine learning. Always start by running the bot in dry-run mode and do not commit money before you understand how it works and what profit/loss to expect. We strongly recommend basic coding skills and Python knowledge; do not hesitate to read the source code and understand the mechanisms, algorithms, and techniques implemented in the bot. Write your strategy in Python, using pandas; example strategies to inspire you are available in the strategy repository. Download historical data for the exchange and the markets you may want to trade, then find the best parameters for your strategy using hyperparameter optimization, which employs machine learning methods.
    Downloads: 5 This Week
    Last Update:
    See Project
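    Since strategies are plain Python classes built on pandas, a minimal one looks roughly like this; the indicator, thresholds, and defaults are illustrative, and method names vary between Freqtrade versions, so treat it as a sketch rather than a drop-in strategy.

```python
# Minimal strategy sketch in the style of Freqtrade's IStrategy interface
# (illustrative values; see the official strategy repository for real examples).
import talib.abstract as ta
from pandas import DataFrame
from freqtrade.strategy import IStrategy

class RsiStrategy(IStrategy):
    timeframe = "5m"
    minimal_roi = {"0": 0.04}   # take profit at +4%
    stoploss = -0.10            # hard stop at -10%

    def populate_indicators(self, dataframe: DataFrame, metadata: dict) -> DataFrame:
        dataframe["rsi"] = ta.RSI(dataframe, timeperiod=14)
        return dataframe

    def populate_entry_trend(self, dataframe: DataFrame, metadata: dict) -> DataFrame:
        dataframe.loc[dataframe["rsi"] < 30, "enter_long"] = 1
        return dataframe

    def populate_exit_trend(self, dataframe: DataFrame, metadata: dict) -> DataFrame:
        dataframe.loc[dataframe["rsi"] > 70, "exit_long"] = 1
        return dataframe
```

    A strategy like this would then be dry-run and backtested with the bot's CLI before any live trading, as the project itself recommends.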
  • 18
    GLM-4.6V

    GLM-4.6V/4.5V/4.1V-Thinking, towards versatile multimodal reasoning

    GLM-4.6V represents the latest generation of the GLM-V family and marks a major step forward in multimodal AI by combining advanced vision-language understanding with native “tool-call” capabilities, long-context reasoning, and strong generalization across domains. Unlike many vision-language models that treat images and text separately or require intermediate conversions, GLM-4.6V allows inputs such as images, screenshots or document pages directly as part of its reasoning pipeline — and can output or act via tools seamlessly, bridging perception and execution. Its architecture supports a very large context window (on the order of 128K tokens during training), which lets it handle complex multimodal inputs like long documents, multi-page reports, or video transcripts, while maintaining coherence across extended content. In benchmarks and internal evaluations, GLM-4.6V achieves state-of-the-art (SoTA) performance among models of comparable parameter scale on multimodal reasoning.
    Downloads: 5 This Week
    Last Update:
    See Project
  • 19
    Generative AI

    Sample code and notebooks for Generative AI on Google Cloud

    Generative AI is a comprehensive collection of code samples, notebooks, and demo applications designed to help developers build generative-AI workflows on the Vertex AI platform. It spans multiple modalities—text, image, audio, search (RAG/grounding) and more—showing how to integrate foundation models like the Gemini family into cloud projects. The README emphasises getting started with prompts, datasets, environments and sample apps, making it ideal for both experimentation and production-ready usage. The repository architecture is organised into folders like gemini/, search/, vision/, audio/, and rag-grounding/, which helps developers locate use cases by modality. It is licensed under Apache-2.0, open-sourced and maintained by Google, meaning it's designed with enterprise-grade practices in mind. Overall, it serves as a practical entry point and reference library for building real-world generative AI systems on Google Cloud.
    Downloads: 5 This Week
    Last Update:
    See Project
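    The notebooks build on calls like the following Vertex AI SDK sketch; the project ID, region, and model name are placeholders, and the SDK surface shifts between releases, so check the repository's samples for current usage.

```python
# Illustrative Vertex AI call (placeholders for project, region, and model).
# Requires: pip install google-cloud-aiplatform, plus authenticated gcloud credentials.
import vertexai
from vertexai.generative_models import GenerativeModel

vertexai.init(project="your-project-id", location="us-central1")

model = GenerativeModel("gemini-1.5-flash")
response = model.generate_content(
    "Summarize the main use cases for retrieval-augmented generation."
)
print(response.text)
```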
  • 20
    Habit Tracker

    Habit Tracker for the AI Coding Workshop

    Habit Tracker is a personal habit-tracking web application designed to help users build and maintain daily habits through intuitive UI and analytics that visualize progress over time. It runs locally with a FastAPI backend (Python) and a React frontend, storing all data in a lightweight SQLite database so there’s no need for user accounts or cloud storage, which keeps habit data fully private and self-contained. The app provides streak tracking and completion rates for each habit, giving users feedback on consistency and motivation by showing how often habits are completed and where they may be lagging. A calendar view lets users see a monthly grid of their habit history with color-coded days to highlight patterns and encourage daily engagement. Habit-Tracker also supports planned absences so users can skip days without breaking their streaks, reducing frustration and keeping long-term habits on track.
    Downloads: 5 This Week
    Last Update:
    See Project
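    As a sketch of the FastAPI-over-SQLite pattern the entry describes, a single habit-completion endpoint might look like the following; the table, routes, and field names are invented for illustration and are not the app's actual schema.

```python
# Minimal FastAPI + SQLite sketch (invented schema, not the app's own code).
# Run with: uvicorn habits:app --reload   (if saved as habits.py; pip install fastapi uvicorn)
import sqlite3
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
DB = "habits.db"

class Completion(BaseModel):
    habit: str
    date: str  # ISO date, e.g. "2024-05-01"

with sqlite3.connect(DB) as conn:
    conn.execute(
        "CREATE TABLE IF NOT EXISTS completions (habit TEXT, date TEXT, UNIQUE(habit, date))"
    )

@app.post("/completions")
def mark_complete(item: Completion) -> dict:
    # Record that a habit was completed on a given day (idempotent per day).
    with sqlite3.connect(DB) as conn:
        conn.execute(
            "INSERT OR IGNORE INTO completions (habit, date) VALUES (?, ?)",
            (item.habit, item.date),
        )
    return {"habit": item.habit, "date": item.date, "status": "done"}

@app.get("/completions/{habit}")
def completion_count(habit: str) -> dict:
    # Total completions per habit; streaks and calendar views would build on queries like this.
    with sqlite3.connect(DB) as conn:
        (count,) = conn.execute(
            "SELECT COUNT(*) FROM completions WHERE habit = ?", (habit,)
        ).fetchone()
    return {"habit": habit, "completions": count}
```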
  • 21
    HunyuanImage-3.0

    A Powerful Native Multimodal Model for Image Generation

    HunyuanImage-3.0 is a powerful, native multimodal text-to-image generation model released by Tencent’s Hunyuan team. It unifies multimodal understanding and generation in a single autoregressive framework, combining text and image modalities seamlessly rather than relying on separate image-only diffusion components. It uses a Mixture-of-Experts (MoE) architecture with many expert subnetworks to scale efficiently, deploying only a subset of experts per token, which allows large parameter counts without linear inference cost explosion. The model is intended to be competitive with closed-source image generation systems, aiming for high fidelity, prompt adherence, fine detail, and even “world knowledge” reasoning (i.e. leveraging context, semantics, or common sense in generation). The GitHub repo includes code, scripts, model loading instructions, inference utilities, prompt handling, and integration with standard ML tooling (e.g. Hugging Face / Transformers).
    Downloads: 5 This Week
    Last Update:
    See Project
  • 22
    HunyuanVideo

    HunyuanVideo: A Systematic Framework For Large Video Generation Model

    HunyuanVideo is a cutting-edge framework designed for large-scale video generation, leveraging advanced AI techniques to synthesize videos from various inputs. It is implemented in PyTorch, providing pre-trained model weights and inference code for efficient deployment. The framework aims to push the boundaries of video generation quality, incorporating multiple innovative approaches to improve the realism and coherence of the generated content. Releases include FP8 model weights to reduce GPU memory usage and improve efficiency, as well as parallel inference code to speed up sampling; utilities and tests are included.
    Downloads: 5 This Week
    Last Update:
    See Project
  • 23
    HunyuanWorld 1.0

    Generating Immersive, Explorable, and Interactive 3D Worlds

    HunyuanWorld-1.0 is an open-source, simulation-capable 3D world generation model developed by Tencent Hunyuan that creates immersive, explorable, and interactive 3D environments from text or image inputs. It combines the strengths of video-based diversity and 3D-based geometric consistency through a novel framework using panoramic world proxies and semantically layered 3D mesh representations. This approach enables 360° immersive experiences, seamless mesh export for graphics pipelines, and disentangled object representations for enhanced interactivity. The architecture integrates panoramic proxy generation, semantic layering, and hierarchical 3D reconstruction to produce high-quality scene-scale 3D worlds from both text and images. HunyuanWorld-1.0 surpasses existing open-source methods in visual quality and geometric consistency, demonstrated by superior scores in BRISQUE, NIQE, Q-Align, and CLIP metrics.
    Downloads: 5 This Week
    Last Update:
    See Project
  • 24
    Image GPT

    Large-scale autoregressive pixel model for image generation by OpenAI

    Image-GPT is the official research code and models from OpenAI’s paper Generative Pretraining from Pixels. The project adapts GPT-2 to the image domain, showing that the same transformer architecture can model sequences of pixels without altering its fundamental structure. It provides scripts to download pretrained checkpoints of different model sizes (small, medium, large) trained on large-scale datasets and includes utilities for handling color quantization with a 9-bit palette. Researchers can use the code to sample new images, evaluate generative loss on datasets like ImageNet or CIFAR-10, and explore the impact of scaling on performance. While the repository is archived and provided as-is, it remains a valuable starting point for experimenting with autoregressive transformers applied directly to raw pixel data. By demonstrating GPT’s flexibility across modalities, Image-GPT influenced subsequent multimodal generative research.
    Downloads: 5 This Week
    Last Update:
    See Project
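    The 9-bit color quantization mentioned above amounts to replacing each RGB pixel with the index of its nearest entry in a 512-color palette, so an image becomes a flat token sequence a GPT can model. The numpy sketch below uses a random palette and image purely for illustration, whereas the repository ships k-means centroids learned from data.

```python
# Nearest-centroid color quantization sketch (random stand-in palette and image).
import numpy as np

rng = np.random.default_rng(0)
palette = rng.integers(0, 256, size=(512, 3)).astype(np.float32)   # stand-in centroids
image = rng.integers(0, 256, size=(32, 32, 3)).astype(np.float32)  # stand-in 32x32 image

# Assign each pixel the index of its nearest palette color: one 9-bit token per pixel.
pixels = image.reshape(-1, 3)                                      # (1024, 3)
dists = ((pixels[:, None, :] - palette[None, :, :]) ** 2).sum(-1)  # (1024, 512)
tokens = dists.argmin(axis=1)

# Decoding is just a palette lookup on the predicted token sequence.
reconstructed = palette[tokens].reshape(image.shape).astype(np.uint8)
print(tokens.shape, reconstructed.shape)  # (1024,) (32, 32, 3)
```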
  • 25
    Kaldi

    kaldi-asr/kaldi is the official location of the Kaldi project

    Kaldi is an open source toolkit for speech recognition research. It provides a powerful framework for building state-of-the-art automatic speech recognition (ASR) systems, with support for deep neural networks, Gaussian mixture models, hidden Markov models, and other advanced techniques. The toolkit is widely used in both academia and industry due to its flexibility, extensibility, and strong community support. Kaldi is designed for researchers who need a highly customizable environment to experiment with new algorithms, as well as for practitioners who want robust, production-ready ASR pipelines. It includes extensive tools for data preparation, feature extraction, acoustic and language modeling, decoding, and evaluation. With its modular design, Kaldi allows users to adapt the system to a wide range of languages and domains. As one of the most influential projects in speech recognition, it has become a foundation for much of the modern work in ASR.
    Downloads: 5 This Week
    Last Update:
    See Project