Showing 1630 open source projects for "python"

  • 1
    cognee

    Deterministic LLM Outputs for AI Applications and AI Agents

    We build for developers who need a reliable, production-ready data layer for AI applications. Cognee implements scalable, modular data pipelines that allow for creating the LLM-enriched data layer using graph and vector stores. Cognee acts as a semantic memory layer, unveiling hidden connections within your data and infusing it with your company's language and principles. This self-optimizing process ensures ultra-relevant, personalized, and contextually aware LLM retrievals. Any kind of data...
    Downloads: 4 This Week
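    A rough cognee usage sketch, assuming the pip-installable cognee package and its async add/cognify/search entry points (exact names and signatures may differ between releases):

        import asyncio
        import cognee

        async def main():
            # ingest raw text, build the graph/vector memory, then query it
            await cognee.add("Cognee turns documents into a semantic memory layer.")
            await cognee.cognify()
            results = await cognee.search("What does cognee do?")
            print(results)

        asyncio.run(main())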
  • 2
    Transformer Engine

    A library for accelerating Transformer models on NVIDIA GPUs

    Transformer Engine (TE) is a library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit floating point (FP8) precision on Hopper GPUs, to provide better performance with lower memory utilization in both training and inference. TE provides a collection of highly optimized building blocks for popular Transformer architectures and an automatic mixed precision-like API that can be used seamlessly with your framework-specific code. TE also includes a framework-agnostic C++...
    Downloads: 4 This Week
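    For illustration, a minimal Transformer Engine sketch in PyTorch (assumes a CUDA GPU; FP8 execution additionally requires Hopper-class or newer hardware):

        import torch
        import transformer_engine.pytorch as te
        from transformer_engine.common import recipe

        layer = te.Linear(768, 768, bias=True)                  # drop-in replacement for torch.nn.Linear
        fp8_recipe = recipe.DelayedScaling(fp8_format=recipe.Format.HYBRID)

        x = torch.randn(32, 768, device="cuda")
        with te.fp8_autocast(enabled=True, fp8_recipe=fp8_recipe):   # run the layer in FP8
            y = layer(x)
        y.sum().backward()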
  • 3
    TorchMetrics AI

    Machine learning metrics for distributed, scalable PyTorch applications

    TorchMetrics is a collection of 100+ PyTorch metrics implementations and an easy-to-use API to create custom metrics.
    Downloads: 3 This Week
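    A short TorchMetrics example using the module-based API (metric state accumulates across batches and also synchronizes across processes in distributed runs):

        import torch
        import torchmetrics

        metric = torchmetrics.Accuracy(task="multiclass", num_classes=5)
        for _ in range(10):
            preds = torch.randn(8, 5).softmax(dim=-1)      # fake model outputs
            target = torch.randint(0, 5, (8,))             # fake labels
            metric.update(preds, target)                   # accumulate batch statistics
        print(metric.compute())                            # aggregate over all batches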
  • 4
    OpenLIT

    OpenLIT is an open-source LLM Observability tool

    OpenLIT is an OpenTelemetry-native tool designed to help developers gain insights into the performance of their LLM applications in production. It automatically collects LLM input and output metadata and monitors GPU performance for self-hosted LLMs. OpenLIT makes integrating observability into GenAI projects effortless with just a single line of code. Whether you're working with popular LLM providers such as OpenAI and HuggingFace, or leveraging vector databases like ChromaDB, OpenLIT...
    Downloads: 4 This Week
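    The "single line of code" integration is roughly the following sketch; the otlp_endpoint value is a placeholder for your OpenTelemetry collector and the parameter name follows current OpenLIT docs:

        import openlit
        from openai import OpenAI

        openlit.init(otlp_endpoint="http://127.0.0.1:4318")   # instruments supported LLM and vector-DB SDKs

        client = OpenAI()
        client.chat.completions.create(                        # traced automatically by OpenLIT
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": "Hello"}],
        )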
  • 5
    FiftyOne

    The open-source tool for building high-quality datasets

    The open-source tool for building high-quality datasets and computer vision models. Nothing hinders the success of machine learning systems more than poor-quality data. And without the right tools, improving a model can be time-consuming and inefficient. FiftyOne supercharges your machine learning workflows by enabling you to visualize datasets and interpret models faster and more effectively. Improving data quality and understanding your model’s failure modes are the most impactful ways to...
    Downloads: 4 This Week
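    A minimal FiftyOne sketch using one of its bundled zoo datasets:

        import fiftyone as fo
        import fiftyone.zoo as foz

        dataset = foz.load_zoo_dataset("quickstart")   # small labeled sample dataset
        session = fo.launch_app(dataset)               # open the interactive App to explore it
        session.wait()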
  • 6
    DeepSpeed

    Deep learning optimization library making distributed training easy

    DeepSpeed is a deep learning optimization library that makes distributed training easy, efficient, and effective. DeepSpeed delivers extreme-scale model training for everyone, from data scientists training on massive supercomputers to those training on low-end clusters or even on a single GPU. Using the current generation of GPU clusters with hundreds of devices, DeepSpeed's 3D parallelism can efficiently train deep learning models with trillions of parameters. With just a single GPU,...
    Downloads: 4 This Week
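    A minimal DeepSpeed sketch; scripts are normally launched with the deepspeed CLI so the distributed workers are set up for you, and the config shown (FP16 plus ZeRO stage 2 with an Adam optimizer) is just one example:

        import torch
        import deepspeed

        model = torch.nn.Linear(1024, 1024)
        ds_config = {
            "train_batch_size": 8,
            "fp16": {"enabled": True},
            "optimizer": {"type": "Adam", "params": {"lr": 1e-4}},
            "zero_optimization": {"stage": 2},   # partition optimizer state across workers
        }
        engine, optimizer, _, _ = deepspeed.initialize(
            model=model,
            model_parameters=model.parameters(),
            config=ds_config,
        )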
  • 7
    torchvision

    Datasets, transforms and models specific to Computer Vision

    The torchvision package consists of popular datasets, model architectures, and common image transformations for computer vision. We recommend Anaconda as the Python package management system. Torchvision uses Pillow as its default imaging backend; Pillow-SIMD, a much faster drop-in replacement for Pillow with SIMD support, is used automatically if installed. The accimage backend, if installed, can be activated by calling torchvision.set_image_backend('accimage'); libpng can be installed via conda...
    Downloads: 3 This Week
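    A typical torchvision snippet combining a dataset, a transform pipeline, and a pretrained model:

        import torchvision
        from torchvision import datasets, transforms

        transform = transforms.Compose([
            transforms.Resize(256),
            transforms.CenterCrop(224),
            transforms.ToTensor(),
        ])
        dataset = datasets.CIFAR10(root="./data", train=True, download=True, transform=transform)
        model = torchvision.models.resnet18(weights="IMAGENET1K_V1")
        # torchvision.set_image_backend("accimage")   # optional, only if accimage is installed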
  • 8
    Nerve

    The Simple Agent Development Kit

    Nerve is a developer-friendly Agent Development Kit (ADK) that utilizes YAML and a CLI to define, run, orchestrate, and evaluate LLM-driven agents. It supports declarative setups, tool integration, workflow pipelines, and both MCP client and server roles.
    Downloads: 3 This Week
  • 9
    Guidance

    A guidance language for controlling large language models

    Guidance is an efficient programming paradigm for steering language models. With Guidance, you can control how output is structured and get high-quality output for your use case—while reducing latency and cost vs. conventional prompting or fine-tuning. It allows users to constrain generation (e.g. with regex and CFGs) as well as to interleave control (conditionals, loops, tool use) and generation seamlessly.
    Downloads: 3 This Week
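    A hedged Guidance sketch; the API names follow recent releases and the model path is a placeholder for any local GGUF file:

        from guidance import models, gen, select

        lm = models.LlamaCpp("path/to/model.gguf")                 # any llama.cpp-compatible model
        lm += "Is Python dynamically typed? Answer: "
        lm += select(["yes", "no"], name="answer")                 # constrain output to these options
        lm += "\nOne-sentence justification: " + gen(name="why", max_tokens=40)
        print(lm["answer"], lm["why"])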
  • 10
    AgentOps

    Python SDK for agent monitoring, LLM cost tracking, benchmarking, etc.

    Industry-leading developer platform to test and debug AI agents. We built the tools so you don't have to. Visually track events such as LLM calls, tools, and multi-agent interactions. Rewind and replay agent runs with point-in-time precision. Keep a full data trail of logs, errors, and prompt injection attacks from prototype to production. Native integrations with the top agent frameworks.
    Downloads: 3 This Week
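    A minimal AgentOps sketch; the API key is a placeholder, and the session calls follow the documented SDK but may differ across versions. With the OpenAI integration installed, LLM calls are recorded automatically:

        import agentops
        from openai import OpenAI

        agentops.init(api_key="<AGENTOPS_API_KEY>")      # starts a session and patches supported SDKs
        client = OpenAI()
        client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": "Hello"}],
        )                                                # recorded as an LLM event in the dashboard
        agentops.end_session("Success")                  # mark the session outcome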
  • 11
    Hummingbird

    Hummingbird compiles trained ML models into tensor computation

    Hummingbird is a library for compiling trained traditional ML models into tensor computations. Hummingbird allows users to seamlessly leverage neural network frameworks (such as PyTorch) to accelerate traditional ML models. Thanks to Hummingbird, users can benefit from (1) all the current and future optimizations implemented in neural network frameworks; (2) native hardware acceleration; (3) having a unique platform to support both traditional and neural network models; and having all of...
    Downloads: 3 This Week
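    A Hummingbird example compiling a trained scikit-learn model into PyTorch tensor operations:

        from sklearn.datasets import make_classification
        from sklearn.ensemble import RandomForestClassifier
        from hummingbird.ml import convert

        X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
        clf = RandomForestClassifier(n_estimators=50).fit(X, y)

        model = convert(clf, "pytorch")   # compile the trained model into tensor computations
        # model.to("cuda")                # optional: move to GPU for hardware acceleration
        preds = model.predict(X)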
  • 12
    ChatTTS

    A generative speech model for daily dialogue

    ChatTTS is an open-source conversational text-to-speech model optimized for dialogue, developed by 2Noise. Trained on 100,000+ hours of English and Chinese conversation data, it excels at generating expressive prosody—pauses, interjections, laughter—for more natural-sounding speech synthesis in assistant and chatbot applications.
    Downloads: 3 This Week
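    A short ChatTTS sketch based on its README-style API (weights are downloaded on first load; output is 24 kHz audio, and the exact waveform shape can vary by release):

        import torch
        import torchaudio
        import ChatTTS

        chat = ChatTTS.Chat()
        chat.load(compile=False)                              # fetch and load pretrained weights
        wavs = chat.infer(["Hello, how was your day?"])       # returns a list of numpy waveforms
        torchaudio.save("output.wav", torch.from_numpy(wavs[0]).unsqueeze(0), 24000)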
  • 13
    MCP Everything Search

    An MCP server that provides fast file searching capabilities

    Everything Search MCP Server is an MCP server that provides fast file searching capabilities across Windows, macOS, and Linux. On Windows, it utilizes the Everything SDK; on macOS, it leverages the built-in mdfind command; and on Linux, it uses the locate or plocate command.
    Downloads: 3 This Week
  • 14
    Airtable MCP

    Airtable integration for AI-powered applications

    Airtable MCP is an integration tool that enables AI-powered applications to access and manipulate Airtable databases directly from the IDE using Anthropic's Model Context Protocol (MCP). It allows querying, creating, updating, and deleting records using natural language, facilitating seamless data management.
    Downloads: 3 This Week
  • 15
    LightRAG

    "LightRAG: Simple and Fast Retrieval-Augmented Generation"

    LightRAG is a lightweight Retrieval-Augmented Generation (RAG) framework designed for efficient document retrieval and response generation. It is optimized for speed and lower resource consumption, making it ideal for real-time applications.
    Downloads: 3 This Week
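    A heavily hedged LightRAG sketch; the module paths and the OpenAI helper below follow the project's examples but change between releases, and newer versions may require explicit async storage initialization:

        from lightrag import LightRAG, QueryParam
        from lightrag.llm.openai import gpt_4o_mini_complete   # assumes an OPENAI_API_KEY in the environment

        rag = LightRAG(working_dir="./rag_store", llm_model_func=gpt_4o_mini_complete)
        rag.insert("LightRAG indexes documents into a combined graph and vector store.")
        print(rag.query("What does LightRAG do?", param=QueryParam(mode="hybrid")))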
  • 16
    BioNeMo

    BioNeMo Framework: For building and adapting AI models

    BioNeMo is an AI-powered framework developed by NVIDIA for protein and molecular generation using deep learning models. It provides researchers and developers with tools to design, analyze, and optimize biological molecules, aiding in drug discovery and synthetic biology applications.
    Downloads: 3 This Week
  • 17
    GPTme

    Your agent in your terminal, equipped with local tools

    GPTme is a personal AI assistant that runs in your terminal, equipped with local tools for running code, editing files, and browsing, making it useful for programming tasks as well as journaling and day-to-day productivity.
    Downloads: 3 This Week
  • 18
    DeepChem

    Democratizing Deep-Learning for Drug Discovery, Quantum Chemistry, etc

    DeepChem aims to provide a high-quality open-source toolchain that democratizes the use of deep learning in drug discovery, materials science, quantum chemistry, and biology. DeepChem currently supports Python 3.7 through 3.9 and requires a small set of core packages in every configuration. DeepChem also has a number of "soft" requirements; if you hit errors like ImportError: This class requires XXXX, you may need to install additional packages. DeepChem provides support for TensorFlow, PyTorch, and JAX, and each requires...
    Downloads: 3 This Week
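    A classic DeepChem workflow on a MoleculeNet benchmark (a graph-convolution model on the Delaney solubility dataset; assumes the TensorFlow-backed extras are installed):

        import deepchem as dc

        # featurization, splitting, and normalization are handled by the loader
        tasks, datasets, transformers = dc.molnet.load_delaney(featurizer="GraphConv")
        train, valid, test = datasets

        model = dc.models.GraphConvModel(n_tasks=len(tasks), mode="regression")
        model.fit(train, nb_epoch=10)
        metric = dc.metrics.Metric(dc.metrics.pearson_r2_score)
        print(model.evaluate(test, [metric], transformers))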
  • 19
    AI Chatbot Framework

    Python chatbot framework with Natural Language Understanding

    Building a chatbot can sound daunting, but it's totally doable. AI Chatbot Framework is an AI-powered conversational dialog interface built in Python. With this tool, it's easy to create Natural Language conversational scenarios with no coding effort whatsoever. The smooth UI makes it effortless to create and train conversations for the bot, and it continuously gets smarter as it learns from conversations it has with people. AI Chatbot Framework can live on any channel of your choice...
    Downloads: 3 This Week
  • 20
    NVIDIA Isaac GR00T

    NVIDIA Isaac GR00T N1.5 is the world's first open foundation model for humanoid robot reasoning and skills

    NVIDIA Isaac‑GR00T N1.5 is an open-source foundation model engineered for generalized humanoid robot reasoning and manipulation skills. It accepts multimodal inputs—such as language and images—and uses a diffusion transformer architecture built upon vision-language encoders, enabling adaptive robot behaviors across diverse environments. It is designed to be customizable via post-training with real or synthetic data.
    Downloads: 3 This Week
  • 21
    BioEmu

    Inference code for scalable emulation of protein equilibrium ensembles

    Biomolecular Emulator (BioEmu for short) is a model that samples from the approximated equilibrium distribution of structures for a protein monomer, given its amino acid sequence. By default, unphysical structures (steric clashes or chain discontinuities) will be filtered out, so you will typically get fewer samples in the output than requested. The difference can be very large if your protein has large disordered regions, which are very likely to produce clashes. BioEmu outputs structures...
    Downloads: 3 This Week
  • 22
    Pruna AI

    Pruna is a model optimization framework built for developers

    Pruna is an open-source, self-hostable AI inference engine designed to help teams deploy and manage large language models (LLMs) efficiently across private or hybrid infrastructures. Built with performance and developer ergonomics in mind, Pruna simplifies inference workflows by enabling multi-model orchestration, autoscaling, GPU resource allocation, and compatibility with popular open-source models. It is ideal for companies or teams looking to reduce reliance on external APIs while...
    Downloads: 3 This Week
  • 23
    LazyLLM

    Easiest and laziest way for building multi-agent LLM applications

    LazyLLM is an optimized, lightweight LLM server designed for easy and fast deployment of large language models. It is fully compatible with the OpenAI API specification, enabling developers to integrate their own models into applications that normally rely on OpenAI’s endpoints. LazyLLM emphasizes low resource usage and fast inference while supporting multiple models.
    Downloads: 3 This Week
  • 24
    Brax

    Massively parallel rigidbody physics simulation

    Brax is a fast and fully differentiable physics engine for large-scale rigid body simulations, built on JAX. It is designed for research in reinforcement learning and robotics, enabling efficient simulations and gradient-based optimization.
    Downloads: 3 This Week
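    A small Brax sketch stepping a built-in environment under jax.jit (the environment name and envs API reflect the current v2 release and may differ in older versions):

        import jax
        import jax.numpy as jnp
        from brax import envs

        env = envs.create(env_name="ant")                     # rigid-body locomotion task
        reset = jax.jit(env.reset)
        step = jax.jit(env.step)

        state = reset(jax.random.PRNGKey(0))
        for _ in range(100):
            state = step(state, jnp.zeros(env.action_size))   # zero actions, just to step the physics
        print(state.reward)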
  • 25
    Keras Hub

    Pretrained model hub for Keras 3

    Keras Hub is a repository of pre-trained models for Keras 3, offering a collection of ready-to-use models for various machine-learning tasks. KerasHub is an extension of the core Keras API; KerasHub components are provided as Layer and Model implementations. If you are familiar with Keras, congratulations. You already understand most of KerasHub.
    Downloads: 3 This Week
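    A short KerasHub sketch loading a pretrained classifier from a preset; the preset name is assumed to be one of the documented BERT presets, and downloading it requires network access:

        import keras_hub

        # the preset bundles weights plus a matching preprocessor, so raw strings work directly
        classifier = keras_hub.models.TextClassifier.from_preset(
            "bert_base_en_uncased", num_classes=2
        )
        print(classifier.predict(["Keras Hub makes transfer learning straightforward."]))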