Open Source Python Artificial Intelligence Software - Page 100

Python Artificial Intelligence Software


Browse free open source Python Artificial Intelligence Software and projects below. Use the toggles on the left to filter open source Python Artificial Intelligence Software by OS, license, language, programming language, and project status.

  • 1
    TigerBot

    TigerBot: A multi-language multi-task LLM

    TigerBot is an open-source family of large language models designed to support multilingual and multi-task natural language processing applications. The project focuses on building high-performance models capable of handling both English and Chinese tasks while maintaining strong reasoning and conversational abilities. TigerBot models are based on modern transformer architectures and are trained on large datasets that cover multiple domains and languages. The project provides both base models and chat-optimized variants that can be used for dialogue systems, question answering, and general language understanding tasks. In addition to model weights, the repository includes training scripts, inference tools, and configuration files that allow researchers and developers to reproduce experiments or fine-tune the models for specific applications.
    Downloads: 0 This Week
    Last Update:
    See Project
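
    Since the released TigerBot checkpoints are standard causal language models, a minimal way to try one is through the Hugging Face transformers API. This is a hedged sketch: the repository id "TigerResearch/tigerbot-7b-chat" and the plain-text prompt are assumptions, so check the TigerBot repository for the exact model names and the prompt template its chat variants expect.

    ```python
    # Hedged sketch: loading a TigerBot chat checkpoint with transformers.
    # The repo id and prompt format are assumptions; see the TigerBot README.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "TigerResearch/tigerbot-7b-chat"  # assumed checkpoint name
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

    inputs = tokenizer("Summarize what a transformer model is in one sentence.", return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=64)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))
    ```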
  • 2
    TimeMixer

    Decomposable Multiscale Mixing for Time Series Forecasting

    TimeMixer is a deep learning framework designed for advanced time series forecasting and analysis using a multiscale neural architecture. The model focuses on decomposing time series data into multiple temporal scales in order to capture both short-term seasonal patterns and long-term trends. Instead of relying on traditional recurrent or transformer-based architectures, TimeMixer is implemented as a fully multilayer perceptron–based model that performs temporal mixing across different resolutions of the data. The architecture introduces specialized components such as Past-Decomposable-Mixing blocks, which extract information from historical sequences at different scales, and Future-Multipredictor-Mixing modules that combine predictions from multiple forecasting paths. This design allows the model to integrate complementary information across scales and produce more accurate predictions for complex temporal patterns.
    Downloads: 0 This Week
    Last Update:
    See Project
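
    To make the multiscale-mixing idea concrete, here is a deliberately simplified conceptual sketch, not TimeMixer's actual code: the series is downsampled to a few temporal scales, each scale is mixed along the time axis by a small MLP, and per-scale predictions are combined, loosely mirroring the past-mixing/future-multipredictor split described above. All names and sizes are illustrative.

    ```python
    # Conceptual PyTorch sketch of multiscale MLP mixing for forecasting.
    # This is NOT TimeMixer's implementation; structure and names are simplified.
    import torch
    import torch.nn as nn

    class TinyMultiscaleMixer(nn.Module):
        def __init__(self, seq_len=96, horizon=24, scales=(1, 2, 4)):
            super().__init__()
            self.scales = scales
            # One temporal-mixing MLP and one forecasting head per scale.
            self.mixers = nn.ModuleList(
                [nn.Sequential(nn.Linear(seq_len // s, seq_len // s), nn.GELU()) for s in scales]
            )
            self.heads = nn.ModuleList([nn.Linear(seq_len // s, horizon) for s in scales])

        def forward(self, x):  # x: (batch, seq_len)
            preds = []
            for s, mixer, head in zip(self.scales, self.mixers, self.heads):
                xs = x[:, ::s]                 # crude downsampling to a coarser scale
                preds.append(head(mixer(xs)))  # mix along time, then predict the horizon
            return torch.stack(preds).mean(0)  # combine predictions across scales

    model = TinyMultiscaleMixer()
    forecast = model(torch.randn(8, 96))       # -> (8, 24)
    ```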
  • 3
    TimesFM

    Pretrained time-series foundation model developed by Google Research

    TimesFM is a pretrained time-series foundation model from Google Research built for forecasting tasks, designed to generalize across many domains without requiring extensive per-dataset retraining. It provides a decoder-only model approach to forecasting, aiming for strong performance even in zero-shot or low-data settings where traditional models often struggle. The project includes code and an inference API intended to make it practical to run forecasts programmatically, with options to use different backends such as Torch or Flax depending on your environment and performance needs. Newer releases emphasize expanded context handling and more flexible forecasting outputs, including quantile forecasting so users can get uncertainty estimates rather than only point predictions. The repository also documents how model versions evolved, with newer variants focusing on efficiency and longer context windows while maintaining forecasting quality.
    Downloads: 0 This Week
    Last Update:
    See Project
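
    A minimal zero-shot forecasting sketch, assuming the early 1.0-style timesfm Python API (a TimesFm constructor, load_from_checkpoint, and a forecast() call that returns point and quantile forecasts). Newer releases moved these settings into hparams/checkpoint objects, so the exact constructor arguments below should be treated as assumptions and checked against the README.

    ```python
    # Hedged sketch of zero-shot forecasting with the timesfm package.
    # Constructor arguments follow the early 1.0-style API and may differ in newer
    # releases; "google/timesfm-1.0-200m" is the public 200M checkpoint.
    import numpy as np
    import timesfm

    tfm = timesfm.TimesFm(
        context_len=512,
        horizon_len=128,
        input_patch_len=32,
        output_patch_len=128,
        num_layers=20,
        model_dims=1280,
        backend="cpu",
    )
    tfm.load_from_checkpoint(repo_id="google/timesfm-1.0-200m")

    history = [np.sin(np.linspace(0, 20, 400))]               # one example series
    point_forecast, quantile_forecast = tfm.forecast(history, freq=[0])
    ```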
  • 4
    An artificial intelligence that plays the board game Puerto Rico, by Andreas Seyfarth.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 5
    ToMe (Token Merging)

    A method to increase the speed and lower the memory footprint

    ToMe (Token Merging) is a PyTorch-based optimization framework designed to significantly accelerate Vision Transformer (ViT) architectures without retraining. Developed by researchers at Facebook (Meta AI), ToMe introduces an efficient technique that merges similar tokens within transformer layers, reducing redundant computation while preserving model accuracy. This approach differs from token pruning, which removes background tokens entirely; instead, ToMe merges tokens based on feature similarity, allowing it to compress both foreground and background information efficiently. ToMe integrates seamlessly into existing transformer models such as DeiT, MAE, SWAG, and timm ViTs, offering 2–3x speedups during inference and substantial efficiency gains during training. The method can be applied dynamically at inference time or incorporated into training for improved performance.
    Downloads: 0 This Week
    Last Update:
    See Project
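
    Because ToMe is applied as a patch to an existing ViT rather than shipped as a new model, using it typically takes only a few lines. The sketch below follows the pattern shown in the ToMe README for timm models; the reduction value r is illustrative, with larger values trading a little accuracy for more speed.

    ```python
    # Patching a timm ViT with Token Merging at inference time (per the ToMe README).
    import timm
    import tome

    model = timm.create_model("vit_base_patch16_224", pretrained=True)

    tome.patch.timm(model)  # enable token merging in every transformer block
    model.r = 16            # merge ~16 tokens per layer; higher r = faster inference
    ```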
  • 6
    ToRA

    Tool-integrated Reasoning LLM Agents

    ToRA is an open-source framework developed by Microsoft for building tool-integrated reasoning agents powered by large language models. The project focuses on improving the ability of AI systems to solve complex mathematical and analytical problems by combining natural language reasoning with external computational tools. Instead of relying solely on text generation, the system dynamically invokes tools such as symbolic solvers or programming libraries when deeper computation is required. This approach allows the model to reason step by step in natural language and then execute precise calculations or code through tool calls, creating a hybrid reasoning workflow. The framework was designed to address known weaknesses of large language models in mathematical problem solving and formal reasoning tasks. Training data includes tool-use trajectories that teach the model when to reason verbally and when to delegate tasks to specialized tools.
    Downloads: 0 This Week
    Last Update:
    See Project
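
    To make the interleaved "reason, call a tool, read the result" workflow concrete, here is a hypothetical control-loop sketch. The generate() stub stands in for a real LLM backend, and the simplified python/output markers only approximate ToRA's actual trajectory format.

    ```python
    # Hypothetical sketch of a tool-integrated reasoning loop: the model emits
    # rationale plus a Python snippet, the runtime executes the snippet, and the
    # printed result is appended to the context before generation continues.
    # generate() is a stub; ToRA's real prompt and output format may differ.
    import io
    import contextlib

    def generate(context: str) -> str:
        """Stub for an LLM call that returns the next rationale/code segment."""
        return "<python>\nprint(2**10 + 7)\n</python>\nThe answer is the printed value."

    def run_code(code: str) -> str:
        buf = io.StringIO()
        with contextlib.redirect_stdout(buf):
            exec(code, {})  # execute the model-written snippet in a fresh namespace
        return buf.getvalue().strip()

    context = "Question: compute 2^10 + 7.\n"
    segment = generate(context)
    if "<python>" in segment:
        code = segment.split("<python>")[1].split("</python>")[0]
        context += segment + "\n<output>\n" + run_code(code) + "\n</output>\n"
    print(context)
    ```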
  • 7
    A python implementation of Fresnel, a display vocabulary for the Resource Description Framework (RDF). See http://www.w3.org/2005/04/fresnel-info/.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 8
    ToolUniverse

    Democratizing AI scientists with ToolUniverse

    ToolUniverse is a comprehensive open-source ecosystem designed to transform any large language model into an autonomous “AI scientist” capable of performing real scientific research tasks through structured tool interaction. It standardizes how AI systems discover, select, and execute tools by introducing a unified AI-Tool Interaction Protocol that allows models to seamlessly connect with hundreds of scientific resources, including machine learning models, datasets, APIs, and analytical packages. Instead of requiring custom pipelines or fine-tuning, ToolUniverse wraps around existing models and enables them to reason, experiment, and iterate on complex workflows such as drug discovery, data analysis, and hypothesis testing. The platform abstracts tool usage behind a consistent interface, allowing AI agents to compose multi-step workflows, refine tool definitions automatically, and even generate new tools from natural language descriptions.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 9
    Top Engine is the Semantic Web engine for the enterprise. Top Engine is a business rule engine that uses OWL DL ontologies as vocabulary primitives for writing rules on top of an ontology. Top Engine supports forward and backward chaining with truth maintenance.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 10
    Top deep learning GitHub repositories

    Top 200 deep learning GitHub repositories sorted by stars

    Top-Deep-Learning is a curated repository that aggregates some of the most influential and widely used deep learning projects available on GitHub. Instead of providing its own machine learning models or frameworks, the project functions as an organized index that helps users discover high-quality deep learning repositories across different application domains. The repository categorizes projects related to neural networks, computer vision, natural language processing, reinforcement learning, and other areas of artificial intelligence. By collecting popular open-source implementations in one place, the project simplifies the process of exploring cutting-edge tools and research implementations for deep learning practitioners. The curated lists are particularly helpful for developers who want to quickly identify well-maintained projects with strong community support.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 11
    TorchDistill

    A coding-free framework built on PyTorch

    torchdistill (formerly kdkit) offers various state-of-the-art knowledge distillation methods and enables you to design (new) experiments simply by editing a declarative yaml config file instead of Python code. Even when you need to extract intermediate representations in teacher/student models, you will NOT need to reimplement the models, which would often require changing the forward interface; instead, you specify the module path(s) in the yaml file. In addition to knowledge distillation, this framework helps you design and perform general deep learning experiments (WITHOUT coding) for reproducible deep learning studies, i.e., it enables you to train models without teachers simply by excluding teacher entries from a declarative yaml config file.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 12
    TorchGAN

    Research Framework for easy and efficient training of GANs

    The torchgan package consists of various generative adversarial networks and utilities that have been found useful in training them. This package provides an easy-to-use API which can be used to train popular GANs as well as develop newer variants. The core idea behind this project is to facilitate easy and rapid generative adversarial model research. TorchGAN is a PyTorch-based framework for designing and developing Generative Adversarial Networks. It has been designed to provide building blocks for popular GANs and also to allow customization for cutting-edge research. TorchGAN's modular structure lets you try out popular GAN models on your own data or plug in custom architectures and losses, as sketched below.
    Downloads: 0 This Week
    Last Update:
    See Project
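
    A hedged sketch of assembling and training a small DCGAN from torchgan's prebuilt components, following the "name"/"args"/"optimizer" dictionary layout used in the project's documented Trainer examples; channel counts, learning rates, epochs, and the dataloader are illustrative and should be checked against the current release.

    ```python
    # Hedged sketch: a DCGAN built from torchgan building blocks.
    # Dictionary layout and Trainer usage follow the documented examples;
    # all hyperparameters here are illustrative.
    from torch.optim import Adam
    from torchgan.models import DCGANGenerator, DCGANDiscriminator
    from torchgan.losses import MinimaxGeneratorLoss, MinimaxDiscriminatorLoss
    from torchgan.trainer import Trainer

    network_config = {
        "generator": {
            "name": DCGANGenerator,
            "args": {"out_channels": 1, "step_channels": 16},
            "optimizer": {"name": Adam, "args": {"lr": 2e-4, "betas": (0.5, 0.999)}},
        },
        "discriminator": {
            "name": DCGANDiscriminator,
            "args": {"in_channels": 1, "step_channels": 16},
            "optimizer": {"name": Adam, "args": {"lr": 2e-4, "betas": (0.5, 0.999)}},
        },
    }
    losses = [MinimaxGeneratorLoss(), MinimaxDiscriminatorLoss()]

    trainer = Trainer(network_config, losses, sample_size=64, epochs=20)
    # trainer(dataloader)  # dataloader: a torch.utils.data.DataLoader of images (e.g. MNIST)
    ```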
  • 13
    Torchreid

    Deep learning person re-identification in PyTorch

    Torchreid is a library for deep-learning person re-identification, written in PyTorch and developed for our ICCV’19 project, Omni-Scale Feature Learning for Person Re-Identification. In "deep-person-reid/scripts/", we provide a unified interface to train and test a model. See "scripts/main.py" and "scripts/default_config.py" for more details. The folder "configs/" contains some predefined configs which you can use as a starting point. The code will automatically (download and) load the ImageNet pretrained weights. After the training is done, the model will be saved as "log/osnet_x1_0_market1501_softmax_cosinelr/model.pth.tar-250". Under the same folder, you can find the tensorboard file. Different from the same-domain setting, here we replace random_erase with color_jitter. This can improve the generalization performance on the unseen target dataset.
    Downloads: 0 This Week
    Last Update:
    See Project
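
    The unified interface mentioned above also has a programmatic form; the snippet below follows the Torchreid quickstart (data manager, model builder, optimizer/scheduler builders, and an engine that runs training and evaluation). The dataset root, epoch count, and save directory are illustrative, and a CUDA device is assumed.

    ```python
    # Training and evaluating OSNet on Market-1501 with the Torchreid API.
    # Paths, epochs, and batch sizes are illustrative; a GPU is assumed.
    import torchreid

    datamanager = torchreid.data.ImageDataManager(
        root="reid-data",
        sources="market1501",
        targets="market1501",
        height=256,
        width=128,
        batch_size_train=32,
        batch_size_test=100,
        transforms=["random_flip", "random_crop"],
    )

    model = torchreid.models.build_model(
        name="osnet_x1_0",
        num_classes=datamanager.num_train_pids,
        loss="softmax",
        pretrained=True,  # ImageNet-pretrained weights are downloaded automatically
    ).cuda()

    optimizer = torchreid.optim.build_optimizer(model, optim="adam", lr=0.0003)
    scheduler = torchreid.optim.build_lr_scheduler(optimizer, lr_scheduler="single_step", stepsize=20)

    engine = torchreid.engine.ImageSoftmaxEngine(
        datamanager, model, optimizer=optimizer, scheduler=scheduler, label_smooth=True
    )
    engine.run(
        save_dir="log/osnet_x1_0_market1501_softmax_cosinelr",
        max_epoch=250,
        eval_freq=10,
        print_freq=10,
    )
    ```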
  • 14
    Traduki is an open-source suite of linguistic-related software, written mainly in Python.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 15
    Trae Agent

    LLM-based agent for general purpose software engineering tasks

    Trae Agent is an open-source, LLM-based agent system developed by ByteDance, focused primarily on automating software engineering workflows. It provides a command-line interface (CLI) that accepts natural-language instructions (e.g. “refactor this module,” “write a unit test,” “generate a REST API skeleton”), and then orchestrates tool-based workflows, such as file editing, shell/batch commands, code generation, and code formatting or refactoring, to carry out complex engineering tasks. Under the hood, Trae Agent supports multiple LLM backends (so you can choose your preferred model provider), and comes with a modular architecture that makes it easy to study, extend, or modify. Because of its transparent, research-friendly design and detailed logging (trajectory recording), it is positioned not just as a productivity tool but also as a platform for researchers to explore, analyze, or extend AI-based code automation strategies.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 16

    Training Image Operators from Samples

    Tools to train Image Operators automatically from a set of samples.

    TRIOS - Training Image Operators from Samples is a set of tools to bring Image Processing closer to scientists in general. It is capable of estimating an operator between two images using only pairs of samples that contain an input image and the desired output. The operator is saved to a file and can be applied to any image.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 17
    Transfer Learning Repo

    Transfer learning / domain adaptation / domain generalization

    Transfer Learning Repo is an open-source repository that compiles resources, code implementations, and academic references related to transfer learning and its related research areas. The project functions as a large knowledge hub that organizes papers, tutorials, datasets, and software implementations across topics such as domain adaptation, domain generalization, multi-task learning, and few-shot learning. The repository includes surveys and theoretical explanations that help readers understand how transfer learning methods allow models trained in one domain to adapt to new tasks or datasets. In addition to academic references, the project provides practical code implementations of many transfer learning algorithms so that researchers can reproduce experiments or build their own applications. The repository also catalogs well-known scholars, research laboratories, and datasets relevant to transfer learning studies.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 18
    Transformer TTS

    Implementation of a Transformer based neural network

    TransformerTTS is an implementation of a non-autoregressive Transformer-based neural network for text-to-speech, built with TensorFlow 2. It takes inspiration from architectures like FastSpeech, FastSpeech 2, FastPitch, and Transformer TTS, and extends them with its own aligner and forward models. The system separates alignment learning and acoustic modeling: an autoregressive Transformer is used as an aligner to extract phoneme-to-frame durations, while a non-autoregressive “ForwardTransformer” generates mel-spectrograms conditioned on text and durations. This design addresses common autoregressive issues such as repetition, skipped words, and unstable attention, and results in robust, fast synthesis where all frames are predicted in parallel. The repository ships with tooling to build datasets (especially LJSpeech) and create training data, plus scripts to train both the aligner and the TTS model, monitor training with TensorBoard, and resume or reset training runs.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 19
    Comprehensive framework for delivering personalized travel services using agent infrastructure.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 20
    TrustGraph

    Deploy reasoning AI agents powered by agentic graph RAG in minutes

    TrustGraph is an AI-driven framework designed to assess and visualize trust relationships within networks, aiding in the analysis of trustworthiness and influence among entities.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 21
    TurboDiffusion

    100–200× Acceleration for Video Diffusion Models

    TurboDiffusion is an advanced open-source framework designed to dramatically accelerate video diffusion model generation, aiming for performance improvements on the order of 100–200× compared with traditional implementations while retaining high output quality. It achieves this by combining a suite of algorithmic and engineering optimizations, including attention acceleration techniques, efficient step distillation methods, and quantization strategies that reduce computational overhead. The project targets large video models and enables developers to run accelerated generation even on single high-end GPUs, making fast video synthesis more practical for research and creative workflows. TurboDiffusion is structured to integrate with existing diffusion model architectures and provides tools for experimenting with and benchmarking speed and quality trade-offs.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 22
    TypeAgent Python

    Structured RAG: ingest, index, query

    TypeAgent Python is an experimental Python implementation of Microsoft’s TypeAgent architecture designed to explore how large language models can interact with structured software systems. The project focuses on implementing structured Retrieval-Augmented Generation workflows that allow agents to ingest information, index it in structured form, and answer queries using language models. Instead of relying solely on free-form prompts, the architecture emphasizes converting natural language interactions into structured representations that can be processed by deterministic software components. This design allows the system to combine the flexibility of language models with the reliability of traditional programming logic. The repository is intended primarily as a research prototype and sample code rather than a production-ready framework, allowing developers to experiment with building AI agents that maintain structured memory and perform tasks through defined actions.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 23
    U-Net Fusion RFI

    U-Net for RFI Detection based on @jakeret's implementation

    See the original code here: https://github.com/jakeret/tf_unet. Currently this project is based on the Tensorflow 1.13 code base and there are no plans to move to TF version 2. The primary improvements to this code base include a training and evaluation framework, along with a fusion-based approach to detection, combining a number of models (currently hard-coded to two trained models) along with Sum Threshold as an additional "expert." Additional work is being done to add custom layers to this model for further experimentation, including Squeeze/Excitation layers (currently unimplemented). Sum Threshold (in fusion as an expert, and in testing as a comparison) requires the use of AOFlagger by Andre Offringa. You can find this code at https://gitlab.com/aroffringa/aoflagger. This project calls the aoflagger program from within the code, so you may need to ensure that any environment variables for aoflagger are set before use. Cite: https://sourceforge.net/p/u-net-fusion-rfi/wiki/cite/
    Downloads: 0 This Week
    Last Update:
    See Project
  • 24
    UCO3D

    Uncommon Objects in 3D dataset

    uCO3D is a large-scale 3D vision dataset and toolkit centered on turn-table videos of everyday objects drawn from the LVIS taxonomy. It provides about 170,000 full videos, one per object instance, rather than still frames, along with per-video annotations including object masks, calibrated camera poses, and multiple flavors of point clouds. Each sequence also ships with a precomputed 3D Gaussian Splat reconstruction, enabling fast, differentiable rendering workflows and modern implicit/point-based modeling experiments. The repository includes automated downloaders with checksum verification, fine-grained controls to fetch only selected modalities or super-categories, and a lightweight Python API for loading frames, geometry, and splats on demand. Metadata is indexed in SQLite for quick queries at scale, and helper builders handle alignment, undistortion, frame extraction from videos, and cropping around the object.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 25
    UForm

    Multi-Modal Neural Networks for Semantic Search, based on Mid-Fusion

    UForm is a Multi-Modal Inference package, designed to encode Multi-Lingual Texts, Images, and, soon, Audio, Video, and Documents, into a shared vector space! It comes with a set of homonymous pre-trained networks available on the HuggingFace portal and extends the transformers package to support Mid-fusion Models. Late-fusion models encode each modality independently, but into one shared vector space. Due to independent encoding, late-fusion models are good at capturing coarse-grained features but often neglect fine-grained ones. This type of model is well-suited for retrieval in large collections. The most famous example of such models is CLIP by OpenAI. Early-fusion models encode both modalities jointly, so they can take into account fine-grained features. Usually, these models are used for re-ranking relatively small retrieval results. Mid-fusion models are the golden midpoint between the previous two types. Mid-fusion models consist of two parts – unimodal and multimodal – as illustrated in the sketch below.
    Downloads: 0 This Week
    Last Update:
    See Project
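
    A hedged sketch of the two retrieval paths described above: independent (late-fusion-style) text and image embeddings for large-scale search, and a joint mid-fusion embedding for re-ranking a small candidate set. The function names (get_model, preprocess_*/encode_*, encode_multimodal, get_matching_scores) and the checkpoint id follow older UForm examples and may have changed in current releases.

    ```python
    # Hedged sketch of UForm encoding; API names and the checkpoint id are
    # assumptions based on older examples and may differ in current releases.
    from PIL import Image
    import uform

    model = uform.get_model("unum-cloud/uform-vl-english")

    text_data = model.preprocess_text("a red sports car parked on a street")
    image_data = model.preprocess_image(Image.open("car.jpg"))

    # Independent embeddings in a shared space: suited to large-scale retrieval.
    text_embedding = model.encode_text(text_data)
    image_embedding = model.encode_image(image_data)

    # Mid-fusion joint embedding: suited to re-ranking a small set of candidates.
    joint_embedding = model.encode_multimodal(image=image_data, text=text_data)
    score = model.get_matching_scores(joint_embedding)
    ```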