550 projects for "python" with 2 filters applied:

  • 1
    AEA Framework

    A framework for autonomous economic agent (AEA) development

    agents-aea by Fetch.ai is a framework for building autonomous economic agents (AEAs) that can act independently, communicate, and transact on decentralized networks. It focuses on enabling AI-driven agents to participate in digital marketplaces and ecosystems.
    Downloads: 0 This Week
  • 2
    ConvNeXt V2

    Code release for ConvNeXt V2 model

    ConvNeXt V2 is an evolution of the ConvNeXt architecture that co-designs convolutional networks alongside self-supervised learning. The V2 version introduces a fully convolutional masked autoencoder (FCMAE) framework where parts of the image are masked and the network reconstructs the missing content, marrying convolutional inductive bias with powerful pretraining. A key innovation is a new Global Response Normalization (GRN) layer added to the ConvNeXt backbone, which enhances feature...
    Downloads: 0 This Week
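
    The entry above mentions the Global Response Normalization (GRN) layer added to the ConvNeXt backbone. As a rough illustration of the idea rather than the released implementation, here is a minimal PyTorch sketch of a GRN layer for channels-last feature maps; the epsilon value and zero-initialized parameters are assumptions for the example.

    import torch
    import torch.nn as nn

    class GRN(nn.Module):
        """Minimal Global Response Normalization sketch for channels-last input (N, H, W, C)."""

        def __init__(self, dim: int, eps: float = 1e-6):
            super().__init__()
            # Learnable scale and shift, zero-initialized so the layer starts near identity.
            self.gamma = nn.Parameter(torch.zeros(1, 1, 1, dim))
            self.beta = nn.Parameter(torch.zeros(1, 1, 1, dim))
            self.eps = eps

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # Global aggregation: L2 norm over the spatial dimensions, one value per channel.
            gx = torch.norm(x, p=2, dim=(1, 2), keepdim=True)        # (N, 1, 1, C)
            # Divisive normalization: compare each channel's response to the channel mean.
            nx = gx / (gx.mean(dim=-1, keepdim=True) + self.eps)     # (N, 1, 1, C)
            # Recalibrate the features with the normalized responses, plus a residual path.
            return self.gamma * (x * nx) + self.beta + x

    # Quick shape check on random data.
    feats = torch.randn(2, 14, 14, 384)
    print(GRN(384)(feats).shape)  # torch.Size([2, 14, 14, 384])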
  • 3
    UnionML

    Build and deploy machine learning microservices

    Creating ML apps should be simple and frictionless. UnionML is an open-source Python framework built on top of Flyte™, unifying the complex ecosystem of ML tools into a single interface. Combine the tools that you love using a simple, standardized API so you can stop writing so much boilerplate and focus on what matters: the data and the models that learn from them. Fit the rich ecosystem of tools and frameworks into a common protocol for machine learning.
    Downloads: 0 This Week
  • 4
    Point-E

    Point cloud diffusion for 3D model synthesis

    point-e is the official repository for Point-E, a generative model developed by OpenAI that produces 3D point clouds from textual (or image) prompts. Its principal advantage is speed: it can generate 3D assets in just 1–2 minutes on a single GPU, which is significantly faster than many competing text-to-3D models. The model works via a two-stage diffusion approach: first, it uses a text → image diffusion network to produce a synthetic 2D view consistent with the prompt; then a second...
    Downloads: 1 This Week
  • 5
    d2l-zh

    Chinese-language edition of Dive into Deep Learning

    d2l‑zh is the Chinese-language edition of Dive into Deep Learning, an interactive, open‑source deep learning textbook that combines code, math, and explanatory text. It features runnable Jupyter notebooks compatible with multiple frameworks (e.g., PyTorch, MXNet, TensorFlow), comprehensive theoretical analysis, and exercises. It has been adopted in over 70 countries and is used by more than 500 universities for teaching deep learning.
    Downloads: 0 This Week
  • 6
    FrankMocap

    A Strong and Easy-to-use Single View 3D Hand+Body Pose Estimator

    FrankMocap is a monocular 3D human capture system that estimates body, hand, and optionally face pose from a single RGB image or video. It regresses parametric human models (e.g., SMPL/SMPL-X) directly, producing temporally stable meshes and joint angles suitable for animation or analytics. The pipeline couples a robust 2D keypoint detector with 3D mesh regression networks and priors that keep results anatomically plausible. It can run frame-by-frame or with temporal smoothing, and includes...
    Downloads: 0 This Week
  • 7
    DiffSinger

    Singing Voice Synthesis via Shallow Diffusion Mechanism

    DiffSinger is an open-source PyTorch implementation of a diffusion-based acoustic model for singing-voice synthesis (SVS) and also text-to-speech (TTS) in a related variant. The core idea is to view generation of a sung voice (mel-spectrogram) as a diffusion process: starting from noise, the model iteratively “denoises” while being conditioned on a music score (lyrics, pitch, musical timing). This avoids some of the typical problems of prior SVS models — like over-smoothing or unstable GAN...
    Downloads: 18 This Week
  • 8
    handson-ml

    Teaching you the fundamentals of Machine Learning in Python

    handson-ml hosts the notebooks for the first edition of Hands-On Machine Learning with Scikit-Learn and TensorFlow, reflecting the tooling and idioms of its time while teaching durable concepts. It walks through supervised and unsupervised learning with scikit-learn, then introduces deep learning using the earlier TensorFlow 1 graph-execution style. The examples underscore fundamentals like bias-variance trade-offs, regularization, and proper validation, grounding learners before they move to deep nets. Even though the deep...
    Downloads: 0 This Week
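
    Since the entry above is about scikit-learn fundamentals such as proper validation, here is a small self-contained example (not taken from the book's notebooks) of a train/test split with cross-validated scoring inside a pipeline.

    from sklearn.datasets import load_digits
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score, train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    # Load a small built-in dataset and hold out a test set.
    X, y = load_digits(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

    # A pipeline keeps preprocessing inside cross-validation, which avoids data leakage.
    model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=2000))

    # 5-fold cross-validation on the training set only; the test set stays untouched.
    scores = cross_val_score(model, X_train, y_train, cv=5)
    print(f"CV accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")

    # Final fit and held-out evaluation.
    model.fit(X_train, y_train)
    print(f"Test accuracy: {model.score(X_test, y_test):.3f}")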
  • 9
    Video Pre-Training

    Learning to Act by Watching Unlabeled Online Videos

    The Video PreTraining (VPT) repository provides code and model artifacts for a project where agents learn to act by watching human gameplay videos—specifically, gameplay of Minecraft—using behavioral cloning. The idea is to learn general priors of control from large-scale, unlabeled video data, and then optionally fine-tune those priors for more goal-directed behavior via environment interaction. The repository contains demonstration models of different widths, fine-tuned variants (e.g. for...
    Downloads: 0 This Week
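
    The entry above describes behavioral cloning from gameplay video. As a generic illustration of that objective (not the VPT models or data), here is a toy PyTorch loop that fits a small policy to predict discrete actions from observation frames; the synthetic tensors stand in for labeled frames.

    import torch
    import torch.nn as nn

    # Toy stand-ins for (observation, action) pairs; VPT labels frames with an
    # inverse-dynamics model, whereas here the labels are simply random.
    obs = torch.randn(256, 3, 64, 64)           # synthetic frames
    actions = torch.randint(0, 10, (256,))      # synthetic discrete action labels

    # A minimal convolutional policy mapping frames to action logits.
    policy = nn.Sequential(
        nn.Conv2d(3, 16, kernel_size=8, stride=4), nn.ReLU(),
        nn.Conv2d(16, 32, kernel_size=4, stride=2), nn.ReLU(),
        nn.Flatten(),
        nn.Linear(32 * 6 * 6, 10),              # a 64x64 input shrinks to 6x6 after the convs
    )

    optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    # Behavioral cloning: maximize the likelihood of the demonstrated actions.
    for epoch in range(3):
        for i in range(0, len(obs), 64):
            logits = policy(obs[i:i + 64])
            loss = loss_fn(logits, actions[i:i + 64])
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
        print(f"epoch {epoch}: loss {loss.item():.3f}")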
  • 10
    ConvNeXt

    Code release for ConvNeXt model

    ConvNeXt is a modernized convolutional neural network (CNN) architecture designed to rival Vision Transformers (ViTs) in accuracy and scalability while retaining the simplicity and efficiency of CNNs. It revisits classic ResNet-style backbones through the lens of transformer design trends—large kernel sizes, inverted bottlenecks, layer normalization, and GELU activations—to bridge the performance gap between convolutions and attention-based models. ConvNeXt’s clean, hierarchical structure...
    Downloads: 0 This Week
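
    To make the design points above concrete (large depthwise kernels, inverted bottleneck, LayerNorm, GELU), here is a simplified PyTorch block in the ConvNeXt style; it omits details such as stochastic depth and layer scale, so treat it as a sketch rather than the released code.

    import torch
    import torch.nn as nn

    class ConvNeXtBlockSketch(nn.Module):
        """Simplified ConvNeXt-style block: 7x7 depthwise conv, LayerNorm, MLP with GELU."""

        def __init__(self, dim: int, expansion: int = 4):
            super().__init__()
            # Large-kernel depthwise convolution mixes spatial information per channel.
            self.dwconv = nn.Conv2d(dim, dim, kernel_size=7, padding=3, groups=dim)
            self.norm = nn.LayerNorm(dim)                    # applied channels-last
            # Inverted bottleneck: expand channels, apply GELU, project back down.
            self.pwconv1 = nn.Linear(dim, expansion * dim)
            self.act = nn.GELU()
            self.pwconv2 = nn.Linear(expansion * dim, dim)

        def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (N, C, H, W)
            residual = x
            x = self.dwconv(x)
            x = x.permute(0, 2, 3, 1)                         # to (N, H, W, C) for LayerNorm/Linear
            x = self.pwconv2(self.act(self.pwconv1(self.norm(x))))
            x = x.permute(0, 3, 1, 2)                         # back to (N, C, H, W)
            return residual + x

    print(ConvNeXtBlockSketch(96)(torch.randn(1, 96, 56, 56)).shape)  # torch.Size([1, 96, 56, 56])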
  • 11
    Guided Diffusion

    Codebase for Diffusion Models Beat GANs on Image Synthesis

    The guided-diffusion repository is centered on diffusion models for image synthesis, with a focus on classifier guidance and improvements over earlier diffusion frameworks. It is derived from OpenAI’s improved-diffusion work, enhanced to include guided generation where a classifier (or other guidance mechanism) can steer sampling toward desired classes or attributes. The code provides model definitions (UNet, diffusion schedules), sampling and training scripts, and utilities for guidance and...
    Downloads: 0 This Week
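
    The entry above refers to classifier guidance, where a classifier's gradient steers each sampling step toward a target class. The snippet below is a schematic sketch of that single adjustment (shifting the predicted reverse-step mean by the scaled classifier gradient); the tensors are toy stand-ins for a real diffusion model and classifier.

    import torch

    def classifier_guided_mean(mean, variance, x_t, y, classifier, scale=1.0):
        """Shift the reverse-step mean by scale * variance * grad_x log p(y | x_t)."""
        x_in = x_t.detach().requires_grad_(True)
        log_probs = torch.log_softmax(classifier(x_in), dim=-1)
        selected = log_probs[torch.arange(len(y)), y].sum()
        grad = torch.autograd.grad(selected, x_in)[0]   # gradient of log p(y | x_t) w.r.t. x_t
        return mean + scale * variance * grad           # guided mean used for the next sample

    # Toy stand-ins: a flattened 16-dimensional "image" and a linear classifier over 10 classes.
    x_t = torch.randn(4, 16)
    mean, variance = torch.zeros(4, 16), 0.1 * torch.ones(4, 16)
    classifier = torch.nn.Linear(16, 10)
    y = torch.tensor([3, 3, 7, 0])
    print(classifier_guided_mean(mean, variance, x_t, y, classifier, scale=2.0).shape)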
  • 12
    WaveRNN

    WaveRNN Vocoder + TTS

    WaveRNN is a PyTorch implementation of DeepMind’s WaveRNN vocoder, bundled with a Tacotron-style TTS front end to form a complete text-to-speech stack. As a vocoder, WaveRNN models raw audio with a compact recurrent neural network that can generate high-quality waveforms more efficiently than many traditional autoregressive models. The repository includes scripts and code for preprocessing datasets such as LJSpeech, training Tacotron to produce mel spectrograms, training WaveRNN on those...
    Downloads: 2 This Week
  • 13
    Mask2Former

    Code release for "Masked-attention Mask Transformer for Universal Image Segmentation"

    Mask2Former is a unified segmentation architecture that handles semantic, instance, and panoptic segmentation with one model and one training recipe. Its core idea is to cast segmentation as mask classification: a transformer decoder predicts a set of mask queries, each with an associated class score, eliminating the need for task-specific heads. A pixel decoder fuses multi-scale features and feeds masked attention in the transformer so each query focuses computation on its current spatial...
    Downloads: 0 This Week
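
    The entry above casts segmentation as mask classification: a fixed set of queries each predicts a binary mask and a class distribution. Here is a tiny tensor-level sketch (not the model itself) of how per-query masks and class scores can be combined into a semantic segmentation map.

    import torch

    # Assumed toy sizes: Q queries, K classes, an H x W image.
    Q, K, H, W = 8, 5, 32, 32

    # Stand-ins for the transformer decoder outputs: per-query class logits and mask logits.
    class_logits = torch.randn(Q, K)
    mask_logits = torch.randn(Q, H, W)

    # Mask-classification inference: weight each query's mask by its class probabilities,
    # sum over queries, then take the argmax class per pixel.
    class_probs = class_logits.softmax(dim=-1)    # (Q, K)
    mask_probs = mask_logits.sigmoid()            # (Q, H, W)
    semantic = torch.einsum("qk,qhw->khw", class_probs, mask_probs)
    prediction = semantic.argmax(dim=0)           # (H, W) class index per pixel

    print(prediction.shape, prediction.unique())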
  • 14
    MAE (Masked Autoencoders)

    PyTorch implementation of MAE

    MAE (Masked Autoencoders) is a self-supervised learning framework for visual representation learning using masked image modeling. It trains a Vision Transformer (ViT) by randomly masking a high percentage of image patches (typically 75%) and reconstructing the missing content from the remaining visible patches. This forces the model to learn semantic structure and global context without supervision. The encoder processes only the visible patches, while a lightweight decoder reconstructs the...
    Downloads: 0 This Week
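
    As a companion to the entry above, this short sketch shows the random-masking step on patch tokens (keep 25%, drop 75%) using a shuffle-and-restore index trick; it is illustrative only, not the repository's code.

    import torch

    def random_masking(tokens: torch.Tensor, mask_ratio: float = 0.75):
        """Keep a random subset of patch tokens; return kept tokens, mask, and restore indices."""
        n, num_patches, dim = tokens.shape
        num_keep = int(num_patches * (1 - mask_ratio))

        # Assign a random score to every patch; the lowest-scoring patches are kept.
        noise = torch.rand(n, num_patches)
        ids_shuffle = noise.argsort(dim=1)          # patch indices sorted by random score
        ids_restore = ids_shuffle.argsort(dim=1)    # inverse permutation

        ids_keep = ids_shuffle[:, :num_keep]
        visible = torch.gather(tokens, 1, ids_keep.unsqueeze(-1).repeat(1, 1, dim))

        # Binary mask in the original patch order: 0 = kept (visible), 1 = masked.
        mask = torch.ones(n, num_patches)
        mask[:, :num_keep] = 0
        mask = torch.gather(mask, 1, ids_restore)
        return visible, mask, ids_restore

    tokens = torch.randn(2, 196, 768)               # e.g. 14x14 patches at ViT-Base width
    visible, mask, _ = random_masking(tokens)
    print(visible.shape, mask.sum(dim=1))           # (2, 49, 768); 147 patches masked per image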
  • 15
    Piano transcription

    Transcribing piano recordings into MIDI files

    Piano transcription is an open-source high-resolution piano transcription system by ByteDance that converts raw audio recordings of piano performance into symbolic MIDI files — detecting note onsets, offsets, pitch, velocity, and even pedal usage. The system is implemented in Python (PyTorch) and is capable of accurate transcription of polyphonic piano recordings, even with complex passages and pedal techniques, making it suitable for classical piano music. By using this transcription tool, users can transform live performance audio (or recordings) into editable, machine-readable MIDI — enabling tasks such as analysis, editing, remixing, or generation of piano music. ...
    Downloads: 1 This Week
  • 16
    Acme

    A library of reinforcement learning components and agents

    Acme is a framework from DeepMind for building scalable and reproducible reinforcement learning agents. It emphasizes modular components, distributed training, and ease of experimentation.
    Downloads: 0 This Week
  • 17
    GiantMIDI-Piano

    Classical piano MIDI dataset

    GiantMIDI-Piano is a large-scale symbolic classical piano music dataset built by applying the piano_transcription system on a vast collection of piano performance recordings. The dataset contains thousands of piano works, spanning a large number of composers and styles, with each piece transcribed into high-precision MIDI files capturing note events, pedal usage, velocities, etc. It provides a resource for music information retrieval (MIR), symbolic music modeling, composer classification,...
    Downloads: 1 This Week
  • 18
    Music Source Separation

    Separate audio recordings into individual sources

    Music Source Separation is a PyTorch-based open-source implementation for the task of separating a music (or audio) recording into its constituent sources — for example isolating vocals, instruments, bass, accompaniment, or background from a mixed track. It aims to give users the ability to take any existing song and decompose it into separate stems (vocals, accompaniment, etc.), or to train custom separation models on their own datasets (e.g. for speech enhancement, instrument isolation, or...
    Downloads: 2 This Week
  • 19
    SLM Lab

    Modular Deep Reinforcement Learning framework in PyTorch

    SLM Lab is a modular and extensible deep reinforcement learning framework designed for research and practical applications. It provides implementations of various state-of-the-art RL algorithms and emphasizes reproducibility, scalability, and detailed experiment tracking. SLM Lab is structured around a flexible experiment management system, allowing users to define, run, and analyze RL experiments efficiently.
    Downloads: 0 This Week
  • 20
    Grade School Math

    8.5K high-quality grade school math problems

    The grade-school-math repository (sometimes called GSM8K) is a curated dataset of 8,500 high-quality grade school math word problems intended for evaluating mathematical reasoning capabilities of language models. It is structured into 7,500 training problems and 1,000 test problems. These aren’t trivial exercises — many require multi-step reasoning, combining arithmetic operations, and handling intermediate steps (e.g. “If she sold half as many in May… how many in total?”). The problems are...
    Downloads: 0 This Week
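
    The entry above describes a dataset of word problems with worked solutions. As a hedged example of how such a JSONL split might be read, the snippet below parses question/answer records and pulls out the final value after the "#### " marker; the file path and field names follow the released GSM8K format as I understand it, so verify them against the repository before relying on this.

    import json

    def load_problems(path):
        """Read a JSONL file in which each line carries 'question' and 'answer' fields."""
        with open(path, encoding="utf-8") as f:
            return [json.loads(line) for line in f if line.strip()]

    def final_answer(answer_text: str) -> str:
        """GSM8K-style answers end with a line like '#### 42'; return that final value."""
        return answer_text.split("####")[-1].strip()

    if __name__ == "__main__":
        problems = load_problems("grade_school_math/data/train.jsonl")  # path assumed
        sample = problems[0]
        print(sample["question"])
        print("reference answer:", final_answer(sample["answer"]))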
  • 21
    TensorFlowTTS

    Real-Time State-of-the-art Speech Synthesis for TensorFlow 2

    TensorFlowTTS is a state-of-the-art, open-source speech synthesis library built on TensorFlow 2. It offers a variety of architectures for text-to-speech, including classic and modern models such as Tacotron‑2, FastSpeech / FastSpeech2, and neural vocoders like MelGAN and Multiband‑MelGAN. Because it’s based on TensorFlow 2, it can leverage optimizations such as fake-quantization aware training and pruning — which allow models to run faster than real time and to be deployable on mobile or...
    Downloads: 2 This Week
  • 22
    ReinventCommunity

    Jupyter Notebook tutorials for REINVENT 3.2

    This repository is a collection of useful Jupyter notebooks, code snippets, and example JSON files illustrating the use of REINVENT 3.2.
    Downloads: 0 This Week
  • 23
    VITS

    Conditional Variational Autoencoder with Adversarial Learning

    VITS is a foundational research implementation of “VITS: Conditional Variational Autoencoder with Adversarial Learning for End-to-End Text-to-Speech,” a well-known neural TTS architecture. Unlike traditional two-stage systems that separately train an acoustic model and a vocoder, VITS trains an end-to-end model that maps text directly to waveform using a conditional variational autoencoder combined with normalizing flows and adversarial training. This architecture enables parallel generation...
    Downloads: 3 This Week
  • 24
    Transformer TTS

    Implementation of a Transformer-based neural network

    TransformerTTS is an implementation of a non-autoregressive Transformer-based neural network for text-to-speech, built with TensorFlow 2. It takes inspiration from architectures like FastSpeech, FastSpeech 2, FastPitch, and Transformer TTS, and extends them with its own aligner and forward models. The system separates alignment learning and acoustic modeling: an autoregressive Transformer is used as an aligner to extract phoneme-to-frame durations, while a non-autoregressive...
    Downloads: 1 This Week
  • 25
    TimeSformer

    The official PyTorch implementation of the TimeSformer paper

    TimeSformer is a vision transformer architecture for video that extends the standard attention mechanism into spatiotemporal attention. The model alternates attention along spatial and temporal dimensions (or designs variants like divided attention) so that it can capture both appearance and motion cues in video. Because the attention is global across frames, TimeSformer can reason about dependencies across long time spans, not just local neighborhoods. The official implementation in PyTorch...
    Downloads: 0 This Week
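
    The entry above describes attention factored across time and space. This closing sketch shows the shape handling for one "divided" attention step on a (batch, time, patches, dim) token tensor using a standard multi-head attention layer; it simplifies the idea and is not the official implementation.

    import torch
    import torch.nn as nn

    B, T, N, D = 2, 8, 196, 256          # batch, frames, patches per frame, embedding dim
    tokens = torch.randn(B, T, N, D)

    temporal_attn = nn.MultiheadAttention(D, num_heads=8, batch_first=True)
    spatial_attn = nn.MultiheadAttention(D, num_heads=8, batch_first=True)

    # Temporal attention: each patch position attends across frames.
    x = tokens.permute(0, 2, 1, 3).reshape(B * N, T, D)     # (B*N, T, D)
    x = x + temporal_attn(x, x, x, need_weights=False)[0]
    x = x.reshape(B, N, T, D).permute(0, 2, 1, 3)           # back to (B, T, N, D)

    # Spatial attention: within each frame, patches attend to one another.
    y = x.reshape(B * T, N, D)                              # (B*T, N, D)
    y = y + spatial_attn(y, y, y, need_weights=False)[0]
    out = y.reshape(B, T, N, D)

    print(out.shape)  # torch.Size([2, 8, 196, 256])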