Showing 73 open source projects for "motion"

  • 1
    HY-Motion 1.0

    HY-Motion model for 3D character animation generation

    ...The training strategy for the HY-Motion series includes extensive pre-training on thousands of hours of varied motion data, fine-tuning on curated high-quality datasets, and reinforcement learning with human feedback, which improves both the plausibility and adaptability of generated motion sequences.
    Downloads: 4 This Week
    Last Update:
    See Project
  • 2
    Wan Move

    Motion-controllable Video Generation via Latent Trajectory Guidance

    ...By representing motion information as dense point trajectories and integrating them into the latent space of an image-to-video model, the project produces videos with more precise and controllable motion behavior than many existing methods. Wan-Move is particularly notable for eliminating the need for additional motion encoders, instead directly infusing motion cues into spatiotemporal features, which simplifies both training and inference.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 3
    Animated Drawings

    Code to accompany "A Method for Animating Children's Drawings"

    AnimatedDrawings is a framework that converts user sketches or line drawings into fully animated 2D motion sequences using learned motion priors. The idea is that you draw a simple static figure (stick figure, silhouette, or contour lines), and the system produces plausible skeletal motion (walking, jumping, dancing) that adheres to the drawn shape constraints. The architecture separates shape embedding (to understand user-drawn geometry) from motion embedding / generation (to produce temporally coherent movement). ...
    Downloads: 1 This Week
    Last Update:
    See Project
  • 4
    video2robot

    End-to-end pipeline converting generative videos into humanoid robot motion

    video2robot is an end-to-end open-source pipeline that converts generative video or prompt-driven motion content into executable humanoid robot motion sequences, enabling researchers and developers to go from high-level action descriptions or videos to robot-ready motion data. The pipeline supports both prompt-to-video generation using models like Veo/Sora and video upload processing, followed by human pose extraction through a 3D pose model and retargeting of that motion to robot joints using a general motion retargeting system. ...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 5
    OpenShot Video Editor

    Award-Winning Open Source Video Editing Software

    OpenShot Video Editor is a powerful yet very simple and easy-to-use video editor that delivers high-quality video editing and animation solutions. OpenShot offers a myriad of features and capabilities, including powerful curve-based keyframe animations, 3D animated titles and effects, slow motion and time effects, audio mixing and editing, and so much more. It’s available for Linux, Mac, and Windows, with a very simple and friendly interface. Start creating stunning videos quickly and easily with OpenShot!
    Downloads: 143 This Week
    Last Update:
    See Project
  • 6
    Wan2.1

    Wan2.1: Open and Advanced Large-Scale Video Generative Model

    ...Wan2.1’s architecture balances generation quality and inference cost, paving the way for later improvements seen in Wan2.2 such as Mixture-of-Experts and enhanced aesthetics. It was trained on large-scale video and image datasets, providing generalization across diverse scenes and motion patterns.
    Downloads: 77 This Week
    Last Update:
    See Project
  • 7
    MESHROOM

    3D reconstruction software

    ...The goal of photogrammetry is to reverse this process. The dense model of the scene is produced by chaining two computer-vision pipelines, Structure-from-Motion (SfM) and Multi-View Stereo (MVS); a conceptual sketch of the camera projection that SfM inverts follows this entry. Additional features include fusion of multi-bracketed LDR images into HDR, alignment of panorama images, and support for fisheye optics with automatic estimation of the fisheye circle or manual editing. Meshroom can take advantage of motorized-head files and is easy to integrate into a render-farm system, with specific rules to select the most suitable machines according to the CPU, RAM, and GPU requirements of each node.
    Downloads: 126 This Week
    Last Update:
    See Project
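    The projection that Structure-from-Motion works backwards from can be written in a few lines. The sketch below is a generic pinhole-camera example in NumPy with made-up intrinsics and pose; it illustrates the geometry only and is unrelated to the Meshroom/AliceVision codebase.

    ```python
    import numpy as np

    # Conceptual pinhole-camera projection: Structure-from-Motion estimates the camera
    # poses and 3D points that best explain observed 2D features, i.e. it inverts this
    # mapping. All values below are made-up examples, unrelated to Meshroom/AliceVision.
    K = np.array([[1000.0, 0.0, 320.0],    # intrinsics: focal lengths and principal point
                  [0.0, 1000.0, 240.0],
                  [0.0, 0.0, 1.0]])
    R = np.eye(3)                           # camera rotation (world -> camera)
    t = np.array([0.0, 0.0, 5.0])           # camera translation

    X = np.array([0.2, -0.1, 1.0])          # a 3D point in world coordinates
    x_h = K @ (R @ X + t)                   # homogeneous image coordinates
    pixel = x_h[:2] / x_h[2]                # perspective divide -> 2D pixel position
    print(pixel)
    ```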
  • 8
    SlowFast

    Video understanding codebase from FAIR for reproducing video models

    SlowFast is a video understanding framework that captures both spatial semantics and temporal dynamics efficiently by processing video frames at two different temporal resolutions. The slow pathway encodes semantic context by sampling frames sparsely, while the fast pathway captures motion and fine temporal cues by operating on densely sampled frames with fewer channels. Together, these two pathways complement each other, allowing the network to model both appearance and motion without excessive computational cost. The architecture is modular and supports tasks like action recognition, temporal localization, and video segmentation, performing strongly on benchmarks like Kinetics and AVA. ...
    Downloads: 5 This Week
    Last Update:
    See Project
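    The dual-rate sampling behind the SlowFast entry above can be made concrete with a few lines of PyTorch. This is a conceptual sketch only; the tensor shapes, the alpha ratio, and the variable names are assumptions for illustration, not the PySlowFast API.

    ```python
    import torch

    # Conceptual illustration of SlowFast's dual-rate frame sampling.
    # Shapes and the alpha/beta choices are example assumptions, not PySlowFast code.
    frames = torch.randn(1, 3, 64, 224, 224)   # (batch, channels, time, height, width)

    alpha = 8                                   # the fast pathway sees alpha x more frames
    slow_clip = frames[:, :, ::alpha]           # 8 sparsely sampled frames -> appearance/semantics
    fast_clip = frames                          # all 64 frames -> fine-grained motion cues

    # The fast pathway keeps far fewer channels (e.g. 1/8 of the slow pathway), so dense
    # temporal sampling stays cheap; lateral connections fuse the two streams in the network.
    print(slow_clip.shape, fast_clip.shape)     # [1, 3, 8, 224, 224] and [1, 3, 64, 224, 224]
    ```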
  • 9
    PersonaLive

    Expressive Portrait Image Animation for Live Streaming

    PersonaLive is an open-source diffusion-based portrait animation framework focused on generating expressive, long-duration animated sequences in real time, primarily for live streaming or interactive applications. It leverages deep generative models that condition on a static reference image and a driving input (such as motion or expression cues) to produce a seamless animated portrait sequence that can run indefinitely without segmentation artifacts. The framework prioritizes low-latency and streamable output, making it suitable for real-time creative workflows, broadcast overlays, or interactive avatars on consumer-grade GPUs. PersonaLive’s architecture balances visual quality and efficiency by combining motion encoding, temporal modules, and hybrid implicit control signals to preserve identity and stable expression through long sequences.
    Downloads: 1 This Week
    Last Update:
    See Project
  • 10
    Wan2.2

    Wan2.2: Open and Advanced Large-Scale Video Generative Model

    ...Wan2.2 integrates meticulously curated cinematic aesthetic data, enabling precise control over lighting, composition, color tone, and more, for high-quality, customizable video styles. The model is trained on significantly larger datasets than its predecessor, greatly enhancing motion complexity, semantic understanding, and aesthetic diversity. Wan2.2 also open-sources a 5-billion parameter high-compression VAE-based hybrid text-image-to-video (TI2V) model that supports 720P video generation at 24fps on consumer-grade GPUs like the RTX 4090. It supports multiple video generation tasks including text-to-video.
    Downloads: 213 This Week
    Last Update:
    See Project
  • 11
    LatentSync

    Taming Stable Diffusion for Lip Sync

    LatentSync is an open-source framework from ByteDance that produces high-quality lip-synchronization for video by using an audio-conditioned latent diffusion model, bypassing traditional intermediate motion representations. In effect, given a source video (with masked or reference frames) and an audio track, LatentSync directly generates frames whose lip motions and expressions align with the audio, producing convincing talking-head or animated lip-sync output. The system leverages a U-Net diffusion backbone, with cross-attention of audio embeddings (via an audio encoder) and reference video frames to guide generation, and applies a set of loss functions (temporal, perceptual, sync-net based) to enforce lip-sync accuracy, visual fidelity, and temporal consistency. ...
    Downloads: 1 This Week
    Last Update:
    See Project
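    The audio conditioning described in the LatentSync entry above follows a standard cross-attention pattern, sketched generically below. The embedding sizes, token counts, and use of nn.MultiheadAttention are illustrative assumptions, not LatentSync's actual implementation.

    ```python
    import torch
    import torch.nn as nn

    # Generic sketch of audio-to-visual cross-attention conditioning.
    # Embedding sizes and token counts are arbitrary; this is not LatentSync's code.
    cross_attn = nn.MultiheadAttention(embed_dim=320, num_heads=8, batch_first=True)

    visual_latents = torch.randn(1, 16 * 16, 320)   # flattened latent tokens for one frame
    audio_tokens = torch.randn(1, 50, 320)          # audio embeddings for the same time window

    # Visual tokens query the audio sequence, letting lip-region latents align with speech.
    conditioned, _ = cross_attn(query=visual_latents, key=audio_tokens, value=audio_tokens)
    print(conditioned.shape)                         # torch.Size([1, 256, 320])
    ```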
  • 12
    VGGSfM

    VGGSfM: Visual Geometry Grounded Deep Structure From Motion

    ...It leverages tools like PyCOLMAP, poselib, LightGlue, and PyTorch3D for feature matching, pose estimation, and visualization. With minimal configuration, users can process single scenes or full video sequences, apply motion masks to exclude moving objects, and train neural radiance or splatting models directly from reconstructed outputs.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 13
    TorchIO

    Medical imaging toolkit for deep learning

    TorchIO is a Python package containing a set of tools to efficiently read, preprocess, sample, augment, and write 3D medical images in deep learning applications written in PyTorch, including intensity and spatial transforms for data augmentation and preprocessing. These transforms include typical computer vision operations such as random affine transformations, as well as domain-specific ones such as simulation of intensity artifacts due to MRI magnetic field inhomogeneity (bias) or k-space motion artifacts. A minimal usage sketch follows this entry.
    Downloads: 2 This Week
    Last Update:
    See Project
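    As an illustration of the transforms mentioned above, the snippet below composes a small TorchIO augmentation pipeline. The transform classes (Compose, RandomAffine, RandomBiasField, RandomMotion) are part of TorchIO's public API; the specific parameter values and the file path are placeholder assumptions.

    ```python
    import torchio as tio

    # Small TorchIO augmentation pipeline mixing generic and MRI-specific transforms.
    # Parameter values and the image path are placeholders.
    transform = tio.Compose([
        tio.RandomAffine(scales=(0.9, 1.1), degrees=10),  # generic spatial augmentation
        tio.RandomBiasField(),                             # MRI bias-field inhomogeneity
        tio.RandomMotion(),                                # simulated k-space motion artifact
    ])

    subject = tio.Subject(t1=tio.ScalarImage('subject_t1.nii.gz'))  # placeholder file
    augmented = transform(subject)
    ```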
  • 14
    Step-Video-T2V

    State-of-the-art (SoTA) text-to-video pre-trained model

    ...Under the hood it uses a compressed latent representation (a Video-VAE) to reduce spatial and temporal redundancy, and a denoising diffusion (or similar) process over that latent space to generate smooth, plausible motion and visuals. The model handles bilingual input (e.g. English and Chinese) thanks to dual encoders, and supports end-to-end text-to-video generation without requiring external assets. Its training and generation pipeline includes techniques like flow-matching, full 3D attention for temporal consistency, and fine-tuning approaches (e.g. video-based DPO) to improve fidelity and reduce artifacts. ...
    Downloads: 4 This Week
    Last Update:
    See Project
  • 15
    Frigate

    NVR with realtime local object detection for IP cameras

    Frigate is a complete, local NVR designed for Home Assistant with AI object detection. It uses OpenCV and TensorFlow to perform realtime object detection locally for IP cameras. Use of a Google Coral Accelerator is optional, but highly recommended: the Coral will outperform even the best CPUs and can process 100+ FPS with very little overhead.
    Downloads: 35 This Week
    Last Update:
    See Project
  • 16
    MuJoCo MPC

    Real-time behaviour synthesis with MuJoCo, using Predictive Control

    ...MJPC integrates a high-performance GUI and multiple predictive control algorithms, including iLQG, gradient descent, and Predictive Sampling — a competitive, derivative-free method that achieves robust real-time control. The system supports multi-shooting optimization, enabling precise motion planning across diverse domains like quadruped locomotion, humanoid tracking, and dexterous manipulation. In addition to its C++ core, MJPC includes an experimental Python API, enabling integration with custom models and MuJoCo tasks for flexible scripting and experimentation.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 17
    kapture

    Tools for manipulating datasets

    Kapture is a pivot file format, based on text and binary files, used to describe SfM (Structure From Motion) and more generally sensor-acquired data.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 18
    Pixelorama

    A free & open-source 2D sprite editor, made with the Godot Engine

    ...Pixelorama has its own animation timeline just for you! You can work at an individual cel level, where each cel refers to a unique layer and frame. Supports onion skinning, cel linking, motion drawing and frame grouping with tags. Custom brushes, including random brushes. Create or import custom palettes. Import images and edit them inside Pixelorama. If you import multiple files, they will be added as individual animation frames. Importing sprite sheets is also supported.
    Downloads: 32 This Week
    Last Update:
    See Project
  • 19
    The Arcade Library

    Easy to use Python library for creating 2D arcade games

    Arcade is an easy-to-use Python library for creating 2D video games. It provides a modern and straightforward API, enabling developers to craft engaging games and graphical applications efficiently. Arcade supports rendering shapes, handling user input, and managing game physics, making it suitable for both beginners and experienced developers.
    Downloads: 3 This Week
    Last Update:
    See Project
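    To give a flavor of the Arcade API described above, here is a minimal program that opens a window and animates a circle; the window size, speed, and color are arbitrary example values rather than anything taken from the project's documentation.

    ```python
    import arcade

    # Minimal Arcade example: a window that animates a circle moving across the screen.
    # Sizes, speed, and color are arbitrary example values.
    class Demo(arcade.Window):
        def __init__(self):
            super().__init__(640, 480, "Arcade demo")
            self.x = 0.0

        def on_update(self, delta_time):
            # Move 200 pixels per second, wrapping around at the window edge.
            self.x = (self.x + 200 * delta_time) % self.width

        def on_draw(self):
            self.clear()
            arcade.draw_circle_filled(self.x, 240, 20, arcade.color.BLUE)


    Demo()
    arcade.run()
    ```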
  • 20
    Moondream

    Tiny vision language model

    Moondream is a compact, open-source vision-language model designed to run efficiently on consumer hardware and edge devices. Despite its small size, it can caption images and answer natural-language questions about their content, making it practical for lightweight visual understanding tasks where larger multimodal models would be impractical. The repository provides the inference code and usage examples needed to integrate the model into applications.
    Downloads: 4 This Week
    Last Update:
    See Project
  • 21
    Watermark Anything

    Official implementation of Watermark Anything with Localized Messages

    ...Unlike traditional watermarking methods that rely on uniform embedding, WAM supports spatially localized watermarks, enabling targeted protection of specific image regions or objects. The model is trained to balance imperceptibility, ensuring minimal visual distortion, with robustness against transformations and edits such as cropping or motion.
    Downloads: 3 This Week
    Last Update:
    See Project
  • 22
    NVIDIA Isaac Lab

    Unified framework for robot learning built on NVIDIA Isaac Sim

    Isaac Lab is an open-source modular robotics learning framework built atop Isaac Sim. It simplifies research workflows across reinforcement learning, imitation learning, and motion planning by offering robust, GPU-accelerated simulation with realistic sensor and physics fidelity—ideal for sim-to-real robot training. Compatible and optimized for use with Isaac Sim versions (e.g., Sim 5.0 and 4.5). GPU-accelerated, high-fidelity physics and sensor simulation suitable for complex learning tasks. Offers a variety of robotic environment simulations on both Linux and Windows.
    Downloads: 1 This Week
    Last Update:
    See Project
  • 23
    Map-Anything

    MapAnything: Universal Feed-Forward Metric 3D Reconstruction

    Map-Anything is a universal, feed-forward transformer for metric 3D reconstruction that predicts a scene’s geometry and camera parameters directly from visual inputs. Instead of stitching together many task-specific models, it uses a single architecture that supports a wide range of 3D tasks—multi-image structure-from-motion, multi-view stereo, monocular metric depth, registration, depth completion, and more. The model flexibly accepts different input combinations (images, intrinsics, poses, sparse or dense depth) and produces a rich set of outputs including per-pixel 3D points, camera intrinsics, camera poses, ray directions, confidence maps, and validity masks. ...
    Downloads: 4 This Week
    Last Update:
    See Project
  • 24
    Fast3R

    Fast3R: Towards 3D Reconstruction of 1000+ Images in One Forward Pass

    ...It represents a next-generation feedforward 3D reconstruction model capable of producing dense point clouds and camera poses for hundreds to thousands of images or video frames in a single inference pass—eliminating the need for slow, iterative structure-from-motion pipelines. Built on PyTorch Lightning and extending concepts from DUSt3R and Spann3r, Fast3R unifies multi-view geometry, depth estimation, and camera registration within a single transformer-based architecture. It outputs high-quality 3D scene representations from unordered or sequential views, scaling to large datasets and varied camera intrinsics. ...
    Downloads: 2 This Week
    Last Update:
    See Project
  • 25
    LiveAvatar

    Streaming Real-time Audio-Driven Avatar Generation

    LiveAvatar is an open-source research and implementation project that provides a unified framework for real-time, streaming, interactive avatar video generation driven by audio and other control signals. It implements techniques from state-of-the-art diffusion-based avatar modeling to support infinite-length continuous video generation with low latency, enabling interactive AI avatars that maintain continuity and realism over extended sessions. The project co-designs algorithms and system...
    Downloads: 1 This Week
    Last Update:
    See Project
  • Previous
  • You're on page 1
  • 2
  • 3
  • Next