18 projects for "modular" with 2 filters applied:

  • 1
    LTX-2

    Python inference and LoRA trainer package for the LTX-2 audio–video model

    LTX-2 is an open-source audio–video generation model from Lightricks, and this package provides the Python tooling around it: inference utilities for turning prompts into video with synchronized audio, plus a LoRA trainer for fine-tuning the model on custom styles or subjects without retraining it from scratch. The repository covers model loading, generation pipelines, and adapter-training workflows so developers can run and customize LTX-2 locally or inside their own services. ...
    Downloads: 47 This Week
    See Project
  • 2
    SlowFast

    Video understanding codebase from FAIR for reproducing video models

    ...Together, these two pathways complement each other, allowing the network to model both appearance and motion without excessive computational cost. The architecture is modular and supports tasks like action recognition, temporal localization, and video segmentation, performing strongly on benchmarks like Kinetics and AVA. The repository provides training recipes, pretrained models, and distributed pipelines optimized for large-scale video datasets.
    Downloads: 6 This Week
    See Project
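
    A minimal PyTorch sketch of the two-pathway idea in the SlowFast entry above: a slow path samples few frames with many channels for appearance, a fast path keeps all frames with few channels for motion, and a lateral connection fuses them. Class names, channel counts, and the frame-rate ratio are illustrative assumptions, not SlowFast's actual modules.

      import torch
      import torch.nn as nn

      class TwoPathwayStem(nn.Module):
          """Toy Slow/Fast split: slow path = few frames, many channels;
          fast path = all frames, few channels; a time-strided lateral
          connection maps fast features onto the slow temporal grid."""
          def __init__(self, in_ch=3, slow_ch=64, fast_ch=8, alpha=8):
              super().__init__()
              self.alpha = alpha  # temporal sampling ratio between the two paths
              self.slow = nn.Conv3d(in_ch, slow_ch, kernel_size=(1, 7, 7), padding=(0, 3, 3))
              self.fast = nn.Conv3d(in_ch, fast_ch, kernel_size=(5, 7, 7), padding=(2, 3, 3))
              self.lateral = nn.Conv3d(fast_ch, slow_ch, kernel_size=(alpha, 1, 1), stride=(alpha, 1, 1))

          def forward(self, clip):                        # clip: (B, C, T, H, W)
              slow = self.slow(clip[:, :, ::self.alpha])  # low frame rate: appearance
              fast = self.fast(clip)                      # full frame rate: motion
              return slow + self.lateral(fast), fast      # fused slow features, fast features

      x = torch.randn(2, 3, 32, 56, 56)                   # toy 32-frame clip
      slow_feat, fast_feat = TwoPathwayStem()(x)
      print(slow_feat.shape, fast_feat.shape)             # (2, 64, 4, 56, 56) (2, 8, 32, 56, 56)
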
  • 3
    Sapiens

    High-resolution models for human tasks

    Sapiens is a family of high-resolution vision models from Meta AI built for human-centric tasks such as 2D pose estimation, body-part segmentation, depth estimation, and surface-normal prediction. The models are pretrained on a large corpus of human images and released at several sizes, and the shared-backbone, task-specific-head design makes it straightforward to fine-tune individual tasks or trade accuracy against compute for different deployments.
    Downloads: 0 This Week
    See Project
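
    A generic illustration of one step such pose models involve: decoding per-joint heatmaps into keypoint coordinates. This NumPy sketch is self-contained toy code and is not tied to Sapiens' actual inference API.

      import numpy as np

      def decode_keypoints(heatmaps, input_size):
          """Turn per-joint heatmaps (num_joints, Hh, Wh) into (x, y, confidence)
          keypoints in the resolution of the original input image."""
          num_joints, hh, wh = heatmaps.shape
          in_h, in_w = input_size
          keypoints = []
          for j in range(num_joints):
              idx = np.argmax(heatmaps[j])
              y, x = np.unravel_index(idx, (hh, wh))
              conf = float(heatmaps[j, y, x])
              # rescale from heatmap resolution back to input-image coordinates
              keypoints.append((x * in_w / wh, y * in_h / hh, conf))
          return keypoints

      # Toy example: 17 COCO-style joints on a 64x48 heatmap grid
      fake_heatmaps = np.random.rand(17, 64, 48)
      print(decode_keypoints(fake_heatmaps, input_size=(256, 192))[:3])
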
  • 4
    Large Concept Model

    Language modeling in a sentence representation space

    ...Probing tools help diagnose what the model knows—e.g., attribute recognition, relation understanding, or compositionality—so you can iterate on data and objectives. The design is modular, making it straightforward to swap backbones, change objectives, or integrate retrieval components.
    Downloads: 0 This Week
    See Project
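
    A toy PyTorch sketch of "language modeling in a sentence representation space": each sentence is a fixed-size embedding, and a small causal transformer regresses the embedding of the next sentence. The encoder is omitted and all dimensions are placeholders, not the actual LCM/SONAR components.

      import torch
      import torch.nn as nn

      EMB_DIM = 256  # placeholder; the real system uses a pretrained sentence encoder

      class NextSentencePredictor(nn.Module):
          """Given embeddings of sentences 1..k, predict the embedding of sentence k+1."""
          def __init__(self, dim=EMB_DIM):
              super().__init__()
              layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
              self.backbone = nn.TransformerEncoder(layer, num_layers=2)
              self.head = nn.Linear(dim, dim)

          def forward(self, sent_embs):                          # (B, K, dim)
              k = sent_embs.size(1)
              causal = torch.triu(torch.full((k, k), float("-inf")), diagonal=1)
              return self.head(self.backbone(sent_embs, mask=causal))

      # Toy training step: regress the next sentence embedding (MSE in embedding space)
      model = NextSentencePredictor()
      doc = torch.randn(8, 12, EMB_DIM)                          # 8 documents x 12 sentence embeddings
      pred = model(doc[:, :-1])
      loss = nn.functional.mse_loss(pred, doc[:, 1:])
      loss.backward()
      print(float(loss))
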
  • 5
    IndexTTS2

    Industrial-level controllable zero-shot text-to-speech system

    IndexTTS is a modern, zero-shot text-to-speech (TTS) system engineered to deliver high-quality, natural-sounding speech synthesis with few requirements and strong voice-cloning capabilities. It builds on state-of-the-art models such as XTTS and other modern neural TTS backbones, improving them with a conformer-based speech conditional encoder and upgrading the decoder to a high-quality vocoder (BigVGAN2), leading to clearer and more natural audio output. The system supports zero-shot voice...
    Downloads: 10 This Week
    See Project
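
    A hedged usage sketch of a zero-shot, reference-conditioned synthesis call. The import path, constructor arguments, and infer(...) signature below are assumptions for illustration only; the repository's README documents the real entry point.

      # HYPOTHETICAL API: module path, constructor args, and method signature are
      # assumptions, not verified against the IndexTTS2 codebase.
      from indextts.infer import IndexTTS  # assumed entry point

      tts = IndexTTS(model_dir="checkpoints", cfg_path="checkpoints/config.yaml")  # assumed args
      tts.infer(
          "voices/reference_speaker.wav",   # short reference clip for zero-shot voice cloning
          "Zero-shot synthesis keeps the reference speaker's timbre on unseen text.",
          "out/cloned.wav",                 # where the synthesized waveform is written
      )
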
  • 6
    DeepSeek-OCR

    Contexts Optical Compression

    DeepSeek-OCR is an open-source optical character recognition solution built as part of the broader DeepSeek AI vision-language ecosystem. It is designed to extract text from images, PDFs, and scanned documents, and integrates with multimodal capabilities that understand layout, context, and visual elements beyond raw character recognition. The system treats OCR not simply as “read the text” but as “understand what the text is doing in the image”—for example distinguishing captions from body...
    Downloads: 10 This Week
    See Project
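
    A hedged sketch of loading the released weights through Hugging Face transformers with remote code enabled. The Hub id is inferred from the project name and the infer(...) helper is a placeholder assumption; consult the model card for the documented interface.

      # HYPOTHETICAL USAGE: the model id and the custom inference helper are assumptions.
      from transformers import AutoModel, AutoTokenizer

      model_id = "deepseek-ai/DeepSeek-OCR"                      # assumed Hub id
      tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
      model = AutoModel.from_pretrained(model_id, trust_remote_code=True).eval()

      # Repositories with custom code typically expose a helper such as model.infer(...);
      # treat the name and arguments below as placeholders.
      result = model.infer(
          tokenizer,
          prompt="<image>\nExtract the text and describe the layout.",
          image_file="scanned_page.png",
      )
      print(result)
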
  • 7
    Janus

    Unified Multimodal Understanding and Generation Models

    Janus is a sophisticated open-source project from DeepSeek AI that aims to unify both visual understanding and image generation in a single model architecture. Rather than having separate systems for “look and describe” and “prompt and generate”, Janus uses an autoregressive transformer framework with a decoupled visual encoder—allowing it to ingest images for comprehension and to produce images from text prompts with shared internal representations. The design tackles long-standing...
    Downloads: 5 This Week
    See Project
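
    A conceptual toy skeleton of the data flow described above: separate visual pathways for understanding and generation feeding one shared autoregressive core. Every module, dimension, and name here is a placeholder to show the idea, not Janus's actual architecture or weights.

      import torch
      import torch.nn as nn

      class ToyUnifiedModel(nn.Module):
          """Decoupled visual handling around a shared backbone: an understanding
          encoder turns pixels into input tokens, while a separate head predicts
          discrete image tokens when generating from text context."""
          def __init__(self, dim=256, image_vocab=1024, text_vocab=32000):
              super().__init__()
              self.understand_encoder = nn.Sequential(           # pixels -> embedding tokens
                  nn.Conv2d(3, dim, kernel_size=16, stride=16), nn.Flatten(2))
              self.text_embed = nn.Embedding(text_vocab, dim)
              layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
              self.core = nn.TransformerEncoder(layer, num_layers=2)  # shared backbone (toy)
              self.text_head = nn.Linear(dim, text_vocab)        # used when describing images
              self.image_head = nn.Linear(dim, image_vocab)      # used when generating image tokens

          def forward(self, image, text_ids):
              img_tokens = self.understand_encoder(image).transpose(1, 2)  # (B, N, dim)
              txt_tokens = self.text_embed(text_ids)                       # (B, L, dim)
              h = self.core(torch.cat([img_tokens, txt_tokens], dim=1))
              return self.text_head(h), self.image_head(h)

      model = ToyUnifiedModel()
      logits_text, logits_image = model(torch.randn(1, 3, 64, 64), torch.randint(0, 32000, (1, 8)))
      print(logits_text.shape, logits_image.shape)
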
  • 8
    Map-Anything

    MapAnything: Universal Feed-Forward Metric 3D Reconstruction

    Map-Anything is a universal, feed-forward transformer for metric 3D reconstruction that predicts a scene’s geometry and camera parameters directly from visual inputs. Instead of stitching together many task-specific models, it uses a single architecture that supports a wide range of 3D tasks—multi-image structure-from-motion, multi-view stereo, monocular metric depth, registration, depth completion, and more. The model flexibly accepts different input combinations (images, intrinsics, poses,...
    Downloads: 5 This Week
    See Project
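
    Metric depth plus camera intrinsics, the kind of output Map-Anything predicts, can be unprojected into a 3D point cloud with standard pinhole geometry. The NumPy sketch below shows that step on toy data and does not call the model itself.

      import numpy as np

      def depth_to_points(depth, K):
          """Unproject a metric depth map (H, W) into camera-frame 3D points
          using pinhole intrinsics K = [[fx, 0, cx], [0, fy, cy], [0, 0, 1]]."""
          h, w = depth.shape
          fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
          u, v = np.meshgrid(np.arange(w), np.arange(h))
          x = (u - cx) / fx * depth
          y = (v - cy) / fy * depth
          return np.stack([x, y, depth], axis=-1).reshape(-1, 3)   # (H*W, 3) in metres

      # Toy example: a flat surface 2 m away seen through a 320x240 camera
      K = np.array([[300.0, 0.0, 160.0], [0.0, 300.0, 120.0], [0.0, 0.0, 1.0]])
      points = depth_to_points(np.full((240, 320), 2.0), K)
      print(points.shape, points[:2])
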
  • 9
    PokeeResearch-7B

    Pokee Deep Research Model Open Source Repo

    ...It is built to operate end-to-end: planning a research strategy, gathering sources, reasoning over conflicting claims, and writing a grounded response. The repository includes evaluation results on multi-step QA and research benchmarks, illustrating how real-time web context boosts accuracy. Because the system is modular, you can swap the search component, reader, or policy to fit private deployments or different data domains. It’s aimed at developers who want a transparent, hackable research agent they can run locally or wire into existing workflows.
    Downloads: 4 This Week
    See Project
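
    A minimal sketch of the plan, search, read, answer loop with a swappable search component, mirroring the modularity described above. The stub functions stand in for the actual 7B policy, retriever, and reader.

      from typing import Callable, List

      def toy_search(query: str) -> List[str]:
          """Stand-in for a web or private-index search component."""
          return [f"snippet about {query} #1", f"snippet about {query} #2"]

      def toy_llm(prompt: str) -> str:
          """Stand-in for the research model; a real deployment would call the policy model."""
          return f"(model output for: {prompt[:60]}...)"

      def research_agent(question: str, search: Callable[[str], List[str]], steps: int = 2) -> str:
          """Plan queries, gather evidence, then write a grounded answer.
          Swapping `search` is how a private index or different reader is plugged in."""
          evidence: List[str] = []
          for step in range(steps):
              query = toy_llm(f"Plan search query {step + 1} for: {question}")
              evidence.extend(search(query))
          context = "\n".join(evidence)
          return toy_llm(f"Answer '{question}' using only this evidence:\n{context}")

      print(research_agent("What makes deep-research agents modular?", toy_search))
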
  • 10
    DeepGEMM

    Clean and efficient FP8 GEMM kernels with fine-grained scaling

    DeepGEMM is a specialized CUDA library for efficient, high-performance general matrix multiplication (GEMM) operations, with particular focus on low-precision formats such as FP8 (and experimental support for BF16). The library is designed to work cleanly and simply, avoiding overly templated or heavily abstracted code, while still delivering performance that rivals expert-tuned libraries. It supports both standard and “grouped” GEMMs, which is useful for architectures like Mixture of...
    Downloads: 2 This Week
    See Project
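
    A NumPy simulation of the fine-grained scaling idea behind low-precision GEMMs: quantize in small blocks along the inner dimension, keep one scale per block, and rescale when accumulating in higher precision. It models the numerics only; DeepGEMM's actual FP8 CUDA kernels are far more involved, and float16 stands in for FP8 storage here.

      import numpy as np

      FP8_MAX = 448.0   # largest magnitude representable in float8 e4m3
      BLOCK = 128       # per-block scaling granularity along the K (inner) dimension

      def gemm_blockscaled(a, b):
          """Each K-block gets its own per-row (A) and per-column (B) scale so values
          fit the narrow low-precision range; partial products are rescaled when
          accumulated in float32."""
          m, k = a.shape
          _, n = b.shape
          out = np.zeros((m, n), dtype=np.float32)
          for start in range(0, k, BLOCK):
              a_blk = a[:, start:start + BLOCK]
              b_blk = b[start:start + BLOCK, :]
              sa = np.abs(a_blk).max(axis=1, keepdims=True) / FP8_MAX + 1e-12   # (m, 1)
              sb = np.abs(b_blk).max(axis=0, keepdims=True) / FP8_MAX + 1e-12   # (1, n)
              qa = (a_blk / sa).astype(np.float16)    # float16 stands in for FP8 storage
              qb = (b_blk / sb).astype(np.float16)
              out += (qa.astype(np.float32) @ qb.astype(np.float32)) * (sa @ sb)
          return out

      a = np.random.randn(64, 512).astype(np.float32) * 10
      b = np.random.randn(512, 32).astype(np.float32) * 0.01
      print("max abs error vs. float32 GEMM:", np.abs(gemm_blockscaled(a, b) - a @ b).max())
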
  • 11
    Claude Code Config

    My personal Claude Code configuration

    Claude Code Config is a highly customizable personal configuration repository for Claude Code, containing tailored rules, hooks, agents, skills, and commands meant to enhance the coding assistant experience. The project centralizes configuration files that instruct Claude Code how to behave in different contexts, automating repetitive tasks and enforcing coding patterns across languages or project types. Its rulesets can apply path-scoped conventions (such as for TypeScript or test files),...
    Downloads: 0 This Week
    See Project
  • 12
    DFlash

    Block Diffusion for Ultra-Fast Speculative Decoding

    DFlash is an open-source framework for ultra-fast speculative decoding using a lightweight block diffusion model to draft text in parallel with a target large language model, dramatically improving inference speed without sacrificing generation quality. It acts as a “drafter” that proposes likely continuations which the main model then verifies, enabling significant throughput gains compared to traditional autoregressive decoding methods that generate token by token. This approach has been...
    Downloads: 0 This Week
    See Project
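
    A toy sketch of the generic draft-then-verify loop speculative decoding relies on: a cheap drafter proposes a block of tokens and the target keeps the longest prefix it agrees with. Greedy verification over stub models is shown for clarity; DFlash's block-diffusion drafter and acceptance rule are more sophisticated, and a real target verifies the whole block in one batched forward pass rather than a Python loop.

      from typing import List

      # Toy "models": each maps a token sequence to its next token deterministically.
      def target_model(tokens: List[int]) -> int:
          return (sum(tokens) * 31 + 7) % 100                   # stand-in for the large LLM

      def draft_model(tokens: List[int]) -> int:
          nxt = target_model(tokens)
          return nxt if len(tokens) % 4 else (nxt + 1) % 100    # occasionally wrong drafter

      def speculative_decode(prompt: List[int], new_tokens: int, block: int = 4) -> List[int]:
          seq = list(prompt)
          while len(seq) < len(prompt) + new_tokens:
              # 1) drafter proposes `block` tokens cheaply
              draft = []
              for _ in range(block):
                  draft.append(draft_model(seq + draft))
              # 2) target verifies: accept the longest prefix it would also have produced,
              #    then emit its own correction at the first mismatch
              accepted = []
              for tok in draft:
                  if target_model(seq + accepted) == tok:
                      accepted.append(tok)
                  else:
                      accepted.append(target_model(seq + accepted))
                      break
              seq.extend(accepted)
          return seq[: len(prompt) + new_tokens]

      print(speculative_decode([1, 2, 3], new_tokens=10))
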
  • 13
    MetaCLIP

    ICLR2024 Spotlight: curation/training code, metadata, distribution

    MetaCLIP is the research codebase behind “Demystifying CLIP Data” (ICLR 2024 Spotlight), which focuses on how CLIP-style training data is curated rather than on changing the model architecture. Its curation pipeline matches raw image–text pairs against a large metadata vocabulary of concepts and then balances the result so that a few head concepts cannot dominate the training distribution, which is key to strong zero-shot transfer. The repository provides the curation and training code, the metadata, details of the resulting data distribution, and pre-trained models for...
    Downloads: 0 This Week
    See Project
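
    A small toy of the curation-and-balancing step described above: match captions against a metadata vocabulary, then cap how many pairs any single entry may contribute so head concepts stop dominating. The vocabulary, captions, and cap value are placeholders, not the real metadata or thresholds.

      import random
      from collections import defaultdict

      random.seed(0)
      metadata = ["dog", "cat", "telescope", "sunset"]          # toy stand-in for the metadata vocabulary
      captions = (["a photo of a dog"] * 50 + ["a cat sleeping"] * 20
                  + ["a telescope at night"] * 3 + ["sunset over the sea"] * 2)

      # 1) substring-match each caption against the metadata vocabulary
      matches = defaultdict(list)
      for caption in captions:
          for entry in metadata:
              if entry in caption:
                  matches[entry].append(caption)

      # 2) balance: each entry may contribute at most `cap` pairs, so tail concepts
      #    keep all their data while head concepts are subsampled
      cap = 5
      curated = []
      for entry, pool in matches.items():
          keep = pool if len(pool) <= cap else random.sample(pool, cap)
          curated.extend(keep)

      print({e: min(len(p), cap) for e, p in matches.items()}, "->", len(curated), "pairs kept")
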
  • 14
    Grok-1

    Open-source, high-performance Mixture-of-Experts large language model

    Grok-1 is a 314-billion-parameter Mixture-of-Experts (MoE) large language model developed by xAI. Designed to optimize computational efficiency, it activates only 25% of its weights for each input token. In March 2024, xAI released Grok-1's model weights and architecture under the Apache 2.0 license, making them openly accessible to developers. The accompanying GitHub repository provides JAX example code for loading and running the model. Due to its substantial size, utilizing Grok-1...
    Downloads: 12 This Week
    See Project
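
    A toy PyTorch sketch of why a Mixture-of-Experts layer activates only a fraction of its weights per token: a router picks the top 2 of 8 expert MLPs, so roughly a quarter of the expert parameters participate in each token's forward pass. Dimensions and the routing details are illustrative, not Grok-1's.

      import torch
      import torch.nn as nn

      class ToyMoE(nn.Module):
          """Top-2 routing over 8 expert MLPs; only the selected experts run per token."""
          def __init__(self, dim=64, hidden=256, n_experts=8, top_k=2):
              super().__init__()
              self.router = nn.Linear(dim, n_experts)
              self.experts = nn.ModuleList(
                  nn.Sequential(nn.Linear(dim, hidden), nn.GELU(), nn.Linear(hidden, dim))
                  for _ in range(n_experts))
              self.top_k = top_k

          def forward(self, x):                              # x: (tokens, dim)
              weights, idx = self.router(x).softmax(-1).topk(self.top_k, dim=-1)
              out = torch.zeros_like(x)
              for slot in range(self.top_k):                 # run only the chosen experts
                  for e, expert in enumerate(self.experts):
                      mask = idx[:, slot] == e
                      if mask.any():
                          out[mask] += weights[mask, slot, None] * expert(x[mask])
              return out

      moe = ToyMoE()
      print(moe(torch.randn(16, 64)).shape)                  # (16, 64)
      # With top-2 of 8 experts, ~25% of the expert weights are touched per token,
      # mirroring the "activates only a quarter of its weights" behaviour at toy scale.
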
  • 15
    Consistency Models

    Official repo for consistency models

    consistency_models is the repository for Consistency Models, a new family of generative models introduced by OpenAI that aim to generate high-quality samples by mapping noise directly into data — circumventing the need for lengthy diffusion chains. It builds on and extends diffusion model frameworks (e.g. based on the guided-diffusion codebase), adding techniques like consistency distillation and consistency training to enable fast, often one-step, sample generation. The repo is implemented...
    Downloads: 0 This Week
    See Project
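
    A compact PyTorch sketch of the consistency property described above: one network maps any noisy point on a trajectory toward the clean sample, so sampling can be a single step, and training pulls outputs at adjacent noise levels together. The tiny MLP, noise schedule, and loop are toy placeholders, not the official implementation.

      import torch
      import torch.nn as nn

      class ToyConsistencyNet(nn.Module):
          """f(x_t, t) should return (approximately) the clean sample x_0 for any t."""
          def __init__(self, dim=2):
              super().__init__()
              self.net = nn.Sequential(nn.Linear(dim + 1, 64), nn.SiLU(), nn.Linear(64, dim))

          def forward(self, x_t, t):
              return self.net(torch.cat([x_t, t[:, None]], dim=-1))

      f = ToyConsistencyNet()
      opt = torch.optim.Adam(f.parameters(), lr=1e-3)
      data = torch.randn(256, 2) * 0.1 + 2.0                 # toy 2-D data distribution

      for _ in range(100):                                   # consistency-training-style loop
          x0 = data[torch.randint(0, 256, (64,))]
          t2 = torch.rand(64) * 0.9 + 0.1                    # two nearby noise levels
          t1 = t2 - 0.05
          noise = torch.randn_like(x0)
          # pull outputs on the same noisy trajectory together; the real method also
          # enforces the boundary condition f(x, t_min) = x via its parameterization
          loss = nn.functional.mse_loss(
              f(x0 + t1[:, None] * noise, t1),
              f(x0 + t2[:, None] * noise, t2).detach())
          opt.zero_grad(); loss.backward(); opt.step()

      # One-step sampling: map noise at the largest noise level straight to data space
      print(f(torch.randn(5, 2), torch.ones(5)))
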
  • 16
    DiT (Diffusion Transformers)

    Official PyTorch Implementation of "Scalable Diffusion Models with Transformers"

    ...The model architecture parallels large language models but for image tokens—each block refines noisy latent representations toward cleaner outputs through iterative denoising steps. DiT achieves strong results on benchmarks like ImageNet and LSUN while being architecturally simple and highly modular. It supports variable resolution, conditioning on class or text embeddings, and integration with latent autoencoders (like those used in Stable Diffusion).
    Downloads: 1 This Week
    See Project
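
    A minimal PyTorch sketch of the kind of conditioned block DiT stacks over latent image tokens: a timestep/class embedding scales and shifts the normalized activations before attention and the MLP. Dimensions and the modulation details are simplified relative to the official code.

      import torch
      import torch.nn as nn

      class ToyDiTBlock(nn.Module):
          """Transformer block over latent patch tokens whose LayerNorms are
          modulated (scaled/shifted) by a conditioning embedding (timestep + class)."""
          def __init__(self, dim=256, heads=4):
              super().__init__()
              self.norm1 = nn.LayerNorm(dim, elementwise_affine=False)
              self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
              self.norm2 = nn.LayerNorm(dim, elementwise_affine=False)
              self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
              self.modulation = nn.Linear(dim, 4 * dim)      # scale/shift for both norms

          def forward(self, tokens, cond):                   # tokens: (B, N, dim), cond: (B, dim)
              s1, b1, s2, b2 = self.modulation(cond).unsqueeze(1).chunk(4, dim=-1)
              h = self.norm1(tokens) * (1 + s1) + b1         # adaLN-style modulation
              tokens = tokens + self.attn(h, h, h, need_weights=False)[0]
              h = self.norm2(tokens) * (1 + s2) + b2
              return tokens + self.mlp(h)

      block = ToyDiTBlock()
      latent_tokens = torch.randn(2, 16 * 16, 256)           # e.g. a 16x16 grid of latent patches
      cond = torch.randn(2, 256)                             # timestep + class embedding
      print(block(latent_tokens, cond).shape)                # (2, 256, 256)
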
  • 17
    fairseq-lua

    Facebook AI Research Sequence-to-Sequence Toolkit

    ...The framework implements sequence-to-sequence models with attention, beam search decoding, and distributed training, providing a research platform for exploring translation, summarization, and language modeling. Its modular design made it easy to prototype new architectures by modifying encoders, decoders, or attention mechanisms. Although now deprecated in favor of the PyTorch rewrite, fairseq-lua played a key role in advancing large-scale NMT systems, such as early versions of Facebook’s production translation models. It remains an important historical reference for neural sequence learning frameworks.
    Downloads: 0 This Week
    See Project
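
    A generic beam-search sketch, written in Python purely for illustration (fairseq-lua itself is Lua/Torch): keep the k best-scoring partial hypotheses, expand each with next-token log-probabilities, and stop once every beam reaches end-of-sequence. The scoring table is a stub standing in for a trained decoder.

      from typing import Dict, List, Tuple

      EOS = "</s>"

      def next_token_logprobs(prefix: Tuple[str, ...]) -> Dict[str, float]:
          """Stand-in for one decoder step; a real model scores the whole vocabulary."""
          table = {
              (): {"the": -0.2, "a": -1.8},
              ("the",): {"cat": -0.3, "dog": -1.5},
              ("the", "cat"): {"sat": -0.4, EOS: -1.2},
              ("the", "cat", "sat"): {EOS: -0.1},
          }
          return table.get(prefix, {EOS: -0.05})

      def beam_search(beam_size: int = 2, max_len: int = 5) -> List[Tuple[float, Tuple[str, ...]]]:
          beams = [(0.0, ())]                                  # (cumulative log-prob, tokens)
          for _ in range(max_len):
              candidates = []
              for score, toks in beams:
                  if toks and toks[-1] == EOS:                 # finished hypotheses carry over
                      candidates.append((score, toks))
                      continue
                  for tok, logp in next_token_logprobs(toks).items():
                      candidates.append((score + logp, toks + (tok,)))
              beams = sorted(candidates, key=lambda c: c[0], reverse=True)[:beam_size]
              if all(t and t[-1] == EOS for _, t in beams):
                  break
          return beams

      print(beam_search())
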
  • 18
    SG2Im

    Code for "Image Generation from Scene Graphs", Johnson et al, CVPR 201

    sg2im is a research codebase that learns to synthesize images from scene graphs—structured descriptions of objects and their relationships. Instead of conditioning on free-form text alone, it leverages graph structure to control layout and interactions, generating scenes that respect constraints like “person left of dog” or “cup on table.” The pipeline typically predicts object layouts (bounding boxes and masks) from the graph, then renders a realistic image conditioned on those layouts....
    Downloads: 0 This Week
    See Project
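
    A toy sketch of the first stage of the pipeline described above: object and predicate embeddings are mixed by one round of message passing over the scene graph, then a small head predicts a bounding box per object. Real sg2im adds mask prediction and an image generator on top; every module and dimension here is a placeholder.

      import torch
      import torch.nn as nn

      class ToySceneGraphToLayout(nn.Module):
          """Scene graph (objects + subject-predicate-object triples) -> per-object boxes."""
          def __init__(self, n_obj_types=10, n_pred_types=5, dim=64):
              super().__init__()
              self.obj_embed = nn.Embedding(n_obj_types, dim)
              self.pred_embed = nn.Embedding(n_pred_types, dim)
              self.message = nn.Linear(3 * dim, dim)        # mixes (subject, predicate, object)
              self.box_head = nn.Linear(dim, 4)             # (x, y, w, h) in [0, 1]

          def forward(self, objs, triples):                 # objs: (O,), triples: (T, 3)
              h = self.obj_embed(objs)
              s, p, o = triples[:, 0], triples[:, 1], triples[:, 2]
              msg = self.message(torch.cat([h[s], self.pred_embed(p), h[o]], dim=-1))
              h = h.index_add(0, s, msg).index_add(0, o, msg)   # one round of message passing
              return torch.sigmoid(self.box_head(h))

      # Toy graph: "person left-of dog", "cup on table" (indices into `objs`)
      objs = torch.tensor([0, 3, 5, 7])
      triples = torch.tensor([[0, 1, 1], [2, 0, 3]])
      print(ToySceneGraphToLayout()(objs, triples))         # 4 predicted boxes
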