ComfyUI_TensorRT is an extension that runs ComfyUI inference through NVIDIA’s TensorRT for faster, more efficient execution on supported GPUs. It bridges ComfyUI’s flexible, node-based workflows and TensorRT’s highly optimized engine format, so complex diffusion or image-processing graphs can be accelerated without rewriting the pipeline. The repo covers converting models into TensorRT engines and wiring those engines into ComfyUI nodes. This is especially attractive for power users who run many generations, or who host ComfyUI on dedicated hardware and want to squeeze every bit of performance from the GPU. In short, it takes ComfyUI from “it runs” to “it runs fast” on NVIDIA GPUs.
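For orientation, here is a minimal sketch of the generic ONNX-to-TensorRT build flow that this kind of conversion typically rests on, using NVIDIA’s standard `tensorrt` Python API. The file names (`unet.onnx`, `unet.engine`) are hypothetical, and the extension itself wraps this process in its own conversion nodes rather than asking users to write this code.

```python
# Generic ONNX -> TensorRT engine build (illustrative, not the extension's API).
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
)
parser = trt.OnnxParser(network, logger)

# "unet.onnx" is a hypothetical export of the diffusion model's UNet.
with open("unet.onnx", "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise RuntimeError("ONNX parse failed")

config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.FP16)  # half precision for speed, if supported
# Note: models with dynamic input shapes also need an optimization profile here.

engine_bytes = builder.build_serialized_network(network, config)
if engine_bytes is None:
    raise RuntimeError("engine build failed")
with open("unet.engine", "wb") as f:
    f.write(engine_bytes)
```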
Features
- TensorRT-powered acceleration for ComfyUI inference
- Integration nodes to run TensorRT engines inside workflows (a loading sketch follows this list)
- Conversion guidance for models to TensorRT format
- Targets NVIDIA GPU users who want more speed and efficiency
- Keeps the ComfyUI node-based UX while making it faster
- Useful for heavy, repeated, or server-hosted image generation workloads
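To illustrate what the integration nodes handle behind the scenes, the following sketch deserializes a built engine and runs one synchronous inference pass using the TensorRT 8.x binding API plus PyCUDA. It assumes a static-shape engine and the hypothetical `unet.engine` file from the earlier sketch; it is not the extension’s actual node code.

```python
# Deserialize a TensorRT engine and run one inference pass (illustrative).
import numpy as np
import pycuda.autoinit  # noqa: F401 (importing this creates a CUDA context)
import pycuda.driver as cuda
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)

# "unet.engine" is the hypothetical file produced by the build sketch above.
with open("unet.engine", "rb") as f:
    engine = trt.Runtime(logger).deserialize_cuda_engine(f.read())
context = engine.create_execution_context()

# Allocate one host/device buffer per binding (assumes static shapes).
host_bufs, dev_bufs, bindings = [], [], []
for i in range(engine.num_bindings):
    shape = engine.get_binding_shape(i)
    dtype = trt.nptype(engine.get_binding_dtype(i))
    host = np.zeros(trt.volume(shape), dtype=dtype)
    dev = cuda.mem_alloc(host.nbytes)
    host_bufs.append(host)
    dev_bufs.append(dev)
    bindings.append(int(dev))

# Copy inputs in, execute synchronously, copy outputs back.
for i in range(engine.num_bindings):
    if engine.binding_is_input(i):
        cuda.memcpy_htod(dev_bufs[i], host_bufs[i])
context.execute_v2(bindings)
for i in range(engine.num_bindings):
    if not engine.binding_is_input(i):
        cuda.memcpy_dtoh(host_bufs[i], dev_bufs[i])
```

In the extension, steps like these sit behind ComfyUI nodes, so users keep the familiar drag-and-connect workflow while the engine runs underneath.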