...It introduces a “Workflow-to-APP” concept, where a ComfyUI graph can be turned into a Web App through an AppInfo node, complete with categories, batch prompts, and editable configurations. The project also brings Real-time Design features such as screen capture and floating video nodes, enabling creative pipelines that mix live screen content, generative models, and visual effects. For audio and speech, it provides SpeechRecognition and SpeechSynthesis nodes, plus workflows that combine voice generation with real-time face swapping and other audio-visual effects. On the AI side, it integrates multiple LLM providers (cloud and local), supports OpenAI-compatible endpoints and Siliconflow models, and includes prompt-focused utilities for random prompt generation, Chinese prompts, and CLIP interrogation.
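
Because the LLM integration relies on OpenAI-compatible endpoints, any provider that speaks that protocol can be pointed at by overriding the base URL. Below is a minimal sketch using the standard `openai` Python client; the Siliconflow base URL, model name, and environment variable are illustrative assumptions, not values taken from the project.

```python
# Sketch: calling an OpenAI-compatible endpoint (e.g. a Siliconflow-hosted model).
# The base_url and model name below are assumptions for illustration only.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://api.siliconflow.cn/v1",   # assumed provider endpoint
    api_key=os.environ["SILICONFLOW_API_KEY"],  # hypothetical env var
)

response = client.chat.completions.create(
    model="Qwen/Qwen2.5-7B-Instruct",  # assumed model identifier
    messages=[
        {"role": "system", "content": "You write concise image-generation prompts."},
        {"role": "user", "content": "A prompt for a rainy neon-lit street, cinematic."},
    ],
)

# Print the generated prompt text returned by the endpoint.
print(response.choices[0].message.content)
```

The same pattern applies to a local OpenAI-compatible server: only `base_url`, `api_key`, and the model name change, which is what makes mixing cloud and local providers in one workflow straightforward.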