Dify 1.13.0 - Human-in-the-Loop and Workflow Execution Upgrades

🚀 New Features

Human-in-the-Loop (HITL)

We are introducing the Human Input node, a major update that transforms how AI and humans collaborate within Dify workflows.

Background

Previously, workflows were binary: either fully automated or fully manual. This created a "trust gap" in high-stakes scenarios where AI speed is needed but human judgment is essential. With HITL, we are making human oversight a native part of the workflow architecture, allowing you to embed review steps directly into the execution graph.

Key Capabilities

  • Native Workflow Pausing: Insert a "Human Input" node to suspend workflow execution at critical decision points.
  • Review & Edit: The node generates a UI where humans can review AI outputs and modify variables (e.g., editing a draft or correcting data) before the process continues.
  • Action-Based Routing: Configure custom buttons (like "Approve," "Reject," or "Escalate") that determine the subsequent path of the workflow.
  • Flexible Delivery Methods: Human input forms can be delivered via Webapp or Email. In cloud environments, Email delivery availability may depend on plan/feature settings.

🛠 Architecture Updates

To support the stateful pause/resume mechanism required by HITL and provide event‑subscription APIs, we refactored the execution engine: Workflow‑based streaming executions and Advanced Chat executions now run in Celery workers, while non‑streaming WORKFLOW runs still execute in the API process. All pause/resume paths (e.g., HITL) are resumed via Celery, and events are streamed back through Redis Pub/Sub.

For Large Deployments & Self-Hosted Users:

We have introduced a new Celery queue named workflow_based_app_execution. While standard setups will work out of the box, high-throughput environments should consider the following optimizations to ensure stability and performance:

  1. Scale Workers: Adjust the number of workers consuming the workflow_based_app_execution queue based on your specific workload.
  2. Dedicated Redis (Optional): For large-scale deployments, we recommend configuring the new PUBSUB_REDIS_URL environment variable to point to a dedicated Redis instance. Using Redis Cluster mode with Sharded PubSub is strongly advised to ensure horizontal scalability.

⚠️ Important Upgrade Note

New Celery Queue Required: workflow_based_app_execution

Please ensure your deployment configuration (Docker Compose, Helm Chart, etc.) includes workers listening to the new workflow_based_app_execution queue.
This queue is required for workflow‑based streaming executions and all resume flows (e.g., HITL); otherwise, streaming executions and resume tasks will not be processed.
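To verify that a running deployment actually has a worker consuming the new queue, Celery's built-in inspection command can help (the `-A app.celery` app path is an assumption; substitute your deployment's module path):

```shell
# Ask all live workers which queues they consume; the new queue should
# appear in at least one worker's active_queues list.
celery -A app.celery inspect active_queues | grep workflow_based_app_execution
```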

🔧 Operational Note

Additional Celery Queue: api_token

If ENABLE_API_TOKEN_LAST_USED_UPDATE_TASK=true, ensure your deployment also has workers listening to api_token. This queue is used by the scheduled batch update task for API token last_used_at timestamps.

⚙️ Configuration Changes

We have introduced several new environment variables to support the architectural changes. Large deployments should pay special attention to the PubSub Redis configurations to ensure scalability.

  • PUBSUB_REDIS_URL (Critical): Specifies the Redis URL used for PubSub communication between the API and Celery workers. If left empty, it defaults to the standard REDIS_* configuration.
  • PUBSUB_REDIS_CHANNEL_TYPE (Critical): Defines the channel type for streaming events. Options are pubsub (default) or sharded. We highly recommend using sharded for high-throughput environments.
  • PUBSUB_REDIS_USE_CLUSTERS (Critical): Set to true to enable Redis cluster mode for PubSub. Combined with sharded PubSub, this is essential for horizontal scaling.
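Putting the three settings together, a `.env` fragment for a large deployment might look like the following; the Redis hostname is a placeholder, not a value shipped with Dify:

```shell
# Illustrative .env fragment for a high-throughput deployment.
# "pubsub-redis.internal" is a placeholder for your dedicated Redis endpoint.
PUBSUB_REDIS_URL=redis://pubsub-redis.internal:6379/0
PUBSUB_REDIS_CHANNEL_TYPE=sharded
PUBSUB_REDIS_USE_CLUSTERS=true
```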

Other Additions:

  • WEB_FORM_SUBMIT_RATE_LIMIT_MAX_ATTEMPTS: Maximum number of web form submissions allowed per IP within the rate limit window (Default: 30).
  • WEB_FORM_SUBMIT_RATE_LIMIT_WINDOW_SECONDS: Time window in seconds for web form submission rate limiting (Default: 60).
  • HUMAN_INPUT_GLOBAL_TIMEOUT_SECONDS: Maximum seconds a workflow run can stay paused waiting for human input before global timeout (Default: 604800, 7 days).
  • ENABLE_HUMAN_INPUT_TIMEOUT_TASK: Enables the background task that checks for expired human input requests (Default: true).
  • HUMAN_INPUT_TIMEOUT_TASK_INTERVAL: Sets the interval (in minutes) for the timeout check task (Default: 1).
  • ENABLE_API_TOKEN_LAST_USED_UPDATE_TASK: Enables the periodic background task that batch-updates API token last_used_at timestamps (Default: true).
  • API_TOKEN_LAST_USED_UPDATE_INTERVAL: Sets the interval (in minutes) for batch-updating API token last_used_at timestamps (Default: 30).
  • SANDBOX_EXPIRED_RECORDS_CLEAN_BATCH_MAX_INTERVAL: Maximum random delay (in milliseconds) between retention cleanup batches to reduce DB pressure spikes (Default: 200).
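For reference, the HITL-related variables above, written out with their documented defaults, would appear in `.env` as:

```shell
# Documented defaults from this release; adjust per workload.
WEB_FORM_SUBMIT_RATE_LIMIT_MAX_ATTEMPTS=30
WEB_FORM_SUBMIT_RATE_LIMIT_WINDOW_SECONDS=60
HUMAN_INPUT_GLOBAL_TIMEOUT_SECONDS=604800   # 7 days
ENABLE_HUMAN_INPUT_TIMEOUT_TASK=true
HUMAN_INPUT_TIMEOUT_TASK_INTERVAL=1         # minutes
```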

📌 Additional Changelog Highlights

Reliability & Correctness

  • Added migration-time deduplication and a unique constraint for tenant default models to prevent duplicate default model records.
  • Fixed a tools-deletion edge case caused by provider ID type mismatch.
  • Fixed a FastOpenAPI integration regression where authenticated users could be resolved as anonymous in remote file APIs.
  • Fixed message event type detection for file-related responses, and hid the workspace invite action for non-manager users.

Performance & Scalability

  • Reduced backend load and console latency with plugin manifest pre-caching and AppListApi query optimizations.
  • Improved large-data task stability with split DB sessions, batched cleanup execution, index tuning, and configurable inter-batch throttling for retention cleanup jobs.

API & Platform Capabilities

  • Added a Service API endpoint for end-user lookup with tenant/app scope enforcement.
  • Improved workflow run history refresh behavior during run state transitions.
  • Enhanced MCP Tool integration by extracting and reporting usage metadata (for example, token/cost fields) from MCP responses.

Security

  • Removed dynamic new Function evaluation from ECharts parsing; unsupported chart code now returns explicit parsing errors.

Localization

  • Added Dutch (nl-NL) language support across backend language mapping and web localization resources.

Upgrade Guide

[!IMPORTANT] If you use custom CELERY_QUEUES, make sure workflow_based_app_execution is included.
If ENABLE_API_TOKEN_LAST_USED_UPDATE_TASK=true, also include api_token.

For background and details, see ⚠️ Important Upgrade Note and 🔧 Operational Note above.
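A quick pre-deploy sanity check along these lines can catch a missing queue before rollout. This is a sketch: the `CELERY_QUEUES` value shown is illustrative, not Dify's default queue list.

```shell
# Sketch: fail fast if the new queue is missing from a custom CELERY_QUEUES.
CELERY_QUEUES="dataset,mail,workflow_based_app_execution"  # illustrative value
case ",$CELERY_QUEUES," in
  *,workflow_based_app_execution,*) echo "workflow queue configured" ;;
  *) echo "ERROR: workflow_based_app_execution missing" >&2; exit 1 ;;
esac
```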

Docker Compose Deployments

  1. Back up your customized docker-compose YAML file (optional):

```bash
cd docker
cp docker-compose.yaml docker-compose.yaml.$(date +%s).bak
```

  2. Get the latest code from the main branch:

```bash
git checkout main
git pull origin main
```

  3. Stop the service (execute in the docker directory):

```bash
docker compose down
```

  4. Back up data:

```bash
tar -cvf volumes-$(date +%s).tgz volumes
```

  5. Upgrade services:

```bash
docker compose up -d
```

[!NOTE]

If you encounter errors like the following:

```
2025/11/26 11:37:57 /app/internal/db/pg/pg.go:30 [error] failed to initialize database, got error failed to connect to `host=db_postgres user=postgres database=dify_plugin`: hostname resolving error (lookup db_postgres on 127.0.0.11:53: server misbehaving)
2025/11/26 11:37:57 /app/internal/db/pg/pg.go:34 [error] failed to initialize database, got error failed to connect to `host=db_postgres user=postgres database=postgres`: hostname resolving error (lookup db_postgres on 127.0.0.11:53: server misbehaving)
2025/11/26 11:37:57 init.go:99: [PANIC]failed to init dify plugin db: failed to connect to `host=db_postgres user=postgres database=postgres`: hostname resolving error (lookup db_postgres on 127.0.0.11:53: server misbehaving)
panic: [PANIC]failed to init dify plugin db: failed to connect to `host=db_postgres user=postgres database=postgres`: hostname resolving error (lookup db_postgres on 127.0.0.11:53: server misbehaving)
```

please use the following command instead. For details, see https://github.com/langgenius/dify/issues/28706

```bash
docker compose --profile postgresql up -d
```

Source Code Deployments

  1. Stop the API server, Worker, and Web frontend Server.

  2. Get the latest code from the release branch:

```bash
git checkout 1.13.0
```

  3. Update Python dependencies:

```bash
cd api
uv sync
```

  4. Run the migration script:

```bash
uv run flask db upgrade
```

  5. Finally, run the API server, Worker, and Web frontend Server again.
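After restarting, the migration state can be confirmed with Flask-Migrate's `current` command, run from the `api` directory (a sketch; the revision it prints should match the head revision shipped with 1.13.0):

```shell
# Show the database revision currently applied.
cd api
uv run flask db current
```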


Full Changelog: https://github.com/langgenius/dify/compare/1.12.1...1.13.0

Source: README.md, updated 2026-02-11