What's Changed
This past month, we've been focused on day-2 operations and on making our recently launched features easier to use.
Event filtering
It is now possible to register filters for events that trigger tasks and workflows. For example, if you'd only like to trigger a workflow when the event `email:received` is sent from the user `a@example.com` to the user `b@example.com`, you could set up a filter for that:
:::py
hatchet.filters.create(
    workflow_id=event_workflow.id,  # workflow to trigger when the filter matches
    expression='input.sender_email == "a@example.com"',  # evaluated against the event's payload
    scope='b@example.com',  # scope this filter applies to (here, the recipient)
    payload={
        'filterkey1': 'abcd'  # additional payload attached to the filter
    },
)
You can read more about filters here.
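As a quick companion sketch (not part of the release itself), pushing the matching event from the Python SDK might look like the following. The payload field names are illustrative, and how the event is associated with the filter's scope (`b@example.com`) at push time depends on your SDK version, so double-check the filters docs before copying this.

:::py
# Hedged sketch: push the event the filter above listens for. The filter's
# expression is evaluated against this payload (input.sender_email), and the
# event also needs to carry the matching scope ("b@example.com") -- how the
# scope is attached at push time is SDK-version dependent.
hatchet.event.push(
    "email:received",
    {
        "sender_email": "a@example.com",     # checked by the filter expression
        "recipient_email": "b@example.com",  # illustrative field name
    },
)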
New Cloud Features
Managed Compute
- Our managed compute instances now support an additional Frankfurt region
- We've improved the experience of setting environment variables, and all environment variables are now redacted in the UI by default
New Self-Hosted Features
Prometheus metrics
If you're a self-hosted user of Hatchet, you can now configure global Prometheus metrics. We support the following metrics:
Metric Name | Type | Description |
---|---|---|
`hatchet_queue_invocations_total` | Counter | The total number of invocations of the queuer function |
`hatchet_created_tasks_total` | Counter | The total number of tasks created |
`hatchet_retried_tasks_total` | Counter | The total number of tasks retried |
`hatchet_succeeded_tasks_total` | Counter | The total number of tasks that succeeded |
`hatchet_failed_tasks_total` | Counter | The total number of tasks that failed (in a final state, not including retries) |
`hatchet_skipped_tasks_total` | Counter | The total number of tasks that were skipped |
`hatchet_cancelled_tasks_total` | Counter | The total number of tasks cancelled |
`hatchet_assigned_tasks_total` | Counter | The total number of tasks assigned to a worker |
`hatchet_scheduling_timed_out_total` | Counter | The total number of tasks that timed out while waiting to be scheduled |
`hatchet_rate_limited_total` | Counter | The total number of tasks that were rate limited |
`hatchet_queued_to_assigned_total` | Counter | The total number of unique tasks that were queued and later assigned to a worker |
`hatchet_queued_to_assigned_seconds` | Histogram | Buckets of time (in seconds) spent in the queue before being assigned to a worker |
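If you'd like to sanity-check that these series are being exported before wiring up dashboards, one option is to fetch and parse the metrics endpoint directly. This is a minimal sketch rather than anything shipped in the release: the endpoint URL, port, and auth requirements depend entirely on your deployment, so treat `METRICS_URL` as a placeholder.

:::py
# Minimal sketch: fetch the Prometheus endpoint and print the hatchet_* series.
# METRICS_URL is a placeholder -- the real address depends on how your
# self-hosted deployment exposes metrics.
import requests
from prometheus_client.parser import text_string_to_metric_families

METRICS_URL = "http://localhost:8080/metrics"  # placeholder, deployment-specific

resp = requests.get(METRICS_URL, timeout=5)
resp.raise_for_status()

for family in text_string_to_metric_families(resp.text):
    if family.name.startswith("hatchet_"):
        for sample in family.samples:
            print(sample.name, sample.labels, sample.value)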
Read replicas
Self-hosted instances now support read replicas. Instead of placing additional load on your primary database with metrics and list queries, you can delegate these queries to a read replica. You can read about how to set up read replicas here.
Trace sampling
If you're running a high-volume Hatchet setup, you might not want to store every single execution in the database. We've introduced trace sampling to reduce this pressure in high-throughput scenarios.
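The configuration details live in the self-hosted docs. Purely as a conceptual illustration (this is not Hatchet's configuration surface), ratio-based sampling in the OpenTelemetry Python SDK looks roughly like the sketch below: a fixed fraction of traces is kept and the rest are dropped before they're ever stored.

:::py
# Conceptual illustration only -- not Hatchet's configuration API.
# A ratio-based sampler keeps a fixed fraction of traces (here ~10%) so that
# high-throughput workloads don't persist every single execution.
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.sampling import TraceIdRatioBased

provider = TracerProvider(sampler=TraceIdRatioBased(0.1))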