| Name | Modified | Size |
|---|---|---|
| README.md | 2026-05-04 | 982 Bytes |
| v0.3.1_ Defensive DB pool tuning source code.tar.gz | 2026-05-04 | 2.0 MB |
| v0.3.1_ Defensive DB pool tuning source code.zip | 2026-05-04 | 2.2 MB |
| Totals: 3 items | | 4.2 MB |
## What changed

fix(db): recycle pooled connections and enable TCP keepalives (#1138)
## Why

A long-running deployment can freeze for an hour or more if a pooled Postgres connection's underlying TCP socket is silently dropped by intermediate cloud infrastructure (NAT, proxy, load balancer). The next synchronous DB call from an `async def` route blocks the event loop on the dead socket for the duration of the kernel's TCP retransmit window. The container appears alive (zero CPU, no logs, no crash) but cannot serve traffic, and `restartPolicyType: ON_FAILURE` does not catch it.
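The stall is easy to reproduce without a database: a minimal sketch in which `time.sleep` stands in for a synchronous read on a dead socket, while a concurrent heartbeat task shows the whole event loop freezing (all names here are illustrative, not from the release).

```python
import asyncio
import time

async def heartbeat(ticks):
    # Should tick every 50 ms if the loop is healthy.
    for _ in range(3):
        await asyncio.sleep(0.05)
        ticks.append(time.monotonic())

async def bad_route():
    # Stand-in for a sync DB call on a dead socket: blocks the entire loop.
    time.sleep(0.3)

async def main():
    ticks = []
    start = time.monotonic()
    await asyncio.gather(heartbeat(ticks), bad_route())
    return [t - start for t in ticks]

delays = asyncio.run(main())
print(delays)  # first heartbeat is delayed by the blocking call
```

The first heartbeat lands only after the blocking call returns, even though it was scheduled for 50 ms in. With a real dead socket, that blocking window is the kernel's retransmit timeout, not 300 ms.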
This release adds two independent defenses:
- `pool_recycle=1800` rotates pooled connections every 30 minutes, well under any reasonable cloud NAT idle timeout.
- libpq TCP keepalive options (`keepalives_idle=30`, `keepalives_interval=10`, `keepalives_count=5`, all in seconds except the probe count) let the kernel detect a dead socket within ~80 seconds (30 + 10 × 5).
`pool_pre_ping=True` is preserved as a backstop. No behavior change for healthy connections.
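Taken together, the settings above imply an engine configuration along these lines; this is a sketch, and `DATABASE_URL` is a placeholder, not a name from the release:

```python
from sqlalchemy import create_engine

DATABASE_URL = "postgresql+psycopg2://user:pass@host:5432/app"  # placeholder

engine = create_engine(
    DATABASE_URL,
    pool_pre_ping=True,   # backstop: validate each connection at checkout
    pool_recycle=1800,    # rotate pooled connections every 30 minutes
    connect_args={
        # libpq TCP keepalives, passed through to the Postgres driver
        "keepalives": 1,            # enable keepalives on the socket
        "keepalives_idle": 30,      # seconds of idle before first probe
        "keepalives_interval": 10,  # seconds between probes
        "keepalives_count": 5,      # failed probes before declaring dead
    },
)
```

The two mechanisms are independent: recycling bounds connection age proactively, while keepalives let the kernel fail a dead socket quickly when recycling alone is not enough.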