From: Jorg L. <jor...@xc...> - 2026-02-09 10:46:49
Hi all,

We would like to share an update on Adjoint Algorithmic Differentiation (AAD) support in QuantLib under the project name QuantLibAAD (formerly QuantLib-Risks-Cpp).

QuantLib can be used with AAD via the open-source XAD library and a dedicated integration module, allowing sensitivities to be computed directly through QuantLib code without modifying model logic.

Recent work by da-roth (https://github.com/da-roth/) adds optional JIT-based execution support, enabling record-once / replay-many workflows for selected parts of a valuation. This is particularly relevant for Monte Carlo-style inner loops. In this setup:

* A representative execution of a code region is recorded once
* Optimised code is generated and compiled once
* The compiled code is replayed efficiently across Monte Carlo paths

This is not an all-or-nothing switch. For selected code regions that are evaluated repeatedly with different inputs, such as expensive Monte Carlo or PDE pricing engines, a recording mode can be activated. The overloaded operators then capture a computation graph, including branches, which can be JIT-compiled and replayed efficiently. The rest of the valuation simply runs through the overloaded operators as usual. The result is a hybrid workflow combining tape-based AAD with replay-based JIT-compiled execution.

Example: Monte Carlo sensitivities

A representative QuantLib Monte Carlo example applies replay-based execution to the MC loop while retaining tape-based AAD for curve construction and setup, and compares the results against finite-difference sensitivities. The application prices a European swaption in a realistic production-style setup, with separate forecasting and OIS discounting curves, CVA/DVA, bootstrapped interest-rate and credit curves, Monte Carlo pricing, and sensitivities to all market inputs.

Key result: a native QuantLib double valuation with 10,000 Monte Carlo paths and no sensitivities takes ~306 ms. With XAD and JIT enabled, computing all 90 sensitivities on the same 10,000 paths takes ~520 ms.* This illustrates that selectively applying replay-based execution can significantly reduce Monte Carlo sensitivity runtimes while retaining full AAD capability across the valuation pipeline.

* QuantLib integration: https://github.com/auto-differentiation/QuantLibAAD
* XAD library: https://github.com/auto-differentiation/xad

We would be very interested in feedback on where this execution model fits (or does not fit) typical QuantLib Monte Carlo use cases. Questions and comments are very welcome.

Best regards,
Jorg

* Timings are indicative only and depend on product structure and where replay-based execution can be applied. Full benchmark details are available here: https://gist.github.com/auto-differentiation-dev/9e6c472dcf913ffa00136d4b16423d16