From: Dmitri G. <dm...@ma...> - 2024-04-07 10:53:50
Hi Peter,

Since Matlogica was mentioned by Peter, I'll join the conversation.

To get correct derivatives for solver/optimizer steps, you need to propagate adjoints using the implicit function method. There are two ways to do this; the better-known one requires you to refactor your code and pull the dependent variables out of the objective functions. It turns out this is not always needed, and it is possible to do it automatically: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3984964

Indeed, for efficiency you don't want to record the solver/minimizer; it's sufficient to record the objective function only. Care should be taken to ensure the AAD tool is not missing any discontinuities due to control flow. Regular tape-based tools can indeed miss these, and quants then have to spend a lot of time identifying the problematic areas where adjoints don't propagate correctly and hence disagree with bump-and-revalue. The code-generation approach used by MatLogica's AADC also records all boolean operations and can identify all stochastic if-statements automatically. After graph recording is complete, you get confirmation that all branches were recorded, and you can use the recorded graph for arbitrary inputs (to process multiple scenarios).

Instrumenting the whole library is very attractive because it can be applied to existing quant libraries, even ones much larger than QuantLib. In many cases, the slowdown due to using the active type everywhere is an acceptable trade-off. In my tests, running the QuantLib benchmark suite, I typically observe a 1.3x to 2.5x performance penalty, though it can be higher for numerically intensive pricers such as PDE-based ones. Of course, this is if you just replace Real with the active type and don't actually take advantage of AAD or code generation to speed up your analytics. Since the last time we spoke about this, we have actually found a solution to this problem: we can get instrumented C++ code to run at about a 5% performance penalty versus its native version.
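The implicit-function method described above can be sketched in a few lines. This is a toy illustration (not MatLogica's AADC): for a root x(theta) of F(x, theta) = 0, differentiating F(x(theta), theta) = 0 gives dx/dtheta = -F_theta / F_x, so the solver itself never needs to be differentiated or taped — only the objective F does.

```python
def F(x, theta):
    # toy objective whose root defines x(theta) implicitly
    return x**3 + theta * x - 1.0

def solve(theta, lo=0.0, hi=2.0, tol=1e-14):
    # plain bisection stands in for Brent; note the solver is never taped
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if F(lo, theta) * F(mid, theta) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

theta = 1.0
x = solve(theta)

# implicit function theorem: dx/dtheta = -(dF/dtheta) / (dF/dx) at the root
dF_dx = 3.0 * x**2 + theta
dF_dtheta = x
dx_implicit = -dF_dtheta / dF_dx

# bump-and-revalue check: re-run the solver at bumped inputs
h = 1e-6
dx_bump = (solve(theta + h) - solve(theta - h)) / (2.0 * h)

print(dx_implicit, dx_bump)  # the two derivatives agree closely
```

The same identity is what lets an AAD tool stop the tape at the objective function and attach the solver's output adjoint analytically.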
It is an interesting technical solution and I am happy to discuss it with you directly.

As for the QuantLib swap pricer inefficiency, it actually applies to many other products too. In general, using object-oriented languages is good for development and support, but bad for performance.

At Matlogica we are also planning to release an AADC-enabled QuantLib Python version; it's currently available for testing on request. The code-generation approach can accelerate QuantLib Python code by 100x or more for xVA-type workloads.

Kind regards,
Dmitri Goloubentsev
Head of Automatic Adjoint Differentiation, Matlogica LTD
http://matlogica.com
+447378414528
See my schedule and book a meeting with me: <https://calendly.com/matlogica>

On Sun, 7 Apr 2024 at 01:25, Peter Caspers <pca...@gm...> wrote:
> Hi Jörg,
>
> thanks a lot, that's interesting. I have a couple of questions:
>
> - I think the approach you take in the example is not entirely
> correct. IterativeBootstrap uses Brent as the "first solver", which is
> not differentiable on all branches. It uses bisection, which has a zero
> derivative everywhere. Of course you might be lucky and get correct
> results anyhow in the specific run.
> - More generally, how do you ensure correct treatment of control flow
> in your tool?
> - Coming back to the bootstrap: even if you ensured
> differentiability, I think you usually don't want to record the
> calibration itself to get market rate sensitivities - it's more
> efficient to compute the matrix d par / d zero (with AAD or just
> bump-revalue) and invert that.
>
> On the technical side:
>
> - Last time I looked at XAD I noticed a slowdown of about 10x (I
> think) in some QuantLib unit tests. How do you address that? I have asked
> this question many times (also of Matlogica and Compatibl) but never seem to
> get a straight answer. I still think instrumenting the whole library
> is not a good approach, albeit very easy to do.
> It might also be good enough for specific use cases, admittedly.
> - If we are honest, the low AAD overhead of around 2x that you see is
> actually due to poorly optimised pricing of vanilla swaps in QuantLib.
> In this sense, QuantLib is an "easy target" for "proofs of concept"
> like the one in your blog. It might create wrong expectations, though!
>
> Thank you
> Peter
>
> On Fri, 5 Apr 2024 at 19:01, Jorg Lotze <jor...@xc...> wrote:
> >
> > Dear Community,
> >
> > Exciting news for QuantLib users! QuantLib-Risks, now available for
> > Python, supercharges QuantLib with automatic differentiation. This new
> > addition streamlines risk assessments and derivative pricing, making
> > complex analyses more accessible than ever.
> >
> > Get started effortlessly with: pip install QuantLib-Risks
> >
> > Curious about the impact? QuantLib-Risks dramatically improves
> > efficiency, achieving sensitivity calculations in nearly the same
> > timeframe as standard pricing. Discover the full performance story with a
> > real-world application here:
> >
> > https://auto-differentiation.github.io/quantlib-risks
> >
> > Kind regards,
> > Jorg
> > _______________________________________________
> > QuantLib-users mailing list
> > Qua...@li...
> > https://lists.sourceforge.net/lists/listinfo/quantlib-users
>
> _______________________________________________
> QuantLib-users mailing list
> Qua...@li...
> https://lists.sourceforge.net/lists/listinfo/quantlib-users
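Peter's point above about not recording the curve calibration can be sketched with a toy two-pillar curve (hypothetical functions, not QuantLib code): compute the Jacobian J = d(par)/d(zero) once, then obtain market (par) rate sensitivities as dPV/dpar = dPV/dzero · J⁻¹, with both ingredients computable by AAD or plain bump-and-revalue.

```python
import numpy as np

# hypothetical 2-pillar curve: par rates as a function of the zero rates
def par_rates(z):
    return np.array([z[0], 0.5 * (z[0] + z[1]) + 0.01 * z[0] * z[1]])

# some price depending on the zero curve
def pv(z):
    return 100.0 * (np.exp(-z[0]) + np.exp(-2.0 * z[1]))

z = np.array([0.02, 0.03])
h = 1e-7

# dPV/dzero and J = d(par)/d(zero) by central bump (AAD would give the same)
dpv_dz = np.array([(pv(z + h * e) - pv(z - h * e)) / (2 * h)
                   for e in np.eye(2)])
J = np.column_stack([(par_rates(z + h * e) - par_rates(z - h * e)) / (2 * h)
                     for e in np.eye(2)])

# chain rule: dz/dpar = inv(dpar/dz), so dPV/dpar = dPV/dz @ inv(J)
dpv_dpar = dpv_dz @ np.linalg.inv(J)
print(dpv_dpar)
```

The calibration itself is bypassed entirely: only the pillar map par_rates and the pricer pv are differentiated, which is why this is cheaper than taping the bootstrap.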