From: Luigi B. <lui...@gm...> - 2024-02-22 09:19:23
Hello Rishi,
the required tolerance is an absolute target for the estimated
statistical error: passing 0.01 means that the simulation keeps running
until the error estimate falls below 0.01 in absolute terms (one cent,
if prices are expressed in a currency). During the simulation, the error
estimate is calculated as the standard deviation of the results so far,
divided by the square root of N, the number of samples.
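As a rough illustration (plain NumPy, not QuantLib's internals; the payoff
sample here is made up), the estimate and its error for a batch of
simulated payoffs would look like:

```python
import numpy as np

# Illustrative stand-in for discounted payoffs from a Monte Carlo run
rng = np.random.default_rng(42)
payoffs = rng.lognormal(mean=2.0, sigma=0.5, size=10_000)

N = len(payoffs)
price_estimate = payoffs.mean()
# Error estimate: sample standard deviation divided by sqrt(N)
error_estimate = payoffs.std(ddof=1) / np.sqrt(N)

print(f"estimate = {price_estimate:.4f} +/- {error_estimate:.4f}")
```

Because of the sqrt(N) in the denominator, quadrupling the number of
samples only halves the error estimate.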
It's not always obvious what should happen if you increase the number of
instruments. It's true that each sample takes more work; in fact, you
should see the time grow with the basket size if you pass requiredSamples
instead of requiredTolerance as an input. In that case, the simulation
runs exactly that number of samples, and the total time depends only on
how long a single sample takes.
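A minimal sketch of the tolerance-based stopping rule (plain Python, not
the actual QuantLib implementation; `sample_fn` and the batch size are
made up for illustration) would be:

```python
import numpy as np

def run_until_tolerance(sample_fn, tolerance, batch=1000, max_samples=1_000_000):
    """Keep drawing batches until stddev / sqrt(N) drops below the tolerance."""
    samples = np.empty(0)
    while len(samples) < max_samples:
        samples = np.concatenate([samples, sample_fn(batch)])
        error = samples.std(ddof=1) / np.sqrt(len(samples))
        if error < tolerance:
            break
    return samples.mean(), error, len(samples)

# Two hypothetical "payoff" samplers with different dispersions
rng = np.random.default_rng(1)
_, _, n_noisy = run_until_tolerance(lambda k: 1.0 * rng.standard_normal(k), 0.01)
_, _, n_quiet = run_until_tolerance(lambda k: 0.25 * rng.standard_normal(k), 0.01)
print(n_noisy, n_quiet)  # the noisier payoff needs more samples
```

The sampler with the smaller standard deviation reaches the same
tolerance with far fewer samples, which is exactly the mechanism
described above.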
However, when passing a tolerance as a target, things are not so clear.
Depending on the correlation between instruments and the way the payoff is
defined, it might be that adding an instrument decreases the standard
deviation of the results, and therefore it will take fewer samples to reach
the required tolerance. If you're running fewer samples, the total time
might decrease, even if a single sample takes longer.
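You can see this effect in isolation with a quick NumPy experiment
(using an equicorrelated matrix like the one in your setup, with β = 0.5,
and an equally weighted average of correlated normals as a crude proxy
for a basket payoff): the standard deviation shrinks as assets are added,
even though each sample costs more to generate.

```python
import numpy as np

rng = np.random.default_rng(0)
beta = 0.5
stds = []
for d in (2, 4, 8):
    # Equicorrelated matrix: off-diagonal correlation beta / (1 + beta)
    corr = (beta * np.ones((d, d)) + np.identity(d)) / (1 + beta)
    L = np.linalg.cholesky(corr)
    z = rng.standard_normal((100_000, d)) @ L.T  # correlated normals
    basket = z.mean(axis=1)  # crude proxy for an equally weighted basket
    stds.append(basket.std())
    print(d, round(basket.std(), 3))
```

Since the error estimate is proportional to the standard deviation
divided by sqrt(N), a smaller standard deviation means fewer samples to
reach a given tolerance.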
Unfortunately, the standard deviation and the number of samples are not
currently available from the results returned by the engine, so this is
difficult to investigate.
If you wanted to check them, you'd have to modify the underlying C++ code
and recompile the library and the Python wheel. Let me know if you need
guidance on that.
Hope this helps,
Luigi
On Sat, Feb 10, 2024 at 6:47 PM Rishi Sreedhar <ris...@gm...>
wrote:
> Dear QuantLib community,
>
> I've been exploring how to price American basket options
> using MCAmericanBasketEngine when I noticed something strange: the time it
> took to produce a result for a requiredTolerance of 1e-2 was decreasing as
> I increased the number of assets d. (See the attached plot for reference.)
>
> Isn't this surprising? The landscape Monte Carlo has to sample from
> becomes significantly more complex when simulating larger baskets, so
> shouldn't the time increase with the number of assets?
>
> The parameters I am using are:
>
> import numpy as np
>
> d = 4  # number of assets
> underlying_r = np.array([0.3 for i in range(d)])
> underlying_volatilities = np.array([0.5 for i in range(d)])
> underlying_spots = np.array([100.0 for i in range(d)])
> underlying_dividend_rate = np.zeros(d)
>
> β = 0.5
> underlying_correlation_mat = (β*np.ones((d, d)) + np.identity(d))/(1 + β)
>
> Also, could someone please point me to where I can learn more about the
> actual algorithms implemented behind the pricing engines, and what the
> parameters like requiredTolerance mean? I see that requiredTolerance
> sets an upper bound on the errorEstimate(), but how is that
> errorEstimate calculated?
>
> Thank you so much again for taking the time to answer these very beginner
> questions!
> Most Cordially,
> Rishi
>
> _______________________________________________
> QuantLib-users mailing list
> Qua...@li...
> https://lists.sourceforge.net/lists/listinfo/quantlib-users
>