From: Rishi S. <ris...@gm...> - 2024-02-10 17:44:25
Dear QuantLib community,
I've been exploring how to price American basket options with
MCAmericanBasketEngine, and I noticed something strange: the time it
takes to produce a result for a requiredTolerance of 1e-2 *decreases* as
I increase the number of assets d. (See the attached plot for reference.)
Isn't this surprising? The landscape that Monte Carlo has to sample from
becomes significantly more complex when simulating larger baskets, so
shouldn't the runtime increase with the number of assets?
The parameters I am using are:

import numpy as np

d = 4                                                  # number of assets
underlying_r = np.array([0.3 for i in range(d)])       # per-asset rates
underlying_volatilities = np.array([0.5 for i in range(d)])
underlying_spots = np.array([100.0 for i in range(d)])
underlying_dividend_rate = np.zeros(d)
β = 0.5                                                # correlation parameter
# equicorrelation matrix: 1 on the diagonal, β/(1+β) off-diagonal
underlying_correlation_mat = (β*np.ones((d,d))
                              + np.identity(d))/(1+β)
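In case it matters for the timing question, I did sanity-check that this construction yields a valid correlation matrix (unit diagonal, symmetric, positive definite), so the engine isn't choking on a bad input:

```python
import numpy as np

d = 4
beta = 0.5
# Same equicorrelation matrix as above: 1 on the diagonal, beta/(1+beta) off it.
corr = (beta * np.ones((d, d)) + np.identity(d)) / (1 + beta)

assert np.allclose(np.diag(corr), 1.0)       # unit diagonal
assert np.allclose(corr, corr.T)             # symmetric
assert np.all(np.linalg.eigvalsh(corr) > 0)  # positive definite
```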
Also, could someone please point me to where I can learn more about the
actual algorithms implemented behind the pricing engines, and about what
parameters like requiredTolerance mean? I see that requiredTolerance
sets an upper bound on errorEstimate(), but how is that errorEstimate
itself calculated?
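My current guess (please correct me if QuantLib does something different internally) is that the error estimate is the usual sample standard error of the simulated discounted payoffs, and that the engine keeps adding samples until it drops below requiredTolerance. Something like:

```python
import numpy as np

# Stand-in for a vector of discounted payoffs from a Monte Carlo run;
# the exponential distribution here is just illustrative dummy data.
rng = np.random.default_rng(42)
payoffs = rng.exponential(scale=10.0, size=100_000)

price = payoffs.mean()
# Standard error of the mean: sample std dev / sqrt(number of samples)
error_estimate = payoffs.std(ddof=1) / np.sqrt(len(payoffs))
```

Is this (up to the details of the statistics accumulator) what errorEstimate() reports?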
Thank you so much again for taking the time to answer these very beginner
questions!
Most Cordially,
Rishi