From: Giuseppe T. <tr...@gm...> - 2023-09-27 14:28:45
|
Hi Greg,
The fact that you are getting "double" the expected value makes me think
that in the object otm_vol_spreads you don't have the spreads (that is,
vol(Strike) - vol(ATM)) but the actual vols, or at least that you have them
for the strike spread = 0. In that case you would indeed get exactly double
the true ATM vol, because you would be summing the ATM vol from the vol
matrix with a "spread" that is not 0 but the ATM vol again.
In summary, check whether otm_vol_spreads has a column of zeros for
the strike spread = 0.
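Giuseppe's check can be scripted before the cube is ever built. A minimal sketch with made-up numbers (the names `row_spreads`/`strike_spreads` only mirror the thread's variables; this is plain Python, not the QuantLib API):

```python
# Hypothetical numbers; the variable names mirror the thread's script.
strike_spreads = [-0.01, 0.0, 0.01]   # spread grid around ATM
atm_vol = 0.45                        # ATM normal vol from the matrix

# Wrong input: a row of absolute vols instead of spreads to ATM.
row_absolute = [0.47, 0.45, 0.48]
# Correct input: vol(Strike) - vol(ATM); zero at spread = 0.
row_spreads = [v - atm_vol for v in row_absolute]

i_atm = strike_spreads.index(0.0)
assert row_spreads[i_atm] == 0.0      # the sanity check Giuseppe suggests

# With the wrong input the cube adds the "spread" on top of the ATM vol,
# so at the ATM point it effectively returns atm_vol + atm_vol:
print(atm_vol + row_absolute[i_atm])  # 0.9 -> double the true ATM vol
```

If this assertion fails on your data, you are feeding absolute vols where the cube expects spreads.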
|
|
From: <Gre...@ca...> - 2023-09-27 14:09:28
|
Hi Aleksis,
Thank you for confirming how the strike is determined for the vol cube; I was not positive whether it should be relative to ATM or absolute. However, neither seems to produce the correct vol.
[cid:image001.png@01D9F110.9DCD79C0]
When I plug my forward rate into the vol cube to represent ATM, I still get a different number. I wasn’t positive if it should be in percent or decimal, so I am showing both here. Interestingly, the vol cube is roughly double what I am expecting. Are you able to advise if I am doing something else wrong with building my cube?
Thanks again,
Greg
Your privacy and security are important to us. See our privacy policy (Canada https://www.capitalgroup.com/individual/ca/en/about/legal.html, Europe & Asia https://www.capitalgroup.com/content/sites/the-capital-group/entry-page/shared/privacy.html, United States https://www.capitalgroup.com/individual/privacy.html).
|
|
From: Aleksis A. R. <ale...@go...> - 2023-09-27 08:39:53
|
hi, you seem to be enquiring for a 0 strike 3m10y swaption volatility. the third (rate) parameter in the volatility attribute is absolute strike level, not spread to atm.
because the swaption vol matrix doesn’t include skew data while the volcube does, these would agree on ATM vols, but the cube would return a skew adjusted vol for an off-strike enquiry (as in your case) while the swaption matrix would just give the ATM vol in both cases.
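A toy illustration of the distinction (illustrative numbers and a made-up skew function; not the QuantLib API): with an absolute-strike convention, querying at 0 lands far below the forward, so the skew adjustment shows up.

```python
atm_strike = 0.035   # hypothetical 3m10y forward rate (decimal)
atm_vol = 0.45       # hypothetical ATM normal vol

def toy_cube_vol(strike):
    """Toy skew: ATM vol plus a slope in the absolute strike distance."""
    return atm_vol + 2.0 * abs(strike - atm_strike)

# Passing 0 does NOT mean "ATM": it is an absolute 0% strike, far from
# the 3.5% forward, so a skew-adjusted vol comes back:
print(round(toy_cube_vol(0.0), 4))   # 0.52
# To hit the ATM point, pass the forward itself:
print(toy_cube_vol(atm_strike))      # 0.45
```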
|
|
From: <Gre...@ca...> - 2023-09-27 01:06:46
|
Hello,
I am hoping you can help me better understand how to create a swaption vol cube using normal vols.
I successfully create the ATM vol surface
swaption_vol_matrix = ql.SwaptionVolatilityMatrix(
calendar,
bdc,
option_tenors,
swap_tenors,
ql.Matrix(normal_atm_vols),
day_counter,
False,
ql.Normal
)
I am able to confirm that the vols are as expected
swaption_vol_matrix.volatility(today + ql.Period('3m'), ql.Period('10y'), 0)
This volatility function returns the same vol for 3m10y that I supplied.
I then format all of my swaption vols into a cube such that I have one row for each expiry X swap tenor combination and one column for each strike spread [-2.25, 2.25]
swaptions.query('cusip.str.startswith("SR")').pivot(index=['expiry', 'tenor'], columns='strike', values='vol')
I convert this dataframe to numpy such that I have a 2d numpy array of size (120, 19) which is (num expiries * num tenors, num strike spreads).
These are SOFR swaptions, so I create an OvernightIndexedSwapIndex.
base_index = ql.OvernightIndexedSwapIndex('SOFR', ql.Period('1d'), 2, ql.USDCurrency(), sofr_handle)
Finally, I try to actually create the vol cube
vol_cube = ql.SwaptionVolatilityStructureHandle(
ql.InterpolatedSwaptionVolatilityCube(
ql.SwaptionVolatilityStructureHandle(swaption_vol_matrix),
option_tenors,
swap_tenors,
strikes,
otm_vol_spreads,
base_index,
base_index,
False
)
)
However, when I check the same point on the cube that I confirmed on the ATM surface, the vol is wrong.
vol_cube.volatility(today + ql.Period('3m'), ql.Period('10y'), 0)
Could you please help me understand what I am doing wrong in creating my vol cube?
Thanks,
Greg
Gregory Zuroff, CFA
Fundamental Research Group
Direct: (310) 996-9413
Email: Gre...@ca...
11100 Santa Monica Blvd. Los Angeles, CA 90025
|
|
From: Ben W. <ben...@ma...> - 2023-09-15 20:31:09
|
There is no WAL calculator in QuantLib. WAL modelling is very much related to the tranche waterfall structure. The typical model is the CPR (constant prepayment rate). To do this properly you will need the details of every series, model the paydown profile, and calculate the WAL for the given CPR. You can estimate the CPR from the actual or estimated bond factors. You can get further into this by looking at the expected defaults in the pool and seeing what the impact is on the waterfall.
On Sat, 16 Sept 2023, 2:57 am Michael (DataDriven portal), <mi...@da...> wrote:
> Hi Ben:
>
> Thanks a lot. I will use that. I am also wondering if WAL (weighted average life) for a bond is calculated by QuantLib (I could not find it in the documentation), which is defined here: https://www.investopedia.com/terms/w/weightedaveragelife.asp.
>
> Thanks,
>
> Michael
|
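For the WAL itself, the textbook formula Michael links is small enough to write directly. A stdlib sketch with hypothetical cash flows (not a QuantLib feature, per Ben's point):

```python
def weighted_average_life(times, principal_payments):
    """WAL = sum(t_i * P_i) / sum(P_i), with times in years and
    P_i the principal (not interest) paid at each date."""
    total = sum(principal_payments)
    return sum(t * p for t, p in zip(times, principal_payments)) / total

# Hypothetical amortising bond: 1000 of principal paid over 4 years.
times = [1.0, 2.0, 3.0, 4.0]
principal = [100.0, 200.0, 300.0, 400.0]
print(weighted_average_life(times, principal))  # 3.0
```

Under a CPR assumption you would first project the paydown profile, then feed the resulting principal schedule into this formula.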
|
From: Michael (D. portal) <mi...@da...> - 2023-09-15 17:50:11
|
On Fri, Sep 15, 2023, 12:57 PM Michael (DataDriven portal) <mi...@da...> wrote:
> Hi Ben:
>
> Thanks a lot. I will use that. I am also wondering if WAL (weighted average life) for a bond is calculated by QuantLib (I could not find it in the documentation), which is defined here: https://www.investopedia.com/terms/w/weightedaveragelife.asp.
>
> Thanks,
>
> Michael
|
|
From: Ben W. <ben...@ma...> - 2023-09-14 20:58:06
|
This is from the python documentation
import QuantLib as ql
import numpy as np
import matplotlib.pyplot as plt
X = [1., 2., 3., 4., 5.]
Y = [0.5, 0.6, 0.7, 0.8, 0.9]
methods = {
    'Linear Interpolation': ql.LinearInterpolation(X, Y),
    'LogLinearInterpolation': ql.LogLinearInterpolation(X, Y),
    'CubicNaturalSpline': ql.CubicNaturalSpline(X, Y),
    'LogCubicNaturalSpline': ql.LogCubicNaturalSpline(X, Y),
    'ForwardFlatInterpolation': ql.ForwardFlatInterpolation(X, Y),
    'BackwardFlatInterpolation': ql.BackwardFlatInterpolation(X, Y),
    'LogParabolic': ql.LogParabolic(X, Y)
}
xx = np.linspace(1, 10)
fig = plt.figure(figsize=(15, 4))
plt.scatter(X, Y, label='Original Data')
for name, i in methods.items():
    yy = [i(x, allowExtrapolation=True) for x in xx]
    plt.plot(xx, yy, label=name)
plt.legend()
|
|
From: Michael (D. portal) <mi...@da...> - 2023-09-14 18:34:04
|
Hi All,

I am looking for some QuantLib examples on how to calculate a bond's I-spread that matches Bloomberg's calculation (as described below). Any help is greatly appreciated!

Bloomberg's I-spread is calculated like this: find the 2 swap rate quotes nearest the bond maturity. Linearly interpolate to get the swap rate at the bond's maturity. (Unless you happen to have a swap rate quote exactly at the bond's maturity.) I-spread = interpolated swap rate - the bond's conventional yield.

Thanks,

Michael
|
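The recipe Michael describes translates almost line-for-line into code. A stdlib sketch with hypothetical quotes (note that sign conventions vary across sources; this version returns bond yield minus the interpolated swap rate):

```python
def i_spread(bond_maturity, bond_yield, swap_tenors, swap_rates):
    """Linearly interpolate the swap curve at the bond's maturity and
    take the difference to the bond's conventional yield.
    Tenors are in years and must be sorted ascending."""
    pairs = list(zip(swap_tenors, swap_rates))
    for (t0, r0), (t1, r1) in zip(pairs, pairs[1:]):
        if t0 <= bond_maturity <= t1:
            w = (bond_maturity - t0) / (t1 - t0)
            return bond_yield - (r0 + w * (r1 - r0))
    raise ValueError("bond maturity outside the swap quote range")

# Hypothetical quotes: 5y swap at 3.0%, 7y at 3.2%; bond matures in
# 6 years with a 3.6% conventional yield -> 50 bp over the curve.
print(round(i_spread(6.0, 0.036, [5.0, 7.0], [0.030, 0.032]), 6))  # 0.005
```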
|
From: Steve H. <war...@gm...> - 2023-09-13 10:07:11
|
Hi Marcin,

Thanks for the reply. It does provide valuable information. According to your previous discussion, it looks like I have to make two independent instances of the index: one for building the term structure, and one for instrument valuation. However, I still have some confusion on the valuation part. When calling NPV in my example, the fixing rate on the valuation date should not be used, since it's not the applicable fixing date for the relevant cash flows. I will try to look deeper into this by reviewing the source code.

Many thanks for the help.

Best,
Steve

On Wed, Sep 13, 2023 at 5:17 PM Marcin Rybacki <mry...@gm...> wrote:
> Hi Steve,
>
> That fixing that you are passing on 2019/11/12 (also the evaluation date) is actually quite important, because it is being set on the swap instruments you are using to bootstrap the curve. Hence, it will affect the NPV of the swap, which might not necessarily need that particular fixing.
>
> See this related issue for more color: https://github.com/lballabio/QuantLib/issues/1213
>
> Hope this helps.
>
> Kind regards,
> Marcin
>
> On Wed, 13 Sept 2023 at 10:44, Steve Hsieh <war...@gm...> wrote:
>> Dear all,
>>
>> I have a question regarding addFixing. I am trying to value a portfolio of interest rate swaps; I add all the existing and required fixing rates of my swaps to the index, but I get unexpected NPV and DV01 for some trades. My original code is in C++; for convenience, I reproduce it in Python below. The story is that I add a fixing rate on 2019/11/12 which is actually unnecessary for this swap, and it changes the NPV of the swap. I think the pricer will automatically choose past fixing or forecasting based on the swap's fixing dates. Since it's not a required fixing rate, adding it should have no impact on the NPV.
>> Do I miss anything, or am I making a mistake? I need your advice and help. Many thanks.
>>
>> Best,
>> Steve
>>
>> from IPython.display import display
>> import QuantLib as ql
>> import pandas as pd
>>
>> ql.Settings.instance().evaluationDate = ql.Date(12, 11, 2019)
>> refDate = ql.Settings.instance().evaluationDate
>> cashFlowCdr = ql.JointCalendar(
>>     ql.UnitedStates(ql.UnitedStates.Settlement),
>>     ql.UnitedStates(ql.UnitedStates.LiborImpact))
>> yts = ql.RelinkableYieldTermStructureHandle()
>>
>> instruments = [
>>     ('depo', '3M', 0.020),
>>     ('swap', '1Y', 0.031),
>>     ('swap', '2Y', 0.032),
>>     ('swap', '5Y', 0.035)
>> ]
>>
>> helpers = ql.RateHelperVector()
>> index = ql.USDLibor(ql.Period('3M'), yts)
>> for instrument, tenor, rate in instruments:
>>     if instrument == 'depo':
>>         helpers.append(ql.DepositRateHelper(rate, index))
>>     if instrument == 'fra':
>>         monthsToStart = ql.Period(tenor).length()
>>         helpers.append(ql.FraRateHelper(rate, monthsToStart, index))
>>     if instrument == 'swap':
>>         swapIndex = ql.UsdLiborSwapIsdaFixAm(ql.Period(tenor))
>>         helpers.append(ql.SwapRateHelper(rate, swapIndex))
>> curve = ql.PiecewiseLogCubicDiscount(0, cashFlowCdr, helpers, ql.ActualActual())
>>
>> yts.linkTo(curve)
>> engine = ql.DiscountingSwapEngine(yts)
>> calendar = ql.TARGET()
>> start = ql.Date(11, 9, 2019)
>> maturity = calendar.advance(start, ql.Period('4y'))
>> fixedSchedule = ql.MakeSchedule(start, maturity, ql.Period('3M'))
>> floatSchedule = ql.MakeSchedule(start, maturity, ql.Period('3M'))
>>
>> index.addFixing(ql.Date(9, 9, 2019), 0.020)
>> index.addFixing(ql.Date(12, 11, 2019), 0.030)
>>
>> swap = ql.VanillaSwap(
>>     ql.VanillaSwap.Payer, 1000000,
>>     fixedSchedule, 0.01, ql.Thirty360(),
>>     floatSchedule, index, 0, ql.Actual360()
>> )
>> swap.setPricingEngine(engine)
>> npv = swap.NPV()
>> print(npv)
|
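The issue Marcin links boils down to shared state: the same index object handed to the rate helpers also carries the fixing history, so a fixing added for valuation leaks into the bootstrap. The two-instance separation Steve describes can be sketched in plain Python (a toy stand-in, no QuantLib here):

```python
class Index:
    """Minimal stand-in for a rate index with a fixing history."""
    def __init__(self):
        self.fixings = {}

    def add_fixing(self, date, rate):
        self.fixings[date] = rate

# One shared instance: a fixing meant only for the swap also becomes
# visible to the curve-bootstrap helpers.
shared = Index()
bootstrap_index = shared
swap_index = shared
swap_index.add_fixing("2019-11-12", 0.030)
print("2019-11-12" in bootstrap_index.fixings)  # True -> curve is affected

# Two independent instances keep the valuation fixing out of the bootstrap.
bootstrap_index = Index()
swap_index = Index()
swap_index.add_fixing("2019-11-12", 0.030)
print("2019-11-12" in bootstrap_index.fixings)  # False
```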
|
From: Luigi B. <lui...@gm...> - 2023-09-13 09:36:18
|
I just mean that you're taking points at t=1/12, 2/12, etc. and seeing
whether they're inside the SD of the expected distribution at t=1. That
looks to me like comparing apples to oranges. If you have some use for
that, though, good.
Luigi
On Wed, Sep 13, 2023 at 2:55 AM U.Mutlu <um...@mu...> wrote:
> The said table is intended for this practical situation:
> You know (or will know) the stock price only at the end
> of the timesteps, and are interested to know from these
> datapoints the expectancy% at end for the pre-calculated
> range from -1SD to +1SD for t of the initial stock price.
>
> It's the same procedure like for timeSteps=1 (which is easier to
> understand).
>
> Normally you are interested in the outcome after t (ie. timeSteps=1 giving
> the
> usual 68.27%).
> But then you would also like to know the result if you
> watch more datapoints than just the one at the end,
> ie. in your example timeSteps=12 in t=1y.
>
> Your observation below is correct I think (except
> the sentence "But that makes no sense"), but that
> is all already accounted for in the table, IMO.
>
> For timeSteps=12 in your example the expectancy is 83.5962% (ie.
> p=0.835962).
> Ie. 68.27% now suddenly becomes 83.5962% only b/c we watch the stock price
> more times (1 vs 12 times, at end of equal time intervals).
>
> Do you get a different value for the above said situation?
> Or asked differently: how should the table ideally be made up instead?
>
> I'll also create a cumulative version for 0 to t_i.
> Maybe you mean that one.
>
> This method can be important for bots who watch the
> stock price at such fixed intervals, like weeks or months,
> and need to know the probability... and/or for (auto-)trading
> systems based on such stochastics.
>
>
> Luigi Ballabio wrote on 09/11/23 19:01:
> > in your sample implementation, you generate paths from 0 to t using
> > a number of timesteps. For the sake of example, let's say t=1 year and
> you
> > use 12 timesteps, so one step per month. You calculate the price plus or
> > minus 1SD, where 1SD is the standard deviation of the expected
> distribution at
> > t=1. Then you do the following to calculate percentage hits:
> >
> > for (size_t j = 1; j <= timeSteps; ++j)
> > {
> > const auto Sx = samplePath.at(j);
> > if (Sx >= m1SD && Sx <= p1SD) ++cHit;
> > }
> >
> > that is, for every point in the path you check whether it's between S -
> 1SD
> > and S + 1SD. But that makes no sense. You can only check the point at
> the
> > end of the path, because it's the only point that belongs to the expected
> > distribution at t=1 year. In the case of 12 timesteps, the point at j=1
> > belongs to the expected distribution at t=1 month, which has a
> different, much
> > smaller SD. The point at j=2 belongs to the expected distribution at t=2
> > months, which has a different SD. Of course your percentages increase
> with
> > the timesteps, because you're sampling points from all times between 0
> and 1
> > (which belong to much narrower distributions) and compare them with the
> SD at
> > t=1 year. If you compare the points at each timestep with their correct
> > respective distributions, you'll find about the same percentage (the
> usual
> > 68%) falling within the respective SDs.
> >
> > Hope this helps clear this up—I won't be able to give much more time to
> this,
> > if at all.
>
>
>
>
>
|
|
From: Marcin R. <mry...@gm...> - 2023-09-13 09:32:54
|
Hi Jonathan,

Great that you were able to find a solution. I am not sure the UFR curve would be my first choice, because you have to be careful with the selection of the right parameters there. But if done right, it should do the job.

When it comes to the interpolation you mentioned, I did see some issues with convergence when bootstrapping a (Euribor) swap curve including FRA instruments - which can happen with cubic schemes in general.

Kind regards,
Marcin

On Mon, 11 Sept 2023 at 08:39, Jonathan George <Jon...@ni...> wrote:
> Hi Marcin,
>
> Thank you for your input. Apologies for the delay in response.
>
> Before your mail I settled on the following solution:
>
> 1. If I am correct in assuming, the constant zero rate post final bond
>    maturity will be ‘fixed’ indefinitely if the preceding forward rate is
>    equal to the zero rate.
> 2. I used the ultimate forward term structure and set the ultimate forward
>    rate equal to the sampled final zero rate at t+1 post final bond maturity -
>    https://rkapl123.github.io/QLAnnotatedSource/dc/dbf/class_quant_lib_1_1_ultimate_forward_term_structure.html
>
> It seems to provide me with the curve I was intending. Any thoughts around this?
>
> A question for you on your linear/non-linear point: have you had any
> attempt/success at using Piecewise Monotone Convex interpolation using Python?
>
> Thanks for your help!
>
> Jonathan George
> Quantitative Developer
> T: +27 21 901 1363
> 36 Hans Strijdom Avenue, Foreshore, Cape Town, Western Cape, 8001, South Africa
> www.ninetyone.com <http://ninetyone.com/>
> Follow us <https://www.linkedin.com/company/ninetyone>
>
> *From:* Marcin Rybacki <mry...@gm...>
> *Sent:* 07 September 2023 17:56
> *To:* Jonathan George <Jon...@ni...>
> *Cc:* qua...@li...
> *Subject:* Re: [Quantlib-users] Bootstrapped Yield Curve extrapolation help
>
> Hi Jonathan,
>
> I think that, at the moment, the library only offers flat forward extrapolation.
> However, you could use the following workaround:
>
> 1) Build the curve, based on the original input rates, using linear
>    log-discount interpolation without enabling the extrapolation.
> 2) Retrieve the nodes of the curve, which I assume will be (in Python) a
>    list of tuples with dates and discount factors.
> 3) From this curve calculate a zero rate for the last node:
>    last_zero_rate = crv.zeroRate(crv.maxTime(), ql.Continuous).rate()
> 4) Calculate a discount factor for the very last QuantLib date,
>    max_dt = ql.Date.maxDate(), by taking the exponent of the year fraction
>    from the reference date to the max date, times the last zero rate, times
>    -1, and append this last tuple (date and discount factor) to the nodes.
> 5) Reconstruct the curve using ql.DiscountCurve(dates, discounts,
>    day_counter) and enable extrapolation.
>
> I think this should give the outcome you're looking for.
>
> Please note that the above approach will only work for linear schemes.
> Applying it with e.g. cubic splines will lead to a different solution of
> the tridiagonal system, and the resulting curves will be slightly different.
> Another downside is that bumping quote handles to obtain sensitivities
> will yield incorrect outcomes in the extrapolation region - so instead you
> would have to rebuild the curve to get the deltas.
>
> Hope this helps.
>
> Kind regards,
> Marcin
>
> On Thu, 24 Aug 2023 at 15:32, Jonathan George <Jon...@ni...> wrote:
> > Hi All,
> >
> > I am hoping that I might be able to get some help from this forum.
> >
> > Context:
> > I am trying to bootstrap a yield curve using piecewise flat forward
> > interpolation between market-obtained rates. Here is my output:
> >
> > I am using Python and my curve construction is quite straightforward, with
> > extrapolation enabled (I understand I am committing the cardinal sin by not
> > posting my code, but I'm hoping this is theoretical enough a question for a
> > straightforward answer).
> >
> > Question:
> > Is it possible to fix the final zero rate post maturity of the last bond?
> > It seems that enableExtrapolation is fixing the final forward rate, making
> > the zero rate decay over time. I am trying to avoid creating a custom
> > function to return zero and forward rates post maturity of the final bond.
> > I have tried creating two curves (a linear flat forward curve and a regular
> > flat forward curve) and combining them using CompositeZeroYieldStructure;
> > however, I’m not sure which binary function to pass into the function.
> >
> > Any help would be appreciated.
> >
> > Regards,
> > Jonathan George
> > Quantitative Developer
> > T: +27 21 901 1363
> > 36 Hans Strijdom Avenue, Foreshore, Cape Town, 8001
> > www.ninetyone.com <http://ninetyone.com/>
> > Follow us <https://www.linkedin.com/company/ninetyone>
> >
> > _______________________________________________
> > QuantLib-users mailing list
> > Qua...@li...
> > https://lists.sourceforge.net/lists/listinfo/quantlib-users
|
|
From: Marcin R. <mry...@gm...> - 2023-09-13 09:17:29
|
Hi Steve,

That fixing that you are passing on 2019/11/12 (also the evaluation date) is actually quite important, because it is being set on the swap instruments you are using to bootstrap the curve. Hence, it will affect the NPV of the swap, which might not necessarily need that particular fixing.

See this related issue for more color:
https://github.com/lballabio/QuantLib/issues/1213

Hope this helps.

Kind regards,
Marcin

On Wed, 13 Sept 2023 at 10:44, Steve Hsieh <war...@gm...> wrote:
> Dear all,
>
> I have a question regarding addFixing. I try to value a portfolio of interest rate swaps. I add all the existing and required fixing rates of my swaps to the index; however, I get unexpected NPV and DV01 for some trades. My original code is in C++; for convenience, I have reproduced it in Python as below. The story is that I add a fixing rate on 2019/11/12 which is actually unnecessary for this swap, and it changes the NPV of this swap. I thought the pricer would automatically choose past fixing or forecasting by the swap’s fixing dates. Since it’s not a required fixing rate, adding such a rate should have no impact on the NPV.
> Do I miss anything, or did I make a mistake?
> I need your advice and help.
> Many thanks.
>
> Best,
> Steve
>
> from IPython.display import display
> import QuantLib as ql
> import pandas as pd
>
> ql.Settings.instance().evaluationDate = ql.Date(12, 11, 2019)
> refDate = ql.Settings.instance().evaluationDate
> cashFlowCdr = ql.JointCalendar(ql.UnitedStates(ql.UnitedStates.Settlement),
>                                ql.UnitedStates(ql.UnitedStates.LiborImpact))
> yts = ql.RelinkableYieldTermStructureHandle()
>
> instruments = [
>     ('depo', '3M', 0.020),
>     ('swap', '1Y', 0.031),
>     ('swap', '2Y', 0.032),
>     ('swap', '5Y', 0.035)
> ]
>
> helpers = ql.RateHelperVector()
> index = ql.USDLibor(ql.Period('3M'), yts)
> for instrument, tenor, rate in instruments:
>     if instrument == 'depo':
>         helpers.append(ql.DepositRateHelper(rate, index))
>     if instrument == 'fra':
>         monthsToStart = ql.Period(tenor).length()
>         helpers.append(ql.FraRateHelper(rate, monthsToStart, index))
>     if instrument == 'swap':
>         swapIndex = ql.UsdLiborSwapIsdaFixAm(ql.Period(tenor))
>         helpers.append(ql.SwapRateHelper(rate, swapIndex))
> curve = ql.PiecewiseLogCubicDiscount(0, cashFlowCdr, helpers, ql.ActualActual())
>
> yts.linkTo(curve)
> engine = ql.DiscountingSwapEngine(yts)
> calendar = ql.TARGET()
> start = ql.Date(11, 9, 2019)
> maturity = calendar.advance(start, ql.Period('4y'))
> fixedSchedule = ql.MakeSchedule(start, maturity, ql.Period('3M'))
> floatSchedule = ql.MakeSchedule(start, maturity, ql.Period('3M'))
>
> index.addFixing(ql.Date(9, 9, 2019), 0.020)
> index.addFixing(ql.Date(12, 11, 2019), 0.030)
>
> swap = ql.VanillaSwap(
>     ql.VanillaSwap.Payer, 1000000,
>     fixedSchedule, 0.01, ql.Thirty360(),
>     floatSchedule, index, 0, ql.Actual360()
> )
> swap.setPricingEngine(engine)
> npv = swap.NPV()
> print(npv)
>
> _______________________________________________
> QuantLib-users mailing list
> Qua...@li...
> https://lists.sourceforge.net/lists/listinfo/quantlib-users
|
|
From: Steve H. <war...@gm...> - 2023-09-13 08:42:57
|
Dear all,
I have a question regarding addFixing. I try to value a portfolio of
interest rate swaps. I add all the existing and required fixing rates of my
swaps to the index; however, I get unexpected NPV and DV01 for some trades.
My original code is in C++; for convenience, I have reproduced it in Python
as below. The story is that I add a fixing rate on 2019/11/12 which is
actually unnecessary for this swap, and it changes the NPV of this swap. I
thought the pricer would automatically choose past fixing or forecasting by
the swap’s fixing dates. Since it’s not a required fixing rate, adding such
a rate should have no impact on the NPV.
Do I miss anything, or did I make a mistake?
I need your advice and help.
Many thanks.
Best,
Steve
from IPython.display import display
import QuantLib as ql
import pandas as pd
ql.Settings.instance().evaluationDate= ql.Date(12, 11, 2019)
refDate=ql.Settings.instance().evaluationDate
cashFlowCdr = ql.JointCalendar(ql.UnitedStates(ql.UnitedStates.Settlement),
ql.UnitedStates(ql.UnitedStates.LiborImpact))
yts = ql.RelinkableYieldTermStructureHandle()
instruments = [
('depo', '3M', 0.020),
('swap', '1Y', 0.031),
('swap', '2Y', 0.032),
('swap', '5Y', 0.035)
]
helpers = ql.RateHelperVector()
index = ql.USDLibor(ql.Period('3M'),yts)
for instrument, tenor, rate in instruments:
    if instrument == 'depo':
        helpers.append(ql.DepositRateHelper(rate, index))
    if instrument == 'fra':
        monthsToStart = ql.Period(tenor).length()
        helpers.append(ql.FraRateHelper(rate, monthsToStart, index))
    if instrument == 'swap':
        swapIndex = ql.UsdLiborSwapIsdaFixAm(ql.Period(tenor))
        helpers.append(ql.SwapRateHelper(rate, swapIndex))
curve = ql.PiecewiseLogCubicDiscount(0, cashFlowCdr, helpers, ql.ActualActual())
yts.linkTo(curve)
engine = ql.DiscountingSwapEngine(yts)
calendar = ql.TARGET()
start = ql.Date(11,9,2019)
maturity = calendar.advance(start, ql.Period('4y'))
fixedSchedule = ql.MakeSchedule(start, maturity, ql.Period('3M'))
floatSchedule = ql.MakeSchedule(start, maturity, ql.Period('3M'))
index.addFixing( ql.Date(9, 9, 2019), 0.020)
index.addFixing( ql.Date(12, 11, 2019), 0.030)
swap = ql.VanillaSwap(
    ql.VanillaSwap.Payer, 1000000,
    fixedSchedule, 0.01, ql.Thirty360(),
    floatSchedule, index, 0, ql.Actual360()
)
swap.setPricingEngine(engine)
npv=swap.NPV()
print(npv)
|
|
From: U.Mutlu <um...@mu...> - 2023-09-13 00:56:11
|
The said table is intended for the following practical situation:
you know (or will know) the stock price only at the end of each
of the timesteps, and you want to know, from these datapoints, the
expected hit percentage for the pre-calculated range from -1SD to
+1SD (the SD at time t) around the initial stock price.
It's the same procedure as for timeSteps=1 (which is easier to understand).
Normally you are interested in the outcome after t (ie. timeSteps=1, giving
the usual 68.27%).
But then you would also like to know the result if you
watch more datapoints than just the one at the end,
ie. in your example timeSteps=12 in t=1y.
Your observation below is correct, I think (except for
the sentence "But that makes no sense"), but all of that
is already accounted for in the table, IMO.
For timeSteps=12 in your example the expectancy is 83.5962% (ie. p=0.835962).
Ie. 68.27% now suddenly becomes 83.5962% only because we watch the stock price
more times (1 vs 12 times, at the end of equal time intervals).
Do you get a different value for the situation described above?
Or, asked differently: how should the table ideally be made up instead?
I'll also create a cumulative version for 0 to t_i.
Maybe you mean that one.
This method can be important for bots that watch the
stock price at such fixed intervals, like weeks or months,
and need to know the probability, and/or for (auto-)trading
systems based on such stochastics.
Luigi Ballabio wrote on 09/11/23 19:01:
> in your sample implementation, you generate paths from 0 to t using
> a number of timesteps. For the sake of example, let's say t=1 year and you
> use 12 timesteps, so one step per month. You calculate the price plus or
> minus 1SD, where 1SD is the standard deviation of the expected distribution at
> t=1. Then you do the following to calculate percentage hits:
>
> for (size_t j = 1; j <= timeSteps; ++j)
> {
> const auto Sx = samplePath.at(j);
> if (Sx >= m1SD && Sx <= p1SD) ++cHit;
> }
>
> that is, for every point in the path you check whether it's between S - 1SD
> and S + 1SD. But that makes no sense. You can only check the point at the
> end of the path, because it's the only point that belongs to the expected
> distribution at t=1 year. In the case of 12 timesteps, the point at j=1
> belongs to the expected distribution at t=1 month, which has a different, much
> smaller SD. The point at j=2 belongs to the expected distribution at t=2
> months, which has a different SD. Of course your percentages increase with
> the timesteps, because you're sampling points from all times between 0 and 1
> (which belong to much narrower distributions) and compare them with the SD at
> t=1 year. If you compare the points at each timestep with their correct
> respective distributions, you'll find about the same percentage (the usual
> 68%) falling within the respective SDs.
>
> Hope this helps clear this up—I won't be able to give much more time to this,
> if at all.
|
|
From: U.Mutlu <um...@mu...> - 2023-09-12 14:15:54
|
Dear Luigi & list,

Thx, I think the problem has now been found and also fixed, as follows: we had some IPs and IP ranges blocked in our firewall on the server because of massive hacking and DoS attempts coming from them sometime in the past. It seems that the gmail servers are now using some of these addresses... :-) After removing them from the firewall it now happily works again!

Sorry for the unnecessary bandwidth this problem has caused on the list.

Luigi Ballabio wrote on 09/12/23 14:15:
> You didn't receive any emails to your email address because I didn't write them. I've never received your email telling me of your gmail address, or possibly I misplaced it. Please try writing to me from your gmail address.
>
> As a general rule, I find it unlikely that the millions of users of gmail around the world never noticed that it's dropping mails, especially as it's only when I write to you that I get messages like this:
>
> Error Icon
>
> Delivery incomplete
>
> There was a temporary problem delivering your message to *um...@mu...*. Gmail will retry for 20 more hours. You'll be notified if the delivery fails permanently.
> LEARN MORE <https://support.google.com/mail/answer/7720>
>
> The response was:
>
> The recipient server did not accept our requests to connect. Learn more at https://support.google.com/mail/answer/7720 [mail.mutluit.com <http://mail.mutluit.com/>. 195.201.130.20 <http://195.201.130.20/>: timed out] [mutluit.com <http://mutluit.com/>. 195.201.130.20 <http://195.201.130.20/>: timed out]
>
> On Tue, Sep 12, 2023 at 10:13 AM U.Mutlu <um...@mu... <mailto:um...@mu...>> wrote:
> > Luigi, on last Friday I gave you a private "gmail" address (we...@gm... <mailto:we...@gm...>), and not even there is any mail from you.
> >
> > Ie. the sender's (yours) and the recipient's mail servers both are the same gmail server(s), and it still could not deliver the mail from you.
> >
> > This indicates to me that the gmail server(s) must be buggy.
...
|
|
From: Luigi B. <lui...@gm...> - 2023-09-12 12:19:01
|
I had seen your table, but please, read again the part of my email where I explain that you're calculating it by taking points from different expected distributions.

Luigi

On Mon, Sep 11, 2023 at 8:29 PM U.Mutlu <um...@mu...> wrote:
> U.Mutlu wrote on 09/11/23 19:41:
> > Luigi, your below posting still has not made it to my mail client, maybe the mailman takes longer to send it out to the list members, or again your mail server could not deliver to my mail server. I just saw your posting only in the archive.
> >
> > Luigi B. <lui...@gm...> - 2023-09-11 17:01:32
> > > for (size_t j = 1; j < timeSteps; ++j)
> >
> > Attention! The above loop is incorrect!
> > It must be
> > for (size_t j = 1; j <= timeSteps; ++j)
> > That's very important!
> >
> > I think many QuantLib users even use this buggy one:
> > for (size_t j = 0; j < timeSteps; ++j)
> > but that one is totally wrong!
> >
> > I'll reply later to the rest of your posting.
>
> I created the table below that shows the expected values for the various timeSteps. Its correctness was also confirmed by simulations. If that table does not answer your questions, let me know and I'll try to clarify.
>
> "
> Expected HitRate% for 1SD around the initial stock price for various GBM timeSteps.
> This table is an extension of https://en.wikipedia.org/wiki/68–95–99.7_rule as there only timeStep=1 is given.
> These numbers are exact values, not the result of simulations.
> Params used: S=100 rPct=0 qPct=0 IV=30(s=0.3) DIY=365.00 DTE=365(t=1 dt=1 dd=365) zFm=-1(SxFm=74.081822) zTo=+1(SxTo=134.985881)
>
> timeSteps   Expected_HitRate
>         1   68.2689%   (cf. the above wiki link)
>         2   76.2695%
>         3   79.2918%
>         4   80.7919%
>         5   81.6648%
>         6   82.2304%
>         7   82.6265%
>         8   82.9197%
>         9   83.1461%
>        10   83.3265%
>        15   83.8653%
>        20   84.1337%
>        25   84.2942%
>        30   84.4010%
>        35   84.4771%
>        40   84.5341%
>        45   84.5785%
>        50   84.6139%
>        60   84.6671%
>        70   84.7050%
>        80   84.7334%
>        90   84.7555%
>       100   84.7732%
>       500   84.9003%
>      1000   84.9162%
>     10000   84.9305%
>    100000   84.9319%
>   1000000   84.9320%
>  10000000   84.9320%
> "
>
> Regarding Ito's lemma you write:
> > You talk as if Ito's lemma was something that one chooses to apply because one likes it. It's not. Just as there is a rule for performing a change of variable in an integral or a derivative, there is also a rule for performing a change of variable in a stochastic differential equation, and that's Ito's lemma. If you change variables from x to log(x) or whatever else, you have to use it. If you don't change variables, you don't have to use it.
>
> As you say, I indeed don't need to change any variables, so then I don't need this Ito's lemma stuff. I'm a practitioner from the field, not a theoretician. I just know this, and can even prove it: Ito's lemma is definitely not needed, neither in GBM nor in Black-Scholes-Merton (BSM). At the moment I can't say anything about the possible use of Ito's lemma in any other field beyond these two, if any.
|
|
From: Luigi B. <lui...@gm...> - 2023-09-12 12:15:48
|
You didn't receive any emails to your email address because I didn't write them. I've never received your email telling me of your gmail address, or possibly I misplaced it. Please try writing to me from your gmail address.

As a general rule, I find it unlikely that the millions of users of gmail around the world never noticed that it's dropping mails, especially as it's only when I write to you that I get messages like this:

[image: Error Icon]

Delivery incomplete

There was a temporary problem delivering your message to *um...@mu...*. Gmail will retry for 20 more hours. You'll be notified if the delivery fails permanently.
LEARN MORE <https://support.google.com/mail/answer/7720>

The response was:

The recipient server did not accept our requests to connect. Learn more at https://support.google.com/mail/answer/7720 [mail.mutluit.com. 195.201.130.20: timed out] [mutluit.com. 195.201.130.20: timed out]

On Tue, Sep 12, 2023 at 10:13 AM U.Mutlu <um...@mu...> wrote:
> Luigi, on last Friday I gave you a private "gmail" address (we...@gm...), and not even there is any mail from you.
>
> Ie. the sender's (yours) and the recipient's mail servers both are the same gmail server(s), and it still could not deliver the mail from you.
>
> This indicates to me that the gmail server(s) must be buggy.
>
> U.Mutlu wrote on 09/10/23 12:39:
> > Luigi, your below posting of today morning as well I did not receive, but found only in the archive.
> >
> > Normally in case of such problems the sending mail server tries to resend it later, but your mail server seems not to try to resend it.
> >
> > Look what google says at https://support.google.com/mail/answer/6596#zippy=%2Crecipient-server-did-not-accept-our-requests
> >
> > It says "Try sending the email again later.", ie. the user has to do it manually. That's very funny :-)
> >
> > "
> > "Recipient server did not accept our requests"
> > Why your message bounced
> > You'll see this error message if Gmail can’t connect to your recipient’s email server.
> >
> > What you can do
> > The problem usually goes away quickly without you doing anything. Try sending the email again later.
> >
> > If you keep getting the error:
> >
> > Check if there are any mistakes in the recipient's email address.
> > Contact the customer support team of your recipient's email provider.
> > If you got this error while emailing someone at your work, school, or other organization, contact your administrator.
> > "
> >
> > I checked my server, but could not find anything related in the logs. I have now disabled IPv6, as I don't need it on that server, and rebooted the server. I will also contact my hosting provider about this problem. Thx.
> >
> > Luigi B. <lui...@gm...> on 2023-09-10 07:12:54 wrote:
> > > I don't think the problem is in the mailing list. I was replying to your address directly, not through the mailing list, and I've been getting warnings all week like the one below. The same happened when I replied to the private email you sent to me. It seems to me the emails were all sent, but it's your server that's not behaving.
> > >
> > > Luigi
> > >
> > > Message not delivered
> > > There was a problem delivering your message to *um...@mu...*. See the technical details below.
> > > LEARN MORE <https://support.google.com/mail/answer/7720>
> > > The response was:
> > > The recipient server did not accept our requests to connect. Learn more at https://support.google.com/mail/answer/7720 [mail.mutluit.com. 195.201.130.20: timed out] [mutluit.com. 195.201.130.20: timed out]
> > >
> > > On Sat, Sep 9, 2023 at 9:33 PM U.Mutlu <um...@mu...> wrote:
> > > > I today learned that some of the recent mails posted in this mailing list did not make it to my email client. For example, the following posting by Jonathan S. on 2023-09-05 12:53:12 I only saw today on the web, when I did some searching in the archive at sourceforge: https://sourceforge.net/p/quantlib/mailman/message/37892292/
> > > >
> > > > Folks, I think such commercial sites are very unreliable for such open source projects. It's IMO a nightmare situation for all list subscribers, especially for active participants.
> > > >
> > > > What about switching the mailing list provider? Where one can also download the old postings, which IMO is not possible at this sourceforge site.
|
|
From: U.Mutlu <um...@mu...> - 2023-09-12 08:13:50
|
Luigi, on last Friday I gave you a private "gmail" address (we...@gm...), and not even there is any mail from you.

Ie. the sender's (yours) and the recipient's mail servers both are the same gmail server(s), and it still could not deliver the mail from you.

This indicates to me that the gmail server(s) must be buggy.

U.Mutlu wrote on 09/10/23 12:39:
> Luigi, your below posting of today morning as well I did not receive, but found only in the archive.
>
> Normally in case of such problems the sending mail server tries to resend it later, but your mail server seems not to try to resend it.
>
> Look what google says at https://support.google.com/mail/answer/6596#zippy=%2Crecipient-server-did-not-accept-our-requests
>
> It says "Try sending the email again later.", ie. the user has to do it manually. That's very funny :-)
>
> "
> "Recipient server did not accept our requests"
> Why your message bounced
> You'll see this error message if Gmail can’t connect to your recipient’s email server.
>
> What you can do
> The problem usually goes away quickly without you doing anything. Try sending the email again later.
>
> If you keep getting the error:
>
> Check if there are any mistakes in the recipient's email address.
> Contact the customer support team of your recipient's email provider.
> If you got this error while emailing someone at your work, school, or other organization, contact your administrator.
> "
>
> I checked my server, but could not find anything related in the logs. I have now disabled IPv6, as I don't need it on that server, and rebooted the server. I will also contact my hosting provider about this problem. Thx.
>
> Luigi B. <lui...@gm...> on 2023-09-10 07:12:54 wrote:
> > I don't think the problem is in the mailing list. I was replying to your address directly, not through the mailing list, and I've been getting warnings all week like the one below. The same happened when I replied to the private email you sent to me. It seems to me the emails were all sent, but it's your server that's not behaving.
> >
> > Luigi
> >
> > Message not delivered
> > There was a problem delivering your message to *um...@mu...*. See the technical details below.
> > LEARN MORE <https://support.google.com/mail/answer/7720>
> > The response was:
> > The recipient server did not accept our requests to connect. Learn more at https://support.google.com/mail/answer/7720 [mail.mutluit.com. 195.201.130.20: timed out] [mutluit.com. 195.201.130.20: timed out]
> >
> > On Sat, Sep 9, 2023 at 9:33 PM U.Mutlu <um...@mu...> wrote:
> > > I today learned that some of the recent mails posted in this mailing list did not make it to my email client. For example, the following posting by Jonathan S. on 2023-09-05 12:53:12 I only saw today on the web, when I did some searching in the archive at sourceforge: https://sourceforge.net/p/quantlib/mailman/message/37892292/
> > >
> > > Folks, I think such commercial sites are very unreliable for such open source projects. It's IMO a nightmare situation for all list subscribers, especially for active participants.
> > >
> > > What about switching the mailing list provider? Where one can also download the old postings, which IMO is not possible at this sourceforge site.
|
|
From: U.Mutlu <um...@mu...> - 2023-09-11 18:29:17
|
U.Mutlu wrote on 09/11/23 19:41:
> Luigi, your below posting still has not made it to my mail client, maybe the mailman takes longer to send it out to the list members, or again your mail server could not deliver to my mail server. I just saw your posting only in the archive.
>
> Luigi B. <lui...@gm...> - 2023-09-11 17:01:32
> > for (size_t j = 1; j < timeSteps; ++j)
>
> Attention! The above loop is incorrect!
> It must be
> for (size_t j = 1; j <= timeSteps; ++j)
> That's very important!
>
> I think many QuantLib users even use this buggy one:
> for (size_t j = 0; j < timeSteps; ++j)
> but that one is totally wrong!
>
> I'll reply later to the rest of your posting.

I created the table below that shows the expected values for the various timeSteps. Its correctness was also confirmed by simulations. If that table does not answer your questions, let me know and I'll try to clarify.

"
Expected HitRate% for 1SD around the initial stock price for various GBM timeSteps.
This table is an extension of https://en.wikipedia.org/wiki/68–95–99.7_rule as there only timeStep=1 is given.
These numbers are exact values, not the result of simulations.
Params used: S=100 rPct=0 qPct=0 IV=30(s=0.3) DIY=365.00 DTE=365(t=1 dt=1 dd=365) zFm=-1(SxFm=74.081822) zTo=+1(SxTo=134.985881)

timeSteps   Expected_HitRate
        1   68.2689%   (cf. the above wiki link)
        2   76.2695%
        3   79.2918%
        4   80.7919%
        5   81.6648%
        6   82.2304%
        7   82.6265%
        8   82.9197%
        9   83.1461%
       10   83.3265%
       15   83.8653%
       20   84.1337%
       25   84.2942%
       30   84.4010%
       35   84.4771%
       40   84.5341%
       45   84.5785%
       50   84.6139%
       60   84.6671%
       70   84.7050%
       80   84.7334%
       90   84.7555%
      100   84.7732%
      500   84.9003%
     1000   84.9162%
    10000   84.9305%
   100000   84.9319%
  1000000   84.9320%
 10000000   84.9320%
"

Regarding Ito's lemma you write:
> You talk as if Ito's lemma was something that one chooses to apply because one likes it. It's not. Just as there is a rule for performing a change of variable in an integral or a derivative, there is also a rule for performing a change of variable in a stochastic differential equation, and that's Ito's lemma. If you change variables from x to log(x) or whatever else, you have to use it. If you don't change variables, you don't have to use it.

As you say, I indeed don't need to change any variables, so then I don't need this Ito's lemma stuff. I'm a practitioner from the field, not a theoretician. I just know this, and can even prove it: Ito's lemma is definitely not needed, neither in GBM nor in Black-Scholes-Merton (BSM). At the moment I can't say anything about the possible use of Ito's lemma in any other field beyond these two, if any.
|
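For what it's worth, the table's entries can be reproduced analytically: each one is the average, over the observation times u_j = j·t/n, of the probability that the log-return lies in the fixed ±σ√t band, with the log-return taken as zero-mean N(0, σ²u). The zero mean matches the quoted 68.2689% at timeSteps=1 (which is 2Φ(1)−1 exactly); with the −σ²u/2 drift correction included, the numbers come out slightly lower. A sketch in plain Python:

```python
import math
from statistics import NormalDist

sigma, t = 0.3, 1.0
band = sigma * math.sqrt(t)   # half-width of the log band, from the SD at t=1
Phi = NormalDist().cdf

def expected_hit_rate(n):
    # average of P(|log(S_u/S0)| <= band) over u_j = j*t/n, j = 1..n,
    # with log(S_u/S0) ~ N(0, sigma^2 u)
    total = 0.0
    for j in range(1, n + 1):
        u = t * j / n
        z = band / (sigma * math.sqrt(u))   # reduces to sqrt(n / j)
        total += 2.0 * Phi(z) - 1.0
    return total / n

for n in (1, 2, 12, 100):
    print(n, f"{expected_hit_rate(n):.4%}")
```

Running this reproduces the tabulated values (68.2689% at n=1, 76.2695% at n=2, and about 83.60% at n=12, the figure quoted earlier in the thread).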
|
From: U.Mutlu <um...@mu...> - 2023-09-11 17:41:57
|
Luigi, your below posting still has not made it to my mail client, maybe the mailman takes longer to send it out to the list members, or again your mail server could not deliver to my mail server. I just saw your posting only in the archive.

Luigi B. <lui...@gm...> - 2023-09-11 17:01:32
> for (size_t j = 1; j < timeSteps; ++j)

Attention! The above loop is incorrect!
It must be
for (size_t j = 1; j <= timeSteps; ++j)
That's very important!

I think many QuantLib users even use this buggy one:
for (size_t j = 0; j < timeSteps; ++j)
but that one is totally wrong!

I'll reply later to the rest of your posting.
|
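For reference on the indexing: a path produced by a QuantLib path generator with N time steps carries N+1 points, with index 0 holding the initial value and index N the value at maturity; hence the simulated points are j = 1 through timeSteps inclusive. A quick check in QuantLib-Python (the parameters are my own toy values):

```python
import QuantLib as ql

steps, horizon = 12, 1.0
process = ql.GeometricBrownianMotionProcess(100.0, 0.0, 0.3)  # S0, mu, sigma
seq = ql.GaussianRandomSequenceGenerator(
    ql.UniformRandomSequenceGenerator(steps, ql.UniformRandomGenerator(42)))
gen = ql.GaussianPathGenerator(process, horizon, steps, seq, False)

path = gen.next().value()
print(path.length())   # steps + 1 points in total
print(path[0])         # index 0 is the initial value S0
print(path[steps])     # index steps is the simulated value at t = horizon
```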
|
From: Luigi B. <lui...@gm...> - 2023-09-11 17:01:32
|
You talk as if Ito's lemma was something that one chooses to apply because
one likes it. It's not. Just as there is a rule for performing a change
of variable in an integral or a derivative, there is also a rule for
performing a change of variable in a stochastic differential equation, and
that's Ito's lemma. If you change variables from x to log(x) or whatever
else, you have to use it. If you don't change variables, you don't have to
use it.
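For readers following along, the change of variable in question, written out for standard GBM (nothing specific to this thread):

```latex
dS_t = \mu S_t\,dt + \sigma S_t\,dW_t
\quad\xrightarrow{\;f(S)=\log S\;}\quad
d\log S_t = \left(\mu - \tfrac{\sigma^2}{2}\right)dt + \sigma\,dW_t
```

so that \(S_t = S_0 \exp\bigl((\mu - \sigma^2/2)\,t + \sigma W_t\bigr)\) and \(\mathbb{E}[S_t] = S_0 e^{\mu t}\). Simulating \(S_t = S_0 \exp(\mu t + \sigma W_t)\) instead, i.e. dropping the \(-\sigma^2/2\) term, gives \(\mathbb{E}[S_t] = S_0 e^{(\mu + \sigma^2/2)t}\), which for \(\mu = 0\), \(\sigma = 0.3\), \(t = 1\) is \(100\,e^{0.045} \approx 104.60\): the discrepancy Ioannis points out further down the thread.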
This said: in your sample implementation, you generate paths from 0 to t
using a number of timesteps. For the sake of example, let's say t=1 year
and you use 12 timesteps, so one step per month. You calculate the price
plus or minus 1SD, where 1SD is the standard deviation of the expected
distribution at t=1. Then you do the following to calculate percentage
hits:
for (size_t j = 1; j < timeSteps; ++j)
{
const auto Sx = samplePath.at(j);
if (Sx >= m1SD && Sx <= p1SD) ++cHit;
}
that is, for every point in the path you check whether it's between S - 1SD
and S + 1SD. But that makes no sense. You can only check the point at the
end of the path, because it's the only point that belongs to the expected
distribution at t=1 year. In the case of 12 timesteps, the point at j=1
belongs to the expected distribution at t=1 month, which has a different,
much smaller SD. The point at j=2 belongs to the expected distribution at
t=2 months, which has a different SD. Of course your percentages increase
with the timesteps, because you're sampling points from all times between 0
and 1 (which belong to much narrower distributions) and compare them with
the SD at t=1 year. If you compare the points at each timestep with their
correct respective distributions, you'll find about the same percentage
(the usual 68%) falling within the respective SDs.
Hope this helps clear this up—I won't be able to give much more time to
this, if at all.
Regards,
Luigi
On Mon, Sep 11, 2023 at 6:00 PM U.Mutlu <um...@mu...> wrote:
> Ioannis Rigopoulos wrote on 09/11/23 17:28:
> > Thank you U.Mutlu.
> >
> > The histograms indicate that the numbers represent indeed lognormal
> distributions.
> >
> > The average of all numbers in file GBM_ts1_A amounts to *104.5935229*
> >
> > The average of all numbers in file GBM_ts1_B amounts to *104.6047793*
> >
> > Both data sets exhibit the correct log standard deviation, very close to
> *0.3*.
>
> So far so good, and these are the simplest things,
> but with the rest of your analysis I don't agree.
>
> What do other experts say on the following analysis of Ioannis? :
>
> > *Since you have assumed a drift of 0, the SDE represents a martingale and
> > therefore the average stock price at t=1 ought to be very close to 100.*
> >
> > *This proves that the simulated numbers cannot represent the GBM
> diffusion of
> > a stock price starting at 100 and having drift = 0.*
> >
> > Furthermore, the quantity S(0)exp(0.5*sigma^2*t) equals *104.602786*,
> which is
> > extremely close to the above averages and indicates that the Ito term
> > 0.5*sigma^2*t is embedded in your results, albeit in a wrong fashion,
> since -
> > as mentioned above - it should not affect the average of S(t).
> >
> > It seems that you have set up your code in such a way that ln(S) is
> driftless
> > and therefore S acquires a positive Ito drift of 0.5*sigma^2.
> >
> > It also seems that the number of time steps is much greater than 1,
> otherwise
> > the distribution would appear as normal.
> >
> > In any case, since I cannot run your own code and verify the results, I
> will
> > finish my feedback with this email.
> >
> > Ioannis
>
> Oh oh, those are too many incorrect assumptions & interpretations by you.
> Nevertheless, thank you for attempting to analyse it.
>
> Maybe some other experts too will analyse the data and post their findings
> here.
>
> I claim that Ito's lemma is the culprit and that it's unnecessary
> both in GBM and BSM (Black-Scholes-Merton option pricing formula).
>
>
> > On 9/11/2023 4:19 PM, U.Mutlu wrote:
> >> Ioannis Rigopoulos wrote on 09/11/23 13:43:
> >>> Could you please send us a link to a file that contains the simulated
> values
> >>> produced by your program when you use one time step?
> >>>
> >>> Since you use 1,000,000 simulations, the file should contain 1,000,000
> >>> numbers.
> >>
> >> On the following web page you can find download links to even 2 such
> >> GBM data sets I just have created and uploaded to a server:
> >>
> https://www.elitetrader.com/et/threads/simulating-stock-prices-using-gbm.375533/page-5#post-5861992
> >>
> >>
> >> "
> >> 2 GBM generated test files in CSV format (with header line; see there),
> >> with 1,000,000 generated stock prices (S0=100, IV=30, t=1y, timeSteps=1,
> >> r=0, q=0),
> >> each about 29 MB uncompressed (about 12 MB zip-compressed).
> >>
> >> This GBM generator does not use Ito's lemma.
> >>
> >> ...
> >> "
> >>
> >>
> >>> On 9/11/2023 1:33 PM, U.Mutlu wrote:
> >>>> Ioannis Rigopoulos wrote on 09/11/23 11:47:
> >>>> > By the way, this is how my histogram of S(t) (for t = 1) simulated
> values
> >>>> > looks like when I used 100,000 time steps and 100,000 simulations.
> It is
> >>>> > clearly lognormal in shape.
> >>>>
> >>>> Yes, it's lognormal. My values are lognormal as well;
> >>>> you just restate the obvious fact.
> >>>>
> >>>> > I am sure, if you do the same when time steps = 1,
> >>>> > you will get a normal distribution for S(t), which would prove
> >>>> > that time steps = 1 does not reproduce the theoretically expected
> >>>> > values for S(t).
> >>>>
> >>>> Not true, simply from the fact that the set boundary is lognormal:
> >>>> -1SD: 74.08182206817179
> >>>> +1SD: 134.9858807576003
> >>>> I.e. it's not symmetric (Gaussian, normally distributed) around the
> >>>> initial mean (100), but is lognormal (and therefore asymmetric).
> >>>>
> >>>> FYI: in my other test program (that does not use QuantLib)
> >>>> I get in such simulations exact results almost identical
> >>>> to the expected theoretical results! So, your math and argument are
> >>>> wrong, IMO.
> >>>>
> >>>> You are too much fixated on timeSteps=1. Maybe you have
> >>>> a much different understanding of what timeSteps really
> >>>> means in practice. It's _not_ something like the number
> >>>> of simulations that you simply can change and expect
> >>>> to get the same result; no, it has a clearly defined context
> >>>> and gives different results for different timeSteps.
> >>>>
> >>>> And you haven't answered the question why you think it has
> >>>> to be 68.27% for all timeSteps (which IMO is wrong).
> >>>>
> >>>> For those interested, here are more details about the above said:
> >>>>
> https://www.elitetrader.com/et/threads/simulating-stock-prices-using-gbm.375533/page-3#post-5849548
> >>>>
> >>>>
> >>>>
> >>>> And, I today did a brief summary of these ongoing discussions on GBM
> here:
> >>>>
> https://www.elitetrader.com/et/threads/simulating-stock-prices-using-gbm.375533/page-5#post-5861919
> >>>>
> >>>>
> >>>>
> >>>>
> >>>> Ioannis Rigopoulos wrote on 09/11/23 11:47:
> >>>>> Our SDE for the evolution of S is:
> >>>>>
> >>>>> dS = μSdt + σSdw
> >>>>>
> >>>>> In your code you use:
> >>>>>
> >>>>>
> >>>>> PathGenerator<RandomSequenceGenerator<MersenneBoxMuller>>
> >>>>> gbmPathGenerator(gbm, t, timeSteps, gsg, false);
> >>>>> const Path& samplePath = gbmPathGenerator.next().value;
> >>>>>
> >>>>> The PathGenerator::next().value is designed to return an array of
> numbers
> >>>>> {S(T₁) , S(T₂) , ... , S(Tᵤ)}
> >>>>>
> >>>>> where
> >>>>>
> >>>>> u := timeSteps
> >>>>>
> >>>>> Δt := t/u
> >>>>>
> >>>>> Tᵢ := iΔt for i = 1,...,u
> >>>>>
> >>>>> It follows:
> >>>>>
> >>>>> Tᵤ = t = the final time when we are interested to know the simulated
> >>>>> values of
> >>>>> S (in your test you use t = 1).
> >>>>>
> >>>>> QuantLib builds the sequence {S(T₁) , S(T₂) , ... , S(Tᵤ)}
> incrementally
> >>>>> through a total number of u steps by adding the quantity ΔS := μSΔt +
> >>>>> σSΔw on
> >>>>> the previous member of the sequence, with Δw := random number drawn
> from
> >>>>> N~(0,Δt¹ᐟ²).
> >>>>>
> >>>>> The previous member of S(T₁) is assumed the T₀ = 0 and the initial
> value
> >>>>> S(T₀)
> >>>>> = S(0) is assumed deterministic and known.
> >>>>>
> >>>>> So, starting with the given S(T₀), QuantLib calculates the S(T₁)
> using the
> >>>>> formula:
> >>>>>
> >>>>> S(T₁) := S(T₀) + μS(T₀)Δt + σS(T₀)Δw
> >>>>>
> >>>>> Then given the S(T₁) as calculated above, it proceeds to calculate
> S(T₂) as:
> >>>>>
> >>>>> S(T₂) := S(T₁) + μS(T₁)Δt + σS(T₁)Δw
> >>>>>
> >>>>> For simplicity I have used the same notation for the Δw in the
> equations
> >>>>> above, but they are different.
> >>>>>
> >>>>> After u steps, the final value S(t) = S(Tᵤ) is calculated as:
> >>>>>
> >>>>> S(t) = S(Tᵤ) := S(Tᵤ₋₁) + μS(Tᵤ₋₁)Δt + σS(Tᵤ₋₁)Δw
> >>>>>
> >>>>> All equations above hold only in an approximate sense when Δt is
> finite.
> >>>>>
> >>>>> They are accurate only in a limiting sense as Δt → 0.
> >>>>>
> >>>>> It follows that the value S(t) will be calculated correctly only if
> Δt → 0,
> >>>>> which is equivalent to u → ∞
> >>>>>
> >>>>> If you use u = 1 (i.e. one time step), you will still match the
> 68.27% rule
> >>>>> when the 1SD interval is defined in accordance with a normal
> distribution
> >>>>> because S(T₁) := S(T₀) + μS(T₀)Δt + σS(T₀)Δw corresponds to a normal
> >>>>> distribution and if you build a histogram of all simulated values
> you will
> >>>>> notice that S(T₁) is distributed normally, which is clearly not
> consistent
> >>>>> with the SDE that predicts lognormal distribution.
> >>>>>
> >>>>> The correct procedure is of course to use a very large u and define
> the 1SD
> >>>>> interval in accordance with a lognormal distribution, as I described
> in my
> >>>>> previous email.
> >>>>>
> >>>>> Then you will get convergence to the 68.27% rule.
> >>>>>
> >>>>> By the way, this is how my histogram of S(t) (for t = 1) simulated
> values
> >>>>> looks like when I used 100,000 time steps and 100,000 simulations.
> It is
> >>>>> clearly lognormal in shape. I am sure, if you do the same when time
> steps
> >>>>> = 1,
> >>>>> you will get a normal distribution for S(t), which would prove that
> time
> >>>>> steps
> >>>>> = 1 does not reproduce the theoretically expected values for S(t).
> >>>>>
> >>>>> Ioannis
> >>>>>
> >>>>> On 9/11/2023 10:19 AM, U.Mutlu wrote:
> >>>>>> Dear Ioannis Rigopoulos, I now analysed your table and
> >>>>>> must say that you seem to have a very different expectation of,
> >>>>>> and understanding of, that table.
> >>>>>>
> >>>>>> The values in your table for timeSteps > 1 can definitely
> >>>>>> not be correct, b/c why do you think it has to be 68.27 for all
> timeSteps?
> >>>>>> This is illogical b/c the more timeSteps the more the hitrate%
> >>>>>> must be, like in my table. It saturates at about 84.93%
> >>>>>>
> >>>>>> Sorry, I have to retract my initial enthusiastic judgment
> >>>>>> of your work below, as it's fundamentally flawed & wrong,
> >>>>>> and I should have been more careful when I quickly & briefly
> >>>>>> inspected your posting in the early morning here.
> >>>>>>
> >>>>>> Since your results in the table for timeSteps > 1 are wrong,
> >>>>>> then your formulas must be wrong too, so I had to stop my analysis
> right
> >>>>>> there.
> >>>>>>
> >>>>>> Please just answer why you think it has to be 68.27% for all
> timeSteps?
> >>>>>>
> >>>>>>
> >>>>>> U.Mutlu wrote on 09/11/23 08:36:
> >>>>>>> Mr. Ioannis Rigopoulos, that's a brilliant work by you. Thank you
> very
> >>>>>>> much.
> >>>>>>> I need some time to study and verify the results, but it looks very
> good
> >>>>>>> to solve the confusion and mystery I had experienced when I
> started this
> >>>>>>> research some weeks ago.
> >>>>>>> I'll try to replicate your results and will report later in detail
> here.
> >>>>>>>
> >>>>>>> Ioannis Rigopoulos wrote on 09/10/23 19:29:
> >>>>>>> > On a different topic, I do not understand your statement "/I've
> never
> >>>>>>> seen
> >>>>>>> > that Sx@-1SD and Sx@+1SD get computed with Ito's lemma
> applied/".
> >>>>>>> > What do the Sx@-1SD and Sx@+1SD mean? I suppose m -1SD and m
> +1SD,
> >>>>>>> where
> >>>>>>>
> >>>>>>> Sorry for the confusion, it just means the stock price (Sx after
> time t,
> >>>>>>> ie. S(t) or S_t) for the lower boundary at -1SD and for the upper
> >>>>>>> boundary at
> >>>>>>> +1SD,
> >>>>>>> ie. your "min of S(t)" and "max of S(t)" in your other posting.
> >>>>>>> In future I'll use a more common term like yours, sorry.
> >>>>>>>
> >>>>>>> Thx
> >>>>>>>
> >>>>>>>
> >>>>>>> Ioannis Rigopoulos wrote on 09/10/23 19:29:
> >>>>>>>> Yes, I mean mu * t - 0.5 * sigma * sigma * t
> >>>>>>>>
> >>>>>>>> Are you sure, you also adjusted the boundaries corresponding to
> plus and
> >>>>>>>> minus
> >>>>>>>> 1 SD?
> >>>>>>>>
> >>>>>>>> I ran an Excel spreadsheet by modifying a spreadsheet I had
> published
> >>>>>>>> several
> >>>>>>>> years ago at
> https://blog.deriscope.com/index.php/en/excel-value-at-risk
> >>>>>>>> that
> >>>>>>>> uses the GeometricBrownianMotionProcess , albeit with the
> >>>>>>>> PseudoRandom::rsg_type generator.
> >>>>>>>>
> >>>>>>>> In any case, I got convergence to 68.27% as I increased the
> number of
> >>>>>>>> time
> >>>>>>>> steps and keeping the number of simulations fixed to 100,000,
> provided I
> >>>>>>>> used
> >>>>>>>> the correct (Ito-adjusted) means, as below:
> >>>>>>>>
> >>>>>>>> As you see above, the "No Ito-adjusted" boundaries are 74.08 and
> 134.99,
> >>>>>>>> just
> >>>>>>>> like those you use.
> >>>>>>>>
> >>>>>>>> The Ito-adjusted boundaries are 70.82 and 129.05 produced by
> shifting the
> >>>>>>>> no-adjusted boundaries - in log terms - to the left by
> 0.5*sigma^2*t.
> >>>>>>>>
> >>>>>>>> The simulation results (percentage of values being within the min
> and max
> >>>>>>>> boundaries) are below:
> >>>>>>>>
> >>>>>>>> As you see, the Ito-adjusted percentage reaches 68.30% for
> 100,000 time
> >>>>>>>> steps,
> >>>>>>>> which is close to the expected 68.27%.
> >>>>>>>>
> >>>>>>>> The No Ito-adjusted percentage seems instead to converge to a
> different
> >>>>>>>> value
> >>>>>>>> around 67.73%
> >>>>>>>>
> >>>>>>>> On a different topic, I do not understand your statement "/I've
> never
> >>>>>>>> seen
> >>>>>>>> that Sx@-1SD and Sx@+1SD get computed with Ito's lemma applied/".
> >>>>>>>>
> >>>>>>>> What do the Sx@-1SD and Sx@+1SD mean? I suppose m -1SD and m
> +1SD, where
> >>>>>>>> m is
> >>>>>>>> the mean of a normally distributed variable. This means that one
> needs to
> >>>>>>>> calculate m in order to do the mentioned computation.
> >>>>>>>>
> >>>>>>>> If your starting point is an SDE like dS = S*mu*dt + S*sigma*dw,
> then
> >>>>>>>> ln(S(t))
> >>>>>>>> at any time t will be normally distributed with mean m = ln(S(0))
> +
> >>>>>>>> mu*t -
> >>>>>>>> 0.5*sigma^2*t , where the last term is a consequence of the Ito
> lemma.
> >>>>>>>>
> >>>>>>>> Do you have any reason to believe that the Ito lemma is somehow
> not
> >>>>>>>> valid so
> >>>>>>>> that m = ln(S(0)) + mu*t?
> >>>>>>>>
> >>>>>>>> Ioannis
> >>>>>>>>
> >>>>>>>> On 9/10/2023 3:00 PM, U.Mutlu wrote:
> >>>>>>>>
> >>>>>>>>> Ioannis Rigopoulos wrote on 09/10/23 13:06:
> >>>>>>>>> > new GeometricBrownianMotionProcess(S, mu - 0.5*sigma^2*t,
> sigma)
> >>>>>>>>>
> >>>>>>>>> Thank you for your analysis.
> >>>>>>>>> I guess you mean "mu * t - 0.5 * sigma * sigma * t",
> >>>>>>>>> though in our example with t=1 it doesn't make any difference.
> >>>>>>>>> It gives for timeSteps=1 this result:
> >>>>>>>>>
> >>>>>>>>> $ ./Testing_GBM_of_QuantLib.exe 1
> >>>>>>>>> timeSteps=1 nRuns=1000000 seed=1694350548 S=100.000000
> mu=0.000000
> >>>>>>>>> sigma=0.300000 t=1.000000 : nGenAll=1000000 USE_ITO=1 :
> -1SD=70.822035
> >>>>>>>>> +1SD=129.046162
> >>>>>>>>> cHit=663067/1000000(66.307%)
> >>>>>>>>>
> >>>>>>>>> This is far from the expected value of 68.2689% for timeSteps=1.
> >>>>>>>>> So, there must be a bug somewhere. IMO it's our suspect friend
> Ito :-)
> >>>>>>>>>
> >>>>>>>>> Cf. also the following showing similar results for GBM,
> >>>>>>>>> which then was compared to the standard lognormal calculation:
> >>>>>>>>>
> https://www.elitetrader.com/et/threads/simulating-stock-prices-using-gbm.375533/page-3#post-5849548
> >>>>>>>>>
> >>>>>>>>>
> >>>>>>>>>
> >>>>>>>>>
> >>>>>>>>>
> >>>>>>>>>
> >>>>>>>>> And, honestly, I've never seen that Sx@-1SD and Sx@+1SD get
> computed
> >>>>>>>>> with
> >>>>>>>>> Ito's lemma applied.
> >>>>>>>>>
> >>>>>>>>>
> >>>>>>>>> Ioannis Rigopoulos wrote on 09/10/23 13:06:
> >>>>>>>>>> Thank you for testing the BoxMullerGaussianRng code, which - as
> I
> >>>>>>>>>> wrote -
> >>>>>>>>>> does
> >>>>>>>>>> not seem to be used in other areas of the standard part of
> QuantLib.
> >>>>>>>>>>
> >>>>>>>>>> Your numerical results below ...
> >>>>>>>>>>
> >>>>>>>>>> indicate that the simulated values for ln(S(t)) produced with
> >>>>>>>>>> timeSteps = 1
> >>>>>>>>>> are very likely normally distributed with mean (I use your
> notation)
> >>>>>>>>>> ln(S(0))
> >>>>>>>>>> + mu*t and standard deviation sigma * sqrt(t)
> >>>>>>>>>>
> >>>>>>>>>> This result _*is consistent*_ with:
> >>>>>>>>>>
> >>>>>>>>>> a) the SDE: dS(t) = S(t)*mu*dt + S(t)*sigma*dw
> >>>>>>>>>>
> >>>>>>>>>> _*and*_
> >>>>>>>>>>
> >>>>>>>>>> b) the fact that in QuantLib the PathGenerator.next().value
> returns the
> >>>>>>>>>> result
> >>>>>>>>>> of the SDE expression _*without *_applying the ITO correction
> >>>>>>>>>> associated
> >>>>>>>>>> with
> >>>>>>>>>> the fact that dt is not infinitesimal.
> >>>>>>>>>>
> >>>>>>>>>> The b) is also responsible for the lack of convergence in your
> output
> >>>>>>>>>> towards
> >>>>>>>>>> the theoretical target of 68.27%
> >>>>>>>>>>
> >>>>>>>>>> You would get the correct convergence if you modified your code
> by
> >>>>>>>>>> using
> >>>>>>>>>> the
> >>>>>>>>>> expressions below:
> >>>>>>>>>>
> >>>>>>>>>> const double m1SD = S * exp(mu * t - 0.5*sigma^2*t + sigma * sqrt(t) * -1.0); // Sx at -1SD
> >>>>>>>>>> const double p1SD = S * exp(mu * t - 0.5*sigma^2*t + sigma * sqrt(t) *  1.0); // Sx at +1SD
> >>>>>>>>>>
> >>>>>>>>>> new GeometricBrownianMotionProcess(S, mu - 0.5*sigma^2*t, sigma)
> >>>>>>>>>> (as Peter also pointed out)
> >>>>>>>>>>
> >>>>>>>>>> Again, due to b) one can produce the correct simulated values
> of a GBM
> >>>>>>>>>> diffused quantity (such as a stock price in the GBM model) by
> using N
> >>>>>>>>>> time
> >>>>>>>>>> steps, with N very large. Using N = 1 (like in your example),
> the
> >>>>>>>>>> simulated
> >>>>>>>>>> values will still be lognormally distributed (whence your good
> result
> >>>>>>>>>> with
> >>>>>>>>>> N =
> >>>>>>>>>> 1), but will be centered at a wrong mean and thus will _*fail
> *_to
> >>>>>>>>>> represent
> >>>>>>>>>> the correct values expected by the GBM SDE.
> >>>>>>>>>>
> >>>>>>>>>> Ioannis
> >>>>>>>>>>
> >>>>>>>>>> On 9/10/2023 11:29 AM, U.Mutlu wrote:
> >>>>>>>>>>> As said in other posting here, after fixing the test program
> >>>>>>>>>>> by skipping the first item (the initial price) in the
> >>>>>>>>>>> generated sample path, BoxMullerGaussianRng now passes the
> said test.
> >>>>>>>>>>>
> >>>>>>>>>>> The bugfixed test code and the new test results can be found
> here:
> >>>>>>>>>>>
> https://www.elitetrader.com/et/threads/simulating-stock-prices-using-gbm.375533/page-4#post-5861666
> >>>>>>>>>>>
> >>>>>>>>>>>
> >>>>>>>>>>>
> >>>>>>>>>>>
> >>>>>>>>>>>
> >>>>>>>>>>>
> >>>>>>>>>>>
> >>>>>>>>>>> That important fact that the generated sample path contains
> >>>>>>>>>>> not timeSteps elements but 1 + timeSteps elements needs
> >>>>>>>>>>> to be documented in the library doc.
> >>>>>>>>>>> For example on this API doc page one normally would expect
> >>>>>>>>>>> to find such an important information, but it's missing:
> >>>>>>>>>>>
> https://www.quantlib.org/reference/class_quant_lib_1_1_path_generator.html
> >>>>>>>>>>>
> >>>>>>>>>>>
> >>>>>>>>>>>
> >>>>>>>>>>> If you or someone else can change/extend the test program by
> using
> >>>>>>>>>>> the suggested alternative(s) to BoxMullerGaussianRng, I would
> be happy
> >>>>>>>>>>> to hear about it. Thx.
> >>>>>>>>>>>
> >>>>>>>>>>>
> >>>>>>>>>>> Ioannis Rigopoulos wrote on 09/09/23 16:28:
> >>>>>>>>>>>> If you search within the QuantLib code for
> BoxMullerGaussianRng, you
> >>>>>>>>>>>> will
> >>>>>>>>>>>> see
> >>>>>>>>>>>> it is used only in the experimental folder. It is therefore
> not
> >>>>>>>>>>>> surprising if
> >>>>>>>>>>>> it doesn't produce the expected results.
> >>>>>>>>>>>>
> >>>>>>>>>>>> I use myself the MultiPathGenerator with
> PseudoRandom::rsg_type,
> >>>>>>>>>>>> which is
> >>>>>>>>>>>> used
> >>>>>>>>>>>> extensively in other areas of QuantLib.
> >>>>>>>>>>>>
> >>>>>>>>>>>> This type expands to InverseCumulativeRsg<
> RandomSequenceGenerator<
> >>>>>>>>>>>> MersenneTwisterUniformRng > , InverseCumulativeNormal > and
> gives me
> >>>>>>>>>>>> good
> >>>>>>>>>>>> results.
> >>>>>>>>>>>>
> >>>>>>>>>>>> Ioannis Rigopoulos, founder of deriscope.com
> >>
> >>
> >
> > _______________________________________________
> > QuantLib-users mailing list
> > Qua...@li...
> > https://lists.sourceforge.net/lists/listinfo/quantlib-users
> >
>
>
>
>
|
|
From: U.Mutlu <um...@mu...> - 2023-09-11 15:57:05
|
Ioannis Rigopoulos wrote on 09/11/23 17:28:
> Thank you U.Mutlu.
>
> The histograms indicate that the numbers represent indeed lognormal
> distributions.
>
> The average of all numbers in file GBM_ts1_A amounts to *104.5935229*
>
> The average of all numbers in file GBM_ts1_B amounts to *104.6047793*
>
> Both data sets exhibit the correct log standard deviation, very close
> to *0.3*.

So far so good, and these are the simplest things, but with the rest of
your analysis I don't agree.

What do other experts say on the following analysis of Ioannis? :

> *Since you have assumed a drift of 0, the SDE represents a martingale and
> therefore the average stock price at t=1 ought to be very close to 100.*
>
> *This proves that the simulated numbers cannot represent the GBM diffusion
> of a stock price starting at 100 and having drift = 0.*
>
> Furthermore, the quantity S(0)exp(0.5*sigma^2*t) equals *104.602786*,
> which is extremely close to the above averages and indicates that the Ito
> term 0.5*sigma^2*t is embedded in your results, albeit in a wrong fashion,
> since - as mentioned above - it should not affect the average of S(t).
>
> It seems that you have set up your code in such a way that ln(S) is
> driftless and therefore S acquires a positive Ito drift of 0.5*sigma^2.
>
> It also seems that the number of time steps is much greater than 1,
> otherwise the distribution would appear as normal.
>
> In any case, since I cannot run your own code and verify the results, I
> will finish my feedback with this email.
>
> Ioannis

Oh oh, those are too many incorrect assumptions & interpretations by you.
Nevertheless, thank you for attempting to analyse it.

Maybe some other experts too will analyse the data and post their findings
here.

I claim that Ito's lemma is the culprit and that it's unnecessary
both in GBM and BSM (Black-Scholes-Merton option pricing formula).
>>>>>>>>>> >>>>>>>>>> The b) is also responsible for the lack of convergence in your output >>>>>>>>>> towards >>>>>>>>>> the theoretical target of 68.27% >>>>>>>>>> >>>>>>>>>> You would get the correct convergence if you modified your code by >>>>>>>>>> using >>>>>>>>>> the >>>>>>>>>> expressions below: >>>>>>>>>> >>>>>>>>>> const double m1SD = S * exp(mu * t - 0.5*sigma^2*t + sigma * >>>>>>>>>> sqrt(t) * >>>>>>>>>> -1.0); // Sx at -1SD >>>>>>>>>> const double p1SD = S * exp(mu * t - 0.5*sigma^2*t + sigma * >>>>>>>>>> sqrt(t) * >>>>>>>>>> 1.0); // Sx at +1SD >>>>>>>>>> >>>>>>>>>> new GeometricBrownianMotionProcess(S, mu - 0.5*sigma^2*t, >>>>>>>>>> sigma) (as >>>>>>>>>> Peter also pointed out) >>>>>>>>>> >>>>>>>>>> Again, due to b) one can produce the correct simulated values of a GBM >>>>>>>>>> diffused quantity (such as a stock price in the GBM model) by using N >>>>>>>>>> time >>>>>>>>>> steps, with N very large. Using N = 1 (like in your example), the >>>>>>>>>> simulated >>>>>>>>>> values will still be lognormally distributed (whence your good result >>>>>>>>>> with >>>>>>>>>> N = >>>>>>>>>> 1), but will be centered at a wrong mean and thus will _*fail *_to >>>>>>>>>> represent >>>>>>>>>> the correct values expected by the GBM SDE. >>>>>>>>>> >>>>>>>>>> Ioannis >>>>>>>>>> >>>>>>>>>> On 9/10/2023 11:29 AM, U.Mutlu wrote: >>>>>>>>>>> As said in other posting here, after fixing the test program >>>>>>>>>>> by skipping the first item (the initial price) in the >>>>>>>>>>> generated sample path, BoxMullerGaussianRng now passes the said test. 
>>>>>>>>>>> >>>>>>>>>>> The bugfixed test code and the new test results can be found here: >>>>>>>>>>> https://www.elitetrader.com/et/threads/simulating-stock-prices-using-gbm.375533/page-4#post-5861666 >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> That important fact that the generated sample path contains >>>>>>>>>>> not timeSteps elements but 1 + timeSteps elements needs >>>>>>>>>>> to be documented in the library doc. >>>>>>>>>>> For example on this API doc page one normally would expect >>>>>>>>>>> to find such an important information, but it's missing: >>>>>>>>>>> https://www.quantlib.org/reference/class_quant_lib_1_1_path_generator.html >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> If you or someone else can change/extend the test program by using >>>>>>>>>>> the suggested alternative(s) to BoxMullerGaussianRng, I would be happy >>>>>>>>>>> to hear about it. Thx. >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> Ioannis Rigopoulos wrote on 09/09/23 16:28: >>>>>>>>>>>> If you search within the QuantLib code for BoxMullerGaussianRng, you >>>>>>>>>>>> will >>>>>>>>>>>> see >>>>>>>>>>>> it is used only in the experimental folder. It is therefore not >>>>>>>>>>>> surprising if >>>>>>>>>>>> it doesn't produce the expected results. >>>>>>>>>>>> >>>>>>>>>>>> I use myself the MultiPathGenerator with PseudoRandom::rsg_type, >>>>>>>>>>>> which is >>>>>>>>>>>> used >>>>>>>>>>>> extensively in other areas of QuantLib. >>>>>>>>>>>> >>>>>>>>>>>> This type expands to InverseCumulativeRsg< RandomSequenceGenerator< >>>>>>>>>>>> MersenneTwisterUniformRng > , InverseCumulativeNormal > and gives me >>>>>>>>>>>> good >>>>>>>>>>>> results. 
>>>>>>>>>>>> >>>>>>>>>>>> Ioannis Rigopoulos, founder of deriscope.com >> >> > > _______________________________________________ > QuantLib-users mailing list > Qua...@li... > https://lists.sourceforge.net/lists/listinfo/quantlib-users > |
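[Editor's note] The ±1SD boundary formulas quoted in the message above (the `m1SD`/`p1SD` expressions with the 0.5*sigma^2*t term) can be sanity-checked independently of QuantLib. Below is a minimal sketch in plain Python (an illustration under the thread's parameters S0=100, mu=0, sigma=0.3, t=1 — it is not the C++ test program discussed here): it samples S(t) once from the exact GBM solution and once from a single Euler step on S, and counts hits inside the Ito-adjusted boundaries.

```python
import math
import random

S0, MU, SIGMA, T, N = 100.0, 0.0, 0.3, 1.0, 200_000

# Ito-adjusted 1SD band: ln S(t) ~ N(ln S0 + (mu - sigma^2/2) t, sigma^2 t)
m = (MU - 0.5 * SIGMA**2) * T
lo = S0 * math.exp(m - SIGMA * math.sqrt(T))   # ~70.82, as in the thread
hi = S0 * math.exp(m + SIGMA * math.sqrt(T))   # ~129.05, as in the thread

def hit_rate(sampler, seed=1):
    # Fraction of simulated S(t) values falling strictly inside (lo, hi)
    rng = random.Random(seed)
    hits = sum(1 for _ in range(N) if lo < sampler(rng.gauss(0.0, 1.0)) < hi)
    return hits / N

# Exact GBM solution: S(t) = S0 * exp((mu - sigma^2/2) t + sigma sqrt(t) Z)
exact = hit_rate(lambda z: S0 * math.exp(m + SIGMA * math.sqrt(T) * z))

# One Euler step on the SDE dS = S mu dt + S sigma dw (i.e. timeSteps = 1)
euler1 = hit_rate(lambda z: S0 * (1.0 + MU * T + SIGMA * math.sqrt(T) * z))

print(f"exact sampling: {exact:.4f}   one-step Euler: {euler1:.4f}")
```

The exact sampler lands near 0.683, matching the 68.27% rule; a single Euler step lands around 0.67 against the same Ito-adjusted band, which is consistent with the ~66-67% figures reported in the thread for timeSteps=1.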
|
From: Ioannis R. <qua...@de...> - 2023-09-11 15:29:12
|
Thank you U.Mutlu. The histograms indicate that the numbers indeed represent lognormal distributions. The average of all numbers in file GBM_ts1_A amounts to *104.5935229* The average of all numbers in file GBM_ts1_B amounts to *104.6047793* Both data sets exhibit the correct log standard deviation, very close to *0.3*. *Since you have assumed a drift of 0, the SDE represents a martingale and therefore the average stock price at t=1 ought to be very close to 100.* *This proves that the simulated numbers cannot represent the GBM diffusion of a stock price starting at 100 and having drift = 0.* Furthermore, the quantity S(0)exp(0.5*sigma^2*t) equals *104.602786*, which is extremely close to the above averages and indicates that the Ito term 0.5*sigma^2*t is embedded in your results, albeit in a wrong fashion, since - as mentioned above - it should not affect the average of S(t). It seems that you have set up your code in such a way that ln(S) is driftless and therefore S acquires a positive Ito drift of 0.5*sigma^2. It also seems that the number of time steps is much greater than 1, otherwise the distribution would appear normal. In any case, since I cannot run your own code and verify the results, I will finish my feedback with this email. Ioannis On 9/11/2023 4:19 PM, U.Mutlu wrote: > Ioannis Rigopoulos wrote on 09/11/23 13:43: >> Could you please send us a link to a file that contains the simulated >> values >> produced by your program when you use one time step? >> >> Since you use 1,000,000 simulations, the file should contain >> 1,000,000 numbers. 
> > On the following web page you can find download links to even 2 such > GBM data sets I just have created and uploaded to a server: > https://www.elitetrader.com/et/threads/simulating-stock-prices-using-gbm.375533/page-5#post-5861992 > > > " > 2 GBM generated test files in CSV format (with header line; see there), > with 1,000,000 generated stock prices (S0=100, IV=30, t=1y, > timeSteps=1, r=0, q=0), > each about 29 MB uncompressed (about 12 MB zip-compressed). > > This GBM generator does not use Ito's lemma. > > ... > " > > >> On 9/11/2023 1:33 PM, U.Mutlu wrote: >>> Ioannis Rigopoulos wrote on 09/11/23 11:47: >>> > By the way, this is how my histogram of S(t) (for t = 1) simulated >>> values >>> > looks like when I used 100,000 time steps and 100,000 simulations. >>> It is >>> > clearly lognormal in shape. >>> >>> Yes, it's lognormal. My values are lognormal as well; >>> you just restate the obvious fact. >>> >>> > I am sure, if you do the same when time steps = 1, >>> > you will get a normal distribution for S(t), which would prove >>> > that time steps = 1 does not reproduce the theoretically expected >>> > values for S(t). >>> >>> Not true, simply from the fact that the set boundary is lognormal: >>> -1SD: 74.08182206817179 >>> +1SD: 134.9858807576003 >>> Ie. it's not symmetric (Gaussian, normally distributed) around the >>> initial mean (100), but is lognormal (and therefore asymmetric). >>> >>> FYI: in my other test program (that does not use QuantLib) >>> I get in such simulations exact results almost identical >>> to the expected theoretical results! So, your math and argument are >>> wrong, IMO. >>> >>> You are too much fixated on timeSteps=1. Maybe you have >>> a much different understanding of what timeSteps really >>> means in practice. 
It's _not_ something like the number >>> of simulations that you simply can change and expect >>> to get the same result; no, it has a clearly defined context >>> and gives different results for different timeSteps. >>> >>> And you haven't answered the question why you think it has >>> to be 68.27% for all timeSteps (which IMO is wrong). >>> >>> For those interested, here are more details about the above said: >>> https://www.elitetrader.com/et/threads/simulating-stock-prices-using-gbm.375533/page-3#post-5849548 >>> >>> >>> >>> And, I today did a brief summary of these ongoing discussions on GBM >>> here: >>> https://www.elitetrader.com/et/threads/simulating-stock-prices-using-gbm.375533/page-5#post-5861919 >>> >>> >>> >>> >>> Ioannis Rigopoulos wrote on 09/11/23 11:47: >>>> Our SDE for the evolution of S is: >>>> >>>> dS = μSdt + σSdw >>>> >>>> In your code you use: >>>> >>>> >>>> PathGenerator<RandomSequenceGenerator<MersenneBoxMuller>> >>>> gbmPathGenerator(gbm, t, timeSteps, gsg, false); >>>> const Path& samplePath = gbmPathGenerator.next().value; >>>> >>>> The PathGenerator::next().value is designed to return an array of >>>> numbers >>>> {S(T₁) , S(T₂) , ... , S(Tᵤ)} >>>> >>>> where >>>> >>>> u := timeSteps >>>> >>>> Δt := t/u >>>> >>>> Tᵢ := iΔt for i = 1,...,u >>>> >>>> It follows: >>>> >>>> Tᵤ = t = the final time when we are interested to know the >>>> simulated values of >>>> S (in your test you use t = 1). >>>> >>>> QuantLib builds the sequence {S(T₁) , S(T₂) , ... , S(Tᵤ)} >>>> incrementally >>>> through a total number of u steps by adding the quantity ΔS := μSΔt >>>> + σSΔw on >>>> the previous member of the sequence, with Δw := random number drawn >>>> from >>>> N~(0,Δt¹ᐟ²). >>>> >>>> The previous member of S(T₁) is assumed the T₀ = 0 and the initial >>>> value S(T₀) >>>> = S(0) is assumed deterministic and known. 
>>>> >>>> So, starting with the given S(T₀), QuantLib calculates the S(T₁) >>>> using the >>>> formula: >>>> >>>> S(T₁) := S(T₀) + μS(T₀)Δt + σS(T₀)Δw >>>> >>>> Then given the S(T₁) as calculated above, it proceeds to calculate >>>> S(T₂) as: >>>> >>>> S(T₂) := S(T₁) + μS(T₁)Δt + σS(T₁)Δw >>>> >>>> For simplicity I have used the same notation for the Δw in the >>>> equations >>>> above, but they are different. >>>> >>>> After u steps, the final value S(t) = S(Tᵤ) is calculated as: >>>> >>>> S(t) = S(Tᵤ) := S(Tᵤ₋₁) + μS(Tᵤ₋₁)Δt + σS(Tᵤ₋₁)Δw >>>> >>>> All equations above hold only in an approximate sense when Δt is >>>> finite. >>>> >>>> They are accurate only in a limiting sense as Δt → 0. >>>> >>>> It follows that the value S(t) will be calculated correctly only if >>>> Δt → 0, >>>> which is equivalent to u → ∞ >>>> >>>> If you use u = 1 (i.e. one time step), you will still match the >>>> 68.27% rule >>>> when the 1SD interval is defined in accordance with a normal >>>> distribution >>>> because S(T₁) := S(T₀) + μS(T₀)Δt + σS(T₀)Δw corresponds to a normal >>>> distribution and if you build a histogram of all simulated values >>>> you will >>>> notice that S(T₁) is distributed normally, which is clearly not >>>> consistent >>>> with the SDE that predicts lognormal distribution. >>>> >>>> The correct procedure is of course to use a very large u and define >>>> the 1SD >>>> interval in accordance with a lognormal distribution, as I >>>> described in my >>>> previous email. >>>> >>>> Then you will get convergence to the 68.27% rule. >>>> >>>> By the way, this is how my histogram of S(t) (for t = 1) simulated >>>> values >>>> looks like when I used 100,000 time steps and 100,000 simulations. >>>> It is >>>> clearly lognormal in shape. I am sure, if you do the same when time >>>> steps = 1, >>>> you will get a normal distribution for S(t), which would prove that >>>> time steps >>>> = 1 does not reproduce the theoretically expected values for S(t). 
>>>> >>>> Ioannis >>>> >>>> On 9/11/2023 10:19 AM, U.Mutlu wrote: >>>>> Dear Ioannis Rigopoulos, I now analysed your table and >>>>> must say that you seem to have a much different expectation to, >>>>> and understanding of that table. >>>>> >>>>> The values in your table for timeSteps > 1 can definitely >>>>> not be correct, b/c why do you think it has to be 68.27 for all >>>>> timeSteps? >>>>> This is illogical b/c the more timeSteps the more the hitrate% >>>>> must be, like in my table. It saturates at about 84.93% >>>>> >>>>> Sorry, I have to retract my initial enthusiastic judment >>>>> of your work below, as it's fundamentally flawed & wrong, >>>>> and I should have been more careful when I quickly & briefly >>>>> inspected your posting in the early morning here. >>>>> >>>>> Since your results in the table for timeSteps > 1 are wrong, >>>>> then your formulas must be wrong too, so I had to stop my analysis >>>>> right >>>>> there. >>>>> >>>>> Please just answer why you think it has to be 68.27% for all >>>>> timeSteps? >>>>> >>>>> >>>>> U.Mutlu wrote on 09/11/23 08:36: >>>>>> Mr. Ioannis Rigopoulos, that's a brilliant work by you. Thank you >>>>>> very much. >>>>>> I need some time to study and verify the reults, but it looks >>>>>> very good >>>>>> to solve the confusion and mystery I had experienced when I >>>>>> started this >>>>>> research some weeks ago. >>>>>> I'll try to replicate your results and will report later in >>>>>> detail here. >>>>>> >>>>>> Ioannis Rigopoulos wrote on 09/10/23 19:29: >>>>>> > On a different topic, I do not understand your statement >>>>>> "/I've never >>>>>> seen >>>>>> > that Sx@-1SD and Sx@+1SD get computed with Ito's lemma applied/". >>>>>> > What do the Sx@-1SD and Sx@+1SD mean? I suppose m -1SD and m >>>>>> +1SD, where >>>>>> >>>>>> Sorry for the consfusion, it just means the stock price (Sx after >>>>>> time t, >>>>>> ie. 
S(t) or S_t) for the lower boundary at -1SD and for the upper >>>>>> boundary at >>>>>> +1SD, >>>>>> ie. your "min of S(t)" and "max of S(t)" in your other posting. >>>>>> In future I'll use a more common term like yours, sorry. >>>>>> >>>>>> Thx >>>>>> >>>>>> >>>>>> Ioannis Rigopoulos wrote on 09/10/23 19:29: >>>>>>> Yes, I mean mu * t - 0.5 * sigma * sigma * t >>>>>>> >>>>>>> Are you sure, you also adjusted the boundaries corresponding to >>>>>>> plus and >>>>>>> minus >>>>>>> 1 SD? >>>>>>> >>>>>>> I ran an Excel spreadsheet by modifying a spreadsheet I had >>>>>>> published >>>>>>> several >>>>>>> years ago at >>>>>>> https://blog.deriscope.com/index.php/en/excel-value-at-risk >>>>>>> that >>>>>>> uses the GeometricBrownianMotionProcess , albeit with the >>>>>>> PseudoRandom::rsg_type generator. >>>>>>> >>>>>>> At any case, I got convergence to 68.27% as I increased the >>>>>>> number of time >>>>>>> steps and keeping the number of simulations fixed to 100,000, >>>>>>> provided I >>>>>>> used >>>>>>> the correct (Ito-adjusted) means, as below: >>>>>>> >>>>>>> As you see above, the "No Ito-adjusted" boundaries are 74.08 and >>>>>>> 134.99, >>>>>>> just >>>>>>> like those you use. >>>>>>> >>>>>>> The Ito-adjusted boundaries are 70.82 and 129.05 produced by >>>>>>> shifting the >>>>>>> no-adjusted boundaries - in log terms - to the left by >>>>>>> 0.5*sigma^2*t. >>>>>>> >>>>>>> The simulation results (percentage of values being within the >>>>>>> min and max >>>>>>> boundaries) are below: >>>>>>> >>>>>>> As you see, the Ito-adjusted percentage reaches 68.30% for >>>>>>> 100,000 time >>>>>>> steps, >>>>>>> which is close to the expected 68.27%. >>>>>>> >>>>>>> The No Ito-adjusted percentage seems instead to converge to a >>>>>>> different >>>>>>> value >>>>>>> around 67.73% >>>>>>> >>>>>>> On a different topic, I do not understand your statement "/I've >>>>>>> never seen >>>>>>> that Sx@-1SD and Sx@+1SD get computed with Ito's lemma applied/". 
>>>>>>> >>>>>>> What do the Sx@-1SD and Sx@+1SD mean? I suppose m -1SD and m >>>>>>> +1SD, where >>>>>>> m is >>>>>>> the mean of a normally distributed variable. This means that one >>>>>>> needs to >>>>>>> calculate m in order to do the mentioned computation. >>>>>>> >>>>>>> If your starting point is an SDE like dS = S*mu*dt + S*sigma*dw, >>>>>>> then >>>>>>> ln(S(t)) >>>>>>> at any time t will be normally distributed with mean m = >>>>>>> ln(S(0)) + mu*t - >>>>>>> 0.5*sigma^2*t , where the last term is a consequence of the Ito >>>>>>> lemma. >>>>>>> >>>>>>> Do you have any reason to believe that the Ito lemma is somehow not >>>>>>> valid so >>>>>>> that m = ln(S(0)) + mu*t? >>>>>>> >>>>>>> Ioannis >>>>>>> >>>>>>> On 9/10/2023 3:00 PM, U.Mutlu wrote: >>>>>>> >>>>>>>> Ioannis Rigopoulos wrote on 09/10/23 13:06: >>>>>>>> > new GeometricBrownianMotionProcess(S, mu - 0.5*sigma^2*t, sigma) >>>>>>>> >>>>>>>> Thank you for your analysis. >>>>>>>> I guess you mean "mu * t - 0.5 * sigma * sigma * t", >>>>>>>> though in our example with t=1 it doesn't make any difference. >>>>>>>> It gives for timeSteps=1 this result: >>>>>>>> >>>>>>>> $ ./Testing_GBM_of_QuantLib.exe 1 >>>>>>>> timeSteps=1 nRuns=1000000 seed=1694350548 S=100.000000 mu=0.000000 >>>>>>>> sigma=0.300000 t=1.000000 : nGenAll=1000000 USE_ITO=1 : >>>>>>>> -1SD=70.822035 >>>>>>>> +1SD=129.046162 >>>>>>>> cHit=663067/1000000(66.307%) >>>>>>>> >>>>>>>> This is far from the expected value of 68.2689% for timeStep=1. >>>>>>>> So, there must be a bug somewhere. IMO it's our suspect friend >>>>>>>> Ito :-) >>>>>>>> >>>>>>>> Cf. 
also the following showing similar results for GBM, >>>>>>>> which then was compared to the standard lognormal calculation: >>>>>>>> https://www.elitetrader.com/et/threads/simulating-stock-prices-using-gbm.375533/page-3#post-5849548 >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> And, honestly, I've never seen that Sx@-1SD and Sx@+1SD get >>>>>>>> computed with >>>>>>>> Ito's lemma applied. >>>>>>>> >>>>>>>> >>>>>>>> Ioannis Rigopoulos wrote on 09/10/23 13:06: >>>>>>>>> Thank you for testing the BoxMullerGaussianRng code, which - >>>>>>>>> as I wrote - >>>>>>>>> does >>>>>>>>> not seem to be used in other areas of the standard part of >>>>>>>>> QuantLib. >>>>>>>>> >>>>>>>>> Your numerical results below ... >>>>>>>>> >>>>>>>>> indicate that the simulated values for ln(S(t)) produced with >>>>>>>>> timeSteps = 1 >>>>>>>>> are very likely normally distributed with mean (I use your >>>>>>>>> notation) >>>>>>>>> ln(S(0)) >>>>>>>>> + mu*t and standard deviation sigma * sqrt(t) >>>>>>>>> >>>>>>>>> This result _*is consistent*_ with: >>>>>>>>> >>>>>>>>> a) the SDE: dS(t) = S(t)*mu*dt + S(t)*sigma*dw >>>>>>>>> >>>>>>>>> _*and*_ >>>>>>>>> >>>>>>>>> b) the fact that in QuantLib the PathGenerator.next().value >>>>>>>>> returns the >>>>>>>>> result >>>>>>>>> of the SDE expression _*without *_applying the ITO correction >>>>>>>>> associated >>>>>>>>> with >>>>>>>>> the fact that dt is not infinitesimal. 
>>>>>>>>> >>>>>>>>> The b) is also responsible for the lack of convergence in your >>>>>>>>> output >>>>>>>>> towards >>>>>>>>> the theoretical target of 68.27% >>>>>>>>> >>>>>>>>> You would get the correct convergence if you modified your >>>>>>>>> code by using >>>>>>>>> the >>>>>>>>> expressions below: >>>>>>>>> >>>>>>>>> const double m1SD = S * exp(mu * t - 0.5*sigma^2*t + sigma * >>>>>>>>> sqrt(t) * >>>>>>>>> -1.0); // Sx at -1SD >>>>>>>>> const double p1SD = S * exp(mu * t - 0.5*sigma^2*t + sigma * >>>>>>>>> sqrt(t) * >>>>>>>>> 1.0); // Sx at +1SD >>>>>>>>> >>>>>>>>> new GeometricBrownianMotionProcess(S, mu - 0.5*sigma^2*t, >>>>>>>>> sigma) (as >>>>>>>>> Peter also pointed out) >>>>>>>>> >>>>>>>>> Again, due to b) one can produce the correct simulated values >>>>>>>>> of a GBM >>>>>>>>> diffused quantity (such as a stock price in the GBM model) by >>>>>>>>> using N >>>>>>>>> time >>>>>>>>> steps, with N very large. Using N = 1 (like in your example), the >>>>>>>>> simulated >>>>>>>>> values will still be lognormally distributed (whence your good >>>>>>>>> result >>>>>>>>> with >>>>>>>>> N = >>>>>>>>> 1), but will be centered at a wrong mean and thus will _*fail >>>>>>>>> *_to >>>>>>>>> represent >>>>>>>>> the correct values expected by the GBM SDE. >>>>>>>>> >>>>>>>>> Ioannis >>>>>>>>> >>>>>>>>> On 9/10/2023 11:29 AM, U.Mutlu wrote: >>>>>>>>>> As said in other posting here, after fixing the test program >>>>>>>>>> by skipping the first item (the initial price) in the >>>>>>>>>> generated sample path, BoxMullerGaussianRng now passes the >>>>>>>>>> said test. 
>>>>>>>>>> >>>>>>>>>> The bugfixed test code and the new test results can be found >>>>>>>>>> here: >>>>>>>>>> https://www.elitetrader.com/et/threads/simulating-stock-prices-using-gbm.375533/page-4#post-5861666 >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> That important fact that the generated sample path contains >>>>>>>>>> not timeSteps elements but 1 + timeSteps elements needs >>>>>>>>>> to be documented in the library doc. >>>>>>>>>> For example on this API doc page one normally would expect >>>>>>>>>> to find such an important information, but it's missing: >>>>>>>>>> https://www.quantlib.org/reference/class_quant_lib_1_1_path_generator.html >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> If you or someone else can change/extend the test program by >>>>>>>>>> using >>>>>>>>>> the suggested alternative(s) to BoxMullerGaussianRng, I would >>>>>>>>>> be happy >>>>>>>>>> to hear about it. Thx. >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> Ioannis Rigopoulos wrote on 09/09/23 16:28: >>>>>>>>>>> If you search within the QuantLib code for >>>>>>>>>>> BoxMullerGaussianRng, you >>>>>>>>>>> will >>>>>>>>>>> see >>>>>>>>>>> it is used only in the experimental folder. It is therefore not >>>>>>>>>>> surprising if >>>>>>>>>>> it doesn't produce the expected results. >>>>>>>>>>> >>>>>>>>>>> I use myself the MultiPathGenerator with >>>>>>>>>>> PseudoRandom::rsg_type, >>>>>>>>>>> which is >>>>>>>>>>> used >>>>>>>>>>> extensively in other areas of QuantLib. >>>>>>>>>>> >>>>>>>>>>> This type expands to InverseCumulativeRsg< >>>>>>>>>>> RandomSequenceGenerator< >>>>>>>>>>> MersenneTwisterUniformRng > , InverseCumulativeNormal > and >>>>>>>>>>> gives me >>>>>>>>>>> good >>>>>>>>>>> results. >>>>>>>>>>> >>>>>>>>>>> Ioannis Rigopoulos, founder of deriscope.com > > |
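[Editor's note] The martingale argument in the message above (zero drift implies E[S(t)] = S(0) = 100, whereas a driftless ln(S) pushes the mean of S up to S(0)*exp(0.5*sigma^2*t) ≈ 104.60) is easy to reproduce numerically. A small Python sketch (illustrative only, using the thread's parameters; it does not read the CSV files discussed above):

```python
import math
import random

S0, SIGMA, T, N = 100.0, 0.3, 1.0, 400_000
rng = random.Random(7)
zs = [rng.gauss(0.0, 1.0) for _ in range(N)]

# Correct driftless GBM: ln S(t) has mean ln S0 - 0.5 sigma^2 t,
# so E[S(t)] = S0 (martingale property).
mean_gbm = sum(S0 * math.exp(-0.5 * SIGMA**2 * T + SIGMA * math.sqrt(T) * z)
               for z in zs) / N

# "Driftless ln S" variant (Ito term omitted): the sample mean of S(t)
# drifts up to S0 * exp(0.5 sigma^2 t) ~ 104.60, as observed in the data files.
mean_raw = sum(S0 * math.exp(SIGMA * math.sqrt(T) * z) for z in zs) / N

print(f"martingale mean: {mean_gbm:.2f}   no-Ito mean: {mean_raw:.2f}")
print(f"S0*exp(0.5*sigma^2*t) = {S0 * math.exp(0.5 * SIGMA**2 * T):.6f}")  # 104.602786
```

The second sample mean matches the *104.602786* figure quoted in the message, which is the diagnostic used there to infer that the Ito term was embedded in the simulated data in the wrong place.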
|
From: U.Mutlu <um...@mu...> - 2023-09-11 14:19:17
|
Ioannis Rigopoulos wrote on 09/11/23 13:43: > Could you please send us a link to a file that contains the simulated values > produced by your program when you use one time step? > > Since you use 1,000,000 simulations, the file should contain 1,000,000 numbers. On the following web page you can find download links to even 2 such GBM data sets I just have created and uploaded to a server: https://www.elitetrader.com/et/threads/simulating-stock-prices-using-gbm.375533/page-5#post-5861992 " 2 GBM generated test files in CSV format (with header line; see there), with 1,000,000 generated stock prices (S0=100, IV=30, t=1y, timeSteps=1, r=0, q=0), each about 29 MB uncompressed (about 12 MB zip-compressed). This GBM generator does not use Ito's lemma. ... " > On 9/11/2023 1:33 PM, U.Mutlu wrote: >> Ioannis Rigopoulos wrote on 09/11/23 11:47: >> > By the way, this is how my histogram of S(t) (for t = 1) simulated values >> > looks like when I used 100,000 time steps and 100,000 simulations. It is >> > clearly lognormal in shape. >> >> Yes, it's lognormally. My values are lognormlly as well, >> You just restate the obvious fact. >> >> > I am sure, if you do the same when time steps = 1, >> > you will get a normal distribution for S(t), which would prove >> > that time steps = 1 does not reproduce the theoretically expected >> > values for S(t). >> >> Not true, simply from the fact that the set boundary is lognormally: >> -1SD: 74.08182206817179 >> +1SD: 134.9858807576003 >> Ie. it's not symmetric (Gaussian, normally distributed) around the >> initial mean (100), but is lognormally (and therefore unsymmetric). >> >> FYI: in my other test program (that does not use QuantLib) >> I get in such simulations exact results almost identical >> to the expected theoretical results! So, your math and arguing is wrong, IMO. >> >> You are too much fixated on timeSteps=1. Maybe you have >> a much different understanding of what timeSteps really >> means in practice. 
It's _not_ something like the number >> of simulations that you simply can change and expect >> to get the same result; no, it has a clearly defined context >> and gives different results for different timeSteps. >> >> And you haven't answered the question why you think it has >> to be 68.27% for all timeSteps (which IMO is wrong). >> >> For those interested, here are more details about the above said: >> https://www.elitetrader.com/et/threads/simulating-stock-prices-using-gbm.375533/page-3#post-5849548 >> >> >> And, I today did a brief summary of these ongoing discussions on GBM here: >> https://www.elitetrader.com/et/threads/simulating-stock-prices-using-gbm.375533/page-5#post-5861919 >> >> >> >> Ioannis Rigopoulos wrote on 09/11/23 11:47: >>> Our SDE for the evolution of S is: >>> >>> dS = μSdt + σSdw >>> >>> In your code you use: >>> >>> >>> PathGenerator<RandomSequenceGenerator<MersenneBoxMuller>> >>> gbmPathGenerator(gbm, t, timeSteps, gsg, false); >>> const Path& samplePath = gbmPathGenerator.next().value; >>> >>> The PathGenerator::next().value is designed to return an array of numbers >>> {S(T₁) , S(T₂) , ... , S(Tᵤ)} >>> >>> where >>> >>> u := timeSteps >>> >>> Δt := t/u >>> >>> Tᵢ := iΔt for i = 1,...,u >>> >>> It follows: >>> >>> Tᵤ = t = the final time when we are interested to know the simulated values of >>> S (in your test you use t = 1). >>> >>> QuantLib builds the sequence {S(T₁) , S(T₂) , ... , S(Tᵤ)} incrementally >>> through a total number of u steps by adding the quantity ΔS := μSΔt + σSΔw on >>> the previous member of the sequence, with Δw := random number drawn from >>> N~(0,Δt¹ᐟ²). >>> >>> The previous member of S(T₁) is assumed the T₀ = 0 and the initial value S(T₀) >>> = S(0) is assumed deterministic and known. 
>>> >>> So, starting with the given S(T₀), QuantLib calculates the S(T₁) using the >>> formula: >>> >>> S(T₁) := S(T₀) + μS(T₀)Δt + σS(T₀)Δw >>> >>> Then given the S(T₁) as calculated above, it proceeds to calculate S(T₂) as: >>> >>> S(T₂) := S(T₁) + μS(T₁)Δt + σS(T₁)Δw >>> >>> For simplicity I have used the same notation for the Δw in the equations >>> above, but they are different. >>> >>> After u steps, the final value S(t) = S(Tᵤ) is calculated as: >>> >>> S(t) = S(Tᵤ) := S(Tᵤ₋₁) + μS(Tᵤ₋₁)Δt + σS(Tᵤ₋₁)Δw >>> >>> All equations above hold only in an approximate sense when Δt is finite. >>> >>> They are accurate only in a limiting sense as Δt → 0. >>> >>> It follows that the value S(t) will be calculated correctly only if Δt → 0, >>> which is equivalent to u → ∞ >>> >>> If you use u = 1 (i.e. one time step), you will still match the 68.27% rule >>> when the 1SD interval is defined in accordance with a normal distribution >>> because S(T₁) := S(T₀) + μS(T₀)Δt + σS(T₀)Δw corresponds to a normal >>> distribution and if you build a histogram of all simulated values you will >>> notice that S(T₁) is distributed normally, which is clearly not consistent >>> with the SDE that predicts lognormal distribution. >>> >>> The correct procedure is of course to use a very large u and define the 1SD >>> interval in accordance with a lognormal distribution, as I described in my >>> previous email. >>> >>> Then you will get convergence to the 68.27% rule. >>> >>> By the way, this is how my histogram of S(t) (for t = 1) simulated values >>> looks like when I used 100,000 time steps and 100,000 simulations. It is >>> clearly lognormal in shape. I am sure, if you do the same when time steps = 1, >>> you will get a normal distribution for S(t), which would prove that time steps >>> = 1 does not reproduce the theoretically expected values for S(t). 
>>> >>> Ioannis >>> >>> On 9/11/2023 10:19 AM, U.Mutlu wrote: >>>> Dear Ioannis Rigopoulos, I now analysed your table and >>>> must say that you seem to have a much different expectation to, >>>> and understanding of that table. >>>> >>>> The values in your table for timeSteps > 1 can definitely >>>> not be correct, b/c why do you think it has to be 68.27 for all timeSteps? >>>> This is illogical b/c the more timeSteps the more the hitrate% >>>> must be, like in my table. It saturates at about 84.93% >>>> >>>> Sorry, I have to retract my initial enthusiastic judment >>>> of your work below, as it's fundamentally flawed & wrong, >>>> and I should have been more careful when I quickly & briefly >>>> inspected your posting in the early morning here. >>>> >>>> Since your results in the table for timeSteps > 1 are wrong, >>>> then your formulas must be wrong too, so I had to stop my analysis right >>>> there. >>>> >>>> Please just answer why you think it has to be 68.27% for all timeSteps? >>>> >>>> >>>> U.Mutlu wrote on 09/11/23 08:36: >>>>> Mr. Ioannis Rigopoulos, that's a brilliant work by you. Thank you very much. >>>>> I need some time to study and verify the reults, but it looks very good >>>>> to solve the confusion and mystery I had experienced when I started this >>>>> research some weeks ago. >>>>> I'll try to replicate your results and will report later in detail here. >>>>> >>>>> Ioannis Rigopoulos wrote on 09/10/23 19:29: >>>>> > On a different topic, I do not understand your statement "/I've never >>>>> seen >>>>> > that Sx@-1SD and Sx@+1SD get computed with Ito's lemma applied/". >>>>> > What do the Sx@-1SD and Sx@+1SD mean? I suppose m -1SD and m +1SD, where >>>>> >>>>> Sorry for the consfusion, it just means the stock price (Sx after time t, >>>>> ie. S(t) or S_t) for the lower boundary at -1SD and for the upper >>>>> boundary at >>>>> +1SD, >>>>> ie. your "min of S(t)" and "max of S(t)" in your other posting. 
>>>>> In future I'll use a more common term like yours, sorry.
>>>>>
>>>>> Thx
>>>>>
>>>>> Ioannis Rigopoulos wrote on 09/10/23 19:29:
>>>>>> Yes, I mean mu * t - 0.5 * sigma * sigma * t
>>>>>>
>>>>>> Are you sure you also adjusted the boundaries corresponding to plus and minus 1 SD?
>>>>>>
>>>>>> I ran an Excel spreadsheet by modifying a spreadsheet I had published several years ago at https://blog.deriscope.com/index.php/en/excel-value-at-risk that uses the GeometricBrownianMotionProcess, albeit with the PseudoRandom::rsg_type generator.
>>>>>>
>>>>>> In any case, I got convergence to 68.27% as I increased the number of time steps while keeping the number of simulations fixed at 100,000, provided I used the correct (Ito-adjusted) means, as below:
>>>>>>
>>>>>> As you see above, the "No Ito-adjusted" boundaries are 74.08 and 134.99, just like those you use.
>>>>>>
>>>>>> The Ito-adjusted boundaries are 70.82 and 129.05, produced by shifting the non-adjusted boundaries - in log terms - to the left by 0.5*sigma^2*t.
>>>>>>
>>>>>> The simulation results (percentage of values being within the min and max boundaries) are below:
>>>>>>
>>>>>> As you see, the Ito-adjusted percentage reaches 68.30% for 100,000 time steps, which is close to the expected 68.27%.
>>>>>>
>>>>>> The No Ito-adjusted percentage seems instead to converge to a different value around 67.73%.
>>>>>>
>>>>>> On a different topic, I do not understand your statement "/I've never seen that Sx@-1SD and Sx@+1SD get computed with Ito's lemma applied/".
>>>>>>
>>>>>> What do the Sx@-1SD and Sx@+1SD mean? I suppose m -1SD and m +1SD, where m is the mean of a normally distributed variable. This means that one needs to calculate m in order to do the mentioned computation. 
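The four boundary figures quoted in the email (74.08/134.99 without the Ito adjustment, 70.82/129.05 with it) follow directly from the closed-form expressions discussed in the thread. A short check, using the thread's parameters (S₀ = 100, μ = 0, σ = 0.3, t = 1); the helper name is made up for this illustration:

```python
import math

def one_sd_boundaries(s0, mu, sigma, t, ito_adjusted=True):
    """1SD band for S(t) under GBM: exp(ln S0 + m*t -/+ sigma*sqrt(t)),
    where m includes the -0.5*sigma^2 Ito correction when requested."""
    m = mu - (0.5 * sigma * sigma if ito_adjusted else 0.0)
    center = math.log(s0) + m * t
    half_width = sigma * math.sqrt(t)
    return math.exp(center - half_width), math.exp(center + half_width)

lo_raw, hi_raw = one_sd_boundaries(100, 0.0, 0.3, 1.0, ito_adjusted=False)
lo_ito, hi_ito = one_sd_boundaries(100, 0.0, 0.3, 1.0, ito_adjusted=True)
print(round(lo_raw, 2), round(hi_raw, 2))  # 74.08 134.99
print(round(lo_ito, 2), round(hi_ito, 2))  # 70.82 129.05
```

The Ito-adjusted band is the non-adjusted one shifted left in log terms by 0.5*sigma^2*t, exactly as the email states.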
>>>>>> If your starting point is an SDE like dS = S*mu*dt + S*sigma*dw, then ln(S(t)) at any time t will be normally distributed with mean m = ln(S(0)) + mu*t - 0.5*sigma^2*t, where the last term is a consequence of the Ito lemma.
>>>>>>
>>>>>> Do you have any reason to believe that the Ito lemma is somehow not valid, so that m = ln(S(0)) + mu*t?
>>>>>>
>>>>>> Ioannis
>>>>>>
>>>>>> On 9/10/2023 3:00 PM, U.Mutlu wrote:
>>>>>>> Ioannis Rigopoulos wrote on 09/10/23 13:06:
>>>>>>> > new GeometricBrownianMotionProcess(S, mu - 0.5*sigma^2*t, sigma)
>>>>>>>
>>>>>>> Thank you for your analysis. I guess you mean "mu * t - 0.5 * sigma * sigma * t", though in our example with t=1 it doesn't make any difference. It gives for timeSteps=1 this result:
>>>>>>>
>>>>>>> $ ./Testing_GBM_of_QuantLib.exe 1
>>>>>>> timeSteps=1 nRuns=1000000 seed=1694350548 S=100.000000 mu=0.000000 sigma=0.300000 t=1.000000 : nGenAll=1000000 USE_ITO=1 : -1SD=70.822035 +1SD=129.046162
>>>>>>> cHit=663067/1000000(66.307%)
>>>>>>>
>>>>>>> This is far from the expected value of 68.2689% for timeSteps=1. So, there must be a bug somewhere. IMO it's our suspect friend Ito :-)
>>>>>>>
>>>>>>> Cf. also the following showing similar results for GBM, which then was compared to the standard lognormal calculation:
>>>>>>> https://www.elitetrader.com/et/threads/simulating-stock-prices-using-gbm.375533/page-3#post-5849548
>>>>>>>
>>>>>>> And, honestly, I've never seen that Sx@-1SD and Sx@+1SD get computed with Ito's lemma applied.
>>>>>>>
>>>>>>> Ioannis Rigopoulos wrote on 09/10/23 13:06:
>>>>>>>> Thank you for testing the BoxMullerGaussianRng code, which - as I wrote - does not seem to be used in other areas of the standard part of QuantLib.
>>>>>>>>
>>>>>>>> Your numerical results below ... 
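The quoted 66.307% hit rate is in fact consistent with the discretisation argument made later in the thread. Assuming the quoted run performs a single Euler step with the Ito-adjusted drift mu - 0.5*sigma^2 (an assumption about the test program's setup, not something confirmed in the email) and counts hits against the Ito-adjusted lognormal band 70.82..129.05, a rough reproduction lands near the same figure:

```python
import math
import random

def one_step_hit_rate(s0=100.0, mu=0.0, sigma=0.3, t=1.0, n=200_000, seed=7):
    """Single Euler step with Ito-adjusted drift, counted against the
    Ito-adjusted lognormal 1SD band (70.82 .. 129.05 for these inputs).
    Assumed to mirror the quoted test run; this is a sketch, not its code."""
    rng = random.Random(seed)
    drift = mu - 0.5 * sigma * sigma          # assumed adjusted drift
    lo = s0 * math.exp(drift * t - sigma * math.sqrt(t))
    hi = s0 * math.exp(drift * t + sigma * math.sqrt(t))
    hits = 0
    for _ in range(n):
        # one Euler step over the whole horizon t: S1 is normally distributed
        s1 = s0 + drift * s0 * t + sigma * s0 * rng.gauss(0.0, math.sqrt(t))
        if lo < s1 < hi:
            hits += 1
    return hits / n

rate_one_step = one_step_hit_rate()
print(f"one-step hit rate: {rate_one_step:.4f}")  # around 0.663, not 0.6827
```

A normally distributed S(t) measured against a lognormal band cannot reach 68.27%, which is the point Ioannis makes in the surrounding emails.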
>>>>>>>> indicate that the simulated values for ln(S(t)) produced with timeSteps = 1 are very likely normally distributed with mean (I use your notation) ln(S(0)) + mu*t and standard deviation sigma * sqrt(t)
>>>>>>>>
>>>>>>>> This result _*is consistent*_ with:
>>>>>>>>
>>>>>>>> a) the SDE: dS(t) = S(t)*mu*dt + S(t)*sigma*dw
>>>>>>>>
>>>>>>>> _*and*_
>>>>>>>>
>>>>>>>> b) the fact that in QuantLib the PathGenerator.next().value returns the result of the SDE expression _*without*_ applying the Ito correction associated with the fact that dt is not infinitesimal.
>>>>>>>>
>>>>>>>> The b) is also responsible for the lack of convergence in your output towards the theoretical target of 68.27%.
>>>>>>>>
>>>>>>>> You would get the correct convergence if you modified your code by using the expressions below (note sigma*sigma rather than sigma^2, which in C++ would be the XOR operator):
>>>>>>>>
>>>>>>>> const double m1SD = S * exp(mu * t - 0.5*sigma*sigma*t + sigma * sqrt(t) * -1.0); // Sx at -1SD
>>>>>>>> const double p1SD = S * exp(mu * t - 0.5*sigma*sigma*t + sigma * sqrt(t) * 1.0); // Sx at +1SD
>>>>>>>>
>>>>>>>> new GeometricBrownianMotionProcess(S, mu - 0.5*sigma*sigma*t, sigma) (as Peter also pointed out)
>>>>>>>>
>>>>>>>> Again, due to b) one can produce the correct simulated values of a GBM diffused quantity (such as a stock price in the GBM model) by using N time steps, with N very large. Using N = 1 (like in your example), the simulated values will still be lognormally distributed (whence your good result with N = 1), but will be centered at a wrong mean and thus will _*fail*_ to represent the correct values expected by the GBM SDE. 
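There is a complementary point worth noting: the number of Euler steps only matters because the Euler scheme discretises the SDE. If one instead samples the closed-form GBM solution S(t) = S(0)·exp((mu - 0.5·sigma²)·t + sigma·√t·Z) directly, a single draw per path already matches the 68.27% rule against the lognormal band. A sketch (plain Python, not QuantLib's scheme):

```python
import math
import random

def exact_gbm_hit_rate(s0=100.0, mu=0.0, sigma=0.3, t=1.0, n=200_000, seed=3):
    """Draw S(t) from the exact GBM solution
    S(t) = S0 * exp((mu - 0.5*sigma^2)*t + sigma*sqrt(t)*Z), Z ~ N(0,1),
    and count hits inside the Ito-adjusted lognormal 1SD band."""
    rng = random.Random(seed)
    m = (mu - 0.5 * sigma * sigma) * t
    half_width = sigma * math.sqrt(t)
    lo = s0 * math.exp(m - half_width)
    hi = s0 * math.exp(m + half_width)
    hits = sum(
        lo < s0 * math.exp(m + half_width * rng.gauss(0.0, 1.0)) < hi
        for _ in range(n)
    )
    return hits / n

rate_exact = exact_gbm_hit_rate()
print(f"exact-sampling hit rate: {rate_exact:.4f}")  # close to 0.6827
```

Here a hit occurs exactly when |Z| < 1, so the theoretical value is 68.27% regardless of the time horizon; the Monte Carlo estimate fluctuates around it.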
>>>>>>>> Ioannis
>>>>>>>>
>>>>>>>> On 9/10/2023 11:29 AM, U.Mutlu wrote:
>>>>>>>>> As said in another posting here, after fixing the test program by skipping the first item (the initial price) in the generated sample path, BoxMullerGaussianRng now passes the said test.
>>>>>>>>>
>>>>>>>>> The bugfixed test code and the new test results can be found here:
>>>>>>>>> https://www.elitetrader.com/et/threads/simulating-stock-prices-using-gbm.375533/page-4#post-5861666
>>>>>>>>>
>>>>>>>>> The important fact that the generated sample path contains not timeSteps elements but 1 + timeSteps elements needs to be documented in the library doc. For example, on this API doc page one would normally expect to find such important information, but it's missing:
>>>>>>>>> https://www.quantlib.org/reference/class_quant_lib_1_1_path_generator.html
>>>>>>>>>
>>>>>>>>> If you or someone else can change/extend the test program by using the suggested alternative(s) to BoxMullerGaussianRng, I would be happy to hear about it. Thx.
>>>>>>>>>
>>>>>>>>> Ioannis Rigopoulos wrote on 09/09/23 16:28:
>>>>>>>>>> If you search within the QuantLib code for BoxMullerGaussianRng, you will see it is used only in the experimental folder. It is therefore not surprising if it doesn't produce the expected results.
>>>>>>>>>>
>>>>>>>>>> I myself use the MultiPathGenerator with PseudoRandom::rsg_type, which is used extensively in other areas of QuantLib.
>>>>>>>>>>
>>>>>>>>>> This type expands to InverseCumulativeRsg< RandomSequenceGenerator< MersenneTwisterUniformRng >, InverseCumulativeNormal > and gives me good results.
>>>>>>>>>>
>>>>>>>>>> Ioannis Rigopoulos, founder of deriscope.com |
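The "1 + timeSteps elements" pitfall described above can be illustrated with a toy stand-in for the path generator. This is not QuantLib's implementation, only a sketch of the behaviour the email describes: the returned path starts with the initial price, so statistics over simulated values must skip element 0.

```python
import math
import random

def make_path(s0, mu, sigma, t, time_steps, rng):
    """Toy stand-in for PathGenerator::next().value: returns a list whose
    first element is the initial price, followed by one Euler update per
    step, i.e. 1 + time_steps elements in total."""
    dt = t / time_steps
    path = [s0]
    for _ in range(time_steps):
        s = path[-1]
        path.append(s + mu * s * dt + sigma * s * rng.gauss(0.0, math.sqrt(dt)))
    return path

rng = random.Random(0)
path = make_path(100.0, 0.0, 0.3, 1.0, time_steps=12, rng=rng)
assert len(path) == 1 + 12    # initial value plus one entry per step
assert path[0] == 100.0       # the deterministic starting price
simulated = path[1:]          # only these are simulated values
```

Counting `path[0]` as a simulated value is exactly the bug that made the original test fail before the first item was skipped.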
|
From: Ioannis R. <qua...@de...> - 2023-09-11 11:43:41
|
Could you please send us a link to a file that contains the simulated values produced by your program when you use one time step? Since you use 1,000,000 simulations, the file should contain 1,000,000 numbers.

On 9/11/2023 1:33 PM, U.Mutlu wrote:
> Ioannis Rigopoulos wrote on 09/11/23 11:47:
> > By the way, this is how my histogram of S(t) (for t = 1) simulated values
> > looks like when I used 100,000 time steps and 100,000 simulations. It is
> > clearly lognormal in shape.
>
> Yes, it's lognormal. My values are lognormal as well. You just restate the obvious fact.
>
> > I am sure, if you do the same when time steps = 1,
> > you will get a normal distribution for S(t), which would prove
> > that time steps = 1 does not reproduce the theoretically expected
> > values for S(t).
>
> Not true, simply from the fact that the set boundary is lognormal:
> -1SD: 74.08182206817179
> +1SD: 134.9858807576003
> I.e. it's not symmetric (Gaussian, normally distributed) around the initial mean (100), but lognormal (and therefore asymmetric).
>
> FYI: in my other test program (that does not use QuantLib) I get in such simulations exact results almost identical to the expected theoretical results! So your math and reasoning are wrong, IMO.
>
> You are too much fixated on timeSteps=1. Maybe you have a much different understanding of what timeSteps really means in practice. It's _not_ something like the number of simulations that you simply can change and expect to get the same result; no, it has a clearly defined context and gives different results for different timeSteps.
>
> And you haven't answered the question why you think it has to be 68.27% for all timeSteps (which IMO is wrong). 
> For those interested, here are more details about the above said:
> https://www.elitetrader.com/et/threads/simulating-stock-prices-using-gbm.375533/page-3#post-5849548
>
> And, I today did a brief summary of these ongoing discussions on GBM here:
> https://www.elitetrader.com/et/threads/simulating-stock-prices-using-gbm.375533/page-5#post-5861919
>
> Ioannis Rigopoulos wrote on 09/11/23 11:47:
>> Our SDE for the evolution of S is:
>>
>> dS = μSdt + σSdw
>>
>> In your code you use:
>>
>> PathGenerator<RandomSequenceGenerator<MersenneBoxMuller>>
>> gbmPathGenerator(gbm, t, timeSteps, gsg, false);
>> const Path& samplePath = gbmPathGenerator.next().value;
>>
>> The PathGenerator::next().value is designed to return an array of numbers
>> {S(T₁) , S(T₂) , ... , S(Tᵤ)}
>>
>> where
>>
>> u := timeSteps
>>
>> Δt := t/u
>>
>> Tᵢ := iΔt for i = 1,...,u
>>
>> It follows:
>>
>> Tᵤ = t = the final time when we are interested to know the simulated values of S (in your test you use t = 1).
>>
>> QuantLib builds the sequence {S(T₁) , S(T₂) , ... , S(Tᵤ)} incrementally through a total number of u steps by adding the quantity ΔS := μSΔt + σSΔw on the previous member of the sequence, with Δw := random number drawn from N~(0,Δt¹ᐟ²).
>>
>> The previous member of S(T₁) is assumed the T₀ = 0 and the initial value S(T₀) = S(0) is assumed deterministic and known.
>>
>> [...]
|