From: Klaus Spanderen <klaus@sp...> - 2009-04-30 22:08:14

Hi Michael,

> Anyways, can you send me a link or a paper with more information about the
> arbitrage violations due to constant extrapolation? I still can't see why
> constant extrapolation is violating the arbitrage criteria.

Please find attached a small program where the constant extrapolation, as implemented in BlackVarianceSurface, generates an arbitrage violation: a negative call-spread price when the maturity becomes large enough. To get it running you have to enable extrapolation in analyticeuropeanengine.hpp at line 45. (Hope I got everything right with the example ;-)

regards
Klaus
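Klaus's attachment is not preserved in the archive. As a stand-in, here is a minimal Python sketch of the effect he describes, with illustrative numbers rather than his actual example: when a strike skew is extrapolated at constant implied volatility in time, the price of a long call spread eventually turns negative, which is a textbook arbitrage.

```python
import math

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def black_call(forward, strike, vol, maturity):
    # undiscounted Black (1976) call price
    sd = vol * math.sqrt(maturity)
    d1 = math.log(forward / strike) / sd + 0.5 * sd
    d2 = d1 - sd
    return forward * norm_cdf(d1) - strike * norm_cdf(d2)

def call_spread(maturity):
    # a mild upward skew frozen in time: each strike keeps its implied vol
    # at every maturity, as constant extrapolation in time does
    forward = 100.0
    k1, vol1 = 100.0, 0.20
    k2, vol2 = 110.0, 0.22   # higher strike quoted at a higher vol
    return black_call(forward, k1, vol1, maturity) \
         - black_call(forward, k2, vol2, maturity)

# a long call spread must be worth >= 0; with the skew frozen in time the
# inequality eventually flips, which is the arbitrage violation
print(call_spread(1.0) > 0)    # no violation at short maturity
print(call_spread(30.0) < 0)   # violation once the maturity is large enough
```

The mechanism is the one behind Lee's moment bounds: an implied-vol skew has to flatten roughly like 1/sqrt(T), so any extrapolation that keeps it constant must break down at long maturities.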
From: sun <pythonsun@gm...> - 2009-04-30 14:07:36

Hi Dev,

I'm wondering if there are projects from you that I could join to get some experience with quant development. With some basic knowledge of numerical simulations and financial modelling, it would be good if the task were related to that kind of training.

Best Regards,
sun
From: Michael Heckl <Michael.Heckl@gm...> - 2009-04-29 23:07:44

Hello Dima,

I am very curious about your new kernel interpolation. Sounds like this is something that can help me a lot right now. I would like to try out different interpolation methods to overcome my numerical problems. If I can make my surface smoother with kernel interpolation I will give it a go and would like to do some testing on it.

How do I get your new kernel interpolation running? Can I check it out from the SVN? I checked the trunk and found a file called kernelinterpolation.hpp. Does it work in two dimensions or just one? For the surface I need it in two dimensions. Can you please also send me a link or a paper with more information about it, so I can build up some theoretical knowledge before I start testing.

Greetings,
Michael

-----Original Message-----
From: Dima [mailto:dimathematician@...]
Sent: Wednesday, 29 April 2009 11:55
To: Klaus Spanderen
Cc: quantlib-dev@...
Subject: Re: [Quantlib-dev] LocalvolSurface.cpp

Just a remark regarding interpolation. I've implemented kernel interpolation (it's in the trunk), which can be made sufficiently smooth by choosing a proper standard deviation of the Gaussian kernel. I've heard that this smoothness property makes it a good choice for local vol calibrations. I don't have any personal experience with that, though.
From: Michael Heckl <Michael.Heckl@gm...> - 2009-04-29 22:35:28

Hello Klaus,

> BiLinear interpolation doesn't work due to the jumps in the first derivative.
> BiCubic should be much better.

I totally agree that BiLinear interpolation is not the smoothest interpolation out there. And you are right that in theory you would get points of discontinuity exactly where the linear slope changes. Moreover, the first derivative would look like a step function, which in theory leads to a sum of Dirac delta functions in the second derivative. But this is only in theory; since we do discrete approximations in QuantLib, I doubt that we have this effect. Anyway, I also have problems with BiCubic interpolation. My hope is now that the newly implemented kernel interpolation (thanks to Dima) gives better test results.

> Constant extrapolation could introduce arbitrage violations far ITM or far OTM
> and these arbitrage violations can lead to negative variances. Is this part
> of the problem in your tests?

I would love to drop the constant extrapolation and use the InterpolatorDefaultExtrapolation instead, but that doesn't work: I then run into another problem, namely that very far ITM/OTM the monotone-variance criterion is violated at some point and the program aborts. I have no idea how to counter this problem, so I can't really say whether that would make it better. Anyway, can you send me a link or a paper with more information about the arbitrage violations due to constant extrapolation? I still can't see why constant extrapolation violates the arbitrage criteria. At least it doesn't violate any of the arbitrage criteria that I know (see Roger Lee, or Gatheral, or Musiela/Rutkowski).

Greetings from Munich,
Michael
From: lowlyworm <jbohart@gm...> - 2009-04-29 11:10:21

Hi all,

I'm trying to reproduce some calculations from the Brigo and Mercurio book "Interest Rate Models - Theory and Practice", 2nd edition, particularly the G2++ model in chapter 4. I'm using the BermudanSwaption example and have modified the data, but I don't understand how to change from the FlatForward YieldTermStructure to a non-flat-forward yield term structure (using the values from Fig. 1.1) so I can reproduce their results. Any help would be greatly appreciated!

Thanks,
Joe

--
View this message in context: http://www.nabble.com/BermudanSwaptionusingnonflatforwardYieldTermStructuretp23295164p23295164.html
Sent from the quantlib-dev mailing list archive at Nabble.com.
From: Dima <dimathematician@go...> - 2009-04-29 09:54:41

Just a remark regarding interpolation. I've implemented kernel interpolation (it's in the trunk), which can be made sufficiently smooth by choosing a proper standard deviation of the Gaussian kernel. I've heard that this smoothness property makes it a good choice for local vol calibrations. I don't have any personal experience with that, though.

2009/4/29 Klaus Spanderen <klaus@...>
> Hi
>
> > I ran my tests with both BiLinear and BiCubic interpolation. I encountered
> > the instability issues with both.
>
> BiLinear interpolation doesn't work due to the jumps in the first derivative.
> BiCubic should be much better.
>
> > You are right that the problems appear far ITM and OTM. But I can't really
> > figure out why, since I set up a constant extrapolation for both Strike
> > and Maturity in my BlackVarianceSurface.
>
> Constant extrapolation could introduce arbitrage violations far ITM or far OTM,
> and these arbitrage violations can lead to negative variances. Is this part
> of the problem in your tests?
>
> > Thank you also for the link. To me it seems like if we want to use the
> > Dupire formula efficiently, stably and productively, we won't get around
> > implementing some fancy optimization splines and smoothing algorithm.
>
> yes ;)
>
> regards
> Klaus
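For readers wondering what kernel interpolation does: below is a one-dimensional Python sketch of Gaussian-kernel (radial basis function) interpolation. This is the general idea only, not the QuantLib kernelinterpolation.hpp implementation; the weights are chosen so the interpolant reproduces the input points, and the kernel's standard deviation is the smoothing knob Dima mentions.

```python
import math

def gaussian(u, stdev):
    # the Gaussian kernel; stdev controls the smoothness of the interpolant
    return math.exp(-0.5 * (u / stdev) ** 2)

def solve(matrix, rhs):
    # plain Gaussian elimination with partial pivoting (fine for small systems)
    n = len(rhs)
    a = [row[:] + [rhs[i]] for i, row in enumerate(matrix)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(a[r][col]))
        a[col], a[pivot] = a[pivot], a[col]
        for r in range(col + 1, n):
            f = a[r][col] / a[col][col]
            for c in range(col, n + 1):
                a[r][c] -= f * a[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (a[r][n] - sum(a[r][c] * x[c] for c in range(r + 1, n))) / a[r][r]
    return x

def kernel_interpolation(xs, ys, stdev):
    # choose weights so the interpolant passes through every input point
    k = [[gaussian(xi - xj, stdev) for xj in xs] for xi in xs]
    w = solve(k, ys)
    return lambda x: sum(wj * gaussian(x - xj, stdev) for wj, xj in zip(w, xs))

xs = [80.0, 90.0, 100.0, 110.0, 120.0]
ys = [0.30, 0.25, 0.22, 0.24, 0.28]          # a toy volatility smile
f = kernel_interpolation(xs, ys, stdev=10.0)
print(all(abs(f(x) - y) < 1e-8 for x, y in zip(xs, ys)))   # reproduces nodes
```

Because the interpolant is a sum of Gaussians, it is infinitely differentiable everywhere, which is exactly the property that matters for the second derivative in the Dupire formula discussed in this thread.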
From: Klaus Spanderen <klaus@sp...> - 2009-04-29 07:21:19

Hi

> I ran my tests with both BiLinear and BiCubic interpolation. I encountered
> the instability issues with both.

BiLinear interpolation doesn't work due to the jumps in the first derivative. BiCubic should be much better.

> You are right that the problems appear far ITM and OTM. But I can't really
> figure out why, since I set up a constant extrapolation for both Strike
> and Maturity in my BlackVarianceSurface.

Constant extrapolation could introduce arbitrage violations far ITM or far OTM, and these arbitrage violations can lead to negative variances. Is this part of the problem in your tests?

> Thank you also for the link. To me it seems like if we want to use the
> Dupire formula efficiently, stably and productively, we won't get around
> implementing some fancy optimization splines and smoothing algorithm.

yes ;)

regards
Klaus
From: Michael Heckl <Michael.Heckl@gm...> - 2009-04-28 21:42:44

Hello Luigi,

Done with the patch manager, ID 2783225.

Greetings,
Michael

-----Original Message-----
From: Luigi Ballabio [mailto:luigi.ballabio@...]
Sent: Tuesday, 21 April 2009 11:52
To: Michael Heckl
Cc: quantlib-dev@...
Subject: Re: [Quantlib-dev] EnhancedBlackScholesProcess which supports Vega Tests

On Thu, 2009-04-09 at 23:45 +0200, Michael Heckl wrote:
> I solved this problem by building up an enhanced Black-Scholes process
> which takes as parameters a stress level and a square of the local
> volatility surface which it stresses on demand. I thought maybe anyone
> is interested in my solution? I already tested it and it works fine.
> The solution itself is quite easy and I am working with it so far
> without any problems. I would like to contribute it and maybe we can
> together improve it and enhance QuantLib?
>
> Since this is my first experience with QuantLib mailing lists I am not
> quite sure if I can attach my cpp files? Can anybody give me some
> advice?

Sorry for the delay - yes, you can post them here or in the Sourceforge patch manager.

Luigi

--
Just remember what ol' Jack Burton does when the earth quakes, the poison arrows fall from the sky, and the pillars of Heaven shake. Yeah, Jack Burton just looks that big old storm right in the eye and says, "Give me your best shot. I can take it."
-- Jack Burton, "Big Trouble in Little China"
From: SourceForge.net <noreply@so...> - 2009-04-28 21:40:11

Patches item #2783225, was opened at 2009-04-28 23:40
Message generated for change (Tracker Item Submitted) made by heckl
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=312740&aid=2783225&group_id=12740

Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update.

Category: None
Group: None
Status: Open
Resolution: None
Priority: 5
Private: No
Submitted By: Michael Heckl (heckl)
Assigned to: Nobody/Anonymous (nobody)
Summary: EnhancedBlackScholesProcess that supports vega stress tests

Initial Comment:
This Black-Scholes process takes 5 extra arguments which define a square of the local volatility surface that is stressed by a configurable stress level. You can also use this process for local vol curve stress tests. The solution is quite easy but very helpful. I did lots of testing on it and it works perfectly. Especially for examining where (what moneyness and what time bucket) the vega sensitivities of path-dependent Asian options are, you can do wonderful stress tests with Monte Carlo. This is because it is not always desired to stress the implied surface (since this could cause smoothness problems), but also the local volatility surface, and there is no other way to do this so far. That is why I developed this process. Check it out. I can also provide some test cases that demonstrate the usefulness of this process.

Greetings,
Michael
From: Michael Heckl <Michael.Heckl@gm...> - 2009-04-28 21:20:04

Hello Nando,

-----Original Message-----
On Mon, Apr 27, 2009 at 11:20 AM, Michael Heckl <Michael.Heckl@...> wrote:
> > I set up a constant extrapolation for both, Strike and
> > Maturity in my BlackVarianceSurface.
>
> I don't work on equities but constant variance extrapolation in strike and
> maturity seems plain wrong to me. In time it implies zero forward volatility,
> in strike it violates the concavity smile requirement
>
> ciao -- Nando

What is done with the time extrapolation is the following:

    if (t <= times_.back())
        return varianceSurface_(t, strike, true);
    else // t > times_.back() -- extrapolate
        return varianceSurface_(times_.back(), strike, true) *
               t/times_.back();

I.e. it is not the implied variance surface that is extrapolated constant in time, but the implied volatility surface. What exactly do you mean by zero forward volatility? IMHO the forward volatility between two dates past the extrapolation boundary would in that case be exactly the constant extrapolated volatility (analogous to interest rates, for instance).

At strikes past the maximum/minimum strike the implied variance surface is indeed extrapolated flat, i.e.

    // enforce constant extrapolation when required
    if (strike < strikes_.front()
        && lowerExtrapolation_ == ConstantExtrapolation)
        strike = strikes_.front();
    if (strike > strikes_.back()
        && upperExtrapolation_ == ConstantExtrapolation)
        strike = strikes_.back();

Well, I don't know what you mean by your concavity smile requirement, but I can't see what the problem with this extrapolation should be. If you look at the no-arbitrage requirements for the implied volatility surface, they only give limits for the slope, and constant extrapolation means zero slope, so I can't see any restriction being hit. You can find no-arbitrage restrictions for the implied volatility for instance in Roger W. Lee, "Implied Volatility: Statics, Dynamics, and Probabilistic Interpretation". The paper is online here: http://www.math.uchicago.edu/~rl/

Can you send me a paper which works out the arguments you stated?

Greetings,
Michael
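Michael's forward-volatility point is easy to check numerically: under the t/times_.back() scaling, total variance grows linearly past the last pillar, so the forward volatility between two extrapolated dates equals the constant extrapolated volatility; flat total variance would instead give zero forward volatility. A small Python sketch with illustrative numbers:

```python
def forward_vol(w1, t1, w2, t2):
    # annualized forward volatility between t1 and t2 from total variances
    return ((w2 - w1) / (t2 - t1)) ** 0.5

maturity, sigma = 2.0, 0.25          # last pillar and its implied vol
w_last = sigma ** 2 * maturity       # total variance at the last pillar

# BlackVarianceSurface-style extrapolation: constant implied volatility
w_const_vol = lambda t: w_last * t / maturity
# the alternative reading of Nando's objection: constant total variance
w_const_var = lambda t: w_last

print(forward_vol(w_const_vol(3.0), 3.0, w_const_vol(4.0), 4.0))  # 0.25
print(forward_vol(w_const_var(3.0), 3.0, w_const_var(4.0), 4.0))  # 0.0
```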
From: Ferdinando Ametrano <qf@am...> - 2009-04-28 10:25:59

On Mon, Apr 27, 2009 at 11:20 AM, Michael Heckl <Michael.Heckl@...> wrote:
> I set up a constant extrapolation for both, Strike and
> Maturity in my BlackVarianceSurface.

I don't work on equities, but constant variance extrapolation in strike and maturity seems plain wrong to me. In time it implies zero forward volatility; in strike it violates the concavity smile requirement.

ciao -- Nando
From: Dirk Eddelbuettel <edd@de...> - 2009-04-27 11:02:29

On 27 April 2009 at 11:48, Ferdinando Ametrano wrote:
| Hi Dirk
|
| the issue arises when the test is run on a non-trading day. To fix
| this I've added a
|     Settings::instance().evaluationDate() =
|         conventions.calendar.adjust(Date::todaysDate());
| line so it won't happen again.

Cool, thank you!

Dirk

| thank you for the report
|
| ciao -- Nando
|
| On Sun, Apr 26, 2009 at 2:13 PM, Dirk Eddelbuettel <edd@...> wrote:
| > Hi,
| >
| > Arnaud noticed that quantlib-test-suite dies on the error below, seemingly
| > from the code in test-suite/swaptionvolatilitymatrix.cpp that does:
| >
| >     Date exerciseDate = swaption.exercise()->dates().front();
| >     if (exerciseDate != vol->optionDates()[i])
| >         BOOST_FAIL("optionDateFromTenor mismatch for " << description << ":"
| >                    "\n   option tenor: " << atm.tenors.options[i] <<
| >                    "\n actual option date: " << exerciseDate <<
| >                    "\n   exp. option date: " << vol->optionDates()[i]);
| >
| > This happened to him on amd64, I see the same on i386. Is that considered an
| > actual bug, or merely an imperfect implementation of the date logic in
| > swaption exercise handling?
| >
| > Thanks, Dirk
| >
| > [...]

--
Three out of two people have difficulties with fractions.
From: Dima <dimathematician@go...> - 2009-04-27 10:44:51

Thanks a lot. Did you have a chance to look at the black calculator that I posted? Sorry for being annoying, but since I'm using it intensively in all of my current classes, I'm just afraid that the discussion will start later and I'll have to go back and change everything in all of the classes :)

2009/4/27 Ferdinando Ametrano <nando@...>
> On Mon, Apr 27, 2009 at 11:03 AM, Dima <dimathematician@...> wrote:
> > While I'm still waiting for feedback
>
> I've just committed your fix for the (stdDev<QL_EPSILON, fwd==strike) case
>
> > I'd need smilesection.hpp to return the reference date. Is this possible?
>
> just done.
>
> > I continue coding and have VannaVolga and
> > Malz ready, testing included.
>
> share them as a patch whenever you're comfortable with them
>
> ciao -- Nando
From: Ferdinando Ametrano <nando@am...> - 2009-04-27 10:23:11

On Mon, Apr 27, 2009 at 11:03 AM, Dima <dimathematician@...> wrote:
> While I'm still waiting for feedback

I've just committed your fix for the (stdDev<QL_EPSILON, fwd==strike) case

> I'd need smilesection.hpp to return the reference date. Is this possible?

just done.

> I continue coding and have VannaVolga and
> Malz ready, testing included.

share them as a patch whenever you're comfortable with them

ciao -- Nando
From: Ferdinando Ametrano <qf@am...> - 2009-04-27 09:48:06

Hi Dirk,

the issue arises when the test is run on a non-trading day. To fix this I've added a

    Settings::instance().evaluationDate() =
        conventions.calendar.adjust(Date::todaysDate());

line so it won't happen again. Thank you for the report.

ciao -- Nando

On Sun, Apr 26, 2009 at 2:13 PM, Dirk Eddelbuettel <edd@...> wrote:
> Hi,
>
> Arnaud noticed that quantlib-test-suite dies on the error below, seemingly
> from the code in test-suite/swaptionvolatilitymatrix.cpp that does:
>
>     Date exerciseDate = swaption.exercise()->dates().front();
>     if (exerciseDate != vol->optionDates()[i])
>         BOOST_FAIL("optionDateFromTenor mismatch for " << description << ":"
>                    "\n   option tenor: " << atm.tenors.options[i] <<
>                    "\n actual option date: " << exerciseDate <<
>                    "\n   exp. option date: " << vol->optionDates()[i]);
>
> This happened to him on amd64, I see the same on i386. Is that considered an
> actual bug, or merely an imperfect implementation of the date logic in
> swaption exercise handling?
>
> Thanks, Dirk
>
> [...]
From: Michael Heckl <Michael.Heckl@gm...> - 2009-04-27 09:21:00

Hello Klaus,

I ran my tests with both BiLinear and BiCubic interpolation. I encountered the instability issues with both.

You are right that the problems appear far ITM and OTM. But I can't really figure out why, since I set up a constant extrapolation for both Strike and Maturity in my BlackVarianceSurface. I attached the cpp file that I use for my tests, as well as a small jpg of the surface that is included in my test cases. I hope this gives you all the information you need so you can give me some feedback on my test cases.

Thank you also for the link. To me it seems like if we want to use the Dupire formula efficiently, stably and productively, we won't get around implementing some fancy optimization splines and smoothing algorithm.

I am looking forward to hearing back from you.

Greetings,
Michael

-----Original Message-----
From: Klaus Spanderen [mailto:klaus@...]
Sent: Sunday, 26 April 2009 22:01
To: quantlib-dev@...
Cc: Michael Heckl
Subject: Re: [Quantlib-dev] LocalvolSurface.cpp

> It happens at different strikes and moneyness under some conditions that
> the d2wdy2 blows up and becomes something like 4231.12

Which interpolation scheme do you use for the volatility surface? The standard interpolation is linear in the variance, which very often leads to problems with the second derivative. Therefore I've used bicubic spline interpolation:

    volTS->setInterpolation<Bicubic>();

Does this happen only for deep ITM or OTM paths? (Is this an extrapolation problem for very large (very small) spots?) At least for "extreme" extrapolations I found similar problems and "solved" them by setting negative variances to zero. Can I get access to the parameters of your test surface?

> But would that probably
> make the dupire formula useless to us?

People are using e.g. splines together with an optimization technique, as in "Reconstructing the Unknown Local Volatility Function", Thomas F. Coleman, Yuying Li, Arun Verma, http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.41.6202 to solve the instability. But a lot of code is needed to implement this ;( For the time being I think we should give Nando's approach a try and use the first and second derivatives taken from the interpolation method.

regards
Klaus
From: Dima <dimathematician@go...> - 2009-04-27 09:03:48

While I'm still waiting for feedback, I continue coding and have VannaVolga and Malz ready, testing included. Can I ask for a simple feature which I need in these implementations? I'd need smilesection.hpp to return the reference date. Is this possible? Thanks

2009/4/21 Dima <dimathematician@...>
> Nando, I think the modification should be ok for the blackcalculator. I've
> uploaded my current working version of the delta calculator, including a
> test suite (your recent change not incl.).
>
> You can get it here:
>
> longvega.com/DeltaEngine.zip
>
> Now as you see it's not really incorporated into blackcalculator, and let's
> see how we might be able to join the concepts. My reasons were the following:
>
> 1) First of all, we have 4 deltas to take care of and even more ATM quotes,
> which are rather FX specific, since we have 2 numeraires. So I have enums
> which basically have only FX-specific types; I don't know if that should go
> into a general blackcalculator.
>
> 2) I needed functions such as
>
>     Real deltaFromStrike(const Real &strike) const;
>     Real strikeFromDelta(const Real &delta) const;
>
> since they will later be called very often in some numerical procedures in
> the smile setup. Again, it's FX specific to quote vols against deltas, from
> which we can extract strikes. And again, the functions return different
> values for different deltas. It would be a bit incoherent from my point of
> view to return a strike different from the one given in the constructor, as
> would be the case for the blackcalc.
>
> Here the functions are generic; in a blackcalculator I might need to set up
> 8 new functions? The bigger problem for me was that the strike in blackcalc
> is in the payoff, and I need it to change very often. So I've created a
> parsimonious constructor which needs neither a strike nor a delta.
> If you look into strikeFromDelta, you'll see numerical procedures for
> premium-adjusted stuff, so I can't really make use of cached data anyway,
> and it doesn't have a lot to do with the standard Black formulas anymore.
>
> Btw: I'd appreciate a function which returns d1 and d2 in blackcalc, which
> I need for other functions I'm working on as well. If that were possible,
> I'd set up a blackcalc in my class and the whole class would basically be
> based on the blackcalc.
>
> I'm open to discussion. What's your opinion? Thanks
>
> 2009/4/20 Ferdinando Ametrano <nando@...>
>> Hi Dima
>>
>> > I plan to add some new FX machinery to QuantLib.
>>
>> if applicable please consider adding generic formulas to
>> ql/pricingengines/blackformula.hpp
>>
>> > In the BlackCalculator constructor we have
>> >     if (stdDev_>=QL_EPSILON) {
>> >         ...
>> >     } else {
>> >         if (forward>strike_) {
>> >             cum_d1_ = 1.0;
>> >             cum_d2_ = 1.0;
>> >         } else {
>> >             cum_d1_ = 0.0;
>> >             cum_d2_ = 0.0;
>> >         }
>> >     }
>> > I wonder if that's 100% right or if I've overlooked something. But if
>> > forward==strike, then we have log(f/K)=0, so, if vol is not zero, we
>> > should rather have
>> >     cum_d1 = N(0.5*stdDev_)
>> > which would be approximately 0.5 for very small vols. Opinions?
>>
>> yeah, you're right. It should be patched as below, isn't it?
>>
>>     if (stdDev_>=QL_EPSILON) {
>>         if (close(strike_, 0.0)) {
>>             cum_d1_ = 1.0;
>>             cum_d2_ = 1.0;
>>             n_d1_ = 0.0;
>>             n_d2_ = 0.0;
>>         } else {
>>             D1_ = std::log(forward/strike_)/stdDev_ + 0.5*stdDev_;
>>             D2_ = D1_ - stdDev_;
>>             CumulativeNormalDistribution f;
>>             cum_d1_ = f(D1_);
>>             cum_d2_ = f(D2_);
>>             n_d1_ = f.derivative(D1_);
>>             n_d2_ = f.derivative(D2_);
>>         }
>>     } else {
>>         if (close(forward, strike_)) {
>>             cum_d1_ = 0.5;
>>             cum_d2_ = 0.5;
>>             n_d1_ = M_SQRT_2 * M_1_SQRTPI;
>>             n_d2_ = M_SQRT_2 * M_1_SQRTPI;
>>         } else if (forward > strike_) {
>>             cum_d1_ = 1.0;
>>             cum_d2_ = 1.0;
>>             n_d1_ = 0.0;
>>             n_d2_ = 0.0;
>>         } else {
>>             cum_d1_ = 0.0;
>>             cum_d2_ = 0.0;
>>             n_d1_ = 0.0;
>>             n_d2_ = 0.0;
>>         }
>>     }
>>
>> ciao -- Nando
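The degenerate case in the patch can be checked directly: at the money, log(f/K) = 0, so d1 = 0.5*stdDev_, and as the vol goes to zero cum_d1 tends to 0.5 (not 0 or 1 as in the old branch) while the density n_d1 tends to 1/sqrt(2*pi), which is what M_SQRT_2 * M_1_SQRTPI evaluates to with QuantLib's math constants. A small Python sketch of the limit:

```python
import math

def cum_d1(forward, strike, std_dev):
    # N(d1) as in the non-degenerate branch of the patch
    d1 = math.log(forward / strike) / std_dev + 0.5 * std_dev
    return 0.5 * (1.0 + math.erf(d1 / math.sqrt(2.0)))

def n_d1(forward, strike, std_dev):
    # the standard normal density at d1
    d1 = math.log(forward / strike) / std_dev + 0.5 * std_dev
    return math.exp(-0.5 * d1 * d1) / math.sqrt(2.0 * math.pi)

# at the money d1 = 0.5*std_dev, so as std_dev -> 0:
# cum_d1 -> N(0) = 0.5 and n_d1 -> phi(0) = 1/sqrt(2*pi) ~ 0.3989
for s in (1e-2, 1e-4, 1e-8):
    print(round(cum_d1(100.0, 100.0, s), 6), round(n_d1(100.0, 100.0, s), 6))
```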
From: Klaus Spanderen <klaus@sp...> - 2009-04-26 20:01:03

Hi Michael,

> First of all I found some minor errors in your code which are:

yes, you are obviously right. I've changed the code in the SVN repository accordingly. Thanks for the hint!

> It happens at different strikes and moneyness under some conditions that
> the d2wdy2 blows up and becomes something like 4231.12

Which interpolation scheme do you use for the volatility surface? The standard interpolation is linear in the variance, which very often leads to problems with the second derivative. Therefore I've used bicubic spline interpolation:

    volTS->setInterpolation<Bicubic>();

Does this happen only for deep ITM or OTM paths? (Is this an extrapolation problem for very large (very small) spots?) At least for "extreme" extrapolations I found similar problems and "solved" them by setting negative variances to zero. Can I get access to the parameters of your test surface?

> For some reason I can't get rid of the feeling that we would be better off
> working completely without the second derivative, since it seems to be
> impossible to get it numerically under control.

No, IMO the second derivative is needed.

> But would that probably
> make the dupire formula useless to us?

People are using e.g. splines together with an optimization technique, as in "Reconstructing the Unknown Local Volatility Function", Thomas F. Coleman, Yuying Li, Arun Verma, http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.41.6202 to solve the instability. But a lot of code is needed to implement this ;( For the time being I think we should give Nando's approach a try and use the first and second derivatives taken from the interpolation method.

regards
Klaus
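For context: the d2wdy2 that keeps blowing up in this thread is the second log-strike derivative in the total-variance form of Dupire's formula (Gatheral's formulation), v_loc = (dw/dT) / [1 - (y/w)(dw/dy) + 1/4(-1/4 - 1/w + y^2/w^2)(dw/dy)^2 + 1/2(d2w/dy2)], where w(y, T) is total implied variance and y = ln(K/F). A finite-difference Python sketch of that formula (not QuantLib's LocalVolSurface code), checked against two cases with known answers:

```python
def local_variance(w, y, t, h=1e-4):
    # w(y, t): total implied variance, y = log-moneyness ln(K/F)
    # central finite differences for dw/dt, dw/dy and the d2w/dy2 term
    dwdt = (w(y, t + h) - w(y, t - h)) / (2 * h)
    dwdy = (w(y + h, t) - w(y - h, t)) / (2 * h)
    d2wdy2 = (w(y + h, t) - 2 * w(y, t) + w(y - h, t)) / (h * h)
    wv = w(y, t)
    denom = (1.0 - y / wv * dwdy
             + 0.25 * (-0.25 - 1.0 / wv + y * y / (wv * wv)) * dwdy ** 2
             + 0.5 * d2wdy2)
    return dwdt / denom

# flat surface: local variance must equal the constant implied variance
flat = lambda y, t: 0.04 * t
print(round(local_variance(flat, 0.3, 2.0), 6))      # 0.04

# purely time-dependent vol, w = a*t^2: local variance is dw/dt = 2*a*t
timedep = lambda y, t: 0.01 * t * t
print(round(local_variance(timedep, 0.0, 2.0), 6))   # 0.04
```

Because d2wdy2 enters the denominator, any kink or oscillation in the strike interpolation is amplified directly into the local variance, which is why the choice of interpolation scheme dominates this whole discussion.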
From: Dirk Eddelbuettel <edd@de...> - 2009-04-26 12:14:15

Hi,

Arnaud noticed that the quantlib-test-suite dies on the error below, seemingly from the code in test-suite/swaptionvolatilitymatrix.cpp that does:

    Date exerciseDate = swaption.exercise()->dates().front();
    if (exerciseDate!=vol->optionDates()[i])
        BOOST_FAIL("optionDateFromTenor mismatch for " << description << ":"
                   "\n   option tenor: " << atm.tenors.options[i] <<
                   "\nactual option date: " << exerciseDate <<
                   "\n  exp. option date: " << vol->optionDates()[i]);

This happened to him on amd64, I see the same on i386. Is that considered an actual bug, or merely an imperfect implementation in the date logic in swaption exercise handling?

Thanks, Dirk

On 25 April 2009 at 22:53, Arnaud Battistella wrote:
| Hi,
| This is probably a rather unimportant bug; however, quantlib-test-suite reports 1 failure. I do not know if
| this problem is old or not as it is the first time I run this command (I only use quantlib from R). Let me
| know if you need additional information.
| Thanks again for your amazing work with R, quantlib, gsl etc...
| Very best!
| A.
|
| Output:
|
| Running 390 test cases...
| swaptionvolatilitymatrix.cpp(214): fatal error in
| "SwaptionVolatilityMatrixTest::testSwaptionVolMatrixCoherence": optionDateFromTenor mismatch for floating
| reference date, floating market data:
|     option tenor: 1M
| actual option date: May 25th, 2009
|   exp. option date: May 27th, 2009
|
| Tests completed in 18 m 35 s
|
| *** 1 failure detected in test suite "Master Test Suite"
|
| System Information:
| Debian Release: squeeze/sid
|   APT prefers testing
|   APT policy: (990, 'testing'), (300, 'unstable')
| Architecture: amd64 (x86_64)
|
| Kernel: Linux 2.6.26-2-amd64 (SMP w/4 CPU cores)
| Locale: LANG=en_US.UTF-8, LC_CTYPE=en_US.UTF-8 (charmap=UTF-8)
| Shell: /bin/sh linked to /bin/bash
|
| Versions of packages libquantlib0-dev depends on:
| ii  libboost-test-dev    1.34.1-15+b1  components for writing and executi
| ii  libboost-test1.34.1  1.34.1-15+b1  components for writing and executi
| ii  libc6                2.9-4         GNU C Library: Shared libraries
| ii  libc6-dev            2.9-4         GNU C Library: Development Librari
| ii  libgcc1              1:4.3.3-3     GCC support library
| ii  libquantlib-0.9.7    0.9.7-1       Quantitative Finance Library -- de
| ii  libstdc++6           4.3.3-3       The GNU Standard C++ Library v3
|
| libquantlib0-dev recommends no packages.
| libquantlib0-dev suggests no packages.
|
| -- no debconf information

-- 
Three out of two people have difficulties with fractions.
From: Michael Heckl <Michael.Heckl@gm...> - 2009-04-25 15:17:01

Hallo Klaus,

I checked out your new version of the localvolsurface.cpp from the SVN and ran some extensive tests. First of all I found some minor errors in your code, which are:

1. (line 116):
       Real strikept = strike*dr*dq/(drpt*dqpt);
   This should be:
       Real strikept = strike*dr*dqpt/(drpt*dq);

2. (same in lines 130 and 131):
       Real strikept = strike*dr*dq/(drpt*dqpt);
       Real strikemt = strike*dr*dq/(drmt*dqmt);
   This should be:
       Real strikept = strike*dr*dqpt/(drpt*dq);
       Real strikemt = strike*dr*dqmt/(drmt*dq);

Just minor things, but think it over: you divide through the risk-free discount and multiply with the dividend discount, and if you do this from t to t+dt then this is what you get.

Now a few words about what I figured out by extensive testing. I ran MC tests with 100 thousand paths. Your bug fix brought some slight improvements, but the problem with the second derivative still remains: it happens very often that the program crashes and tells me "negative local vol^2 at ...". The Black vol surface I am using is very smooth, so this is not the reason for the problems. Rather, the problem is again the numerical instability when taking the second derivative of the implied variance surface with respect to the log-moneyness.

Let me introduce 2 more variables:

    Real z1, z2;

where z1 = wp - wm and z2 = wp - 2.0*w + wm. Then the first derivative of w with respect to log-moneyness is:

    dwdy = z1/(2.0*dy);

and the second is:

    d2wdy2 = z2/(dy*dy);

It happens at different strikes and moneyness, under some conditions, that d2wdy2 blows up and becomes something like -4231.12, a really big negative number. This obviously makes the denominator, which consists of den1+den2+den3, negative, and the whole program crashes. The numerator is always positive since we make sure that the implied variance is monotone increasing. So the only thing we have to take care of is that our denominator does not get negative.
I figured out (by extensive testing, and only empirically) that

    0.4 < den1+den2 < 1.3

So at least for the surfaces I was testing this was always given. So basically we have to make sure that den3 is not getting smaller than -0.4 in order for the program not to crash. But my tests showed that den3 is either in the green zone or it blows up dramatically, which gives me the suspicion that we encounter some numerical problems under certain conditions. The only way I managed to make the program run stable is by controlling z1 and z2 by setting:

    if ((std::abs(z1) < std::abs(dy)/1000 && z1!=0.0) || std::abs(z1) > 2*std::abs(dy))
        z1=0;

and

    if ((std::abs(z2) < dy*dy/1000 && z2!=0.0) || std::abs(z2) > dy*dy)
        z2=0;

So what I basically do is set the first derivative to zero when it gets so small that it doesn't really affect our result anymore anyway. Same with the second derivative. But the more critical part is setting the derivative to zero when it blows up: I just cut off all critical values. This (obviously) makes the program run stable. On the downside I can't really say for sure (at least until now) how big the impact of this manipulation is on the precision of our results. I figured that we basically never get problems with z1, i.e. with the first derivative, but we set the second derivative, i.e. z2, quite often to zero when it falls out of the good range.

For some reason I can't get rid of the feeling that we would be better off working completely without the second derivative, since it seems to be impossible to get it numerically under control. But would that make the Dupire formula useless to us? Do you have any ideas? Or can I help you with this issue any further?
Greetings,
Michael

PS: would it be possible to write a paper which examines under which conditions (what ranges for what variables, what interdependencies between the variables, and what restrictions on the shape of the Black vol surface) the formula is theoretically/mathematically valid (i.e. doesn't get negative), and then use this newly gained knowledge to make our code stable with the smallest possible impact on the precision of the result?

-----Original Message-----
From: Klaus Spanderen [mailto:klaus@...]
Sent: Friday, April 24, 2009 00:07
To: quantlib-dev@...
Cc: Michael Heckl; Ferdinando Ametrano
Subject: Re: [Quantlib-dev] LocalvolSurface.cpp

Hi Michael,

you wrote
> To be a bit more precise the problem lies in the second derivative of the
> black variance with respect to the strike. Even if I take surfaces without
> Smile/Skew, i.e. flat ones, I still get the problem there. This is because
> (wp - 2.0*w + wm) is not exactly zero but very very small. And this gets
> divided by 0.000000000001.

To me the root of the problem was the line

    dy = ((y!=0.0) ? y*0.000001 : 0.000001);

For very small y this code leads to an unrealistically small dy and to numerical problems during the calculation of the difference quotient. Therefore I've changed it into

    dy = ((std::fabs(y) > 0.001) ? y*0.0001 : 0.000001);

and at least for my tests the numerical problems with the difference quotient disappeared (please see the latest version of localvolsurface.cpp in the SVN repository). Could you test this fix using your test cases?

regards
Klaus
From: Klaus Spanderen <klaus@sp...> - 2009-04-23 22:06:56

Hi Michael,

you wrote
> To be a bit more precise the problem lies in the second derivative of the
> black variance with respect to the strike. Even if I take surfaces without
> Smile/Skew, i.e. flat ones, I still get the problem there. This is because
> (wp - 2.0*w + wm) is not exactly zero but very very small. And this gets
> divided by 0.000000000001.

To me the root of the problem was the line

    dy = ((y!=0.0) ? y*0.000001 : 0.000001);

For very small y this code leads to an unrealistically small dy and to numerical problems during the calculation of the difference quotient. Therefore I've changed it into

    dy = ((std::fabs(y) > 0.001) ? y*0.0001 : 0.000001);

and at least for my tests the numerical problems with the difference quotient disappeared (please see the latest version of localvolsurface.cpp in the SVN repository). Could you test this fix using your test cases?

regards
Klaus
From: Ferdinando Ametrano <qf@am...> - 2009-04-23 10:47:05

Hi Michael

> i recently did a lot of testing with the localvolSurface class (which
> contains Gatheral's Dupire formula) and i am not very happy with the results.
> Ok, the class is (according to the documentation) untested, so I guess I
> can't expect it to work properly.

Last week Klaus fixed the time derivative (now performed at constant moneyness instead of constant strike) and added tests.

> But I figured that numerical problems cause
> this class to return the error message "negative local vol ... the black vol
> surface is not smooth enough". [...]
> the problem lies in the second derivative of the
> black variance with respect to the strike.

you are right. I will fix it, probably moving the time/strike (numerical) derivatives into the BlackVolTermStructure base interface, allowing for overloading in derived classes which might provide exact derivatives.

BTW I have a related question. Black ATM variance must be increasing in time; I've always taken for granted that this is also true for every (not just ATM) constant-moneyness section of the Black surface. Is this a no-arbitrage result or shaky common sense?

ciao -- Nando
From: jt <jt@ca...> - 2009-04-23 09:49:41

JOB: Permanent Stats Programmer LOCATION: London, England, UK I'm working with an employer who is looking to recruit a permanent programmer with experience of developing either mathematical or numerical programs. The job will be based in London. You need to have experience of both scripting (R would be very applicable but not vital, although if you don't have experience of R you will have to learn it!) and also compiled languages (pref. C++). The more languages you have the better, but it will not be a show stopper if you have only a few (for example, good knowledge of one scripting language and C++ would be OK!). Experience of using MySQL or similar would be good! Please contact me off-list if you'd like to speak with me about this. My email address is james@... Thanks, JAMES >> to learn more about Camalyn please visit http://www.camalyn.org 
From: Michael Heckl <Michael.Heckl@gm...> - 2009-04-22 22:23:05

Hello everybody,

i recently did a lot of testing with the localvolSurface class (which contains Gatheral's Dupire formula) and i am not very happy with the results. Ok, the class is (according to the documentation) untested, so I guess I can't expect it to work properly. But I figured that numerical problems cause this class to return the error message "negative local vol ... the black vol surface is not smooth enough". This also happens for very smooth Black vol surfaces. The problem lies in this code:

    Real forwardValue = underlying * (dividendTS->discount(t, true)/
                                      riskFreeTS->discount(t, true));

    // strike derivatives
    Real strike, y, dy, strikep, strikem;
    Real w, wp, wm, dwdy, d2wdy2;
    strike = underlyingLevel;
    y = std::log(strike/forwardValue);
    dy = ((y!=0.0) ? y*0.000001 : 0.000001);
    strikep = strike*std::exp(dy);
    strikem = strike/std::exp(dy);
    w  = blackTS->blackVariance(t, strike,  true);
    wp = blackTS->blackVariance(t, strikep, true);
    wm = blackTS->blackVariance(t, strikem, true);
    dwdy   = (wp-wm)/(2.0*dy);
    d2wdy2 = (wp-2.0*w+wm)/(dy*dy);

    // time derivative
    Real dt, wpt, wmt, dwdt;
    if (t==0.0) {
        dt = 0.0001;
        wpt = blackTS->blackVariance(t+dt, strike, true);
        QL_ENSURE(wpt>=w,
                  "decreasing variance at strike " << strike
                  << " between time " << t << " and time " << t+dt);
        dwdt = (wpt-w)/dt;
    } else {
        dt = std::min<Time>(0.0001, t/2.0);
        wpt = blackTS->blackVariance(t+dt, strike, true);
        wmt = blackTS->blackVariance(t-dt, strike, true);
        QL_ENSURE(wpt>=w,
                  "decreasing variance at strike " << strike
                  << " between time " << t << " and time " << t+dt);
        QL_ENSURE(w>=wmt,
                  "decreasing variance at strike " << strike
                  << " between time " << t-dt << " and time " << t);
        dwdt = (wpt-wmt)/(2.0*dt);
    }

    if (dwdy==0.0 && d2wdy2==0.0) { // avoid /w where w might be 0.0
        return std::sqrt(dwdt);
    } else {
        Real den1 = 1.0 - y/w*dwdy;
        Real den2 = 0.25*(-0.25 - 1.0/w + y*y/w/w)*dwdy*dwdy;
        Real den3 = 0.5*d2wdy2;
        Real den = den1+den2+den3;
        Real result = dwdt / den;
        QL_ENSURE(result>=0.0,
                  "negative local vol^2 at strike " << strike
                  << " and time " << t
                  << "; the black vol surface is not smooth enough");
        return std::sqrt(result);
        // return std::sqrt(dwdt / (1.0 - y/w*dwdy +
        //     0.25*(-0.25 - 1.0/w + y*y/w/w)*dwdy*dwdy + 0.5*d2wdy2));
    }

To be a bit more precise, the problem lies in the second derivative of the black variance with respect to the strike. Even if I take surfaces without smile/skew, i.e. flat ones, I still get the problem there. This is because (wp - 2.0*w + wm) is not exactly zero but very very small, and this gets divided by 0.000000000001. So if it is not exactly zero but very very slightly negative, it can blow up the whole calculation and we end up with this error message.

To avoid these numerical problems I now decreased the accuracy of the derivatives a little bit and also introduced a check that sets this term to zero if it is very very small (smaller than 1.0e-12). Furthermore I changed the derivative with respect to time. This was not necessary, but I wanted to make it equal to the localvolcurve.cpp code, which works just fine. The modified code now looks like:

    Real forwardValue = underlying * (dividendTS->discount(t, true)/
                                      riskFreeTS->discount(t, true));

    // strike derivatives
    Real strike, y, dy, strikep, strikem;
    Real w, wp, wm, dwdy, d2wdy2;
    Real z1, z2;
    // strike is given
    strike = underlyingLevel;
    // log(strike/forwardValue)
    y = std::log(strike/forwardValue);
    std::cout << "y: " << y << std::endl;
    // we differentiate the Black variance with respect to y,
    // hence build a small discrete step
    dy = ((y!=0.0) ? y*0.0001 : 0.0001);
    std::cout << "dy: " << dy << std::endl;
    strikep = strike*std::exp(dy);
    strikem = strike/std::exp(dy);
    w  = blackTS->blackVariance(t, strike,  true);
    wp = blackTS->blackVariance(t, strikep, true);
    wm = blackTS->blackVariance(t, strikem, true);
    z1 = wp - wm;
    if (std::abs(z1) < 1.0e-12) z1 = 0;
    dwdy = z1/(2.0*dy);
    z2 = wp - 2.0*w + wm;
    if (std::abs(z2) < 1.0e-12) z2 = 0;
    d2wdy2 = z2/(dy*dy);

    // time derivative
    Real dt, wpt, wmt, dwdt;
    dt = (1.0/365.0);
    wpt = blackTS->blackVariance(t+dt, strike, true);
    QL_ENSURE(wpt>=w,
              "decreasing variance at strike " << strike
              << " between time " << t << " and time " << t+dt);
    dwdt = (wpt-w)/dt;

    if (dwdy==0.0 && d2wdy2==0.0) { // avoid /w where w might be 0.0
        return std::sqrt(dwdt);
    } else {
        Real den1 = 1.0 - y/w*dwdy;
        Real den2 = 0.25*(-0.25 - 1.0/w + y*y/w/w)*dwdy*dwdy;
        Real den3 = 0.5*d2wdy2;
        Real den = den1+den2+den3;
        Real result = dwdt / den;
        QL_ENSURE(result>=0.0,
                  "negative local vol^2 at strike " << strike
                  << " and time " << t
                  << "; the black vol surface is not smooth enough");
        return std::sqrt(result);
        // return std::sqrt(dwdt / (1.0 - y/w*dwdy +
        //     0.25*(-0.25 - 1.0/w + y*y/w/w)*dwdy*dwdy + 0.5*d2wdy2));
    }

So far, in all the testing I did, I received much better and more stable results. But since I am not an expert in computational and numerical methods regarding precision, can anybody who is a bit more experienced with C++ numerical issues give me some advice and check this out?

Greetings
Michael
From: Klaus Spanderen <klaus@sp...> - 2009-04-22 21:56:17

Hi

> In Hestonprocess.cpp there is a switch on Exact Variance Simulation where
> it's said that one uses Alan Lewis's trick to decorrelate the equity and
> variance process.

If the equity process and the variance process are not correlated in the Heston model, one can use an exact sampling method for the variance process, which is a "square root" process (see e.g. Glasserman, Monte Carlo Methods in Finance). This removes one main source of the bias of Monte Carlo sampling methods for the Heston model: with this discretization method the variance process can not take negative values.

To achieve zero correlation for a "normal" Heston model one should transform the equity process x(t)=ln(S(t)) using Ito's Lemma into y(t) = x(t) - \frac{\rho}{\sigma}\nu(t). This removes the correlation between y(t) and the variance, and one can use exact sampling for the variance process and Euler discretization for y(t). This scheme might be better than the other discretization schemes if \sigma is large.

regards
Klaus