From: Raimar S. <rai...@ui...> - 2012-04-26 09:36:32
Dear András,

the bugfix #3482771
https://sourceforge.net/tracker/?func=detail&aid=3482771&group_id=187775&atid=922653
introduces a problem with my script 2ParticlesRingCavityFull. I am comparing the branches raimar/private/stable (without the fix) and raimar/private/development (with the fix); the testcase is one of my production runs:

2ParticlesRingCavityFull --deltaCSin -10 --minitSin 0.1 --evol single \
  --kappaCos 0.5 --etaSin 2 --fin2 5 --deltaCCos -10 --precision 6 \
  --dc 0 --minitCos 0.1 --modeCavSin Sin --T 300 \
  --cutoffSin 12 --modeCavCos Cos --pinit1 "(0.5 0 0.2 0)" \
  --pinit2 "(-0.5 0 0.2 0)" --UnotCos -5 --kappaSin 0.5 \
  --fin1 5 --seed 1001 --UnotSin -5 --Dt 1 --cutoffCos 5

In the development branch, the trajectory almost comes to a complete halt around t=90, while stable continues to run, so I cannot use the version with the fix at the moment.

The timesteps in development are only slightly smaller, on the order of 10^-5; in stable the timesteps are mostly on the order of 10^-4 and only occasionally 10^-5.

As the simulation is quite slow, I will provide a .sv file for t=89 to make the problem easier to reproduce. Starting from there, maybe I can find out what is going on in the stepper.

Is it true that bug #3482771 only occurs without a Hamiltonian? Would it be possible to test for that and introduce the time fuzziness only for such systems?

Best regards
Raimar
From: Andras V. <and...@ui...> - 2012-04-26 11:22:16
Dear Raimar,

These trajectory halts are really annoying, especially because the bugfix in question was introduced precisely against such halts. And now you say that it is with the fix that you experience new halts?!

At the moment I don't have much time to look into this in depth, but I would be more than happy to recant this fuzziness of time, since I have felt uncomfortable with it throughout. May I ask you to look a bit more into this, and also to study the occurrence of halts before revision #204, and why they occur only without Hamiltonian evolution? (I have some explanation, but it is vague and would be difficult to present here.) Perhaps you can then come up with a better idea of how to solve this. I will nevertheless try to find time to experiment as well.

Thanks a lot and best regards,
András
From: Raimar S. <rai...@ui...> - 2012-04-26 21:56:00
Dear András,

here is what I think is happening. Let's take rev 203 and this example:

PumpedLossyMode_C++QED --kappa 0 --eta 0 --minitFock 2 --dc 0 --Dt 0.1

Dt is 0.1 and DtTry is 0.01 initially. This means *MCWF_Trajectory<RANK>::coherentTimeDevelopment* is called 10 times, each time with *stepToDo* evaluating to 0.01. Due to rounding errors this does not sum up to 0.1 exactly, but to something slightly less than 0.1 (0.1 - 1e-17 in my case). Just before the next display, *coherentTimeDevelopment* is therefore called once more, with *stepToDo* evaluating to 1e-17. In the next call to *getEvolved()->update*, DtTry is updated to this very small value. As there is no mechanism to increase DtTry again, the stepper continues with this DtTry also after the next display, and the trajectory seems to be stuck (where really it is just extremely slow).

Even if Dt is initially not an exact multiple of DtTry, the problem can occur eventually: DtTry can only become smaller whenever the trajectory approaches the next display timestep, so sooner or later, after some displays, the trajectory probably won't make any significant progress anymore.

This also explains why it only happens if the system has no Hamiltonian part: with a Hamiltonian, the gsl integrator takes care of DtTry.

Please have a look at raimar/bug3482771, where I reverted the changes introduced in rev 204 (except the check for li_ when calculating *stepToDo*, which is independent of the rest). To fix the bug, I would suggest leaving DtTry unchanged in the case ha_==0; there is no reason to decrease DtTry only because we reach a display time. In my branch this is achieved by calling *getEvolved()->update* with the argument *getDtTry()* instead of *stepToDo*.

Best regards
Raimar
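To make the rounding effect concrete, here is a minimal standalone C++ snippet (illustration only, not part of C++QED) reproducing the shortfall described above:

    #include <cstdio>

    int main()
    {
      const double dtTry = 0.01;          // plays the role of the initial DtTry
      double t = 0.0;
      for (int i = 0; i < 10; ++i) t += dtTry;   // ten coherent steps

      const double Dt = 0.1;              // the display interval
      std::printf("t      = %.17g\n", t);       // typically 0.099999999999999992
      std::printf("Dt - t = %.17g\n", Dt - t);  // a tiny remainder of order 1e-17
      // This remainder becomes the size of the extra step taken just before the
      // display, and hence the new DtTry.
    }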
From: Andras V. <and...@ui...> - 2012-05-02 17:15:19
Dear Raimar,

Sorry for the delay. I think your explanation is correct, and this was more or less my explanation too, although I didn't want to believe that adding up x/10. ten times will not result in x. (Do you think there is some floating-point type in some library which doesn't have this problem?)

I experimented a bit with your bugfix, which seems perfect to me, although at the moment I don't quite understand where dtTry comes into play in the case ha_==li_==0. However, a deeper problem in this case is that it is a waste to do 10 steps between two outputs; we could rather always make one step directly to the next output. I think it is rather in this way that this bug should be resolved. (Perhaps somehow already in the constructor of MCWF_Trajectory.)

What do you think?

Keep in touch,
András
From: Raimar S. <rai...@ui...> - 2012-05-03 11:06:22
Dear András,

On Wednesday 02 May 2012 19:14:50 you wrote:
> I think your explanation is correct, and this was more or less my
> explanation too, although I didn't want to believe that adding up
> x/10. ten times will not result in x. (Do you think there is some
> floating-point type in some library which doesn't have this problem?)

A quick search turned up gmplib, the GNU arbitrary-precision library. One could use gmp rational numbers for the stepsizes (converted to and from floats when interacting with gsl); then ten times x/10 is guaranteed to be exactly x. Whether this is practical depends on the overhead.

> I experimented a bit with your bugfix, which seems perfect to me,
> although at the moment I don't quite understand where dtTry comes into
> play in the case ha_==li_==0.

I agree that dtTry should not come into play in the case ha_==li_==0, and this is essentially my fix. Where it did come into play: in line 185 of MCWF_Trajectory.tcc, the call to *TimeStepBookkeeper::update*, the new *DtTry* is always set to *stepToDo* in the case ha_==0. In revision #203, subsequent steps then use this *DtTry* as *stepToDo*.

> However, a deeper problem in this case is that it is a waste to do 10
> steps between two outputs; we could rather always make one step
> directly to the next output. I think it is rather in this way that
> this bug should be resolved. (Perhaps somehow already in the
> constructor of MCWF_Trajectory.)
>
> What do you think?

Yes, I agree, but the bug3482771 branch does exactly this:

ha_ != 0: let dpLimit (if li_) and gsl manage the stepsize.
ha_ == 0, li_ != 0: let dpLimit alone manage the stepsize, and fix the bug that approaching an output time might decrease the stepsize. (*)
ha_ == li_ == 0: *stepToDo* always evaluates to *Dt*, which means we jump right to the next output time. This is ensured by the li_ ? ... : Dt check in the assignment to *stepToDo*.

(*) Actually the stepsize can only decrease in the case ha_==0, li_!=0, which might be inefficient. Maybe we should consider increasing the stepsize if dpLimit is undershot significantly? Take for example a driven three-level atom in V configuration with a strong transition and a metastable excited state (optical shelving). In periods where many photons are scattered we need a small timestep, but while the atom is "shelved" larger timesteps are sufficient.

Best regards
Raimar
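Regarding the gmplib idea above, a minimal sketch (illustration only, using GMP's C++ interface; not a proposal of actual C++QED code) of how exact rational stepsizes would avoid the shortfall, converting to double only at the gsl boundary:

    #include <gmpxx.h>      // GMP C++ interface; link with -lgmpxx -lgmp
    #include <iostream>

    int main()
    {
      const mpq_class dtTry(1, 100);   // exact rational 1/100
      mpq_class t(0);
      for (int i = 0; i < 10; ++i) t += dtTry;   // ten exact steps

      std::cout << (t == mpq_class(1, 10)) << '\n';  // prints 1: the sum is exactly 1/10
      std::cout << t.get_d() << '\n';                // conversion to double only at the gsl interface
    }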
From: Andras V. <and...@ui...> - 2012-05-03 14:51:57
Dear Raimar,

All right, I seem to understand everything. I have merged and pushed, so everyone can update now, and you can also remove your bugfix branch. Thanks for clarifying this.

> (*) Actually the stepsize can only decrease in the case ha_==0, li_!=0,
> which might be inefficient. Maybe we should consider increasing the
> stepsize if dpLimit is undershot significantly? Take for example a driven
> three-level atom in V configuration with a strong transition and a
> metastable excited state (optical shelving). In periods where many photons
> are scattered we need a small timestep, but while the atom is "shelved"
> larger timesteps are sufficient.

Yes, it's definitely a problem that the timestep can only decrease in this case. In the case ha_==0, we should completely delegate timestep management to the Liouvillean part, so that it can also increase the timestep.

What we need is this:
* the timestep can increase if both the Hamiltonian and the Liouvillean agree to this
* the timestep should decrease if either requires it
  (if either ha_ or li_ is 0, it always agrees to an increase and never requires a decrease)
* the timestep should be Dt if both ha_ and li_ are 0, in which case the dc!=0 case doesn't even make sense, really (so this should maybe be filtered out when constructing MCWF_Trajectory)

Best regards,
András
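For concreteness, a rough sketch of such a policy (illustration only; the function and parameter names are hypothetical, not actual C++QED code):

    #include <algorithm>
    #include <limits>

    // Each part reports the largest timestep it can tolerate; an absent part
    // tolerates anything, so it never forces a decrease and never blocks an
    // increase.
    double nextDtTry(bool hasHa, bool hasLi,
                     double haTolerableDt, double liTolerableDt,
                     double Dt)
    {
      if (!hasHa && !hasLi) return Dt;   // no Hamiltonian, no Liouvillean: step straight to the next output

      const double inf  = std::numeric_limits<double>::infinity();
      const double haDt = hasHa ? haTolerableDt : inf;
      const double liDt = hasLi ? liTolerableDt : inf;

      // decreases whenever either part requires it, increases only when both allow it
      return std::min({haDt, liDt, Dt});
    }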
From: Raimar S. <rai...@ui...> - 2012-05-04 10:15:11
Dear András,

thanks for merging. The bugfix branch is removed now.

On Thursday 03 May 2012 16:51:25 Andras Vukics wrote:
> Yes, it's definitely a problem that the timestep can only decrease in
> this case. In the case ha_==0, we should completely delegate timestep
> management to the Liouvillean part, so that it can also increase the
> timestep.
>
> What we need is this:
> * the timestep can increase if both the Hamiltonian and the Liouvillean
>   agree to this
> * the timestep should decrease if either requires it
>   (if either ha_ or li_ is 0, it always agrees to an increase and never
>   requires a decrease)
> * the timestep should be Dt if both ha_ and li_ are 0, in which case the
>   dc!=0 case doesn't even make sense, really (so this should maybe be
>   filtered out when constructing MCWF_Trajectory)

Yes. The second requirement is already implemented; the third is also implemented, but maybe not in the right way. I agree that this is better handled in the constructor, since the check in every timestep will always give the same result. Should we just set DtTry to Dt in the constructor if ha_==li_==0, and throw an exception if dc != 0?

The first requirement is partially implemented for the case ha_!=0. For the other case, maybe something like this?

if (!ha_ && dpOverDt*dtDid < undershootTolerance_*dpLimit_) {
  getEvolved()->setDtTry(dpLimit_/dpOverDt);
}

Best regards
Raimar
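For illustration, the two pieces could fit together roughly like this (a sketch in the style of the snippet above, not a concrete patch; dc_ is a hypothetical placeholder for the stored dc parameter, and the actual class layout may differ):

    // In the MCWF_Trajectory constructor, for the case ha_==li_==0:
    if (!ha_ && !li_) {
      if (dc_)   // needs <stdexcept>
        throw std::invalid_argument("dc!=0 makes no sense without Hamiltonian and Liouvillean");
      getEvolved()->setDtTry(Dt);   // always step straight to the next output time
    }

    // After a step in the case ha_==0, li_!=0: if dpLimit was undershot by more
    // than the tolerance, let DtTry grow again, so the Liouvillean alone manages
    // the stepsize in both directions.
    if (!ha_ && dpOverDt*dtDid < undershootTolerance_*dpLimit_)
      getEvolved()->setDtTry(dpLimit_/dpOverDt);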