From: Raimar S. <rai...@ui...> - 2012-05-03 11:06:22
Dear András,

On Wednesday 02 May 2012 19:14:50 you wrote:
> I think your explanation is correct, and this was more or less my
> explanation, too, although I didn't want to believe that adding up
> x/10. ten times will not result in x. (Do you think there is some
> floating point type in some library which doesn't have this problem?)

A quick search turned up gmplib, a GNU arbitrary-precision library. One could
use GMP rational numbers for the stepsizes (converted to and from floats when
interacting with gsl). Then ten times x/10 is guaranteed to be x. Whether this
is practical depends on the overhead. (A small illustration is appended at the
end of this message.)

> I experimented a bit with your bugfix, which seems perfect to me,
> although at the moment I don't quite understand where dtTry comes into
> play in the case when ha_=li_=0.

I agree that dtTry should not come into play in the case ha_==li_==0, and this
is essentially my fix. Where it did come into play: line 185 of
MCWF_Trajectory.tcc, the call to *TimeStepBookkeeper::update*, where the new
*DtTry* is always set to *stepToDo* in the case ha_==0. In revision #203
subsequent steps will then use this *DtTry* as *stepToDo*.

> However, a deeper problem in this case is that it is a waste to do 10
> steps between two outputs, but we could rather make one step always to
> the next output. I think it is rather in this way that this bug should
> be resolved. (Perhaps somehow already in the constructor of
> MCWF_Trajectory.)
>
> What do you think?

Yes, I agree, but the bug3482771 branch does exactly this:

ha_ != 0: let dpLimit (if li_) and gsl manage the stepsize.

ha_ == 0, li_ != 0: let dpLimit alone manage the stepsize, and fix the bug
that approaching an output time might decrease the stepsize. (*)

ha_ == li_ == 0: *stepToDo* will always evaluate to *Dt*, which means we jump
right to the next output time. This is ensured by the li_ ? ... : Dt check in
the assignment to *stepToDo*.

(*) Actually the stepsize can only decrease in the case ha_==0, li_!=0, which
might be inefficient. Maybe we should consider increasing the stepsize if
dpLimit is undershot significantly? Take for example a driven three-level atom
in V configuration with a strong transition and a metastable excited state
(optical shelving). In periods where many photons are scattered we need a
small timestep, but when the atom is "shelved" larger timesteps are
sufficient.

Best regards
Raimar

> > On Thu, Apr 26, 2012 at 11:55 PM, Raimar Sandner
> > <rai...@ui...> wrote:
> > Dear András,
> >
> > here is what I think is happening, let's take rev 203 and this example:
> >
> > PumpedLossyMode_C++QED --kappa 0 --eta 0 --minitFock 2 --dc 0 --Dt 0.1
> >
> > Dt is 0.1 and DtTry is 0.01 initially. This means
> > *MCWF_Trajectory<RANK>::coherentTimeDevelopment* is called 10 times, each
> > time *stepToDo* evaluates to 0.01. Due to rounding errors this does not
> > sum up to 0.1 exactly but to something that is slightly less than 0.1
> > (0.1 - 1e-17 in my case). Just before the next display
> > *coherentTimeDevelopment* is called once more with *stepToDo* evaluating
> > to 1e-17. In the next call to *getEvolved()->update*, DtTry is updated to
> > this very small value. As there is no mechanism to increase DtTry again,
> > the stepper will continue with this DtTry also after the next display,
> > and the trajectory seems to be stuck (where really it is just extremely
> > slow).
> >
> > Even if initially Dt is not an even multiple of DtTry the problem might
> > occur eventually.
> > DtTry can only become smaller whenever the trajectory approaches the
> > next display timestep, so sooner or later after some displays the
> > trajectory probably won't make any significant progress anymore.
> >
> > This also explains why this only happens if the system has no
> > Hamiltonian part. With a Hamiltonian the gsl integrator takes care of
> > DtTry.
> >
> > Please have a look at raimar/bug3482771, where I reverted the changes
> > introduced in rev 204 (except the check for li_ when calculating
> > *stepToDo*, which is independent of the rest). To fix the bug, I would
> > suggest leaving DtTry unchanged in the case ha_=0; there is no reason to
> > decrease DtTry only because we reach a display time. In my branch this
> > is achieved by calling *getEvolved()->update* with the argument
> > *getDtTry()* instead of *stepToDo*.
> >
> > Best regards
> > Raimar
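
A minimal toy model of the bookkeeping described in the quoted message (this
is not the actual MCWF_Trajectory.tcc code, only a hypothetical sketch whose
variable names mirror the discussion):

  #include <algorithm>
  #include <cstdio>

  int main()
  {
    const double Dt = 0.1;   // interval between two displays
    double dtTry    = 0.01;  // initial trial stepsize
    double t        = 0.;

    int steps = 0;
    while (t < Dt) {
      const double stepToDo = std::min(dtTry, Dt - t);  // last step is ~1e-17
      t += stepToDo;
      dtTry = stepToDo;  // rev 203 behaviour for ha_==0: DtTry inherits the
                         // truncated step; bug3482771 leaves dtTry unchanged
                         // here (update is called with getDtTry() instead)
      ++steps;
    }
    // On an IEEE-754 machine this typically prints 11 steps and a dtTry of
    // about 1e-17, so the next display interval would need roughly 1e16 such
    // steps: the trajectory appears stuck.
    std::printf("steps = %d, final dtTry = %g\n", steps, dtTry);
    return 0;
  }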
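
And a standalone illustration of the rounding issue itself, comparing plain
double accumulation with GMP rationals through the gmpxx C++ interface (again
just a sketch, not a proposal for the codebase); compile with -lgmpxx -lgmp:

  #include <cstdio>
  #include <gmpxx.h>

  int main()
  {
    const double x = 0.1;

    // Plain double: ten steps of x/10 do not sum back to x exactly.
    double sum = 0.;
    for (int i = 0; i < 10; ++i) sum += x/10.;
    std::printf("double  : x - sum = %g\n", x - sum);  // typically ~1e-17

    // GMP rationals: the same accumulation is exact.
    mpq_class qx(1, 10);
    qx.canonicalize();
    mpq_class qsum = 0;
    for (int i = 0; i < 10; ++i) qsum += qx/10;
    std::printf("rational: x - sum = %s\n",
                mpq_class(qx - qsum).get_str().c_str());  // exactly 0
    return 0;
  }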