From: Soeren D. S. <soe...@gm...> - 2013-07-14 22:12:35
|
Hi again,

I have a question concerning the Adams methods.

TL;DR: Is there a specific reason why we have the Adams methods in Qucs? Are there any examples where they perform better than all other methods? If not, is it safe to drop them?

Long version: If I understand it right, the main reason why Adams methods are popular is their implementation as a predictor-corrector pair. When applied to ordinary differential equations (ODEs), this provides the stability of the implicit Adams method (which is passable for non-stiff equations), but without any need for a Newton iteration, thanks to the prediction from the explicit method.

In the application of circuit simulation, however, both reasons are irrelevant. First, the problems are usually *not* non-stiff. Any kind of damping or damped oscillation usually means that we get a stiff equation. Secondly, we cannot apply the "predictor-corrector" method because we don't have an ODE but a differential-algebraic equation (DAE). In order to get the solution for one integration step, we *have to* solve a non-linear system. (The source code uses the terms "predictor" and "corrector", but that's technically incorrect, because the result of the predictor is used as a starting value for the Newton iteration, while the idea of the predictor-corrector pair is exactly to avoid this.)

I'm asking about this because the refactoring that I'm going to do will require me to reimplement all integration methods, more or less. And it's not sensible to reimplement a method that we don't need.

Sören |
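[Editor's note: for readers unfamiliar with the scheme described above, here is a minimal sketch of an Adams predictor-corrector (PECE) step on a non-stiff scalar test ODE. It is purely illustrative Python, not Qucs code; the function names are made up.]

```python
import math

# Predict with explicit Adams-Bashforth 2, then correct once with the
# implicit Adams-Moulton 2 (trapezoidal) formula evaluated at the predicted
# value.  For an ODE this avoids the Newton iteration; for a DAE it cannot.

def f(t, y):
    return -y  # non-stiff scalar test equation y' = -y

def pece_step(t, y_prev, y, h):
    """One P(EC)E step from time t to t + h."""
    # Predictor: explicit Adams-Bashforth 2
    y_pred = y + h * (1.5 * f(t, y) - 0.5 * f(t - h, y_prev))
    # Corrector: implicit trapezoidal rule, with the implicit unknown
    # replaced by the predicted value -- no nonlinear solve needed
    return y + 0.5 * h * (f(t + h, y_pred) + f(t, y))

h = 0.01
t, y_prev, y = h, 1.0, 1.0 - h   # bootstrap with one explicit Euler step
while t < 1.0 - 1e-12:
    y_prev, y = y, pece_step(t, y_prev, y, h)
    t += h

error = abs(y - math.exp(-t))    # compare with the exact solution e^(-t)
print(error)
```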
From: al d. <ad...@fr...> - 2013-07-16 15:46:22
|
On Sunday 14 July 2013, Soeren D. Schulze wrote:
> TL;DR: Is there a specific reason why we have the Adams
> methods in Qucs? Are there any examples where they perform
> better than all other methods? If not, is it safe to drop
> them?

Probably, it's there because he could and it was easy.

If you drop them, two years from now somebody will come along claiming how great they are and propose adding them.

So that brings up another reason for them to be there. It is clear to me that Qucs, and its component Qucsator, was designed to be primarily an educational tool. So the different methods are there so you can compare them and learn by observation which works best.

There are other places too where this strategy shows, for example having several matrix solvers to choose from.

For something to be good for education, first it must be easy to use, especially for simple tasks. Then it must be easy to understand how it works when you take the cover off. High performance is irrelevant and complexity is harmful.

Practically ... as far as I know, no mainstream circuit simulators use the Adams methods. Most don't even do Gear at higher than second order. For good reason, all successful industry circuit simulators use a combination of trapezoidal, Gear-2, and backward Euler. Some just give you a global choice, some local, some choose automatically.

In practice, I have found that usually, depending on what you want, either trap or backward Euler works best, and Gear-2 is almost always second choice. The catch is that when trap is best, backward Euler is a bad choice, and vice versa, so it is easy to cop out and use Gear-2, which seems to be equally mediocre everywhere.

Multistep methods have problems with variable step size and do not have the desired stability characteristics. Most authors of numerical analysis textbooks do not understand the stiffness issue in circuit simulation. The stiffness issue is with the strays. In a real circuit you can mostly ignore them, but those are the ones with the worst artifacts in trap. So, that's where to use BE, which errs in the direction of doing less.

Trap artifacts are usually a sign that the step control is not working as well as it should. Backward Euler works best if you do not consider truncation error at all in step control, and just accept that the numerical truncation error is high, and in the direction of damping out oscillations. |
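[Editor's note: the trap-versus-BE point above can be seen from the one-step amplification factors of the two methods on the scalar test equation y' = -k*y, with a step size far above the stray time constant 1/k. Illustrative numbers only, not from any simulator.]

```python
# y_{n+1} = a * y_n for y' = -k*y.  Trapezoidal barely damps and flips sign
# each step ("ringing"); backward Euler damps hard.

k, h = 1e6, 1e-3          # k*h = 1000 >> 1: a stray pole, step far above it

a_trap = (1 - k * h / 2) / (1 + k * h / 2)   # one-step gain of trapezoidal
a_be = 1 / (1 + k * h)                       # one-step gain of backward Euler

y_trap, y_be = 1.0, 1.0
for _ in range(10):       # ten steps of the pure decay y' = -k*y
    y_trap *= a_trap
    y_be *= a_be

print(a_trap, a_be)       # ~ -0.996 vs ~ 0.001
print(y_trap, y_be)       # trap still near full amplitude; BE essentially zero
```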
From: Richard C. <r.c...@ed...> - 2013-07-17 08:04:52
|
On 16/07/2013 16:46, al davis wrote:
> On Sunday 14 July 2013, Soeren D. Schulze wrote:
>> TL;DR: Is there a specific reason why we have the Adams
>> methods in Qucs? Are there any examples where they perform
>> better than all other methods? If not, is it safe to drop
>> them?
> Probably, it's there because he could and it was easy.
>
> If you drop them, two years from now somebody will come along
> claiming how great they are and propose adding them.
>
> So that brings up another reason for them to be there. It is
> clear to me that Qucs, and its component Qucsator, was designed
> to be primarily an educational tool. So the different methods
> are there so you can compare them and learn by observation which
> works best.
>
> There are other places too where this strategy shows, for
> example having several matrix solvers to choose from.
>
> For something to be good for education, first it must be easy to
> use, especially for simple tasks. Then it must be easy to
> understand how it works when you take the cover off. High
> performance is irrelevant and complexity is harmful.

Yes, this is the impression I also have; Qucs is at least 50% an educational tool. I like how the code matches up to the extensive documentation that's available, and you can see exactly how the algorithms are implemented. Are you (Soeren) willing to write equivalent documentation describing the algorithms in your modified solver?

This is another reason I have reservations about switching to using system libraries for some of these computations. You can no longer go look and see how they actually work and are implemented.

Soeren, could you implement your changes as a new fast transient solver component option, e.g. "FTR", retaining the old transient solver? This would have the best of both worlds maybe. You can maybe inherit from the existing classes such as nasolver etc. to avoid duplication?
Richard -- The University of Edinburgh is a charitable body, registered in Scotland, with registration number SC005336. |
From: Soeren D. S. <soe...@gm...> - 2013-07-17 15:02:31
|
On 17.07.2013 10:04, Richard Crozier wrote:
> On 16/07/2013 16:46, al davis wrote:
>> On Sunday 14 July 2013, Soeren D. Schulze wrote:
>>> TL;DR: Is there a specific reason why we have the Adams
>>> methods in Qucs? <snip>
>> <snip>
>> For something to be good for education, first it must be easy to
>> use, especially for simple tasks. Then it must be easy to
>> understand how it works when you take the cover off. High
>> performance is irrelevant and complexity is harmful.
>
> Yes, this is the impression I also have, Qucs is at least 50% an
> educational tool.

Oh, I didn't know that. Does this mean that things like sparse matrix representation and faster, more complicated methods aren't desired at all?

There are plenty of "academic" and "toy" simulators around, which are nice for educational purposes but not for much else. Qucs, on the other hand, is really useful for end-users. From a user's perspective, I'd like some state-of-the-art methods to be implemented. There aren't that many user-friendly free software circuit simulators around.

> I like how the code matches up to the extensive documentation
> that's available and you can see exactly how the algorithms
> are implemented, are you (Soeren) willing to write equivalent
> documentation describing the algorithms in your modified solver?

It sounds doable.

> This is another reason I have reservations about switching to using
> system libraries for some of these computations. You can no longer go
> look and see how they actually work and are implemented.

OK, I see. But on the other hand, C++ provides some really good abstraction mechanisms, so it should be possible to have one simple and a couple of advanced methods for everything.

I don't like the coding style of, e.g., Netlib, either. They're sacrificing readability for a few saved CPU cycles. We don't need to do that, but I wouldn't agree that "performance is irrelevant", either.

> Soeren, could you implement your changes as a new fast transient solver
> component option, e.g. "FTR", retaining the old transient solver? This
> would have the best of both worlds maybe. You can maybe inherit from the
> existing classes such as nasolver etc. to avoid duplication?

Well, the implementation of Newton's method is severely flawed, unfortunately, which causes Qucsator to crash at small step sizes. I don't think it's a good idea to keep it.

Newton's method is usually formulated as:

xnew = xold - J^(-1) * f(xold),

where J is the Jacobian of f. The Jacobian may be badly conditioned, especially for small step sizes. Now what Qucsator does is factor it out:

xnew = J^(-1) * (J * xold - f(xold)).

The two equations are mathematically equivalent, but the latter is much more inaccurate in computation, because xold gets multiplied by a badly conditioned matrix and its inverse -- which is totally unnecessary and really something that one should not do.

Because this mechanism is very hard-coded, it takes changes in *every* energy-storage component to fix it.

The technical manual conveys the intuition that "a capacitor is just a current source plus a conductance, locally". This may be a nice image, but numerically speaking, it's just a bad idea to implement it that way, and regarding the educational purpose, I think we shouldn't state bad examples.

Sören |
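[Editor's note: the conditioning point above can be demonstrated numerically. The sketch below compares the two algebraically equivalent Newton updates on a deliberately ill-conditioned 2x2 Jacobian, using a hand-rolled Cramer solve. It is a toy illustration, not Qucsator code; the matrix and residual values are made up.]

```python
def solve2(J, r):
    """Solve the 2x2 system J x = r by Cramer's rule (toy solver)."""
    det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
    return [(r[0] * J[1][1] - J[0][1] * r[1]) / det,
            (J[0][0] * r[1] - r[0] * J[1][0]) / det]

def matvec(J, x):
    return [J[0][0] * x[0] + J[0][1] * x[1],
            J[1][0] * x[0] + J[1][1] * x[1]]

eps = 1e-12
J = [[1.0, 1.0], [1.0, 1.0 + eps]]   # condition number ~ 4/eps
xold = [0.1, 0.2]
fval = [1e-6, 1e-6]                   # current residual f(xold)
# For this J and f the exact Newton correction is dx = [1e-6, 0], so:
x_exact = [0.1 - 1e-6, 0.2]

# Standard form: xnew = xold - J^{-1} f(xold)
dx = solve2(J, fval)
x_std = [xold[0] - dx[0], xold[1] - dx[1]]

# Factored form: xnew = J^{-1} (J xold - f(xold))
b = matvec(J, xold)                   # xold multiplied by ill-conditioned J
x_fac = solve2(J, [b[0] - fval[0], b[1] - fval[1]])

err_std = max(abs(x_std[0] - x_exact[0]), abs(x_std[1] - x_exact[1]))
err_fac = max(abs(x_fac[0] - x_exact[0]), abs(x_fac[1] - x_exact[1]))
print(err_std, err_fac)   # the factored form is typically far less accurate
```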
From: Richard C. <r.c...@ed...> - 2013-07-17 15:26:57
|
> Oh, I didn't know that. Does this mean that things like sparse matrix
> representation and faster, more complicated methods aren't desired at
> all?

Well, no, I don't think so, and I think other people involved in the project will probably also have their own opinion. I think improvements are desirable.

> There are plenty of "academic" and "toy" simulators around, which are
> nice for educational purposes but not for much else. Qucs, on the
> other hand, is really useful for end-users. From a user's
> perspective, I'd like some state-of-the-art methods to be
> implemented. There aren't that many user-friendly free software
> circuit simulators around.

This is certainly a valid point.

>> I like how the code matches up to the extensive documentation
>> that's available and you can see exactly how the algorithms
>> are implemented, are you (Soeren) willing to write equivalent
>> documentation describing the algorithms in your modified solver?
>
> It sounds doable.

It would be great if we could keep the docs up to date.

> Well, the implementation of Newton's method is severely flawed,
> unfortunately, which causes Qucsator to crash upon small step sizes.
> I don't think it's a good idea to keep it.
>
> <snip>
>
> Because this mechanism is very hard-coded, it takes changes in *every*
> energy-storage component to fix it.
>
> The technical manual conveys the intuition that "a capacitor is just a
> current source plus a conductance, locally". This may be a nice
> image, but numerically speaking, it's just a bad idea to implement it
> that way, and regarding the educational purpose, I think we shouldn't
> state bad examples.
>
> Sören

Well, you do make a good point about bad examples, and a convincing argument for modifying the solver. We should really have some standard examples of circuits to compare before and after your modifications though, and for testing generally. This very deep surgery will probably require a lot of testing.
Richard |
From: roucaries b. <rou...@gm...> - 2013-07-17 19:42:54
|
On Wed, Jul 17, 2013 at 5:02 PM, Soeren D. Schulze <soe...@gm...> wrote:
> On 17.07.2013 10:04, Richard Crozier wrote:
>> On 16/07/2013 16:46, al davis wrote:
>>> <snip>
>>
>> Yes, this is the impression I also have, Qucs is at least 50% an
>> educational tool.
>
> Oh, I didn't know that. Does this mean that things like sparse matrix
> representation and faster, more complicated methods aren't desired at all?

No, they are desired; that's why, under the branch roucarb_qucscorecleanup20130717, I use libeigen.

> There are plenty of "academic" and "toy" simulators around, which are
> nice for educational purposes but not for much else. Qucs, on the other
> hand, is really useful for end-users. From a user's perspective, I'd
> like some state-of-the-art methods to be implemented. There aren't that
> many user-friendly free software circuit simulators around.

I do my research under Qucs. So it is not only an educational tool.

>> I like how the code matches up to the extensive documentation
>> that's available and you can see exactly how the algorithms
>> are implemented, are you (Soeren) willing to write equivalent
>> documentation describing the algorithms in your modified solver?
>
> It sounds doable.
>
>> This is another reason I have reservations about switching to using
>> system libraries for some of these computations. You can no longer go
>> look and see how they actually work and are implemented.
>
> OK, I see. But on the other hand, C++ provides some really good
> abstraction mechanisms, so it should be possible to have one simple and
> a couple of advanced methods for everything.

Libeigen will allow us to use the matrices transparently from a C++ point of view, and we could call efficient Fortran libraries if we have the need.

> I don't like the coding style of, e.g., Netlib, either. They're
> sacrificing readability for a few saved CPU cycles. We don't need to do
> that, but I wouldn't agree that "performance is irrelevant", either.
>
>> Soeren, could you implement your changes as a new fast transient solver
>> component option, e.g. "FTR", retaining the old transient solver? This
>> would have the best of both worlds maybe. You can maybe inherit from the
>> existing classes such as nasolver etc. to avoid duplication?
>
> Well, the implementation of Newton's method is severely flawed,
> unfortunately, which causes Qucsator to crash upon small step sizes. I
> don't think it's a good idea to keep it.

I think so too.

> Newton's method is usually formulated as:
>
> xnew = xold - J^(-1) * f(xold),
>
> where J is the Jacobian of f. The Jacobian may be badly conditioned,
> especially for small step sizes. Now what Qucsator does is factoring it
> out:
>
> xnew = J^(-1) * (J * xold - f(xold)).
>
> The two equations are mathematically equivalent, but the latter is much
> more inaccurate in computation because xold gets multiplied by a badly
> conditioned matrix and its inverse -- which is totally unnecessary and
> really something that one should not do.

Yes, and it is better to use the equivalent of backslash in MATLAB: instead of multiplying by the inverse, solve the equivalent linear system.

> Because this mechanism is very hard-coded, it takes changes in *every*
> energy-storage component to fix it.
>
> The technical manual conveys the intuition that "a capacitor is just a
> current source plus a conductance, locally". This may be a nice image,
> but numerically speaking, it's just a bad idea to implement it that way,
> and regarding the educational purpose, I think we shouldn't state bad
> examples.

It is a classical method in electrical engineering to use this (and physically it is indistinguishable from an external point of view). What, for you, is the good image?

> Sören |
From: Soeren D. S. <soe...@gm...> - 2013-07-17 21:56:02
|
On 17.07.2013 21:42, roucaries bastien wrote:
>> Newton's method is usually formulated as:
>>
>> xnew = xold - J^(-1) * f(xold),
>>
>> where J is the Jacobian of f. The Jacobian may be badly conditioned,
>> especially for small step sizes. Now what Qucsator does is factoring it
>> out:
>>
>> xnew = J^(-1) * (J * xold - f(xold)).
>>
>> The two equations are mathematically equivalent, but the latter is much
>> more inaccurate in computation because xold gets multiplied by a badly
>> conditioned matrix and its inverse -- which is totally unnecessary and
>> really something that one should not do.
>
> Yes, and it is better to use the equivalent of backslash in MATLAB:
> instead of multiplying by the inverse, solve the equivalent linear system.

That's already the way it's done. In numerical mathematics, multiplying by the inverse is just shorthand for "solve the linear system". There is a piece of wisdom which says that whenever you think you need to compute the inverse explicitly, you're probably doing something wrong.

>> Because this mechanism is very hard-coded, it takes changes in *every*
>> energy-storage component to fix it.
>>
>> The technical manual conveys the intuition that "a capacitor is just a
>> current source plus a conductance, locally". This may be a nice image,
>> but numerically speaking, it's just a bad idea to implement it that way,
>> and regarding the educational purpose, I think we shouldn't state bad
>> examples.
>
> It is a classical method in electrical engineering to use this (and
> physically it is indistinguishable from an external point of view). What,
> for you, is the good image?

Replacing the capacitor by a conductance and a current source is one way to imagine what is done in numerical integration. Myself, I prefer thinking in terms of equations rather than in terms of equivalent circuit diagrams, but it's just a matter of taste. Mathematically speaking, both formulations are equivalent (at least in BDF -- I'm not sure about Adams methods).

On a computer with finite precision, however, it's not advisable to follow the electrical intuition too naively, because replacing a capacitor by DC components and then doing a DC simulation in each step equals "factoring out the Jacobian", which I mentioned above.

I'm afraid I'm not that familiar with the code of other "real-world" simulators. Do you know if they use the same approach?

Sören |
From: Soeren D. S. <soe...@gm...> - 2013-07-18 16:46:14
|
On 18.07.2013 11:30, roucaries bastien wrote:
>> On a computer with finite precision, however, it's not advisable to follow
>> the electrical intuition too naively, because replacing a capacitor by DC
>> components and then doing a DC simulation in each step equals "factoring out
>> the Jacobian", which I mentioned above.
>>
>> I'm afraid I'm not that familiar with the code of other "real-world"
>> simulators. Do you know if they use the same approach?
>
> It is called the companion model, in Norton form. It is usually used as
> Thevenin.

Thanks. I checked gnucap, which does the same thing, and if I can trust some random web articles, Spice does it, too. I haven't used Spice that much, but convergence seems to be a bit better there, which I now can't explain any more.

This is a bit surprising to me. For my Bachelor's thesis, I had primarily looked at mathematical literature on this topic, and they all write down the equation globally and then apply some standard solver, avoiding the condition problem.

Apparently, electronic engineers and mathematicians don't talk to each other enough...

Sören |
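[Editor's note: the companion model discussed above can be sketched in a few lines. With backward Euler over a step h, a capacitor C becomes a Norton pair: a conductance Geq = C/h in parallel with a current source Ieq = Geq * v_old. On a scalar RC discharge, the per-step "DC solve" of the companion circuit reproduces the direct backward-Euler update. Illustrative sketch, not Qucs or gnucap code; component values are made up.]

```python
C, R, h = 1e-6, 1e3, 1e-5    # capacitor, resistor, time step
v = 5.0                      # initial capacitor voltage

def be_direct(v_old):
    # Backward Euler applied to C dv/dt = -v/R:
    #   v_new = v_old / (1 + h/(R*C))
    return v_old / (1 + h / (R * C))

def be_companion(v_old):
    # Norton companion model: nodal equation (Geq + 1/R) * v_new = Ieq
    Geq = C / h
    Ieq = Geq * v_old
    return Ieq / (Geq + 1 / R)

v1 = be_direct(v)
v2 = be_companion(v)
print(v1, v2)   # identical up to rounding
```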
From: roucaries b. <rou...@gm...> - 2013-07-19 11:19:20
|
On 18 Jul 2013 18:46, "Soeren D. Schulze" <soe...@gm...> wrote:
> On 18.07.2013 11:30, roucaries bastien wrote:
>>> On a computer with finite precision, however, it's not advisable to follow
>>> the electrical intuition too naively, because replacing a capacitor by DC
>>> components and then doing a DC simulation in each step equals "factoring out
>>> the Jacobian", which I mentioned above.
>>>
>>> I'm afraid I'm not that familiar with the code of other "real-world"
>>> simulators. Do you know if they use the same approach?
>>
>> It is called the companion model, in Norton form. It is usually used as
>> Thevenin.
>
> Thanks. I checked gnucap, which does the same thing, and if I can trust some random web articles, Spice does it, too. I haven't used Spice that much, but convergence seems to be a bit better there, which I now can't explain any more.
>
> This is a bit surprising to me. For my Bachelor's thesis, I had primarily looked at mathematical literature on this topic, and they all write down the equation globally and then apply some standard solver, avoiding the condition problem.
>
> Apparently, electronic engineers and mathematicians don't talk to each other enough...

They talk, but only a few of them. Notice that the theory behind differential equations is hard. And notice that simulation is often needed in nasty cases: resonance is a typical one (in mathematical language, the source term is an eigenvalue/eigenvector). The companion model helps in this case by being equivalent to a first-order perturbative method (with a physicist's hat on, that is what it looks like). Other cases are resolved by hand by asymptotic methods.

> Sören |
From: al d. <ad...@fr...> - 2013-07-21 02:14:52
|
On Thursday 18 July 2013, Soeren D. Schulze wrote:
> Thanks. I checked gnucap, which does the same thing, and if
> I can trust some random web articles, Spice does it,
> too. I haven't used Spice that much, but convergence seems
> to be a bit better there, which I now can't explain any
> more.

Yes, that's the way they do it.

Can you show me an example of a real working circuit that fails to converge in gnucap? I know that some do not converge. I want examples that I can study.

As to Spice ..... often the models are the same, and in this case I expect convergence to be the same. Sometimes there are differences such as time stepping that result in convergence on one, and non-convergence on the other. Convergence checking is not as strict in spice, so I have seen a few cases where gnucap reports a convergence failure and spice does not, but gives incorrect results.

Which would you rather have .... a simulator that correctly tells you when there is a problem? .. or a simulator that glosses over it, giving you an incorrect result and the feeling that all is well? What behavior is correct for a circuit that does not have a stable DC operating point?

> This is a bit surprising to me. For my Bachelor's thesis, I
> had primarily looked at mathematical literature on this
> topic, and they all write down the equation globally and
> then apply some standard solver, avoiding the condition
> problem.

That works great for the simplest of simple problems. For the complex issues in circuit simulation there are other issues, leading to a need for a more heuristic approach. What you are talking about might have been state of the art about 1970, before Spice.

As a challenge to you .... Here's a really simple circuit, with some commands to analyze it. It should run in any simulator that supports spice syntax. I am curious what other simulators say, and how your algorithm would handle it. Gnucap gives a correct result. NGspice and LTspice do not.

The "*>" lines are gnucap post-processing commands. It calculates the period, rise time, and fall time.

* switch as negative resistance oscillator
SW1 1 0 1 0 SWITCH1
C1 1 0 1n
I1 0 1 100u
.MODEL SWITCH1 SW VT=2.5 VH=2.475 RON=1 ROFF=10MEG
.print tran V(1)
*>.print tran + r(SW1) iter(0) timef(sw1) control(0)
*>.store tran v(1)
.option method=trap numdgt=9
.tran 100e-6 200e-6 uic trace all
*>.measure t2=cross("v(1)" cross=2 fall last)
*>.measure t1=cross("v(1)" cross=2 fall last before=t2)
*>.measure t0=cross("v(1)" cross=2 fall last before=t1)
*>.measure tmin0=min("v(1)" arg last after=t0 before=t1)
*>.measure tmin1=min("v(1)" arg last after=t1 before=t2)
*>.measure tmax1=max("v(1)" arg last before=t1 after=t0)
*>.param falltime=tmin1-tmax1
*>.param risetime=tmax1-tmin0
*>.param dt=t2-t1
*>.eval falltime
*>.eval risetime
*>.eval dt
.END

> Apparently, electronic engineers and mathematicians don't
> talk to each other enough...

Yes .. the mathematicians could learn a lot from the engineers if they would talk more. |
From: Soeren D. S. <soe...@gm...> - 2013-07-21 12:03:32
|
On 21.07.2013 04:14, al davis wrote:
> On Thursday 18 July 2013, Soeren D. Schulze wrote:
>> Thanks. I checked gnucap, which does the same thing, and if
>> I can trust some random web articles, Spice does it,
>> too. I haven't used Spice that much, but convergence seems
>> to be a bit better there, which I now can't explain any
>> more.
>
> Yes, that's the way they do it.
>
> Can you show me an example of a real working circuit that fails
> to converge in gnucap? I know that some do not converge. I
> want examples that I can study.

Sorry, I meant to say that convergence in Spice is better than in Qucs. In gnucap, it's even better.

A set of "benchmark examples" would be actually nice. I know a test set created by mathematicians (https://pitagora.dm.uniba.it/~testset/report/testset.pdf) which has a few electronic examples, but they solve the equations globally using standard integrators.

For an academic example, take a differentiator:

* Double differentiator
.generator freq=10k phase=90
V1 1 0 generator(1)
C1 1 0 10u IC=1
H2 2 0 V1 1
C2 2 3 10u IC=0
V2 3 0 0
H3 4 0 V2 1 IC=0.31831
.print tran V(1) V(2) V(3) V(4)
.option reltol=0.001 method=trap short=10u
.tran 3u 1m uic
.END

It fails once you lower "short". Mathematically speaking, the "short" option makes a lot of sense, because it lowers the index of the underlying differential-algebraic equation.

To make things more difficult, add a third differentiator.

If Qucs gets a gnucap back-end for transient analysis anyway, maybe it's better to implement improved algorithms in gnucap first? Are the goals of gnucap and Qucs compatible? Maybe it's even possible to merge them into one.

>> This is a bit surprising to me. For my Bachelor's thesis, I
>> had primarily looked at mathematical literature on this
>> topic, and they all write down the equation globally and
>> then apply some standard solver, avoiding the condition
>> problem.
>
> That works great for the simplest of simple problems. For the
> complex issues in circuit simulation there are other issues,
> leading to a need for a more heuristic approach. What you are
> talking about might have been state of the art about 1970,
> before Spice.

Again, both approaches are mathematically equivalent, but the approach that Spice (and gnucap and Qucs) uses is numerically at a disadvantage. Things like that haven't really been researched until the 80's and early 90's, so I doubt that Spice is state-of-the-art concerning the numerical integration algorithms.

Applying "gmin" and "short" to every node and every voltage source is a very effective measure, because it transforms the differential-algebraic equation into an ordinary differential equation. [Unless there is an inductor in the circuit. Then you need to add "short" as a series resistance to the inductor. But even without that, the worst thing you can get is an index-1 DAE, which is already much better than a higher-index DAE.]

But of course "gmin" and "short" are modifications to the circuit. I think it would be possible to reduce the need for them by a better implementation of the algorithm.

> As a challenge to you .... Here's a really simple circuit, with
> some commands to analyze it. It should run in any simulator
> that supports spice syntax. I am curious what other simulators
> say, and how your algorithm would handle it. Gnucap gives a
> correct result. NGspice and LTspice do not.
>
> The "*>" lines are gnucap post-processing commands. It
> calculates the period, rise time, and fall time.
>
> <snip netlist>

Works for me in NGSpice, if I lower RELTOL. But the initial DC fails everywhere.

The circuit doesn't seem overly difficult to me, but gnucap has better stepsize control, apparently.

Sören |
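[Editor's note: the "gmin" idea mentioned in this message can be illustrated with a toy nodal matrix: a tiny conductance stamped from every node to ground makes an otherwise singular DC matrix nonsingular. Hypothetical two-node example (a subcircuit floating with respect to ground), not actual Qucs or SPICE code.]

```python
R, gmin = 1e3, 1e-12
g = 1.0 / R

# Nodal matrix of two nodes joined only by R -- the pair floats with
# respect to ground, so the matrix is singular and a DC solve fails.
G = [[g, -g], [-g, g]]
det = G[0][0] * G[1][1] - G[0][1] * G[1][0]
print(det)   # 0.0

# Stamp gmin from each node to ground: adds gmin to each diagonal entry.
Gfix = [[g + gmin, -g], [-g, g + gmin]]
det_fix = Gfix[0][0] * Gfix[1][1] - Gfix[0][1] * Gfix[1][0]
print(det_fix)   # ~2e-15: tiny but nonzero, so the solve succeeds
```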
From: roucaries b. <rou...@gm...> - 2013-07-21 12:26:13
|
On Sun, Jul 21, 2013 at 2:03 PM, Soeren D. Schulze <soe...@gm...> wrote: > Am 21.07.2013 04:14, schrieb al davis: >> On Thursday 18 July 2013, Soeren D. Schulze wrote: >>> Thanks. I checked gnucap, which does the same thing, and if >>> I can trust some random web articles, Spice does it, >>> too. I haven't used Spice that much, but convergence seems >>> to be a bit better there, which I now can't explain any >>> more. >> >> Yes, that's the way they do it. >> >> Can you show me an example of a real working circuit that fails >> to converge in gnucap? I know that some do not converge. I >> want examples that I can study. > > Sorry, I meant to say that convergence in Spice is better than in Qucs. > In gnucap, it's even better. > > A set of "benchmark examples" would be actually nice. I know a test set > created by mathematicians > (https://pitagora.dm.uniba.it/~testset/report/testset.pdf) which has a > few electronic examples, but they solve the equations globally using > standard integrators. Yes it is really nice, we could now add a test suite to qucs. > For an academic example, take a differentiator: > > * Double differentiatior > .generator freq=10k phase=90 > V1 1 0 generator(1) > C1 1 0 10u IC=1 > H2 2 0 V1 1 > C2 2 3 10u IC=0 > V2 3 0 0 > H3 4 0 V2 1 IC=0.31831 > .print tran V(1) V(2) V(3) V(4) > .option reltol=0.001 method=trap short=10u > .tran 3u 1m uic > .END > > It fails once you lower "short". Mathematically speaking, the "short" > option makes a lot of sense because it lowers the index of the > underlying differential-algebraic equation. > > To make things more difficult, add a third differentiator. > > If Qucs gets a gnucap back-end for transient analysis anyway, maybe it's > better to implement improved algorithms in gnucap first? > > Are the goals of gnucap and Qucs compatible? Maybe it's even possible > to merge them into one. > >>> This is a bit surprising to me. 
For my Bachelor's thesis, I >>> had primarily looked at mathematical literature on this >>> topic, and they all write down the equation globally and >>> then apply some standard solver, avoiding the condition >>> problem. >> >> That works great for the simplest of simple problems. For the >> complex issues in circuit simulation there are other issues, >> leading to a need for a more heuristic approach. What you are >> talking about might have been state of the art about 1970, >> before Spice. > > Again, both approaches are mathematically equivalent, but the approach > that Spice (and gnucap and Qucs) uses is numerically at a disadvantage. > Things like that haven't really been researched until the 80's and > early 90's, so I doubt that Spice is state-of-the-art concerning the > numerical integration algorithms. Yes, I know there is little research on it. The problem, from a mathematician's point of view, is the lack of physical sense. Sometimes the solutions given by the math are not causal or are not good from a physical point of view. The problem lies mainly in use: as electrical engineers, we forget to add constraints on the solutions. Another potential pitfall is pathological problems: RS flip-flops or dual-state logic based on tunnel diodes are inherently hard to get a DC solution for. > Applying "gmin" and "short" to every node and every voltage source is a > very effective measure because it transforms the differential-algebraic > equation into an ordinary differential equation. [Unless there is an > inductor in the circuit. Then you need to add "short" as a series > resistance to the inductor. But even without that, the worst thing you > can get is an index-1 DAE, which is already much better than a > higher-index DAE.] > > But of course "gmin" and "short" are modifications to the circuit. I > think it would be possible to reduce the need for them by a better > implementation of the algorithm. Gmin and gshort often replace the missing step not given by the electrical engineer.
In the case of biasing a tunnel diode, it makes sense from a physical point of view: your sources are not perfectly constant but a Heaviside step. > >> As a challenge to you .... Here's a really simple circuit, with >> some commands to analyze it. It should run in any simulator >> that supports spice syntax. I am curious what other simulators >> say, and how your algorithm would handle it. Gnucap gives a >> correct result. NGspice and LTspice do not. >> >> The "*>" lines are gnucap post-processing commands. It >> calculates the period, rise time, and fall time. >> >> * switch as negative resistance oscillator >> SW1 1 0 1 0 SWITCH1 >> C1 1 0 1n >> I1 0 1 100u >> .MODEL SWITCH1 SW VT=2.5 VH=2.475 RON=1 ROFF=10MEG >> .print tran V(1) >> *>.print tran + r(SW1) iter(0) timef(sw1) control(0) >> *>.store tran v(1) >> .option method=trap numdgt=9 >> .tran 100e-6 200e-6 uic trace all >> *>.measure t2=cross("v(1)" cross=2 fall last) >> *>.measure t1=cross("v(1)" cross=2 fall last before=t2) >> *>.measure t0=cross("v(1)" cross=2 fall last before=t1) >> *>.measure tmin0=min("v(1)" arg last after=t0 before=t1) >> *>.measure tmin1=min("v(1)" arg last after=t1 before=t2) >> *>.measure tmax1=max("v(1)" arg last before=t1 after=t0) >> *>.param falltime=tmin1-tmax1 >> *>.param risetime=tmax1-tmin0 >> *>.param dt=t2-t1 >> *>.eval falltime >> *>.eval risetime >> *>.eval dt >> .END > > Works for me in NGSpice, if I lower RELTOL. But the initial DC fails > everywhere. > > The circuit doesn't seem overly difficult to me, but gnucap has better > stepsize control, apparently. Bastien > > > Sören
> _______________________________________________ > Qucs-devel mailing list > Quc...@li... > https://lists.sourceforge.net/lists/listinfo/qucs-devel |
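The index-lowering effect of "short" discussed above can be seen on the smallest possible example: an ideal voltage source driving a capacitor directly. The sketch below is purely illustrative (made-up values; it is not Qucs or gnucap code) — a singular mass matrix in the MNA system marks a DAE, and the series resistance turns it into an ordinary ODE:

```python
import numpy as np

# Ideal source Vs driving capacitor C directly, unknowns x = [v, i_src]:
#   E x' + A x = b, and no derivative appears in the source equation.
C = 1e-6
E_dae = np.array([[C,   0.0],
                  [0.0, 0.0]])          # singular mass matrix -> DAE

# With a small series resistance r ("short"), the current unknown
# disappears and the node equation becomes C v' = (Vs - v)/r -> an ODE.
E_ode = np.array([[C]])                 # full-rank mass matrix

print(np.linalg.matrix_rank(E_dae))     # rank 1 of 2: singular
print(np.linalg.matrix_rank(E_ode))     # rank 1 of 1: regular
```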
From: Soeren D. S. <soe...@gm...> - 2013-07-21 13:21:24
|
Am 21.07.2013 14:26, schrieb roucaries bastien: >> But of course "gmin" and "short" are modifications to the circuit. I >> think it would be possible to reduce the need for them by a better >> implementation of the algorithm. > > Gmin and gshort often replace the missing step not given by > the electrical engineer. In the case of biasing a tunnel diode, it > makes sense from a physical point of view: your sources are not > perfectly constant but a Heaviside step. Make sure not to confuse gnucap's "gmin" with Spice's "gmin". The former is similar to Spice's "rshunt". Is it always harmless to add a resistance of 1e12 Ohm to ground in each node? In most cases, it probably is, but users of circuit simulation programs should be educated about it, so there won't be any surprises. Adding a series resistance to a voltage source seems both more effective and more harmless to me -- really, who is going to notice 10u Ohm? It's perhaps a good idea to either add this as a global option or incorporate it as a parameter in the ideal voltage source model. Currently, Qucs adds a virtual resistance on demand inside an integration step. I'm not sure if that's a good idea because it modifies the circuit while the simulation is already running, so the result from the previous integration step may be inconsistent. It's important for users to understand that leaving out certain parasitic components sometimes doesn't simplify things but actually makes them trickier. Sören |
From: al d. <ad...@fr...> - 2013-07-23 05:06:40
|
On Sunday 21 July 2013, Soeren D. Schulze wrote: > Make sure not to confuse gnucap's "gmin" with Spice's > "gmin". The former is similar to Spice's "rshunt". how things change .... That was not so some years ago. It is not consistent in the different versions of spice. |
From: Bastien R. <rou...@gm...> - 2013-07-23 18:02:05
|
On Sunday, July 21, 2013, Soeren D. Schulze <soe...@gm...> wrote: > Am 21.07.2013 14:26, schrieb roucaries bastien: >>> >>> But of course "gmin" and "short" are modifications to the circuit. I >>> think it would be possible to reduce the need for them by a better >>> implementation of the algorithm. >> >> Gmin and gshort often replace the missing step not given by >> the electrical engineer. In the case of biasing a tunnel diode, it >> makes sense from a physical point of view: your sources are not >> perfectly constant but a Heaviside step. > > Make sure not to confuse gnucap's "gmin" with Spice's "gmin". The former is similar to Spice's "rshunt". > > Is it always harmless to add a resistance of 1e12 Ohm to ground in each node? In most cases, it probably is, but users of circuit simulation programs should be educated about it, so there won't be any surprises. > > Adding a series resistance to a voltage source seems both more effective and more harmless to me -- really, who is going to notice 10u Ohm? 10 uOhm could change some sensors, like a thermocouple. 1e12 Ohm of insulation is not measurable, so 1e12 Ohm is better. Note, though, that 1e12 Ohm becomes measurable if it breaks a symmetry. >It's perhaps a good idea to either add this as a >global option or incorporate it as a parameter in >the ideal voltage source model. Currently, Qucs >adds a virtual resistance on demand inside an >integration step. I'm not sure if that's a good idea because it modifies the circuit while the simulation is already running, so the result from the previous integration step may be inconsistent. Yes, but statistically it is like adding noise or dithering. It converges to the good solution in almost all cases (except for a set of initial conditions of measure zero). > > It's important for users to understand that leaving out certain parasitic components sometimes doesn't simplify things but actually makes them trickier. Yes > > > Sören > |
From: al d. <ad...@fr...> - 2013-07-21 14:55:30
|
On Sunday 21 July 2013, Soeren D. Schulze wrote: > Adding a series resistance to a voltage source seems both > more effective and more harmless to me -- really, who is > going to notice 10u Ohm? It's a toss-up. One way to model (I should say approximate) a voltage source is with a small series resistance, which is converted into a current source with shunt. Gnucap does it this way. It messes up the conditioning of the matrix somewhat. Another way is to add another equation, the equivalent of an extra node, representing the current. (Nullor model) Gnucap could do it this way, but doesn't. Another voltage-source plugin would give you a choice. It messes up the pattern of the matrix. Yet another way is to eliminate a node and solve the missing node by adding the voltage source voltage to the nearby node that you keep. All of these have their advantages and disadvantages, which may not be evident unless you look deeper and try to implement it. Similar reasoning applies to the inductor. Spice uses the internal node model of an inductor. Gnucap may use either. It adds the internal node only if there is mutual inductance. The spice method is a disaster in run time for a circuit with lots of inductors, like you get with parasitic extraction, the RLGC model of a system of coupled transmission lines. On Sunday 21 July 2013, Soeren D. Schulze wrote: > It's important for users to understand that leaving out > certain parasitic components sometimes doesn't simplify > things but actually makes them trickier. One of my favorite counterintuitive surprises here is the addition of transmission lines to a circuit to improve convergence. On the other hand, adding stray capacitance usually makes things worse. |
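The first two voltage-source formulations described above can be contrasted on a one-node toy circuit. This is only an illustrative sketch with made-up values, not gnucap's actual code: (a) the source with a small series resistance, Norton-converted into a current source with shunt, and (b) the extra equation for the source current.

```python
import numpy as np

Vs, r, R = 5.0, 10e-6, 1e3        # made-up values: source, series r, load

# (a) Series r, Norton-converted: a single nodal equation,
#     (1/r + 1/R) v = Vs/r.  The huge 1/r entry next to 1/R
#     illustrates the conditioning problem mentioned above.
v_norton = np.linalg.solve(np.array([[1.0/r + 1.0/R]]),
                           np.array([Vs/r]))[0]

# (b) Extra current unknown, x = [v, i]: KCL at the node plus the
#     branch constraint v = Vs, as an explicit second equation.
A = np.array([[1.0/R, 1.0],
              [1.0,   0.0]])
v_mna = np.linalg.solve(A, np.array([0.0, Vs]))[0]

print(v_norton, v_mna)   # (b) gives exactly 5 V; (a) is off by ~r/R
```

Both answers agree to within r/R, which is the sense in which "who is going to notice 10u Ohm" — the trade-off is in matrix conditioning and structure, not in the result.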
From: Bastien R. <rou...@gm...> - 2013-07-23 18:04:44
|
On Sunday, July 21, 2013, al davis <ad...@fr...> wrote: > On Sunday 21 July 2013, Soeren D. Schulze wrote: >> Adding a series resistance to a voltage source seems both >> more effective and more harmless to me -- really, who is >> going to notice 10u Ohm? > > It's a toss-up. > > One way to model (I should say approximate) a voltage source is > with a small series resistance, which is converted into a > current source with shunt. Gnucap does it this way. It messes > up the conditioning of the matrix somewhat. > > Another way is to add another equation, the equivalent of an > extra node, representing the current. (Nullor model) Gnucap > could do it this way, but doesn't. Another voltage-source > plugin would give you a choice. It messes up the pattern of > the matrix. > > Yet another way is to eliminate a node and solve the missing > node by adding the voltage source voltage to the nearby node > that you keep. > > All of these have their advantages and disadvantages, which may > not be evident unless you look deeper and try to implement it. > > Similar reasoning applies to the inductor. Spice uses the > internal node model of an inductor. Gnucap may use either. It > adds the internal node only if there is mutual inductance. The > spice method is a disaster in run time for a circuit with lots > of inductors, like you get with parasitic extraction, the RLGC > model of a system of coupled transmission lines. > > On Sunday 21 July 2013, Soeren D. Schulze wrote: >> It's important for users to understand that leaving out >> certain parasitic components sometimes doesn't simplify >> things but actually makes them trickier. > > One of my favorite counterintuitive surprises here is the > addition of transmission lines to a circuit to improve > convergence. For me it is perfectly intuitive, but I come from the RF field :) > On the other hand, adding stray capacitance usually makes things > worse.
|
From: Soeren D. S. <soe...@gm...> - 2013-07-23 17:00:44
|
Am 21.07.2013 16:55, schrieb al davis: > On Sunday 21 July 2013, Soeren D. Schulze wrote: >> Adding a series resistance to a voltage source seems both >> more effective and more harmless to me -- really, who is >> going to notice 10u Ohm? > > It's a toss-up. > > One way to model (I should say approximate) a voltage source is > with a small series resistance, which is converted into a > current source with shunt. Gnucap does it this way. It messes > up the conditioning of the matrix somewhat. Yes, it often does. But when driving a capacitive load, it's also often better than modeling a voltage source with zero series resistance (such as in the circuit I posted last time). > Another way is to add another equation, the equivalent of an > extra node, representing the current. (Nullor model) Gnucap > could do it this way, but doesn't. Another voltage-source > plugin would give you a choice. It messes up the pattern of > the matrix. Is it only a performance issue or are there other reasons why this is bad? If I interpret the source code correctly, gnucap makes use of the matrix being positive definite by doing LU decomposition without pivoting. (If so, have you considered using Cholesky decomposition, by the way?) > Yet another way is to eliminate a node and solve the missing > node by adding the voltage source voltage to the nearby node > that you keep. I read about that. The method is interesting because it prevents some "accidental differentiations" that lead to a bad condition number and typically occur when driving a capacitor by a voltage source. > Similar reasoning applies to the inductor. Spice uses the > internal node model of an inductor. Gnucap may use either. It > adds the internal node only if there is mutual inductance. The > spice method is a disaster in run time for a circuit with lots > of inductors, like you get with parasitic extraction, the RLGC > model of a system of coupled transmission lines. 
What does gnucap do if you put two inductors in series, with something else connected to the node in-between? Does it merge all three nodes into a super-node? That's also interesting because doing so could eliminate another source of accidental differentiations. The only thing that makes me a bit skeptical is the fact that leaving out the node makes the resulting equation even more non-linear. Such equations aren't that well-researched for their theoretical and numerical properties, and the algorithm that I'd like to implement later requires linearity in the derivatives. Sören |
From: al d. <ad...@fr...> - 2013-07-23 17:41:26
|
On Tuesday 23 July 2013, Soeren D. Schulze wrote: > > Another way is to add another equation, the equivalent of > > an extra node, representing the current. (Nullor > > model) Gnucap could do it this way, but doesn't. Another > > voltage-source plugin would give you a choice. It messes > > up the pattern of the matrix. > > Is it only a performance issue or are there other reasons why > this is bad? If I interpret the source code correctly, > gnucap makes use of the matrix being positive definite by > doing LU decomposition without pivoting. (If so, have you > considered using Cholesky decomposition, by the way?) Only passive circuits are truly symmetric, which is why not Cholesky decomposition. Gnucap allocates the matrix as structurally symmetric, but not numerically symmetric. Strictly, this is not the most efficient at storage space or matrix solve time, but it pays off in other ways, such as a simpler allocation scheme, preservation of blocks, vector storage. So it is a good tradeoff. Preservation of blocks is important. Gnucap does not update or solve the whole matrix every step. Most of the time it does low-rank updates and partial solutions. Often, it will only solve the whole matrix once for a complete simulation. But at all times, it gives the illusion of doing and having it all. > What does gnucap do if you put two inductors in series, with > something else connected to the node in-between? Does it > merge all three nodes into a super-node? That's also > interesting because doing so could eliminate another source > of accidental differentiations. There is no collapsing of nodes. It might be worth investigating someday. Note that the inductor is a plugin. The core engine has no knowledge of any specific component. > The only thing that makes me a bit skeptical is the fact that > leaving out the node makes the resulting equation even more > non-linear. 
Such equations aren't that well-researched for > their theoretical and numerical properties, and the > algorithm that I'd like to implement later requires > linearity in the derivatives. What makes me more skeptical is that pre-analysis can take a significant amount of time. You might argue that it is only done once, but if the pre-analysis takes quadratic time, with a big enough circuit the quadratic part will dominate. It seems you have indicated a preference for a state-variable formulation instead of a modified-nodal formulation, or perhaps just a curiosity about it. There were experiments with this long ago ... around 1970. An early book on simulation discusses it at length. I don't have it handy, don't know the title, but I believe the author was Leon Chua. As I recall, it works well for simple problems but when the circuits get large and complex, the complexity of generating the formulation gets out of hand. In this context, large is by 1970 standards. |
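The symmetry point above can be checked on a toy nodal matrix (made-up conductances; nothing here is gnucap code). The conductance matrix of a purely passive resistor network is symmetric positive definite, so Cholesky would apply; a single transconductance stamp from an active device destroys the symmetry:

```python
import numpy as np

# Passive 2-node resistor network: g1 node1-gnd, g2 node1-node2,
# g3 node2-gnd.  Made-up conductance values.
g1, g2, g3 = 1.0, 0.5, 2.0
G = np.array([[g1 + g2, -g2],
              [-g2,      g2 + g3]])

L = np.linalg.cholesky(G)                   # succeeds: G is symmetric PD

# A transconductance gm stamps only one off-diagonal entry:
gm = 0.1
G_active = G.copy()
G_active[1, 0] += gm
print(np.allclose(G_active, G_active.T))    # False: no longer symmetric
```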
From: Bastien R. <rou...@gm...> - 2013-07-23 17:56:12
|
On Tuesday, July 23, 2013, al davis <ad...@fr...> wrote: > On Tuesday 23 July 2013, Soeren D. Schulze wrote: >> > Another way is to add another equation, the equivalent of >> > an extra node, representing the current. (Nullor >> > model) Gnucap could do it this way, but doesn't. Another >> > voltage-source plugin would give you a choice. It messes >> > up the pattern of the matrix. >> >> Is it only a performance issue or are there other reasons why >> this is bad? If I interpret the source code correctly, >> gnucap makes use of the matrix being positive definite by >> doing LU decomposition without pivoting. (If so, have you >> considered using Cholesky decomposition, by the way?) > > Only passive circuits are truly symmetric, which is why not > Cholesky decomposition. > > Gnucap allocates the matrix as structurally symmetric, but not > numerically symmetric. Strictly, this is not the most efficient > at storage space or matrix solve time, but it pays off in other > ways, such as a simpler allocation scheme, preservation of > blocks, vector storage. So it is a good tradeoff. > > Preservation of blocks is important. Gnucap does not update or > solve the whole matrix every step. Most of the time it does > low-rank updates and partial solutions. Often, it will only > solve the whole matrix once for a complete simulation. But at > all times, it gives the illusion of doing and having it all. > >> What does gnucap do if you put two inductors in series, with >> something else connected to the node in-between? Does it >> merge all three nodes into a super-node? That's also >> interesting because doing so could eliminate another source >> of accidental differentiations. > > There is no collapsing of nodes. > > It might be worth investigating someday. > > Note that the inductor is a plugin. The core engine has no > knowledge of any specific component. 
> >> The only thing that makes me a bit skeptical is the fact that >> leaving out the node makes the resulting equation even more >> non-linear. Such equations aren't that well-researched for >> their theoretical and numerical properties, and the >> algorithm that I'd like to implement later requires >> linearity in the derivatives. > > What makes me more skeptical is that pre-analysis can take a > significant amount of time. You might argue that it is only > done once, but if the pre-analysis takes quadratic time, with a > big enough circuit the quadratic part will dominate. > > It seems you have indicated a preference for a state-variable > formulation instead of a modified-nodal formulation, or perhaps > just a curiosity about it. There were experiments with this > long ago ... around 1970. An early book on simulation discusses > it at length. I don't have it handy, don't know the title, but > I believe the author was Leon Chua. Yes, it is the Chua book from 1975. > As I recall, it works well > for simple problems but when the circuits get large and complex, > the complexity of generating the formulation gets out of hand. > In this context, large is by 1970 standards. |
From: Soeren D. S. <soe...@gm...> - 2013-07-23 21:42:12
|
Am 23.07.2013 19:41, schrieb al davis: > On Tuesday 23 July 2013, Soeren D. Schulze wrote: >>> Another way is to add another equation, the equivalent of >>> an extra node, representing the current. (Nullor >>> model) Gnucap could do it this way, but doesn't. Another >>> voltage-source plugin would give you a choice. It messes >>> up the pattern of the matrix. >> >> Is it only a performance issue or are there other reasons why >> this is bad? If I interpret the source code correctly, >> gnucap makes use of the matrix being positive definite by >> doing LU decomposition without pivoting. (If so, have you >> considered using Cholesky decomposition, by the way?) > > Only passive circuits are truly symmetric, which is why not > Cholesky decomposition. > > Gnucap allocates the matrix as structurally symmetric, but not > numerically symmetric. Strictly, this is not the most efficient > at storage space or matrix solve time, but it pays off in other > ways, such as a simpler allocation scheme, preservation of > blocks, vector storage. So it is a good tradeoff. > > Preservation of blocks is important. Gnucap does not update or > solve the whole matrix every step. Most of the time it does > low-rank updates and partial solutions. Often, it will only > solve the whole matrix once for a complete simulation. But at > all times, it gives the illusion of doing and having it all. OK, so the structure that you want to preserve is having appropriate pivot elements on the diagonal, so low-rank updates can be done. Is that correct? >> What does gnucap do if you put two inductors in series, with >> something else connected to the node in-between? Does it >> merge all three nodes into a super-node? That's also >> interesting because doing so could eliminate another source >> of accidental differentiations. > > There is no collapsing of nodes. Then I don't understand how inductors are modeled in gnucap, sorry. 
> It seems you have indicated a preference for a state-variable > formulation instead of a modified-nodal formulation, or perhaps > just a curiosity about it. There were experiments with this > long ago ... around 1970. An early book on simulation discusses > it at length. I don't have it handy, don't know the title, but > I believe the author was Leon Chua. As I recall, it works well > for simple problems but when the circuits get large and complex, > the complexity of generating the formulation gets out of hand. > In this context, large is by 1970 standards. Well, I'm considering them from a mathematical perspective. Even if they don't work well for large circuits, they may still be interesting for circuits that are small but tricky. On the other hand, the mathematical papers argue like this: "Everybody uses MNA, so we'll do the same thing", and they make huge efforts to improve the mathematical properties... like http://opus.kobv.de/tuberlin/volltexte/2007/1524/pdf/baechle_simone.pdf I'm still surprised that nobody has ever questioned the practice of obtaining the next integration step directly from the non-linear equation ("factoring out the Jacobian") rather than computing an update and adding that to the previous one, which is, according to the theory, much better. If you implement Newton's method like xnew = J^(-1) * (... something dependent on xold ...), then J needs to be absolutely accurate and well-conditioned, and the LU decomposition needs to be up-to-date in every step. If, on the other hand, you do it like: xnew = xold - J^(-1) * f(xold), then J resp. its LU decomposition can be held constant for at least one integration step (resulting in simplified Newton's method) or maybe even more than one (such as implemented in the code radau5). The former is what Qucs does. If I understand it right, Spice and gnucap do the same thing, essentially. Is that correct? Sören |
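The two update styles can be contrasted on a one-dimensional toy problem: a diode in series with a resistor. This is a sketch with made-up device values, not the Qucs or gnucap implementation; `frozen=True` corresponds to the simplified Newton iteration where J (and hence its LU decomposition) is held constant:

```python
import math

# Toy 1-D circuit equation: diode (Is, Vt) in series with R to source Vs.
Is, Vt, R, Vs = 1e-12, 0.025, 1e3, 5.0     # made-up values

def f(v):   # KCL residual at the diode node
    return Is * (math.exp(v / Vt) - 1.0) + (v - Vs) / R

def fp(v):  # analytic Jacobian (here a scalar derivative)
    return (Is / Vt) * math.exp(v / Vt) + 1.0 / R

def solve(v0, frozen, tol=1e-12, maxit=200):
    v, n, J0 = v0, 0, fp(v0)
    while abs(f(v)) > tol and n < maxit:
        J = J0 if frozen else fp(v)        # chord vs. full Newton
        v -= f(v) / J                      # x_new = x_old - J^-1 f(x_old)
        n += 1
    return v, n

v_full, n_full = solve(0.55, frozen=False)
v_chord, n_chord = solve(0.55, frozen=True)
print(n_full, n_chord)   # the chord method needs more iterations,
                         # but never refactors the Jacobian
```

Both iterations reach the same root; the frozen-Jacobian variant trades extra (cheap) iterations for skipping the (expensive) refactorization, which is the trade that radau5-style codes exploit.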
From: Soeren D. S. <soe...@gm...> - 2013-08-04 20:46:20
|
Another reason why we should drop them: To work properly, the predictor needs the derivative of the solution, which we don't have (for fundamental reasons). What does the code do to obtain the derivative? First-order divided differences. So the Adams predictor is, in general, only 1st order for any number of steps. And it's probably worse than explicit Euler because polynomial extrapolation amplifies errors a lot. There *are* better methods for numerical differentiation: you could first do polynomial interpolation and differentiate that. But because the next step is integrating the polynomial, you end up with exactly the BDF predictor (one order higher). So while it is possible to use the Adams corrector with the BDF predictor, it's no longer an actual Adams method. So I don't see any educational purpose in keeping the Adams methods, either, because they just don't work the way they are intended. Do you agree? Sören |
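The order claim can be checked numerically. The sketch below (illustrative only, not Qucs code) predicts one step ahead of a smooth test function, once with the AB2 formula fed by first-order divided differences and once by extrapolating the interpolating quadratic (the BDF-style predictor). Halving h shrinks the local error by roughly 4 in the first case (2nd-order local error, i.e. a 1st-order method) and roughly 8 in the second:

```python
import math

def predict(y):
    # y = [y(t-2h), y(t-h), y(t)]; both formulas extrapolate to t+h.
    ym2, ym1, y0 = y
    # AB2 predictor with derivatives replaced by first-order divided
    # differences: y0 + 3/2*(y0-ym1) - 1/2*(ym1-ym2).
    # Exact for straight lines only -> O(h^2) local error.
    adams_dd = 2.5 * y0 - 2.0 * ym1 + 0.5 * ym2
    # Extrapolation of the interpolating quadratic (BDF-style
    # predictor): exact for parabolas -> O(h^3) local error.
    poly = 3.0 * y0 - 3.0 * ym1 + ym2
    return {"adams_dd": adams_dd, "poly": poly}

def error(h, kind, f=math.exp, t0=0.0):
    y = [f(t0 - 2*h), f(t0 - h), f(t0)]
    return abs(f(t0 + h) - predict(y)[kind])

for kind in ("adams_dd", "poly"):
    # ratio of errors when h is halved: ~4 vs. ~8
    print(kind, error(0.1, kind) / error(0.05, kind))
```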