Re: [myhdl-list] CHIP, a myhdl like project
From: Christopher F. <chr...@gm...> - 2011-10-17 18:32:33
>>> On 10/17/2011 11:40 AM, Sébastien Bourdeauducq wrote:
>>> On 10/17/2011 09:51 AM, Jan Decaluwe wrote:
>>>> One way, and the big winner
>>>> to date, is the event-driven paradigm. That is what
>>>> VHDL, Verilog and MyHDL have in common.
>>>>
>>>> It is incorrect (although often done) to call such
>>>> languages "RTL" languages. Instead, RTL is
>>>> a semantic subset of such languages. There are
>>>> other ways to do RTL, but I'm convinced that this
>>>> is the single way to do it right.
>>>
>>> Oh, come on. For synchronous systems, which is what the
>>> common mortals who design with FPGAs almost always deal
>>> with, the event-driven paradigm is just a complicated way
>>> of getting things done. Cycle-based simulations are simpler,
>>> faster, and also correct.

> On 10/17/2011 03:07 PM, Christopher Felton wrote:
>> Ok you lost me here. Best of my knowledge, most simulators are event
>> based simulators. They have event queues and at each simulation cycle
>> the event queues are inspected. Once all events are processed, the
>> simulator moves on to the next cycle. A cycle-based simulation without
>> events and event queues would need to do a lot of explicit checking and
>> unneeded execution without the queues. I don't understand how a cycle
>> based simulation without events is faster?
>
> See Verilator for an open-source example.
>
> S.

Feeling left out because I was unaware of cycle-based simulation :(
I thought I'd better try to educate myself. I didn't find any useful
information on the Verilator site defining cycle-based versus
event-driven, but this article seemed to explain the difference:
http://chipdesignmag.com/display.php?articleId=192&issueId=13

"""
In an event-driven simulator, the entire logic function is re-evaluated
each time one of the inputs changes value. Re-evaluation increases the
simulation time and reduces the event-driven simulator's performance.
Cycle-based simulators evaluate each logic function once per cycle.
Cycle-based simulators effectively use flattened cones of logic. Each
cone is evaluated once during a simulation cycle.
"""

Other than defining what a "cone" of logic is, the description seems
straightforward. And it fits my "guess" description: a cycle-based
simulator has to evaluate each set (cone) of logic on every cycle.
Intuitively that would seem slower (but intuition is often wrong).

So the missing piece of data is: why would a cycle-based simulation be
faster? It isn't too hard to think of examples in which either
implementation would be slower. In my mind, for cycle-based to be
faster, most designs would, on average, have to contain logic blocks
whose inputs change multiple times per cycle. This seems reasonable,
but some data to support it would be nice. The Verilator benchmarks
are probably not a good source for this data. First, Verilator does
more than just substitute cycle-based for event-driven scheduling.
Second, some of the comparisons are "guesses": the author takes
published results and applies a multiplier to adjust for different
machines, etc.

As the Verilator site and the ChipDesign article show, a cycle-based
simulator can only handle a subset of the language. I guess you should
have said "common mortals who design with FPGAs using a subset ..." :)
Or you believe "most only _use_ the RTL portion of popular HDLs".

In the end, the comment seemed out of place and introduced a fairly
off-topic discussion (although educational for me and somewhat
interesting). How a simulator *implements* the simulation is not the
same as how a language abstracts (represents) the design/hardware, and
what those paradigms should be called.

Regards,
Chris
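P.S. To make the difference concrete for myself, here is a toy Python
sketch (Python being MyHDL's host language) of the two scheduling
strategies. This is not real simulator code; the `cone` function and
the per-cycle input activity are made up for illustration:

```python
# Toy comparison of event-driven vs. cycle-based evaluation counts.
# One flattened "cone" of combinational logic (made-up example).
def cone(a, b, c):
    return (a & b) | c

# Pretend input activity over three clock cycles: each tuple is the
# input vector after some upstream signal changed (glitches before
# the inputs settle at the end of the cycle).
activity_per_cycle = [
    [(0, 0, 0), (1, 0, 0), (1, 1, 0)],  # cycle 0: inputs change 3 times
    [(1, 1, 0)],                        # cycle 1: inputs stable
    [(1, 1, 1), (0, 1, 1)],             # cycle 2: inputs change twice
]

# Event-driven: re-evaluate the cone on every input change event.
event_driven_evals = 0
outputs_event = []
for changes in activity_per_cycle:
    out = None
    for vec in changes:
        out = cone(*vec)
        event_driven_evals += 1
    outputs_event.append(out)

# Cycle-based: evaluate the cone exactly once per cycle, using the
# settled input values.
cycle_based_evals = len(activity_per_cycle)
outputs_cycle = [cone(*changes[-1]) for changes in activity_per_cycle]

print(event_driven_evals)              # 6
print(cycle_based_evals)               # 3
print(outputs_event == outputs_cycle)  # True
```

For this toy activity pattern the event-driven scheduler evaluates the
cone six times against three for the cycle-based one, while the settled
per-cycle outputs agree. Which strategy wins in practice depends on how
often inputs actually change within a cycle, which is exactly the
missing data I was asking about.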