Thread: [myhdl-list] CHIP, a myhdl like project
From: Martín G. <ga...@gm...> - 2011-10-14 15:25:50
I was randomly browsing PyPI when I found this project: http://dawsonjon.github.com/chips/

Although (in a not-so-deep review) it looks immature compared with MyHDL, I wonder whether the author of Chips knows MyHDL, and why he didn't put his effort into improving MyHDL instead.

Has anybody heard about this project and/or its author?
From: Jan D. <ja...@ja...> - 2011-10-17 07:51:48
On 10/14/2011 05:25 PM, Martín Gaitán wrote:
> I was randomly browsing PyPI when I found this project:
> http://dawsonjon.github.com/chips/
>
> Although (in a not-so-deep review) it looks immature compared with
> MyHDL, I wonder whether the author of Chips knows MyHDL, and why he
> didn't put his effort into improving MyHDL instead.
>
> Has anybody heard about this project and/or its author?

Yes, and I don't worry too much about it. I don't think you are correct in saying that this is "MyHDL-like".

Recently, there have also been other sources of confusion about the nature of MyHDL (and conversion), e.g. how it relates to beasts like guarded atomic actions. Let me therefore try to clarify some concepts.

There are several paradigms for constructing "hardware description" languages. One way, and the big winner to date, is the event-driven paradigm. That is what VHDL, Verilog and MyHDL have in common.

It is incorrect (although often done) to call such languages "RTL" languages. Instead, RTL is a semantic subset of such languages. There are other ways to do RTL, but I'm convinced that this is the single way to do it right.

The point to remember is that in MyHDL (and VHDL and Verilog) events are the name of the game. Trying to stuff in other paradigms will just lead to confusion.

Consider MyHDL conversion. What it does is convert an event-driven model into an equivalent event-driven model. Conversion is "event accurate". Within this constraint, it tries to automate as much as possible, but that doesn't make it a synthesis tool. The goal of conversion is to make sure that a MyHDL-based design flow plays well with the mainstream event-driven HDLs. Nothing more, nothing less.

In contrast, high-level synthesis can be defined as a tool that creates an equivalent lower-level RTL model without the constraint of "event accuracy". The high-level input models may not be based on events at all. The big difficulty with such tools is to define clearly what "equivalent" means. Unlike RTL, there are many potentially meaningful ways to do high-level modeling/synthesis. For the user, the most important concept is the modeling paradigm, as implemented by the input language.

MyHDL could possibly be the input language of a high-level synthesis tool based on high-level event-driven models. However, this has been tried before with VHDL and Verilog without a lot of success. This does not mean that MyHDL has no role to play in high-level synthesis. On the contrary: I believe it is the ideal back-end language for such tools: with the same effort, one can support 3 RTL back-ends instead of one, thanks to conversion.

Now consider CHIPS. It is clearly a high-level modeling/synthesis attempt. The paradigm seems to be some kind of stream-based modeling. No events in sight. Therefore, it is not "MyHDL-like". The back-end is VHDL RTL, and as I noted before, I think MyHDL would have been a better choice.

Whether this particular approach is useful is something that everyone should judge for himself. Personally, I don't see how it could be useful for the kind of project I am doing.

Moreover, in the very first paragraphs of the rationale behind the project, the author makes a few doubtful statements:

http://dawsonjon.github.com/chips/introduction/index.html#a-new-approach-to-device-design

* "A hardware model written in an imperative style cannot be synthesised"

This is wrong: an imperative style is actually very useful for describing combinatorial and clocked RTL logic.
* "The primitive elements of an RTL design are clocked memory elements (registers) and combinational logic elements." Actually, those are the primitive elements of a gate level synthesized from RTL. In my view, the primitive elements in RTL design are clocked processes (and an occasional combinatorial process.) In particular, registers are inferred from behavior, not implied or instantiated. (Note: the name RTL is historical. For event-driven RTL as we all are using, it is a misnomer.) I agreed with the author that RTL is "low-level". However, it is definitely not as "low-level" as he suggests. If the starting point is shaky, I don't have a lot of confidence in the "solution". Jan -- Jan Decaluwe - Resources bvba - http://www.jandecaluwe.com Python as a HDL: http://www.myhdl.org VHDL development, the modern way: http://www.sigasi.com World-class digital design: http://www.easics.com |
From: Sébastien B. <seb...@mi...> - 2011-10-17 08:41:54
On 10/17/2011 09:51 AM, Jan Decaluwe wrote:
> One way, and the big winner to date, is the event-driven paradigm.
> That is what VHDL, Verilog and MyHDL have in common.
>
> It is incorrect (although often done) to call such languages "RTL"
> languages. Instead, RTL is a semantic subset of such languages. There
> are other ways to do RTL, but I'm convinced that this is the single
> way to do it right.

Oh, come on. For synchronous systems, which is what the common mortals who design with FPGAs almost always deal with, the event-driven paradigm is just a complicated way of getting things done. Cycle-based simulations are simpler, faster, and also correct.

> The back-end is VHDL RTL, and as I noted before, I think MyHDL would
> have been a better choice.

Then maybe you'll like my HLS prototype better? If so, please consider merging my patches, which are needed to support it.

> * "A hardware model written in an imperative style cannot be
>   synthesised"
>
> This is wrong: an imperative style is actually very useful for
> describing combinatorial and clocked RTL logic.
> (...)
> I agree with the author that RTL is "low-level". However, it is
> definitely not as "low-level" as he suggests. If the starting point is
> shaky, I don't have a lot of confidence in the "solution".

I think he meant "imperative style for a complete algorithm", not "imperative style for describing what can be done within one clock cycle". From this point of view, it is true that the efficient synthesis of a complex algorithm written in imperative style is a very difficult problem, and that using a different "dataflow" paradigm can make sense.

S.
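For readers unfamiliar with the term, the cycle-based approach Sébastien refers to can be sketched in a few lines of Python (a hypothetical toy, not Verilator or any real tool): all combinational logic is evaluated exactly once per clock cycle in a fixed dependency order, then all registers are updated in one pass; there is no event queue and no sensitivity list.

    def simulate_cycles(n_cycles, comb_funcs, registers, state):
        """Toy cycle-based simulation loop.

        state:      dict mapping signal name -> current value
        comb_funcs: list of (output_name, func(state)) in dependency order
        registers:  list of (q_name, d_name) pairs updated on the clock edge
        """
        for _ in range(n_cycles):
            # 1. Evaluate every combinational "cone" exactly once.
            for out, func in comb_funcs:
                state[out] = func(state)
            # 2. Clock edge: every register captures its input at once.
            state.update({q: state[d] for q, d in registers})
        return state

    # Example: a 4-bit counter; 'count' is a register fed by 'count_next'.
    state = {"count": 0, "count_next": 0}
    comb = [("count_next", lambda s: (s["count"] + 1) % 16)]
    regs = [("count", "count_next")]
    print(simulate_cycles(20, comb, regs, state)["count"])   # 20 % 16 -> 4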
From: Jan D. <ja...@ja...> - 2011-10-17 22:01:58
On 10/17/2011 10:38 AM, Sébastien Bourdeauducq wrote:
> On 10/17/2011 09:51 AM, Jan Decaluwe wrote:
>> One way, and the big winner to date, is the event-driven paradigm.
>> That is what VHDL, Verilog and MyHDL have in common.
>>
>> It is incorrect (although often done) to call such languages "RTL"
>> languages. Instead, RTL is a semantic subset of such languages.
>> There are other ways to do RTL, but I'm convinced that this is the
>> single way to do it right.
>
> Oh, come on. For synchronous systems, which is what the common
> mortals who design with FPGAs almost always deal with, the
> event-driven paradigm is just a complicated way of getting things
> done. Cycle-based simulations are simpler, faster, and also correct.

I was not talking about simulation techniques, but about the fundamental paradigm behind language design. Verilator is still a Verilog simulator, and Verilog is an event-driven language.

Historically, the name RTL (Register Transfer Language) refers to a totally different paradigm: one in which registers are explicit and (clock) events are implicit. AHDL is probably the best example that is still in use. Many other RTL languages of this type have been proposed over the years. They are all but forgotten, to the point that it is even hard to find any traces of them on the internet. And of course, this gives room to the occasional genius to construct a new "fully synthesizable" HDL that is "much less verbose" than Verilog or VHDL.

In event-driven RTL, (clock) events are explicit and registers are implicit (inferred). We kept the name RTL, but that really is a misnomer. Something like "clocked behavior language" would be more accurate.

Regardless of the name, this is the winning paradigm for RTL. And the reason it has won is that many designers need much more powerful modeling than just RTL, within the same language.

Jan

--
Jan Decaluwe - Resources bvba - http://www.jandecaluwe.com
Python as a HDL: http://www.myhdl.org
VHDL development, the modern way: http://www.sigasi.com
World-class digital design: http://www.easics.com
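The two paradigms Jan contrasts can be put side by side in a short sketch (hypothetical; the first notation is made up for illustration and is not actual AHDL):

    # Classic "register transfer" style: the register is explicit, the clock
    # event is implicit in the transfer operator (made-up notation):
    #
    #     REGISTER q[8]
    #     q := q + 1      # a transfer happens every clock; no event is named
    #
    # Event-driven RTL (here in MyHDL) turns this around: the clock event is
    # explicit, and the register is inferred from the clocked assignment.

    from myhdl import always

    def counter(clk, q):
        @always(clk.posedge)            # explicit event
        def seq():
            q.next = (q + 1) % 256      # register for 'q' is inferred (8-bit wrap)
        return seq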
From: Christopher F. <chr...@gm...> - 2011-10-17 13:08:15
On 10/17/11 3:38 AM, Sébastien Bourdeauducq wrote:
> On 10/17/2011 09:51 AM, Jan Decaluwe wrote:
>> One way, and the big winner to date, is the event-driven paradigm.
>> That is what VHDL, Verilog and MyHDL have in common.
>>
>> It is incorrect (although often done) to call such languages "RTL"
>> languages. Instead, RTL is a semantic subset of such languages. There
>> are other ways to do RTL, but I'm convinced that this is the single
>> way to do it right.
>
> Oh, come on. For synchronous systems, which is what the common mortals
> who design with FPGAs almost always deal with, the event-driven
> paradigm is just a complicated way of getting things done. Cycle-based
> simulations are simpler, faster, and also correct.

OK, you lost me here. To the best of my knowledge, most simulators are event-based simulators. They have event queues, and at each simulation cycle the event queues are inspected. Once all events are processed, the simulator moves on to the next cycle. A cycle-based simulation without events and event queues would need to do a lot of explicit checking and unneeded execution without the queues. I don't understand how a cycle-based simulation without events is faster?

>> The back-end is VHDL RTL, and as I noted before, I think MyHDL would
>> have been a better choice.
>
> Then maybe you'll like my HLS prototype better? If so, please consider
> merging my patches, which are needed to support it.

>> * "A hardware model written in an imperative style cannot be
>>   synthesised"
>>
>> This is wrong: an imperative style is actually very useful for
>> describing combinatorial and clocked RTL logic.
>> (...)
>> I agree with the author that RTL is "low-level". However, it is
>> definitely not as "low-level" as he suggests. If the starting point is
>> shaky, I don't have a lot of confidence in the "solution".
>
> I think he meant "imperative style for a complete algorithm", not
> "imperative style for describing what can be done within one clock
> cycle". From this point of view, it is true that the efficient
> synthesis of a complex algorithm written in imperative style is a very
> difficult problem, and that using a different "dataflow" paradigm can
> make sense.

I agree; I think the author described the point poorly, and probably should have said "would implement an algorithm in an imperative style". As it is worded, it gives the wrong impression.

Regards,
Chris
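Chris's description of the event-driven mechanism can likewise be sketched in a few lines (a toy illustration, not MyHDL's or any commercial simulator's actual kernel): signals carry events, processes have sensitivity lists, and a process is re-evaluated only when one of its inputs actually changes value.

    import heapq

    def run_event_driven(end_time, schedule, sensitivity, processes, signals):
        """Toy event-driven simulation kernel.

        schedule:    heap of (time, signal, value) stimuli/events
        sensitivity: dict mapping signal -> names of processes to wake up
        processes:   dict mapping name -> func(signals) returning a list of
                     (signal, value) updates
        signals:     dict mapping signal -> current value
        """
        heapq.heapify(schedule)
        while schedule and schedule[0][0] <= end_time:
            now, sig, val = heapq.heappop(schedule)
            if signals[sig] == val:
                continue                    # no value change -> no event
            signals[sig] = val
            for proc in sensitivity.get(sig, []):
                # Re-evaluate only the processes sensitive to this signal;
                # their outputs become new events in the queue.
                for out_sig, out_val in processes[proc](signals):
                    heapq.heappush(schedule, (now, out_sig, out_val))
        return signals

Compared with the cycle-based loop sketched earlier in the thread, work is only done where a value actually changed, at the cost of maintaining the queue; which of the two is faster depends on how much activity there is per cycle.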
From: Sébastien B. <seb...@mi...> - 2011-10-17 16:44:10
On 10/17/2011 03:07 PM, Christopher Felton wrote:
> OK, you lost me here. To the best of my knowledge, most simulators are
> event-based simulators. They have event queues, and at each simulation
> cycle the event queues are inspected. Once all events are processed,
> the simulator moves on to the next cycle. A cycle-based simulation
> without events and event queues would need to do a lot of explicit
> checking and unneeded execution without the queues. I don't understand
> how a cycle-based simulation without events is faster?

See Verilator for an open-source example.

S.
From: Christopher F. <chr...@gm...> - 2011-10-17 18:32:33
>> On 10/17/2011 11:40 AM, Sébastien Bourdeauducq wrote:
>>> On 10/17/2011 09:51 AM, Jan Decaluwe wrote:
>>>> One way, and the big winner to date, is the event-driven paradigm.
>>>> That is what VHDL, Verilog and MyHDL have in common.
>>>>
>>>> It is incorrect (although often done) to call such languages "RTL"
>>>> languages. Instead, RTL is a semantic subset of such languages.
>>>> There are other ways to do RTL, but I'm convinced that this is the
>>>> single way to do it right.
>>>
>>> Oh, come on. For synchronous systems, which is what the common
>>> mortals who design with FPGAs almost always deal with, the
>>> event-driven paradigm is just a complicated way of getting things
>>> done. Cycle-based simulations are simpler, faster, and also correct.

> On 10/17/2011 03:07 PM, Christopher Felton wrote:
>> OK, you lost me here. To the best of my knowledge, most simulators
>> are event-based simulators. They have event queues, and at each
>> simulation cycle the event queues are inspected. Once all events are
>> processed, the simulator moves on to the next cycle. A cycle-based
>> simulation without events and event queues would need to do a lot of
>> explicit checking and unneeded execution without the queues. I don't
>> understand how a cycle-based simulation without events is faster?
>
> See Verilator for an open-source example.
>
> S.

Feeling left out because I was unaware of cycle-based simulation :( I thought I'd better try to educate myself. I didn't find any useful information on the Verilator site that defines cycle-based versus event-driven, but this article seemed to explain the difference:

http://chipdesignmag.com/display.php?articleId=192&issueId=13

"""
In an event-driven simulator, the entire logic function is re-evaluated each time one of the inputs changes value. Re-evaluation increases the simulation time and reduces the event-driven simulator's performance. Cycle-based simulators evaluate each logic function once per cycle. Cycle-based simulators effectively use flattened cones of logic. Each cone is evaluated once during a simulation cycle.
"""

Other than defining what a "cone" of logic is, the description seems straightforward. And it seems to fit my "guess" description: that a cycle-based simulator would have to evaluate each set (cone) of logic on each cycle. Intuitively this would seem slower (but intuition is often wrong). So the missing data is: why would a cycle-based simulation be faster?

It isn't too hard to think of examples in which either implementation would be slower. In my mind, for cycle-based to be faster, most designs would on average have to have logic blocks whose inputs change multiple times per cycle. This seems reasonable, but some data to support it would be nice. The Verilator benchmarks are probably not a good source for this data. First, Verilator is doing more than just using cycle-based rather than event-driven techniques. Second, some of the comparisons are "guesses": published results are taken and adjusted with a multiplier to account for different machines, etc.

As the Verilator site and the Chip Design article show, cycle-based simulation can only be used for a subset of the language. I guess you should have said "common mortals who design with FPGAs using a subset ..." :) Or you believe "most only _use_ the RTL portion of popular HDLs".

In the end, the comment seemed out of place and introduced a fairly off-topic discussion (although educational for me and somewhat interesting).

The way the simulator *implements* the simulation is not the same as the way a language abstracts (represents) the design/hardware, and what those paradigms should be called.

Regards,
Chris
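A back-of-the-envelope way to frame Chris's question (hypothetical numbers, not measurements): if a cone of logic sees an average of k input changes per clock cycle, an event-driven simulator re-evaluates it roughly k times while a cycle-based one evaluates it exactly once, so cycle-based wins whenever the average activity exceeds one change per cone per cycle.

    # Hypothetical comparison of evaluations per clock cycle.
    # 'activity' is the average number of input changes a cone sees per cycle.

    def evaluations_per_cycle(n_cones, activity):
        event_driven = n_cones * activity   # re-evaluated on every input change
        cycle_based = n_cones * 1.0         # every cone evaluated exactly once
        return event_driven, cycle_based

    print(evaluations_per_cycle(1000, 2.5))    # glitchy design  -> (2500.0, 1000.0)
    print(evaluations_per_cycle(1000, 0.25))   # sparse activity -> (250.0, 1000.0)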