Thread: [myhdl-list] essay about integer arithmetic
From: Jan D. <ja...@ja...> - 2009-03-06 11:37:34
|
This is an essay that I wanted to write for a long time. It describes what I think is wrong with integer arithmetic in VHDL and Verilog, and why MyHDL provides a solution. Before releasing it to the general public, I'm interested to hear what you think about it. http://www.jandecaluwe.com/hdldesign/counting.html -- Jan Decaluwe - Resources bvba - http://www.jandecaluwe.com Python as a hardware description language: http://www.myhdl.org |
From: Neal B. <ndb...@gm...> - 2009-03-06 12:22:08
|
Jan Decaluwe wrote: > This is an essay that I wanted to write for a long time. > It describes what I think is wrong with integer arithmetic > in VHDL and Verilog, and why MyHDL provides a solution. > > Before releasing it to the general public, I'm interested > to hear what you think about it. > > http://www.jandecaluwe.com/hdldesign/counting.html > > > If you really want to automate the bit widths, perhaps some kind of interval arithmetic is wanted? python mpmath and pyinterval both supply some interval arithmetic, but these are over reals, not integers. |
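To make the suggestion concrete, here is a minimal sketch of integer interval arithmetic used to derive a bit width, in plain Python (the helper names are invented for illustration; this is not the pyinterval or mpmath API, nor anything in MyHDL):

    def add_ivl(a, b):
        # interval addition: lower and upper bounds propagate independently
        return (a[0] + b[0], a[1] + b[1])

    def mul_ivl(a, b):
        # interval multiplication: the extremes are among the corner products
        corners = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
        return (min(corners), max(corners))

    def bits_needed(ivl):
        # smallest two's complement width that covers the interval
        lo, hi = ivl
        n = 1
        while not (-(2 ** (n - 1)) <= lo and hi <= 2 ** (n - 1) - 1):
            n += 1
        return n

    a = (0, 255)       # 8-bit unsigned operand
    b = (-128, 127)    # 8-bit signed operand
    print(bits_needed(add_ivl(a, b)))   # 10: the sum lies in [-128, 382]
    print(bits_needed(mul_ivl(a, b)))   # 16: the product lies in [-32640, 32385]

Whether propagating such intervals through a whole design is desirable is exactly the point debated later in this thread.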
From: Michael B. <ma...@cr...> - 2009-03-06 12:30:07
|
IMHO, though informed with years of hardware implementation experience, automating bit-widths should be 100% a non-goal. It should never be done under any circumstances whatsoever. IMHO, this is a serious failing of MyHDL. Best, Michael California On Fri, 06 Mar 2009 07:21:50 -0500, Neal Becker wrote > Jan Decaluwe wrote: > > > This is an essay that I wanted to write for a long time. > > It describes what I think is wrong with integer arithmetic > > in VHDL and Verilog, and why MyHDL provides a solution. > > > > Before releasing it to the general public, I'm interested > > to hear what you think about it. > > > > http://www.jandecaluwe.com/hdldesign/counting.html > > If you really want to automate the bit widths, perhaps some kind of > interval arithmetic is wanted? > > python mpmath and pyinterval both supply some interval arithmetic, > but these are over reals, not integers. |
From: Pieter <pie...@gm...> - 2009-03-06 13:23:20
|
That's funny, only yesterday I saw someone write this: some_signal <= '0' if ((a-b) = 1 or (b-a) = 1) then some_signal <= '1'; end if; The arithmetic rules in Verilog are confusing and just wrong. The rules in VHDL are hard to grasp for students, but there's some logic behind them. If only the result of an operation were as wide as the target, that would solve many issues. It doesn't feel right to change the size of the operand to get the right size for the result. (and of course you should be able to add a signed and an unsigned operand without having to cast the unsigned to signed and add a zero to the front) @ Michael Baxter I don't have a lot of experience in hardware design. For me this doesn't feel like automating bit-widths. You still define max and min values for the intbv, so you declare the bit width explicitly. Isn't it about the arithmetic rules that are counter-intuitive? I can't see anything wrong with those of MyHDL; they are intuitive and create the hardware you want. Can you explain why you believe this automates bit widths and why it should never be done? kind regards, pieter 2009/3/6 Michael Baxter <ma...@cr...>: > IMHO, though informed with years of hardware implementation experience, > automating bit-widths should be 100% a non-goal. It should never be done under > any circumstances whatsoever. > > IMHO, this is a serious failing of MyHDL. > > Best, > Michael > California -- Pieter Cogghe Iepenstraat 43 9000 Gent 0487 10 14 21 |
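For readers who have not used it, the intbv style Pieter refers to looks roughly like this (a minimal sketch; the exact bound-checking behaviour should be checked against the MyHDL documentation):

    from myhdl import intbv

    # a counter constrained to the interval [0, 10); max is exclusive, like Python's range
    count = intbv(0, min=0, max=10)
    print(len(count))    # 4: the bit width follows from the declared bounds
    count += 9           # fine, the value stays inside [0, 10)
    # count += 1         # would step to 10, outside the interval, and raise ValueError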
From: Michael B. <ma...@cr...> - 2009-03-06 14:01:10
|
Hi Pieter, Verilog functioned successfully for many years without a representation for signedness. Adding signed bitfields to the language is really, I think, only a form of syntactic sugar. It's not strictly necessary for hardware design. There are no data types in hardware! Bits do what their hardware says to do, and it may be completely unrelated to signedness, or even to values. The rules of Verilog for arithmetic DO make sense: they do what hardware does, and that is good. That is why software is different than hardware, and it's why HDLs are completely different than computer programming languages. Applying computer programming language models (such as data types) to hardware is a very dangerous idea for efficient, high-performance hardware engineering of logic. Programming models don't work the way hardware works. Ranges of values do not define a bitwidth explicitly, they define it implicitly. As a designer, I want complete and total control over the number of bits represented in the (simulated or synthesized) hardware. A program should NOT do that for me, because a computer program is too stupid to understand what I want those bits to do, or to mean. A collection of bits can represent well more than a single range. A 32-bit register can represent 32 bit flags, each having a range [0,1]. Or it could represent (4) 8-bit values, each having a range [0,255], [-128,127] or a mix. Or it could represent two fields, (6) 1-bit flag values concatenated to a 26-bit ordinal. Or, or, or... any number of alternative representations involving bit-level concurrency that cannot be represented with a single interval specification. This kind of thing is done all the time in hardware. I don't want to use several intervals to imply what I mean, I want to use a single ordinal to say how many bits I want, their interpretation being possibly completely unrelated to the number of bits as a total. Use of intervals is being proposed as a means to imply bitwidth, and this is anathema to explicit, specific, user-defined design representation. As a designer, I already know what I want the underlying hardware bit representation to do, or to mean. A high-level HDL should get out of my way to let me do that. Hope this answers your queries with some additional amplification. Best, Michael On Fri, 6 Mar 2009 14:22:59 +0100, Pieter wrote > That's funny, only yesterday I saw someone write this: > > some_signal <= '0' > if ((a-b) = 1 or (b-a) = 1) then > some_signal <= '1'; > end if; > > The arithmetic rules in Verilog are confusing and just wrong. The > rules in VHDL are hard to grasp for students, but there's some logic > behind them. If only the result of an operation were as wide as the > target, that would solve many issues. It doesn't feel right to change > the size of the operand to get the right size for the result. (and > of course you should be able to add a signed and an unsigned operand > without having to cast the unsigned to signed and add a zero to the > front) > > @ Michael Baxter > I don't have a lot of experience in hardware design. For me this doesn't > feel like automating bit-widths. You still define max and min values > for the intbv, so you declare the bit width explicitly. Isn't it > about the arithmetic rules that are counter-intuitive? I can't see > anything wrong with those of MyHDL; they are intuitive and > create the hardware you want. Can you explain why you believe this > automates bit widths and why it should never be done? > > kind regards, > > pieter |
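For completeness, the bit-level style Michael argues for does remain available in MyHDL: an intbv can be sized directly by a slice, with no value interval attached. A small sketch (the field layout here is invented purely for illustration):

    from myhdl import intbv

    # a plain 32-bit vector: the width is given directly, no integer interval is implied
    reg = intbv(0)[32:]

    reg[26:] = 0x155        # a 26-bit ordinal packed in the low bits
    reg[32:26] = 0b101010   # six 1-bit flags packed above it
    flag = reg[29]          # read a single flag back as a bit
    ordinal = reg[26:]      # read the ordinal field back as an unsigned value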
From: Pieter <pie...@gm...> - 2009-03-06 14:52:42
|
> Ranges of values do not define a bitwidth explicitly, they define it implicitly. You're right, I didn't look at that as an implicit bit width, but in fact it is. It is undesirable in the cases you name, but I can see it has its use if you only mean to create some integer number. -- Pieter Cogghe Iepenstraat 43 9000 Gent 0487 10 14 21 |
From: Jan D. <ja...@ja...> - 2009-03-06 16:49:34
|
Michael Baxter wrote: > There are no data types in hardware! No, but that doesn't prove that there shouldn't be data types in hardware description languages. Even a simple bit-vector is an abstraction that "doesn't exist" as such in hardware. Isn't this just a matter of what you're used to and what you trust? > Bits do what their hardware says to do, > and it may be completely unrelated to signedness, or even to values. The rules > of Verilog for arithmetic DO make sense: they do what hardware does, and that > is good. I insist that Verilog's implicit casting of signed operands to unsigned in an expression does *not* make sense. The rules could have been different, and then they would make sense in terms of their integer interpretation. But they would *still* do "what the hardware does": the difference would be sign-bit extension (= routing) instead of zero padding. > That is why software is different than hardware, and it's why HDLs > are completely different than computer programming languages. Needless to say, one of my goals with MyHDL is to prove that this is false. Seems I'm not succeeding :-) > Applying computer programming language models (such as data types) to hardware > is a very dangerous idea for efficient, high-performance hardware engineering > of logic. Programming models don't work the way hardware works. Many models don't. But with data types such as integers, booleans, and enums there is no loss of efficiency, only a gain in clarity and productivity. > As a designer, I want complete and total control over the number of bits > represented in the (simulated or synthesized) hardware. A program should NOT > do that for me, because a computer program is too stupid to understand what I > want those bits to do, or to mean. In the case of integer arithmetic, which is really the sole topic of the essay, that statement is evidently incorrect. The support of VHDL integer subtypes in synthesis tools proves it. Jan -- Jan Decaluwe - Resources bvba - http://www.jandecaluwe.com Python as a hardware description language: http://www.myhdl.org |
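The zero padding versus sign extension point can be illustrated in plain Python, without simulating Verilog itself (the numbers below only show the two possible readings of one 4-bit pattern):

    pattern = 0b1110           # the 4-bit two's complement pattern for -2
    as_unsigned = pattern      # zero padding keeps the unsigned reading: 14
    as_signed = pattern - 16   # sign extension keeps the signed reading: -2

    print(7 + as_unsigned)     # 21: what an "everything becomes unsigned" rule yields
    print(7 + as_signed)       # 5: what the integer interpretation expects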
From: Michael B. <ma...@cr...> - 2009-03-07 09:47:16
|
Jan and All, Due to IRL work, I will need a lot more time to produce any serious scientific results in response to some requests made here. I just do not have time right now, whereas Jan's essay motivated me to respond, at least initially. In the interim, I'll attempt additional elucidation toward my viewpoint, responding more in depth to this post. Before I do forget this: I've no intention whatsoever to trash anybody here. It's more that I've become a non-believer in virtually all high-level HDLs, and have a really different philosophical outlook on what may comprise high-level design. Please also understand that due to its extremely clean design compared with others, a lot due to its Pythonic character, MyHDL is certainly my favorite among high-level HDLs. On Sat, 07 Mar 2009 09:00:01 +0100, Jan Decaluwe wrote > Michael Baxter wrote: > > > Not at all. I might have done well to explain this better. An HDL should > > provide a clean isomorphism between its text representation and the > > underlying hardware representation, most particularly with clearly denoting > > the efficiency aspects of the implementation. Verilog does this now. > > What counts is the actually obtained efficiency, not the efficiency that > one thinks one sees from the code. If you can design at a higher level > (perhaps with less "isomorphism") without loss of efficiency, you > gain. > > I think the real issue is that you seem to deny that synthesis can > provide this kind of efficiency. We should be able to resolve this > through experiments. I will argue that behavioral synthesis leaves performance on the table, and can make for poorer technology mapping, because I've seen it. I can produce some evidence, but that's not what the real issue was for me. The real issue is I want to set the number of bits directly for number representation, period, without question, and never have them inferred. So that usual and unusual things can be done with representation, encoding, and for hardware or software interpretation of the meaning of the bits. I explicitly do NOT want a compiler attempting to assert meaning upon a field of bits, as having a range would do. Specifying a number of bits does not place an interpretation upon the use of a bitfield. A number range does. The simple case I pointed out before was microprocessor hardware where the bits are interpreted as both signed and unsigned, at the same or different times. Take an 8-bit example. If the interval [0, 255] is used to specify some byte-oriented function by implication, how will the byte be interpreted when the exact same flip-flops are expected to mean an interval of [-128,127] at a later time in the same hardware? IMHO, it's a serious elision error to specify a 1 byte register by implication, using a range, when in the course of using the bits comprising the byte, the interpretation could necessarily vary from that range. A set range, as a specification syntax or an HDL, appears to imply only one possible interpretation. But hardware is regularly used right now with dual interpretations, and sometimes more. Another example of multivariate bit-level interpretation is for a signum taken over a field of bits. One range does not elucidate all possible outcomes of representation. I will aver IMHO that this is literally an example of the problem inherent in attempting to apply strong data typing practices from software to hardware problems. Hardware is different. Now, apparently, I did have a misapprehension reading the essay that inferring bit-widths was the ONLY way arithmetic was to be supported in MyHDL ... if this is not so, then my bad, and I am sorry for that mistake. If slice notation is still available, and you don't need to do any casting to produce operands or results, then one possible use of MyHDL includes just ignoring number ranges that infer a specific finitude. > > I don't have the LRM handy, but I'm pretty sure Verilog-2001 deals with this > > by allowing signed reg variables, and these do what you would expect. This is > > distinct from the behavior of Verilog-1995, and its treatment of integer > > versus reg. Will try to run some compiles to check this out... > > No, the problem *is* with 2001 signed regs. See the example in the essay > when signed and unsigned are mixed and for example, 7 + -2 = -11 > instead of 5. I will check this in more detail. I never use signed regs, and this generally avoids all kinds of problems. So, I need to look further into this, and will take your example under advisement. Something does not sound right here, and I wonder if there's an easy explanation for the phenomenon you describe. > > (With apologies to Alan Perlis...) MyHDL programmers know the value of > > everything, but the cost of nothing. > > I think it's unfair to make such a statement (no matter how good it > sounds) unless you can prove it. So I challenge you to prove that > MyHDL-based designs are systematically less efficient than pure > Verilog designs. Actually this is pretty easy. This can be shown with logic designs that cannot be inferred from behavioral Verilog, but are still written in Verilog. Yes, inspecting the Verilog code would reveal enormous gulfs. > Note that "synthesising" MyHDL always requires conversion to Verilog > or VHDL first. So it may be sufficient to inspect the Verilog output > code :-) Otherwise, the MyHDL Cookbook may be a good start. It > provides examples, including one from Xilinx ISE, complete with > Verilog code after conversion and Xilinx ISE synthesis results. > > Here are my predictions: > > * there will be no systematic efficiency difference between MyHDL > and pure Verilog * there may be significant efficiency differences > between different synthesis tools * for some examples, I will be > able to design them at a higher level (e.g. using higher level data > types) than the Verilog counterpart, without loss of efficiency. I'll really have to disagree on the first part, sorry. I would agree that MyHDL could offer more "engineering efficiency," in the sense that using higher level abstractions can obtain a result that actually does work more readily. Even more particularly, modern silicon technology could be massively enabling in that a smaller amount of HDL text (that means more) can produce a greater amount of logic than in the case of Verilog alone. I could agree even that producing designs this way is powerful, and quite flexible. But! None of that "engineering efficiency" matters when you absolutely must obtain very high performance to meet specific system requirements, or to achieve compliance with a variety of industrial protocol or interoperability specifications. The temporal aspects of representation matter very much; I will argue again that hardware is different than software. There is an enormously wide difference between easily obtaining hardware that just works at all, and hardware that must meet specifications, or it's useless. > > So in this matter, I vigorously disagree. HDLs must be different than computer > > programming languages, because hardware is concurrent. > > Both HDLs and programming languages are simulated/executed on a sequential > computer. Therefore, all HDLs need some cleverness to maintain the concurrency > illusion. MyHDL builds this technique into a powerful programming > language, that's all. Some CAD tools actually use sequential computers in plural, so in that sense, the running programs really are concurrent ... > > Once the isomorphism between the HDL representation and the underlying > > hardware representation is lost, how can efficiency even be quantified? > > By synthesizing and analyzing the result. OK. > > If you want to synthesize real hardware, and not merely be a modeling language, then > > what the hardware is represented as, its efficiency, and mutability of > > purpose has primacy, in order for the efficiency argument to be true. > > Didn't get that, sorry. I was saying in other words what I'd said before: that bitfields in real hardware already now have mutable representations. The implication is that this is necessary in synthesis of real hardware, but not necessarily demanded in a modeling language used only for simulation. > > But, what I said is still true. Automatically inferring the number of bits > > required to represent numbers (integers), instead of allowing the designer to > > choose that implementation directly is a serious language design error. > > Again: MyHDL doesn't impose this *error* on you. You can set the bit > width directly, using slice notation. If that's all you need or want > to know, fine. I apparently misunderstood from the essay that slice notation was going away, that intervals were the only method, or that slices could not be used for arithmetic operations; my apologies again for not understanding. > Aren't we exaggerating a little here? You make it sound as though inferring > a bit width from an interval requires a complicated optimization. In > reality it's trivial of course. Yes of course this is easy. The inference of bit width is not what I am concerned about. My concern is the use of inference to assign values, which means the compiler interprets the meaning of the bits according to a value set. This implication allows only one interpretation, where two or several may truly be needed. If hardware is specified in a manner where one interpretation is pre-set, but others become necessary, then the literal HDL text describing this hardware is not accurate. > Since I started with HDL-based design in 1990, I always wished I'd > have a generalized concept of VHDL's integer subtypes. So I thought > and worked hard to implement it the way I want. So be sure the > "error" is intentional :-) I did understand this intention. I can also foresee that integer subtypes are quite useful for DSP hardware in particular. However, I have strenuously argued that it's an ill-suited representation or specification system for general-purpose hardware. > Jan Best, M |
From: Jan D. <ja...@ja...> - 2009-03-09 09:49:02
|
Michael Baxter wrote: > Specifying a number of bits does not place an interpretation upon the use of a > bitfield. A number range does. > The simple case I pointed out before was microprocessor hardware where the > bits are interpreted as both signed and unsigned, at the same or different > times. Take an 8-bit example. If the interval [0, 255] is used to specify some > byte-oriented function by implication, how will the byte be interpreted when > the exact same flip-flops are expected to mean an interval of [-128,127] at a > later time in the same hardware? > > IMHO, it's a serious elision error to specify a 1 byte register by > implication, using a range, when in the course of using the bits comprising the > byte, the interpretation could necessarily vary from that range. > > A set range, as a specification syntax or an HDL, appears to imply only one > possible interpretation. But hardware is regularly used right now with dual > interpretations, and sometimes more. Another example of multivariate bit-level > interpretation is for a signum taken over a field of bits. One range does not > elucidate all possible outcomes of representation. > > I will aver IMHO that this is literally an example of the problem inherent in > attempting to apply strong data typing practices from software to hardware > problems. Hardware is different. Strong typing's answer to mutability of meaning is data type conversion. Actually I consider this to be an appropriate usage of conversion functions, in contrast to the ones caused by VHDL's low level approach to arithmetic, as described in the essay. If you need to transport bits, use a bit vector. If a part of it should be interpreted as an interval in some module, convert it to an interval and do the operation. And so on. Seems to me this makes the purpose clearer than having to work with naked bits everywhere. There's actually a user example of this on the MyHDL website, where a complex number is transported over a data bus: http://www.myhdl.org/doku.php/projects:cplx_math > Now, apparently, I did have a misapprehension reading the essay that inferring > bit-widths was the ONLY way arithmetic was to be supported in MyHDL ... if > this is not so, then my bad, and I am sorry for that mistake. If slice > notation is still available, and you don't need to do any casting to produce > operands or results, then one possible use of MyHDL includes just ignoring > number ranges that infer a specific finitude. I will modify a sentence in the essay to make it absolutely clear that bit-width support is here to stay. In fact intbv's behavior has been quite stable for a number of years now, including both bit vector support and integer subtype-like support. I am now just taking some time to describe certain aspects which some may find useful. I have waited with this (as I expect it will be controversial) until I could back it up with working silicon. Thanks for the feedback. > I will check this in more detail. I never use signed regs, and this generally > avoids all kinds of problems. So, I need to look further into this, and will > take your example under advisement. Something does not sound right here, and > I wonder if there's an easy explanation for the phenomenon you describe. It's easy enough: the result is what you get when you apply Verilog's rules on this. I believe therefore that the rules are flawed. The basic issue is that when you mix signed and unsigned operands, all operands are implicitly cast to unsigned, instead of signed as it should be. >>> (With apologies to Alan Perlis...) MyHDL programmers know the value of >>> everything, but the cost of nothing. >> I think it's unfair to make such a statement (no matter how good it >> sounds) unless you can prove it. So I challenge you to prove that >> MyHDL-based designs are systematically less efficient than pure >> Verilog designs. > > Actually this is pretty easy. This can be shown with logic designs that cannot > be inferred from behavioral Verilog, but are still written in Verilog. It seems to me that you are questioning logic synthesis itself. And indeed, I can imagine that there are applications where it's not applicable or efficient. One shouldn't use MyHDL in such cases; its paradigm relies on efficient synthesis. What I'm claiming is that *if* synthesis is applicable, there will be no significant difference between a MyHDL and a Verilog-based design flow. I'm quite confident that there remain sufficient applications for which synthesis works very well. I'm also pretty sure that I'll be able to show you examples for which it actually works better than what can be expected from a human designer in a reasonable time. Jan -- Jan Decaluwe - Resources bvba - http://www.jandecaluwe.com Python as a hardware description language: http://www.myhdl.org |
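As a sketch of the "transport bits, convert where you compute" idea (this is not the code from the cplx_math page; the packing layout and helper names are made up for illustration):

    from myhdl import intbv

    def pack(re, im):
        # pack two 8-bit two's complement components into one 16-bit bus word
        bus = intbv(0)[16:]
        bus[8:] = re & 0xFF      # low byte: real part, as raw bits
        bus[16:8] = im & 0xFF    # high byte: imaginary part, as raw bits
        return bus

    def unpack(bus):
        # convert each byte back to its signed integer interpretation
        def to_signed(byte):
            return int(byte) - 256 if byte >= 128 else int(byte)
        return to_signed(bus[8:]), to_signed(bus[16:8])

    print(unpack(pack(-3, 100)))   # (-3, 100)

The bus itself stays a naked bit vector; the signed interpretation only appears in the module that actually computes with the values.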
From: Michael B. <ma...@cr...> - 2009-03-06 18:54:32
|
On Fri, 06 Mar 2009 17:49:14 +0100, Jan Decaluwe wrote > Michael Baxter wrote: > > > There are no data types in hardware! I should have said: "There are no software-like data types in hardware!" Verilog does have data types, but they are meant to indicate hardware, not to represent variables or structures in computer programs. > No, but that doesn't prove that there shouldn't be data types > in hardware description languages. Even a simple bit-vector > is an abstraction that "doesn't exist" as such in hardware. > Isn't this just a matter of what you're used to and what > you trust? Not at all. I might have done well to explain this better. An HDL should provide a clean isomorphism between its text representation and the underlying hardware representation, most particularly with clearly denoting the efficiency aspects of the implementation. Verilog does this now. > > Bits do what their hardware says to do, > > and it may be completely unrelated to signedness, or even to values. The rules > > of Verilog for arithmetic DO make sense: they do what hardware does, and that > > is good. > > I insist that Verilog's implicit casting of signed operands to unsigned > in an expression does *not* make sense. The rules could have been > different, and then they would make sense in terms of their > integer interpretation. But they would *still* do "what the hardware > does": the difference would be sign-bit extension (= routing) > instead of zero padding. I don't have the LRM handy, but I'm pretty sure Verilog-2001 deals with this by allowing signed reg variables, and these do what you would expect. This is distinct from the behavior of Verilog-1995, and its treatment of integer versus reg. Will try to run some compiles to check this out... > > That is why software is different than hardware, and it's why HDLs > > are completely different than computer programming languages. > > Needless to say, one of my goals with MyHDL is to prove that this is > false. Seems I'm not succeeding :-) (With apologies to Alan Perlis...) MyHDL programmers know the value of everything, but the cost of nothing. So in this matter, I vigorously disagree. HDLs must be different than computer programming languages, because hardware is concurrent. Virtually all computer programming languages lack concurrency; more acutely, they lack the explicit concurrency that is crucial to hardware. MyHDL exploits some novel features in Python very cleverly, and it's very impressive in that regard. I would not however describe Python as explicitly concurrent. This does not mean that high-level approaches are inapplicable to HDLs, IMHO. The issue is over how: the policies to be employed for representation, etc. > > Applying computer programming language models (such as data types) to hardware > > is a very dangerous idea for efficient, high-performance hardware engineering > > of logic. Programming models don't work the way hardware works. > > Many models don't. But with data types such as integers, booleans, > and enums there is no loss of efficiency, only a gain in clarity and > productivity. Once the isomorphism between the HDL representation and the underlying hardware representation is lost, how can efficiency even be quantified? If you want to synthesize real hardware, and not merely be a modeling language, then what the hardware is represented as, its efficiency, and mutability of purpose has primacy, in order for the efficiency argument to be true. > > As a designer, I want complete and total control over the number of bits > > represented in the (simulated or synthesized) hardware. A program should NOT > > do that for me, because a computer program is too stupid to understand what I > > want those bits to do, or to mean. > > In the case of integer arithmetic, which is really the sole topic of the > essay, that statement is evidently incorrect. The support of VHDL > integer subtypes in synthesis tools proves it. Chuckle. I'll evade that argument by describing VHDL as a computer programming language, which also happens to have a use for hardware design. Verilog is a lot closer to real hardware, and as an HDL the isomorphism between representation as text and as hardware is much stronger. But, what I said is still true. Automatically inferring the number of bits required to represent numbers (integers), instead of allowing the designer to choose that implementation directly, is a serious language design error. Best, M > Jan > > -- > Jan Decaluwe - Resources bvba - http://www.jandecaluwe.com > Python as a hardware description language: > http://www.myhdl.org |
From: Jan D. <ja...@ja...> - 2009-03-07 08:00:24
|
Michael Baxter wrote: > Not at all. I might have done well to explain this better. An HDL should > provide a clean isomorphism between its text representation and the > underlying hardware representation, most particularly with clearly denoting > the efficiency aspects of the implementation. Verilog does this now. What counts is the actually obtained efficiency, not the efficiency that one thinks one sees from the code. If you can design at a higher level (perhaps with less "isomorphism") without loss of efficiency, you gain. I think the real issue is that you seem to deny that synthesis can provide this kind of efficiency. We should be able to resolve this through experiments. > I don't have the LRM handy, but I'm pretty sure Verilog-2001 deals with this > by allowing signed reg variables, and these do what you would expect. This is > distinct from the behavior of Verilog-1995, and its treatment of integer > versus reg. Will try to run some compiles to check this out... No, the problem *is* with 2001 signed regs. See the example in the essay when signed and unsigned are mixed and for example, 7 + -2 = -11 instead of 5. > (With apologies to Alan Perlis...) MyHDL programmers know the value of > everything, but the cost of nothing. I think it's unfair to make such a statement (no matter how good it sounds) unless you can prove it. So I challenge you to prove that MyHDL-based designs are systematically less efficient than pure Verilog designs. Note that "synthesising" MyHDL always requires conversion to Verilog or VHDL first. So it may be sufficient to inspect the Verilog output code :-) Otherwise, the MyHDL Cookbook may be a good start. It provides examples, including one from Xilinx ISE, complete with Verilog code after conversion and Xilinx ISE synthesis results. Here are my predictions: * there will be no systematic efficiency difference between MyHDL and pure Verilog * there may be significant efficiency differences between different synthesis tools * for some examples, I will be able to design them at a higher level (e.g. using higher level data types) than the Verilog counterpart, without loss of efficiency. > So in this matter, I vigorously disagree. HDLs must be different than computer > programming languages, because hardware is concurrent. Both HDLs and programming languages are simulated/executed on a sequential computer. Therefore, all HDLs need some cleverness to maintain the concurrency illusion. MyHDL builds this technique into a powerful programming language, that's all. > Once the isomorphism between the HDL representation and the underlying > hardware representation is lost, how can efficiency even be quantified? By synthesizing and analyzing the result. > If you want to synthesize real hardware, and not merely be a modeling language, then > what the hardware is represented as, its efficiency, and mutability of > purpose has primacy, in order for the efficiency argument to be true. Didn't get that, sorry. > But, what I said is still true. Automatically inferring the number of bits > required to represent numbers (integers), instead of allowing the designer to > choose that implementation directly is a serious language design error. Again: MyHDL doesn't impose this *error* on you. You can set the bit width directly, using slice notation. If that's all you need or want to know, fine. Aren't we exaggerating a little here? You make it sound as though inferring a bit width from an interval requires a complicated optimization. In reality it's trivial of course. Since I started with HDL-based design in 1990, I always wished I had a generalized concept of VHDL's integer subtypes. So I thought and worked hard to implement it the way I want. So be sure the "error" is intentional :-) Jan -- Jan Decaluwe - Resources bvba - http://www.jandecaluwe.com Python as a hardware description language: http://www.myhdl.org |
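For the record, the "trivial" computation Jan refers to can be written down in a few lines (this is only an illustration of the idea, not MyHDL's actual implementation; max is taken as exclusive, as in intbv):

    def bit_width(min_val, max_val):
        hi = max_val - 1
        if min_val >= 0:
            return max(hi.bit_length(), 1)   # unsigned case
        n = 1                                # signed case: smallest two's complement width
        while not (-(2 ** (n - 1)) <= min_val and hi <= 2 ** (n - 1) - 1):
            n += 1
        return n

    print(bit_width(0, 10))    # 4
    print(bit_width(-6, 9))    # 5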
From: Neal B. <ndb...@gm...> - 2009-03-06 13:22:23
|
Michael Baxter wrote: > IMHO, though informed with years of hardware implementation experience, > automating bit-widths should be 100% a non-goal. It should never be done > under any circumstances whatsoever. > > IMHO, this is a serious failing of MyHDL. > Actually, I agree. That's why I wrote about my C++ approach to fixed-pt integers. There, the approach is: For a unary op, the output size is the same as the input. For a binary op, the 2nd operand is converted to the first, then the output size is the same as the input. (Actually, fixed-pt has 2 attributes: a number of integer bits and a number of frac bits, or equivalently a size and a binary point) If you want to multiply 8bit x 8bit and get a 16bit result, what you do is convert both operands to 16bit first. The reason is, I really don't think it's possible to have a machine correctly deal with these issues in all cases. Perhaps more important, though, is the question: even if it is possible for some kind of algorithm to decide the bit widths, is this really desirable? I'm not sure about that. Maybe the designer really should be aware of these details. |
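A toy restatement of that sizing rule in Python (this is not Neal's actual C++ class; overflow and rounding handling are deliberately omitted):

    class Fixed:
        # value is stored as a raw integer with an implicit binary point at frac_bits
        def __init__(self, int_bits, frac_bits, raw=0):
            self.int_bits, self.frac_bits, self.raw = int_bits, frac_bits, raw

        def converted_to(self, other):
            # re-quantize to the other operand's format (truncating; no saturation)
            shift = other.frac_bits - self.frac_bits
            raw = self.raw << shift if shift >= 0 else self.raw >> -shift
            return Fixed(other.int_bits, other.frac_bits, raw)

        def __add__(self, other):
            # rule: the 2nd operand is converted to the 1st; the result keeps the 1st's size
            b = other.converted_to(self)
            return Fixed(self.int_bits, self.frac_bits, self.raw + b.raw)

    a = Fixed(8, 0, 100)    # 100 in 8.0 format
    b = Fixed(4, 4, 24)     # 1.5 in 4.4 format
    c = a + b               # 101 in 8.0 format: the fractional part is truncated away

The point is the policy: the result size is fixed by the operands' declared formats, so any widening has to be written explicitly by the designer.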
From: Neal B. <ndb...@gm...> - 2009-03-06 13:45:11
Attachments:
run_time_fixed_pt.hpp
|
In case anyone is interested, here is my run_time_fixed_pt class. (run_time because bit widths are set at run time, so I can use it from python. I have another version that sets widths at compile time as template parameters). |
From: Felton C. <chr...@gm...> - 2009-03-06 14:14:58
|
> A collection of bits can represent well more than a single range. A > 32-bit register can represent 32 bit flags, each having a range > [0,1]. Or it > could represent (4) 8-bit values, each having a range [0,255], > [-128,127] or a > mix. Or it could represent two fields, (6) 1-bit flag values > concatenated to a > 26-bit ordinal. Or, or, or... any number of alternative > representations > involving bit-level concurrency that cannot be represented with a > single > interval specification. This kind of thing is done all the time in > hardware. I think this topic is only discussing the case when a collection of bits is acting as a number, an integer, and not as some other "data type" such as flags, logic, etc. You wouldn't use min and max when defining a collection of bits that is not used for integer arithmetic. I think there will be some different opinions in the design approach, but if you wanted an 8 bit register that was used as simple flags, you could do that the conventional way without min and max. As the essay states in the beginning, this is an "essay about elementary arithmetic with integers". This topic is about representing integers (when the designer intends integers) and not about removing basic bit vectors for other uses. |
From: Jan D. <ja...@ja...> - 2009-03-10 14:46:27
|
Neal Becker wrote: > Jan Decaluwe wrote: > >> This is an essay that I wanted to write for a long time. >> It describes what I think is wrong with integer arithmetic >> in VHDL and Verilog, and why MyHDL provides a solution. >> >> Before releasing it to the general public, I'm interested >> to hear what you think about it. >> >> http://www.jandecaluwe.com/hdldesign/counting.html >> > > If you really want to automate the bit widths, perhaps some kind of interval > arithmetic is wanted? Note that the essay describes an existing implementation, not something that I would want for some future version. -- Jan Decaluwe - Resources bvba - http://www.jandecaluwe.com Python as a hardware description language: http://www.myhdl.org |
From: Pieter <pie...@gm...> - 2009-03-06 13:36:17
|
> If you want to multiply 8bit x 8bit and result in 16bit, what you do in > convert both operands to 16bit first. Hmm, for me it seems more logical that you should have 16 bits at the left hand side of the assignment, to get a 16 bit result? result_16 = op_a_8 * op_b_8; 2009/3/6 Neal Becker <ndb...@gm...>: > Michael Baxter wrote: > >> IMHO, though informed with years of hardware implementation experience, >> automating bit-widths should be 100% a non-goal. It should never be done >> under any circumstances whatsoever. >> >> IMHO, this is a serious failing of MyHDL. >> > Actually, I agree. That's why I wrote about my c++ approach to fixed-pt > integers. There, the approach is: > > For unary op, the output size is same as input > For binary op, the 2nd operand is converted to the first, then the output > size is same as input > > (Actually, fixed-pt has 2 attributes: a number of integer bits and a number > of frac bits, or equivalently a size and a binary point) > > If you want to multiply 8bit x 8bit and result in 16bit, what you do in > convert both operands to 16bit first. > > The reason is, I really don't think it's possible to have a machine > correctly deal with these issues in all cases. > > Perhaps more important, though, is the question of even if it is possible > for some kind of algorithm to decide the bit widths, is this really > desirable? I'm not sure about that. Maybe the designer really should be > aware of these details. -- Pieter Cogghe Iepenstraat 43 9000 Gent 0487 10 14 21 |
From: Jan D. <ja...@ja...> - 2009-03-10 14:50:21
|
Pieter wrote: >> If you want to multiply 8bit x 8bit and result in 16bit, what you do in >> convert both operands to 16bit first. > > Hmm, for me it seems more logical that you should have 16 bits at the > left hand side of the assignment, to get a 16 bit result? > > result_16 = op_a_8 * op_b_8; There we go :-) See? If you start from the bit-widths, nobody agrees and the result always seems wrong. I think bit widths should be such that an integer interpretation works as expected. However, calling this "automating bit-widths" is really too much honour. It's close to trivial. The trick is in the mindset. Jan -- Jan Decaluwe - Resources bvba - http://www.jandecaluwe.com Python as a hardware description language: http://www.myhdl.org |
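A small sketch of how that looks with intbv, where each operand and the target carry their own interval and the product is an ordinary Python integer in between (operand names borrowed from Pieter's example; the bounds are illustrative):

    from myhdl import intbv

    op_a_8 = intbv(200, min=0, max=256)   # 8-bit operand, interval [0, 256)
    op_b_8 = intbv(250, min=0, max=256)   # 8-bit operand
    product = op_a_8 * op_b_8             # plain integer arithmetic: 50000, never truncated
    result_16 = intbv(product, min=0, max=256 * 256)   # 16-bit target; an out-of-range value would raise ValueError
    print(len(result_16))                 # 16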
From: Michael B. <ma...@cr...> - 2009-03-06 14:27:03
|
Yes, but I intend for integers to represent fields of bits, and to do numerous other things that are not constrained by intervals. In fact, some of those things (involving plural, concurrent bitfields) can be done purely with conventional arithmetic operations. Normal integers, but not being used normally. A 32-bit microprocessor (comprised of hardware) can have 32-bit registers that represent integers that are not signed, or that are signed, both at the same time. It all depends on what software does to interpret the meaning of those 32-bit register bits. What's the correct interval representation for the design of the registers in that hardware? Another case I neglected is also salient. In some very useful encoding systems, collections of bits can also mean [-1,1] values, which has a wonderful representation in bit-form, but not as integers, nor as intervals. Best, M On Fri, 6 Mar 2009 08:14:39 -0600, Felton Christopher wrote > > A collection of bits can well more than a single range in > representation. A > 32-bit register can represent 32 bit flags, each having a range > [0,1]. Or it > could represent (4) 8-bit values, each having a range [0,255], > [-128,127] or a > mix. Or it could represent two fields, (6) 1-bit flag values > concatenated to a > 26-bit ordinal. Or, or, or... any number of alternative > representations > involving bit-level concurrency that cannot be represented with a > single > interval specification. This kind of thing is done all the time in > hardware. > > Think this topic is only discussing when a collection of bits is > acting as a number, integer, and not some other "data type" such as > flags, logic, etc. You wouldn't use the min, max when defining a > collection of bits that are not used for integer arithmetic. > > I think there will be some different opinions in the design approach > but if you wanted an 8 bit register that was used as simple flags, > you could do that the conventional way without min and max. > > As the essay states in the beginning "essay about elementary > arithmetic with integers". This topic is about representing > integers > (when the designer intends integers) and not removing basic bit > vectors for other uses. |
From: Felton C. <chr...@gm...> - 2009-03-06 15:00:16
|
On Mar 6, 2009, at 8:26 AM, Michael Baxter wrote: > Yes, but I intend for integers to represent fields of bits, and to > do numerous > other things that are not constrained by intervals. In fact, some of > those > things (involving plural, concurrent bitfields) can be done purely > with > conventional arithmetic operations. Normal integers, but not being > used normally. I disagree; here we are specifically talking about integers constrained by intervals: integers being used as integers. Particular to MyHDL, you can still simply define a bit vector based on the number of bits. I would think of the constrained integer bit vectors as a subset. And it would be useful to design with integers, when integers are intended, instead of dealing with a more basic type. I don't believe this is intended to force min and max for any and all bit vectors. I think the essay makes a good argument to use constrained integer bit vectors when the design is using integers. > > A 32-bit microprocessor (comprised of hardware) can have 32-bit > registers that > represent integers that are not signed, or that are signed, both at > the same > time. It all depends on what software does to interpret the meaning > of those > 32-bit register bits. What's the correct interval representation for > the > design of the registers in that hardware? This is a great example: you have a generic container. Presumably you may not want to use an integer to represent it in its basic form. But from the perspective of different computation elements it could be desirable that the register is presented as a constrained integer. In the Verilog case it would be the same type; in the VHDL case the register may be a std_logic but the adder may deal with it as a signed type. A similar scenario could occur in MyHDL. > > Another case I neglected is also salient. In some very useful encoding > systems, collections of bits can also mean [-1,1] values, which has a > wonderful representation in bit-form, but not as integers, nor as > intervals. This is another good example of the question of whether a constrained integer is useful or not. Chris |
From: Jan D. <ja...@ja...> - 2009-03-06 16:02:52
|
Michael Baxter wrote:
> Yes, but I intend for integers to represent fields of bits, and to do numerous
> other things that are not constrained by intervals. In fact, some of those
> things (involving plural, concurrent bitfields) can be done purely with
> conventional arithmetic operations. Normal integers, but not being used normally.
>
> A 32-bit microprocessor (comprised of hardware) can have 32-bit registers that
> represent integers that are not signed, or that are signed, both at the same
> time. It all depends on what software does to interpret the meaning of those
> 32-bit register bits. What's the correct interval representation for the
> design of the registers in that hardware?

There is none. You use a bit vector in this case, design at the bit level, and leave the integer interpretation to software:

a = intbv(0)[32:]

I hope it's clear from the essay that intbv is a dual-mode type, where you have the *option* to use it as a true integer if you want. I realize that you suggest that such use cases don't exist. That sounds absurd to me, but of course I've not only used Verilog but also VHDL :-)

Jan

--
Jan Decaluwe - Resources bvba - http://www.jandecaluwe.com
Python as a hardware description language: http://www.myhdl.org
|
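A short sketch of the two modes side by side; the bound values are arbitrary, and the error type on an out-of-range operation is assumed here to be ValueError:

    from myhdl import intbv

    a = intbv(0)[32:]                 # bit-vector mode: 32 bits, interval [0, 2**32)
    b = intbv(0, min=-13, max=47)     # integer mode: the interval states the design intent

    b += 46                           # still inside [-13, 47): fine
    try:
        b += 1                        # would reach 47: the bounds check trips
    except ValueError as err:
        print("caught:", err)

In bit-vector mode the only constraint is the width; in integer mode the declared interval is checked on every update during simulation.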
From: Thomas H. <th...@ct...> - 2009-03-10 16:09:00
|
Jan Decaluwe schrieb:
> This is an essay that I wanted to write for a long time.
> It describes what I think is wrong with integer arithmetic
> in VHDL and Verilog, and why MyHDL provides a solution.
>
> Before releasing it to the general public, I'm interested
> to hear what you think about it.
>
> http://www.jandecaluwe.com/hdldesign/counting.html

Jan, let me say that I enjoyed your essay very much. I have done some designs in VHDL but did not have to use arithmetic in the past, other than the usual '+1' for counters. MyHDL to the rescue when I had to implement a serial pipelined divider: your article explains very well the issues that I could avoid with it.

--
Thanks,
Thomas
|
From: Andrew L. <bs...@al...> - 2009-03-11 18:38:27
|
Jan Decaluwe wrote:
> This is an essay that I wanted to write for a long time.
> It describes what I think is wrong with integer arithmetic
> in VHDL and Verilog, and why MyHDL provides a solution.
>
> Before releasing it to the general public, I'm interested
> to hear what you think about it.
>
> http://www.jandecaluwe.com/hdldesign/counting.html

The essay sums up some of my gripes about the mishmash that is the whole abstract behavior of numbers problem. Nicely done. I know that I'll point more than a few newbies at it.

I do have a bit of a gripe about statements like: "This situation would not persist without the widespread support of the designer community."

Ummmmm, no. Most of the designers I know of hate the way a *lot* of things are done in Verilog and VHDL. Would we be using MyHDL if that were not the case? ;) However, an individual designer has very limited options to push back into these standards. See SystemVerilog, for example. EDA vendors have a high incentive and large resources to push/implement what is *profitable* to them; usefulness to the user is an orthogonal consideration.

However, if we're talking about abstract behavior of numbers, how about some fixed-point support? One of the nice things about VHDL is the ability to specify negative indices that align with negative powers of 2. Verilog doesn't (or at least didn't) provide even this level of support.

Writing, say, a delta-sigma modulator in any HDL language is kind of a pain because we don't have an abstract "fixed point number" that you can assert against. Adding extra bits at either end to cover different issues (Did it overflow? I need more integer bits. Is the error too large? I need more fractional bits.) is a pain when it interacts with sign bits.

Unrelated note: I *STILL* hate c.next = <some expression>

The fact that c = <some expression> often silently does the wrong thing when you really meant c.next = <some expression> is very un-Pythonic.

Did Python 3K enable some form of introspection that could do something about this?

-a
|
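Absent an abstract fixed-point type, a common workaround is to keep the scaling discipline explicit, as in this toy Q-format sketch in plain Python (the helper names and the 8-bit fraction are invented for illustration, not a MyHDL feature):

    FRAC = 8                                  # fractional bits: resolution 1/256

    def to_fixed(x):
        return int(round(x * (1 << FRAC)))    # real -> scaled integer

    def to_real(n):
        return n / float(1 << FRAC)           # scaled integer -> real

    a = to_fixed(1.5)        # 384
    b = to_fixed(-0.25)      # -64
    acc = a + b              # addition works as-is when both operands share FRAC
    prod = (a * b) >> FRAC   # multiplication needs a rescale to stay in format
    print(to_real(acc), to_real(prod))   # 1.25 -0.375

This keeps the arithmetic in plain integers, but tracking guard bits, overflow and sign by hand is exactly the pain point an abstract fixed-point number would remove.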
From: Jan D. <ja...@ja...> - 2009-03-12 06:40:53
|
Andrew Lentvorski wrote:
> Jan Decaluwe wrote:
>> This is an essay that I wanted to write for a long time.
>> It describes what I think is wrong with integer arithmetic
>> in VHDL and Verilog, and why MyHDL provides a solution.
>>
>> Before releasing it to the general public, I'm interested
>> to hear what you think about it.
>>
>> http://www.jandecaluwe.com/hdldesign/counting.html
>
> The essay sums up some of my gripes about the mishmash that is the whole
> abstract behavior of numbers problem. Nicely done. I know that I'll
> point more than a few newbies at it.
>
> I do have a bit of a gripe about statements like:
> "This situation would not persist without the widespread support of the
> designer community."
>
> Ummmmm, no. Most of the designers I know of hate the way a *lot* of
> things are done in Verilog and VHDL. Would we be using MyHDL if that
> were not the case? ;) However, an individual designer has very limited
> options to push back into these standards. See SystemVerilog, for
> example.

I agree that "widespread support" sounds a bit harsh and perhaps too "active". Perhaps "compliance" is a better description?

On the other hand, most designers I know *personally* also favor a more abstract approach, but overall I have every indication that this is currently a minority view. Otherwise, wouldn't VHDL integer subtypes be used more than they are? Wouldn't people complain more about Verilog's bizarre integer interpretation, instead of silently obeying the rules?

And even in the MyHDL community, I don't expect everyone to agree. Look at some responses in this thread. There's a reason I put this essay on my personal website instead of myhdl.org. I expect controversy (and I'm just starting :-)). It's not necessary to agree with my viewpoints to be able to use MyHDL successfully, so I shouldn't alienate people unnecessarily.

Regards,

Jan

--
Jan Decaluwe - Resources bvba - http://www.jandecaluwe.com
Python as a hardware description language: http://www.myhdl.org
|
From: Jan D. <ja...@ja...> - 2009-03-12 06:45:24
|
Andrew Lentvorski wrote:
> However, if we're talking about abstract behavior of numbers, how about
> some fixed-point support? One of the nice things about VHDL is the
> ability to specify negative indices that align with negative powers of
> 2. Verilog doesn't (or at least didn't) provide even this level of support.
>
> Writing, say, a delta-sigma modulator in any HDL language is kind of a
> pain because we don't have an abstract "fixed point number" that you can
> assert against. Adding extra bits at either end to cover different
> issues (Did it overflow? I need more integer bits. Is the error too
> large? I need more fractional bits.) is a pain when it interacts with
> sign bits.

I have no experience with this. Is there synthesis support for it?

> Unrelated note: I *STILL* hate c.next = <some expression>
>
> The fact that c = <some expression> often silently does the wrong thing
> when you really meant c.next = <some expression> is very un-Pythonic.
>
> Did Python 3K enable some form of introspection that could do something
> about this?

Some MyHDL decorators (always_comb) use introspection already. So we could use them to do checks like the one you propose. Of course, this would only work when decorators are used to create generators.

Jan

--
Jan Decaluwe - Resources bvba - http://www.jandecaluwe.com
Python as a hardware description language: http://www.myhdl.org
|
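For readers following the c.next discussion, a minimal sketch of the trap; the adder module here is invented, and only the two marked lines matter:

    from myhdl import Signal, intbv, always_comb

    def adder(a, b, c):

        @always_comb
        def logic():
            c.next = a + b    # correct: schedules an update on Signal c
            # c = a + b       # wrong: would only bind a local name; c never changes

        return logic

    a, b = [Signal(intbv(0, min=0, max=256)) for _ in range(2)]
    c = Signal(intbv(0, min=0, max=511))
    inst = adder(a, b, c)

An introspection-based check along the lines Jan describes would have to notice plain assignments to names that are Signals in the enclosing scope and warn about them.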
From: David B. <da...@we...> - 2009-03-12 08:03:59
|
Jan Decaluwe wrote:
> Andrew Lentvorski wrote:
>
>> However, if we're talking about abstract behavior of numbers, how about
>> some fixed-point support? One of the nice things about VHDL is the
>> ability to specify negative indices that align with negative powers of
>> 2. Verilog doesn't (or at least didn't) provide even this level of support.
>>
>> Writing, say, a delta-sigma modulator in any HDL language is kind of a
>> pain because we don't have an abstract "fixed point number" that you can
>> assert against. Adding extra bits at either end to cover different
>> issues (Did it overflow? I need more integer bits. Is the error too
>> large? I need more fractional bits.) is a pain when it interacts with
>> sign bits.
>
> I have no experience with this. Is there synthesis support for it?

I haven't used that feature myself, but I believe it is synthesisable in VHDL. It just allows indexes that don't start at 0. However, I expect it would be a little awkward to use the same idea with intbv, since negative indexes have a different meaning for slicing in Python.

>> Unrelated note: I *STILL* hate c.next = <some expression>
>>
>> The fact that c = <some expression> often silently does the wrong thing
>> when you really meant c.next = <some expression> is very un-Pythonic.
>>
>> Did Python 3K enable some form of introspection that could do something
>> about this?
>
> Some MyHDL decorators (always_comb) use introspection already.
> So we could use them to do checks like the one you propose.
> Of course, this would only work when decorators are used to
> create generators.
>
> Jan
|
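One way to get the effect being discussed without negative indexes is to keep the offset explicit, as in this toy sketch; the helper and the 8-bit layout are invented for illustration and are not part of MyHDL:

    from myhdl import intbv

    FRAC = 4                       # bits below the binary point, i.e. logical indexes -4..-1
    word = intbv(0)[8:]            # physically just 8 bits, indexes 0..7

    def set_bit(pos, val):
        word[pos + FRAC] = val     # logical position -4..3 -> physical 0..7

    set_bit(2, True)               # weight 2**2
    set_bit(-1, True)              # weight 2**-1
    print(bin(int(word)), int(word) / float(1 << FRAC))   # 0b1001000 4.5

Whether that is pleasant enough compared to VHDL's "3 downto -4" style ranges is another question, of course.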