Re: [myhdl-list] essay about integer arithmetic
From: Michael B. <ma...@cr...> - 2009-03-07 09:47:16
Jan and All,

Due to IRL work, I will need a lot more time to produce any serious scientific results in response to some requests made here. I just do not have time right now, whereas Jan's essay motivated me to respond, at least initially. In the interim, I'll attempt additional elucidation of my viewpoint, responding in more depth to this post.

Before I do, let me say this: I have no intention whatsoever to trash anybody here. It's more that I've become a non-believer in virtually all high-level HDLs, and have a really different philosophical outlook on what may comprise high-level design. Please also understand that, due to its extremely clean design compared with others, owing in large part to its Pythonic character, MyHDL is certainly my favorite among high-level HDLs.

On Sat, 07 Mar 2009 09:00:01 +0100, Jan Decaluwe wrote:

> Michael Baxter wrote:
>
> > Not at all. I might have done well to explain this better. An HDL should
> > provide a clean isomorphism between its text representation and the
> > underlying hardware representation, most particularly with clearly denoting
> > the efficiency aspects of the implementation. Verilog does this now.
>
> What counts is the actually obtained efficiency, not the efficiency that
> one thinks one sees from the code. If you can design at a higher level
> (perhaps with less "isomorphism") without loss of efficiency, you gain.
>
> I think the real issue is that you seem to deny that synthesis can
> provide this kind of efficiency. We should be able to resolve this
> through experiments.

I will argue that behavioral synthesis leaves performance on the table, and can make for poorer technology mapping, because I've seen it. I can produce some evidence, but that's not the real issue for me. The real issue is that I want to set the number of bits directly for number representation, period, without question, and never have them inferred. That way, usual and unusual things can be done with representation, encoding, and with hardware or software interpretation of the meaning of the bits. I explicitly do NOT want a compiler attempting to assert meaning upon a field of bits, as having a range would do. Specifying a number of bits does not place an interpretation upon the use of a bitfield. A number range does.

The simple case I pointed out before was microprocessor hardware where the bits are interpreted as both signed and unsigned, at the same or different times. Take an 8-bit example. If the interval [0, 255] is used to specify some byte-oriented function by implication, how will the byte be interpreted when the exact same flip-flops are expected to mean an interval of [-128, 127] at a later time in the same hardware? IMHO, it's a serious elision error to specify a one-byte register by implication, using a range, when in the course of using the bits comprising the byte, the interpretation could necessarily vary from that range. A set range, as a specification syntax in an HDL, appears to imply only one possible interpretation. But hardware is regularly used right now with dual interpretations, and sometimes more. Another example of multivariate bit-level interpretation is a signum taken over a field of bits. One range does not elucidate all possible outcomes of representation. I will aver, IMHO, that this is literally an example of the problem inherent in attempting to apply strong data typing practices from software to hardware problems. Hardware is different.
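To make that byte example concrete, here is a minimal Python sketch using MyHDL's intbv. The as_signed helper is hypothetical and written out only for illustration (if I recall correctly, newer MyHDL releases offer an intbv.signed() method for this kind of reinterpretation); the point is simply that the same eight flip-flops carry two different values depending on how the reader of the bits chooses to interpret them.

```python
from myhdl import intbv

def as_signed(bits, width):
    # Hypothetical helper: reinterpret an unsigned bit pattern as a
    # two's-complement signed value of the given width.
    val = int(bits)
    if val >= (1 << (width - 1)):
        val -= (1 << width)
    return val

# Eight bits declared by width alone (slice notation); no value range intended.
reg = intbv(0xFB)[8:]

print(int(reg))           # 251 -- the byte read as unsigned, interval [0, 255]
print(as_signed(reg, 8))  # -5  -- the same bits read as signed, interval [-128, 127]
```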
Now, apparently, I did have a misapprehension in reading the essay that inferring bit widths was the ONLY way arithmetic was to be supported in MyHDL ... if this is not so, then my bad, and I am sorry for that mistake. If slice notation is still available, and you don't need to do any casting to produce operands or results, then one possible use of MyHDL includes just ignoring number ranges that infer a specific finitude.

> > I don't have the LRM handy, but I'm pretty sure Verilog-2001 deals with this
> > by allowing signed reg variables, and these do what you would expect. This is
> > distinct from the behavior of Verilog-1995, and its treatment of integer
> > versus reg. Will try to run some compiles to check this out...
>
> No, the problem *is* with 2001 signed regs. See the example in the essay
> when signed and unsigned are mixed and for example, 7 + -2 = -11
> instead of 5.

I will check this in more detail. I never use signed regs, and this generally avoids all kinds of problems. So I need to look further into this, and will take your example under advisement. Something does not sound right here, and I'm wondering if there's an easy explanation for the phenomenon you describe.

> > (With apologies to Alan Perlis...) MyHDL programmers know the value of
> > everything, but the cost of nothing.
>
> I think it's unfair to make such a statement (no matter how good it
> sounds) unless you can prove it. So I challenge you to prove that
> MyHDL-based designs are systematically less efficient than pure
> Verilog designs.

Actually this is pretty easy. This can be shown with logic designs that cannot be inferred from behavioral Verilog, but are still written in Verilog. Yes, inspecting the Verilog code would reveal enormous gulfs.

> Note that "synthesising" MyHDL always requires conversion to Verilog
> or VHDL first. So it may be sufficient to inspect the Verilog output
> code :-) Otherwise, the MyHDL Cookbook may be a good start. It
> provides examples, including one from Xilinx ISE, complete with
> Verilog code after conversion and Xilinx ISE synthesis results.
>
> Here are my predictions:
>
> * there will be no systematic efficiency difference between MyHDL
>   and pure Verilog
> * there may be significant efficiency differences between different
>   synthesis tools
> * for some examples, I will be able to design them at a higher level
>   (e.g. using higher-level data types) than the Verilog counterpart,
>   without loss of efficiency.

I'll really have to disagree on the first part, sorry. I would agree that MyHDL could offer more "engineering efficiency," in the sense that using higher-level abstractions can obtain a result that actually works more readily. Even more particularly, modern silicon technology could be massively enabling, in that a smaller amount of HDL text (that means more) can produce a greater amount of logic than is the case for Verilog alone. I could agree even that producing designs this way is powerful, and quite flexible.

But! None of that "engineering efficiency" matters when you absolutely must obtain very high performance to meet specific system requirements, or to achieve compliance with a variety of industrial protocol or interoperability specifications. The temporal aspects of representation matter very much; I will argue again that hardware is different than software. There is an enormously wide difference between easily obtaining hardware that just works at all, and hardware that must meet specifications, or it's useless.
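For anyone who wants to run the experiment Jan proposes, here is a minimal sketch of the convert-and-inspect step, assuming MyHDL's toVerilog converter; the inc8 module and its signal names are purely illustrative, not taken from anyone's actual design. The generated Verilog can then be inspected directly or pushed through a synthesis tool and compared against hand-written RTL.

```python
from myhdl import Signal, intbv, always, toVerilog

def inc8(clk, enable, count):
    # Illustrative 8-bit counter, used only to exercise the conversion flow.
    @always(clk.posedge)
    def logic():
        if enable:
            count.next = (count + 1) % 256
    return logic

clk = Signal(bool(0))
enable = Signal(bool(0))
count = Signal(intbv(0)[8:])

# Elaborate and convert; this should emit a Verilog file (inc8.v) whose RTL
# can be read directly or synthesized for area and timing numbers.
toVerilog(inc8, clk, enable, count)
```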
> > So in this matter, I vigorously disagree. HDLs must be different than
> > computer programming languages, because hardware is concurrent.
>
> Both HDLs and programming languages are simulated/executed on a sequential
> computer. Therefore, all HDLs need some cleverness to maintain the concurrency
> illusion. MyHDL builds this technique into a powerful programming
> language, that's all.

Some CAD tools actually use sequential computers in the plural, so in that sense, the running programs really are concurrent ...

> > Once the isomorphism between the HDL representation and the underlying
> > hardware representation is lost, how can efficiency even be quantified?
>
> By synthesizing and analyzing the result.

OK.

> > If you want to synthesize real hardware, and not merely be a modeling
> > language, then what the hardware is represented as, its efficiency, and
> > mutability of purpose has primacy, in order for the efficiency argument
> > to be true.
>
> Didn't get that, sorry.

I was saying in other words what I'd said before: that bitfields in real hardware already have mutable representations. The implication is that this mutability is necessary in synthesis of real hardware, but not necessarily demanded in a modeling language used for simulation only.

> > But, what I said is still true. Automatically inferring the number of bits
> > required to represent numbers (integers), instead of allowing the designer to
> > choose that implementation directly, is a serious language design error.
>
> Again: MyHDL doesn't impose this *error* on you. You can set the bit
> width directly, using slice notation. If that's all you need or want
> to know, fine.

I apparently misunderstood from the essay that slice notation was going away, that intervals were the only method, or that slices could not be used for arithmetic operations; my apologies again for not understanding. (A small sketch contrasting the two declaration styles is in the P.S. below.)

> Aren't we exaggerating a little here? You make it sound as though inferring
> a bit width from an interval requires a complicated optimization. In
> reality it's trivial of course.

Yes, of course this is easy. The inference of bit width is not what I am concerned about. My concern is the use of inference to provide values, which means the compiler interprets the meaning of the bits according to a value set. This implication allows only one interpretation, where two or several may truly be needed. If hardware is specified in a manner where one interpretation is pre-set, but others become necessary, then the literal HDL text describing this hardware is not accurate.

> Since I started with HDL-based design in 1990, I always wished I'd
> have a generalized concept of VHDL's integer subtypes. So I thought
> and worked hard to implement it the way I want. So be sure the
> *error* is intentional :-)

I did understand this intention. I can also foresee that integer subtypes are quite useful for DSP hardware in particular. However, I have strenuously argued that it is an ill-suited representation or specification system for general-purpose hardware.

> Jan

Best,
M
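P.S. For anyone following along, here is a minimal sketch of the two declaration styles as I understand them, using MyHDL's intbv; the names are just for illustration. Slice notation fixes the width directly, an interval lets the width be inferred, and ordinary integer arithmetic works on the values either way.

```python
from myhdl import Signal, intbv

# Style 1: set the bit width directly with slice notation; only a width is
# imposed (8 bits, default range [0, 256)), not a particular meaning.
a = Signal(intbv(0)[8:])

# Style 2: declare an interval and let the bit width be inferred from it
# (also 8 bits here, but with a signed interpretation implied by the range).
b = Signal(intbv(0, min=-128, max=128))

# Arithmetic is plain integer arithmetic on the values; bounds are only
# checked when a result is assigned back into a bounded intbv or Signal.
x = intbv(7, min=-16, max=16)
y = intbv(-2, min=-16, max=16)
print(x + y)  # 5 -- no signed/unsigned mixing surprise at the value level
```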