Re: [myhdl-list] fxintbv dependency on dspsim
From: Jan L. <jan...@et...> - 2009-12-15 16:57:31
Hi,
I haven't yet looked into Christopher's new version of the lib, but in
general I think a perfect solution would be the possibility to write
user-defined datatypes that do auto-promotion and all kinds of stuff, and
to give them some kind of "synthesis rules" that define a mapping to some
lower-level bit vector (maybe intbv). Then Signal stays a little more
lightweight. With such a solution one could even write synthesizable
floating-point types or synthesizable records.
Jan

On 15.12.2009, at 17:46, Jan Decaluwe wrote:
> I'm not a fixed-point specialist, but am I correct in thinking that
> things would be considerably simpler if there were a fixed-point
> datatype that we could map to in VHDL/Verilog?
>
> I seem to have read that such a (synthesizable) datatype exists for
> VHDL, but not yet for Verilog. But even if it's VHDL-only, it seems
> like an interesting path to investigate?
>
> Felton Christopher wrote:
>> Jan L.,
>>
>> After our conversation I was thinking about this a little more. I
>> think I came up with a better scheme. This still uses the elaboration
>> phase to determine the size/alignment/type, but I think it is clearer.
>> This way you can use an arbitrarily complex expression. But the
>> expression will have to be outside the generators (behavioral
>> statements) so that the elaboration phase can determine the correct
>> type/size, and also inside the generator for conversion/simulation.
>>
>> Example:
>>
>> def fx_mix_ex(a, b, c, d, e):
>>
>>     e.Q = (b-a)*c + d
>>
>>     @always_comb
>>     def rtl():
>>         e.next = (b-a)*c + d
>>
>>     return rtl
>>
>> But I have hit some issues with the base myhdl.Signal component (more
>> than likely a misunderstanding on my part and an issue in my stuff).
>> I will try to resolve these and let you know how it goes. Attached is
>> an updated version with the "Q" property. The "Q" property also
>> removes all the *Rep methods.
>>
>> Thanks again for the previous patches.
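
A rough sketch for context (the helper below is made up for illustration
and is not part of Christopher's library): in both schemes discussed
above, a fixed-point format given by integer and fractional word lengths
ends up stored in a plain intbv-backed Signal, and the binary point is
only designer bookkeeping.

    from myhdl import Signal, intbv

    def fixed_bv(iwl, fwl):
        # Hypothetical helper: the intbv storage for a signed fixed-point
        # number with iwl integer bits and fwl fractional bits. The binary
        # point itself is not stored anywhere; it only lives in the design
        # notes and in the scaling applied when interpreting the value.
        nbits = 1 + iwl + fwl                       # sign + integer + fraction
        return intbv(0, min=-2**(nbits - 1), max=2**(nbits - 1))

    # An S0.15-style coefficient and the Signal that would carry it:
    c_bv = fixed_bv(0, 15)
    c = Signal(c_bv)
    assert len(c_bv) == 16                          # 16-bit word, -32768 .. 32767
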
>> On Dec 10, 2009, at 9:52 AM, Christopher L. Felton wrote:
>>
>>> Jan Langer wrote:
>>>> Hello Christopher,
>>>>
>>>> On 10.12.2009, at 03:57, Christopher L. Felton wrote:
>>>>> Jan Langer wrote:
>>>>>> Somehow, I don't really understand it. The return type of the
>>>>>> fxintbv operators is just plain intbv, so I cannot write a - b*c
>>>>>> for three fxintbv variables. If fxintbv does not take care of the
>>>>>> point in operations and I need to remember and adjust it by hand,
>>>>>> I don't see the advantages of fxintbv. fxint seems to do what I
>>>>>> need, except for the two questions I wrote yesterday. I am a
>>>>>> little confused, but maybe I just want the wrong things :-)
>>>>>>
>>>>> Correct, with fxintbv a little more work needs to be done, but not
>>>>> as much as doing it all manually. Keeping within the design goals
>>>>> of MyHDL, nothing will try to evaluate the expression x = a - b*c
>>>>> and adjust the result. But given the design of MyHDL and the
>>>>> elaboration phase, there are some options. The fxintbv doesn't do
>>>>> the auto-promotion (adjusting the number of bits for the word), but
>>>>> there are helper functions to determine the size of the word based
>>>>> on the operations. These helper functions/methods help during
>>>>> design and analysis, but they can also make the modules modular:
>>>>> if an input size is adjusted, the module will automatically adjust.
>>>>> In the example in the PDF (oops, I just noticed I didn't include
>>>>> everything in the PDF, my bad) and in the example in
>>>>> examples/sos/sos_hdl.py there is a function that gets the types
>>>>> needed. The function uses the fxintbv methods to determine this.
>>>>> This essentially auto-promotes.
>>>>>
>>>>> def getFxTypes(Q, N):
>>>>>     # Different fixed-point types (sizes) used
>>>>>     fx_t = fxintbv(0, min=-1, max=1, res=2**-Q)
>>>>>     fxmul_t = fx_t.ProductRep(fx_t)
>>>>>     fxsum_t = fxmul_t.SummationRep(N)
>>>>>
>>>>>     return fx_t, fxmul_t, fxsum_t
>>>>>
>>>>> Right now there are a bunch of public functions to assist in
>>>>> auto-promotion and alignment. The ProductRep, AdditionRep, and
>>>>> SummationRep methods will determine the correct "type" given the
>>>>> inputs. For the specific example x = a - b*c, something like the
>>>>> following could be done:
>>>>>
>>>>> def fxCalc(clk, x, a, b, c):
>>>>>     i_t = fxintbv.ProductRep(b, c)
>>>>>     x_t = fxintbv.AdditionRep(a, i_t)
>>>>>     x.NewRep(x_t.iwl, x_t.fwl)
>>>>>
>>>>>     @always(clk.posedge)
>>>>>     def rtl_result():
>>>>>         x.next = a - b*c
>>>>>
>>>>>     return instances()
>>>>>
>>>>> In this scenario all the promotion, alignment, etc. will be
>>>>> handled. "x" will be adjusted in the elaboration phase based on the
>>>>> a, b, c inputs. It assumes that x, a, b, c are all of type fxintbv.
>>>>> The benefit of doing this is that you get the auto-promotion of "x"
>>>>> when the inputs change, and you get some additional functions for
>>>>> fixed-point analysis. I think this is a good balance between design
>>>>> flexibility and automation. The fxint does a lot more under the
>>>>> hood (auto-promotion, bit alignment), but it is not convertible. I
>>>>> don't think it makes sense for something like fxint to be
>>>>> convertible; you want a little more control when mapping to
>>>>> hardware. I think fxintbv strikes the right balance between
>>>>> abstraction and design guidance.
>>>>
>>>> Okay, I understand how it's done. I will try it later. But I can't
>>>> follow your argument that something like fxint should not be
>>>> convertible. In my opinion, there is no semantic difference between
>>>> the above code, which explicitly promotes the type, and implicit
>>>> auto-promotion. The manual promotion will be used as a pattern, and
>>>> I don't see any additional control of the mapping process. I mean,
>>>> if there is a fixed and documented definition like
>>>>
>>>> a.b + c.d -> (max(a,c)+1).max(b,d)
>>>> a.b * c.d -> (a+c).(b+d)
>>>>
>>>> a mapping is easy to understand.
>>>>
>>>> I see that current myhdl might not be able to support a convertible
>>>> auto-promotable fxint type, but somehow I think that would be
>>>> desirable. :-)
>>>> Jan
>>>>
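
To make the two promotion rules just quoted concrete, here is a small
plain-Python sketch (the (integer bits, fractional bits) tuples and the
function names are illustrative only, not the fxint/fxintbv API), applied
to the x = a - b*c example from earlier in the thread:

    # A format is (i, f): i integer bits, f fractional bits (sign bit not
    # counted). These two helpers just restate the rules quoted above.

    def add_fmt(x, y):
        # a.b + c.d -> (max(a,c)+1).max(b,d)
        return (max(x[0], y[0]) + 1, max(x[1], y[1]))

    def mul_fmt(x, y):
        # a.b * c.d -> (a+c).(b+d)
        return (x[0] + y[0], x[1] + y[1])

    # x = a - b*c with all operands in a 1.14 format
    # (subtraction promotes like addition):
    a_fmt = b_fmt = c_fmt = (1, 14)
    bc_fmt = mul_fmt(b_fmt, c_fmt)
    x_fmt = add_fmt(a_fmt, bc_fmt)
    assert bc_fmt == (2, 28) and x_fmt == (3, 28)
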
>>> It would require changes to the base MyHDL to evaluate the
>>> expressions and generate the code. I think this would be a fairly
>>> large project. MyHDL, for the most part, isn't that active in the
>>> conversion; there is a one-to-one mapping of expressions to the
>>> converted code. Arbitrarily complex expressions would have to be
>>> handled. Things like having a "summation" embedded in an expression
>>> change the rules that you might want to apply. The expression has to
>>> be decomposed such that it results in an optimal synthesis of the
>>> expression.
>>> For example, if you take an expression like
>>>
>>> (a + b + c) * d
>>>
>>> you can apply the basic rules, but if a, b, and c are the same type
>>> (same number of bits and point alignment), more bits will be assigned
>>> to the result than needed. In general the addition result requires
>>> another bit, but a summation (multiple additions of the same type)
>>> only requires log2(N) bits, where N is the number of additions. This
>>> is a general rule as well, but it requires more advanced parsing of
>>> the expressions, which would have to be built into MyHDL. Not saying
>>> it cannot be done, it would just take more effort than I currently
>>> have time for.
>>>
>>> At this point I don't think it is something that should be built into
>>> the MyHDL language. The fxint and fxintbv are simple
>>> libraries/objects that can be used with MyHDL. What might be possible
>>> in the future (time is the limiting resource) is that MyHDL can be
>>> modified slightly so that "plugins" can be added. When an expression
>>> is evaluated and unknown (non-convertible) types are present, it can
>>> search for a plugin with rules to guide the conversion. This way the
>>> base language doesn't explode with all kinds of features, but
>>> something like your suggestion can be added. Some serious thought
>>> would have to be given to such an endeavor. It is more complicated
>>> than single expressions; it could apply to blocks of expressions
>>> (like loops). And since it would be changing the types on the fly, it
>>> might require a two-pass approach. It would be a nice feature and a
>>> fun project, but currently not feasible.

--
Jan Langer (Telefon) +49-371-531-33158 / (PGP) F1B8C1CC
Schaltkreis- und Systementwurf / TU Chemnitz
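
As a back-of-the-envelope check of the summation remark in the last
message (plain Python, not library code): applying the pairwise addition
rule one term at a time grows the result by one bit per addition, while a
sum of N equally sized operands only needs about log2(N) extra bits, so
the chained rule over-allocates once more than a few terms are summed.

    def chained_growth(nterms):
        # One extra bit per pairwise addition of same-format operands.
        return nterms - 1

    def summation_growth(nterms):
        # Worst-case growth of a sum of nterms equal-width operands,
        # i.e. ceil(log2(nterms)).
        return (nterms - 1).bit_length()

    for n in (3, 4, 8, 16):
        print("%d terms: %d extra bits chained vs %d actually needed"
              % (n, chained_growth(n), summation_growth(n)))
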