Thread: [myhdl-list] fxintbv dependency on dspsim
From: Jan L. <jan...@et...> - 2009-12-02 10:37:56
Hi,

I encountered a problem using the fixed_point library on the MyHDL website. It tries to find a module called dspsim (fixed_point/fxlib/_blocks.py, line 23), which is not found on my system. A Google search didn't turn up anything useful. What is it, and where can I get it?

Best regards,
Jan

traceback:

--
Jan Langer
(Telefon) +49-371-531-33158 / (PGP) F1B8C1CC
Schaltkreis- und Systementwurf / TU Chemnitz
From: Christopher F. <chr...@gm...> - 2009-12-02 14:24:56
Jan L.,

Sorry for any confusion. I had put the fixed_point MyHDL object in the user and projects space to share with the MyHDL community and others. I mistakenly included the one testbench in the examples that used "dspsim". The "dspsim" is another tool that I put together and it currently isn't available (it has also been renamed to dspsys). My intention was only to make the fxintbv available, as the "dspsys" is not complete. The fixed_point package (fxint and fxintbv) can be used without the "dspsys" package; certain portions of the examples will not be operational because of the "dspsys" dependency. I will gladly send you a copy of the latest dspsys, but it is in a primitive state with no documentation and probably lots of bugs. I will try to send you a copy later tonight (GMT-6).

A little background on the "dspsys" (dspsim) package: it is a fork of the "dsptools" provided by Chris Terman (http://web.mit.edu/6.02/www/s2009/dsptools/index.html). The signal processing hardware that I was creating with MyHDL in most cases had similar testbenches: some set of DSP blocks to drive the HDL and some set of blocks to capture and analyze the results. The dspsys allows a user to simply define a signal processing system for the testbench and mix high-level blocks and HDL blocks. It can be used for "golden" model comparisons. I converted the dsptools to use the MyHDL simulation engine so that it can be used to simulate (or run in real time) DSP systems, or for mixed simulation with HDL.

.chris

On Wed, Dec 2, 2009 at 4:37 AM, Jan Langer <jan...@et...> wrote:
> Hi,
> I encountered a problem using the fixed_point library on the MyHDL
> website. It tries to find a module called dspsim (fixed_point/fxlib/
> _blocks.py, line 23), which is not found on my system.
> [...]
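A minimal sketch of the "golden model" testbench pattern described above, in plain MyHDL rather than dspsys code. The dut/golden arguments and the port widths are hypothetical placeholders: dut(clk, x, y) is assumed to register y = f(x) on the rising clock edge, and golden is the high-level Python model.

    from random import randrange
    from myhdl import (Signal, intbv, always, instance, delay,
                       Simulation, StopSimulation)

    def golden_tb(dut, golden, nsamples=100):
        clk = Signal(bool(0))
        x = Signal(intbv(0, min=-2**7, max=2**7))
        y = Signal(intbv(0, min=-2**15, max=2**15))

        dut_inst = dut(clk, x, y)                # hardware description under test

        @always(delay(5))
        def clkgen():
            clk.next = not clk

        @instance
        def stimulus():
            for _ in range(nsamples):
                x.next = randrange(-2**7, 2**7)  # same stimulus for both models
                yield clk.posedge                # dut samples x here
                yield clk.negedge                # its registered output has settled here
                assert y == golden(int(x)), "mismatch against golden model"
            raise StopSimulation

        return Simulation(dut_inst, clkgen, stimulus)

    # usage: golden_tb(my_hdl_block, my_float_model).run()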
From: Christopher L. F. <chr...@gm...> - 2009-12-10 02:57:41
Jan, some additional information below.

Jan Langer wrote:
> Hello Christopher,
>
> Am 09.12.2009 um 06:02 schrieb Christopher L. Felton:
>> I will try and reply later with more detail. You want to use the
>> *fxintbv* object and not the fxint. Page 8 of the pdf,
>> http://www.myhdl.org/lib/exe/fetch.php/users:cfelton:projects:myhdlfixedpoint.pdf
>> is an example of convertible fixed-point for add. Should be exactly
>> the same for subtract (given no bugs).
>
> Somehow, I don't really understand it. The return type of the fxintbv
> operators is just plain intbv, so I cannot write a - b*c for three
> fxintbv variables. If fxintbv does not take care of the point in
> operations and I need to remember and adjust it by hand, I don't see
> the advantages of fxintbv. fxint seems to do what I need, except the
> two questions I wrote yesterday. I am a little confused but maybe I
> just want the wrong things :-)

Correct, with the fxintbv a little more work needs to be done, but not as much as doing it all manually. Keeping within the design goals of MyHDL, nothing will try to evaluate the expression x = a - b*c and adjust the result. But given the design of MyHDL and the elaboration phase, there are some options. The fxintbv doesn't do the auto-promotion (adjusting the number of bits for the word), but there are helper functions to determine the size of the word based on the operations.

These helper functions/methods help during design and analysis, and they also make the modules modular: if an input size is adjusted, the module will automatically adjust. In the example in the PDF (oops, I just noticed I didn't include everything in the PDF, my bad) and in the example in examples/sos/sos_hdl.py there is a function that gets the types needed. The function uses the fxintbv methods to determine this. This essentially auto-promotes.

    def getFxTypes(Q, N):
        # Different fixed-point types (sizes) used
        fx_t = fxintbv(0, min=-1, max=1, res=2**-Q)
        fxmul_t = fx_t.ProductRep(fx_t)
        fxsum_t = fxmul_t.SummationRep(N)

        return fx_t, fxmul_t, fxsum_t

Right now there are a bunch of public functions to assist in auto-promotion and alignment. The ProductRep, AdditionRep, SummationRep will determine the correct "type" given the inputs. For the specific example x = a - b*c, something like the following could be done:

    def fxCalc(clk, x, a, b, c):
        i_t = fxintbv.ProductRep(b, c)
        x_t = fxintbv.AdditionRep(a, i_t)
        x.NewRep(x_t.iwl, x_t.fwl)

        @always(clk.posedge)
        def rtl_result():
            x.next = a - b*c

        return instances()

In this scenario all the promotion, alignment, etc. will be handled. "x" will be adjusted in the elaboration phase based on the a, b, c inputs. It assumes that x, a, b, c are all of type fxintbv. The benefit of doing this is that you get the auto-promotion of "x" when the inputs change, and you get some additional functions for fixed-point analysis. I think this is a good balance between design flexibility and automation. The fxint does a lot more under the hood (auto-promotion, bit alignment) but it is not convertible. I don't think it makes sense for something like fxint to be convertible; you want a little more control when mapping to hardware. I think the fxintbv has the right balance between abstraction and design guidance.

Hope that helps,
.chris

In the fixed_point/fxlib/_hdl.py there are some examples. These are intended to be

Jan Langer wrote:
>> Hello Christopher,
>> I finally switched my design to fxint and in general it works fine.
>> However, I have two questions.
>>
>> Why is, for example, the result of subtracting two fxint Signals an int?
>> To solve this issue I wrote a helper function:
>>
>>     def sigtofx(x):
>>         return fxint(x.fValue, Q=(x.iwl, x.fwl))
>>
>> to convert the signal to an fxint, which has the correct subtraction.
>> This is of course not synthesizable. :-(
>>
>> The other thing is assignment of fxint signals, which requires
>> alignment of the points. I see that this is necessary, but it will
>> require tricky slicing operations. I solve it in another helper
>> function for now, but it is also not convertible:
>>
>>     def fxassign(dst, src):
>>         dst.next = fxint(src.fValue, Q=(dst.iwl, dst.fwl))
>>
>> For example, converting Q=(5,6) to Q=(5,3) is
>>
>> Is there a preferred "myhdl" solution for my problems?
>>
>> I found two things which might be bugs and I enclose the
>> corresponding patch file.
>>
>> Thanks for implementing the library,
>> Jan
>>
>> Am 03.12.2009 um 17:09 schrieb Felton Christopher:
>>> Attached is my latest dspsys (dspsim) and fixed_point. As I
>>> mentioned, dspsys is not mature, use at your own risk.
>>> [...]
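As an aside on the point-alignment question above, the following plain-Python sketch (not part of the fixed_point package) shows what fxassign's re-quantization amounts to at the bit level: moving a value from Q=(5,6) to Q=(5,3) simply drops the three extra fractional bits of the underlying integer word.

    # Illustrative only: align the underlying integer word of a fixed-point
    # value from src_fwl to dst_fwl fractional bits (truncating toward -inf).
    def align(raw, src_fwl, dst_fwl):
        shift = src_fwl - dst_fwl
        return raw >> shift if shift >= 0 else raw << -shift

    # 1.703125 in Q=(5,6) has the integer word 1.703125 * 2**6 = 109;
    # re-quantized to Q=(5,3) it becomes 109 >> 3 = 13, i.e. 13 / 2**3 = 1.625
    assert align(109, 6, 3) == 13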
From: Jan L. <jan...@et...> - 2009-12-10 15:20:38
Hello Christopher,

Am 10.12.2009 um 03:57 schrieb Christopher L. Felton:
> [...]
> The fxint does a lot more under the hood (auto-promotion, bit alignment)
> but it is not convertible. I don't think it makes sense for something like
> fxint to be convertible; you want a little more control when mapping to
> hardware. I think the fxintbv has the right balance between abstraction
> and design guidance.

Okay, I understand how it's done. I will try it later. But I can't follow your argument that something like fxint should not be convertible. In my opinion, there is no semantic difference between the above code, which explicitly promotes the type, and implicit auto-promotion. The manual promotion will be used as a pattern, and I don't see any additional control of the mapping process. I mean, if there is a fixed and documented definition like

    a.b + c.d -> (max(a,c)+1).max(b,d)
    a.b * c.d -> (a+c).(b+d)

a mapping is easy to understand.

I see that the current MyHDL might not be able to support a convertible auto-promotable fxint type, but somehow I think that would be desirable. :-)

Jan

--
Jan Langer
(Telefon) +49-371-531-33158 / (PGP) F1B8C1CC
Schaltkreis- und Systementwurf / TU Chemnitz
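For reference, the two rules above are easy to write down as plain Python. This is an illustrative sketch, not code from the fixed_point package, with a Q format expressed as an (iwl, fwl) pair of integer and fractional word lengths.

    def add_rep(q1, q2):
        # a.b + c.d -> (max(a,c)+1).max(b,d)
        return (max(q1[0], q2[0]) + 1, max(q1[1], q2[1]))

    def mul_rep(q1, q2):
        # a.b * c.d -> (a+c).(b+d)
        return (q1[0] + q2[0], q1[1] + q2[1])

    # x = a - b*c with a, b, c all Q(1,7):
    prod = mul_rep((1, 7), (1, 7))   # -> (2, 14)
    res  = add_rep((1, 7), prod)     # -> (3, 14)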
From: Christopher L. F. <chr...@gm...> - 2009-12-10 15:53:00
Jan Langer wrote:
> [...]
> Okay, I understand how it's done. I will try it later. But I can't
> follow your argument that something like fxint should not be
> convertible. In my opinion, there is no semantic difference between
> the above code, which explicitly promotes the type, and implicit
> auto-promotion. The manual promotion will be used as a pattern, and I
> don't see any additional control of the mapping process. I mean, if
> there is a fixed and documented definition like
>
>     a.b + c.d -> (max(a,c)+1).max(b,d)
>     a.b * c.d -> (a+c).(b+d)
>
> a mapping is easy to understand.
>
> I see that the current MyHDL might not be able to support a convertible
> auto-promotable fxint type, but somehow I think that would be
> desirable. :-)
> Jan

It would require changes to the base MyHDL to evaluate the expressions and generate the code. I think this would be a fairly large project. MyHDL, for the most part, isn't that active in the conversion; there is a one-to-one mapping of expressions to the converted code. Arbitrarily complex expressions would have to be handled. Things like having a "summation" embedded in an expression change the rules that you might want to apply. The expression has to be decomposed such that the synthesis of the expression is optimal. For example, if you take an expression like

    (a + b + c) * d

you can apply the basic rules, but if a, b and c are the same type (same number of bits and point alignment), more bits will be assigned to the result than needed. In general an addition result requires another bit, but a summation (multiple additions of the same type) only requires log2(N) bits, where N is the number of additions. This is a general rule as well, but it requires more advanced parsing of the expressions, which would have to be built into MyHDL. Not saying it cannot be done, it would just take more effort than I currently have time for.

At this point I don't think it is something that should be built into the MyHDL language. The fxint and fxintbv are simple libraries/objects that can be used with MyHDL. What might be possible in the future (time is the limiting resource) is that MyHDL can be modified slightly so that "plugins" can be added. When an expression is evaluated and unknown (non-convertible) types are present, it can search for a plugin with rules to guide the conversion. This way the base language doesn't explode with all kinds of features, but something like your suggestion can be added. Some serious thought would have to be given to such an endeavor. It is more complicated than single expressions; it could apply to blocks of expressions (like loops). And it would be changing the types on the fly, which might require a two-pass approach. It would be a nice feature and a fun project, but currently not feasible.

.chris
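A quick numeric illustration of the word-growth argument above (plain Python, not library code): sizing each pairwise addition independently grows the result by one bit per addition, while a summation of N equal-sized terms only needs about log2(N) extra bits.

    from math import ceil, log

    def growth_chained(nterms):
        # one carry bit per pairwise addition, each sized independently
        return nterms - 1

    def growth_summation(nterms):
        # bits needed to hold the worst-case sum of nterms equal-sized words
        return int(ceil(log(nterms, 2)))

    for n in (3, 8, 16):
        print("%2d terms: chained +%d bits, summation +%d bits"
              % (n, growth_chained(n), growth_summation(n)))
    # 3 terms: +2 vs +2;  8 terms: +7 vs +3;  16 terms: +15 vs +4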
From: Felton C. <chr...@gm...> - 2009-12-15 14:00:12
Jan L.,

After our conversation I was thinking about this a little more, and I think I came up with a better scheme. This still uses the elaboration phase to determine the size/alignment/type, but I think it is clearer. This way you can use an arbitrarily complex expression. But the expression will have to appear outside the generators (behavioral statements), so that the elaboration phase can determine the correct type/size, and also inside the generator for conversion/simulation.

Example:

    def fx_mix_ex(a, b, c, d, e):

        e.Q = (b-a)*c + d

        @always_comb
        def rtl():
            e.next = (b-a)*c + d

        return rtl

But I have hit some issues with the base myhdl.Signal component (more than likely my misunderstanding and an issue in my stuff). I will try to resolve these and let you know how it goes. Attached is an updated version with the "Q" property. Also, the "Q" property removes all the *Rep methods.

Thanks again for the previous patches.
.chris

On Dec 10, 2009, at 9:52 AM, Christopher L. Felton wrote:
> [...]
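For completeness, a hedged sketch of how the fx_mix_ex block above might be instantiated. Whether the ports are Signal-wrapped fxintbv objects, and the exact constructor arguments, are assumptions based on the earlier getFxTypes example, not the package's documented API.

    from myhdl import Signal

    Q = 7
    # assumed constructor, following the getFxTypes example: values in [-1, 1)
    # with 2**-7 resolution
    a, b, c, d = [Signal(fxintbv(0, min=-1, max=1, res=2**-Q)) for _ in range(4)]
    e = Signal(fxintbv(0, min=-1, max=1, res=2**-Q))

    # elaboration: fx_mix_ex sets e.Q from the expression; the generator does the math
    mix_inst = fx_mix_ex(a, b, c, d, e)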
From: Jan D. <ja...@ja...> - 2009-12-15 16:43:57
I'm not a fixed-point specialist, but am I correct in thinking that things would be considerably simpler if there were a fixed-point datatype that we could map to in VHDL/Verilog?

I seem to have read that such a (synthesizable) datatype exists for VHDL, but not yet for Verilog. But even if it's VHDL-only, it seems like an interesting path to investigate?

Jan

Felton Christopher wrote:
> Jan L.,
>
> After our conversation I was thinking about this a little more, and I think
> I came up with a better scheme. This still uses the elaboration phase to
> determine the size/alignment/type, but I think it is clearer.
> [...]

--
Jan Decaluwe - Resources bvba - http://www.jandecaluwe.com
Python as a HDL: http://www.myhdl.org
VHDL development, the modern way: http://www.sigasi.com
Analog design automation: http://www.mephisto-da.com
World-class digital design: http://www.easics.com
From: Jan L. <jan...@et...> - 2009-12-15 16:57:31
Hi,

I haven't yet looked into Christopher's new version of the lib, but in general I think a perfect solution would be the possibility to write user-defined datatypes that do auto-promotion and all kinds of stuff, and to give them some kind of "synthesis rules" that define a mapping to some lower-level bit vector (maybe intbv). Then Signal is a little more lightweight. With such a solution one could even write synthesizable floating-point types or synthesizable records.

Jan

Am 15.12.2009 um 17:46 schrieb Jan Decaluwe:
> I'm not a fixed-point specialist, but am I correct in thinking that
> things would be considerably simpler if there were a fixed-point
> datatype that we could map to in VHDL/Verilog?
>
> I seem to have read that such a (synthesizable) datatype exists for
> VHDL, but not yet for Verilog. But even if it's VHDL-only, it seems like
> an interesting path to investigate?
> [...]

--
Jan Langer
(Telefon) +49-371-531-33158 / (PGP) F1B8C1CC
Schaltkreis- und Systementwurf / TU Chemnitz
From: Christopher L. F. <chr...@gm...> - 2009-12-21 23:20:18
Jan Decaluwe wrote:
> I'm not a fixed-point specialist, but am I correct in thinking that
> things would be considerably simpler if there were a fixed-point
> datatype that we could map to in VHDL/Verilog?
>
> I seem to have read that such a (synthesizable) datatype exists for
> VHDL, but not yet for Verilog. But even if it's VHDL-only, it seems like
> an interesting path to investigate?

I believe there is a fixed-point type in the latest standard of VHDL (VHDL-2008). I don't know much more about the new VHDL type: whether it has been ratified, is supported in tools, etc.

From a quick search, the VHDL fixed-point type doesn't do auto-promotion (determining what size the result should be). The new types introduce negative indices in ranges (???). I don't think it supports all the features the lib attempts to support.

Regardless, I think this is one of the highlights of MyHDL: handling these kinds of "types". I think the current MyHDL approach gives the designer much more flexibility. A designer might not always want the "default" rules for handling fixed-point. If mapped to the VHDL type, I think there would be a "collision" of rules and loss of control.

.chris
From: Felton C. <chr...@gm...> - 2010-01-15 13:58:08
On Jan 15, 2010, at 2:46 AM, Jan Decaluwe wrote:
> Christopher L. Felton wrote:
> [...]
>
> I'm intrigued by these statements, please enlighten me.

Which statements? My incomplete understanding of the VHDL fixed-point support? Or the flexibility of MyHDL to handle fractional numbers without changing the core package?

> With intbv, I have tried (and I believe not without success) to implement
> the single way to do it "right". The basic idea is to let it behave
> as mathematical integers do (and there is little controversy about that.)
> The convertor implements this in Verilog and VHDL with lower level
> types by using type casts and resizings.
> (In contrast, there is a lot of controversy about how such lower level
> types should behave!)

What I am trying to suggest and promote is that, as you mention, the intbv handles the integers fine. With fixed point we are trying to use fractional numbers like 3.141592. The MyHDL architecture has a nice mechanism to do this: during elaboration we can perform the necessary steps to get the integer representation of our fractional number. Then all the conversion, simulation, etc. are handled by the intbv.

> What you suggest is that with fixed point, something like that would
> not be possible: you suggest that there isn't "one good way to do it".
> I'd like to understand better why you think that is so.

I am doing a poor job of communicating. I do think adding support (external packages) for higher types (fixed-point, floating-point) is possible! Exactly as you say, instead of mapping to the VHDL type, in MyHDL first map to intbv, because the rules are clear! The goal of the fixed-point package is to give the developer the tools to map a fractional number to an integer representation; then intbv handles the rest. The issue with these higher types is discovering a "user friendly" mechanism. I have run across some issues, not necessarily MyHDL issues, but things I wanted to do in the elaboration and have not been able to do.

Hope that helps explain a little better?

> Jan
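The elaboration-time mapping described above can be illustrated with a small plain-Python sketch (not the package API): the fractional constant is quantized to the integer that an intbv of the chosen Q(iwl, fwl) format would carry, and simulation/conversion then deal only with integers.

    from myhdl import intbv

    def to_fixed(value, iwl, fwl):
        # quantize `value` to Q(iwl, fwl) and wrap it in an intbv with matching bounds
        raw = int(round(value * 2**fwl))
        nbits = 1 + iwl + fwl                    # sign + integer + fractional bits
        return intbv(raw, min=-2**(nbits-1), max=2**(nbits-1))

    pi_fx = to_fixed(3.141592, iwl=3, fwl=12)    # 3.141592 * 2**12 rounds to 12868
    print(int(pi_fx), int(pi_fx) / 2.0**12)      # ~3.1416015625 when scaled back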
From: Jan D. <ja...@ja...> - 2010-01-18 08:10:30
Felton Christopher wrote:
> On Jan 15, 2010, at 2:46 AM, Jan Decaluwe wrote:
>> I'm intrigued by these statements, please enlighten me.
>
> Which statements? My incomplete understanding of the VHDL fixed-point
> support? Or the flexibility of MyHDL to handle fractional numbers
> without changing the core package?

I was referring to your statement that "A designer might not always want the 'default' rules for handling fixed-point".

Regardless of the mechanism to make fixed point available with MyHDL, I would hope, naively perhaps, that we can find a single way to make everybody happy. But again, I've not yet studied the subject in depth.

--
Jan Decaluwe - Resources bvba - http://www.jandecaluwe.com
Python as a HDL: http://www.myhdl.org
VHDL development, the modern way: http://www.sigasi.com
Analog design automation: http://www.mephisto-da.com
World-class digital design: http://www.easics.com
From: Jan D. <ja...@ja...> - 2010-01-15 08:43:49
Christopher L. Felton wrote:
> Jan Decaluwe wrote:
>> I'm not a fixed-point specialist, but am I correct in thinking that
>> things would be considerably simpler if there were a fixed-point
>> datatype that we could map to in VHDL/Verilog?
> [...]
>
> Regardless, I think this is one of the highlights of MyHDL: handling
> these kinds of "types". I think the current MyHDL approach gives the
> designer much more flexibility. A designer might not always want the
> "default" rules for handling fixed-point. If mapped to the VHDL type,
> I think there would be a "collision" of rules and loss of control.

I'm intrigued by these statements, please enlighten me.

With intbv, I have tried (and I believe not without success) to implement the single way to do it "right". The basic idea is to let it behave as mathematical integers do (and there is little controversy about that.) The convertor implements this in Verilog and VHDL with lower level types by using type casts and resizings. (In contrast, there is a lot of controversy about how such lower level types should behave!)

What you suggest is that with fixed point, something like that would not be possible: you suggest that there isn't "one good way to do it". I'd like to understand better why you think that is so.

Jan

--
Jan Decaluwe - Resources bvba - http://www.jandecaluwe.com
Python as a HDL: http://www.myhdl.org
VHDL development, the modern way: http://www.sigasi.com
Analog design automation: http://www.mephisto-da.com
World-class digital design: http://www.easics.com
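A short illustration of the intbv behavior referred to above (standard MyHDL, nothing specific to the fixed-point discussion): operators on intbv objects return plain Python integers that behave exactly like mathematical integers, and the min/max bounds are only enforced where a result is stored back into a bounded object.

    from myhdl import intbv

    a = intbv(100, min=-128, max=128)
    b = intbv(-27, min=-128, max=128)

    print(a + b)    # 73   -- a plain int, just like mathematical integers
    print(a * b)    # -2700, unconstrained by the operands' 8-bit ranges

    # storing the product in a bounded object is where the range check happens
    p = intbv(a * b, min=-2**15, max=2**15)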