Thread: [myhdl-list] Cosimulation help
From: Newell J. <pil...@gm...> - 2009-04-01 06:15:50
Wondering if anyone might be able to help me with a Cosimulation error that I am getting. I am using py.test and don't know if this is the reason I am getting the error. Any suggestions are welcome. Here is the error:

jensen@ubuntu-2012:~/Desktop/python$ py.test simulate
Traceback (most recent call last):
  File "/usr/bin/py.test", line 10, in <module>
    py.test.cmdline.main()
  File "/usr/lib/python2.5/site-packages/py/test/cmdline.py", line 15, in main
    failures = session.main()
  File "/usr/lib/python2.5/site-packages/py/test/session.py", line 57, in main
    colitems = self.config.getcolitems()
  File "/usr/lib/python2.5/site-packages/py/test/config.py", line 65, in getcolitems
    return [self._getcollector(path) for path in (trails or self.args)]
  File "/usr/lib/python2.5/site-packages/py/test/config.py", line 77, in _getcollector
    return col._getitembynames(names)
  File "/usr/lib/python2.5/site-packages/py/test/collect.py", line 149, in _getitembynames
    assert next is not None, (cur, name, namelist)
AssertionError: (<Directory 'python'>, 'simulate', ['simulate'])

And here is the script that I am running (simulate):

#! /usr/bin/env python

import os
from myhdl import *

cmd = "iverilog -o simple_bench -cconffile.txt"
WB_PERIOD = 10

# Make a clk generator for pci_clock, wb_clock
# Make a 32 bit signal for listening to the reg AD

def bench():
    """ Practice Unit Test for the opencores PCI project """

    pci_clock = Signal(bool(0))
    wb_clock = Signal(bool(0))
    AD = Signal(intbv(0, min=0, max=2**32))

    dut = simple_bench(pci_clock, wb_clock, AD)

    @always(delay(10))
    def pci_clkgen():
        pci_clock.next = not pci_clock

    @always(delay(WB_PERIOD/2))
    def wb_clkgen():
        wb_clock.next = not wb_clock

    @always(pci_clock.negedge)
    def monitor():
        print AD

    return pci_clkgen, wb_clkgen, monitor, dut

def test_bench():
    sim = Simulation(bench())
    sim.run()

def simple_bench(pci_clock, wb_clock, AD):
    os.system(cmd)
    return Cosimulation("vvp -m ./myhdl.vpi simple_bench",
                        pci_clock=pci_clock, wb_clock=wb_clock, AD=AD)

--
Newell
http://www.gempillar.com

Before enlightenment: chop wood, carry water
After enlightenment: code, build circuits
From: Christopher L. F. <chr...@gm...> - 2009-04-12 03:31:10
Curiosity, what was the rationale behind the mathematical operators for the intbv object returning integer values and the logic operators (shifts, and, or, xor, etc.) returning an intbv object?

Example:

a = intbv(1)
b = intbv(2)

c = a + b
type(c)
<type 'int'>

c = a ^ b
type(c)
<class 'myhdl._intbv.intbv'>

Thanks
Chris
From: Jan D. <ja...@ja...> - 2009-04-19 08:08:59
Christopher L. Felton wrote:
> Curiosity, what was the rationale behind the mathematical operators for
> the intbv object returning integer values and the logic operators
> (shifts, and, or, xor, etc.) returning an intbv object?
>
> Example:
>
> a = intbv(1)
> b = intbv(2)
>
> c = a + b
> type(c)
> <type 'int'>
>
> c = a ^ b
> type(c)
> <class 'myhdl._intbv.intbv'>

If the return value is an intbv, you can slice and index it. This probably makes sense for "bit-oriented" operations, and perhaps less so for "arithmetic" operations. Therefore, for the latter case it might be a good idea to avoid the overhead of intbv construction.

This is all a little bit arbitrary and speculative - for example, I haven't done tests to check how significant the intbv construction overhead exactly is. We always have the option to change the return type to intbv if there's a real need.

Jan

--
Jan Decaluwe - Resources bvba - http://www.jandecaluwe.com
Python as a HDL: http://www.myhdl.org
VHDL development, the modern way: http://www.sigasi.com
Analog design automation: http://www.mephisto-da.com
World-class digital design: http://www.easics.com
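For illustration, a short snippet along these lines makes the asymmetry concrete - the arithmetic result is a plain int, while the bit-oriented result can still be indexed and sliced (variable names are arbitrary; only the return-type behavior described above is assumed):

from myhdl import intbv

a = intbv(10)[4:]     # 4-bit intbv (value 0b1010)
b = intbv(6)[4:]      # 4-bit intbv (value 0b0110)

s = a + b             # plain int: fine for arithmetic, but no bit access
x = a ^ b             # intbv: bit-oriented result

print(x[0])           # indexing works on the intbv result
print(int(x[4:1]))    # so does slicing (bits 3 down to 1)
# s[0] would raise a TypeError, because s is just an int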
From: Felton C. <chr...@gm...> - 2009-04-30 20:57:32
On Apr 19, 2009, at 3:08 AM, Jan Decaluwe wrote:

> Christopher L. Felton wrote:
>> Curiosity, what was the rationale behind the mathematical operators for
>> the intbv object returning integer values and the logic operators
>> (shifts, and, or, xor, etc.) returning an intbv object?
>>
>> Example:
>>
>> a = intbv(1)
>> b = intbv(2)
>>
>> c = a + b
>> type(c)
>> <type 'int'>
>>
>> c = a ^ b
>> type(c)
>> <class 'myhdl._intbv.intbv'>
>
> If the return value is an intbv, you can slice and index it.
> This probably makes sense for "bit-oriented" operations,
> and perhaps less so for "arithmetic" operations.
> Therefore, for the latter case it might be a good idea
> to avoid the overhead of intbv construction.
>
> This is all a little bit arbitrary and speculative -
> for example, I haven't done tests to check how significant
> the intbv construction overhead exactly is.
> We always have the option to change the return type
> to intbv if there's a real need.
>
> Jan
>

In my experience, if you have a type (object) and some operations are performed, the result is the same type. Also, I think it would simplify matters somewhat for users (but could cause other issues) not having to use the [:]. There could be more negatives than positives to such a change. I imagine it might be difficult to change at this point because it could affect much of the user code. Also, I don't know if it would have any implications for conversion, checking, etc.

The reason I came across this question is that I spent some time putting together a fixed-point object that uses intbv as the base class, hence it would be convertible. That is exactly where this scenario shows up: for the mathematical operations, should I keep the same implementation as intbv (returning int) or return the fixed-point type?

I have uploaded a draft document I started for the fixed-point object. It is very (very, very, very) rough at this point:

http://www.myhdl.org/lib/exe/fetch.php/users:cfelton:projects:myhdlfixedpoint.pdf?id=users%3Acfelton%3Aprojects%3Afxintbv&cache=cache

Thanks
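For readers curious what such an object might look like, a very rough sketch of the approach (subclassing intbv and storing the value as a scaled integer) could be something like the following. The class name, constructor signature and helper below are hypothetical and are not taken from the draft document:

from myhdl import intbv

class fxintbv(intbv):
    """Hypothetical fixed-point value: an integer scaled by 2**fracbits."""
    def __init__(self, val=0, fracbits=0, min=None, max=None):
        intbv.__init__(self, val, min=min, max=max)
        self._fracbits = fracbits

    def to_float(self):
        # interpret the stored integer as value / 2**fracbits
        return float(int(self)) / (1 << self._fracbits)

# The question raised in this thread shows up immediately:
# fxintbv(3, 2) + fxintbv(1, 2) returns a plain int (behavior inherited
# from intbv), not an fxintbv, so the fractional scaling information is
# lost unless the subclass overrides the math operators itself.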
From: Jan D. <ja...@ja...> - 2009-05-10 08:30:09
Felton Christopher wrote:
>>
>
> In my experience, if you have a type (object) and some operations are
> performed, the result is the same type.

I agree with that. However, in my mind, intbv is a subtype of int, in which case this would be acceptable.

It is true that technically, intbv is *not* implemented as a subclass of int. The reason why this couldn't be done is that it turns out to be impossible to derive a mutable type (like intbv) from an immutable type (like int). But that's a pity; it would have simplified many things.

> Also, I think it would simplify matters somewhat for users (but could
> cause other issues) not having to use the [:]. There could be more
> negatives than positives to such a change.

I assume you refer to the requirement to assign to intbv instances as follows:

    a[:] = b

to change their value. This is *not* related to the return type of intbv operations. It would still be necessary when the return type is changed. The reason is that with plain name assignment:

    a = b

the original intbv object is gone, including its constraints. So in simulation, you lose run-time bound checking, and conversion would become close to impossible. I think it's a small price to pay for these features.

I'm thinking about adding a "val" property to intbv though, so you could do:

    a.val = b

Perhaps this makes more sense when you're not thinking about the bit representation.

> I imagine it might be difficult to change at this point because it
> could affect much of the user code. Also, I don't know if it would have
> any implications for conversion, checking, etc.

No, I think it would be straightforward. I expect that everything that works now would continue working, except that you could do some additional things. In particular, you could index and slice an arithmetic expression.

The only price would be a simulation performance hit. The question is whether this is worthwhile for a feature that is seldom (?) used. Someone would have to analyze this in detail to make a good decision. Perhaps the performance hit is irrelevant - I don't know.

Jan

--
Jan Decaluwe - Resources bvba - http://www.jandecaluwe.com
Python as a HDL: http://www.myhdl.org
VHDL development, the modern way: http://www.sigasi.com
Analog design automation: http://www.mephisto-da.com
World-class digital design: http://www.easics.com
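A small example of the distinction being made here (names are arbitrary; the point is only that in-place assignment keeps the object and its bounds, while name rebinding throws them away):

from myhdl import intbv

a = intbv(0, min=0, max=16)   # constrained value

a[:] = 5        # in-place update: the object and its bounds survive,
                # and the new value is checked against [0, 16)
# a[:] = 25     # would raise ValueError at run time: out of bounds

a = 25          # plain name rebinding: 'a' is now just the int 25;
                # the intbv and its min/max constraints are gone, so no
                # bound checking can happen anymore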
From: Felton C. <chr...@gm...> - 2009-05-15 11:12:28
>>
>
> No, I think it would be straightforward. I expect that everything that
> works now would continue working, except that you could do some
> additional things. In particular, you could index and slice an
> arithmetic expression.
>
> The only price would be a simulation performance hit. The question is
> whether this is worthwhile for a feature that is seldom (?) used.
> Someone would have to analyze this in detail to make a good
> decision. Perhaps the performance hit is irrelevant - I don't know.
>
> Jan
>

I took some time to profile this change. I used the Python built-in cProfile. Below are some numbers (warning: lots of stuff below). The results are a little interesting. The total time for the math ops (__add__) effectively doubled, but the overall simulation time increased only slightly. From this limited test case (more needs to be done) I would conclude that having the intbv math ops return an intbv type has little effect on overall simulation performance.

The change was to _intbv.py. All math operations were changed to return an intbv type versus an int type.

This also suggests that a fair amount of simulation time is spent in other areas. If those other areas are ever changed to be more efficient, this change could have more effect on overall performance.

Intbv Math returns Int Type

--------------Start Test Loop--------------
--------------End Test Loop----------------

10935715 function calls (10900641 primitive calls) in 26.753 CPU seconds

Ordered by: standard name

  ncalls  tottime  percall  cumtime  percall  filename:lineno(function)
       1    0.000    0.000   26.753   26.753  <string>:1(<module>)
      10    0.000    0.000    0.000    0.000  _Signal.py:155(_clear)
  351088    2.770    0.000    4.089    0.000  _Signal.py:162(_update)
  351088    1.268    0.000   14.802    0.000  _Signal.py:191(_set_next)
   35070    0.036    0.000    0.036    0.000  _Signal.py:204(_get_negedge)
   70250    0.107    0.000    0.107    0.000  _Signal.py:235(_setNextBool)
  280838    2.165    0.000   12.916    0.000  _Signal.py:245(_setNextIntbv)
  140496    0.157    0.000    0.157    0.000  _Signal.py:282(__nonzero__)
   69447    0.270    0.000    0.463    0.000  _Signal.py:300(__add__)
   70148    0.296    0.000    0.466    0.000  _Signal.py:308(__sub__)
   35074    0.166    0.000    0.264    0.000  _Signal.py:316(__mul__)
   35074    0.118    0.000    0.172    0.000  _Signal.py:420(__int__)
   35074    0.072    0.000    0.271    0.000  _Signal.py:437(__cmp__)
       1    0.000    0.000    0.000    0.000  _Simulation.py:72(_finalize)
       1    2.196    2.196   26.753   26.753  _Simulation.py:92(run)
   70249    0.295    0.000    1.320    0.000  _Waiter.py:123(next)
   70250    0.247    0.000   15.794    0.000  _Waiter.py:136(next)
   35073    0.338    0.000    2.967    0.000  _Waiter.py:51(next)
  140499    0.270    0.000   16.513    0.000  _always.py:98(genfunc)
       2    0.000    0.000    0.000    0.000  _delay.py:29(__init__)
  563219    0.421    0.000    0.421    0.000  _intbv.py:105(__nonzero__)
   69447    0.110    0.000    0.149    0.000  _intbv.py:187(__add__)
   70148    0.114    0.000    0.140    0.000  _intbv.py:195(__sub__)
   35074    0.071    0.000    0.083    0.000  _intbv.py:203(__mul__)
  279149    1.902    0.000    3.267    0.000  _intbv.py:36(__init__)
   35074    0.054    0.000    0.054    0.000  _intbv.py:406(__int__)
  315912    0.671    0.000    0.981    0.000  _intbv.py:424(__cmp__)
  559987    1.387    0.000    1.387    0.000  _intbv.py:76(_checkBounds)
  279149    0.584    0.000    3.851    0.000  _intbv.py:95(__deepcopy__)

Intbv Math returns Intbv Type

--------------Start Test Loop--------------
--------------End Test Loop----------------

11460400 function calls (11425326 primitive calls) in 28.543 CPU seconds

Ordered by: standard name

  ncalls  tottime  percall  cumtime  percall  filename:lineno(function)
       1    0.000    0.000   28.543   28.543  <string>:1(<module>)
      10    0.000    0.000    0.000    0.000  _Signal.py:155(_clear)
  351088    2.758    0.000    4.080    0.000  _Signal.py:162(_update)
  351088    1.286    0.000   14.785    0.000  _Signal.py:191(_set_next)
   35070    0.034    0.000    0.034    0.000  _Signal.py:204(_get_negedge)
   70250    0.114    0.000    0.114    0.000  _Signal.py:235(_setNextBool)
  280838    2.050    0.000   12.881    0.000  _Signal.py:245(_setNextIntbv)
  140496    0.169    0.000    0.169    0.000  _Signal.py:282(__nonzero__)
   69447    0.269    0.000    0.984    0.000  _Signal.py:300(__add__)
   70148    0.296    0.000    0.983    0.000  _Signal.py:308(__sub__)
   35074    0.159    0.000    0.530    0.000  _Signal.py:316(__mul__)
   35074    0.117    0.000    0.175    0.000  _Signal.py:420(__int__)
   35074    0.070    0.000    0.281    0.000  _Signal.py:437(__cmp__)
       1    0.000    0.000    0.000    0.000  _Simulation.py:72(_finalize)
       1    2.175    2.175   28.543   28.543  _Simulation.py:92(run)
   70249    0.285    0.000    1.291    0.000  _Waiter.py:123(next)
   70250    0.258    0.000   17.623    0.000  _Waiter.py:136(next)
   35073    0.321    0.000    2.976    0.000  _Waiter.py:51(next)
  140499    0.248    0.000   18.311    0.000  _always.py:98(genfunc)
       2    0.000    0.000    0.000    0.000  _delay.py:29(__init__)
  563215    0.417    0.000    0.417    0.000  _intbv.py:105(__nonzero__)
   69447    0.231    0.000    0.672    0.000  _intbv.py:187(__add__)
   70148    0.229    0.000    0.657    0.000  _intbv.py:195(__sub__)
   35074    0.132    0.000    0.356    0.000  _intbv.py:203(__mul__)
   35074    0.115    0.000    0.343    0.000  _intbv.py:261(__rshift__)
  488891    2.733    0.000    4.493    0.000  _intbv.py:36(__init__)
   35074    0.057    0.000    0.057    0.000  _intbv.py:406(__int__)
  315912    0.674    0.000    0.992    0.000  _intbv.py:424(__cmp__)
  769729    1.710    0.000    1.710    0.000  _intbv.py:76(_checkBounds)
  279148    0.606    0.000    3.890    0.000  _intbv.py:95(__deepcopy__)

The test case used for the profiles:

def mathSys1(clk, rst, x, y, N_BITS=14):
    """ """
    M0 = 2**(N_BITS-1)
    M1 = 2**(2*N_BITS-1)
    M2 = 2**(20*N_BITS-1)

    sine = Signal(intbv(0, min=-M0, max=M0))
    v1   = Signal(intbv(0, min=-M1, max=M1))
    v1d  = Signal(intbv(0, min=-M1, max=M1))
    v2   = Signal(intbv(0, min=-2*M1, max=2*M1))
    v3   = Signal(intbv(0, min=-M2, max=M2))

    i1 = sine_table(clk, rst, sine, N_BITS=N_BITS)

    @always(clk.posedge)
    def rtl():
        if rst:
            v1.next = 0
            v1d.next = 0
            v2.next = 0
            v3.next = 0
        else:
            v1.next = x * sine
            v2.next = v1 - v1d
            v1d.next = v1
            v3.next = (v3 + v2) >> N_BITS
            y.next = v3 - v1
            #print '.... ', v1, v1d, v2, v3, y

    return i1, rtl
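The profiling driver itself is not shown; one plausible way to produce numbers like the above with cProfile is sketched below (the profile_sim helper and the duration value are assumptions, not code from the thread). Note that the headline totals quoted above, 26.753 vs 28.543 CPU seconds, amount to roughly a 7% increase - the figure cited in the replies that follow.

import cProfile
import pstats
from myhdl import Simulation

def profile_sim(bench, duration=100000):
    """Run a MyHDL testbench under cProfile and print the hottest functions."""
    sim = Simulation(bench())
    cProfile.runctx('sim.run(duration)', globals(), locals(), 'sim.prof')
    stats = pstats.Stats('sim.prof')
    stats.sort_stats('time').print_stats(25)

# e.g. profile_sim(some_bench_wrapping_mathSys1)   # hypothetical testbench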
From: Christopher F. <chr...@gm...> - 2009-05-28 16:06:08
>>>
>>
>> The change was to _intbv.py. All math operations were changed to
>> return an intbv type versus an int type.
>
> Thanks for the work, that really helps.
>
> In your example, the performance hit is around 7%. Not dramatic perhaps,
> but not negligible either.
>
> On the other hand, we would probably get this back easily (and more) if
> _intbv.py is redone in C, which may make a lot of sense just like Signal.
>
> I still hesitate. If the reason is purely esthetics, that doesn't seem
> enough for the change. But if there is a valid technical reason, I have
> nothing against it either. What does the community think?
>
> Jan

If such performance improvements were attempted, would it be better to focus on the Signal class? In the profiles I have captured, the Signal class consumes much more time than intbv.

C implementations get tricky - cross-compiling for multiple platforms, etc. Before such an effort is undertaken, it might make sense to simply define the C interface. The numpy folks have done something similar and defined "ctypes" types:

http://www.scipy.org/Cookbook/Ctypes
http://mentat.za.net/ctpug-numpy/ctypes.html

If "ctypes" types were defined for intbv and Signal, that might be enough to start implementing portions in C code. Ctypes might not be as efficient as actually embedding C code, but it could be an easier approach. I don't have much experience in this area; others may want to comment.

Another major bottleneck is how Python handles (schedules) the generators - would this be correct? Is the simulation performance mainly dictated by this?

It sounds like most are busy and this topic will have to be put on the back burner.

Chris
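As a purely illustrative example of what "defining the C interface" might mean in practice, a ctypes description of an intbv-like record could start out like this. All names here are made up - this is not an existing MyHDL or numpy API, and a fixed-width field obviously cannot capture intbv's arbitrary-precision values:

import ctypes

class c_intbv(ctypes.Structure):
    """Hypothetical C-side layout for a bounded integer value."""
    _fields_ = [
        ("val", ctypes.c_longlong),   # current value
        ("min", ctypes.c_longlong),   # lower bound, inclusive
        ("max", ctypes.c_longlong),   # upper bound, exclusive
    ]

# A shared library implementing the hot operations on this struct could then
# be loaded with ctypes.CDLL("./libintbv.so") and called from Python without
# hand-writing a CPython extension module.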
From: Jan D. <ja...@ja...> - 2009-06-08 20:38:08
Christopher Felton wrote:
>>>
>>> The change was to _intbv.py. All math operations were changed to
>>> return an intbv type versus an int type.
>>
>> Thanks for the work, that really helps.
>>
>> In your example, the performance hit is around 7%. Not dramatic perhaps,
>> but not negligible either.
>>
>> On the other hand, we would probably get this back easily (and more) if
>> _intbv.py is redone in C, which may make a lot of sense just like Signal.
>>
>> I still hesitate. If the reason is purely esthetics, that doesn't seem
>> enough for the change. But if there is a valid technical reason, I have
>> nothing against it either. What does the community think?
>>
>> Jan
>
> If such performance improvements were attempted, would it be better to
> focus on the Signal class? In the profiles I have captured, the Signal
> class consumes much more time than intbv.

It is clear that Signal is expensive. I prefer coding styles in which they are not over-used. In many cases Signal wraps an intbv object. From your profiles, is it always clear to what functionality the time should be attributed?

What you gain by a C implementation is very much dependent on the module. I once rewrote the basic simulation engine in MyHDL in C, with almost no gain. If you mainly use the Python API in C, there's little gain. On the other hand, if you can rewrite numeric operations in C directly instead of with the Python API, the gains can be very large. For this reason, I expect the potential relative gains for intbv to be much larger than for Signal. On the other hand, as you mention, numeric operations that mimic intbv (or int) may be very tricky in C. I wouldn't want to try it! The best starting point would probably be the Python int module.

> C implementations get tricky - cross-compiling for multiple platforms,
> etc. Before such an effort is undertaken, it might make sense to simply
> define the C interface. The numpy folks have done something similar and
> defined "ctypes" types:
> http://www.scipy.org/Cookbook/Ctypes
> http://mentat.za.net/ctpug-numpy/ctypes.html
>
> If "ctypes" types were defined for intbv and Signal, that might be enough
> to start implementing portions in C code. Ctypes might not be as efficient
> as actually embedding C code, but it could be an easier approach. I don't
> have much experience in this area; others may want to comment.
>
> Another major bottleneck is how Python handles (schedules) the generators
> - would this be correct? Is the simulation performance mainly dictated by
> this?

Python doesn't schedule the generators. The MyHDL Simulation does, according to a strict algorithm that mimics what VHDL does.

The simulation engine contains optimizations already. For example, every generator is inspected before the simulation starts. Depending on its sensitivity list, it is wrapped in an appropriate wrapper to minimize the number of tests during simulation. This makes sense because many generators are endless loops that run over and over again.

Jan

--
Jan Decaluwe - Resources bvba - http://www.jandecaluwe.com
Python as a HDL: http://www.myhdl.org
VHDL development, the modern way: http://www.sigasi.com
Analog design automation: http://www.mephisto-da.com
World-class digital design: http://www.easics.com
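A highly simplified sketch of that idea - selecting a specialized wrapper per generator by inspecting its sensitivity list once, up front - might look as follows. The class and function names here are invented; the real implementation in MyHDL's _Waiter.py is considerably more refined:

from myhdl import delay

class _GeneralWaiter(object):
    """Fallback wrapper: handles any mix of events in the sensitivity list."""
    def __init__(self, gen):
        self.gen = gen

class _DelayWaiter(object):
    """Cheaper wrapper for generators that only ever wait on a fixed delay."""
    def __init__(self, gen):
        self.gen = gen

def make_waiter(gen, senslist):
    # Inspect the sensitivity list once, before simulation starts, and pick
    # the least general (and therefore cheapest) wrapper that can serve it.
    if len(senslist) == 1 and isinstance(senslist[0], delay):
        return _DelayWaiter(gen)
    return _GeneralWaiter(gen)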
From: Jan D. <ja...@ja...> - 2009-05-16 06:18:27
Felton Christopher wrote:
>>>
>>
>> No, I think it would be straightforward. I expect that everything that
>> works now would continue working, except that you could do some
>> additional things. In particular, you could index and slice an
>> arithmetic expression.
>>
>> The only price would be a simulation performance hit. The question is
>> whether this is worthwhile for a feature that is seldom (?) used.
>> Someone would have to analyze this in detail to make a good
>> decision. Perhaps the performance hit is irrelevant - I don't know.
>>
>> Jan
>>
>
> I took some time to profile this change. I used the Python built-in
> cProfile. Below are some numbers (warning: lots of stuff below). The
> results are a little interesting. The total time for the math ops
> (__add__) effectively doubled, but the overall simulation time increased
> only slightly. From this limited test case (more needs to be done) I
> would conclude that having the intbv math ops return an intbv type has
> little effect on overall simulation performance.
>
> The change was to _intbv.py. All math operations were changed to
> return an intbv type versus an int type.

Thanks for the work, that really helps.

In your example, the performance hit is around 7%. Not dramatic perhaps, but not negligible either.

On the other hand, we would probably get this back easily (and more) if _intbv.py is redone in C, which may make a lot of sense just like Signal.

I still hesitate. If the reason is purely esthetics, that doesn't seem enough for the change. But if there is a valid technical reason, I have nothing against it either. What does the community think?

Jan

--
Jan Decaluwe - Resources bvba - http://www.jandecaluwe.com
Python as a HDL: http://www.myhdl.org
VHDL development, the modern way: http://www.sigasi.com
Analog design automation: http://www.mephisto-da.com
World-class digital design: http://www.easics.com