myhdl-list Mailing List for MyHDL (Page 109)
Brought to you by: jandecaluwe
From: Jan D. <ja...@ja...> - 2011-09-27 10:52:39

On 09/25/2011 10:38 AM, Bob Cunningham wrote:

> I'm mainly trying to get a feel for how flexible my FPGA development
> process should be. In software, we frequently create multiple
> independent tests for a single piece of code that runs in a single
> environment. In hardware, it seems we should first do our basic
> testbenches in all available environments (myHDL, cosimulation,
> hardware).

I don't understand what you mean here.

> In software we have many well-known metrics we use to determine when
> testing is thorough: static code inspection, modeling, input/output
> corner analysis, random vector analysis (black-box testing),
> path/state analysis, coverage analysis, and the list goes on.
> Embedded/real-time software basically gets tested to death. We also
> need to test that our software responds appropriately in the presence
> of significant hardware failures.
>
> Do equivalent metrics (and their support tools) exist in the hardware
> domain? The test benches I've seen so far appear to primarily be
> simple I/O stimulus. Do tools exist that ensure a testbench does
> indeed access all critical internal states (including corner cases)?
> That all gates have been forced to change state at least once?

A core idea of MyHDL is that you should be able to use any verification
methodology or tool that is available for general Python development.
Personally, I now use test-driven design and unit testing
systematically for hardware design.

> Or do I need to insert instrumentation into my circuit to expose
> internal states for access by a more sophisticated testbench? And
> couldn't some or all of the results acquired using such added
> instrumentation be invalidated when the circuit is run without the
> instrumentation? I would imagine the synthesis output could be very
> different, resulting in some changed functionality in the
> implementation.
>
> I suppose much of this may reside in the synthesis toolchain,
> something I have not yet thoroughly explored. I don't yet understand
> some of the content of the WebPACK ISE synthesis reports.

It seems to me you still have to come to terms with one fundamental
difference with software design: modeling/simulation and synthesis are
completely different things. Modeling with MyHDL can be very high
level and general. You can use full Python power and tools. However,
MyHDL is not going to magically solve the fact that synthesis is
conceptually quite low-level and restrictive.

> For example, one concern for me would be finding the maximum usable
> clock rate for a particular design in a particular FPGA. I haven't
> yet seen anything in a testbench that would help determine this. In
> software, we are seldom worried if the CPU is going too fast! Quite
> the opposite.

Surely in software there are also many metrics that your "test bench"
or verification code itself cannot help you with directly. Take
profiling, for example, to see how fast particular functions run.
Finding out the clock rate is similar, with the big advantage that it
is a static check. In both cases, you can use such feedback to go back
to the code and make changes.

> I'm wondering how gnarly I can expect my testing to get as my designs
> become more complex.
>
> -BobC

--
Jan Decaluwe - Resources bvba - http://www.jandecaluwe.com
    Python as a HDL: http://www.myhdl.org
    VHDL development, the modern way: http://www.sigasi.com
    World-class digital design: http://www.easics.com
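[Editor's note: the test-driven style Jan describes maps directly onto standard Python tooling. A minimal sketch, using a plain-Python reference model and the stdlib unittest module; the bin2gray encoder here is only an illustrative design, not code from this thread:]

```python
import unittest

def bin2gray(b):
    """Pure-Python reference model of a binary-to-Gray-code encoder."""
    return b ^ (b >> 1)

class TestBin2Gray(unittest.TestCase):
    def test_known_values(self):
        # First four Gray codes: 00, 01, 11, 10
        self.assertEqual([bin2gray(i) for i in range(4)], [0, 1, 3, 2])

    def test_single_bit_property(self):
        # Consecutive Gray codes must differ in exactly one bit.
        for i in range(255):
            diff = bin2gray(i) ^ bin2gray(i + 1)
            self.assertEqual(bin(diff).count("1"), 1)
```

[Run with `python -m unittest`. In a full flow the same test class can also drive a MyHDL simulation of the RTL description, so the model and the implementation share one test suite.]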
From: Jan D. <ja...@ja...> - 2011-09-27 07:46:52

On 09/27/2011 06:13 AM, David Greenberg wrote:

> When I try to run the following code in a file, it terminates with an
> error. It appears to occur when a memory list is used in a function
> that doesn't return something directly executable,

Hierarchy extraction (used for lower-level tasks like conversion and
waveform tracing) doesn't work for things that are not strictly
instances according to the MyHDL definition:

http://www.myhdl.org/doc/current/manual/modeling.html#structural-modeling

--
Jan Decaluwe - Resources bvba - http://www.jandecaluwe.com
    Python as a HDL: http://www.myhdl.org
    VHDL development, the modern way: http://www.sigasi.com
    World-class digital design: http://www.easics.com
From: David G. <dsg...@gm...> - 2011-09-27 04:13:08

When I try to run the following code in a file, it terminates with an
error. It appears to occur when a memory list is used in a function
that doesn't return something directly executable, even though I am
throwing away the other return value (but I'd like to be able to
return a usable function that is synthesized into a Verilog
task/function). What functions do I need to look at in the compiler to
add this feature, or why is it impossible in the current system?

Thanks,
David

    from myhdl import *

    def test(a, o):
        mem = [Signal(intbv(i)[32:]) for i in range(32)]

        @always_comb
        def logic():
            o.next = mem[a]

        def foo():
            pass

        return logic, foo

    clk = Signal(bool(0))

    @always(delay(10))
    def clkgen():
        clk.next = not clk

    def tb(clk):
        a = Signal(intbv(0)[32:])
        o = Signal(intbv(0)[32:])
        t_logic, foo = test(a, o)
        counter = Signal(intbv(0)[4:])

        @always(clk.posedge)
        def logic():
            counter.next = (counter + 1) % 16
            if counter == 3:
                a.next = 0
            else:
                a.next = 7

        return t_logic, logic

    toVerilog(tb, clk)
From: David G. <dsg...@gm...> - 2011-09-27 00:12:54

Have you ever heard of Guarded Atomic Actions? It's a model that lets
you map classes/methods into synthesizable constructs. Essentially, a
method has two implicit signals, "ready" and "start". When you call a
method, the "ready" signal is used as a guard, so that the logic using
that method call is transformed as follows:

    @always(clk.posedge)
    def exec():
        out.next = regfile.read0(in)

becomes

    @always(clk.posedge)
    def exec():
        if regfile.read0.ready:
            regfile.read0.start.next = bool(1)
            out.next = regfile.read0(in)

The power of this technique becomes apparent when you implement an
arbiter object that's able to multiplex N modules with M consumers,
dynamically scheduling the workload across them but allowing the code
that uses the modules to appear to use simple method calls.

I'd be interested in working on implementing this for MyHDL. There are
several papers on HDL synthesis with GAAs, but they're all about
Bluespec, which has another related feature (one-rule-at-a-time
semantics) that is often a hindrance. Here's one such paper:

http://csg.csail.mit.edu/pubs/memos/Memo-468new/memo468.pdf

Perhaps we could generate some discussion on what GAAs might look like
in MyHDL? One possible implementation would be to create an
@guarded(*signals) decorator that wraps the generator it contains with
guards on all the signals in the input list, and also allows for
guarded atomic actions to be invoked from within the generator (and
the compiler can infer the additional guards to add). Then, it would
be a matter of adding decorators to method calls/function calls to be
synthesized as gated modules rather than as tasks/functions (I come
from Verilog-land).

Under this, a register file could be implemented as:

    class RegFile():
        def __init__(self, clk, regs, bits):
            self.clk = clk
            self.regs = [Signal(intbv(0)[bits:]) for i in range(regs)]
            self.bypass_reg = Signal(intbv(0)[log2(regs)])
            self.bypass_val = Signal(intbv(0)[bits:])

        def read(self, reg):
            if bypass_reg == reg:
                return bypass_val
            else:
                return self.regs[reg]

        # this means that this should be synthesized with the gating
        # signals and logic, using the clk
        @action(clk)
        def write(self, reg, val):
            self.regs[reg].next = val
            # I'm not sure how to do the equivalent of verilog "assign"
            self.bypass_reg.next = reg
            self.bypass_val.next = val

    @always(clk.posedge)
    @guarded  # allows us to use GAAs in this generator
    def exec():
        if op == enum_t.ADD:
            regfile.write(dest_reg, operand0 + operand1)
        elif op == enum_t.SUB:
            # compiler understands the writes are mutually exclusive
            regfile.write(dest_reg, operand0 - operand1)
        # compiler can complain about a path containing 2 calls to
        # write in the control dependency graph
        regfile.write(0, 0)

Feedback?

On Mon, Sep 26, 2011 at 10:59 AM, Christopher Felton
<chr...@gm...> wrote:
> On 9/25/2011 5:26 PM, David Greenberg wrote:
>> Hi, I'm just getting started using MyHDL for a microprocessor project.
>> I am not sure what the best way to make a synthesizable register file
>> is. I'll use a 32 register, 32 bit width file as an example.
>>
>> All I can think of how to express easily is:
>> rfile_underlying = Signal(intbv(0)[(32 * 32) : ])
>> rfile = [rfile_underlying[32*(i+1):32*i] for i in range(32)]
>>
>> and then indexing into the rfile variable. This approach makes it hard
>> to bypass a register when it's being read and written in the same
>> cycle (assuming I want that behavior).
>>
>> I'd really like to be able to say:
>>
>> rfile = RegFile(bits = 32, size = 32, bypass = false)
>> and then have functions like
>> rfile.readPort0(reg)
>> rfile.readPort1(reg)
>> rfile.write(reg, value)
>>
>> I think I can do this with a class and functions, but I don't
>> understand how synthesis of classes or compound objects works if they
>> are used within generators.
>>
>> Thanks,
>> David
>
> Two questions here, basic concept how to generate -- what most call a
> register file -- and second how to organize in the code. Jan provided a
> link to answer the first question. Use a list of signals to create a
> homogeneous bit-vector array.
>
> The second part, you can't use classes as you show in your example if
> you want to convert the code. It will not convert with classes. Your
> example doesn't really fit a hardware description, more of a code flow
> (testbench?).
>
> When describing hardware you need to define concurrent processes and the
> mechanism for them to communicate. If you simply want to model the
> hardware you can do something like you specified, but if you want to
> convert/synthesize you can not. To generate actual hardware of a loose
> description the tools would need to make very large assumptions.
>
> What you can do, is create a RegFile function that configures/creates
> the underlying register and returns the write and read generators, e.g.:
>
> rfile_write, rfile_read = RegFile(clk, srst, wr, rd, data, reg_addr,
>     bits=32, size=32, bypass=False)
>
> You need to define the interface to the register file (Signals that
> allow them to communicate). Now you can use some nice features of
> Python to help define and manage register files.
>
> Hope that helps,
> Chris
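[Editor's note: the ready/start guard transformation David sketches can be prototyped in plain Python before any conversion concerns arise. A minimal behavioral sketch — the GuardedAction class and its attribute names are hypothetical illustrations, not an existing MyHDL API:]

```python
class GuardedAction:
    """Behavioral model of a guarded atomic action: the body runs only
    when the implicit 'ready' guard is true (sketch, not MyHDL API)."""

    def __init__(self, body):
        self.body = body
        self.ready = True    # implicit "ready" signal
        self.start = False   # implicit "start" signal

    def try_fire(self, *args):
        if self.ready:
            self.start = True
            self.body(*args)
            return True
        return False         # guard failed: the call is suppressed

# A toy register file whose write port is a guarded action.
regs = [0] * 4
write = GuardedAction(lambda reg, val: regs.__setitem__(reg, val))

write.try_fire(1, 42)    # ready: the write happens
write.ready = False      # e.g. an arbiter granted the port elsewhere
write.try_fire(1, 99)    # not ready: silently dropped
```

[An arbiter multiplexing N consumers onto the port would simply toggle each action's ready flag; the consuming code keeps writing plain method calls, which is the appeal of the scheme.]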
From: Christopher F. <chr...@gm...> - 2011-09-26 15:00:28

On 9/25/2011 5:26 PM, David Greenberg wrote:
> Hi, I'm just getting started using MyHDL for a microprocessor project.
> I am not sure what the best way to make a synthesizable register file
> is. I'll use a 32 register, 32 bit width file as an example.
>
> All I can think of how to express easily is:
> rfile_underlying = Signal(intbv(0)[(32 * 32) : ])
> rfile = [rfile_underlying[32*(i+1):32*i] for i in range(32)]
>
> and then indexing into the rfile variable. This approach makes it hard
> to bypass a register when it's being read and written in the same
> cycle (assuming I want that behavior).
>
> I'd really like to be able to say:
>
> rfile = RegFile(bits = 32, size = 32, bypass = false)
> and then have functions like
> rfile.readPort0(reg)
> rfile.readPort1(reg)
> rfile.write(reg, value)
>
> I think I can do this with a class and functions, but I don't
> understand how synthesis of classes or compound objects works if they
> are used within generators.
>
> Thanks,
> David

Two questions here: the basic concept of how to generate what most
call a register file, and second how to organize the code. Jan
provided a link to answer the first question. Use a list of signals to
create a homogeneous bit-vector array.

The second part: you can't use classes as you show in your example if
you want to convert the code. It will not convert with classes. Your
example doesn't really fit a hardware description; it is more of a
code flow (testbench?).

When describing hardware you need to define concurrent processes and
the mechanism for them to communicate. If you simply want to model the
hardware you can do something like you specified, but if you want to
convert/synthesize you cannot. To generate actual hardware from such a
loose description the tools would need to make very large assumptions.

What you can do is create a RegFile function that configures/creates
the underlying registers and returns the write and read generators,
e.g.:

    rfile_write, rfile_read = RegFile(clk, srst, wr, rd, data, reg_addr,
                                      bits=32, size=32, bypass=False)

You need to define the interface to the register file (Signals that
allow the processes to communicate). Now you can use some nice
features of Python to help define and manage register files.

Hope that helps,
Chris
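[Editor's note: the same-cycle bypass behavior under discussion is easy to pin down with a throwaway Python model before writing the convertible generators. A sketch — the RegFileModel class is hypothetical, models the semantics only, and is not convertible MyHDL:]

```python
class RegFileModel:
    """Behavioral model of a register file with optional write bypass:
    reading a register written in the same cycle returns the new value."""

    def __init__(self, size=32, bypass=True):
        self.regs = [0] * size
        self.bypass = bypass
        self.pending = None          # (reg, val) scheduled this cycle

    def write(self, reg, val):
        self.pending = (reg, val)    # committed at the next clock edge

    def read(self, reg):
        if self.bypass and self.pending and self.pending[0] == reg:
            return self.pending[1]   # forward the in-flight value
        return self.regs[reg]

    def clock_edge(self):
        # Commit the pending write at the cycle boundary.
        if self.pending is not None:
            r, v = self.pending
            self.regs[r] = v
            self.pending = None

rf = RegFileModel(size=32, bypass=True)
rf.write(3, 0xDEAD)
same_cycle = rf.read(3)   # bypassed: sees 0xDEAD before the edge
rf.clock_edge()
after_edge = rf.read(3)   # committed value
```

[Once the desired semantics are nailed down this way, the convertible version becomes a function returning read/write generators over a list of Signals, per Chris's suggestion.]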
From: Jan D. <ja...@ja...> - 2011-09-26 09:55:48

On 09/26/2011 01:18 AM, Bob Cunningham wrote:
> I've been scanning the myHDL manual trying to learn the limits of
> myHDL as an 'all-purpose' HDL, and came across the notion of
> user-defined code
> (http://www.myhdl.org/doc/current/whatsnew/0.7.html#new-method-to-specify-user-defined-code).
>
> This brings several questions to mind:
>
> 1. If I incorporate Verilog into a myHDL file via <func>.vhdl_code, is
> there a way I can cause an error to be generated if an attempt is
> made to convert the file using toVHDL()?

No, but you'll get errors from the VHDL compiler of course :-)

> And vice-versa? Is there
> any problem having both .vhdl_code and .verilog_code definitions?

No, that's the intention. toVHDL() and toVerilog() know where to look.

> 2. Can/should I use "<func>.verilog_code" to add Verilog synthesis
> pragmas to my myHDL code?

Probably, but why would you?

> 3. If I have some proven Verilog IP, can/should I wrap it in
> "<func>.verilog_code" so I can make it a tightly integrated part of
> my project?

Yes. This works especially well if you can use a high-level model in
MyHDL for simulation, while getting the IP inserted when converting.
Note that when you use those hooks, you no longer have to take the
conversion restrictions into account for the model that sits alongside
the hooks. toVerilog() or toVHDL() know when to stop converting.

> 4. Are there examples of using .verilog_code / .vhdl_code other than in
> myhdl-0.7/myhdl/test/conversion/[toVerilog|toVHDL]/test_newcustom.py?
>
> C. This actually keys into a much higher-level issue: Is myHDL
> intended to always be a pre-processor that emits Verilog and VHDL
> code? Or is myHDL in principle intended to eventually support its
> own compiler, as a peer to VHDL, Verilog, SystemC and the other
> 'native' HDLs?

As always in HDLs, we have two kinds of compilers: for
modeling/simulation and for synthesis.

The modeling/simulation part is OK today: you can use it for a
MyHDL-centric design flow. Note that large parts of the code, for
high-level modeling and test benches, don't need to be converted at
all. (Except when using test bench conversion as an alternative to
cosimulation for verifying RTL code.)

For synthesis, it is probably unrealistic to assume that an EDA vendor
will soon be interested in supporting MyHDL natively. I fear we'll
have to deal with the tricky conversion issues ourselves. However,
once you accept the conversion restrictions (which are severe, but
much less so than those for synthesis), the flow can be largely
automated. I like to think of Verilog/VHDL as a "back-end format" like
many others, that is used at one particular stage of an automated
flow.

> Is there anything 'missing' in myHDL that would prevent it becoming a
> 'full-fledged' HDL? If a tool vendor were to say "Let's create a
> myHDL RTL compiler to add to our Verilog and VHDL toolchains!", what
> limitations of myHDL would prevent success?
>
> Until myHDL gets its own synthesis chain, I think it should be able
> to cleanly and easily 'envelop' legacy HDL code. (Then we could wrap
> everything in OpenCores and make it a myHDL site! Bwa-ha-ha-ha!)
>
> -BobC

--
Jan Decaluwe - Resources bvba - http://www.jandecaluwe.com
    Python as a HDL: http://www.myhdl.org
    VHDL development, the modern way: http://www.sigasi.com
    World-class digital design: http://www.easics.com
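[Editor's note: the .verilog_code / .vhdl_code hooks being discussed are template strings attached to the hardware function, with Python $-style placeholders substituted by signal names during conversion. A minimal sketch of that substitution mechanism using stdlib string.Template — the inc snippet and signal names are illustrative assumptions, not code from this thread:]

```python
from string import Template

# Hypothetical user-defined Verilog for a small modulo-n counter; at
# conversion time the $-placeholders are replaced by actual signal names.
verilog_code = Template(
    "always @(posedge $clk) begin\n"
    "    $count <= ($count + 1) % $n;\n"
    "end\n"
)

rendered = verilog_code.substitute(clk="clk", count="count", n="4")
print(rendered)
```

[In real MyHDL usage the template string is assigned directly to the function attribute (e.g. `inc.verilog_code = "..."`) and the converter performs the substitution itself.]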
From: Jan D. <ja...@ja...> - 2011-09-26 09:36:35

On 09/26/2011 12:26 AM, David Greenberg wrote:
> Hi, I'm just getting started using MyHDL for a microprocessor project.
> I am not sure what the best way to make a synthesizable register file
> is. I'll use a 32 register, 32 bit width file as an example.

http://www.myhdl.org/doc/current/manual/conversion_examples.html#ram-inference

> All I can think of how to express easily is:
> rfile_underlying = Signal(intbv(0)[(32 * 32) : ])
> rfile = [rfile_underlying[32*(i+1):32*i] for i in range(32)]
>
> and then indexing into the rfile variable. This approach makes it hard
> to bypass a register when it's being read and written in the same
> cycle (assuming I want that behavior).
>
> I'd really like to be able to say:
>
> rfile = RegFile(bits = 32, size = 32, bypass = false)
> and then have functions like
> rfile.readPort0(reg)
> rfile.readPort1(reg)
> rfile.write(reg, value)
>
> I think I can do this with a class and functions, but I don't
> understand how synthesis of classes or compound objects works if they
> are used within generators.
>
> Thanks,
> David

--
Jan Decaluwe - Resources bvba - http://www.jandecaluwe.com
    Python as a HDL: http://www.myhdl.org
    VHDL development, the modern way: http://www.sigasi.com
    World-class digital design: http://www.easics.com
From: Bob C. <Fl...@gm...> - 2011-09-25 23:19:05

I've been scanning the myHDL manual trying to learn the limits of
myHDL as an 'all-purpose' HDL, and came across the notion of
user-defined code
(http://www.myhdl.org/doc/current/whatsnew/0.7.html#new-method-to-specify-user-defined-code).

This brings several questions to mind:

1. If I incorporate Verilog into a myHDL file via <func>.vhdl_code, is
there a way I can cause an error to be generated if an attempt is made
to convert the file using toVHDL()? And vice-versa? Is there any
problem having both .vhdl_code and .verilog_code definitions?

2. Can/should I use "<func>.verilog_code" to add Verilog synthesis
pragmas to my myHDL code?

3. If I have some proven Verilog IP, can/should I wrap it in
"<func>.verilog_code" so I can make it a tightly integrated part of my
project?

4. Are there examples of using .verilog_code / .vhdl_code other than in
myhdl-0.7/myhdl/test/conversion/[toVerilog|toVHDL]/test_newcustom.py?

There are several underlying issues:

A. How best to manage a mix of generated and external Verilog and VHDL
files?

In the best general case, I would like *all* Verilog and VHDL files to
be dynamically generated by myHDL, and thus be treated as disposable
temporary files in the build and synthesis process.

That would require absorbing all external Verilog/VHDL IP into myHDL
files via "<func>.verilog_code" and "<func>.vhdl_code" (or a similar
future mechanism). We've been doing this for decades in the C++
community, wrapping C code with 'extern "C" { ... }' before feeding it
to a C++ compiler.

Is a similar technique considered a 'best practice' for using 'legacy'
HDL code in a myHDL project? I'm very much in favor of such a policy
for at least three reasons: 1) it eliminates the presence of
non-temporary VHDL and Verilog files in the myHDL process (only myHDL
files need tracking), 2) over time it would encourage migrating the
wrapped 'legacy' code into myHDL, and 3) I can more easily use Python
testbenches to test Verilog and VHDL code.

B. It may be useful for myHDL objects to be able to define which of
toVerilog() or toVHDL() should be applied to themselves, rather than
have this solely be an option of an external agent. In particular, I'd
like the need for 'bad' myHDL code to be present to force an error and
remind the user to use the 'other' converter function.

*Feature Wish*: How about a generic "toLegacyHDL()" call that would
access a system/project default conversion preference if a function
fails to set a preference of its own (by defining either .verilog_code
or .vhdl_code)? That is, it would automatically call toVerilog() if
.verilog_code is present, and toVHDL() if .vhdl_code is present. This
would be convenient when wrapping imported 'legacy' IP. When the
legacy IP has been correctly converted to myHDL, the legacy code could
be retained as a comment by deleting or commenting-out the
.verilog_code or .vhdl_code tag.

C. This actually keys into a much higher-level issue: Is myHDL
intended to always be a pre-processor that emits Verilog and VHDL
code? Or is myHDL in principle intended to eventually support its own
compiler, as a peer to VHDL, Verilog, SystemC and the other 'native'
HDLs?

When I first started using C++ (way back when it was known as "C with
Classes"), we had to use the AT&T / Bell Labs "cfront" preprocessor,
which converted C++ code to something a C compiler could understand.
But this was a very slow and inefficient process, and it wasn't until
native C++ compilers came along that the full power of C++ could be
unleashed, and the language could reach its full potential.

Is there anything 'missing' in myHDL that would prevent it becoming a
'full-fledged' HDL? If a tool vendor were to say "Let's create a myHDL
RTL compiler to add to our Verilog and VHDL toolchains!", what
limitations of myHDL would prevent success?

Yes, I have an ulterior motive here: I would very much like there to
one day be a world where I could create FPGA code without having to
use (or learn) any HDL other than myHDL. Based on what I've seen so
far, I believe myHDL can/should be the 'top dog' in the HDL world, if
only because it a) is less verbose and is more clear (to me) than
either of Verilog or VHDL, and b) it unleashes the full power of
Python in testbenches.

Until myHDL gets its own synthesis chain, I think it should be able to
cleanly and easily 'envelop' legacy HDL code. (Then we could wrap
everything in OpenCores and make it a myHDL site! Bwa-ha-ha-ha!)

-BobC
From: David G. <dsg...@gm...> - 2011-09-25 22:26:13

Hi, I'm just getting started using MyHDL for a microprocessor project.
I am not sure what the best way to make a synthesizable register file
is. I'll use a 32 register, 32 bit width file as an example.

All I can think of how to express easily is:

    rfile_underlying = Signal(intbv(0)[(32 * 32) : ])
    rfile = [rfile_underlying[32*(i+1):32*i] for i in range(32)]

and then indexing into the rfile variable. This approach makes it hard
to bypass a register when it's being read and written in the same
cycle (assuming I want that behavior).

I'd really like to be able to say:

    rfile = RegFile(bits = 32, size = 32, bypass = false)

and then have functions like

    rfile.readPort0(reg)
    rfile.readPort1(reg)
    rfile.write(reg, value)

I think I can do this with a class and functions, but I don't
understand how synthesis of classes or compound objects works if they
are used within generators.

Thanks,
David
From: Bob C. <Fl...@gm...> - 2011-09-25 08:38:51

On 09/24/2011 07:50 PM, Christopher Felton wrote:
> Is there a particular issue you are having with co-simulation? Or does
> it seem like a steep learning curve? Which it might be if you have not
> used other HDL simulators. I think if you follow the examples and use
> Icarus that will get you pretty far.

I'm mainly trying to get a feel for how flexible my FPGA development
process should be. In software, we frequently create multiple
independent tests for a single piece of code that runs in a single
environment. In hardware, it seems we should first do our basic
testbenches in all available environments (myHDL, cosimulation,
hardware).

In software we have many well-known metrics we use to determine when
testing is thorough: static code inspection, modeling, input/output
corner analysis, random vector analysis (black-box testing),
path/state analysis, coverage analysis, and the list goes on.
Embedded/real-time software basically gets tested to death. We also
need to test that our software responds appropriately in the presence
of significant hardware failures.

Do equivalent metrics (and their support tools) exist in the hardware
domain? The test benches I've seen so far appear to primarily be
simple I/O stimulus. Do tools exist that ensure a testbench does
indeed access all critical internal states (including corner cases)?
That all gates have been forced to change state at least once
(coverage)?

Or do I need to insert instrumentation into my circuit to expose
internal states for access by a more sophisticated testbench? And
couldn't some or all of the results acquired using such added
instrumentation be invalidated when the circuit is run without the
instrumentation? I would imagine the synthesis output could be very
different, resulting in some changed functionality in the
implementation.

I suppose much of this may reside in the synthesis toolchain,
something I have not yet thoroughly explored. I don't yet understand
some of the content of the WebPACK ISE synthesis reports.

For example, one concern for me would be finding the maximum usable
clock rate for a particular design in a particular FPGA. I haven't yet
seen anything in a testbench that would help determine this. In
software, we are seldom worried if the CPU is going too fast! Quite
the opposite.

I'm wondering how gnarly I can expect my testing to get as my designs
become more complex.

-BobC
From: Christopher F. <chr...@gm...> - 2011-09-25 02:51:00
|
On 9/24/11 5:49 PM, Bob Cunningham wrote: > On 09/24/2011 01:05 PM, Christopher Felton wrote: >> On 9/24/11 2:56 AM, Bob Cunningham wrote: >>> I'm trying to get a better grasp of the limits of cosimulation within myHDL. I've been reading this manual section (http://www.myhdl.org/doc/0.6/manual/cosimulation.html#only-passive-hdl-can-be-co-simulated) and I want to be very clear about the meaning of one phrase in that section: "time delays are meaningless in synthesizable code" >>> >>> Does this simply mean that, in a synthesized implementation, the delays are "whatever" they are, and nothing done in the cosimulation domain can properly account or even allow for them? (I'm not yet a digital designer: I'm a 25-year veteran embedded/real-time software wonk and algorithm specialist finally making the leap into FPGAs.) >>> >> Couple ways to look at this, say you are co-simulating MyHDL and Verilog >> (not converted MyHDL -> Verilog). And your Verilog code has the following: >> >> always(*) begin >> x = #10 y; >> end >> >> The "delay" (#10) in Verilog (in the Verilog simulator) doesn't relate >> back to anything in the MyHDL world. The Verilog simulator will have >> its time step based on the settings or `timescale and the MyHDL >> simulator has it time step, these may not be the same in other-words >> they Verilog simulator and MyHDL simulator simulation steps are not >> related. They are syncronized when the MyHDL is the simulation master. >> When the MyHDL testbench generates the singals that sync e.g. clocks. >> >> A delay is not synthesizable, a synthesizer will not try and generate >> some circuit (wound wire) to match the delay. >> >> In general, if you are converting MyHDL to Verilog/VHDL this should not >> be much of an issue, unless you have "yield delay(N)" in the code you >> are trying to convert (I don't know if that statement will convert, it >> might for the testbench conversions). 
> > From what little I know, it seems "yield delay(n)" is mainly used to avoid adding yet another clock, to avoid race conditions and allow more delta-cycles to occur during simulation. Is that right? If so, is there always a clean way to avoid using "yield delay(n)" without causing a clock explosion? (From what I've seen, creating and routing clocks can quickly get expensive.) > > I suppose my main concern is more about how to get the most from my myHDL simulations and Verilog cosimulations before testing a programmed FPGA. > If you follow the tutorials and cookbook examples these will get you started with usable testbenches. In general, just use the "delay" in the test code and not code that is intended to be converted. The "yield delay(n)" is used to control time. One way to do this is use the "clock" and count ticks another is to delay *n* simulation ticks/steps. > If I'm doing all my own work in myHDL, is there really any need to do Verilog cosimulation? Should I reasonably expect to go from myHDL to a programmed part? (Adding only I/O assignment.) > In my mind, yes. This is a very common flow. After each translation (conversion, synthesis, etc) some method is used to validate the transition. You still want to verify that the converted and synthesized circuits act as you expect. > When I'm merging OPIP (Other-People's IP) into my own projects, then I suspect I'll want to do lots of cosimulation, especially to ensure the Verilog generated from my myHDL code integrates well with the other code I'm using (such as from OpenCores). If so, I'll want to integrate cosimulation into my workflow right from the start (mainly to get lots of practice with it). > Yes, I would agree. > In the specific case you mention, can it be handled by writing a myHDL wrapper that, when converted to Verilog, would permit the rest of the myHDL test bench to work without problems? That is, can I reasonably expect to be able to shove such problems completely into the Verilog domain? 
> > I also want my myHDL testbenches to be as useful as possible for testing programmed parts. I see three ways to go in this area: > 1. Create synthesizable testbenches. This is not preferred in the general case, since it would eliminate much of the power of having a Python test environment. > 2. Create a logic analyzer(-ish) interface between the myHDL test bench and the programmed part. > 3. A merged approach, where a more-or-less generic test interface is programmed with the DUT with a higher-level interface to the myHDL testbench. (Put the fast stuff in the FPGA, with slower interactions on the PC.) > I would just abstract some of the interfaces. Then you can test the design functionally. I did this with the USBP project: the USB interface was abstracted, so in simulation I could use the same test code that would be used with the hardware. For example, if you wanted to test an OpenCores UART, you could create test code with an interface as simple as write and read. For simulation you would create a cycle- and bit-accurate interface; for the real hardware, simply call the pyserial functions. > > Perhaps some perspective on where I'm coming from would help: When I started doing embedded/real-time systems 25 years ago, I had to hand-tune assembler to maximize performance from 8-bit micros operating at 1 MHz, where it was rare to have an integer divide instruction. Next, as problem complexity and precision grew and timing deadlines shrank, I often used hand-tuned DSP code. As silicon speeds increased and compiler technology improved, it became practical to tackle ever more complex problems using higher-level languages. Increased memory sizes permitted use of ever more sophisticated RTOSes. Then came embedded CISC processors with caches and FPUs, and I was back to hand-tuning to avoid cache thrashing and pipeline stalls. 
Next came MMX and SSE, soon followed by multiprocessing and multicore, all of which required more hand-tuning to keep the hardware fully utilized. Yet I was increasingly able to cost-effectively solve relatively big/hard problems using commodity platforms and a tweaked Linux kernel. > > The next big change came with programming for GPUs. It felt like I was starting over. GPUs are almost antithetical to use in embedded/real-time systems given the long startup and pipeline delay times required to make full use of a GPU. That's when I decided to pursue FPGA development, since the tools are far more mature than current GPU tools (OpenCL and CUDA). Plus, after finding myHDL, I get to use Python! > > Basically, I'm an algorithm guy with a fetish for fast, efficient implementations. I'm a fanatic about testing, and about getting from concept to implementation as quickly as possible while maximizing quality. > Sounds like MyHDL is a good fit for your goals. > How much should I expect to rely on cosimulation? > As mentioned, and since you state you are a fanatic about testing (and verification), I would say you would prefer to include co-simulation as much as possible. I see many FPGA developers use very little simulation, period, because they do all the testing in hardware. Personally, I like to simulate and co-simulate and be done :) Is there a particular issue you are having with co-simulation? Or does it seem like a steep learning curve? It might be, if you have not used other HDL simulators. I think if you follow the examples and use Icarus that will get you pretty far. Regards, Chris |
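The interface-abstraction approach Chris describes (same test code against a simulated interface or against real hardware via pyserial) can be sketched like this. The class names are hypothetical, and the simulation backend is a trivial loopback standing in for a cycle/bit-accurate UART model:

```python
# Sketch of abstracting the DUT interface so functional test code runs
# unchanged against simulation or hardware. Names are illustrative only.

class UartInterface:
    """Abstract byte-oriented test interface."""
    def write(self, data: bytes):
        raise NotImplementedError
    def read(self, n: int) -> bytes:
        raise NotImplementedError

class LoopbackSim(UartInterface):
    """Simulation backend: a trivial loopback standing in for the
    cycle/bit-accurate model that would drive the simulated UART."""
    def __init__(self):
        self.buf = bytearray()
    def write(self, data: bytes):
        self.buf.extend(data)
    def read(self, n: int) -> bytes:
        out, self.buf = bytes(self.buf[:n]), self.buf[n:]
        return out

# A hardware backend would implement the same interface with pyserial:
#   class SerialHw(UartInterface):
#       def __init__(self, port): self.ser = serial.Serial(port, 115200)
#       def write(self, data): self.ser.write(data)
#       def read(self, n): return self.ser.read(n)

def check_echo(uart: UartInterface) -> bool:
    """Functional test code that runs unchanged against either backend."""
    uart.write(b"\xA5\x5A")
    return uart.read(2) == b"\xA5\x5A"
```

Only the backend object changes between simulation and the programmed part; check_echo() and the rest of the test suite stay identical.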
From: Bob C. <Fl...@gm...> - 2011-09-24 22:49:48
|
On 09/24/2011 01:05 PM, Christopher Felton wrote: > On 9/24/11 2:56 AM, Bob Cunningham wrote: >> I'm trying to get a better grasp of the limits of cosimulation within myHDL. I've been reading this manual section (http://www.myhdl.org/doc/0.6/manual/cosimulation.html#only-passive-hdl-can-be-co-simulated) and I want to be very clear about the meaning of one phrase in that section: "time delays are meaningless in synthesizable code" >> >> Does this simply mean that, in a synthesized implementation, the delays are "whatever" they are, and nothing done in the cosimulation domain can properly account or even allow for them? (I'm not yet a digital designer: I'm a 25-year veteran embedded/real-time software wonk and algorithm specialist finally making the leap into FPGAs.) >> > Couple ways to look at this, say you are co-simulating MyHDL and Verilog > (not converted MyHDL -> Verilog). And your Verilog code has the following: > > always(*) begin > x = #10 y; > end > > The "delay" (#10) in Verilog (in the Verilog simulator) doesn't relate > back to anything in the MyHDL world. The Verilog simulator will have > its time step based on the settings or `timescale and the MyHDL > simulator has it time step, these may not be the same in other-words > they Verilog simulator and MyHDL simulator simulation steps are not > related. They are syncronized when the MyHDL is the simulation master. > When the MyHDL testbench generates the singals that sync e.g. clocks. > > A delay is not synthesizable, a synthesizer will not try and generate > some circuit (wound wire) to match the delay. > > In general, if you are converting MyHDL to Verilog/VHDL this should not > be much of an issue, unless you have "yield delay(N)" in the code you > are trying to convert (I don't know if that statement will convert, it > might for the testbench conversions). 
From what little I know, it seems "yield delay(n)" is mainly used to avoid adding yet another clock, to avoid race conditions and allow more delta-cycles to occur during simulation. Is that right? If so, is there always a clean way to avoid using "yield delay(n)" without causing a clock explosion? (From what I've seen, creating and routing clocks can quickly get expensive.) I suppose my main concern is more about how to get the most from my myHDL simulations and Verilog cosimulations before testing a programmed FPGA. If I'm doing all my own work in myHDL, is there really any need to do Verilog cosimulation? Should I reasonably expect to go from myHDL to a programmed part? (Adding only I/O assignment.) When I'm merging OPIP (Other-People's IP) into my own projects, then I suspect I'll want to do lots of cosimulation, especially to ensure the Verilog generated from my myHDL code integrates well with the other code I'm using (such as from OpenCores). If so, I'll want to integrate cosimulation into my workflow right from the start (mainly to get lots of practice with it). In the specific case you mention, can it be handled by writing a myHDL wrapper that, when converted to Verilog, would permit the rest of the myHDL test bench to work without problems? That is, can I reasonably expect to be able to shove such problems completely into the Verilog domain? I also want my myHDL testbenches to be as useful as possible for testing programmed parts. I see three ways to go in this area: 1. Create synthesizable testbenches. This is not preferred in the general case, since it would eliminate much of the power of having a Python test environment. 2. Create a logic analyzer(-ish) interface between the myHDL test bench and the programmed part. 3. A merged approach, where a more-or-less generic test interface is programmed with the DUT with a higher-level interface to the myHDL testbench. (Put the fast stuff in the FPGA, with slower interactions on the PC.) 
Perhaps some perspective on where I'm coming from would help: When I started doing embedded/real-time systems 25 years ago, I had to hand-tune assembler to maximize performance from 8-bit micros operating at 1 MHz, where it was rare to have an integer divide instruction. Next, as problem complexity and precision grew and timing deadlines shrank, I often used hand-tuned DSP code. As silicon speeds increased and compiler technology improved, it became practical to tackle ever more complex problems using higher-level languages. Increased memory sizes permitted use of ever more sophisticated RTOSes. Then came embedded CISC processors with caches and FPUs, and I was back to hand-tuning to avoid cache thrashing and pipeline stalls. Next came MMX and SSE, soon followed by multiprocessing and multicore, all of which required more hand-tuning to keep the hardware fully utilized. Yet I was increasingly able to cost-effectively solve relatively big/hard problems using commodity platforms and a tweaked Linux kernel. The next big change came with programming for GPUs. It felt like I was starting over. GPUs are almost antithetical to use in embedded/real-time systems given the long startup and pipeline delay times required to make full use of a GPU. That's when I decided to pursue FPGA development, since the tools are far more mature than current GPU tools (OpenCL and CUDA). Plus, after finding myHDL, I get to use Python! Basically, I'm an algorithm guy with a fetish for fast, efficient implementations. I'm a fanatic about testing, and about getting from concept to implementation as quickly as possible while maximizing quality. How much should I expect to rely on cosimulation? -BobC |
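The role of "yield delay(n)" discussed in this thread, a generator yielding the number of simulation steps to wait before it is resumed, can be illustrated with a toy event scheduler. This is a deliberately simplified sketch, not MyHDL's actual simulation kernel:

```python
# Toy discrete-event scheduler: each generator yields a delay (in
# simulation steps) and is resumed when that delay expires. This mimics
# the *idea* of MyHDL's "yield delay(n)", not its implementation.
import heapq

def run(generators, until=100):
    """Advance simulated time, resuming each generator when its delay expires."""
    queue = [(0, i, g) for i, g in enumerate(generators)]  # (time, id, gen)
    heapq.heapify(queue)
    trace = []
    while queue:
        now, i, g = heapq.heappop(queue)
        if now > until:
            break
        try:
            d = next(g)               # run until the next "yield delay"
            trace.append((now, i))    # record when process i ran
            heapq.heappush(queue, (now + d, i, g))
        except StopIteration:
            pass
    return trace

def blinker(period):
    while True:
        yield period                  # "yield delay(period)" in MyHDL terms

# Two processes with different periods interleave deterministically:
trace = run([blinker(10), blinker(15)], until=30)
```

With periods 10 and 15, the scheduler resumes process 0 at times 0, 10, 20, 30 and process 1 at times 0, 15, 30, which shows why delays control simulated time rather than any real or synthesized delay.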
From: Christopher F. <chr...@gm...> - 2011-09-24 20:05:36
|
On 9/24/11 2:56 AM, Bob Cunningham wrote: > I'm trying to get a better grasp of the limits of cosimulation within myHDL. I've been reading this manual section (http://www.myhdl.org/doc/0.6/manual/cosimulation.html#only-passive-hdl-can-be-co-simulated) and I want to be very clear about the meaning of one phrase in that section: "time delays are meaningless in synthesizable code" > > Does this simply mean that, in a synthesized implementation, the delays are "whatever" they are, and nothing done in the cosimulation domain can properly account or even allow for them? (I'm not yet a digital designer: I'm a 25-year veteran embedded/real-time software wonk and algorithm specialist finally making the leap into FPGAs.) > There are a couple of ways to look at this. Say you are co-simulating MyHDL and Verilog (not converted MyHDL -> Verilog), and your Verilog code has the following: always @(*) begin x = #10 y; end The "delay" (#10) in Verilog (in the Verilog simulator) doesn't relate back to anything in the MyHDL world. The Verilog simulator will have its time step based on the settings or `timescale, and the MyHDL simulator has its time step; these may not be the same. In other words, the Verilog simulator and MyHDL simulator simulation steps are not related. They are synchronized when MyHDL is the simulation master, i.e. when the MyHDL testbench generates the signals that sync them, e.g. clocks. A delay is not synthesizable; a synthesizer will not try to generate some circuit (wound wire) to match the delay. In general, if you are converting MyHDL to Verilog/VHDL this should not be much of an issue, unless you have "yield delay(N)" in the code you are trying to convert (I don't know if that statement will convert; it might for the testbench conversions). > As an aside, I've tried to run all the cosimulation examples and tests in the myHDL distribution (myhdl-0.7/cosimulation/* and myhdl-0.7/myhdl/test/conversion/*), but only Icarus works. 
Where can I find more information about getting the other environments to work with myHDL cosimulation? Or should I not worry about it, and simply stick with Iverilog? > > The main reason I want to use cosimulation is because WebPACK ISE is v-e-r-y s-l-o-w for even simple examples. I want to invoke it only on designs that have a good chance of working the first time they are written to an FPGA. > I have successfully co-simulated with the following:

  iverilog (Icarus)
  vsim (Modelsim)
  cver (open-source cver)
  cvc (Tachyon Design Automation)
  ncsim (Cadence)

I don't know if there is much reason not to use Icarus, unless there is some feature that it is missing and another simulator supports. There might not be any explicit directions on the wiki for other simulators. If there is a different simulator you want to use, the newsgroup might be able to give some advice. Regards, Chris |
From: Bob C. <Fl...@gm...> - 2011-09-24 07:56:52
|
I'm trying to get a better grasp of the limits of cosimulation within myHDL. I've been reading this manual section (http://www.myhdl.org/doc/0.6/manual/cosimulation.html#only-passive-hdl-can-be-co-simulated) and I want to be very clear about the meaning of one phrase in that section: "time delays are meaningless in synthesizable code" Does this simply mean that, in a synthesized implementation, the delays are "whatever" they are, and nothing done in the cosimulation domain can properly account or even allow for them? (I'm not yet a digital designer: I'm a 25-year veteran embedded/real-time software wonk and algorithm specialist finally making the leap into FPGAs.) As an aside, I've tried to run all the cosimulation examples and tests in the myHDL distribution (myhdl-0.7/cosimulation/* and myhdl-0.7/myhdl/test/conversion/*), but only Icarus works. Where can I find more information about getting the other environments to work with myHDL cosimulation? Or should I not worry about it, and simply stick with Iverilog? The main reason I want to use cosimulation is because WebPACK ISE is v-e-r-y s-l-o-w for even simple examples. I want to invoke it only on designs that have a good chance of working the first time they are written to an FPGA. TIA, -BobC |
From: Bob C. <Fl...@gm...> - 2011-09-22 09:54:08
|
I'm making my way through the myHDL manual, tutorials and cookbook using PyPy. Several things have been needed to make PyPy work for the tutorials:

1. Install pypy 1.6: For Fedora, it is in Rawhide. For other platforms, follow these instructions: http://pypy.org/download.html

2. Install setuptools:
   wget http://python-distribute.org/distribute_setup.py
   sudo pypy distribute_setup.py
   sudo ln -s /usr/lib/pypy-1.6/bin/easy_install /usr/bin/pypy_easy_install

3. Install pytest:
   sudo pypy_easy_install -U pytest
   sudo ln -s /usr/lib/pypy-1.6/bin/py.test /usr/bin/pypy.test

As you can see above, I created two soft links: 1) for easy_install as pypy_easy_install, and 2) for py.test as pypy.test. Both were done mainly to keep my default python install pristine while having the pypy tools in my path. So instead of running py.test, I run pypy.test. I've completed the StopWatch and SineComputer examples, and so far everything works fine. I'm using Eclipse + PyDev along with WebPACK ISE. For simplicity, I'm using the default Eclipse project structure (~/EclipseWorkspace/<ProjectName>/src) with a ~/EclipseWorkspace/<ProjectName>/ise directory for the Xilinx project. For cosimulation I tried to use cver, but it segfaults on my Fedora 15 system, so I've been modifying the tutorials as I go to use Icarus, which works well. For example, here's my modification to SineComputer_v() in SineComputer.py:

def SineComputer_v(cos_z0, sin_z0, done, z0, start, clock, reset):
    """Perform Verilog cosimulation."""
    toVerilog(SineComputer, **locals())
    # Presently, cver segfaults on my system.
    #cmd = "cver -q +loadvpi=myhdl_vpi:vpi_compat_bootstrap SineComputer.v tb_SineComputer.v"
    #return Cosimulation(cmd, **locals())
    # Use Icarus instead.
    os.system("iverilog -o SineComputer SineComputer.v tb_SineComputer.v")
    return Cosimulation("vvp -m ./myhdl.vpi SineComputer", **locals())

One other thing: I could not find "myhdl.vpi", but was able to build it from myhdl_vpi.c. 
Copying myhdl.vpi into the local project directory seems wasteful, but I haven't yet looked for a system-wide solution. If anyone has any other hints to add, I'm all ears! -BobC |
From: Shakthi K. <sha...@gm...> - 2011-09-17 02:04:41
|
Hi Christopher: --- On Fri, Sep 16, 2011 at 11:35 PM, Christopher Felton <chr...@gm...> wrote: | This feeds an argument against the explicit logic walk through because, | as you said, the focus is Python and Python-MyHDL, IMO. \-- Yes, it is an exception. I found all the examples from the manual to be simple and easy to follow except for the reasoning behind using the XOR for the bin2gray example. Hence, the need for the explicit logic walk. --- | That is the point, a negative value assigned to an unsigned causes a | value error (you won't see this in Verilog). You need the correct | bounds to represent a number. Failing examples are as useful as correct | examples. \-- Makes sense! Will include them. --- | I was thinking a slide that summarizes | up to the point (just before the pypy) will be a better transition to | the pypy information. Summarizing the design flow is one possibility. \-- Will include this. Thanks for your feedback and prompt replies. Appreciate it! SK -- Shakthi Kannan http://www.shakthimaan.com |
From: Christopher F. <chr...@gm...> - 2011-09-16 18:06:04
|
On 9/16/2011 11:17 AM, Shakthi Kannan wrote: > Hi Christopher: > > --- On Fri, Sep 16, 2011 at 8:19 PM, Christopher Felton > <chr...@gm...> wrote: > | The number of slides dedicated to explaining Gray code is a little > | excessive. After the truth table slide, you can probably jump to slide > | 20 (think it is 20). And then jump to the last slide of the Gray table. > | Explicitly walking through XOR of 0s is ... kinda ... yawn. > \-- > > :) There will be quite a few students in the audience, and I don't > want them to ponder on the logic as to how it works. Hence, the > explicit walk through. Others can just skip through that part. Sure, it is ok. But, my opinion, you only need to show 0+0=0 once, and you can jump ahead. But no foul no harm. > > The manual is well documented, so I didn't want to repeat the same in > the presentation. Moreover, the focus is on how Python and > python-myhdl is useful for HDL. Hence, just the code examples and > illustrations. This feeds an argument against the explicit logic walk through because, as you said, the focus is Python and Python-MyHDL, IMO. > > --- > | I would show a signed and unsigned representation before converting > | (extending) an unsigned to a signed. > | a = intbv(12, min=0, max=16) # Unsigned > | a = intbv(-12, min=0, max=16) # What happens if unsigned to set sign > | b = intbv(-12, min=16, max=16) # Signed > | # Then conversion to signed > \-- > > For "a = intbv(-12, min=0, max=16)", how can we set value to -12, when > minimum is 0? Same with "b = intbv(-12, min=16, max=16)". These return > a ValueError. > > Can you elaborate on what you would like to emphasize here? That is the point, a negative value assigned to an unsigned causes a value error (you won't see this in Verilog). You need the correct bounds to represent a number. Failing examples are as useful as correct examples. > > --- > | Unit Tests > | ----------- > | Emphasize, this is one of the reasons why Python HDL *rules*. 
You > | inherit the world of Python and its ecosystem and you don't have to > | reinvent the world -- unless that is what you do and you can convince > | your customers to pay for it :) --. > \-- > > True :) > > --- > | I guess this is the same for the "Structural Modeling", in the > | discussion make clear the "structural" and "RTL" are convertible and the > | other "modeling" is not. > \-- > > I have now explicitly mentioned "High Level Modelling" for the sparse > memory and fifo examples at: > > http://www.myhdl.org/doc/current/manual/modeling.html#high-level-modeling Cool, I think that works, stating the different types of modeling, Structural, RTL, and HL. > > --- > | Before the PyPy you might want a quick summary to this point. > | * Describe the HDL in Python/MyHDL > | * Simulate > | * Convert to Verilog or VHDL > | * Co-simulate to verify conversion correctness > | * Synthesize to target technology > | * Co-simulate with synthesized netlist > \-- > > Yes, I have demos for the examples, and will use them during the presentation. That isn't what I was thinking. I was thinking a slide that summarizes up to the point (just before the pypy) will be a better transition to the pypy information. Summarizing the design flow is one possibility. > > --- > | Using PyPy > | ----------- > | Since these tables are directly from Jan's wikipage I would add the > | reference to this slide. > \-- > > Updated. > > --- > | Also, it > | would be nice to see this table in a normalized percentage. You could > | normalize to MyHDL@pypy or Normalize to one of the others, example. > | > | Or you could normalize to pypy and see > | how the others compare. > | > | I guess, normalizing to pypy is the most dramatic and gets the > | information across the fastest, IMO. > \-- > > Updated. Thanks for your feedback! > > SK > No problem, have fun. Chris Felton |
From: Shakthi K. <sha...@gm...> - 2011-09-16 16:17:54
|
Hi Christopher: --- On Fri, Sep 16, 2011 at 8:19 PM, Christopher Felton <chr...@gm...> wrote: | The number of slides dedicated to explaining Gray code is a little | excessive. After the truth table slide, you can probably jump to slide | 20 (think it is 20). And then jump to the last slide of the Gray table. | Explicitly walking through XOR of 0s is ... kinda ... yawn. \-- :) There will be quite a few students in the audience, and I don't want them to ponder on the logic as to how it works. Hence, the explicit walk through. Others can just skip through that part. The manual is well documented, so I didn't want to repeat the same in the presentation. Moreover, the focus is on how Python and python-myhdl is useful for HDL. Hence, just the code examples and illustrations. --- | I would show a signed and unsigned representation before converting | (extending) an unsigned to a signed. | a = intbv(12, min=0, max=16) # Unsigned | a = intbv(-12, min=0, max=16) # What happens if unsigned to set sign | b = intbv(-12, min=16, max=16) # Signed | # Then conversion to signed \-- For "a = intbv(-12, min=0, max=16)", how can we set value to -12, when minimum is 0? Same with "b = intbv(-12, min=16, max=16)". These return a ValueError. Can you elaborate on what you would like to emphasize here? --- | Unit Tests | ----------- | Emphasize, this is one of the reasons why Python HDL *rules*. You | inherit the world of Python and its ecosystem and you don't have to | reinvent the world -- unless that is what you do and you can convince | your customers to pay for it :) --. \-- True :) --- | I guess this is the same for the "Structural Modeling", in the | discussion make clear the "structural" and "RTL" are convertible and the | other "modeling" is not. 
\-- I have now explicitly mentioned "High Level Modelling" for the sparse memory and fifo examples at: http://www.myhdl.org/doc/current/manual/modeling.html#high-level-modeling --- | Before the PyPy you might want a quick summary to this point. | * Describe the HDL in Python/MyHDL | * Simulate | * Convert to Verilog or VHDL | * Co-simulate to verify conversion correctness | * Synthesize to target technology | * Co-simulate with synthesized netlist \-- Yes, I have demos for the examples, and will use them during the presentation. --- | Using PyPy | ----------- | Since these tables are directly from Jan's wikipage I would add the | reference to this slide. \-- Updated. --- | Also, it | would be nice to see this table in a normalized percentage. You could | normalize to MyHDL@pypy or Normalize to one of the others, example. | | Or you could normalize to pypy and see | how the others compare. | | I guess, normalizing to pypy is the most dramatic and gets the | information across the fastest, IMO. \-- Updated. Thanks for your feedback! SK -- Shakthi Kannan http://www.shakthimaan.com |
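The intbv bounds question in this exchange can be illustrated with a tiny stand-in class. This is not the real MyHDL intbv, just a sketch of its min <= value < max rule: a negative value with min=0 raises ValueError, as Shakthi notes, and a valid signed example needs a negative lower bound such as min=-16.

```python
# Minimal stand-in mimicking intbv's bounds check (min <= val < max).
# NOT the real MyHDL intbv -- an illustration of the behavior discussed.

class BoundedInt:
    def __init__(self, val, min, max):
        if not (min <= val < max):
            raise ValueError("value %d out of range [%d, %d)" % (val, min, max))
        self.val, self.min, self.max = val, min, max

a = BoundedInt(12, min=0, max=16)       # unsigned: OK
b = BoundedInt(-12, min=-16, max=16)    # signed: OK, needs min=-16
try:
    BoundedInt(-12, min=0, max=16)      # negative value into unsigned bounds
except ValueError:
    pass                                # raises, as noted in the thread
```

This is why "failing examples are as useful as correct examples": the bounds check catches a sign error at assignment time, which plain Verilog would silently accept.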
From: Christopher F. <chr...@gm...> - 2011-09-16 14:57:28
|
On 9/16/2011 9:36 AM, Sébastien Bourdeauducq wrote: > On 09/16/2011 03:41 PM, Christopher Felton wrote: >> Here is one of mine, http://goo.gl/z5G8Z. > > This is awesome! If only I had known about MyHDL when I started > Milkymist SoC... I ran across your Milkymist project, briefly, while reviewing your test code for the patches on GitHub. I didn't dig around much but it looks like an exciting project and you have been busy! You might be able to branch Milkymist (I am stretching here, I have very little knowledge of your project) and replace the LLVM with the pypy bytecode. If you take MyHDL -> pypy -> pypy bytecode -> Milkymist, you might be able to support this flow. I have no idea how much work it would be to support the pypy bytecode instead of the LLVM. Keep up the good work. Chris |
From: Christopher F. <chr...@gm...> - 2011-09-16 14:49:48
|
On 9/15/2011 12:49 PM, Shakthi Kannan wrote: > Greetings! > > I will be presenting python-myhdl at PyCon India 2011 [1]. Thanks to > Jan Decaluwe for his review, and code examples. The presentation is > available under CC BY-SA 3.0 [2]. > > The LaTeX Beamer sources of the presentation are available at gitorious.org [3]. > > I'd appreciate any feedback/suggestions regarding the presentation. > > Thanks! > > SK > > [1] PyCon India 2011. http://in.pycon.org/ > > [2] From Python to Silicon: python-myhdl. > http://shakthimaan.com/downloads/glv/2011/pycon-2011/from-python-to-silicon.pdf > > [3] Presentation git sources at gitorious. > https://gitorious.org/from-python-to-silicon/ > (Guess I did have some more feedback) The number of slides dedicated to explaining Gray code is a little excessive. After the truth table slide, you can probably jump to slide 20 (think it is 20). And then jump to the last slide of the Gray table. Explicitly walking through XOR of 0s is ... kinda ... yawn. Unsigned and Signed Representation ----------------------------------- I would show a signed and unsigned representation before converting (extending) an unsigned to a signed. a = intbv(12, min=0, max=16) # Unsigned a = intbv(-12, min=0, max=16) # What happens if unsigned to set sign b = intbv(-12, min=16, max=16) # Signed # Then conversion to signed Unit Tests ----------- Emphasize, this is one of the reasons why Python HDL *rules*. You inherit the world of Python and its ecosystem and you don't have to reinvent the world -- unless that is what you do and you can convince your customers to pay for it :) --. Conditional Instantiation -------------------------- This might be a good spot (if not already discussed) to introduce the idea of the elaboration phase. And, for those familiar with HDLs, explain how this replaces generate statements. * Modeling ----------- I think "Modeling" has been used in two different contexts and it might be confusing. 
MyHDL can be used for a bunch of different purposes, including: (1) Modeling complex event based systems, (2) advanced verification/testbenches, and (3) as a convertible (synthesizable) HDL. In these slides "Modeling" refers to items 1 and 3. You use "RTL Modeling" for the generators that can be converted and plain "Modeling" for those that do not convert. I would make additional effort to make sure this is clear when presenting. I guess this is the same for the "Structural Modeling", in the discussion make clear the "structural" and "RTL" are convertible and the other "modeling" is not. Intermediate Summary (So far have learned) ------------------------------------------ Before the PyPy you might want a quick summary to this point. * Describe the HDL in Python/MyHDL * Simulate * Convert to Verilog or VHDL * Co-simulate to verify conversion correctness * Synthesize to target technology * Co-simulate with synthesized netlist Using PyPy ----------- Since these tables are directly from Jan's wikipage I would add the reference to this slide. Indicate where the info was taken. Also, it would be nice to see this table in a normalized percentage. You could normalize to MyHDL@pypy or Normalize to one of the others, example. 
MPyPy_Compare
Out[364]:
{'findmax': {'ghdl': 2256, 'iver': 56, 'pypy': 86, 'ver1': 21, 'vhdl1': 37},
 'lfsr24': {'ghdl': 71, 'iver': 79, 'pypy': 66, 'ver1': 266, 'vhdl1': 240},
 'logdiv': {'ghdl': 224, 'iver': 43, 'pypy': 69, 'ver1': 96, 'vhdl1': 98},
 'randgen': {'ghdl': 24, 'iver': 197, 'pypy': 62, 'ver1': 76, 'vhdl1': 67},
 'timer': {'ghdl': 146, 'iver': 106, 'pypy': 62, 'ver1': 260, 'vhdl1': 219}}

for k, v in MPyPy_Compare.items():
    print("%8s : %1.2f | %1.2f | %1.2f | %1.2f | %1.2f" %
          (k,
           v['pypy'] / float(v['iver']),
           v['iver'] / float(v['iver']),
           v['ghdl'] / float(v['iver']),
           v['ver1'] / float(v['iver']),
           v['vhdl1'] / float(v['iver'])))

   timer : 0.58 | 1.00 | 1.38 | 2.45 | 2.07
  lfsr24 : 0.84 | 1.00 | 0.90 | 3.37 | 3.04
 randgen : 0.31 | 1.00 | 0.12 | 0.39 | 0.34
  logdiv : 1.60 | 1.00 | 5.21 | 2.23 | 2.28
 findmax : 1.54 | 1.00 | 40.29 | 0.38 | 0.66

The above is normalized to Icarus. Or you could normalize to pypy and see how the others compare:

   timer : 1.00 | 1.71 | 2.35 | 4.19 | 3.53
  lfsr24 : 1.00 | 1.20 | 1.08 | 4.03 | 3.64
 randgen : 1.00 | 3.18 | 0.39 | 1.23 | 1.08
  logdiv : 1.00 | 0.62 | 3.25 | 1.39 | 1.42
 findmax : 1.00 | 0.65 | 26.23 | 0.24 | 0.43

I guess normalizing to pypy is the most dramatic and gets the information across the fastest, IMO.

Regards,
Chris Felton
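Regarding the unsigned/signed suggestion above: the two's-complement point can be shown without MyHDL at all. The helper name `as_signed` below is made up for illustration; it roughly mimics what an intbv's bit pattern looks like when reinterpreted as signed.

```python
def as_signed(value, nbits):
    """Interpret an nbits-wide unsigned bit pattern as two's complement."""
    mask = (1 << nbits) - 1
    value &= mask
    if value >= 1 << (nbits - 1):   # msb set -> negative value
        value -= 1 << nbits
    return value

# -12 stored in 8 bits is the pattern 0xF4; reading it back unsigned
# gives 244, reading it back signed recovers -12.
print(-12 & 0xFF)           # 244
print(as_signed(0xF4, 8))   # -12
print(as_signed(12, 4))     # 0b1100 read as a signed 4-bit value: -4
```

This is why the unsigned-vs-signed slide matters: the same bits mean different integers depending on the declared range.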
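For what it's worth, the normalization shown above can be done generically in one place rather than hard-coding the base simulator; a small sketch (`normalize` is just an illustrative name):

```python
def normalize(results, base):
    """Scale each benchmark's runtimes so the 'base' simulator reads 1.00."""
    return {bench: {sim: t / float(times[base]) for sim, t in times.items()}
            for bench, times in results.items()}

# one benchmark row taken from the table above
results = {'timer': {'ghdl': 146, 'iver': 106, 'pypy': 62,
                     'ver1': 260, 'vhdl1': 219}}
norm = normalize(results, 'pypy')
print("%.2f" % norm['timer']['iver'])   # 1.71, matching the pypy-normalized table
```

Swapping the `base` argument ('iver', 'pypy', ...) reproduces either of the two normalized tables.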
From: Sébastien B. <seb...@mi...> - 2011-09-16 14:39:48
On 09/16/2011 03:41 PM, Christopher Felton wrote:
> Here is one of mine, http://goo.gl/z5G8Z.

This is awesome! If only I had known about MyHDL when I started Milkymist SoC...
From: Christopher F. <chr...@gm...> - 2011-09-16 13:41:28
Here is one of mine, http://goo.gl/z5G8Z. I updated an old blog post; it might be interesting, it might not. Comments, criticism, accolades, aspersions, etc. are always welcome :)

Regards,
Chris Felton
From: Christopher F. <chr...@gm...> - 2011-09-16 13:35:16
On 9/15/2011 12:49 PM, Shakthi Kannan wrote:
> Greetings!
>
> I will be presenting python-myhdl at PyCon India 2011 [1]. Thanks to
> Jan Decaluwe for his review, and code examples. The presentation is
> available under CC BY-SA 3.0 [2].
>
> The LaTeX Beamer sources of the presentation are available at gitorious.org [3].
>
> I'd appreciate any feedback/suggestions regarding the presentation.
>
> Thanks!
>
> SK
>
> [1] PyCon India 2011. http://in.pycon.org/
>
> [2] From Python to Silicon: python-myhdl.
> http://shakthimaan.com/downloads/glv/2011/pycon-2011/from-python-to-silicon.pdf
>
> [3] Presentation git sources at gitorious.
> https://gitorious.org/from-python-to-silicon/

Given that the presentation is mainly code snippets, there isn't much feedback to give other than pointing out incorrect snippets or suggesting better ones -- and I found neither.

I notice you have an hour for this presentation; I presume there will be a fair amount of dialogue accompanying the slides. The topic I believe is interesting to this crowd is *why*: why is Python suitable for an HDL, and why is it needed? And this is where you start, illustrating generators and decorators -- the Python flexibility and generality that allow it to become an HDL. Additional information is available on Jan's "why" page, http://goo.gl/hoUKk.

Good luck, sounds like fun. I believe someone presented MyHDL at PyCon India 2010 as well, http://goo.gl/FNm8P.

Penny short today, so this is my one-cent opinion, which is mine alone and doesn't reflect the opinions of others (individuals or organizations) I am involved with.

Regards,
Chris Felton
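To make the *why* mentioned above concrete: the core trick is that a Python generator suspends at a `yield` and is resumed by a scheduler, which is how HDL processes can wait on clocks and signals. A toy, MyHDL-free sketch (all names here are made up for illustration; real MyHDL code would use Signal, delay, posedge, and decorators such as @always):

```python
def clock_driver(state):
    # Toggle the clock each time the scheduler resumes us;
    # in MyHDL this would be something like `yield delay(10)`.
    while True:
        state['clk'] = not state['clk']
        yield 'delay'

def counter(state):
    # Wait for a rising edge, then count; in MyHDL: `yield clk.posedge`.
    while True:
        yield 'posedge'
        state['count'] += 1

def run(nsteps):
    """A toy scheduler: advance the clock by nsteps half-periods."""
    state = {'clk': False, 'count': 0}
    clk_gen, cnt_gen = clock_driver(state), counter(state)
    next(cnt_gen)                 # prime: counter parks at its yield
    for _ in range(nsteps):
        next(clk_gen)             # toggle the clock
        if state['clk']:          # rising edge -> resume the counter
            next(cnt_gen)
    return state['count']

print(run(8))   # 8 half-periods -> 4 rising edges -> counts to 4
```

The point for the talk: no language extension is needed, because generators already provide the suspend/resume semantics an event-driven simulator requires.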
From: Shakthi K. <sha...@gm...> - 2011-09-15 17:49:38
Greetings!

I will be presenting python-myhdl at PyCon India 2011 [1]. Thanks to Jan Decaluwe for his review, and code examples. The presentation is available under CC BY-SA 3.0 [2].

The LaTeX Beamer sources of the presentation are available at gitorious.org [3].

I'd appreciate any feedback/suggestions regarding the presentation.

Thanks!

SK

[1] PyCon India 2011. http://in.pycon.org/
[2] From Python to Silicon: python-myhdl. http://shakthimaan.com/downloads/glv/2011/pycon-2011/from-python-to-silicon.pdf
[3] Presentation git sources at gitorious. https://gitorious.org/from-python-to-silicon/

--
Shakthi Kannan
http://www.shakthimaan.com
From: Sébastien B. <seb...@mi...> - 2011-09-12 15:19:48
Using always_comb with an instance that reads signals from a class does not work (not even in simulation), because those signals do not get included in the sensitivity list. The proposed patch below fixes this problem for the simulation.

Example/test code:
https://github.com/sbourdeauducq/TokenEngine/blob/master/myhdl-limits/slclass.py

Correct result:

Setting object signal to False
Value outside object: False
Setting object signal to True
Value outside object: True

Incorrect result (currently what MyHDL does without the patch):

Setting object signal to False
Value outside object: False
Setting object signal to True
Value outside object: False
                      ^^^^^

diff -r e18493b5f0ad myhdl/_always_comb.py
--- a/myhdl/_always_comb.py	Fri May 20 21:07:24 2011 +0200
+++ b/myhdl/_always_comb.py	Mon Sep 12 17:06:06 2011 +0200
@@ -76,6 +75,7 @@
         self.toplevel = 1
         self.symdict = symdict
         self.context = INPUT
+        self.attrs = []

     def visit_Module(self, node):
         inputs = self.inputs
@@ -106,15 +106,27 @@
         if id not in self.symdict:
             return
         s = self.symdict[id]
-        if isinstance(s, _Signal) or _isListOfSigs(s):
-            if self.context == INPUT:
-                self.inputs.add(id)
-            elif self.context == OUTPUT:
-                self.outputs.add(id)
-            elif self.context == INOUT:
-                raise AlwaysCombError(_error.SignalAsInout % id)
-            else:
-                raise AssertionError("bug in always_comb")
+        ia = iter(self.attrs)
+        while True:
+            if isinstance(s, _Signal) or _isListOfSigs(s):
+                if self.context == INPUT:
+                    self.inputs.add(id)
+                    self.symdict[id] = s
+                    break
+                elif self.context == OUTPUT:
+                    self.outputs.add(id)
+                    self.symdict[id] = s
+                    break
+                elif self.context == INOUT:
+                    raise AlwaysCombError(_error.SignalAsInout % id)
+                else:
+                    raise AssertionError("bug in always_comb")
+            try:
+                a = ia.next()
+            except StopIteration:
+                break
+            id += "." + a
+            s = getattr(s, a)

     def visit_Assign(self, node):
         self.context = OUTPUT
@@ -124,7 +136,9 @@
         self.visit(node.value)

     def visit_Attribute(self, node):
+        self.attrs.insert(0, node.attr)
         self.visit(node.value)
+        self.attrs.pop(0)

     def visit_Subscript(self, node, access=INPUT):
         self.visit(node.value)
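The idea behind the patch above is that visit_Attribute records attribute names so that a dotted access like `obj.sig` can be resolved down to the underlying signal. The chain-collection part can be shown standalone with the stdlib `ast` module (`AttrCollector` is a hypothetical name for illustration, not MyHDL code):

```python
import ast

class AttrCollector(ast.NodeVisitor):
    """Collect dotted attribute chains (e.g. 'obj.sig'), similar in
    spirit to the names the patched always_comb visitor must resolve."""
    def __init__(self):
        self.chains = []

    def visit_Attribute(self, node):
        parts = [node.attr]
        value = node.value
        while isinstance(value, ast.Attribute):   # walk a.b.c inward
            parts.append(value.attr)
            value = value.value
        if isinstance(value, ast.Name):
            parts.append(value.id)
            self.chains.append('.'.join(reversed(parts)))
        # no generic_visit here: the whole chain is consumed at once

tree = ast.parse("out.next = obj.sig and other.a.b")
c = AttrCollector()
c.visit(tree)
print(c.chains)   # ['out.next', 'obj.sig', 'other.a.b']
```

Once the dotted name is known, walking it with getattr (as the patch does with `s = getattr(s, a)`) yields the actual Signal object to put in the sensitivity list.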