Thread: [myhdl-list] Cosimulation newbie questions
From: Bob C. <Fl...@gm...> - 2011-09-24 07:56:52
I'm trying to get a better grasp of the limits of cosimulation within MyHDL. I've been reading this manual section (http://www.myhdl.org/doc/0.6/manual/cosimulation.html#only-passive-hdl-can-be-co-simulated) and I want to be very clear about the meaning of one phrase in that section: "time delays are meaningless in synthesizable code".

Does this simply mean that, in a synthesized implementation, the delays are "whatever" they are, and nothing done in the cosimulation domain can properly account or even allow for them? (I'm not yet a digital designer: I'm a 25-year veteran embedded/real-time software wonk and algorithm specialist finally making the leap into FPGAs.)

As an aside, I've tried to run all the cosimulation examples and tests in the MyHDL distribution (myhdl-0.7/cosimulation/* and myhdl-0.7/myhdl/test/conversion/*), but only Icarus works. Where can I find more information about getting the other environments to work with MyHDL cosimulation? Or should I not worry about it, and simply stick with Icarus Verilog?

The main reason I want to use cosimulation is that WebPACK ISE is v-e-r-y s-l-o-w for even simple examples. I want to invoke it only on designs that have a good chance of working the first time they are written to an FPGA.

TIA,

-BobC
From: Christopher F. <chr...@gm...> - 2011-09-24 20:05:36
On 9/24/11 2:56 AM, Bob Cunningham wrote:
> I'm trying to get a better grasp of the limits of cosimulation within MyHDL. I've been reading this manual section (http://www.myhdl.org/doc/0.6/manual/cosimulation.html#only-passive-hdl-can-be-co-simulated) and I want to be very clear about the meaning of one phrase in that section: "time delays are meaningless in synthesizable code".
>
> Does this simply mean that, in a synthesized implementation, the delays are "whatever" they are, and nothing done in the cosimulation domain can properly account or even allow for them? (I'm not yet a digital designer: I'm a 25-year veteran embedded/real-time software wonk and algorithm specialist finally making the leap into FPGAs.)

There are a couple of ways to look at this. Say you are co-simulating MyHDL and Verilog (not converted MyHDL -> Verilog), and your Verilog code has the following:

    always @(*) begin
        x = #10 y;
    end

The delay (#10) in Verilog (in the Verilog simulator) doesn't relate back to anything in the MyHDL world. The Verilog simulator has its own time step, based on its settings or the `timescale directive, and the MyHDL simulator has its own time step; these may not be the same. In other words, the Verilog simulator's and the MyHDL simulator's simulation steps are not related. They are synchronized when MyHDL is the simulation master, i.e. when the MyHDL testbench generates the signals that sync the two, e.g. clocks.

A delay is not synthesizable; a synthesizer will not try to generate some circuit (a wound wire) to match the delay.

In general, if you are converting MyHDL to Verilog/VHDL this should not be much of an issue, unless you have "yield delay(N)" in the code you are trying to convert (I don't know if that statement will convert; it might for the testbench conversions).

> As an aside, I've tried to run all the cosimulation examples and tests in the MyHDL distribution (myhdl-0.7/cosimulation/* and myhdl-0.7/myhdl/test/conversion/*), but only Icarus works. Where can I find more information about getting the other environments to work with MyHDL cosimulation? Or should I not worry about it, and simply stick with Icarus Verilog?
>
> The main reason I want to use cosimulation is that WebPACK ISE is v-e-r-y s-l-o-w for even simple examples. I want to invoke it only on designs that have a good chance of working the first time they are written to an FPGA.

I have successfully co-simulated with the following:

  - iverilog (Icarus)
  - vsim (ModelSim)
  - cver (open-source cver)
  - cvc (Tachyon Design Automation)
  - ncsim (Cadence)

I don't know if there is much reason not to use Icarus, unless it is missing some feature that another simulator supports. There might not be any explicit directions on the wiki for other simulators. If there is a different simulator you want to use, the newsgroup might be able to give some advice.

Regards,
Chris
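For concreteness, here is a minimal sketch (not from the thread) of the arrangement Chris describes: the MyHDL testbench acts as simulation master and generates the clock, while Icarus Verilog runs the HDL side. The file names (dut.v, tb_dut.v, myhdl.vpi), the port names, and the assumption that tb_dut.v is the usual thin wrapper containing the $to_myhdl/$from_myhdl hooks are illustrative only:

    import os
    from myhdl import (Signal, Simulation, Cosimulation, StopSimulation,
                       always, delay, instance, intbv)

    def dut_verilog(clk, rst, data_out):
        # Compile the Verilog and launch Icarus through MyHDL's VPI bridge.
        # tb_dut.v is assumed to contain the $to_myhdl/$from_myhdl calls
        # that connect these signals to the Verilog design.
        os.system("iverilog -o dut.o dut.v tb_dut.v")
        return Cosimulation("vvp -m ./myhdl.vpi dut.o",
                            clk=clk, rst=rst, data_out=data_out)

    def testbench():
        clk = Signal(bool(0))
        rst = Signal(bool(1))
        data_out = Signal(intbv(0)[8:])

        dut = dut_verilog(clk, rst, data_out)

        # MyHDL, not the Verilog side, owns simulated time: it is the
        # simulation master and generates the clock.
        @always(delay(10))
        def clkgen():
            clk.next = not clk

        @instance
        def stimulus():
            yield delay(30)          # a delay in *test* code is fine
            rst.next = 0
            for _ in range(20):      # run for a fixed number of clock cycles
                yield clk.negedge
            raise StopSimulation

        return dut, clkgen, stimulus

    Simulation(testbench()).run()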
From: Bob C. <Fl...@gm...> - 2011-09-24 22:49:48
On 09/24/2011 01:05 PM, Christopher Felton wrote:
> [...]
> In general, if you are converting MyHDL to Verilog/VHDL this should not be much of an issue, unless you have "yield delay(N)" in the code you are trying to convert (I don't know if that statement will convert; it might for the testbench conversions).

From what little I know, it seems "yield delay(n)" is mainly used to avoid adding yet another clock, to avoid race conditions, and to allow more delta cycles to occur during simulation. Is that right? If so, is there always a clean way to avoid using "yield delay(n)" without causing a clock explosion? (From what I've seen, creating and routing clocks can quickly get expensive.)

I suppose my main concern is more about how to get the most from my MyHDL simulations and Verilog cosimulations before testing a programmed FPGA.

If I'm doing all my own work in MyHDL, is there really any need to do Verilog cosimulation? Should I reasonably expect to go from MyHDL to a programmed part? (Adding only I/O assignment.)

When I'm merging OPIP (Other People's IP) into my own projects, I suspect I'll want to do lots of cosimulation, especially to ensure the Verilog generated from my MyHDL code integrates well with the other code I'm using (such as from OpenCores). If so, I'll want to integrate cosimulation into my workflow right from the start (mainly to get lots of practice with it).

In the specific case you mention, can it be handled by writing a MyHDL wrapper that, when converted to Verilog, would permit the rest of the MyHDL test bench to work without problems? That is, can I reasonably expect to be able to shove such problems completely into the Verilog domain?

I also want my MyHDL testbenches to be as useful as possible for testing programmed parts. I see three ways to go in this area:

1. Create synthesizable testbenches. This is not preferred in the general case, since it would eliminate much of the power of having a Python test environment.
2. Create a logic analyzer(-ish) interface between the MyHDL test bench and the programmed part.
3. A merged approach, where a more-or-less generic test interface is programmed into the FPGA along with the DUT, with a higher-level interface to the MyHDL testbench. (Put the fast stuff in the FPGA, with slower interactions on the PC.)

Perhaps some perspective on where I'm coming from would help: When I started doing embedded/real-time systems 25 years ago, I had to hand-tune assembler to maximize performance from 8-bit micros operating at 1 MHz, where it was rare to have an integer divide instruction. Next, as problem complexity and precision grew and timing deadlines shrank, I often used hand-tuned DSP code. As silicon speeds increased and compiler technology improved, it became practical to tackle ever more complex problems using higher-level languages. Increased memory sizes permitted use of ever more sophisticated RTOSes. Then came embedded CISC processors with caches and FPUs, and I was back to hand-tuning to avoid cache thrashing and pipeline stalls. Next came MMX and SSE, soon followed by multiprocessing and multicore, all of which required more hand-tuning to keep the hardware fully utilized. Yet I was increasingly able to cost-effectively solve relatively big/hard problems using commodity platforms and a tweaked Linux kernel.

The next big change came with programming for GPUs. It felt like I was starting over. GPUs are almost antithetical to use in embedded/real-time systems, given the long startup and pipeline delay times required to make full use of a GPU. That's when I decided to pursue FPGA development, since the tools are far more mature than current GPU tools (OpenCL and CUDA). Plus, after finding MyHDL, I get to use Python!

Basically, I'm an algorithm guy with a fetish for fast, efficient implementations. I'm a fanatic about testing, and about getting from concept to implementation as quickly as possible while maximizing quality.

How much should I expect to rely on cosimulation?

-BobC
From: Christopher F. <chr...@gm...> - 2011-09-25 02:51:00
On 9/24/11 5:49 PM, Bob Cunningham wrote:
> On 09/24/2011 01:05 PM, Christopher Felton wrote:
>> [...]
>
> From what little I know, it seems "yield delay(n)" is mainly used to avoid adding yet another clock, to avoid race conditions, and to allow more delta cycles to occur during simulation. Is that right? If so, is there always a clean way to avoid using "yield delay(n)" without causing a clock explosion? (From what I've seen, creating and routing clocks can quickly get expensive.)
>
> I suppose my main concern is more about how to get the most from my MyHDL simulations and Verilog cosimulations before testing a programmed FPGA.

If you follow the tutorials and cookbook examples, these will get you started with usable testbenches. In general, just use the "delay" in test code, not in code that is intended to be converted. The "yield delay(n)" is used to control time. One way to do this is to use the "clock" and count ticks; another is to delay *n* simulation ticks/steps.

> If I'm doing all my own work in MyHDL, is there really any need to do Verilog cosimulation? Should I reasonably expect to go from MyHDL to a programmed part? (Adding only I/O assignment.)

In my mind, yes. This is a very common flow. After each translation (conversion, synthesis, etc.), some method is used to validate that step. You still want to verify that the converted and synthesized circuits act as you expect.

> When I'm merging OPIP (Other People's IP) into my own projects, I suspect I'll want to do lots of cosimulation, especially to ensure the Verilog generated from my MyHDL code integrates well with the other code I'm using (such as from OpenCores). If so, I'll want to integrate cosimulation into my workflow right from the start (mainly to get lots of practice with it).

Yes, I would agree.

> In the specific case you mention, can it be handled by writing a MyHDL wrapper that, when converted to Verilog, would permit the rest of the MyHDL test bench to work without problems? That is, can I reasonably expect to be able to shove such problems completely into the Verilog domain?
>
> I also want my MyHDL testbenches to be as useful as possible for testing programmed parts. I see three ways to go in this area:
> 1. Create synthesizable testbenches. This is not preferred in the general case, since it would eliminate much of the power of having a Python test environment.
> 2. Create a logic analyzer(-ish) interface between the MyHDL test bench and the programmed part.
> 3. A merged approach, where a more-or-less generic test interface is programmed into the FPGA along with the DUT, with a higher-level interface to the MyHDL testbench. (Put the fast stuff in the FPGA, with slower interactions on the PC.)

I would just abstract some of the interfaces. Then you can test the design functionally. I did this with the USBP project: the USB interface was abstracted, so in simulation I could use the same test code that would be used with the hardware. For example, if you wanted to test an OpenCores UART, you could create test code with an interface as simple as write and read. For simulation you would create a cycle- and bit-accurate interface; for the real hardware, you simply call the pyserial functions.

> Perhaps some perspective on where I'm coming from would help: When I started doing embedded/real-time systems 25 years ago, I had to hand-tune assembler to maximize performance from 8-bit micros operating at 1 MHz, where it was rare to have an integer divide instruction. [...]
>
> Basically, I'm an algorithm guy with a fetish for fast, efficient implementations. I'm a fanatic about testing, and about getting from concept to implementation as quickly as possible while maximizing quality.

Sounds like MyHDL is a good fit for your goals.

> How much should I expect to rely on cosimulation?

As mentioned, and since you state you are a fanatic about testing (and verification), I would say you would prefer to include co-simulation as much as possible. I see many FPGA developers use very little simulation, period, because they do all the testing in hardware. Personally, I like to simulate and co-simulate and be done :)

Is there a particular issue you are having with co-simulation? Or does it seem like a steep learning curve? Which it might be if you have not used other HDL simulators. I think if you follow the examples and use Icarus that will get you pretty far.

Regards,
Chris
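To illustrate the point above about controlling time in test code, here is a small sketch contrasting the two styles Chris mentions: counting clock ticks versus delaying a fixed number of simulation steps. The signal names and the counts are arbitrary assumptions, and neither style is meant for code that will be converted or synthesized:

    from myhdl import Signal, Simulation, StopSimulation, always, delay, instance

    def testbench():
        clk = Signal(bool(0))
        enable = Signal(bool(0))

        @always(delay(5))
        def clkgen():
            clk.next = not clk

        @instance
        def stim():
            # Style 1: count clock ticks -- wait for a known number of edges.
            enable.next = 1
            for _ in range(100):
                yield clk.posedge
            enable.next = 0

            # Style 2: wait a fixed number of simulation time steps,
            # independent of any clock.  Fine in test code, never in
            # code that is meant to be converted or synthesized.
            yield delay(250)
            enable.next = 1
            raise StopSimulation

        return clkgen, stim

    Simulation(testbench()).run()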
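And a rough sketch (my own illustration, not code from the thread) of the interface abstraction Chris describes for a UART: the functional test only knows about write() and read(), so the same test code can drive either a simulation backend or the programmed part over pyserial. The class names, the loopback behaviour of the simulation stand-in, and the serial port settings are assumptions made up for this example:

    class SimUartLink:
        """Stand-in for a cycle/bit-accurate MyHDL or co-simulation driver.
        Here it simply loops data back so the sketch runs on its own."""
        def __init__(self):
            self._fifo = bytearray()
        def write(self, data):
            self._fifo.extend(data)
        def read(self, nbytes):
            data = bytes(self._fifo[:nbytes])
            self._fifo = self._fifo[nbytes:]
            return data

    class SerialUartLink:
        """Same API, but talks to the programmed part through pyserial."""
        def __init__(self, port="/dev/ttyUSB0", baudrate=115200):
            import serial                       # pyserial
            self._sp = serial.Serial(port, baudrate, timeout=1)
        def write(self, data):
            self._sp.write(data)
        def read(self, nbytes):
            return self._sp.read(nbytes)

    def check_loopback(link):
        """Functional test: identical for simulation and hardware backends."""
        msg = b"\x55\xaa\x0f"
        link.write(msg)
        assert link.read(len(msg)) == msg

    if __name__ == "__main__":
        check_loopback(SimUartLink())   # swap in SerialUartLink() for hardware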
From: Bob C. <Fl...@gm...> - 2011-09-25 08:38:51
On 09/24/2011 07:50 PM, Christopher Felton wrote:
> Is there a particular issue you are having with co-simulation? Or does it seem like a steep learning curve? Which it might be if you have not used other HDL simulators. I think if you follow the examples and use Icarus that will get you pretty far.

I'm mainly trying to get a feel for how flexible my FPGA development process should be. In software, we frequently create multiple independent tests for a single piece of code that runs in a single environment. In hardware, it seems we should first do our basic testbenches in all available environments (MyHDL, cosimulation, hardware).

In software we have many well-known metrics we use to determine when testing is thorough: static code inspection, modeling, input/output corner analysis, random vector analysis (black box testing), path/state analysis, coverage analysis, and the list goes on. Embedded/real-time software basically gets tested to death. We also need to test that our software responds appropriately in the presence of significant hardware failures.

Do equivalent metrics (and their support tools) exist in the hardware domain? The test benches I've seen so far appear to primarily be simple I/O stimulus. Do tools exist that ensure a testbench does indeed access all critical internal states (including corner cases)? That all gates have been forced to change state at least once (coverage)?

Or do I need to insert instrumentation into my circuit to expose internal states for access by a more sophisticated testbench? And couldn't some or all of the results acquired using such added instrumentation be invalidated when the circuit is run without the instrumentation? I would imagine the synthesis output could be very different, resulting in some changed functionality in the implementation.

I suppose much of this may reside in the synthesis toolchain, something I have not yet thoroughly explored. I don't yet understand some of the content of the WebPACK ISE synthesis reports.

For example, one concern for me would be finding the maximum usable clock rate for a particular design in a particular FPGA. I haven't yet seen anything in a testbench that would help determine this. In software, we are seldom worried if the CPU is going too fast! Quite the opposite.

I'm wondering how gnarly I can expect my testing to get as my designs become more complex.

-BobC
From: Jan D. <ja...@ja...> - 2011-09-27 10:52:39
On 09/25/2011 10:38 AM, Bob Cunningham wrote:
> I'm mainly trying to get a feel for how flexible my FPGA development process should be. In software, we frequently create multiple independent tests for a single piece of code that runs in a single environment. In hardware, it seems we should first do our basic testbenches in all available environments (MyHDL, cosimulation, hardware).

I don't understand what you mean here.

> In software we have many well-known metrics we use to determine when testing is thorough: static code inspection, modeling, input/output corner analysis, random vector analysis (black box testing), path/state analysis, coverage analysis, and the list goes on. Embedded/real-time software basically gets tested to death. We also need to test that our software responds appropriately in the presence of significant hardware failures.
>
> Do equivalent metrics (and their support tools) exist in the hardware domain? The test benches I've seen so far appear to primarily be simple I/O stimulus. Do tools exist that ensure a testbench does indeed access all critical internal states (including corner cases)? That all gates have been forced to change state at least once (coverage)?

A core idea of MyHDL is that you should be able to use any verification methodology or tool that is available for general Python development. Personally, I use test-driven design and unit testing systematically for hardware design now.

> Or do I need to insert instrumentation into my circuit to expose internal states for access by a more sophisticated testbench? And couldn't some or all of the results acquired using such added instrumentation be invalidated when the circuit is run without the instrumentation? I would imagine the synthesis output could be very different, resulting in some changed functionality in the implementation.
>
> I suppose much of this may reside in the synthesis toolchain, something I have not yet thoroughly explored. I don't yet understand some of the content of the WebPACK ISE synthesis reports.

It seems to me you still have to come to terms with one fundamental difference from software design: modeling/simulation and synthesis are completely different things. Modeling with MyHDL can be very high level and general. You can use full Python power and tools. However, MyHDL is not going to magically solve the fact that synthesis is conceptually quite low-level and restrictive.

> For example, one concern for me would be finding the maximum usable clock rate for a particular design in a particular FPGA. I haven't yet seen anything in a testbench that would help determine this. In software, we are seldom worried if the CPU is going too fast! Quite the opposite.

Surely in software there are also many metrics that your "test bench" or verification code itself cannot help you with directly. Take profiling, for example, to see how fast particular functions run. Finding out the clock rate is similar, with the big advantage that it is a static check. In both cases, you can use such feedback to go back to the code and make changes.

> I'm wondering how gnarly I can expect my testing to get as my designs become more complex.
>
> -BobC

--
Jan Decaluwe - Resources bvba - http://www.jandecaluwe.com
Python as a HDL: http://www.myhdl.org
VHDL development, the modern way: http://www.sigasi.com
World-class digital design: http://www.easics.com
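As an illustration of the unit-testing approach Jan mentions, here is a minimal sketch using Python's standard unittest module together with the MyHDL simulator. The inc design under test, its interface, and the parameters are hypothetical, not taken from the thread:

    import unittest
    from random import randrange
    from myhdl import (Signal, Simulation, StopSimulation,
                       always, delay, instance, intbv)

    def inc(count, enable, clk, n):
        """Hypothetical design under test: an incrementer modulo n."""
        @always(clk.posedge)
        def logic():
            if enable:
                count.next = (count + 1) % n
        return logic

    class TestInc(unittest.TestCase):

        def test_counts_modulo_n(self):
            n = 8
            count = Signal(intbv(0, min=0, max=n))
            enable = Signal(bool(0))
            clk = Signal(bool(0))
            dut = inc(count, enable, clk, n)

            @always(delay(5))
            def clkgen():
                clk.next = not clk

            @instance
            def check():
                expected = 0
                for _ in range(100):
                    enable.next = randrange(2)   # random stimulus
                    yield clk.negedge            # sample after the rising edge
                    if enable:
                        expected = (expected + 1) % n
                    self.assertEqual(count, expected)
                raise StopSimulation

            Simulation(dut, clkgen, check).run()

    if __name__ == '__main__':
        unittest.main()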