Re: [myhdl-list] Cosimulation newbie questions
From: Christopher F. <chr...@gm...> - 2011-09-25 02:51:00
On 9/24/11 5:49 PM, Bob Cunningham wrote:
> On 09/24/2011 01:05 PM, Christopher Felton wrote:
>> On 9/24/11 2:56 AM, Bob Cunningham wrote:
>>> I'm trying to get a better grasp of the limits of cosimulation within myHDL. I've been reading this manual section (http://www.myhdl.org/doc/0.6/manual/cosimulation.html#only-passive-hdl-can-be-co-simulated) and I want to be very clear about the meaning of one phrase in that section: "time delays are meaningless in synthesizable code"
>>>
>>> Does this simply mean that, in a synthesized implementation, the delays are "whatever" they are, and nothing done in the cosimulation domain can properly account or even allow for them? (I'm not yet a digital designer: I'm a 25-year veteran embedded/real-time software wonk and algorithm specialist finally making the leap into FPGAs.)
>>>
>> There are a couple of ways to look at this. Say you are co-simulating MyHDL and Verilog (not converted MyHDL -> Verilog), and your Verilog code has the following:
>>
>>     always @(*) begin
>>         x = #10 y;
>>     end
>>
>> The "delay" (#10) in Verilog (in the Verilog simulator) doesn't relate back to anything in the MyHDL world. The Verilog simulator will have its time step based on the settings or `timescale, and the MyHDL simulator has its own time step; these may not be the same. In other words, the Verilog simulator's and MyHDL simulator's simulation steps are not related. They are synchronized when MyHDL is the simulation master, i.e. when the MyHDL testbench generates the signals that sync, e.g. clocks.
>>
>> A delay is not synthesizable; a synthesizer will not try to generate some circuit (wound wire) to match the delay.
>>
>> In general, if you are converting MyHDL to Verilog/VHDL this should not be much of an issue, unless you have "yield delay(N)" in the code you are trying to convert (I don't know if that statement will convert; it might for the testbench conversions).
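To make the unit mismatch concrete, here is a toy Python sketch (not MyHDL API; the helper name is made up) showing that a Verilog `#10` only has meaning relative to its `timescale`, while a MyHDL simulation step carries no unit at all:

```python
# Hypothetical helper: a Verilog #delay is scaled by the module's
# `timescale time unit. MyHDL steps are unitless, so any mapping
# between the two simulators' times is a convention the testbench
# must impose (typically via shared clock signals).
def verilog_delay_in_ns(delay_ticks, timescale_unit_ns):
    """Return the real time (in ns) that a Verilog #delay represents."""
    return delay_ticks * timescale_unit_ns

# The same `#10` means different real times under different timescales:
assert verilog_delay_in_ns(10, 1) == 10      # `timescale 1ns/1ps   -> 10 ns
assert verilog_delay_in_ns(10, 100) == 1000  # `timescale 100ns/1ns -> 1000 ns
```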
> From what little I know, it seems "yield delay(n)" is mainly used to avoid adding yet another clock, to avoid race conditions and allow more delta-cycles to occur during simulation. Is that right? If so, is there always a clean way to avoid using "yield delay(n)" without causing a clock explosion? (From what I've seen, creating and routing clocks can quickly get expensive.)
>
> I suppose my main concern is more about how to get the most from my myHDL simulations and Verilog cosimulations before testing a programmed FPGA.

If you follow the tutorials and cookbook examples, they will get you started with usable testbenches. In general, just use "delay" in the test code and not in code that is intended to be converted. The "yield delay(n)" statement is used to control time. One way to control time is to use the "clock" and count ticks; another is to delay *n* simulation ticks/steps.

> If I'm doing all my own work in myHDL, is there really any need to do Verilog cosimulation? Should I reasonably expect to go from myHDL to a programmed part? (Adding only I/O assignment.)

In my mind, yes. This is a very common flow. After each translation (conversion, synthesis, etc.) some method is used to validate the transition. You still want to verify that the converted and synthesized circuits act as you expect.

> When I'm merging OPIP (Other-People's IP) into my own projects, then I suspect I'll want to do lots of cosimulation, especially to ensure the Verilog generated from my myHDL code integrates well with the other code I'm using (such as from OpenCores). If so, I'll want to integrate cosimulation into my workflow right from the start (mainly to get lots of practice with it).

Yes, I would agree.

> In the specific case you mention, can it be handled by writing a myHDL wrapper that, when converted to Verilog, would permit the rest of the myHDL test bench to work without problems? That is, can I reasonably expect to be able to shove such problems completely into the Verilog domain?
> I also want my myHDL testbenches to be as useful as possible for testing programmed parts. I see three ways to go in this area:
> 1. Create synthesizable testbenches. This is not preferred in the general case, since it would eliminate much of the power of having a Python test environment.
> 2. Create a logic analyzer(-ish) interface between the myHDL test bench and the programmed part.
> 3. A merged approach, where a more-or-less generic test interface is programmed with the DUT with a higher-level interface to the myHDL testbench. (Put the fast stuff in the FPGA, with slower interactions on the PC.)

I would just abstract some of the interfaces. Then you can test the design functionally. I did this with the USBP project: the USB interface was abstracted, so in simulation I could use the same test code that would be used with the hardware. For example, if you wanted to test an OpenCores UART, you could create test code with an interface as simple as write and read. For simulation you would need to create a cycle- and bit-accurate interface; for the real hardware you simply call the pyserial functions.

> Perhaps some perspective on where I'm coming from would help: When I started doing embedded/real-time systems 25 years ago, I had to hand-tune assembler to maximize performance from 8-bit micros operating at 1 MHz, where it was rare to have an integer divide instruction. Next, as problem complexity and precision grew and timing deadlines shrank, I often used hand-tuned DSP code. As silicon speeds increased and compiler technology improved, it became practical to tackle ever more complex problems using higher-level languages. Increased memory sizes permitted use of ever more sophisticated RTOSes. Then came embedded CISC processors with caches and FPUs, and I was back to hand-tuning to avoid cache thrashing and pipeline stalls.
> Next came MMX and SSE, soon followed by multiprocessing and multicore, all of which required more hand-tuning to keep the hardware fully utilized. Yet I was increasingly able to cost-effectively solve relatively big/hard problems using commodity platforms and a tweaked Linux kernel.
>
> The next big change came with programming for GPUs. It felt like I was starting over. GPUs are almost antithetical to use in embedded/real-time systems given the long startup and pipeline delay times required to make full use of a GPU. That's when I decided to pursue FPGA development, since the tools are far more mature than current GPU tools (OpenCL and CUDA). Plus, after finding myHDL, I get to use Python!
>
> Basically, I'm an algorithm guy with a fetish for fast, efficient implementations. I'm a fanatic about testing, and about getting from concept to implementation as quickly as possible while maximizing quality.

Sounds like MyHDL is a good fit for your goals.

> How much should I expect to rely on cosimulation?

As mentioned, and since you state you are a fanatic about testing (and verification), I would say you would prefer to include co-simulation as much as possible. I see many FPGA developers use very little simulation, period, because they do all the testing in hardware. Personally, I like to simulate and co-simulate and be done :) Is there a particular issue you are having with co-simulation? Or does it seem like a steep learning curve? Which it might be if you have not used other HDL simulators. I think if you follow the examples and use Icarus, that will get you pretty far.

Regards,
Chris
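The abstracted-interface approach from the UART example above (the same write/read test code driving either a simulation or pyserial-connected hardware) can be sketched in plain Python; all class and register names here are hypothetical, and the hardware backend is only hinted at in a comment:

```python
class Bus(object):
    """Abstract register interface the functional test code targets."""
    def write(self, addr, data):
        raise NotImplementedError
    def read(self, addr):
        raise NotImplementedError

class SimBus(Bus):
    """Simulation backend: in a real bench this would drive the DUT
    cycle- and bit-accurately; a dict stands in for the DUT here."""
    def __init__(self):
        self.regs = {}
    def write(self, addr, data):
        self.regs[addr] = data
    def read(self, addr):
        return self.regs.get(addr, 0)

# A HwBus(Bus) backend would instead call pyserial against the
# programmed part, e.g. serial.Serial(port).write(...) / .read(...).

def check_loopback(bus):
    """The same functional test runs unchanged against either backend."""
    bus.write(0x04, 0xA5)
    return bus.read(0x04) == 0xA5

assert check_loopback(SimBus())
```

The design point is that only the thin backend knows about clocks or serial ports; the functional tests stay identical from simulation through hardware bring-up.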