Re: [myhdl-list] Cosimulation newbie questions
From: Bob C. <Fl...@gm...> - 2011-09-24 22:49:48
On 09/24/2011 01:05 PM, Christopher Felton wrote:
> On 9/24/11 2:56 AM, Bob Cunningham wrote:
>> I'm trying to get a better grasp of the limits of cosimulation within
>> myHDL. I've been reading this manual section
>> (http://www.myhdl.org/doc/0.6/manual/cosimulation.html#only-passive-hdl-can-be-co-simulated)
>> and I want to be very clear about the meaning of one phrase in that
>> section: "time delays are meaningless in synthesizable code"
>>
>> Does this simply mean that, in a synthesized implementation, the
>> delays are "whatever" they are, and nothing done in the cosimulation
>> domain can properly account or even allow for them? (I'm not yet a
>> digital designer: I'm a 25-year veteran embedded/real-time software
>> wonk and algorithm specialist finally making the leap into FPGAs.)
>>
> There are a couple of ways to look at this. Say you are co-simulating
> MyHDL and Verilog (not converted MyHDL -> Verilog), and your Verilog
> code has the following:
>
>    always @(*) begin
>       x = #10 y;
>    end
>
> The delay (#10) in Verilog (in the Verilog simulator) doesn't relate
> back to anything in the MyHDL world. The Verilog simulator has its
> time step, based on its settings or `timescale, and the MyHDL
> simulator has its own time step; these may not be the same. In other
> words, the Verilog and MyHDL simulation steps are not related. They
> are synchronized when MyHDL is the simulation master, i.e. when the
> MyHDL testbench generates the signals that sync them, e.g. clocks.
>
> A delay is not synthesizable; a synthesizer will not try to generate
> some circuit (wound wire) to match the delay.
>
> In general, if you are converting MyHDL to Verilog/VHDL this should
> not be much of an issue, unless you have "yield delay(N)" in the code
> you are trying to convert (I don't know if that statement will
> convert; it might for the testbench conversions).

From what little I know, it seems "yield delay(n)" is mainly used to avoid adding yet another clock, to avoid race conditions and allow more delta-cycles to occur during simulation. Is that right? If so, is there always a clean way to avoid using "yield delay(n)" without causing a clock explosion? (From what I've seen, creating and routing clocks can quickly get expensive.)

I suppose my main concern is more about how to get the most from my myHDL simulations and Verilog cosimulations before testing a programmed FPGA. If I'm doing all my own work in myHDL, is there really any need to do Verilog cosimulation? Should I reasonably expect to go from myHDL to a programmed part? (Adding only I/O assignment.)

When I'm merging OPIP (Other People's IP) into my own projects, I suspect I'll want to do lots of cosimulation, especially to ensure the Verilog generated from my myHDL code integrates well with the other code I'm using (such as from OpenCores). If so, I'll want to integrate cosimulation into my workflow right from the start (mainly to get lots of practice with it).
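For concreteness, here is roughly how I picture the MyHDL side of such a cosimulation, patterned on the manual's bin2gray example. The file names, port names, and the path to myhdl.vpi below are placeholders I made up for illustration; only the Cosimulation/Icarus pattern itself comes from the manual:

    import os
    from myhdl import (Signal, Simulation, Cosimulation, StopSimulation,
                       delay, instance, intbv)

    # Placeholder names: dut.v is the (converted or third-party) Verilog,
    # dut_wrapper.v adds the $to_myhdl/$from_myhdl hooks, and myhdl.vpi
    # has been built for Icarus Verilog as the manual describes.
    os.system("iverilog -o dut.o dut_wrapper.v dut.v")

    def dut(din, dout, clk):
        # The Verilog simulator runs as a child process; it is kept in
        # sync purely through the signals passed here.
        return Cosimulation("vvp -m ./myhdl.vpi dut.o",
                            din=din, dout=dout, clk=clk)

    def testbench():
        clk = Signal(bool(0))
        din = Signal(intbv(0)[8:])
        dout = Signal(intbv(0)[8:])

        dut_inst = dut(din, dout, clk)

        @instance
        def clkgen():
            # The clock (and any 'yield delay(n)') lives only on the
            # MyHDL side; the Verilog DUT just reacts to the edges.
            while True:
                yield delay(10)
                clk.next = not clk

        @instance
        def stimulus():
            for i in range(16):
                din.next = i
                yield clk.posedge
            raise StopSimulation

        return dut_inst, clkgen, stimulus

    Simulation(testbench()).run()

Is that the basic shape of the workflow, or am I missing something?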
In the specific case you mention, can it be handled by writing a myHDL wrapper that, when converted to Verilog, would permit the rest of the myHDL testbench to work without problems? That is, can I reasonably expect to be able to shove such problems completely into the Verilog domain?

I also want my myHDL testbenches to be as useful as possible for testing programmed parts. I see three ways to go in this area:

1. Create synthesizable testbenches. This is not preferred in the general case, since it would eliminate much of the power of having a Python test environment.

2. Create a logic analyzer(-ish) interface between the myHDL testbench and the programmed part.

3. A merged approach, where a more-or-less generic test interface is programmed alongside the DUT, with a higher-level interface to the myHDL testbench. (Put the fast stuff in the FPGA, with slower interactions on the PC.)

Perhaps some perspective on where I'm coming from would help: When I started doing embedded/real-time systems 25 years ago, I had to hand-tune assembler to maximize performance from 8-bit micros operating at 1 MHz, where it was rare to have an integer divide instruction. Next, as problem complexity and precision grew and timing deadlines shrank, I often used hand-tuned DSP code. As silicon speeds increased and compiler technology improved, it became practical to tackle ever more complex problems using higher-level languages. Increased memory sizes permitted use of ever more sophisticated RTOSes. Then came embedded CISC processors with caches and FPUs, and I was back to hand-tuning to avoid cache thrashing and pipeline stalls. Next came MMX and SSE, soon followed by multiprocessing and multicore, all of which required more hand-tuning to keep the hardware fully utilized. Yet I was increasingly able to cost-effectively solve relatively big/hard problems using commodity platforms and a tweaked Linux kernel.

The next big change came with programming for GPUs. It felt like I was starting over: GPUs are almost antithetical to use in embedded/real-time systems, given the long startup and pipeline delay times required to make full use of them. That's when I decided to pursue FPGA development, since the tools are far more mature than current GPU tools (OpenCL and CUDA). Plus, after finding myHDL, I get to use Python!

Basically, I'm an algorithm guy with a fetish for fast, efficient implementations. I'm a fanatic about testing, and about getting from concept to implementation as quickly as possible while maximizing quality.

How much should I expect to rely on cosimulation?

-BobC