myhdl-list Mailing List for MyHDL (Page 98)
Brought to you by: jandecaluwe
From: Jan D. <ja...@ja...> - 2012-04-23 10:24:19
I have written a page describing what MyHDL is not: http://www.myhdl.org/doku.php/whatitisnot

-- Jan Decaluwe - Resources bvba - http://www.jandecaluwe.com Python as a HDL: http://www.myhdl.org VHDL development, the modern way: http://www.sigasi.com World-class digital design: http://www.easics.com
From: Jan D. <ja...@ja...> - 2012-04-23 07:55:02
Bob: Now I think we are getting somewhere. A couple of things: First, there is some confusion about what Migen exactly means. My fault partially. Apparently Migen packages both an HDL, FHDL, and some IP blocks. When I criticize Migen, I'm really addressing FHDL, because that's my interest in comparing with MyHDL. The problem is that it is not packaged separately. I don't think it's a good idea to package IP blocks together with an HDL, and I will not do that for MyHDL. It's a totally different, and much bigger and more diverse, task. However, you do point to an important weakness of the MyHDL ecosystem: the lack of strong open source IP blocks. This is somewhat frustrating, because I think MyHDL is the ideal platform for such developments. There is no reason why it should be less powerful than Migen for this, to the contrary (because of the more powerful underlying HDL). I add that I personally have no plans in this direction. You cannot do it all. The Migen developers apparently included exactly the IP that they need for their specific purposes. I hope to see MyHDL IP in a variety of domains, including ones that I know nothing about. The main conclusion from your post is that I should do a better job of upfront expectation management. The very next thing I'm going to do is to write a page "What MyHDL is not". Jan On 04/22/2012 08:53 PM, Bob Cunningham wrote: > Thank-you Jan for your careful and caring response. It is clear I > haven't gone far enough down my own learning path to make > well-founded criticisms of HDL tools or approaches. While I continue > to closely monitor this list and developments in the general world of > "alternative" HDLs, a recent job change has temporarily eliminated my > free time to pursue learning HDLs: My FPGA development board is > gathering dust on my workbench. Still, when I finally do resume my > efforts, I hope to have a better path to pursue. 
> > Rather than attempt to respond to your points individually, please > permit me instead to step back and restate my goals and describe how > they have evolved since encountering MyHDL and Migen. > > As an embedded real-time engineer specializing primarily in software, > but with proven skills in system design and high-level circuit design > (mainly processor and peripheral chip selection and integration), my > greatest goal is to minimize the amount of new software in the > products I create. Statistically, nothing adds bugs to a system > faster, and delays product shipping longer, than adding a single new > line of code. The buggiest code tends to occur at the lowest levels: > At the functional level, the error rate for assembler is a multiple > of that for C which in turn is a multiple of that for Python. > > It's not just about levels of abstraction, though that is certainly a > vital factor. It also concerns the availability of proven code, and > how easily that code can be integrated onto the current project. > Part of the issue is how easy or difficult it is to create portable > and reusable code on any system, the rest concerns how to make use of > that code. I love projects where most of my effort is spent writing > "glue" code to integrate known-good libraries to create a new > application. My work as an engineer then becomes finding that good > code, and selecting from among the candidates the code that best > meets my needs and will be easiest to integrate. > > That said, what I love most is writing code from scratch, > implementing something important that has never existed before. But > that's not what gets products out the door: Professionally, I must > hold my own code authoring skills and desires at bay, and use the > rest of my skills to assess what's best for the system and product as > a whole. 
It's kind of schizophrenic: I ship better systems sooner > when I keep my own coding skills as a last resort, and prioritize > integrating the work of others. But I always yearn for projects > where I get to do something truly new and innovative. > > I view the FPGA as an ideal canvas for an embedded engineer, where I > can work more closely with the EEs while simultaneously using > my system design and integration skills to further reduce the > software needed to produce a product. > > The first FPGA skill I'd most like to have to be productive would be > the ability to write glue between the FPGA and devices in the outside > world, and between IP cores within the FPGA. I became very excited > when I saw that Migen included the Wishbone interface as a > pre-integrated macro: There are many cores over at OpenCores that use > the Wishbone interface, but getting them to talk to each other > required a level of skill that was way beyond my knowledge. > > Why would I need to know how to implement a Wishbone interface? What > I need to know is how to design and test a system that uses > components integrated with standard interfaces, and those skills are > readily transferred from my embedded experience, where I have had to > test and integrate many hardware systems and components. I like > using logic analyzers and o'scopes far more than I like using > software debuggers! > > I suppose this isn't really an HDL issue: It's much higher than > that. I suppose it's more of a tool-level issue with language > implications. > > When I finally do start to write my own complex circuits from scratch > (something I greatly look forward to, and expect to have lots of fun > doing), I'll want to use a language (or languages) that are > appropriate to the task at hand, much as I use assembler, C and > Python to create embedded software. 
> > However, identifying the most appropriate language can be difficult, > and may even be a needless effort if the highest-level languages can > "play nice" at the lowest level. Permit me to share a case in > point: > > Several years ago I had to develop an instrument based on a complex > sensor system that used inherently noisy sensors from which we needed > to extract "soft" real-time performance. The statistics alone were > quite daunting, and I decided to prototype the system in Python so I > could use Numpy and related libraries to quickly develop the > algorithms I needed. The prototype was a success, but it had one > interesting characteristic: It met our performance spec when running > on the desktop, and was only a factor of 2-3 slower than our minimum > required performance spec when running on our intended target system. > Surprising performance for a prototype in an interpreted language. > > I profiled the code and found that the Python numeric and statistical > libraries were wicked-fast: They were more than capable of doing all > the math needed well within our timing requirements. That meant the > slow performance was due to the code I had written to interface to > the sensors and prepare their data for use by the fast libraries, and > passing data between those libraries. I first moved as much > functionality as possible into the libraries, which yielded an > immediate 25% speed improvement. Next, I stumbled across Psyco > (these were pre-PyPy days), and used it to more than double the > performance of the non-library code I had written. > > That left me tantalizingly close to meeting my timing requirements, > but I was still missing them by 30%. I had never before considered > using Python to implement a near-real-time embedded system. 
Not only > that, I also had never shipped Linux in a delivered system: I had > always been driven to minimize system power, which meant small > processors, which in turn meant running no code that didn't directly > support the end result, which in turn meant often running with no OS, > or at most an RTOS. For this system I had a much more generous power > envelope due to the generous maximum weight my battery-powered > instrument could have. If I went with a Linux platform, I'd need to > double my power budget, which meant doubling the current rating and > capacity of the batteries: Switching from NiMh to Li-Ion permitted me > to get the power I needed with only a modest increase in system > weight. > > But how to obtain that last bit of performance I needed? I was > already using the fastest CPU I could get that met our packaging > requirements (no fan, no extensive or expensive passive heatsinks, > which at the time meant a ULV Celeron). A final round of profiling > indicated my interface code was spending lots of time reading and > converting data: Each data element was causing individual library > calls. Implementing larger buffers helped, but still didn't get me > close to the time I needed: Servicing the hardware was the problem. > The final speed boost came when the buffers were moved from software > to hardware, so each interface access returned 1000x more data, a > design change that also improved the timing between the individual > sensors, which in turn simplified the statistics needed to process > the data. > > Sorry for the length of that story, but that experience has affected > all the work I've done since, and also affects my expectations when > it comes to FPGA development. 
I want my FPGA development to > integrate nicely into the above process, to be a powerful tool that > permits me not just to engineer a fully-specified circuit into a > fully-specified target, but also to experiment with integrating > different components, to move solutions or parts of the problem from > one domain to another, to quickly make use of the work of others, and > be able to create my own when needed. > > The key element that must be present to facilitate such flexibility > is the ability to implement common aspects of interfaces across > multiple domains, to switch interfaces while preserving > functionality, to be able to work with gates, busses, processors, > instructions, libraries, languages and whatever else is needed to > permit a given slice of functionality to be implemented and accessed > in its best and most effective domain. I want everything from simple > FIFOs to complex processor interfaces to be available to me in > multiple domains without having to create them from scratch each > time. > > And THAT is what initially attracted me to MyHDL: If I can use > Python to move software into hardware, and interface with it from the > Python domain, why, I could become a system development god! And the > advent of PyPy makes that approach even more tractable. > > I soon learned that such power is not the goal of MyHDL: The goal of > MyHDL is to use Python to make an incrementally better flavor of Verilog > or VHDL, a relatively small but significant evolutionary step > compared to the transformative change I was dreaming of. > > I desire to climb up and skip between levels of abstraction and > implementation domains that are almost orthogonal to what MyHDL > pursues. MyHDL may prove to be a piece of the final process, but it > does not, and perhaps cannot, encompass that process on its own. 
> > Hence my excitement with Migen: Even in its embryonic, incomplete, > and flawed initial state, it clearly seeks to bridge abstractions in > ways alien to MyHDL, to make interfaces an integral part of the > package and process, rather than something to code to at a low level. > In this respect, MyHDL feels more like C, and Migen aims to be Python > with its powerful libraries being made available to higher-level > code. > > Again, I'm only mapping from domains I know to a domain I hope to > enter and in which I hope to become proficient and productive. I'm > certain my mapping has flaws and gaps owing to my lack of general HDL > knowledge. But I suspect this very lack may enable me to conceive of > and pursue a path that may yield a greater whole. > > MyHDL and Migen both seem to me to be stepping stones in a stream I > wish to see crossed by a 4-lane bridge. And arguments between and > about them seem like arguments about C vs. Python: It's about the > solution path and the power of the process, not about any particular > tool! It's about how tools interact and combine, not about how or > why they differ or overlap. > > > And if Python/PyPy doesn't evolve more quickly, it may get devoured > by Julia, an interpreted/JIT language which abstracts into areas > Python presently supports badly (such as multiprocessing and > coroutines), with speeds PyPy has yet to attain. It may be that the > concepts embodied in both MyHDL and Migen could eventually see more > effective and more flexible implementations in the Julia ecosystem. > Python, too, is just one tool in a procession of tools over time. > > > -BobC > > > On 04/22/2012 03:25 AM, Jan Decaluwe wrote: >> I still want to take the time to clarify my position on the many >> issues raised in this post. >> >> On 03/17/2012 09:20 PM, Bob Cunningham wrote: >>> On 03/16/2012 02:03 PM, Jan Decaluwe wrote: >>>> My conclusion is that Migen found an easy target in you, as a >>>> newbie, to confuse you. 
It made you think it can be used for >>>> serious design work. >>> Sorry, Jan. If I have to be "confused" to play with my FPGA, >>> then so be it. I'm very "serious" about being able to play with >>> my FPGA! >>> >>> Your statement has an obvious implicit context: To me, you are >>> shouting, "MyHDL is for Serious Designers Only! Newbies and >>> Pragmatists should Go Away!" >>> >>> If that's what you are saying, then please be direct: Don't >>> attack Migen for being what it was intentionally created to be, >>> or for being something MyHDL is not intended to be. Are you >>> upset about Migen existing, or that there is an audience MyHDL >>> can't help as well as Migen may be able to? >> I am disturbed by the suggestion that my critique of Migen is based >> on anything other than a purely technical assessment. >> >> Let me be clear. I don't like Mr. Bourdeauducq's attitude one >> bit. But do you think that would be a reason for me to ignore any >> good idea that he might come up with? Of course not. I am not a >> masochist. >> >> It is quite simple. During my career, I have found that when you >> think you have seen the worst in HDL-based design, it always gets >> worse. To date, Migen is the worst that I have seen. But to >> understand why I am saying this, you have to be prepared to follow >> my technical arguments and to engage in technical discussions. I >> have made a few starts, but I certainly was not done yet. However, >> I see close to zero enthusiasm to continue such discussions. >> >> I am therefore frustrated by the fact that I hear all kinds of >> opinions and suggestions to "merge" but that whenever things get a >> little technical then the "I am a beginner" umbrella opens. >> >> Migen is not my problem. It will disappear in the milky mist of HDL >> history, just like the many HDLs based on the same flawed paradigm. >> I am addressing it simply because misleading posts about it appear >> on this newsgroup. 
>> >> What I am really targeting instead is the conventional wisdom in >> mainstream HDL design, which often has it all wrong. >> >>> If you'd rather beginners like myself go elsewhere, just say so. >> MyHDL welcomes beginners. It is the first item on "Why MyHDL": >> >> http://myhdl.org/doku.php/why#you_are_new_to_digital_hardware_design >> >> >> In fact, I put most of my hopes on beginners, as they have not >> yet been brainwashed by the conventional wisdom. >> >>> Remember, Migen wasn't created to be useful to beginners: It was >>> created by an experienced FPGA designer concerned about >>> practicality and productivity for a specific Open Source hardware >>> project. It >> I am not impressed by "arguments from experience". The >> conventional wisdom that I am targeting was created by experienced >> designers, mostly Verilog ones. Experience can lead to conservatism >> and can become a hindrance to clear thinking. >> >>> simply happened that some things became a bit clearer to me >>> after looking at Migen and the problems it was created to >>> address. >> Was it because of Migen or simply because you were spending more >> time on the problem? >> >>> Whatever Migen leaves out may have been what was getting in my >>> way! >> I find that strange. I understand that you have a lot of experience >> with embedded software. Therefore, you must know procedural >> software techniques very well. That is exactly what Migen leaves >> out. What it leaves is low-level concurrency at the statement >> level, which must be new to you. And now you suggest that the >> obstacle is exactly what you are most familiar with. Beats >> me. >> >>> I'm actually quite lazy: What is the *least* I need to know to >>> make useful digital designs *now*? >> No secrets here. The first sentence of "Why MyHDL" warns you: >> "There's a lot to learn and it will be hard work". 
Therefore, if >> you are intellectually lazy (not prepared to learn new things even >> when they will save you lots of time and effort later on), MyHDL or >> HDL-based design is not for you. >> >> MyHDL is for those who are lazy in the good engineering sense, >> wanting to accomplish more with less effort eventually. >> >>> I'm a beginner: Though I'd love to someday be able to design >>> circuits like Shakespeare wrote sonnets, I'd be more than happy >>> today if I were able to work at the level of "Green Eggs and >>> Ham", a true masterpiece written with an absolute minimum of >>> linguistic complexity. >> Come on, let's keep some perspective here. It's not *that* >> difficult or complex either. And there is a cookbook that shows you >> the way. >> >>>> When Migen claims that the "event-driven" paradigm is too >>>> general, what it really dumps is procedural support in your HDL >>>> descriptions - the interesting stuff. >>> What's "interesting" to you can be a frustrating block for a >>> newbie. I hope to one day also be very interested in those >>> aspects of MyHDL, but it seems to have little to do with what I >>> want to get done today, >> I don't understand. Your specification seems very extensive and >> ambitious. It would seem that you have a big need for raising the >> abstraction level as high as possible, and for an easy path to >> strong verification. >> >>> which is to find the simplest path to get basic circuits out of >>> my mind and in to my FPGA. Then use them as building-blocks in >>> my hobby projects. >> There is a broad consensus about the "building blocks" paradigm in >> hardware design. That is really not the issue. The issue is what >> the abstraction level of the building blocks should be. >> >>> I am a MyHDL fan. Unfortunately, I simply remain unable to use >>> MyHDL to accomplish my own immediate hobby needs. That does not >>> indicate any flaw in MyHDL, merely the extent of my own limitations. 
>>> Do not be surprised that I am interested in any tool that helps >>> me circumvent those limitations! >>> >>> I actually *like* how Migen slices and dices the process of FPGA >>> design: The parts that are left out are ones I doubt newbies like >>> me would notice, much less care about, until confronted with >>> unusually challenging designs. I suspect Sebastien would agree >>> with much of your analysis, the proper response being: "So >>> What?" >> Suppose that I teach a class to newbies in embedded software >> design based on assembler. Would any of the newbies, except for >> the rare genius, miss the capabilities of say, C? Does this prove >> that teaching assembler was a good choice? >> >>> It's not about theoretical power or completeness: It's about >>> barriers to entry. It's not about what I can do in 5 years, but >>> about what I can do in 5 weeks. Migen is primarily about >>> pragmatism and productivity, making powerful circuits quickly and >>> easily, and far less about expert capabilities, theoretical >>> purity or even consistency. >> Again, I find this strange. I understand that you have not been >> successful with MyHDL. However, as I understand it you have not >> been successful with Migen either. So what is your defense based >> upon? Of course, we are about 5 weeks further now :-) >> >> More to the point. >> >> Barriers to entry - ok, but what is the task? I told you that I >> believe the main problem in HDL-based design is verification, and >> how MyHDL (unlike Migen) helps you by the fact that you can use the >> same modelling paradigm for high-level models and test benches as >> for synthesizable logic. >> >> You seemed surprised, which I found surprising in turn. Is it so >> different in software? Really, getting those gates into an FPGA is >> the easy part. The difficult part is getting them to work >> properly. >> >> You will have noticed that Mr. 
Bourdeauducq made an error in the >> first simple "one-liner" circuit that I presented to him, as if he >> wanted to prove my point. Of course, the reason is not >> incompetence, but simply that he did not verify his design. >> >> There is a pattern however. Mr. Bourdeauducq cried foul because I >> didn't accept his "simple patches". What he ignored, and continued >> to ignore despite my insistence, is that they broke MyHDL. Perhaps >> Mr. Bourdeauducq considers verification a "detail". >> >> Well, I don't. Verification is the problem. The reason why I think >> the abstraction level of synthesizable logic should be as high as >> possible, is because that leaves more time for verification. >> >>> I seek tools that will help me do what I want to get done today, >>> and so far Migen seems like it will be most useful. Tomorrow >>> will likely require additional tools, and I *absolutely* expect >>> (and want) MyHDL to be the first of those tools. It is not an >>> either-or proposition: I want Migen *and* MyHDL. I realize MyHDL >>> will take more time to master, and I'm willing to commit that >>> time. But I also want to create something sooner, rather than >>> later. And I believe that 100% of what I learn using Migen will >>> later prove useful with MyHDL. I believe using Migen will keep >>> me motivated toward learning MyHDL. >> Sounds good, but I think it is cheap talk. >> >> Most of Migen's technical choices, starting with its basic >> paradigm, are almost the opposite of MyHDL's. As a result, >> verification is not addressed, and it forces you to think at an >> artificially low level for synthesizable logic. What good can one >> learn from that? >> >>> Right next to me I have a Spartan 3E-500 that contains nothing of >>> my own design. That must change! >> Perhaps you are too ambitious. 
>> >> In your shoes, I would start as follows:
>>
>> * isolate a simple function out of your spec
>> * try to concentrate on what it does, instead of how it should be implemented
>> * write that behavior in a (clocked) MyHDL process or processes
>> * also describe it in high-level Python, and use that in a unit-test to verify
>> * experiment with synthesis to see whether it works and the result is acceptable
>> * iterate with the synthesizable description as necessary

-- Jan Decaluwe - Resources bvba - http://www.jandecaluwe.com Python as a HDL: http://www.myhdl.org VHDL development, the modern way: http://www.sigasi.com World-class digital design: http://www.easics.com
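Jan's suggested starting flow (write the behavior as a clocked process, also describe it in high-level Python, and verify one against the other in a unit test) can be sketched without the MyHDL package itself, using a plain Python generator to stand in for a clocked process; generators are in fact the mechanism MyHDL builds on. The counter circuit and every name below are illustrative assumptions, not code from the thread:

```python
# Minimal sketch of the verify-against-a-reference-model flow.
# counter_rtl stands in for a clocked MyHDL process; each `yield`
# models one rising clock edge. All names here are hypothetical.

def counter_rtl(width):
    """Clocked-process stand-in: a free-running modulo-2**width counter."""
    count = 0
    while True:
        yield count                      # output at this clock edge
        count = (count + 1) % (1 << width)   # next-state logic

def counter_model(n_cycles, width):
    """High-level reference model: the expected count sequence."""
    return [i % (1 << width) for i in range(n_cycles)]

def simulate(proc, n_cycles):
    """Advance the process n_cycles clock edges, collecting its outputs."""
    return [next(proc) for _ in range(n_cycles)]

if __name__ == "__main__":
    out = simulate(counter_rtl(width=3), 10)
    assert out == counter_model(10, width=3)
    print(out)  # [0, 1, 2, 3, 4, 5, 6, 7, 0, 1]
```

In actual MyHDL the process would be an `@always(clk.posedge)` function over `Signal` objects and the check would drive a `Simulation`, but the shape of the test (synthesizable behavior compared cycle by cycle against a high-level model) is the same idea Jan describes.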
From: David G. <dsg...@gm...> - 2012-04-22 21:22:03
Luckily, Clojure's really made some amazing innovations to be much better than the lisps of old (like {:key value :key value} syntax for maps and [1 2 3] syntax for vectors, instead of linked lists everywhere). If you're interested in the expressive power, I just wrote this example of what piplin can do (just side-by-side snippets): http://blog.dgrnbrg.com/post/21600152008/hooray-first-synthesized-code-result-with-piplin I feel like the HDL language landscape is decades behind software, and I'm closely watching developments all over to understand how I can get the same expressivity I have with software using FPGAs. Thanks, David On Sun, Apr 22, 2012 at 4:30 PM, Bob Cunningham <fl...@gm...> wrote: > Hi David, > > I have negligible experience with Clojure, possibly because it resurrects my Lisp nightmares of paren counting and infinite recursion through tangled lists. > > I will take a look at your project as soon as time permits, which unfortunately may not be until next weekend. I am motivated to see how you have approached the general issue, and what factors have shaped your particular approach. > > Most of all, thanks for your supportive reply! I really hate criticizing Jan, especially when I know I'm trying to do battle unarmed. Yet I'm light-years away from even formulating, much less implementing, my own solution to the problem. So I'm highly motivated to learn what others, such as you, are doing. > > Thanks, > > -BobC > > > On 04/22/2012 01:02 PM, David Greenberg wrote: >> Hi Bob, >> I wanted to let you know about my own solution to this problem, a >> language embedded in Clojure called Piplin. I initially sought to use >> MyHDL for some of the reasons you did (integration ease) and some >> other reasons (leveraging type systems and new syntax to improve >> expressiveness). Yesterday, I successfully synthesized the code to >> verilog. I've implemented the type system and simulator too, but >> there's a lot of work to be done. 
Since it's embedded in Java, it's >> easy to use all kinds of numerical libraries and integrate with many >> systems. Here's the code: github.com/dgrnbrg/piplin >> >> The best way to see what it looks like in context is here: >> https://github.com/dgrnbrg/piplin/blob/master/src/piplin/mips.clj >> >> I started this project in February from a blank slate, so although >> it's not ready yet, the ideas are there, and I'd love your feedback on >> syntax, semantics, and features that are important. >> >> Thanks, >> David >> >> On Sun, Apr 22, 2012 at 2:53 PM, Bob Cunningham<fl...@gm...> wrote: >>> Thank-you Jan for your careful and caring response. It is clear I haven't gone far enough down my own learning path to make well-founded criticisms of HDL tools or approaches. While I continue to closely monitor this list and developments in the general world of "alternative" HDLs, a recent job change has temporarily eliminated my free time to pursue learning HDLs: My FPGA development board is gathering dust on my workbench. Still, when I finally do resume my efforts, I hope to have a better path to pursue. >>> >>> Rather than attempt to respond to your points individually, please permit me instead to step back and restate my goals and describe how they have evolved since encountering MyHDL and Migen. >>> >>> As an embedded real-time engineer specializing primarily in software, but with proven skills in system design and high-level circuit design (mainly processor and peripheral chip selection and integration), my greatest goal is to minimize the amount of new software in the products I create. Statistically, nothing adds bugs to a system faster, and delays product shipping longer, than adding a single new line of code. The buggiest code tends to occur at the lowest levels: At the functional level, the error rate for assembler is a multiple of that for C which in turn is a multiple of that for Python. 
>>> >>> It's not just about levels of abstraction, though that is certainly a vital factor. It also concerns the availability of proven code, and how easily that code can be integrated onto the current project. Part of the issue is how easy or difficult it is to create portable and reusable code on any system, the rest concerns how to make use of that code. I love projects where most of my effort is spent writing "glue" code to integrate known-good libraries to create a new application. My work as an engineer then becomes finding that good code, and selecting from among the candidates the code that best meets my needs and will be easiest to integrate. >>> >>> That said, what I love most is writing code from scratch, implementing something important that has never existed before. But that's not what gets products out the door: Professionally, I must hold my own code authoring skills and desires at bay, and use the rest of my skills to assess what's best for the system and product as a whole. It's kind of schizophrenic: I ship better systems sooner when I keep my own coding skills as a last resort, and prioritize integrating the work of others. But I always yearn for projects where I get to do something truly new and innovative. >>> >>> I view the FPGA as an ideal canvas for an embedded engineer, where I can work more closely with the EEs while simultaneously using my system design and integration skills to further reduce the software needed to produce a product. >>> >>> The first FPGA skill I'd most like to have to be productive would be the ability to write glue between the FPGA and devices in the outside world, and between IP cores within the FPGA. I became very excited when I saw that Migen included the Wishbone interface as a pre-integrated macro: There are many cores over at OpenCores that use the Wishbone interface, but getting them to talk to each other required a level of skill that was way beyond my knowledge. 
>>> >>> Why would I need to know how to implement a Wishbone interface? What I need to know is how to design and test a system that uses components integrated with standard interfaces, and those skills are readily transferred from my embedded experience, where I have had to test and integrate many hardware systems and components. I like using logic analyzers and o'scopes far more than I like using software debuggers! >>> >>> I suppose this isn't really an HDL issue: It's much higher than that. I suppose it's more of a tool-level issue with language implications. >>> >>> When I finally do start to write my own complex circuits from scratch (something I greatly look forward to, and expect to have lots of fun doing), I'll want to use a language (or languages) that are appropriate to the task at hand, much as I use assembler, C and Python to create embedded software. >>> >>> However, identifying the most appropriate language can be difficult, and may even be a needless effort if the highest-level languages can "play nice" at the lowest level. Permit me to share a case in point: >>> >>> Several years ago I had to develop an instrument based on a complex sensor system that used inherently noisy sensors from which we needed to extract "soft" real-time performance. The statistics alone were quite daunting, and I decided to prototype the system in Python so I could use Numpy and related libraries to quickly develop the algorithms I needed. The prototype was a success, but it had one interesting characteristic: It met our performance spec when running on the desktop, and was only a factor of 2-3 slower than our minimum required performance spec when running on our intended target system. Surprising performance for a prototype in an interpreted language. >>> >>> I profiled the code and found that the Python numeric and statistical libraries were wicked-fast: They were more than capable of doing all the math needed well within our timing requirements. 
That meant the slow performance was due to the code I had written to interface to the sensors and prepare their data for use by the fast libraries, and to passing data between those libraries. I first moved as much functionality as possible into the libraries, which yielded an immediate 25% speed improvement. Next, I stumbled across Psyco (these were pre-PyPy days), and used it to more than double the performance of the non-library code I had written. >>> >>> That left me tantalizingly close to meeting my timing requirements, but I was still missing them by 30%. I had never before considered using Python to implement a near-real-time embedded system. Not only that, I also had never shipped Linux in a delivered system: I had always been driven to minimize system power, which meant small processors, which in turn meant running no code that didn't directly support the end result, which in turn meant often running with no OS, or at most an RTOS. For this system I had a much more generous power envelope due to the ample maximum weight my battery-powered instrument could have. If I went with a Linux platform, I'd need to double my power budget, which meant doubling the current rating and capacity of the batteries: Switching from NiMH to Li-Ion permitted me to get the power I needed with only a modest increase in system weight. >>> >>> But how to obtain that last bit of performance I needed? I was already using the fastest CPU I could get that met our packaging requirements (no fan, no extensive or expensive passive heatsinks, which at the time meant a ULV Celeron). A final round of profiling indicated my interface code was spending lots of time reading and converting data: Each data element was causing individual library calls. Implementing larger buffers helped, but still didn't get me close to the time I needed: Servicing the hardware was the problem. 
The final speed boost came when the buffers were moved from software to hardware, so each interface access returned 1000x more data, a design change that also improved the timing between the individual sensors, which in turn simplified the statistics needed to process the data. >>> >>> Sorry for the length of that story, but that experience has affected all the work I've done since, and also affects my expectations when it comes to FPGA development. I want my FPGA development to integrate nicely into the above process, to be a powerful tool that permits me not just to engineer a fully-specified circuit into a fully-specified target, but also to experiment with integrating different components, to move solutions or parts of the problem from one domain to another, to quickly make use of the work of others, and be able to create my own when needed. >>> >>> The key element that must be present to facilitate such flexibility is the ability to implement common aspects of interfaces across multiple domains, to switch interfaces while preserving functionality, to be able to work with gates, busses, processors, instructions, libraries, languages and whatever else is needed to permit a given slice of functionality to be implemented and accessed in its best and most effective domain. I want everything from simple FIFOs to complex processor interfaces to be available to me in multiple domains without having to create them from scratch each time. >>> >>> And THAT is what initially attracted me to MyHDL: If I can use Python to move software into hardware, and interface with it from the Python domain, why, I could become a system development god! And the advent of PyPy makes that approach even more tractable. >>> >>> I soon learned that such power is not the goal of MyHDL: The goal of MyHDL is to use Python to make an incrementally better flavor of Verilog or VHDL, a relatively small but significant evolutionary step compared to the transformative change I was dreaming of. 
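The buffering change described in the story above, replacing one library call per data element with one call per block, can be sketched in plain Python. This is an illustrative reconstruction, not the original instrument code; the sensor data format and all names are hypothetical:

```python
import array
import struct
import sys

# Hypothetical raw sensor buffer: 1000 consecutive little-endian
# signed 16-bit samples, as a hardware FIFO might deliver in one read.
SAMPLE_FMT = "<h"
raw = struct.pack("<1000h", *range(1000))

def per_element(buf):
    # Slow pattern: one library call per data element.
    size = struct.calcsize(SAMPLE_FMT)
    return [struct.unpack_from(SAMPLE_FMT, buf, i * size)[0]
            for i in range(len(buf) // size)]

def batched(buf):
    # Fast pattern: a single call converts the whole buffer at once.
    samples = array.array("h")
    samples.frombytes(buf)
    if sys.byteorder == "big":
        samples.byteswap()  # the buffer is little-endian
    return samples.tolist()

assert per_element(raw) == batched(raw)
```

The two functions produce identical results; the batched version simply amortizes the per-call overhead across the whole buffer, which is the same effect the hardware-side buffering achieved at a larger scale.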
>>> >>> I desire to climb up and skip between levels of abstraction and implementation domains that are almost orthogonal to what MyHDL pursues. MyHDL may prove to be a piece of the final process, but it does not, and perhaps cannot, encompass that process on its own. >>> >>> Hence my excitement with Migen: Even in its embryonic, incomplete, and flawed initial state, it clearly seeks to bridge abstractions in ways alien to MyHDL, to make interfaces an integral part of the package and process, rather than something to code to at a low level. In this respect, MyHDL feels more like C, and Migen aims to be Python with its powerful libraries being made available to higher-level code. >>> >>> Again, I'm only mapping from domains I know to a domain I hope to enter and in which I hope to become proficient and productive. I'm certain my mapping has flaws and gaps owing to my lack of general HDL knowledge. But I suspect this very lack may enable me to conceive of and pursue a path that may yield a greater whole. >>> >>> MyHDL and Migen both seem to me to be stepping stones in a stream I wish to see crossed by a 4-lane bridge. And arguments between and about them seem like arguments about C vs. Python: It's about the solution path and the power of the process, not about any particular tool! It's about how tools interact and combine, not about how or why they differ or overlap. >>> >>> >>> And if Python/PyPy doesn't evolve more quickly, it may get devoured by Julia, an interpreted/JIT language which abstracts into areas Python presently supports badly (such as multiprocessing and coroutines), with speeds PyPy has yet to attain. It may be that the concepts embodied in both MyHDL and Migen could eventually see more effective and more flexible implementations in the Julia ecosystem. Python, too, is just one tool in a procession of tools over time. 
>>> >>> >>> -BobC >>> >>> >>> On 04/22/2012 03:25 AM, Jan Decaluwe wrote: >>>> I still want to take the time to clarify my position >>>> on the many issues raised in this post. >>>> >>>> On 03/17/2012 09:20 PM, Bob Cunningham wrote: >>>>> On 03/16/2012 02:03 PM, Jan Decaluwe wrote: >>>>>> My conclusion is that Migen found an easy target in you, as a >>>>>> newbie, to confuse you. It made you think it can be used for >>>>>> serious design work. >>>>> Sorry, Jan. If I have to be "confused" to play with my FPGA, then so >>>>> be it. I'm very "serious" about being able to play with my FPGA! >>>>> >>>>> Your statement has an obvious implicit context: To me, you are >>>>> shouting, "MyHDL is for Serious Designers Only! Newbies and >>>>> Pragmatists should Go Away!" >>>>> >>>>> If that's what you are saying, then please be direct: Don't attack >>>>> Migen for being what it was intentionally created to be, or for being >>>>> something MyHDL is not intended to be. Are you upset about Migen >>>>> existing, or that there is an audience MyHDL can't help as well as >>>>> Migen may be able to? >>>> I am disturbed by the suggestion that my critique of Migen is >>>> based on anything other than a purely technical assessment. >>>> >>>> Let me be clear. I don't like Mr. Bourdeauducq's attitude >>>> one bit. But do you think that would be a reason for me to >>>> ignore any good idea that he might come up with? Of course >>>> not. I am not a masochist. >>>> >>>> It is quite simple. During my career, I have found that when >>>> you think you have seen the worst in HDL-based design, it >>>> always gets worse. To date, Migen is the worst that I have >>>> seen. But to understand why I am saying this, you have to >>>> be prepared to follow my technical arguments and to >>>> engage in technical discussions. I have made a few starts, >>>> but I certainly was not done yet. However, I see close to >>>> zero enthusiasm to continue such discussions. 
>>>> >>>> I am therefore frustrated by the fact that I hear all kinds >>>> of opinions and suggestions to "merge" but that whenever things >>>> get a little technical then the "I am a beginner" umbrella opens. >>>> >>>> Migen is not my problem. It will disappear in the milky mist >>>> of HDL history, just like the many HDLs based on the same >>>> flawed paradigm. I am addressing it simply because misleading >>>> posts about it appear on this newsgroup. >>>> >>>> What I am really targeting instead is the conventional wisdom in >>>> mainstream HDL design, which often has it all wrong. >>>> >>>>> If you'd rather beginners like myself go >>>>> elsewhere, just say so. >>>> MyHDL welcomes beginners. It is the first item on "Why MyHDL": >>>> >>>> http://myhdl.org/doku.php/why#you_are_new_to_digital_hardware_design >>>> >>>> In fact, I put most of my hopes on beginners, as they have not >>>> yet been brainwashed by the conventional wisdom. >>>> >>>>> Remember, Migen wasn't created to be useful to beginners: It was >>>>> created by an experienced FPGA designer concerned about practicality >>>>> and productivity for a specific Open Source hardware project. It >>>> I am not impressed by "arguments from experience". The conventional >>>> wisdom that I am targeting was created by experienced designers, >>>> mostly Verilog ones. Experience can lead to conservatism and >>>> can become a hindrance to clear thinking. >>>> >>>>> simply happened that some things became a bit clearer to me after >>>>> looking at Migen and the problems it was created to address. >>>> Was it because of Migen or simply because you were spending more >>>> time on the problem? >>>> >>>>> Whatever Migen leaves out may have been what was getting in my way! >>>> I find that strange. I understand that you have a lot of >>>> experience with embedded software. Therefore, you must know >>>> procedural software techniques very well. That is exactly what >>>> Migen leaves out. 
What it leaves is low-level concurrency at the >>>> statement level, which must be new to you. And now you suggest >>>> that the obstacle is exactly what you are most familiar >>>> with. Beats me. >>>> >>>>> I'm actually quite lazy: What is the *least* I need to know to make >>>>> useful digital designs *now*? >>>> No secrets here. The first sentence of "Why MyHDL" warns you: >>>> "There's a lot to learn and it will be hard work". Therefore, if >>>> you are intellectually lazy (not prepared to learn new things even >>>> when they will save you lots of time and effort later on), MyHDL >>>> or HDL-based design is not for you. >>>> >>>> MyHDL is for those who are lazy in the good engineering sense, >>>> wanting to accomplish more with less effort eventually. >>>> >>>>> I'm a beginner: Though I'd love to someday be able to design >>>>> circuits like Shakespeare wrote sonnets, I'd be more than happy today >>>>> if I were able to work at the level of "Green Eggs and Ham", a true >>>>> masterpiece written with an absolute minimum of linguistic >>>>> complexity. >>>> Come on, let's keep some perspective here. It's not *that* difficult >>>> or complex either. And there is a cookbook that shows you the way. >>>> >>>>>> When Migen claims that the "event-driven" paradigm is too general, >>>>>> what it really dumps is procedural support in your HDL descriptions >>>>>> - the interesting stuff. >>>>> What's "interesting" to you can be a frustrating block for a newbie. >>>>> I hope to one day also be very interested in those aspects of MyHDL, >>>>> but it seems to have little to do with what I want to get done today, >>>> I don't understand. Your specification seems very extensive and >>>> ambitious. It would seem that you have a big need for raising the >>>> abstraction level as high as possible, and for an easy path to >>>> strong verification. >>>> >>>>> which is to find the simplest path to get basic circuits out of my >>>>> mind and into my FPGA. 
Then use them as building-blocks in my hobby >>>>> projects. >>>> There is a broad consensus about the "building blocks" paradigm >>>> in hardware design. That is really not the issue. The issue is >>>> what the abstraction level of the building blocks should be. >>>> >>>>> I am a MyHDL fan. Unfortunately, I simply remain unable to use MyHDL >>>>> to accomplish my own immediate hobby needs. That does not indicate >>>>> any flaw in MyHDL, merely the extent of my own limitations. Do not be >>>>> surprised that I am interested in any tool that helps me circumvent >>>>> those limitations! >>>>> >>>>> I actually *like* how Migen slices and dices the process of FPGA >>>>> design: The parts that are left out are ones I doubt newbies like me >>>>> would notice, much less care about, until confronted with unusually >>>>> challenging designs. I suspect Sebastien would agree with much of >>>>> your analysis, the proper response being: "So What?" >>>> Suppose that I teach a class to newbies in embedded software design >>>> based on assembler. Would any of the newbies, except for the >>>> rare genius, miss the capabilities of, say, C? Does this prove that >>>> teaching assembler was a good choice? >>>> >>>>> It's not about theoretical power or completeness: It's about barriers >>>>> to entry. It's not about what I can do in 5 years, but about what I >>>>> can do in 5 weeks. Migen is primarily about pragmatism and >>>>> productivity, making powerful circuits quickly and easily, and far >>>>> less about expert capabilities, theoretical purity or even >>>>> consistency. >>>> Again, I find this strange. I understand that you have not been >>>> successful with MyHDL. However, as I understand it you have not >>>> been successful with Migen either. So what is your defense based >>>> upon? Of course, we are about 5 weeks further now :-) >>>> >>>> More to the point. >>>> >>>> Barriers to entry - ok, but what is the task? 
I told you that I >>>> believe the main problem in HDL-based design is verification, and >>>> how MyHDL (unlike Migen) helps you by the fact that you >>>> can use the same modelling paradigm for high-level models and >>>> test benches as for synthesizable logic. >>>> >>>> You seemed surprised, which I found surprising in turn. Is >>>> it so different in software? Really, getting those gates into >>>> an FPGA is the easy part. The difficult part is getting >>>> them to work properly. >>>> >>>> You will have noticed that Mr. Bourdeauducq made an error in >>>> the first simple "one-liner" circuit that I presented to him, >>>> as if he wanted to prove my point. Of course, the reason is >>>> not incompetence, but simply that he did not verify his >>>> design. >>>> >>>> There is a pattern however. Mr. Bourdeauducq cried foul because >>>> I didn't accept his "simple patches". What he ignored, and >>>> continued to ignore despite my insistence, is that they broke >>>> MyHDL. Perhaps Mr. Bourdeauducq considers verification a "detail". >>>> >>>> Well, I don't. Verification is the problem. The reason why I >>>> think the abstraction level of synthesizable logic should >>>> be as high as possible, is because that leaves more time >>>> for verification. >>>> >>>>> I seek tools that will help me do what I want to get done today, and >>>>> so far Migen seems like it will be most useful. Tomorrow will likely >>>>> require additional tools, and I *absolutely* expect (and want) MyHDL >>>>> to be the first of those tools. It is not an either-or proposition: I >>>>> want Migen *and* MyHDL. I realize MyHDL will take more time to >>>>> master, and I'm willing to commit that time. But I also want to >>>>> create something sooner, rather than later. And I believe that 100% >>>>> of what I learn using Migen will later prove useful with MyHDL. I >>>>> believe using Migen will keep me motivated toward learning MyHDL. >>>> Sounds good, but I think it is cheap talk. 
>>>> >>>> Most of Migen's technical choices, starting with its basic paradigm, >>>> are almost the opposite of MyHDL's. As a result, verification is not >>>> addressed, and it forces you to think at an artificially low level >>>> for synthesizable logic. What good can one learn from that? >>>> >>>>> Right next to me I have a Spartan 3E-500 that contains nothing of my >>>>> own design. That must change! >>>> Perhaps you are too ambitious. >>>> >>>> In your shoes, I would start as follows: >>>> >>>> * isolate a simple function out of your spec >>>> * try to concentrate on what it does, instead of how it should be implemented >>>> * write that behavior in a (clocked) MyHDL process or processes >>>> * also describe it in high-level python, and use that in a unit-test to verify >>>> * experiment with synthesis to see whether it works and the result is acceptable >>>> * iterate with the synthesizable description as necessary >>>> >>>> >>> ------------------------------------------------------------------------------ >>> For Developers, A Lot Can Happen In A Second. >>> Boundary is the first to Know...and Tell You. >>> Monitor Your Applications in Ultra-Fine Resolution. Try it FREE! >>> http://p.sf.net/sfu/Boundary-d2dvs2 >>> _______________________________________________ >>> myhdl-list mailing list >>> myh...@li... >>> https://lists.sourceforge.net/lists/listinfo/myhdl-list
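Jan's fourth step, describing the same behavior in high-level Python and checking the clocked version against it in a unit test, can be illustrated with a deliberately small example. This is plain stdlib Python standing in for a MyHDL process; the wrapping counter is a hypothetical example, not a circuit from the thread:

```python
def model_count(cycle, maxval):
    # High-level reference model: the expected count after `cycle` clock edges.
    return cycle % (maxval + 1)

def clocked_counter(maxval):
    # Cycle-accurate description in the style of a clocked process:
    # each next() models one rising clock edge.
    count = 0
    while True:
        yield count
        count = 0 if count == maxval else count + 1

# Unit test: the clocked description must agree with the model, cycle by cycle.
dut = clocked_counter(maxval=9)
for cycle in range(100):
    assert next(dut) == model_count(cycle, 9)
```

In MyHDL proper, the clocked process would be a generator sensitive to a clock signal and the comparison would live in a test bench; the point here is only the pattern of verifying cycle-level behavior against a high-level model.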
From: Bob C. <fl...@gm...> - 2012-04-22 20:30:57
Hi David, I have negligible experience with Clojure, possibly because it resurrects my Lisp nightmares of paren counting and infinite recursion through tangled lists. I will take a look at your project as soon as time permits, which unfortunately may not be until next weekend. I am motivated to see how you have approached the general issue, and what factors have shaped your particular approach. Most of all, thanks for your supportive reply! I really hate criticizing Jan, especially when I know I'm trying to do battle unarmed. Yet I'm light-years away from even formulating, much less implementing, my own solution to the problem. So I'm highly motivated to learn what others, such as you, are doing. Thanks, -BobC On 04/22/2012 01:02 PM, David Greenberg wrote: > Hi Bob, > I wanted to let you know about my own solution to this problem, a > language embedded in Clojure called Piplin. I initially sought to use > MyHDL for some of the reasons you did (integration ease) and some > other reasons (leveraging type systems and new syntax to improve > expressiveness). Yesterday, I successfully synthesized the code to > verilog. I've implemented the type system and simulator too, but > there's a lot of work to be done. Since it's embedded in Java, it's > easy to use all kinds of numerical libraries and integrate with many > systems. Here's the code: github.com/dgrnbrg/piplin > > The best way to see what it looks like in context is here: > https://github.com/dgrnbrg/piplin/blob/master/src/piplin/mips.clj > > I started this project in February from a blank slate, so although > it's not ready yet, the ideas are there, and I'd love your feedback on > syntax, semantics, and features that are important. > > Thanks, > David > > On Sun, Apr 22, 2012 at 2:53 PM, Bob Cunningham<fl...@gm...> wrote: >> Thank-you Jan for your careful and caring response. It is clear I haven't gone far enough down my own learning path to make well-founded criticisms of HDL tools or approaches. 
While I continue to closely monitor this list and developments in the general world of "alternative" HDLs, a recent job change has temporarily eliminated my free time to pursue learning HDLs: My FPGA development board is gathering dust on my workbench. Still, when I finally do resume my efforts, I hope to have a better path to pursue. >> >> Rather than attempt to respond to your points individually, please permit me instead to step back and restate my goals and describe how they have evolved since encountering MyHDL and Migen. >> >> As an embedded real-time engineer specializing primarily in software, but with proven skills in system design and high-level circuit design (mainly processor and peripheral chip selection and integration), my greatest goal is to minimize the amount of new software in the products I create. Statistically, nothing adds bugs to a system faster, and delays product shipping longer, than adding a single new line of code. The buggiest code tends to occur at the lowest levels: At the functional level, the error rate for assembler is a multiple of that for C which in turn is a multiple of that for Python. >> >> It's not just about levels of abstraction, though that is certainly a vital factor. It also concerns the availability of proven code, and how easily that code can be integrated onto the current project. Part of the issue is how easy or difficult it is to create portable and reusable code on any system, the rest concerns how to make use of that code. I love projects where most of my effort is spent writing "glue" code to integrate known-good libraries to create a new application. My work as an engineer then becomes finding that good code, and selecting from among the candidates the code that best meets my needs and will be easiest to integrate. >> >> That said, what I love most is writing code from scratch, implementing something important that has never existed before. 
But that's not what gets products out the door: Professionally, I must hold my own code authoring skills and desires at bay, and use the rest of my skills to assess what's best for the system and product as a whole. It's kind of schizophrenic: I ship better systems sooner when I keep my own coding skills as a last resort, and prioritize integrating the work of others. But I always yearn for projects where I get to do something truly new and innovative. >> >> I view the FPGA as an ideal canvas for an embedded engineer, where I can work more closely with the EEs while simultaneously using using my system design and integration skills to further reduce the software needed to produce a product. >> >> The first FPGA skill I'd most like to have to be productive would be the ability to write glue between the FPGA and devices in the outside world, and between IP cores within the FPGA. I became very excited when I saw that Migen included the Wishbone interface as a pre-integrated macro: There are many cores over at OpenCores that use the Wishbone interface, but getting them to talk to each other required a level of skill that was way beyond my knowledge. >> >> Why would I need to know how to implement a Wishbone interface? What I need to know is how to design and test a system that uses components integrated with standard interfaces, and those skills are readily transferred from my embedded experience, where I have had to test and integrate many hardware systems and components. I like using logic analyzers and o'scopes far more than I like using software debuggers! >> >> I suppose this isn't really an HDL issue: It's much higher than that. I suppose it's more of a tool-level issue with language implications. 
>> >> When I finally do start to write my own complex circuits from scratch (something I greatly look forward to, and expect to have lots of fun doing), I'll want to use a language (or languages) that are appropriate to the task at hand, much as I use assembler, C and Python to create embedded software. >> >> However, identifying the most appropriate language can be difficult, and may even be a needless effort if the highest-level languages can "play nice" at the lowest level. Permit me to share a case in point: >> >> Several years ago I had to develop an instrument based on a complex sensor system that used inherently noisy sensors from which we needed to extract "soft" real-time performance. The statistics alone were quite daunting, and I decided to prototype the system in Python so I could use Numpy and related libraries to quickly develop the algorithms I needed. The prototype was a success, but it had one interesting characteristic: It met our performance sped when running on the desktop, and was only a factor of 2-3 slower than our minimum required performance spec when running on our intended target system. Surprising performance for a prototype in an interpreted language. >> >> I profiled the code and found that the Python numeric and statistical libraries were wicked-fast: They were more than capable of doing all the math needed well within our timing requirements. That meant the slow performance was due to the code I had written to interface to the sensors and prepare their data for use by the fast libraries, and passing data between those libraries. I first moved as much functionality as possible into the libraries, which yielded an immediate 25% speed improvement. Next, I stumbled across Psyco (these were pre-PyPy days), and used it to more than double the performance of the non-library code I had written. >> >> That left me tantalizingly close to meeting my timing requirements, but I was still missing them by 30%. 
I had never before considered using Python to implement a near-real-time embedded system. Not only that, I also had never shipped Linux in a delivered system: I had always been driven to minimize system power, which meant small processors, which in turn meant running no code that didn't directly support the end result, which in turn meant often running with no OS, or at most an RTOS. For this system I had a much more generous power envelope due to the generous maximum weight my battery-powered instrument could have. If I went with a Linux platform, I'd need to double my power budget, which meant doubling the current rating and capacity of the batteries: Switching from NiMh to Li-Ion permitted me to get the power I needed with only a modest increase in system weight. >> >> But how to obtain that last bit of performance I needed? I was already using the fastest CPU I could get that met our packaging requirements (no fan, no extensive or expensive passive heatsinks, which at the time meant a ULV Celeron). A final round of profiling indicated my interface code was spending lots of time reading and converting data: Each data element was causing individual library calls. Implementing larger buffers helped, but still didn't get me close to the time I needed: Servicing the hardware was the problem. The final speed boost came when the buffers were moved from software to hardware, so each interface access returned 1000x more data, a design change that also improved the timing between the individual sensors, which in turn simplified the statistics needed to process the data. >> >> Sorry for the length of that story, but that experience has affected all the work I've done since, and also affects my expectations when it comes to FPGA development. 
I want my FPGA development to integrate nicely into the above process, to be a powerful tool that permits me not just to engineer a fully-specified circuit into a fully-specified target, but also to experiment with integrating different components, to move solutions or parts of the problem from one domain to another, to quickly make use of the work of others, and be able to create my own when needed. >> >> The key element that must be present to facilitate such flexibility is the ability to implement common aspects of interfaces across multiple domains, to switch interfaces while preserving functionality, to be able to work with gates, busses, processors, instructions, libraries, languages and whatever else is needed to permit a given slice of functionality to be implemented and accessed in its best and most effective domain. I want everything from simple FIFOs to complex processor interfaces to be available to me in multiple domains without having to create them from scratch each time. >> >> And THAT is what initially attracted me to MyHDL: If I can use Python to move software into hardware, and interface with it from the Python domain, why, I could become a system development god! And the advent of PyPy makes that approach even more tractable. >> >> I soon learned that such power is not the goal of MyHDL: The goal of MyHDL is to use Python to make a incrementally better flavor Verilog or VHDL, a relatively small but significant evolutionary step compared to the transformative change I was dreaming of. >> >> I desire to climb up and skip between levels of abstraction and implementation domains that are almost orthogonal to what MyHDL pursues. MyHDL may prove to be a piece of the final process, but it does not, and perhaps cannot, encompass that process on its own. 
>> >> Hence my excitement with Migen: Even in its embryonic, incomplete, and flawed initial state, it clearly seeks to bridge abstractions in ways alien to MyHDL, to make interfaces an integral part of the package and process, rather than something to code to at a low level. In this respect, MyHDL feels more like C, and Migen aims to be Python with its powerful libraries being made available to higher-level code. >> >> Again, I'm only mapping from domains I know to a domain I hope to enter and in which I hope become proficient and productive. I'm certain my mapping has flaws and gaps owing to my lack of general HDL knowledge. But I suspect this very lack may enable me to conceive of and pursue a path that may yield a greater whole. >> >> MyHDL and Migen both seem to me to be stepping stones in a stream I wish to see crossed by a 4-lane bridge. And arguments between and about them seem like arguments about C vs. Python: It's about the solution path and the power of the process, not about any particular tool! It's about how tools interact and combine, not about how or why they differ or overlap. >> >> >> And if Python/PyPy doesn't evolve more quickly, it may get devoured by Julia, an interpreted/JIT language which abstracts into areas Python presently supports badly (such as multiprocessing and coroutines), with speeds PyPy has yet to attain. It may be that the concepts embodied in both MyHDL and Migen could eventually see more effective and more flexible implementations in the Julia ecosystem. Python, too, is just one tool in a procession of tools over time. >> >> >> -BobC >> >> >> On 04/22/2012 03:25 AM, Jan Decaluwe wrote: >>> I still want to take the time to clarify my position >>> on the many issues raised in this post. >>> >>> On 03/17/2012 09:20 PM, Bob Cunningham wrote: >>>> On 03/16/2012 02:03 PM, Jan Decaluwe wrote: >>>>> My conclusion is that Migen found an easy target in you, as a >>>>> newbie, to confuse you. 
It made you think it can be used for >>>>> serious design work. >>>> Sorry, Jan. If I have to be "confused" to play with my FPGA, then so >>>> be it. I'm very "serious" about being able to play with my FPGA! >>>> >>>> Your statement has an obvious implicit context: To me, you are >>>> shouting, "MyHDL is for Serious Designers Only! Newbies and >>>> Pragmatists should Go Away!" >>>> >>>> If that's what you are saying, then please be direct: Don't attack >>>> Migen for being what it was intentionally created to be, or for being >>>> something MyHDL is not intended to be. Are you upset about Migen >>>> existing, or that there is an audience MyHDL can't help as well as >>>> Migen may be able to? >>> I am disturbed by the suggestion that my critique on Migen is >>> based on anything other than a purely technical assessment. >>> >>> Let me be clear. I don't like Mr. Bourdeauducq's attitude >>> one bit. But do you think that would be a reason for me to >>> ignore any good idea that he might come up with? Of course >>> not. I am not a masochist. >>> >>> It is quite simple. During my career, I have found that when >>> you think you have seen the worst in HDL-based design, it >>> always gets worse. To date, Migen is the worst that I have >>> seen. But to understand why I am saying this, you have to >>> be prepared to follow my technical arguments and to >>> engage in technical discussions. I have made a few starts, >>> but I certainly was not done yet. However, I see close to >>> zero enthusiasm to continue such discussions. >>> >>> I am therefore frustrated by the fact that I hear all kinds >>> of opinions and suggestions to "merge" but that whenever things >>> get a little technical then the "I am a beginner" umbrella opens. >>> >>> Migen is not my problem. It will disappear in the milky mist >>> of HDL history, just like the many HDLs based on the same >>> flawed paradigm. I am addressing it simply because misleading >>> posts about it appear on this newsgroup. 
>>> >>> What I am really targeting instead is the conventional wisdom in >>> mainstream HDL design, which often has it all wrong. >>> >>>> If you'd rather beginners like myself go >>>> elsewhere, just say so. >>> MyHDL welcomes beginners. It is the first item on "Why MyHDL": >>> >>> http://myhdl.org/doku.php/why#you_are_new_to_digital_hardware_design >>> >>> In fact, I put most of my hopes on beginners, as they have not >>> yet been brainwashed by the conventional wisdom. >>> >>>> Remember, Migen wasn't created to be useful to beginners: It was >>>> created by an experienced FPGA designer concerned about practicality >>>> and productivity for a specific Open Source hardware project. It >>> I am not impressed by "arguments from experience". The conventional >>> wisdom that I am targeting was created by experienced designers, >>> mostly Verilog ones. Experience can lead to conservatism and >>> can become a hindrance to clear thinking. >>> >>>> simply happened that some things became a bit clearer to me after >>>> looking at Migen and the problems it was created to address. >>> Was it because of Migen or simply because you were spending more >>> time on the problem? >>> >>>> Whatever Migen leaves out may have been what was getting in my way! >>> I find that strange. I understand that you have a lot of >>> experience with embedded software. Therefore, you must know >>> procedural software techniques very well. That is exactly what >>> Migen leaves out. What it leaves is low-level concurrency at the >>> statement level, which must be new to you. And now you suggest >>> that the obstacle is exactly what you are most familiar >>> with. Beats me. >>> >>>> I'm actually quite lazy: What is the *least* I need to know to make >>>> useful digital designs *now*? >>> No secrets here. The first sentence of "Why MyHDL" warns you: >>> "There's a lot to learn and it will be hard work". 
Therefore, if >>> you are intellectually lazy (not prepared to learn new things even >>> when they will save you lots of time and effort later on), MyHDL >>> or HDL-based design is not for you. >>> >>> MyHDL is for those who are lazy in the good engineering sense, >>> wanting to accomplish more with less effort eventually. >>> >>>> I'm a beginner: Though I'd love to someday be able to design >>>> circuits like Shakespeare wrote sonnets, I'd be more than happy today >>>> if I were able to work at the level of "Green Eggs and Ham", a true >>>> masterpiece written with an absolute minimum of linguistic >>>> complexity. >>> Come on, let's keep some perspective here. It's not *that* difficult >>> or complex either. And there is a cookbook that shows you the way. >>> >>>>> When Migen claims that the "event-driven" paradigm is too general, >>>>> what it really dumps is procedural support in your HDL descriptions >>>>> - the interesting stuff. >>>> What's "interesting" to you can be a frustrating block for a newbie. >>>> I hope to one day also be very interested in those aspects of MyHDL, >>>> but it seems to have little to do with what I want to get done today, >>> I don't understand. Your specification seems very extensive and >>> ambitious. It would seem that you have a big need for raising the >>> abstraction level as high as possible, and for an easy path to >>> strong verification. >>> >>>> which is to find the simplest path to get basic circuits out of my >>>> mind and into my FPGA. Then use them as building-blocks in my hobby >>>> projects. >>> There is a broad consensus about the "building blocks" paradigm >>> in hardware design. That is really not the issue. The issue is >>> what the abstraction level of the building blocks should be. >>> >>>> I am a MyHDL fan. Unfortunately, I simply remain unable to use MyHDL >>>> to accomplish my own immediate hobby needs. That does not indicate >>>> any flaw in MyHDL, merely the extent of my own limitations. 
Do not be >>>> surprised that I am interested in any tool that helps me circumvent >>>> those limitations! >>>> >>>> I actually *like* how Migen slices and dices the process of FPGA >>>> design: The parts that are left out are ones I doubt newbies like me >>>> would notice, much less care about, until confronted with unusually >>>> challenging designs. I suspect Sebastien would agree with much of >>>> your analysis, the proper response being: "So What?" >>> Suppose that I teach a class to newbies in embedded software design >>> based on assembler. Would any of the newbies, except for the >>> rare genius, miss the capabilities of say, C? Does this prove that >>> teaching assembler was a good choice? >>> >>>> It's not about theoretical power or completeness: It's about barriers >>>> to entry. It's not about what I can do in 5 years, but about what I >>>> can do in 5 weeks. Migen is primarily about pragmatism and >>>> productivity, making powerful circuits quickly and easily, and far >>>> less about expert capabilities, theoretical purity or even >>>> consistency. >>> Again, I find this strange. I understand that you have not been >>> successful with MyHDL. However, as I understand it you have not >>> been successful with Migen either. So what is your defense based >>> upon? Of course, we are about 5 weeks further now :-) >>> >>> More to the point. >>> >>> Barriers to entry - ok, but what is the task? I told you that I >>> believe the main problem in HDL-based design is verification, and >>> how MyHDL (unlike Migen) helps you by the fact that you >>> can use the same modelling paradigm for high-level models and >>> test benches as for synthesizable logic. >>> >>> You seemed surprised, which I found surprising in turn. Is >>> it so different in software? Really, getting those gates into >>> an FPGA is the easy part. The difficult part is getting >>> them to work properly. >>> >>> You will have noticed that Mr. 
Bourdeauducq made an error in >>> the first simple "one-liner" circuit that I presented to him, >>> as if he wanted to prove my point. Of course, the reason is >>> not incompetence, but simply that he did not verify his >>> design. >>> >>> There is a pattern however. Mr. Bourdeauducq cried foul because >>> I didn't accept his "simple patches". What he ignored, and >>> continued to ignore despite my insistence is that they broke >>> MyHDL. Perhaps Mr. Bourdeauducq considers verification a "detail". >>> >>> Well, I don't. Verification is the problem. The reason why I >>> think the abstraction level of synthesizable logic should >>> be as high as possible, is because that leaves more time >>> for verification. >>> >>>> I seek tools that will help me do what I want to get done today, and >>>> so far Migen seems like it will be most useful. Tomorrow will likely >>>> require additional tools, and I *absolutely* expect (and want) MyHDL >>>> to be the first of those tools. It is not an either-or proposition: I >>>> want Migen *and* MyHDL. I realize MyHDL will take more time to >>>> master, and I'm willing to commit that time. But I also want to >>>> create something sooner, rather than later. And I believe that 100% >>>> of what I learn using Migen will later prove useful with MyHDL. I >>>> believe using Migen will keep me motivated toward learning MyHDL. >>> Sounds good, but I think it is cheap talk. >>> >>> Most of Migen's technical choices, starting with its basic paradigm, >>> are almost the opposite of MyHDL's. As a result, verification is not >>> addressed, and it forces you to think at an artificially low level >>> for synthesizable logic. What good can one learn from that? >>> >>>> Right next to me I have a Spartan 3E-500 that contains nothing of my >>>> own design. That must change! >>> Perhaps you are too ambitious. 
>>> >>> In your shoes, I would start as follows: >>> >>> * isolate a simple function out of your spec >>> * try to concentrate on what it does, instead of how it should be implemented >>> * write that behavior in a (clocked) MyHDL process or processes >>> * also describe it in high-level python, and use that in a unit-test to verify >>> * experiment with synthesis to see whether it works and the result is acceptable >>> * iterate with the synthesizable description as necessary >>> >>> >> _______________________________________________ >> myhdl-list mailing list >> myh...@li... >> https://lists.sourceforge.net/lists/listinfo/myhdl-list
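Jan's suggested flow above (model the behavior, write an independent high-level spec, and compare them in a unit test before worrying about synthesis) can be sketched in plain Python. This is an illustrative example only: the saturating counter and all names here are mine, not from the thread, and no MyHDL is required for the modelling and unit-test steps.

```python
# Sketch of the "model, then verify against a high-level spec" flow,
# using a saturating counter as the simple function isolated from a spec.

def saturating_counter(width, inc_stream):
    """Cycle-by-cycle behavioral model, mimicking a clocked process:
    one output value per rising clock edge."""
    limit = 2 ** width - 1
    count = 0
    outputs = []
    for inc in inc_stream:          # one iteration == one clock cycle
        if inc and count < limit:
            count += 1
        outputs.append(count)
    return outputs

def reference(width, inc_stream):
    """Independent high-level spec: running count of asserted 'inc'
    pulses, clipped to the counter's maximum value."""
    limit = 2 ** width - 1
    return [min(limit, sum(map(bool, inc_stream[:i + 1])))
            for i in range(len(inc_stream))]

# Unit-test step: drive both descriptions with the same stimulus and compare.
stimulus = [1, 1, 0, 1, 1, 1, 1, 1]   # 7 increment pulses over 8 cycles
assert saturating_counter(3, stimulus) == reference(3, stimulus)
print(saturating_counter(3, stimulus))
```

Only once the behavioral model passes this kind of check would one move on to a synthesizable MyHDL description and iterate, per the last two bullets of the list.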
From: David G. <dsg...@gm...> - 2012-04-22 20:02:09
Hi Bob, I wanted to let you know about my own solution to this problem, a language embedded in Clojure called Piplin. I initially sought to use MyHDL for some of the reasons you did (integration ease) and some other reasons (leveraging type systems and new syntax to improve expressiveness). Yesterday, I successfully synthesized the code to Verilog. I've implemented the type system and simulator too, but there's a lot of work to be done. Since it runs on the JVM, it's easy to use all kinds of numerical libraries and integrate with many systems. Here's the code: github.com/dgrnbrg/piplin The best way to see what it looks like in context is here: https://github.com/dgrnbrg/piplin/blob/master/src/piplin/mips.clj I started this project in February from a blank slate, so although it's not ready yet, the ideas are there, and I'd love your feedback on syntax, semantics, and features that are important. Thanks, David On Sun, Apr 22, 2012 at 2:53 PM, Bob Cunningham <fl...@gm...> wrote: > Thank you, Jan, for your careful and caring response. It is clear I haven't gone far enough down my own learning path to make well-founded criticisms of HDL tools or approaches. While I continue to closely monitor this list and developments in the general world of "alternative" HDLs, a recent job change has temporarily eliminated my free time to pursue learning HDLs: My FPGA development board is gathering dust on my workbench. Still, when I finally do resume my efforts, I hope to have a better path to pursue. > > Rather than attempt to respond to your points individually, please permit me instead to step back and restate my goals and describe how they have evolved since encountering MyHDL and Migen. > > As an embedded real-time engineer specializing primarily in software, but with proven skills in system design and high-level circuit design (mainly processor and peripheral chip selection and integration), my greatest goal is to minimize the amount of new software in the products I create. 
Statistically, nothing adds bugs to a system faster, and delays product shipping longer, than adding a single new line of code. The buggiest code tends to occur at the lowest levels: At the functional level, the error rate for assembler is a multiple of that for C, which in turn is a multiple of that for Python. > > It's not just about levels of abstraction, though that is certainly a vital factor. It also concerns the availability of proven code, and how easily that code can be integrated into the current project. Part of the issue is how easy or difficult it is to create portable and reusable code on any system; the rest concerns how to make use of that code. I love projects where most of my effort is spent writing "glue" code to integrate known-good libraries to create a new application. My work as an engineer then becomes finding that good code, and selecting from among the candidates the code that best meets my needs and will be easiest to integrate. > > That said, what I love most is writing code from scratch, implementing something important that has never existed before. But that's not what gets products out the door: Professionally, I must hold my own code authoring skills and desires at bay, and use the rest of my skills to assess what's best for the system and product as a whole. It's kind of schizophrenic: I ship better systems sooner when I keep my own coding skills as a last resort, and prioritize integrating the work of others. But I always yearn for projects where I get to do something truly new and innovative. > > I view the FPGA as an ideal canvas for an embedded engineer, where I can work more closely with the EEs while simultaneously using my system design and integration skills to further reduce the software needed to produce a product. > > The first FPGA skill I'd most like to have to be productive would be the ability to write glue between the FPGA and devices in the outside world, and between IP cores within the FPGA. 
I became very excited when I saw that Migen included the Wishbone interface as a pre-integrated macro: There are many cores over at OpenCores that use the Wishbone interface, but getting them to talk to each other required a level of skill that was way beyond my knowledge. > > Why would I need to know how to implement a Wishbone interface? What I need to know is how to design and test a system that uses components integrated with standard interfaces, and those skills are readily transferred from my embedded experience, where I have had to test and integrate many hardware systems and components. I like using logic analyzers and o'scopes far more than I like using software debuggers! > > I suppose this isn't really an HDL issue: It's much higher than that. I suppose it's more of a tool-level issue with language implications. > > When I finally do start to write my own complex circuits from scratch (something I greatly look forward to, and expect to have lots of fun doing), I'll want to use a language (or languages) appropriate to the task at hand, much as I use assembler, C and Python to create embedded software. > > However, identifying the most appropriate language can be difficult, and may even be a needless effort if the highest-level languages can "play nice" at the lowest level. Permit me to share a case in point: > > Several years ago I had to develop an instrument based on a complex sensor system that used inherently noisy sensors from which we needed to extract "soft" real-time performance. The statistics alone were quite daunting, and I decided to prototype the system in Python so I could use NumPy and related libraries to quickly develop the algorithms I needed. The prototype was a success, but it had one interesting characteristic: It met our performance spec when running on the desktop, and was only a factor of 2-3 slower than our minimum required performance spec when running on our intended target system. 
Surprising performance for a prototype in an interpreted language. > > I profiled the code and found that the Python numeric and statistical libraries were wicked-fast: They were more than capable of doing all the math needed well within our timing requirements. That meant the slow performance was due to the code I had written to interface to the sensors and prepare their data for use by the fast libraries, and passing data between those libraries. I first moved as much functionality as possible into the libraries, which yielded an immediate 25% speed improvement. Next, I stumbled across Psyco (these were pre-PyPy days), and used it to more than double the performance of the non-library code I had written. > > That left me tantalizingly close to meeting my timing requirements, but I was still missing them by 30%. I had never before considered using Python to implement a near-real-time embedded system. Not only that, I also had never shipped Linux in a delivered system: I had always been driven to minimize system power, which meant small processors, which in turn meant running no code that didn't directly support the end result, which in turn meant often running with no OS, or at most an RTOS. For this system I had a much more generous power envelope, thanks to the maximum weight my battery-powered instrument could have. If I went with a Linux platform, I'd need to double my power budget, which meant doubling the current rating and capacity of the batteries: Switching from NiMH to Li-Ion permitted me to get the power I needed with only a modest increase in system weight. > > But how to obtain that last bit of performance I needed? I was already using the fastest CPU I could get that met our packaging requirements (no fan, no extensive or expensive passive heatsinks, which at the time meant a ULV Celeron). 
A final round of profiling indicated my interface code was spending lots of time reading and converting data: Each data element was causing individual library calls. Implementing larger buffers helped, but still didn't get me close to the time I needed: Servicing the hardware was the problem. The final speed boost came when the buffers were moved from software to hardware, so each interface access returned 1000x more data, a design change that also improved the timing between the individual sensors, which in turn simplified the statistics needed to process the data. > > Sorry for the length of that story, but that experience has affected all the work I've done since, and also affects my expectations when it comes to FPGA development. I want my FPGA development to integrate nicely into the above process, to be a powerful tool that permits me not just to engineer a fully-specified circuit into a fully-specified target, but also to experiment with integrating different components, to move solutions or parts of the problem from one domain to another, to quickly make use of the work of others, and be able to create my own when needed. > > The key element that must be present to facilitate such flexibility is the ability to implement common aspects of interfaces across multiple domains, to switch interfaces while preserving functionality, to be able to work with gates, busses, processors, instructions, libraries, languages and whatever else is needed to permit a given slice of functionality to be implemented and accessed in its best and most effective domain. I want everything from simple FIFOs to complex processor interfaces to be available to me in multiple domains without having to create them from scratch each time. > > And THAT is what initially attracted me to MyHDL: If I can use Python to move software into hardware, and interface with it from the Python domain, why, I could become a system development god! 
And the advent of PyPy makes that approach even more tractable. > > I soon learned that such power is not the goal of MyHDL: The goal of MyHDL is to use Python to make an incrementally better flavor of Verilog or VHDL, a relatively small but significant evolutionary step compared to the transformative change I was dreaming of. > > I desire to climb up and skip between levels of abstraction and implementation domains that are almost orthogonal to what MyHDL pursues. MyHDL may prove to be a piece of the final process, but it does not, and perhaps cannot, encompass that process on its own. > > Hence my excitement with Migen: Even in its embryonic, incomplete, and flawed initial state, it clearly seeks to bridge abstractions in ways alien to MyHDL, to make interfaces an integral part of the package and process, rather than something to code to at a low level. In this respect, MyHDL feels more like C, and Migen aims to be Python with its powerful libraries being made available to higher-level code. > > Again, I'm only mapping from domains I know to a domain I hope to enter and in which I hope to become proficient and productive. I'm certain my mapping has flaws and gaps owing to my lack of general HDL knowledge. But I suspect this very lack may enable me to conceive of and pursue a path that may yield a greater whole. > > MyHDL and Migen both seem to me to be stepping stones in a stream I wish to see crossed by a 4-lane bridge. And arguments between and about them seem like arguments about C vs. Python: It's about the solution path and the power of the process, not about any particular tool! It's about how tools interact and combine, not about how or why they differ or overlap. > > > And if Python/PyPy doesn't evolve more quickly, it may get devoured by Julia, an interpreted/JIT language which abstracts into areas Python presently supports badly (such as multiprocessing and coroutines), with speeds PyPy has yet to attain. 
It may be that the concepts embodied in both MyHDL and Migen could eventually see more effective and more flexible implementations in the Julia ecosystem. Python, too, is just one tool in a procession of tools over time. > > > -BobC > > > On 04/22/2012 03:25 AM, Jan Decaluwe wrote: >> I still want to take the time to clarify my position >> on the many issues raised in this post. >> >> On 03/17/2012 09:20 PM, Bob Cunningham wrote: >>> On 03/16/2012 02:03 PM, Jan Decaluwe wrote: >>>> My conclusion is that Migen found an easy target in you, as a >>>> newbie, to confuse you. It made you think it can be used for >>>> serious design work. >>> Sorry, Jan. If I have to be "confused" to play with my FPGA, then so >>> be it. I'm very "serious" about being able to play with my FPGA! >>> >>> Your statement has an obvious implicit context: To me, you are >>> shouting, "MyHDL is for Serious Designers Only! Newbies and >>> Pragmatists should Go Away!" >>> >>> If that's what you are saying, then please be direct: Don't attack >>> Migen for being what it was intentionally created to be, or for being >>> something MyHDL is not intended to be. Are you upset about Migen >>> existing, or that there is an audience MyHDL can't help as well as >>> Migen may be able to? >> I am disturbed by the suggestion that my critique on Migen is >> based on anything other than a purely technical assessment. >> >> Let me be clear. I don't like Mr. Bourdeauducq's attitude >> one bit. But do you think that would be a reason for me to >> ignore any good idea that he might come up with? Of course >> not. I am not a masochist. >> >> It is quite simple. During my career, I have found that when >> you think you have seen the worst in HDL-based design, it >> always gets worse. To date, Migen is the worst that I have >> seen. But to understand why I am saying this, you have to >> be prepared to follow my technical arguments and to >> engage in technical discussions. 
I have made a few starts, >> but I certainly was not done yet. However, I see close to >> zero enthusiasm to continue such discussions. >> >> I am therefore frustrated by the fact that I hear all kinds >> of opinions and suggestions to "merge" but that whenever things >> get a little technical then the "I am a beginner" umbrella opens. >> >> Migen is not my problem. It will disappear in the milky mist >> of HDL history, just like the many HDLs based on the same >> flawed paradigm. I am addressing it simply because misleading >> posts about it appear on this newsgroup. >> >> What I am really targeting instead is the conventional wisdom in >> mainstream HDL design, which often has it all wrong. >> >>> If you'd rather beginners like myself go >>> elsewhere, just say so. >> MyHDL welcomes beginners. It is the first item on "Why MyHDL": >> >> http://myhdl.org/doku.php/why#you_are_new_to_digital_hardware_design >> >> In fact, I put most of my hopes on beginners, as they have not >> yet been brainwashed by the conventional wisdom. >> >>> Remember, Migen wasn't created to be useful to beginners: It was >>> created by an experienced FPGA designer concerned about practicality >>> and productivity for a specific Open Source hardware project. It >> I am not impressed by "arguments from experience". The conventional >> wisdom that I am targeting was created by experienced designers, >> mostly Verilog ones. Experience can lead to conservatism and >> can become a hindrance to clear thinking. >> >>> simply happened that some things became a bit clearer to me after >>> looking at Migen and the problems it was created to address. >> Was it because of Migen or simply because you were spending more >> time on the problem? >> >>> Whatever Migen leaves out may have been what was getting in my way! >> I find that strange. I understand that you have a lot of >> experience with embedded software. Therefore, you must know >> procedural software techniques very well. 
That is exactly what >> Migen leaves out. What it leaves is low-level concurrency at the >> statement level, which must be new to you. And now you suggest >> that the obstacle is exactly what you are most familiar >> with. Beats me. >> >>> I'm actually quite lazy: What is the *least* I need to know to make >>> useful digital designs *now*? >> No secrets here. The first sentence of "Why MyHDL" warns you: >> "There's a lot to learn and it will be hard work". Therefore, if >> you are intellectually lazy (not prepared to learn new things even >> when they will save you lots of time and effort later on), MyHDL >> or HDL-based design is not for you. >> >> MyHDL is for those who are lazy in the good engineering sense, >> wanting to accomplish more with less effort eventually. >> >>> I'm a beginner: Though I'd love to someday be able to design >>> circuits like Shakespeare wrote sonnets, I'd be more than happy today >>> if I were able to work at the level of "Green Eggs and Ham", a true >>> masterpiece written with an absolute minimum of linguistic >>> complexity. >> Come on, let's keep some perspective here. It's not *that* difficult >> or complex either. And there is a cookbook that shows you the way. >> >>>> When Migen claims that the "event-driven" paradigm is too general, >>>> what it really dumps is procedural support in your HDL descriptions >>>> - the interesting stuff. >>> What's "interesting" to you can be a frustrating block for a newbie. >>> I hope to one day also be very interested in those aspects of MyHDL, >>> but it seems to have little to do with what I want to get done today, >> I don't understand. Your specification seems very extensive and >> ambitious. It would seem that you have a big need for raising the >> abstraction level as high as possible, and for an easy path to >> strong verification. >> >>> which is to find the simplest path to get basic circuits out of my >>> mind and into my FPGA. 
Then use them as building-blocks in my hobby >>> projects. >> There is a broad consensus about the "building blocks" paradigm >> in hardware design. That is really not the issue. The issue is >> what the abstraction level of the building blocks should be. >> >>> I am a MyHDL fan. Unfortunately, I simply remain unable to use MyHDL >>> to accomplish my own immediate hobby needs. That does not indicate >>> any flaw in MyHDL, merely the extent of my own limitations. Do not be >>> surprised that I am interested in any tool that helps me circumvent >>> those limitations! >>> >>> I actually *like* how Migen slices and dices the process of FPGA >>> design: The parts that are left out are ones I doubt newbies like me >>> would notice, much less care about, until confronted with unusually >>> challenging designs. I suspect Sebastien would agree with much of >>> your analysis, the proper response being: "So What?" >> Suppose that I teach a class to newbies in embedded software design >> based on assembler. Would any of the newbies, except for the >> rare genius, miss the capabilities of say, C? Does this prove that >> teaching assembler was a good choice? >> >>> It's not about theoretical power or completeness: It's about barriers >>> to entry. It's not about what I can do in 5 years, but about what I >>> can do in 5 weeks. Migen is primarily about pragmatism and >>> productivity, making powerful circuits quickly and easily, and far >>> less about expert capabilities, theoretical purity or even >>> consistency. >> Again, I find this strange. I understand that you have not been >> successful with MyHDL. However, as I understand it you have not >> been successful with Migen either. So what is your defense based >> upon? Of course, we are about 5 weeks further now :-) >> >> More to the point. >> >> Barriers to entry - ok, but what is the task? 
I told you that I >> believe the main problem in HDL-based design is verification, and >> how MyHDL (unlike Migen) helps you by the fact that you >> can use the same modelling paradigm for high-level models and >> test benches as for synthesizable logic. >> >> You seemed surprised, which I found surprising in turn. Is >> it so different in software? Really, getting those gates into >> an FPGA is the easy part. The difficult part is getting >> them to work properly. >> >> You will have noticed that Mr. Bourdeauducq made an error in >> the first simple "one-liner" circuit that I presented to him, >> as if he wanted to prove my point. Of course, the reason is >> not incompetence, but simply that he did not verify his >> design. >> >> There is a pattern however. Mr. Bourdeauducq cried foul because >> I didn't accept his "simple patches". What he ignored, and >> continued to ignore despite my insistence is that they broke >> MyHDL. Perhaps Mr. Bourdeauducq considers verification a "detail". >> >> Well, I don't. Verification is the problem. The reason why I >> think the abstraction level of synthesizable logic should >> be as high as possible, is because that leaves more time >> for verification. >> >>> I seek tools that will help me do what I want to get done today, and >>> so far Migen seems like it will be most useful. Tomorrow will likely >>> require additional tools, and I *absolutely* expect (and want) MyHDL >>> to be the first of those tools. It is not an either-or proposition: I >>> want Migen *and* MyHDL. I realize MyHDL will take more time to >>> master, and I'm willing to commit that time. But I also want to >>> create something sooner, rather than later. And I believe that 100% >>> of what I learn using Migen will later prove useful with MyHDL. I >>> believe using Migen will keep me motivated toward learning MyHDL. >> Sounds good, but I think it is cheap talk. 
>> >> Most of Migen's technical choices, starting with its basic paradigm, >> are almost the opposite of MyHDL's. As a result, verification is not >> addressed, and it forces you to think at an artificially low level >> for synthesizable logic. What good can one learn from that? >> >>> Right next to me I have a Spartan 3E-500 that contains nothing of my >>> own design. That must change! >> Perhaps you are too ambitious. >> >> In your shoes, I would start as follows: >> >> * isolate a simple function out of your spec >> * try to concentrate on what it does, instead of how it should be implemented >> * write that behavior in a (clocked) MyHDL process or processes >> * also describe it in high-level python, and use that in a unit-test to verify >> * experiment with synthesis to see whether it works and the result is acceptable >> * iterate with the synthesizable description as necessary >> >> > > _______________________________________________ > myhdl-list mailing list > myh...@li... > https://lists.sourceforge.net/lists/listinfo/myhdl-list |
From: Bob C. <fl...@gm...> - 2012-04-22 18:53:18
|
Thank you, Jan, for your careful and caring response. It is clear I haven't gone far enough down my own learning path to make well-founded criticisms of HDL tools or approaches. While I continue to closely monitor this list and developments in the general world of "alternative" HDLs, a recent job change has temporarily eliminated my free time to pursue learning HDLs: My FPGA development board is gathering dust on my workbench. Still, when I finally do resume my efforts, I hope to have a better path to pursue. Rather than attempt to respond to your points individually, please permit me instead to step back and restate my goals and describe how they have evolved since encountering MyHDL and Migen. As an embedded real-time engineer specializing primarily in software, but with proven skills in system design and high-level circuit design (mainly processor and peripheral chip selection and integration), my greatest goal is to minimize the amount of new software in the products I create. Statistically, nothing adds bugs to a system faster, and delays product shipping longer, than adding a single new line of code. The buggiest code tends to occur at the lowest levels: At the functional level, the error rate for assembler is a multiple of that for C, which in turn is a multiple of that for Python. It's not just about levels of abstraction, though that is certainly a vital factor. It also concerns the availability of proven code, and how easily that code can be integrated into the current project. Part of the issue is how easy or difficult it is to create portable and reusable code on any system; the rest concerns how to make use of that code. I love projects where most of my effort is spent writing "glue" code to integrate known-good libraries to create a new application. My work as an engineer then becomes finding that good code, and selecting from among the candidates the code that best meets my needs and will be easiest to integrate. 
That said, what I love most is writing code from scratch, implementing something important that has never existed before. But that's not what gets products out the door: Professionally, I must hold my own code authoring skills and desires at bay, and use the rest of my skills to assess what's best for the system and product as a whole. It's kind of schizophrenic: I ship better systems sooner when I keep my own coding skills as a last resort, and prioritize integrating the work of others. But I always yearn for projects where I get to do something truly new and innovative. I view the FPGA as an ideal canvas for an embedded engineer, where I can work more closely with the EEs while simultaneously using my system design and integration skills to further reduce the software needed to produce a product. The first FPGA skill I'd most like to have to be productive would be the ability to write glue between the FPGA and devices in the outside world, and between IP cores within the FPGA. I became very excited when I saw that Migen included the Wishbone interface as a pre-integrated macro: There are many cores over at OpenCores that use the Wishbone interface, but getting them to talk to each other required a level of skill that was way beyond my knowledge. Why would I need to know how to implement a Wishbone interface? What I need to know is how to design and test a system that uses components integrated with standard interfaces, and those skills are readily transferred from my embedded experience, where I have had to test and integrate many hardware systems and components. I like using logic analyzers and o'scopes far more than I like using software debuggers! I suppose this isn't really an HDL issue: It's much higher than that, more of a tool-level issue with language implications. 
When I finally do start to write my own complex circuits from scratch (something I greatly look forward to, and expect to have lots of fun doing), I'll want to use a language (or languages) appropriate to the task at hand, much as I use assembler, C and Python to create embedded software. However, identifying the most appropriate language can be difficult, and may even be a needless effort if the highest-level languages can "play nice" at the lowest level. Permit me to share a case in point: Several years ago I had to develop an instrument based on a complex sensor system that used inherently noisy sensors from which we needed to extract "soft" real-time performance. The statistics alone were quite daunting, and I decided to prototype the system in Python so I could use Numpy and related libraries to quickly develop the algorithms I needed. The prototype was a success, but it had one interesting characteristic: It met our performance spec when running on the desktop, and was only a factor of 2-3 slower than our minimum required performance spec when running on our intended target system. Surprising performance for a prototype in an interpreted language. I profiled the code and found that the Python numeric and statistical libraries were wicked-fast: They were more than capable of doing all the math needed well within our timing requirements. That meant the slow performance was due to the code I had written to interface to the sensors and prepare their data for use by the fast libraries, and passing data between those libraries. I first moved as much functionality as possible into the libraries, which yielded an immediate 25% speed improvement. Next, I stumbled across Psyco (these were pre-PyPy days), and used it to more than double the performance of the non-library code I had written. That left me tantalizingly close to meeting my timing requirements, but I was still missing them by 30%. 
I had never before considered using Python to implement a near-real-time embedded system. Not only that, I also had never shipped Linux in a delivered system: I had always been driven to minimize system power, which meant small processors, which in turn meant running no code that didn't directly support the end result, which in turn meant often running with no OS, or at most an RTOS. For this system I had a much larger power envelope due to the generous maximum weight my battery-powered instrument could have. If I went with a Linux platform, I'd need to double my power budget, which meant doubling the current rating and capacity of the batteries: Switching from NiMH to Li-Ion permitted me to get the power I needed with only a modest increase in system weight. But how to obtain that last bit of performance I needed? I was already using the fastest CPU I could get that met our packaging requirements (no fan, no extensive or expensive passive heatsinks, which at the time meant a ULV Celeron). A final round of profiling indicated my interface code was spending lots of time reading and converting data: Each data element was causing individual library calls. Implementing larger buffers helped, but still didn't get me close to the time I needed: Servicing the hardware was the problem. The final speed boost came when the buffers were moved from software to hardware, so each interface access returned 1000x more data, a design change that also improved the timing between the individual sensors, which in turn simplified the statistics needed to process the data. Sorry for the length of that story, but that experience has affected all the work I've done since, and also affects my expectations when it comes to FPGA development. 
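The batching change described in that story follows a general pattern: when each hardware access carries a fixed overhead, returning larger blocks per access amortizes that overhead over many samples. A toy cost model shows the shape of the win; all names and numbers here are illustrative, not from the original instrument:

```python
# Toy cost model of hardware-side buffering: illustrative numbers only.
CALL_OVERHEAD = 1.0    # fixed cost per hardware access (arbitrary units)
PER_SAMPLE = 0.01      # incremental cost per sample transferred

def cost_per_sample(samples_per_access):
    # The fixed overhead is shared by every sample returned in the block.
    return (CALL_OVERHEAD + PER_SAMPLE * samples_per_access) / samples_per_access

print(cost_per_sample(1))     # one sample per access: overhead dominates
print(cost_per_sample(1000))  # 1000x larger blocks: overhead nearly vanishes
print(cost_per_sample(1) / cost_per_sample(1000))  # speedup from batching
```

The same reasoning suggests why moving the buffers into hardware beat merely enlarging software buffers: it reduces the number of overhead-bearing accesses instead of just reorganizing them.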
I want my FPGA development to integrate nicely into the above process, to be a powerful tool that permits me not just to engineer a fully-specified circuit into a fully-specified target, but also to experiment with integrating different components, to move solutions or parts of the problem from one domain to another, to quickly make use of the work of others, and be able to create my own when needed. The key element that must be present to facilitate such flexibility is the ability to implement common aspects of interfaces across multiple domains, to switch interfaces while preserving functionality, to be able to work with gates, busses, processors, instructions, libraries, languages and whatever else is needed to permit a given slice of functionality to be implemented and accessed in its best and most effective domain. I want everything from simple FIFOs to complex processor interfaces to be available to me in multiple domains without having to create them from scratch each time. And THAT is what initially attracted me to MyHDL: If I can use Python to move software into hardware, and interface with it from the Python domain, why, I could become a system development god! And the advent of PyPy makes that approach even more tractable. I soon learned that such power is not the goal of MyHDL: The goal of MyHDL is to use Python to make an incrementally better flavor of Verilog or VHDL, a relatively small but significant evolutionary step compared to the transformative change I was dreaming of. I desire to climb up and skip between levels of abstraction and implementation domains that are almost orthogonal to what MyHDL pursues. MyHDL may prove to be a piece of the final process, but it does not, and perhaps cannot, encompass that process on its own. 
Hence my excitement with Migen: Even in its embryonic, incomplete, and flawed initial state, it clearly seeks to bridge abstractions in ways alien to MyHDL, to make interfaces an integral part of the package and process, rather than something to code to at a low level. In this respect, MyHDL feels more like C, and Migen aims to be Python with its powerful libraries being made available to higher-level code. Again, I'm only mapping from domains I know to a domain I hope to enter and in which I hope to become proficient and productive. I'm certain my mapping has flaws and gaps owing to my lack of general HDL knowledge. But I suspect this very lack may enable me to conceive of and pursue a path that may yield a greater whole. MyHDL and Migen both seem to me to be stepping stones in a stream I wish to see crossed by a 4-lane bridge. And arguments between and about them seem like arguments about C vs. Python: It's about the solution path and the power of the process, not about any particular tool! It's about how tools interact and combine, not about how or why they differ or overlap. And if Python/PyPy doesn't evolve more quickly, it may get devoured by Julia, an interpreted/JIT language which abstracts into areas Python presently supports badly (such as multiprocessing and coroutines), with speeds PyPy has yet to attain. It may be that the concepts embodied in both MyHDL and Migen could eventually see more effective and more flexible implementations in the Julia ecosystem. Python, too, is just one tool in a procession of tools over time. -BobC On 04/22/2012 03:25 AM, Jan Decaluwe wrote: > I still want to take the time to clarify my position > on the many issues raised in this post. > > On 03/17/2012 09:20 PM, Bob Cunningham wrote: >> On 03/16/2012 02:03 PM, Jan Decaluwe wrote: >>> My conclusion is that Migen found an easy target in you, as a >>> newbie, to confuse you. It made you think it can be used for >>> serious design work. >> Sorry, Jan. 
If I have to be "confused" to play with my FPGA, then so >> be it. I'm very "serious" about being able to play with my FPGA! >> >> Your statement has an obvious implicit context: To me, you are >> shouting, "MyHDL is for Serious Designers Only! Newbies and >> Pragmatists should Go Away!" >> >> If that's what you are saying, then please be direct: Don't attack >> Migen for being what it was intentionally created to be, or for being >> something MyHDL is not intended to be. Are you upset about Migen >> existing, or that there is an audience MyHDL can't help as well as >> Migen may be able to? > I am disturbed by the suggestion that my critique of Migen is > based on anything other than a purely technical assessment. > > Let me be clear. I don't like Mr. Bourdeauducq's attitude > one bit. But do you think that would be a reason for me to > ignore any good idea that he might come up with? Of course > not. I am not a masochist. > > It is quite simple. During my career, I have found that when > you think you have seen the worst in HDL-based design, it > always gets worse. To date, Migen is the worst that I have > seen. But to understand why I am saying this, you have to > be prepared to follow my technical arguments and to > engage in technical discussions. I have made a few starts, > but I certainly was not done yet. However, I see close to > zero enthusiasm to continue such discussions. > > I am therefore frustrated by the fact that I hear all kinds > of opinions and suggestions to "merge" but that whenever things > get a little technical then the "I am a beginner" umbrella opens. > > Migen is not my problem. It will disappear in the milky mist > of HDL history, just like the many HDLs based on the same > flawed paradigm. I am addressing it simply because misleading > posts about it appear on this newsgroup. > > What I am really targeting instead is the conventional wisdom in > mainstream HDL design, which often has it all wrong. 
> >> If you'd rather beginners like myself go >> elsewhere, just say so. > MyHDL welcomes beginners. It is the first item on "Why MyHDL": > > http://myhdl.org/doku.php/why#you_are_new_to_digital_hardware_design > > In fact, I put most of my hopes on beginners, as they have not > yet been brainwashed by the conventional wisdom. > >> Remember, Migen wasn't created to be useful to beginners: It was >> created by an experienced FPGA designer concerned about practicality >> and productivity for a specific Open Source hardware project. It > I am not impressed by "arguments from experience". The conventional > wisdom that I am targeting was created by experienced designers, > mostly Verilog ones. Experience can lead to conservatism and > can become a hindrance to clear thinking. > >> simply happened that some things became a bit clearer to me after >> looking at Migen and the problems it was created to address. > Was it because of Migen or simply because you were spending more > time on the problem? > >> Whatever Migen leaves out may have been what was getting in my way! > I find that strange. I understand that you have a lot of > experience with embedded software. Therefore, you must know > procedural software techniques very well. That is exactly what > Migen leaves out. What it leaves is low-level concurrency at the > statement level, which must be new to you. And now you suggest > that the obstacle is exactly what you are most familiar > with. Beats me. > >> I'm actually quite lazy: What is the *least* I need to know to make >> useful digital designs *now*? > No secrets here. The first sentence of "Why MyHDL" warns you: > "There's a lot to learn and it will be hard work". Therefore, if > you are intellectually lazy (not prepared to learn new things even > when they will save you lots of time and effort later on), MyHDL > or HDL-based design is not for you. 
> > MyHDL is for those who are lazy in the good engineering sense, > wanting to accomplish more with less effort eventually. > >> I'm a beginner: Though I'd love to someday be able to design >> circuits like Shakespeare wrote sonnets, I'd be more than happy today >> if I were able to work at the level of "Green Eggs and Ham", a true >> masterpiece written with an absolute minimum of linguistic >> complexity. > Come on, let's keep some perspective here. It's not *that* difficult > or complex either. And there is a cookbook that shows you the way. > >>> When Migen claims that the "event-driven" paradigm is too general, >>> what it really dumps is procedural support in your HDL descriptions >>> - the interesting stuff. >> What's "interesting" to you can be a frustrating block for a newbie. >> I hope to one day also be very interested in those aspects of MyHDL, >> but it seems to have little to do with what I want to get done today, > I don't understand. Your specification seems very extensive and > ambitious. It would seem that you have a big need for raising the > abstraction level as high as possible, and for an easy path to > strong verification. > >> which is to find the simplest path to get basic circuits out of my >> mind and in to my FPGA. Then use them as building-blocks in my hobby >> projects. > There is a broad consensus about the "building blocks" paradigm > in hardware design. That is really not the issue. The issue is > what the abstraction level of the building blocks should be. > >> I am a MyHDL fan. Unfortunately, I simply remain unable to use MyHDL >> to accomplish my own immediate hobby needs. That does not indicate >> any flaw in MyHDL, merely the extent of my own limitations. Do not be >> surprised that I am interested in any tool that helps me circumvent >> those limitations! 
>> >> I actually *like* how Migen slices and dices the process of FPGA >> design: The parts that are left out are ones I doubt newbies like me >> would notice, much less care about, until confronted with unusually >> challenging designs. I suspect Sebastien would agree with much of >> your analysis, the proper response being: "So What?" > Suppose that I teach a class to newbies in embedded software design > based on assembler. Would any of the newbies, except for the > rare genius, miss the capabilities of say, C? Does this prove that > teaching assembler was a good choice? > >> It's not about theoretical power or completeness: It's about barriers >> to entry. It's not about what I can do in 5 years, but about what I >> can do in 5 weeks. Migen is primarily about pragmatism and >> productivity, making powerful circuits quickly and easily, and far >> less about expert capabilities, theoretical purity or even >> consistency. > Again, I find this strange. I understand that you have not been > successful with MyHDL. However, as I understand it you have not > been successful with Migen either. So what is your defense based > upon? Of course, we are about 5 weeks further now :-) > > More to the point. > > Barriers to entry - ok, but what is the task? I told you that I > believe the main problem in HDL-based design is verification, and > how MyHDL (unlike Migen) helps you by the fact that you > can use the same modelling paradigm for high-level models and > test benches as for synthesizable logic. > > You seemed surprised, which I found surprising in turn. Is > it so different in software? Really, getting those gates into > an FPGA is the easy part. The difficult part is getting > them to work properly. > > You will have noticed that Mr. Bourdeauducq made an error in > the first simple "one-liner" circuit that I presented to him, > as if he wanted to prove my point. Of course, the reason is > not incompetence, but simply that he did not verify his > design. 
> > There is a pattern however. Mr. Bourdeauducq cried foul because > I didn't accept his "simple patches". What he ignored, and > continued to ignore despite my insistence is that they broke > MyHDL. Perhaps Mr. Bourdeauducq considers verification a "detail". > > Well, I don't. Verification is the problem. The reason why I > think the abstraction level of synthesizable logic should > be as high as possible, is because that leaves more time > for verification. > >> I seek tools that will help me do what I want to get done today, and >> so far Migen seems like it will be most useful. Tomorrow will likely >> require additional tools, and I *absolutely* expect (and want) MyHDL >> to be the first of those tools. It is not an either-or proposition: I >> want Migen *and* MyHDL. I realize MyHDL will take more time to >> master, and I'm willing to commit that time. But I also want to >> create something sooner, rather than later. And I believe that 100% >> of what I learn using Migen will later prove useful with MyHDL. I >> believe using Migen will keep me motivated toward learning MyHDL. > Sounds good, but I think it is cheap talk. > > Most of Migen's technical choices, starting with its basic paradigm, > are almost the opposite of MyHDL's. As a result, verification is not > addressed, and it forces you to think at an artificially low level > for synthesizable logic. What good can one learn from that? > >> Right next to me I have a Spartan 3E-500 that contains nothing of my >> own design. That must change! > Perhaps you are too ambitious. 
> > In your shoes, I would start as follows: > > * isolate a simple function out of your spec > * try to concentrate on what it does, instead of how it should be implemented > * write that behavior in a (clocked) MyHDL process or processes > * also describe it in high-level python, and use that in a unit-test to verify > * experiment with synthesis to see whether it works and the result is acceptable > * iterate with the synthesizable description as necessary > > |
From: Norbo <Nor...@gm...> - 2012-04-22 13:28:49
|
Just wanted to provide a more illustrative example of the more general support of indexed constants if the array is initialized in the Verilog or VHDL code, especially of using multiple indices in the same expression:

from myhdl import *

def TOP(out1, in_data, in_addr):
    aListSig = [Signal(intbv(i)[8:]) for i in range(10)]

    @always_comb
    def comb_logic():
        out1.next = aListSig[3]*3 + aListSig[in_addr]*in_data

    return comb_logic

def test_bench():
    sig1 = Signal(intbv(0)[8:])
    in_data = Signal(intbv(0)[8:])
    in_addr = Signal(intbv(0)[8:])
    instanc_top = TOP(sig1, in_data, in_addr)
    #interval = delay(10)

    @instance
    def stimulus():
        in_data.next = 0
        in_addr.next = 0
        yield delay(1)
        print "Value1 is: ", sig1, " Value2 is: ", in_data
        in_data.next = 2
        in_addr.next = 6
        yield delay(1)
        print "Value1 is: ", sig1, " Value2 is: ", in_data
        raise StopSimulation

    return stimulus, instanc_top

sim = Simulation(test_bench())
sim.run(20)

a, b, c = [Signal(intbv(0)[8:]) for i in range(3)]
toVHDL(TOP, a, b, c)
toVerilog(TOP, a, b, c)

Of course this doesn't get mapped into RAM, but the code is synthesisable and the netlist viewer seems to show a good result. In the patch I posted previously I noticed an error I made: the Verilog code is not synthesisable if the list of signals is only used for reading, because then the toVerilog conversion declares the list of signals with the "wire" keyword (in the case above "wire [7:0] aListSig [0:10-1];"), but it is not possible to use an "initial" block with the wire keyword. So I changed it in this case to "reg" (in the case above "reg [7:0] aListSig [0:10-1];"). I just appended the new patch. Greetings, Norbo |
From: Christopher L. <loz...@fr...> - 2012-04-22 11:26:43
|
First, thank you for your excellent email two days ago, and even more so for what you wrote this morning. On 4/22/12 5:25 AM, Jan Decaluwe wrote: > I > believe the main problem in HDL-based design is verification, and > how MyHDL (unlike Migen) helps you by the fact that you > can use the same modelling paradigm for high-level models and > test benches as for synthesizable logic. > > You seemed surprised, which I found surprising in turn. Is > it so different in software? Really, getting those gates into > an FPGA is the easy part. The difficult part is getting > them to work properly. > > > Well, I don't. Verification is the problem. The reason why I > think the abstraction level of synthesizable logic should > be as high as possible, is because that leaves more time > for verification. I think this is the central point from a marketing perspective. And once you believe this, then the advantage of MyHDL is clear. Only MyHDL gives you both structural and dynamic information in the same computational model. And the latter is clearly needed for verification. On my wiki, I will be using the same Creative Commons license as on your wiki. Once that is posted on my wiki, may I go ahead and quote from your emails using that same license? -- Regards Christopher Lozinski Check out my iPhone apps TextFaster and EmailFaster http://textfaster.com Expect a paradigm shift. http://MyHDL.org |
From: Jan D. <ja...@ja...> - 2012-04-22 10:25:42
|
I still want to take the time to clarify my position on the many issues raised in this post. On 03/17/2012 09:20 PM, Bob Cunningham wrote: > On 03/16/2012 02:03 PM, Jan Decaluwe wrote: >> My conclusion is that Migen found an easy target in you, as a >> newbie, to confuse you. It made you think it can be used for >> serious design work. > > Sorry, Jan. If I have to be "confused" to play with my FPGA, then so > be it. I'm very "serious" about being able to play with my FPGA! > > Your statement has an obvious implicit context: To me, you are > shouting, "MyHDL is for Serious Designers Only! Newbies and > Pragmatists should Go Away!" > > If that's what you are saying, then please be direct: Don't attack > Migen for being what it was intentionally created to be, or for being > something MyHDL is not intended to be. Are you upset about Migen > existing, or that there is an audience MyHDL can't help as well as > Migen may be able to? I am disturbed by the suggestion that my critique of Migen is based on anything other than a purely technical assessment. Let me be clear. I don't like Mr. Bourdeauducq's attitude one bit. But do you think that would be a reason for me to ignore any good idea that he might come up with? Of course not. I am not a masochist. It is quite simple. During my career, I have found that when you think you have seen the worst in HDL-based design, it always gets worse. To date, Migen is the worst that I have seen. But to understand why I am saying this, you have to be prepared to follow my technical arguments and to engage in technical discussions. I have made a few starts, but I certainly was not done yet. However, I see close to zero enthusiasm to continue such discussions. I am therefore frustrated by the fact that I hear all kinds of opinions and suggestions to "merge" but that whenever things get a little technical then the "I am a beginner" umbrella opens. Migen is not my problem. 
It will disappear in the milky mist of HDL history, just like the many HDLs based on the same flawed paradigm. I am addressing it simply because misleading posts about it appear on this newsgroup. What I am really targeting instead is the conventional wisdom in mainstream HDL design, which often has it all wrong. > If you'd rather beginners like myself go > elsewhere, just say so. MyHDL welcomes beginners. It is the first item on "Why MyHDL": http://myhdl.org/doku.php/why#you_are_new_to_digital_hardware_design In fact, I put most of my hopes on beginners, as they have not yet been brainwashed by the conventional wisdom. > Remember, Migen wasn't created to be useful to beginners: It was > created by an experienced FPGA designer concerned about practicality > and productivity for a specific Open Source hardware project. It I am not impressed by "arguments from experience". The conventional wisdom that I am targeting was created by experienced designers, mostly Verilog ones. Experience can lead to conservatism and can become a hindrance to clear thinking. > simply happened that some things became a bit clearer to me after > looking at Migen and the problems it was created to address. Was it because of Migen or simply because you were spending more time on the problem? > Whatever Migen leaves out may have been what was getting in my way! I find that strange. I understand that you have a lot of experience with embedded software. Therefore, you must know procedural software techniques very well. That is exactly what Migen leaves out. What it leaves is low-level concurrency at the statement level, which must be new to you. And now you suggest that the obstacle is exactly what you are most familiar with. Beats me. > I'm actually quite lazy: What is the *least* I need to know to make > useful digital designs *now*? No secrets here. The first sentence of "Why MyHDL" warns you: "There's a lot to learn and it will be hard work". 
Therefore, if you are intellectually lazy (not prepared to learn new things even when they will save you lots of time and effort later on), MyHDL or HDL-based design is not for you. MyHDL is for those who are lazy in the good engineering sense, wanting to accomplish more with less effort eventually. > I'm a beginner: Though I'd love to someday be able to design > circuits like Shakespeare wrote sonnets, I'd be more than happy today > if I were able to work at the level of "Green Eggs and Ham", a true > masterpiece written with an absolute minimum of linguistic > complexity. Come on, let's keep some perspective here. It's not *that* difficult or complex either. And there is a cookbook that shows you the way. >> When Migen claims that the "event-driven" paradigm is too general, >> what it really dumps is procedural support in your HDL descriptions >> - the interesting stuff. > > What's "interesting" to you can be a frustrating block for a newbie. > I hope to one day also be very interested in those aspects of MyHDL, > but it seems to have little to do with what I want to get done today, I don't understand. Your specification seems very extensive and ambitious. It would seem that you have a big need for raising the abstraction level as high as possible, and for an easy path to strong verification. > which is to find the simplest path to get basic circuits out of my > mind and in to my FPGA. Then use them as building-blocks in my hobby > projects. There is a broad consensus about the "building blocks" paradigm in hardware design. That is really not the issue. The issue is what the abstraction level of the building blocks should be. > I am a MyHDL fan. Unfortunately, I simply remain unable to use MyHDL > to accomplish my own immediate hobby needs. That does not indicate > any flaw in MyHDL, merely the extent of my own limitations. Do not be > surprised that I am interested in any tool that helps me circumvent > those limitations! 
> > I actually *like* how Migen slices and dices the process of FPGA > design: The parts that are left out are ones I doubt newbies like me > would notice, much less care about, until confronted with unusually > challenging designs. I suspect Sebastien would agree with much of > your analysis, the proper response being: "So What?" Suppose that I teach a class to newbies in embedded software design based on assembler. Would any of the newbies, except for the rare genius, miss the capabilities of say, C? Does this prove that teaching assembler was a good choice? > It's not about theoretical power or completeness: It's about barriers > to entry. It's not about what I can do in 5 years, but about what I > can do in 5 weeks. Migen is primarily about pragmatism and > productivity, making powerful circuits quickly and easily, and far > less about expert capabilities, theoretical purity or even > consistency. Again, I find this strange. I understand that you have not been successful with MyHDL. However, as I understand it you have not been successful with Migen either. So what is your defense based upon? Of course, we are about 5 weeks further now :-) More to the point. Barriers to entry - ok, but what is the task? I told you that I believe the main problem in HDL-based design is verification, and how MyHDL (unlike Migen) helps you by the fact that you can use the same modelling paradigm for high-level models and test benches as for synthesizable logic. You seemed surprised, which I found surprising in turn. Is it so different in software? Really, getting those gates into an FPGA is the easy part. The difficult part is getting them to work properly. You will have noticed that Mr. Bourdeauducq made an error in the first simple "one-liner" circuit that I presented to him, as if he wanted to prove my point. Of course, the reason is not incompetence, but simply that he did not verify his design. There is a pattern however. Mr. 
Bourdeauducq cried foul because I didn't accept his "simple patches". What he ignored, and continued to ignore despite my insistence, is that they broke MyHDL. Perhaps Mr. Bourdeauducq considers verification a "detail". Well, I don't. Verification is the problem. The reason why I think the abstraction level of synthesizable logic should be as high as possible is that it leaves more time for verification.

> I seek tools that will help me do what I want to get done today, and
> so far Migen seems like it will be most useful. Tomorrow will likely
> require additional tools, and I *absolutely* expect (and want) MyHDL
> to be the first of those tools. It is not an either-or proposition: I
> want Migen *and* MyHDL. I realize MyHDL will take more time to
> master, and I'm willing to commit that time. But I also want to
> create something sooner, rather than later. And I believe that 100%
> of what I learn using Migen will later prove useful with MyHDL. I
> believe using Migen will keep me motivated toward learning MyHDL.

Sounds good, but I think it is cheap talk. Most of Migen's technical choices, starting with its basic paradigm, are almost the opposite of MyHDL's. As a result, verification is not addressed, and it forces you to think at an artificially low level for synthesizable logic. What good can one learn from that?

> Right next to me I have a Spartan 3E-500 that contains nothing of my
> own design. That must change!

Perhaps you are too ambitious. 
In your shoes, I would start as follows:

* isolate a simple function out of your spec
* try to concentrate on what it does, instead of how it should be implemented
* write that behavior in a (clocked) MyHDL process or processes
* also describe it in high-level Python, and use that in a unit test to verify
* experiment with synthesis to see whether it works and the result is acceptable
* iterate with the synthesizable description as necessary

--
Jan Decaluwe - Resources bvba - http://www.jandecaluwe.com
Python as a HDL: http://www.myhdl.org
VHDL development, the modern way: http://www.sigasi.com
World-class digital design: http://www.easics.com
|
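The fourth step above, verifying against a high-level Python model in a unit test, needs nothing beyond the standard library. A minimal sketch of what it can look like, using a hypothetical binary-to-Gray-code function as the behavior under test (the function and values are illustrative, not from this thread):

```python
def gray_encode(n):
    """High-level reference model: binary to Gray code."""
    return n ^ (n >> 1)

def test_gray_encode():
    codes = [gray_encode(i) for i in range(16)]
    # successive Gray codes must differ in exactly one bit
    for a, b in zip(codes, codes[1:]):
        assert bin(a ^ b).count("1") == 1
    # spot-check against known values
    assert codes[:4] == [0, 1, 3, 2]

test_gray_encode()
```

The same reference model can later drive a MyHDL test bench that compares a clocked, synthesizable process against `gray_encode` cycle by cycle.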
From: Norbo <Nor...@gm...> - 2012-04-21 15:22:46
|
> As you are aware the use cases supported by MyHDL are RAM and ROM and
> examples for each and descriptions can be found here:
>
> http://www.myhdl.org/doc/0.7/manual/conversion_examples.html#ram-inference

I am actually not particularly happy with this example, because it is asynchronous. The following statement from [page 14, http://www.altera.com/literature/hb/qts/qts_qii51007.pdf] kind of nails it, and from my point of view it is not only true for Altera devices.

"""Altera recommends using synchronous memory blocks for Altera designs. Because memory blocks in the newest devices from Altera are synchronous, RAM designs that are targeted towards architectures that contain these dedicated memory blocks must be synchronous to be mapped directly into the device architecture. For these devices, asynchronous memory logic is implemented in regular logic cells. Synchronous memory offers several advantages over asynchronous memory, including higher frequencies and thus higher memory bandwidth, increased reliability, and less standby power."""

> In the past discussions the Altera recommended guidelines for RAM and
> ROM instantiation have been referenced.
> http://www.altera.com/literature/hb/qts/qts_qii51007.pdf.
>
> From my experience tool specific pragmas or init files are used to
> pre-init RAMs in the FPGA vendor tools. The guidelines for "specifying
> initial memory contents at power-up" are described starting at section
> 11-32, which seems to suggest the same (.mif file). But it does go on to
> describe a method for initializing, using an initial block in Verilog
> and a function in VHDL. To get to an actual synthesizable approach it
> appears the initial values would not be enough?

The initial values for the array signal are enough for a synthesizable approach. The function which is used in the VHDL code is just another way to give the signals in the array their initial values in a more procedural way (for VHDL). 
If you have the procedure which gives the values in Python, you can write the initial values to the VHDL array directly.

> One of the reasons why initial values has not been implemented (it is on
> the todo list,
> http://www.myhdl.org/doku.php/dev:tasks#initial_values_suppot) is that
> it was observed that Quartus did not support initial value support in
> Verilog. There would be a mis-match between the Verilog conversion and
> VHDL conversion.

I can just say that I use Quartus v11.1 and I don't see this limitation there. But this is probably not satisfactory if someone uses an older version.

There is another open task: http://www.myhdl.org/doku.php/dev:tasks # More general support of indexed constants

If the array of signals is initialized with values (also in the VHDL and Verilog code), you can then use it as an indexed "constant" value in a more general way: this signal can be used everywhere like a normal signal, and if you never write to this signal it is just like a constant.

greetings
Norbo
|
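The power-up behavior under discussion can be modelled in plain Python, independent of MyHDL. A stdlib-only sketch, where the `SyncRAM` class is a hypothetical behavioral model (not MyHDL's API) of a pre-initialized synchronous RAM with registered, read-before-write port timing like the `read_write` process in this thread:

```python
class SyncRAM:
    """Behavioral model of a pre-initialized synchronous RAM.

    On each clock tick the old memory word is driven to dout, and a
    write (if we is set) only becomes visible to later reads, matching
    the concurrent .next assignments in the MyHDL process.
    """
    def __init__(self, content):
        self.mem = list(content)  # power-up initial values
        self.dout = 0

    def tick(self, addr, din, we):
        self.dout = self.mem[addr]  # registered read of the old value
        if we:
            self.mem[addr] = din    # write takes effect afterwards
        return self.dout

ram = SyncRAM([2, 121, 43, 8, 32])
assert ram.tick(1, 0, we=False) == 121  # initial contents visible at once
ram.tick(1, 99, we=True)                # write a new value
assert ram.tick(1, 0, we=False) == 99
```

A model like this can serve as the golden reference in a unit test for the synthesizable description, which is exactly why the generated code must carry the same initial values as the simulation.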
From: Christopher F. <chr...@gm...> - 2012-04-20 16:23:16
|
As you are aware, the use cases supported by MyHDL are RAM and ROM; examples for each and descriptions can be found here:

http://www.myhdl.org/doc/0.7/manual/conversion_examples.html#ram-inference
http://www.myhdl.org/doc/0.7/manual/conversion_examples.html#rom-inference

What you are looking for is pre-initialized RAM, which is most notably used in FPGA designs. If you are only looking for ROM, using a tuple of ints should get you what you want. It will load the correct values. Your example below using the ROM template would convert to what you expect.

# MyHDL description

from myhdl import *

def TOP(clk, out1, out2):
    rom = tuple([ii for ii in range(10)])

    @always(clk.posedge)
    def rom_logic():
        out1.next = rom[3]
        out2.next = rom[9]

    return rom_logic

-- Converted VHDL

TOP_ROM_LOGIC: process (clk) is
begin
    if rising_edge(clk) then
        case 3 is
            when 0 => out1 <= "00000000";
            when 1 => out1 <= "00000001";
            when 2 => out1 <= "00000010";
            when 3 => out1 <= "00000011";
            when 4 => out1 <= "00000100";
            when 5 => out1 <= "00000101";
            when 6 => out1 <= "00000110";
            when 7 => out1 <= "00000111";
            when 8 => out1 <= "00001000";
            when others => out1 <= "00001001";
        end case;
        case 9 is
            when 0 => out2 <= "00000000";
            when 1 => out2 <= "00000001";
            when 2 => out2 <= "00000010";
            when 3 => out2 <= "00000011";
            when 4 => out2 <= "00000100";
            when 5 => out2 <= "00000101";
            when 6 => out2 <= "00000110";
            when 7 => out2 <= "00000111";
            when 8 => out2 <= "00001000";
            when others => out2 <= "00001001";
        end case;
    end if;
end process TOP_ROM_LOGIC;

In the past discussions the Altera recommended guidelines for RAM and ROM instantiation have been referenced: http://www.altera.com/literature/hb/qts/qts_qii51007.pdf.

From my experience, tool-specific pragmas or init files are used to pre-init RAMs in the FPGA vendor tools. The guidelines for "specifying initial memory contents at power-up" are described starting at section 11-32, which seems to suggest the same (.mif file). 
But it does go on to describe a method for initializing, using an initial block in Verilog and a function in VHDL. To get to an actual synthesizable approach, it appears the initial values would not be enough?

One of the reasons why initial values have not been implemented (it is on the todo list, http://www.myhdl.org/doku.php/dev:tasks#initial_values_suppot) is that it was observed that Quartus did not support initial values in Verilog. There would be a mismatch between the Verilog conversion and the VHDL conversion.

It is not clear to me at this point what the best path forward is. I believe adding initial value support is possible; it just needs to be tested with a bunch of tools, which is doable. If your actual goal is synthesizable pre-initialized RAM, this path might not get you there.

Regards,
Chris
|
From: Norbo <Nor...@gm...> - 2012-04-20 13:16:54
|
I considered the following example:

from myhdl import *

def TOP(out1, out2):
    aListSig = [Signal(intbv(i)[8:]) for i in range(10)]

    @always_comb
    def comb_logic():
        out1.next = aListSig[3]
        out2.next = aListSig[9]

    return comb_logic


def test_bench():
    sig1 = Signal(intbv(0)[8:])
    sig2 = Signal(intbv(0)[8:])
    instanc_top = TOP(sig1, sig2)

    interval = delay(10)

    @always(interval)
    def stimulus():
        print "Value1 is: ", sig1, " Value2 is: ", sig2

    return stimulus, instanc_top


sim = Simulation(test_bench())
sim.run(10)
toVHDL(TOP, Signal(intbv(0)[8:]), Signal(intbv(0)[8:]))

As expected, the output of the simulation is:

>> Value1 is: 3 Value2 is: 9

The generated VHDL code, however, is:

library IEEE;
use IEEE.std_logic_1164.all;
use IEEE.numeric_std.all;
use std.textio.all;

use work.pck_myhdl_07.all;

entity TOP is
    port (
        out1: out unsigned(7 downto 0);
        out2: out unsigned(7 downto 0)
    );
end entity TOP;

architecture MyHDL of TOP is

type t_array_aListSig is array(0 to 10-1) of unsigned(7 downto 0);
signal aListSig: t_array_aListSig;

begin

out1 <= aListSig(3);
out2 <= aListSig(9);

end architecture MyHDL;

This code obviously would not show these values at the output ports. If separate signals are used in the MyHDL code instead of the list of signals, then the MyHDL code would look like this:

def TOP(out1, out2):
    #aListSig = [Signal(intbv(i)[8:]) for i in range(10)]
    aSig1 = Signal(intbv(3)[8:])
    aSig2 = Signal(intbv(9)[8:])

    @always_comb
    def comb_logic():
        out1.next = aSig1  #aListSig[3]
        out2.next = aSig2  #aListSig[9]

    return comb_logic

Then the generated VHDL code looks like this:

architecture MyHDL of TOP is

signal aSig1: unsigned(7 downto 0);
signal aSig2: unsigned(7 downto 0);

begin

aSig1 <= to_unsigned(3, 8);
aSig2 <= to_unsigned(9, 8);

out1 <= aSig1;
out2 <= aSig2;

end architecture MyHDL;

In this example the values (3 and 9) are put on the output ports. Do you see what I am pointing at? 
Now consider the case of a synchronous RAM with pre-initialized values which you want to infer; there the memory list needs to have initial values.

from myhdl import *

def sync_RAM(dout, din, addr, we, clk, CONTENT=None):
    """sync RAM model"""
    mem = [Signal(intbv(CONTENT[i], min=dout.min, max=dout.max))
           for i in range(len(CONTENT))]

    @always(clk.posedge)
    def read_write():
        if we:
            mem[addr].next = din
        dout.next = mem[addr]

    return read_write

def TESTBENCH_XX():
    we = Signal(bool())
    clk = Signal(bool())
    MemoryContent = [2, 121, 43, 8, 32] + range(3, 10)
    addr = Signal(intbv(0, min=0, max=len(MemoryContent)))
    din = Signal(intbv(min(MemoryContent), min=min(MemoryContent),
                       max=max(MemoryContent)+1))
    dout = Signal(intbv(min(MemoryContent), min=min(MemoryContent),
                        max=max(MemoryContent)+1))

    toVHDL(sync_RAM, dout, din, addr, we, clk, MemoryContent)
    toVerilog(sync_RAM, dout, din, addr, we, clk, MemoryContent)
    sync_RAM_inst = sync_RAM(dout, din, addr, we, clk, CONTENT=MemoryContent)

    @always(delay(10))
    def clkgen():
        clk.next = not clk

    @instance
    def stimulus():
        for i, data in enumerate(MemoryContent):
            addr.next = i
            yield clk.negedge
            print "Data is:", dout

        raise StopSimulation

    return sync_RAM_inst, clkgen, stimulus

sim = Simulation(TESTBENCH_XX())
sim.run()

This is basically very similar to the examples above, where the simulation has the initial values but the generated code doesn't have them and therefore behaves differently from the simulation. Another thing is that you probably don't want every list of signals which you declare in MyHDL to be initialized in the generated code. 
For that I used the "None" keyword of Python, e.g.:

mem = [Signal(intbv(None)[8:]) for i in range(255)]

So I basically changed the MyHDL sources so that lists of Signals get initialized in the generated code, so that the above synchronous RAM description gets converted to the following VHDL code:

library IEEE;
use IEEE.std_logic_1164.all;
use IEEE.numeric_std.all;
use std.textio.all;

use work.pck_myhdl_07.all;

entity sync_RAM is
    port (
        dout: out unsigned(6 downto 0);
        din: in unsigned(6 downto 0);
        addr: in unsigned(3 downto 0);
        we: in std_logic;
        clk: in std_logic
    );
end entity sync_RAM;
-- Ram model

architecture MyHDL of sync_RAM is

type t_array_mem is array(0 to 12-1) of unsigned(6 downto 0);
signal mem: t_array_mem := (0=>"0000010",
                            1=>"1111001",
                            2=>"0101011",
                            3=>"0001000",
                            4=>"0100000",
                            5=>"0000011",
                            6=>"0000100",
                            7=>"0000101",
                            8=>"0000110",
                            9=>"0000111",
                            10=>"0001000",
                            11=>"0001001");

begin

SYNC_RAM_READ_WRITE: process (clk) is
begin
    if rising_edge(clk) then
        if to_boolean(we) then
            mem(to_integer(addr)) <= din;
        end if;
        dout <= mem(to_integer(addr));
    end if;
end process SYNC_RAM_READ_WRITE;

end architecture MyHDL;

And the generated Verilog code looks like this:

`timescale 1ns/10ps

module sync_RAM (
    dout,
    din,
    addr,
    we,
    clk
);
// Ram model

output [6:0] dout;
reg [6:0] dout;
input [6:0] din;
input [3:0] addr;
input we;
input clk;

reg [6:0] mem [0:12-1];

initial
begin : INIT_mem
    mem[0] = 2;
    mem[1] = 121;
    mem[2] = 43;
    mem[3] = 8;
    mem[4] = 32;
    mem[5] = 3;
    mem[6] = 4;
    mem[7] = 5;
    mem[8] = 6;
    mem[9] = 7;
    mem[10] = 8;
    mem[11] = 9;
end

always @(posedge clk) begin: SYNC_RAM_READ_WRITE
    if (we) begin
        mem[addr] <= din;
    end
    dout <= mem[addr];
end

endmodule

I synthesised this synchronous RAM description in Synplify and Altera Quartus, in VHDL and Verilog, and at first view they both create the RAM with initial values successfully.

If I change the line:

mem = [Signal(intbv(CONTENT[i], min=dout.min, max=dout.max)) for i in range(len(CONTENT))]

to:

mem = [Signal(intbv(None, min=dout.min, max=dout.max)) for i 
in range(len(CONTENT))]

then the code gets converted as normal. In order to make the bounds checks of the intbv succeed, I set the initial value of the intbv to the lower bound value (min). If this value is not given, I set it to zero.

What do you think about this? Have I overlooked something important? Any other suggestions? (If you want to try this, the patch file should be appended.)

greetings
Norbo
|
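The conversion change described above has to render the Python initial values as a VHDL array aggregate of bit-string literals. A stdlib-only sketch of that rendering step, in the style of the generated code in this thread (the `vhdl_init_aggregate` helper is hypothetical, not part of the actual patch):

```python
def vhdl_init_aggregate(content, width):
    """Render a list of ints as a VHDL array aggregate of bit-string
    literals, e.g. (0=>"0000010", 1=>"1111001", ...)."""
    entries = ['%d=>"%s"' % (i, format(v, '0%db' % width))
               for i, v in enumerate(content)]
    return "(" + ",\n ".join(entries) + ")"

# The first three words of the thread's memory contents, 7 bits wide:
print(vhdl_init_aggregate([2, 121, 43], 7))
```

Running this prints an aggregate whose first entries match the generated VHDL shown above (`0=>"0000010"`, `1=>"1111001"`, `2=>"0101011"`).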
From: Jan D. <ja...@ja...> - 2012-04-20 08:11:12
|
On 04/20/2012 12:33 AM, Christopher Lozinski wrote:
> Here is a web page describing why one should use MyHDL.
>
> http://ejr0.x.rootbsd.net:8080
>
> In the long run, it will be moved to
>
> wiki.myhdlclass.com
>
> It is in moinmoin, implemented in Python.
>
> I once made a mistake, and so am not allowed to edit the MyHDL wiki.
> Jan politely suggested I put up a new page. Great idea.
>
> Anyhow you are all welcome to make edits to the page or add new pages.
> Let me know if you have any troubles.
>
> Why am I doing this? I had thought of offering a MyHDL class. But the
> first thing is to get the high level arguments right. A class is
> transient, the documentation lives longer.
>
> Your comments are most appreciated.

There are a number of errors and inaccuracies. Verilog is also based on the event-driven paradigm, as is VHDL. This is what MyHDL, Verilog and VHDL have in common, unlike most HDLs that have been proposed (and forgotten) historically. It is the winning solution.

To put things in perspective: my critique of Verilog/VHDL is incremental, not fundamental. With all three HDLs, it is possible to do serious HDL-based design. Some language features in Verilog/VHDL just make it harder (or much harder) than necessary. But not impossible, unlike with the HDLs based on an inferior paradigm. The blocking/nonblocking issues in Verilog are an example: they make it harder for some designers to use procedural techniques, as there is no strict separation between variables and signals.

The fact that VHDL designers need a lot of casting is not because it is statically typed, but simply because it uses an inadequate type system (too low level). Handle integers like MyHDL does, and a lot of the issues go away.

Migen is definitely *not* based on MyHDL; it's not because someone says that they will fork that they do :-) In fact, it tries very hard to be the opposite. It dumps the event-driven paradigm. It has no support for procedural modelling. 
It makes no difference between variable and signal semantics, even less so than Verilog. And instead of following MyHDL's unique example of how integers should be done in an HDL, it re-introduces Verilog's broken way of handling unsigneds and signeds. Migen ignores most of the lessons that MyHDL has to offer. That is their right, of course - but it shows that I was right in not spending too much time and effort on the requests of its developer: he did not like (or understand) MyHDL fundamentally, as I suspected right from the start.

One more issue. What you keep silent about, surprisingly, is the licensing scheme on your web pages. Before requesting edits, I think you owe it to all potential editors to tell them what will happen with their work. Please, let us avoid the confusion and frustration that happened the last time around. I am not asking for a discussion about the pros and cons of various licensing schemes - it is an endless discussion with no single good answer. Just tell us what you use so that every potential editor can decide.

Finally, even if you had editing rights on myhdl.org, I'm not sure you would agree with its Terms of Use, which are crystal-clear by now: http://www.myhdl.org/doku.php/terms_of_use

--
Jan Decaluwe - Resources bvba - http://www.jandecaluwe.com
Python as a HDL: http://www.myhdl.org
VHDL development, the modern way: http://www.sigasi.com
World-class digital design: http://www.easics.com
|
From: Christopher L. <loz...@fr...> - 2012-04-19 22:33:39
|
Here is a web page describing why one should use MyHDL. http://ejr0.x.rootbsd.net:8080 In the long run, it will be moved to wiki.myhdlclass.com It is in moinmoin, implemented in Python. I once made a mistake, and so am not allowed to edit the MyHDL wiki. Jan politely suggested I put up a new page. Great idea. Anyhow you are all welcome to make edits to the page or add new pages. Let me know if you have any troubles. Why am I doing this? I had thought of offering a MyHDL class. But the first thing is to get the high level arguments right. A class is transient, the documentation lives longer. Your comments are most appreciated. -- Regards Christopher Lozinski |
From: Christopher F. <chr...@gm...> - 2012-04-19 15:15:37
|
I think there could be a long and interesting conversation about whether this visual "delay" should be used or not. 
In both Verilog and VHDL the default behavior is no delay: you have to explicitly state the delays in the assignments. MyHDL is unique in that the MyHDL package is both the HDL (the syntax and compiler to describe hardware behavior) and the simulator.

In my mind a better proposal might be additional arguments/methods to provide information to the simulator, e.g. what the *precision* is in absolute time and whether a _global_ propagation delay should be used. This would allow the HDL description to remain agnostic of absolute time, while building in prop-delays that don't model anything other than a visual cue.

Regards,
Chris
|
From: Oscar D. D. <osc...@gm...> - 2012-04-19 13:46:06
|
On Thu, 19 Apr 2012 08:55:54 +0200, Thomas Heller <th...@ct...> wrote:
> MyHDL uses 'steps' as time unit. When it writes VCD files with
> traceSignals, one time unit is defined as 1ns.

Like you said, a step is the time unit; in practice you can implicitly assign any time length to that step. The important thing is that a step is the smallest time interval that can be modelled in the simulation. For example, I can define a clock source with delay() calls, with a period of 10 steps and a 50% duty cycle:

@instance
def clk_gen():
    while True:
        clk.next = 0
        yield delay(5)
        clk.next = 1
        yield delay(5)

I think the problem is that there isn't an explicit time unit for a MyHDL step. In the previous example, I can say the clock frequency is 100MHz, assuming a time step of 1ns. Or 100GHz, assuming 1ps as the time step. I think it's a good idea to be able to explicitly define the time length of a simulation step, but not as a required feature. This feature can be useful for asserts that require a time interval check, but it isn't useful for RTL design. Also, note that explicit time intervals are not supported for synthesis; in practice they are used only for testbenches.

Finally, the code that generates VCD files has the time unit fixed at 1ns. Maybe a small patch can be written to allow custom definition of this time unit.

> Considering the speed of todays logic, this is very coarse.
>
> I would find it convenient to be able to specify sub-nanosecond
> delays as floats, and to make the timescale written into the
> vcd files configurable.

Personally I see float delays as excessively complex and useless for event-based simulation. A better solution is to convert those float delays to integer multiples of the time step.

> Would this be an idea for MyHDL? Opinions?
>
> Thanks,
> Thomas

--
Oscar Díaz
Key Fingerprint = 904B 306C C3C2 7487 650B BFAC EDA2 B702 90E9 9964
gpg --keyserver subkeys.pgp.net --recv-keys 90E99964

I recommend using OpenDocument Format for daily use and exchange of documents. http://www.fsf.org/campaigns/opendocument
|
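The suggestion above, mapping float delays onto integer multiples of the time step, could look like the following stdlib-only sketch. The `to_steps` helper and its defaults are hypothetical; the default step of 1000 ps matches the fixed 1 ns VCD unit mentioned in the thread:

```python
def to_steps(delay_ns, step_ps=1000):
    """Convert a delay given in nanoseconds to a whole number of
    simulator steps, where one step lasts step_ps picoseconds.

    With a 1 ps step, 0.5 ns becomes 500 steps; with the default
    1 ns step, delays are rounded to the nearest whole step.
    """
    steps = int(round(delay_ns * 1000.0 / step_ps))
    if steps <= 0:
        raise ValueError("delay is below the simulator resolution")
    return steps

assert to_steps(0.5, step_ps=1) == 500  # 0.5 ns at a 1 ps step
assert to_steps(2.0) == 2               # 2 ns at the default 1 ns step
```

Rejecting delays below the resolution (rather than silently rounding them to zero) keeps the event queue from scheduling two events at the same step that the user intended to be ordered.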
From: Thomas H. <th...@ct...> - 2012-04-19 12:37:18
|
On 19.04.2012 10:22, Ben wrote:
> On Thu, Apr 19, 2012 at 09:10, Thomas Heller<th...@ct...> wrote:
>> Please consider a standard counter, something like this:
>>
>> def Counter(clock, count):
>>     @always(clock.posedge)
>>     def logic():
>>         count.next = count + 1
>>     return logic
>>
>> When this counter is simulated, count changes a delta cycle after
>> the positive clock edge. When traceSignals is used to write a VCD file,
>> gtkwave shows that the count changes exactly at the same time as the
>> clock edge. This is confusing (at least in more complicated cases) in
>> the timing diagram.
>
> The solution I found that best suited my need for this feature was
> to alter the following line:
>
> http://hg.myhdl.org/cgi-bin/hgwebdir.cgi/myhdl/file/7a860a7fb408/myhdl/_Signal.py#l86
>
> and change the default value of delay from None to 1.
>
> The reasoning behind that is that you almost never want this setting
> to be set per Signal; you want it project-wide or you're gonna be
> stuck in inconsistency loops ... Its only consequence is to make the
> VCD easier to debug. The generated code will look exactly the same.

Since I'm using factory functions to create my signals I can get a similar effect by changing their 'delay=' default value. Except if MyHDL decides to create signals behind my back ;-).

> The drawback of this method is that you have to set the duration of
> your clock(s) accordingly so that your system is stable before the
> next clock rises.

Yes, it would be nice to be able to specify delay values smaller than 1ns; see my other post about the timescale. Probably I should patch myhdl.traceSignals to report a timescale of 1ps to the vcd file, and get used to specifying delay values in ps.

Thanks,
Thomas
|
From: Ben <ben...@gm...> - 2012-04-19 08:23:24
|
On Thu, Apr 19, 2012 at 09:10, Thomas Heller <th...@ct...> wrote:
> Please consider a standard counter, something like this:
>
> def Counter(clock, count):
>     @always(clock.posedge)
>     def logic():
>         count.next = count + 1
>     return logic
>
> When this counter is simulated, count changes a delta cycle after
> the positive clock edge. When traceSignals is used to write a VCD file,
> gtkwave shows that the count changes exactly at the same time as the
> clock edge. This is confusing (at least in more complicated cases) in
> the timing diagram.

The solution I found that best suited my need for this feature was to
alter the following line:

http://hg.myhdl.org/cgi-bin/hgwebdir.cgi/myhdl/file/7a860a7fb408/myhdl/_Signal.py#l86

and change the default value of delay from None to 1.

The reasoning behind that is that you almost never want this setting
to be set per Signal; you want it project-wide or you're going to be
stuck in inconsistency loops. Its only consequence is to make the VCD
easier to debug. The generated code will look exactly the same.

The drawback of this method is that you have to set the duration of
your clock(s) accordingly so that your system is stable before the
next clock rises.

Regards,
Benoît.

> Of course, MyHDL allows simulating a propagation delay for the count
> signal by specifying a delay parameter in the creation of the count
> signal:
>
> count = Signal(intbv(0)[16:], delay=1)
>
> Typically, the signal instantiation takes place in the testbench, not
> in the instance. I would like to define the prop-delay in the instance;
> I found two ways of doing that.
> First:
>
> def Counter(clock, count):
>     @instance
>     def logic():
>         while 1:
>             yield clock.posedge
>             yield delay(1)
>             count.next = count + 1
>     return logic
>
> Second (with an appropriate definition of the clone_signal() function):
>
> def Counter(clock, count):
>     count_internal = clone_signal(count, delay=1)
>     @always(clock.posedge)
>     def logic():
>         count_internal.next = count_internal + 1
>     @always_comb
>     def output():
>         count.next = count_internal
>     return logic, output
>
> I guess (haven't tried) that the first one would create
> non-synthesizable VHDL, but the second one should be fine.
>
> Thomas
From: Thomas H. <th...@ct...> - 2012-04-19 07:11:24
Please consider a standard counter, something like this:

def Counter(clock, count):
    @always(clock.posedge)
    def logic():
        count.next = count + 1
    return logic

When this counter is simulated, count changes a delta cycle after the
positive clock edge. When traceSignals is used to write a VCD file,
gtkwave shows that the count changes exactly at the same time as the
clock edge. This is confusing (at least in more complicated cases) in
the timing diagram.

Of course, MyHDL allows simulating a propagation delay for the count
signal by specifying a delay parameter in the creation of the count
signal:

count = Signal(intbv(0)[16:], delay=1)

Typically, the signal instantiation takes place in the testbench, not
in the instance. I would like to define the prop-delay in the instance;
I found two ways of doing that.

First:

def Counter(clock, count):
    @instance
    def logic():
        while 1:
            yield clock.posedge
            yield delay(1)
            count.next = count + 1
    return logic

Second (with an appropriate definition of the clone_signal() function):

def Counter(clock, count):
    count_internal = clone_signal(count, delay=1)
    @always(clock.posedge)
    def logic():
        count_internal.next = count_internal + 1
    @always_comb
    def output():
        count.next = count_internal
    return logic, output

I guess (haven't tried) that the first one would create
non-synthesizable VHDL, but the second one should be fine.

Thomas
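[Editorial sketch] The `clone_signal()` helper is referenced in the message but never defined. A minimal sketch of what it might look like follows; the `FakeSignal` class is a hypothetical stand-in so the example runs without MyHDL, where the real version would presumably call `myhdl.Signal(sig.val, delay=delay)`:

```python
import copy

# Hypothetical stand-in for myhdl.Signal: just an initial value and an
# optional propagation delay, enough to show the cloning idea.
class FakeSignal:
    def __init__(self, val, delay=None):
        self.val = val
        self.delay = delay

def clone_signal(sig, delay=None):
    # Build a new signal with the same initial value as the original,
    # but with its own (possibly non-zero) propagation delay.
    return FakeSignal(copy.copy(sig.val), delay=delay)
```

With this, `count_internal = clone_signal(count, delay=1)` gives an internal signal that mirrors `count`'s initial value but carries the one-step delay used for tracing.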
From: Thomas H. <th...@ct...> - 2012-04-19 06:56:18
MyHDL uses 'steps' as its time unit. When it writes VCD files with
traceSignals, one time unit is defined as 1 ns. Considering the speed
of today's logic, this is very coarse. I would find it convenient to
be able to specify sub-nanosecond delays as floats, and to make the
timescale written into the VCD files configurable.

Would this be an idea for MyHDL? Opinions?

Thanks,
Thomas
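[Editorial sketch] Until the timescale is configurable inside MyHDL, one workaround is to post-process the generated VCD text; the `$timescale ... $end` directive is part of the standard VCD header format. The function name below is illustrative:

```python
import re

def set_vcd_timescale(vcd_text, scale="1ps"):
    # Rewrite the first $timescale ... $end directive in the VCD header.
    # MyHDL writes "1ns" by default; after this, a viewer like gtkwave
    # interprets every simulation step as `scale` instead.
    return re.sub(r"\$timescale[^$]*\$end",
                  "$timescale %s $end" % scale,
                  vcd_text, count=1)
```

A simulation run with delays expressed in ps then displays correctly once the traced file is passed through this filter.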
From: Christopher L. <loz...@fr...> - 2012-04-18 17:35:50
There is another Python language for chip design: Python Chips.

http://dawsonjon.github.com/chips/introduction/index.html#a-new-approach-to-device-design

Does anyone have any opinion on this library? Most importantly, how
does it compare to MyHDL? When does one choose Chips, and when does
one choose MyHDL?

-- 
Regards
Christopher Lozinski
Check out my iPhone apps TextFaster and EmailFaster
http://textfaster.com
Expect a paradigm shift.
http://MyHDL.org
From: Jan C. <jan...@mu...> - 2012-04-10 18:02:08
On 10/04/12 13:41, Jan Decaluwe wrote:
> When I returned from vacation, I noticed that myhdl.org
> was down. ...

Thanks, and it's very good to hear from you.

Jan Coombs
From: Jan D. <ja...@ja...> - 2012-04-10 12:42:20
When I returned from vacation, I noticed that myhdl.org was down.
This was due to a server hardware upgrade by the provider, requiring
a manual update of the IP address for DNS.

Things should be back to normal now.

-- 
Jan Decaluwe - Resources bvba - http://www.jandecaluwe.com
Python as a HDL: http://www.myhdl.org
VHDL development, the modern way: http://www.sigasi.com
World-class digital design: http://www.easics.com
From: Frederik T. <sp...@ne...> - 2012-03-30 07:29:52
Hello Norbo,

thank you for your email.

> So I am not quite sure what this (attribute romstyle : string;
> attribute romstyle of q : signal is "M9K";) does?

I think this tells Quartus to build a ROM with M9K memory blocks.
The device must support the TriMatrix Memory architecture.

http://quartushelp.altera.com/11.1/mergedProjects/hdl/vhdl/vhdl_file_dir_romstyle.htm.htm

The synchronous ROM doesn't need this extra attribute because Quartus
automatically builds a ROM with memory blocks if "Auto ROM Replacement"
is activated. This is the default setting.

http://quartushelp.altera.com/11.1/mergedProjects/logicops/logicops/def_auto_rom_recognition.htm

Kind regards from Germany,
Frederik
From: John S. <jo...@sa...> - 2012-03-29 21:25:11
Christopher Felton <chris.felton <at> gmail.com> writes:
>
> Thanks for the post John,
>
> Not sure if you simply wanted to share some changes that you have made
> or if you want to provide a patch for a future release. If the latter
> is the case you will want to review the following:
>
> http://www.myhdl.org/doku.php/dev:patches
>
> For this task it sounds like you tested the main open item, Quartus
> support of initialized values. To close this issue the following
> would need to be completed:
>
> * VHDL initial value and Verilog.
> * testing other synthesis tools (others can probably assist).
> * test cases to include in the unit testing.
>
> Regards,
> Chris

At the moment I just want to share the patch. I'm not sure if it
really covers all the issues regarding initial blocks, even just for
Verilog. I'm certainly not competent to do any of the VHDL-related
stuff, having only really just started with Verilog. I could perhaps
pursue a proper resolution with some help, but I won't be able to
start that until June or July at the earliest.

regards,
John