From: Wesley J. L. <wj...@ic...> - 2012-11-03 01:12:54
On Tuesday, October 30, 2012 17:40:16 Wesley J. Landaker wrote:
> In this draft I am explicitly *NOT* trying to define, yet, how things are
> encoded or exactly what format or order things are in. Instead, the goal
> is to first identify what absolutely needs to be in our PHDLIF files,
> and what we might want to support in the future.
>
> I believe if you take a look at this document you will see that I've
> tried to capture this in a very simple and straightforward manner. I
> believe that everything that everyone has mentioned so far can be
> elegantly supported without needing to complicate anything further than
> necessary.

To support this, I've also just pushed a header file into Git that shows a simple C++ API to read and write PHDLIF (there is no implementation yet; we first want to make sure the interface looks nice). Although the language is C++, the important point is what is contained in the PHDLIF data model itself. (As for this specific interface, other languages could use it directly -- it's easy to call C++ code from any language -- or could make their own simple implementation.)

If anyone has any comments about this or the PHDLIF requirements I wrote up previously, let's hear them! =)

In a few days I plan to write up a full specification and an accompanying working libphdlif reference implementation.
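The header itself is not reproduced in the archive. Purely as an illustration of what a minimal read/write interface for such a format could look like (every name here -- the phdlif namespace, Reader, Writer, Design -- is hypothetical, not the actual API pushed to Git), a sketch might be:

    // Hypothetical sketch only -- not the actual header from the PHDL Git repository.
    #include <iosfwd>
    #include <memory>

    namespace phdlif {

    // The PHDLIF data model (instances, nets, pins, attributes) as outlined in the
    // requirements draft; its layout is deliberately left open here.
    class Design;

    class Reader {
    public:
        // Parse a PHDLIF document from the given stream.
        // Could throw std::runtime_error (or similar) if the document is malformed.
        static std::unique_ptr<Design> read(std::istream& in);
    };

    class Writer {
    public:
        // Serialize a design back out as PHDLIF, whatever encoding is eventually chosen.
        static void write(std::ostream& out, const Design& design);
    };

    } // namespace phdlif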
From: Wesley J. L. <wj...@ic...> - 2012-11-01 02:09:09
On 10/31/2012 01:10 AM, peter dudley wrote:
> Are you starting to think about compiler verification? It seems like the
> IF is a logical place to check correctness.

(I'm CCing the list since I think this is probably interesting for everybody.)

By compiler verification do you mean verifying that a design is correct after compiling, or that the compiler itself is correct?

For the former, you could imagine that any number of different types of semantic checks and even design-specific passes could be run over the PHDLIF. This would include a lot of what the monolithic PHDL compiler does right now, but since PHDLIF is easy to work with, it could include lots and lots of other things that we dream up later.

For the latter, PHDL->PHDLIF is a great way to have both unit tests and acceptance/regression tests for the compiler core, since we can have reference PHDL designs and reference PHDLIF designs (made or verified by hand) and then have them stand as automated checks for future versions. We can also use PHDLIF to help verify backends by the same method, e.g. PHDLIF->PADS, etc. This lets you exhaustively exercise a backend just by using PHDLIF, without requiring a bunch of tests of PHDL features for each backend.

Is that the kind of stuff you're thinking of?
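As a concrete (and entirely hypothetical) sketch of the acceptance/regression idea: compile a reference PHDL design and compare the result against a hand-verified golden PHDLIF file. The compiler command name ("phdlc") and the file names below are invented for illustration, and a byte-for-byte compare additionally assumes the writer emits a canonical ordering.

    // Sketch of a PHDL -> PHDLIF regression check. "phdlc" and the file names are
    // assumptions for illustration, not the real tool or repository layout.
    #include <cstdlib>
    #include <fstream>
    #include <iostream>
    #include <iterator>
    #include <string>

    static std::string slurp(const char* path) {
        std::ifstream in(path);
        return std::string(std::istreambuf_iterator<char>(in),
                           std::istreambuf_iterator<char>());
    }

    int main() {
        // Compile the reference PHDL source into PHDLIF.
        if (std::system("phdlc --output current.phdlif reference_design.phdl") != 0) {
            std::cerr << "regression: compile failed\n";
            return 1;
        }
        // Compare against the hand-verified golden output. A structural compare
        // would be more robust than a textual one, but this shows the idea.
        if (slurp("current.phdlif") != slurp("reference_design.golden.phdlif")) {
            std::cerr << "regression: output differs from golden PHDLIF\n";
            return 1;
        }
        std::cout << "regression: OK\n";
        return 0;
    }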
From: Wesley J. L. <wj...@ic...> - 2012-10-30 23:40:32
On Tuesday, October 30, 2012 16:48:52 Wesley J. Landaker wrote:
> I'm drafting the start of a PHDLIF spec (focusing first on requirements)
> that I will post soon.

I've just pushed the start of a draft PHDL Intermediate Format spec into the Git repository under "/doc/PHDL_Intermediate_Format.txt". The file is in asciidoc format, which means it is just a text file, but it has a little markup that makes it look nicer if you run it through asciidoc first to get HTML (i.e. by running "make").

In this draft I am explicitly *NOT* trying to define, yet, how things are encoded or exactly what format or order things are in. Instead, the goal is to first identify what absolutely needs to be in our PHDLIF files, and what we might want to support in the future.

I believe if you take a look at this document you will see that I've tried to capture this in a very simple and straightforward manner. I believe that everything that everyone has mentioned so far can be elegantly supported without needing to complicate anything further than necessary.

Once we are all feeling like we aren't being painted into a corner, and have settled on a basic technology (e.g. XML, JSON, whatever), I will go ahead and draft the technical portion and we can give this another round of review.

Comments, discussion and/or patches are welcome!
From: Wesley J. L. <wj...@ic...> - 2012-10-30 22:49:06
Guys,

Thanks for all the feedback. I'm drafting the start of a PHDLIF spec (focusing first on requirements) that I will post soon.
From: Wesley J. L. <wj...@ic...> - 2012-10-30 22:17:40
On Tuesday, October 30, 2012 16:07:08 Wesley J. Landaker wrote:
> On Tuesday, October 30, 2012 14:12:29 Peter Dudley wrote:
> > The mailing list still seems to be "moderated". That means that all
> > submissions have to be approved by the administrator of the mailing
> > list before they show up. For example this email will generate a
> request to the administrator. :) If you know how to un-moderate the
> > list I think that makes sense.
>
> I turned moderation off a few weeks ago. Did it get turned back on? We
> *definitely* want it OFF.
Moderation is not on. The moderation you guys were seeing is from your
messages being very long, e.g.:
List: phd...@li...
From: hd...@ho...
Subject: RE: [phdl-devel] intermediate format
Reason: Message body is too big: 43213 bytes with a limit of 40 KB
I've upped the size limit to 250 KiB (insanely high for a mailing list), but
one way to avoid this in the future is to trim everything but the part you
are replying to from messages.
From: Wesley J. L. <wj...@ic...> - 2012-10-30 22:07:23
On Tuesday, October 30, 2012 14:12:29 Peter Dudley wrote:
> The mailing list still seems to be "moderated". That means that all
> submissions have to be approved by the administrator of the mailing list
> before they show up. For example this email will generate a request to
> the administrator. :) If you know how to un-moderate the list I think
> that makes sense.

I turned moderation off a few weeks ago. Did it get turned back on? We *definitely* want it OFF.
From: Peter D. <hd...@ho...> - 2012-10-30 20:12:47
Josh
The mailing list still seems to be "moderated". That means that all
submissions have to be approved by the administrator of the mailing list
before they show up. For example this email will generate a request to the
administrator. :) If you know how to un-moderate the list I think that
makes sense.
Pete
From: Joshua Mangelson [mailto:jos...@gm...]
Sent: Tuesday, October 30, 2012 9:05 PM
To: Brad Riching; phd...@li...
Cc: peter dudley; Wesley J. Landaker
Subject: Re: [phdl-devel] intermediate format
Hey everyone,
Did you receive Brad's comments below?
We're trying to verify that his comments made it to the mailing list.
Thanks,
Josh
On Mon, Oct 29, 2012 at 8:34 PM, Brad Riching <bra...@gm...> wrote:
Hello everyone,
I believe I can offer my opinion on this topic as I have already tried it
with PHDL. At the time I pondered this problem I ended up outputting the
flattened netlist in XML format using a set of persistence tools called
XStream.
All XStream does is accept a reference to any kind of java object, and it
will serialize it in XML with an XML engine of your choice. It has its
limitations and we could probably do better, but it was all that I could
come up with on short notice to start supporting additional tools.
I also investigated other serialization scopes, but ran into difficulty in
serializing our entire circuit graph with XStream. I believe the fmc_module
design ended up as a 12MB xml file, which definitely will cause problems
scaling to larger designs. However, the parsing and generation of the files
was extremely fast. Most of the file was overhead with all of the
references in the XML hierarchy. It may be possible to tweak how XStream
writes and reads our entire graph and eliminate a lot of that overhead, but
I'm skeptical based on the results I got. If you do end up serializing the
object graph to text, I'd recommend defining an XML schema using EMF, or a
simpler DTD. Some advantages a schema has over a DTD are: (1) the
definition of the file format itself is defined in XML, (2) changes are much
easier to implement whereas a changed DTD definition is less likely to be
backwards compatible with existing generated files, and (3) schemas have
type-checking abilities where DTDs do not. I think it would be particularly
challenging to create a DTD or even a schema that describes the entire
elaborated object graph, so if I went this route I would try and get the
tool to generate one for me based on some UML diagrams of an ecore model.
Perhaps someone else knows of a better way?
Xtext already defines an ecore model for PHDL which implements the syntax of
the language inside of eclipse. This model does not however have any
knowledge of elaboration, or what is implied by semantics of the language,
i.e. in a replicated net assignment. The Xtext model just knows that this
is a legal statement:
gnd = <gnd>;
The Xtext model does not know what you mean by it: that the single bit net
on the right must be assigned to however many bits wide the left hand side
is. The part of the tool that interprets this is what we called the
elaborator. The output of the elaborator is the complete circuit graph,
which would contain doubly-linked data structures of the above statement,
inside the entire design. For example, the left hand side gnd pin or port
would be bit-blasted into individual indices it may have had, and each would
receive a pointer to the gnd net on the right. In turn, the gnd net (or
port) on the right would receive an entry to every bit-blasted gnd pin or
port on the left in its list of assignments. Also, each gnd pin on the
left would retain a pointer to its parent instance, and that instance would
contain a pointer to its parent design, etc. The gnd net (or port) on the
right would also retain a pointer to its parent design. This type of data
structure presents an extremely complicated graph structure to serialize (in
plain text) in its entirety.
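The elaborator Brad describes is Java, but the pointer structure is language-independent. Purely as an illustration of why this graph is awkward to serialize as plain text (the type and field names below are invented, not the actual elaborator classes), the cross-references look roughly like this:

    // Illustration only: the doubly-linked shape of the elaborated circuit graph.
    // Every edge is stored from both ends, and every node points back to its parent,
    // which is exactly what makes a naive plain-text serialization balloon in size.
    #include <string>
    #include <vector>

    struct Design;
    struct Instance;
    struct Net;

    struct Pin {
        std::string name;     // e.g. "gnd[3]" after bit-blasting
        Instance*   parent;   // back-pointer to the owning instance
        Net*        net;      // the net this pin was assigned to
    };

    struct Net {
        std::string       name;
        Design*           parent;       // back-pointer to the owning design
        std::vector<Pin*> assignments;  // every bit-blasted pin attached to this net
    };

    struct Instance {
        std::string      name;
        Design*          parent;        // back-pointer to the owning design
        std::vector<Pin> pins;
    };

    struct Design {
        std::string           name;
        std::vector<Instance> instances;
        std::vector<Net>      nets;
    };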
Because we didn't understand how to develop a complete ecore model on our
own, we opted to serialize only a subset of the graph to support other
netlist formats quickly -- in this case, the flattened netlist graph which
is a much simpler data structure. Perhaps we can include other things like
attributes and hierarchical references within this format as well so we can
do some back-annotation in the future.
I do not believe that developing an ecore model would necessarily lock us in
to using eclipse and a plugin for the tool. The output of ecore is java
classes with generated factory methods to be able to create any parts of the
design that you would do on your own, just as we do with our existing data
structures in the elaborator. Since you get a lot of other things for free
(including the complicated persistence of the entire ecore model circuit
graph) this may be a path worth investigating further if everyone thinks the
intermediate format needs to contain everything post-elaboration.
Whatever way ends up getting implemented, I've noticed that we will need to
watch out for a designated hierarchical separator. We currently do not have
one formally defined (we have been using the forward slash), so we could
definitely run into some problems later on down the road -- especially since
this character is currently allowed in a PhdlID.
Somebody correct me if I'm wrong, but I think PADS interprets the forward
slash as a low-asserted signal name, so we probably need to keep this
character in the PhdlID lexer rule, which leaves us few alternatives for a
separator of our own. (backslash? eww...)
From what I've been reading, it appears that the project is going in the
direction of more modular, smaller tools, with the exception of the plugin
(since that is the opposite of the design paradigm of eclipse). Up until
now, we had been thinking about the grammar being defined inside of the
tool, and not that the tool implemented a particular language specification.
That is why we switched completely over to Eclipse because we didn't want to
have multiple grammar definitions floating around, i.e. one in ANTLR and one
in Xtext. I think what you did (Josh) to extract the LRM from the tools was
a good idea. Let the individual tools implement the specification, and that
way anyone can create a tool that implements the language without having to
sift through the source and understand a language (Xtext) that they are
unfamiliar with.
I have many more comments, but I'll have to write them in later. Thanks for
being patient with me while I transition to my new job!
Brad
On Mon, Oct 29, 2012 at 5:39 PM, Joshua Mangelson <jos...@gm...>
wrote:
I would vote for XML as well.
Some things we might want to take into account when creating the intermediate
format spec is the importance of back-annotation and hierarchy as well. For
example, you might want one of the lower end tools to be able to pinpoint
the line in your source code that is incorrect. This would only be possible
if we somehow include a reference such as a file-name or line-number in the
intermediate format.
Another option for the intermediate format would be the actual graph of the
design generated from the syntax graph before the error checking is done.
This graph is substantially larger than just a netlist, but it would contain
all of the data included in the original .phdl files, because it is a
complete graph of the design.
Currently, for example, in the 2.1 version (this is probably how the 2.0
version works as well, but I'm not quite as familiar with it):
1. a source code tree is generated from the inputted .phdl files
2. tests are run at this point, (for example to make sure that a pin
assignment statement within an instance has the same number of pins on the
left side as it has nets on the right side.)
3. the source code tree is walked and a "flattened" (as pertaining to
hierarchy) in-memory representation of the design is created with objects
such as designs, instances, and connections (either ports or nets). In this
graph these objects may have fields that are references to other objects
also in the design. For example an instance object would have attribute
fields as well as fields for each of its pins with pointers or references to
the net objects within the design that they are connected to. In this graph
of the design everything has a unique name and there are in-memory objects
generated for each instanced device. For example if in the source code an
array of 8 resistors were instanced, in this flattened graph, there would be
8 individual device objects created, with each of their respective pin
reference lists and attribute lists. Hierarchy works similarly, with the
ports in a subdesign acting as nets to connect the lower-level designs
objects to each other, as well as to the upper-level design's nets as
defined in the subinst declarations.
4. Tests are run at this point as well (for example, the Electric Rule
Checker would be run at this point because the entire graph with hierarchy
flattened out is needed in order to do checking of each of the pins on a
certain net).
5. This validated design graph can then be walked in order to create the
netlist and output files.
When defining the specification for the PHDL Intermediate format we might
also want to take this into account. Much of the data about the original
design will be lost if the intermediate format only contains a netlist.
- Much of this data could be stored within attributes of the devices or
pins,
- or we could use the design graph itself as an intermediate format, but
this would cause a lot more complexity for someone desiring to simply
generate an alternate output format.
Let me know what you guys think.
The main thing is that some testing can only be done after the graph has
been generated; we should decide whether or not we want this testing to be
done in the main front end compiler or afterwards.
If we decide that we want that to be a part that is optionally done
afterwards, we have to provide a way for the data needed by the test to be
included in the intermediate format.
Josh
On Mon, Oct 29, 2012 at 11:51 AM, Wesley J. Landaker <wj...@ic...>
wrote:
On Monday, October 29, 2012 02:46:54 peter dudley wrote:
> Guys, In response to Wes' email I have been thinking about the
> intermediate format again. Early on we considered just using XML as
> the intermediate format. The job of the PHDL compiler is to convert a
> few kilobytes of text into a few kilobytes of text. For the small
> amount of processing we do I think that XML would not introduce a
> noticeable performance hit. On the other hand, I remember that I was
> personally able to quickly define a data format (using a DTD) and parse
> XML files using that format. I suggest the following compiler flow.
> PHDL input design -> PHDL front end compiler -> XML intermediate format
> -> Back end tools (netlist generation, etc) -> output files For a while
> I have been thinking about what should go into the XML intermediate
> format.
XML is fine with me. JSON is another text-based format that is incredibly
easy to parse by existing libraries in every language, and has the advantage
that it is slightly more human readable and it's MUCH SIMPLER to parse than
XML.
Even easier than those, however, is just the de facto standard UNIX text
record format with one data element per line. This allows for parsing with a
simple regular expression or tokenizer (or sed, awk, grep, etc), no JSON or
XML parsing libraries required.
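To make the "one data element per line" point concrete, a complete consumer needs little more than a tokenizer. The record keywords used in the sketch below ("net", "conn") are invented purely for illustration; no actual PHDLIF record syntax has been defined at this point in the thread.

    // Minimal sketch of a backend consuming a hypothetical line-based intermediate
    // format, e.g.:
    //   net GND
    //   conn R1 2
    //   conn U1 47
    // The keywords and layout are invented for illustration only.
    #include <iostream>
    #include <sstream>
    #include <string>

    int main() {
        std::string line;
        std::string current_net;
        while (std::getline(std::cin, line)) {
            std::istringstream tokens(line);
            std::string keyword;
            if (!(tokens >> keyword)) continue;  // skip blank lines
            if (keyword == "net") {
                tokens >> current_net;
                std::cout << "NET " << current_net << "\n";
            } else if (keyword == "conn") {
                std::string instance, pin;
                tokens >> instance >> pin;
                std::cout << "  " << instance << "." << pin
                          << " -> " << current_net << "\n";
            }
            // Unrecognized keywords are simply ignored, which leaves room to
            // extend the format without breaking existing readers.
        }
        return 0;
    }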
Whatever we pick, it'll be easy enough to work with. My vote between those
would be:
1. Simple text
2. JSON
3. XML
> Should it contain a representation of the original PHDL
> instance list or should it contain the compiled netlist data format. I
> am leaning toward having the intermediate file contain both of these
> data representations. The netlist data would make the XML easy to use
> for people who want to write new netlist generators for different layout
> tools. The instance list data would make the XML good for people who
> want to write VHDL or schematic generation tools. XML is well understood
> by the open and closed source software community. Please comment. Pete
Since the purpose of the intermediate format is to be parsed and used easily
by programs, not humans, I think we don't want to duplicate any information.
Otherwise, programs have to do a lot of extra work to check that the two
different views are consistent. On the other hand, transforming from netlist
to instance list view internally in a program is trivial once you've read in
the complete information into an appropriate, simple, data structure. For
almost all *automated* purposes (backends, semantic checks, visualization,
etc), a netlist is more natural than an instance list.
Either way, the first thing we need to define is what things need to go into
the intermediate format. For instance, we need at least:
For each instance:
device name
instance name
attributes
for each pin:
name
attributes
For each net:
name
attributes
for each connection:
instance name
pin name
I don't think there needs to be anything else, as *attributes* are supposed to be
our generic carrier of meta information. (Pin types, as they currently
exist, make things much more complicated here, which is why I want to
scrap them as hard-coded elements and use pin attributes instead.)
See anything I missed? Any other thoughts?
Once we agree on basic format and content, I volunteer to write up a first
draft of a spec that we can then review and modify together.
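Rendered as plain data structures, the minimal content list above maps onto something like the following sketch; the field names are chosen only to mirror the list, not any agreed-upon schema.

    // Sketch of the minimal PHDLIF content listed above: instances with pins,
    // nets with connections, and attributes as the generic metadata carrier.
    // Names are illustrative, not a settled schema.
    #include <map>
    #include <string>
    #include <vector>

    using Attributes = std::map<std::string, std::string>;

    struct Pin {
        std::string name;
        Attributes  attributes;
    };

    struct Instance {
        std::string      device_name;
        std::string      instance_name;
        Attributes       attributes;
        std::vector<Pin> pins;
    };

    struct Connection {
        std::string instance_name;  // which instance the net touches
        std::string pin_name;       // which pin on that instance
    };

    struct Net {
        std::string             name;
        Attributes              attributes;
        std::vector<Connection> connections;
    };

    struct Design {
        std::vector<Instance> instances;
        std::vector<Net>      nets;
    };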
From: Joshua M. <jos...@gm...> - 2012-10-30 20:05:30
Hey everyone,

Did you receive Brad's comments below? We're trying to verify that his comments made it to the mailing list.

Thanks,
Josh
From: Peter D. <hd...@ho...> - 2012-10-30 09:25:01
Brad,
Here is what I take away from your email.
1. You know XML pretty well and you are willing to advise us.
2. Your approach to the intermediate format was to include a great
number of interrelationships.
I'm thinking we can go super simple, just
ankle-bone-connects-to-the-leg-bone stuff.
Josh,
Here is what I got from yours.
1. You know XML pretty well.
2. It may be useful to include more information in the intermediate
format to support references back to the PHDL source.
I think the X in XML means eXtensible. Maybe we can start out with a simple
intermediate format and then extend it, in a backward compatible way, as
needed.
Wes,
For the intermediate file data format you advised (best first) 1. Custom
text, 2. JSON based, 3. XML. I'm guessing this is because you want to avoid
a bunch of external dependencies for writer/parser libraries, etc. You
would just rather code it up in C++ and then your source is your source.
Is it possible to start with a simple text format and still be able to
extend it as necessary? Is XML ok if we don't get carried away?
I'm hoping we can get down to something simple enough and reliable enough to
survive in an open-source environment.
Pete
From: Brad R. <bra...@gm...> - 2012-10-30 02:34:28
Hello everyone, I believe I can offer my opinion on this topic as I have already tried it with PHDL. At the time I pondered this problem I ended up outputting the flattened netlist in XML format using a set of persistence tools called XStream. All XStream does is accept a reference to any kind of java object, and it will serialize it in XML with an XML engine of your choice. It has its limitations and we could probably do better, but it was all that I could come up with on short notice to start supporting additional tools. I also investigated other serialization scopes, but ran into difficulty in serializing our entire circuit graph with XStream. I believe the fmc_module design ended up as a 12MB xml file, which definitely will cause problems scaling to larger designs. However, the parsing and generation of the files was extremely fast. Most of the file was overhead with all of the references in the XML hierarchy. It may be possible to tweak how XStream writes and reads our entire graph and eliminate a lot of that overhead, but I'm skeptical based on the results I got. If you do end up serializing the object graph to text, I'd recommend defining an XML schema using EMF, or a simpler DTD. Some advantages a schema has over a DTD are: (1) the definition of the file format itself is defined in XML, (2) changes are much easier to implement whereas a changed DTD definition is less likely to be backwards compatible with existing generated files, and (3) schemas have type-checking abilities where DTDs do not. I think it would be particularly challenging to create a a DTD or even a schema that describes the entire elaborated object graph, so if I went this route I would try and get the tool to generate one for me based on some UML diagrams of an ecore model. Perhaps someone else knows of a better way? Xtext already defines an ecore model for PHDL which implements the syntax of the language inside of eclipse. This model does not however have any knowledge of elaboration, or what is implied by semantics of the language, i.e. in a replicated net assignment. The Xtext model just knows that this is a legal statement: gnd = <gnd>; The Xtext model does not know what you mean by it: that the single bit net on the right must be assigned to however many bits wide the left hand side is. The part of the tool that interprets this is what we called the elaborator. The output of the elaborator is the complete circuit graph, which would contain doubly-linked data structures of the above statement, inside the entire design. For example, the left hand side gnd pin or port would be bit-blasted into individual indices it may have had, and each would receive a pointer to the gnd net on the right. In turn, the gnd net (or port) on the right would receive an entry to every bit-blasted gnd pin or port on the left in it's list of assignments. Also, each gnd pin on the left would retain a pointer to its parent instance, and that instance would contain a pointer to its parent design, etc. The gnd net (or port) on the right would also retain a pointer to its parent design. This type of data structure presents an extremely complicated graph structure to serialize (in plain text) in its entirety. Because we didn't understand how to develop a complete ecore model on our own, we opted to serialize only a subset of the graph to support other netlist formats quickly -- in this case, the flattened netlist graph which is a much simpler data structure. 
Perhaps we can include other things like attributes and hierarchical references within this format as well so we can do some back-annotation in the future. I do not believe that developing an ecore model would necessarily lock us in to using eclipse and a plugin for the tool. The output of ecore is java classes with generated factory methods to be able to create any parts of the design that you would do on your own, just as we do with our existing data structures in the elaborator. Since you get a lot of other things for free (including the complicated persistence of the entire ecore model circuit graph) this may be a path worth investigating further if everyone thinks the intermediate format needs to contain everything post-elaboration. Whatever way ends up getting implemented, I've noticed that we will need to watch out for a designated hierarchical separator. We currently do not have one formally defined (we have been using the forward slash), so we could definitely run into some problems later on down the road -- especially since this character is currently allowed in a PhdlID. Somebody correct me if I'm wrong, but I think PADS interprets the forward slash as a low-asserted signal name, so we probably need to keep this character in the PhdlID lexer rule, which leaves us few alternatives for a separator of our own. (backslash? eww...) >From what I've been reading, it appears that the project is going in the direction of more modular, smaller tools, with the exception of the plugin (since that is the opposite of the design paradigm of eclipse). Up until now, we had been thinking about the grammar being defined inside of the tool, and not that the tool implemented a particular language specification. That is why we switched completely over to Eclipse because we didn't want to have multiple grammar definitions floating around, i.e. one in ANTLR and one in Xtext. I think what you did (Josh) to extract the LRM from the tools was a good idea. Let the individual tools implement the specification, and that way anyone can create a tool that implements the language without having to sift through the source and understand a language (Xtext) that they are unfamiliar with. I have many more comments, but I'll have to write them in later. Thanks for being patient with me while I transition to my new job! Brad On Mon, Oct 29, 2012 at 5:39 PM, Joshua Mangelson <jos...@gm...>wrote: > I would vote for XML as well. > > Somethings we might want to take into account when creating the > intermediate format spec is the importance of back-annotation and hierarchy > as well. For example, you might want one of the lower end tools to be able > to pinpoint the line in your source code that is incorrect. This would only > be possible if we somehow include a reference such as a file-name or > line-number in the intermediate format. > > Another option for the intermediate format would be the actual graph of > the design generated from the syntax graph before the error checking is > done. This graph is substantially larger than just a netlist, but it would > contain all of the data included in the original .phdl files, because it is > a complete graph of the design. > > Currently, for example, in the 2.1 version, (this is probably how the 2.0 > version works as well, but I'm not quite as familiar with it): > 1. a source code tree is generated from the inputted .phdl files > 2. 
tests are run at this point, (for example to make sure that a pin > assignment statement within an instance has the same number of pins on the > left side as it has nets on the right side.) > 3. the source code tree is walked and a "flattened" (as pertaining to > hierarchy) in-memory representation of the design is created with objects > such as designs, instances, and connections(either ports or nets). In this > graph these objects may have fields that are references to other objects > also in the design. For example an instance object would have attribute > fields as well as fields for each of its pins with pointers or references > to the net objects within the design that they are connected to. In this > graph of the design everything has a unique name and there are in-memory > objects generated for each instanced device. For example if in the source > code an array of 8 resistors were instanced, in this flattened graph, there > would be 8 individual device objects created, with each of their respective > pin reference lists and attribute lists. Hierarchy works similarly, the > with the ports in a subdesign acting as nets to connect the lower-level > designs objects to each other, as well as to the upper-level design's nets > as defined in the subinst declarations. > 4. Test are run at this point as well, (for example the Electric Rule > Checker would be run at this point because the entire graph with hierarchy > flattened out is needed in-order to do checking of each of the pins on a > certain net) > 5. This validated design graph can then be walked inorder to create the > netlist and output files. > > When defining the specification for the PHDL Intermediate format we might > also want to take this into account. Much of the data about the original > design will be lost if the intermediate format only contains a netlist. > - Much of this data could be stored within attributes of the devices or > pins, > - or we could use the design graph itself as an intermediate format, but > this would cause alot more complexity for someone desiring to simply > generate an alternate output format. > > Let me know what you guys think. > The main thing is that some testing can only be done after the graph has > been generated, we should decided whether or not we want this testing to be > done in the main front end compiler or afterwards. > If we decide that we want that to be a part that is optionally done > afterwards, we have to provide a way for the data needed by the test to be > included in the intermediate format. > > Josh > > > > > On Mon, Oct 29, 2012 at 11:51 AM, Wesley J. Landaker <wj...@ic...>wrote: > >> On Monday, October 29, 2012 02:46:54 peter dudley wrote: >> > Guys, In response to Wes' email I have been thinking about the >> > intermediate format again. Early on we considered just using XML as >> > the intermediate format. The job of the PHDL compiler is to convert a >> > few kilobytes of text into a few kilobytes of text. For the small >> > amount of processing we do I think that XML would not introduce a >> > noticeable performance hit. On the other hand, I remember that I was >> > personally able to quickly define a data format (using a DTD) and parse >> > XML files using that format. I suggest the following compiler flow. >> > PHDL input design -> PHDL front end compiler -> XML intermediate format >> > -> Back end tools (netlist generation, etc) -> output files For a while >> > I have been thinking about what should go into the XML intermediate >> > format. 
>> >> XML is fine with me. JSON is another text-based format that is incredibly >> easy to parse by existing libraries in every language, and has the >> advantage >> that it is slightly more human readable and it's MUCH SIMPLER to parse >> than >> XML. >> >> Even easier than those, however, is just the de-factor standard UNIX text >> record format with one data element per line. This allows for parsing >> with a >> simple regular expression or tokenizer (or sed, awk, grep, etc), no JSON >> or >> XML parsing libraries required. >> >> Whatever we pick, it'll be easy enough to work with. My vote between those >> would be: >> >> 1. Simple text >> 2. JSON >> 3. XML >> >> > Should it contain a representation of the original PHDL >> > instance list or should it contain the compiled netlist data format. I >> > am leaning toward having the intermediate file contain both of these >> > data representations. The netlist data would make the XML easy to use >> > for people who want to write new netlist generators for different layout >> > tools. The instance list data would make the XML good for people who >> > want to write VHDL or schematic generation tools. XML is well understood >> > by the open and closed source software community. Please comment. Pete >> >> Since the purpose of the intermediate format is to be parsed and used >> easily >> by programs, not humans, I think we don't want to duplicate any >> information. >> Otherwise, progams have to do a lot of extra work to check that the two >> different views are consistent. On the other hand, transforming from >> netlist >> to instance list view internally in a program is trivial once you've read >> in >> the complete information into an appropriate, simple, data structure. For >> almost all *automated* purposes (backends, semantic checks, visualization, >> etc), a netlist is more natural than an instance list. >> >> Either way, the first thing we need to define is what things need to go >> into >> the intermediate format. For instance, we need at least: >> >> For each instance: >> device name >> instance name >> attributes >> for each pin: >> name >> attributes >> >> For each net: >> name >> attributes >> for each connection: >> instance name >> pin name >> >> I don't think needs to be anything else, as *attributes* are supposed to >> be >> our generic carrier of meta information. (Pin types, as they currently >> exist, make things much more complicated here, which is why I want to >> either >> scrap them as hard-coded elements and use pin attributes instead). >> >> See anything I missed? Any other thoughts? >> >> Once we agree on basic format and content, I volunteer to write up a first >> draft of a spec that we can then review and modify together. |
|
From: Joshua M. <jos...@gm...> - 2012-10-29 23:39:27
|
I would vote for XML as well.

Some things we might want to take into account when creating the intermediate format spec are back-annotation and hierarchy. For example, you might want one of the lower-end tools to be able to pinpoint the line in your source code that is incorrect. This would only be possible if we somehow include a reference such as a file name or line number in the intermediate format.

Another option for the intermediate format would be the actual graph of the design generated from the syntax graph before the error checking is done. This graph is substantially larger than just a netlist, but it would contain all of the data included in the original .phdl files, because it is a complete graph of the design.

Currently, for example, in the 2.1 version (this is probably how the 2.0 version works as well, but I'm not quite as familiar with it):
1. A source code tree is generated from the inputted .phdl files.
2. Tests are run at this point (for example, to make sure that a pin assignment statement within an instance has the same number of pins on the left side as it has nets on the right side).
3. The source code tree is walked and a "flattened" (as pertaining to hierarchy) in-memory representation of the design is created, with objects such as designs, instances, and connections (either ports or nets). In this graph these objects may have fields that are references to other objects also in the design. For example, an instance object would have attribute fields as well as fields for each of its pins with pointers or references to the net objects within the design that they are connected to. In this graph of the design everything has a unique name and there are in-memory objects generated for each instanced device. For example, if an array of 8 resistors were instanced in the source code, there would be 8 individual device objects created in this flattened graph, each with its respective pin reference list and attribute list. Hierarchy works similarly, with the ports in a subdesign acting as nets to connect the lower-level design's objects to each other, as well as to the upper-level design's nets as defined in the subinst declarations.
4. Tests are run at this point as well (for example, the Electric Rule Checker would be run at this point because the entire graph with hierarchy flattened out is needed in order to do checking of each of the pins on a certain net).
5. This validated design graph can then be walked in order to create the netlist and output files.

When defining the specification for the PHDL Intermediate Format we might also want to take this into account. Much of the data about the original design will be lost if the intermediate format only contains a netlist.
- Much of this data could be stored within attributes of the devices or pins,
- or we could use the design graph itself as an intermediate format, but this would cause a lot more complexity for someone desiring to simply generate an alternate output format.

Let me know what you guys think. The main thing is that some testing can only be done after the graph has been generated; we should decide whether or not we want this testing to be done in the main front-end compiler or afterwards. If we decide that we want that to be a part that is optionally done afterwards, we have to provide a way for the data needed by the test to be included in the intermediate format.

Josh

On Mon, Oct 29, 2012 at 11:51 AM, Wesley J. 
Landaker <wj...@ic...>wrote: > On Monday, October 29, 2012 02:46:54 peter dudley wrote: > > Guys, In response to Wes' email I have been thinking about the > > intermediate format again. Early on we considered just using XML as > > the intermediate format. The job of the PHDL compiler is to convert a > > few kilobytes of text into a few kilobytes of text. For the small > > amount of processing we do I think that XML would not introduce a > > noticeable performance hit. On the other hand, I remember that I was > > personally able to quickly define a data format (using a DTD) and parse > > XML files using that format. I suggest the following compiler flow. > > PHDL input design -> PHDL front end compiler -> XML intermediate format > > -> Back end tools (netlist generation, etc) -> output files For a while > > I have been thinking about what should go into the XML intermediate > > format. > > XML is fine with me. JSON is another text-based format that is incredibly > easy to parse by existing libraries in every language, and has the > advantage > that it is slightly more human readable and it's MUCH SIMPLER to parse than > XML. > > Even easier than those, however, is just the de-factor standard UNIX text > record format with one data element per line. This allows for parsing with > a > simple regular expression or tokenizer (or sed, awk, grep, etc), no JSON or > XML parsing libraries required. > > Whatever we pick, it'll be easy enough to work with. My vote between those > would be: > > 1. Simple text > 2. JSON > 3. XML > > > Should it contain a representation of the original PHDL > > instance list or should it contain the compiled netlist data format. I > > am leaning toward having the intermediate file contain both of these > > data representations. The netlist data would make the XML easy to use > > for people who want to write new netlist generators for different layout > > tools. The instance list data would make the XML good for people who > > want to write VHDL or schematic generation tools. XML is well understood > > by the open and closed source software community. Please comment. Pete > > Since the purpose of the intermediate format is to be parsed and used > easily > by programs, not humans, I think we don't want to duplicate any > information. > Otherwise, progams have to do a lot of extra work to check that the two > different views are consistent. On the other hand, transforming from > netlist > to instance list view internally in a program is trivial once you've read > in > the complete information into an appropriate, simple, data structure. For > almost all *automated* purposes (backends, semantic checks, visualization, > etc), a netlist is more natural than an instance list. > > Either way, the first thing we need to define is what things need to go > into > the intermediate format. For instance, we need at least: > > For each instance: > device name > instance name > attributes > for each pin: > name > attributes > > For each net: > name > attributes > for each connection: > instance name > pin name > > I don't think needs to be anything else, as *attributes* are supposed to be > our generic carrier of meta information. (Pin types, as they currently > exist, make things much more complicated here, which is why I want to > either > scrap them as hard-coded elements and use pin attributes instead). > > See anything I missed? Any other thoughts? 
> > Once we agree on basic format and content, I volunteer to write up a first > draft of a spec that we can then review and modify together. |
|
From: Peter D. <hd...@ho...> - 2012-10-29 17:09:31
|
I am more than OK with starting with a simple spec. Honestly, I think the idea of generating VHDL for a board design is pretty silly. It is the kind of thing that really impresses mid level managers. It almost never makes sense to simulate at the board level. I only ever wanted PHDL -> VHDL generation to fend off the "Why don't you just use VHDL for board design?" question. I could just say we generate VHDL and move on. I have some slides somewhere on http://pcbl.sourceforge.net/ that show how horrible it is to design boards using VHDL. The PHDL language we have is much more expressive and concise for board design. Pete -----Original Message----- From: Wesley J. Landaker [mailto:wj...@ic...] Sent: Monday, October 29, 2012 5:29 PM To: phd...@li... Cc: Peter Dudley Subject: Re: [phdl-devel] a small point on PinType On Monday, October 29, 2012 03:22:17 Peter Dudley wrote: > Directional Pins, PinType - were added to support ERC as Josh > indicates in his comments. These were also introduced to allow VHDL > output generation. VHDL requires pin direction in the entity > statement. To produce complete and correct VHDL for a board you need > the in's, out's and inout's. :-) Pin attributes (we already have net attributes) could be used for this in a more generic, flexible, and extensible way than hard-coding "in", "out", "io", "pwr", "sup", "oc", "oe", "tri", "pass", and "nc" into the language in a mutually exclusive way. These types aren't very clear. If a port is "io" it pretty much has to be "tri", but a "tri" pin doesn't say if it's "in" or "out" from a device. You can't have it be both. Lots of "pwr" pins are going to be hooked to resistors and capacitors that just have "io" pins, which either should fail rule checking, in which case the checking is too heavy handed to be useful, or will pass rule checking, in which case it's too lenient to catch much of any errors. There are all SORTS of problems I can see with the pin definitions we have right now, so I'm not even seeing what real value our checking is giving us right now. As for VHDL, some of those types are impossible to translate to VHDL already (e.g. "pwr" (?), "oe" (what direction?), "nc" (direction?!), etc). Plus, these are all all digital perspective types; if we are hardcoded these, someone is going to wonder why we don't support analog types, etc, etc. INSTEAD, just using attributes (already in the language, put in just for this kind of annotation) allows this kind of checking to be done as a separate semantic pass than something hard-coded in the language, where we are then completely painted into a corner. Anyway, even if we want to have this feature 100% as-is in the language eventually, is this something we really want to put this into our language spec right now? We can, but then we have to live with it basically forever once people start using it seriously. |
|
From: Wesley J. L. <wj...@ic...> - 2012-10-29 16:52:16
|
On Monday, October 29, 2012 02:46:54 peter dudley wrote:
> Guys, In response to Wes' email I have been thinking about the
> intermediate format again. Early on we considered just using XML as
> the intermediate format. The job of the PHDL compiler is to convert a
> few kilobytes of text into a few kilobytes of text. For the small
> amount of processing we do I think that XML would not introduce a
> noticeable performance hit. On the other hand, I remember that I was
> personally able to quickly define a data format (using a DTD) and parse
> XML files using that format. I suggest the following compiler flow.
> PHDL input design -> PHDL front end compiler -> XML intermediate format
> -> Back end tools (netlist generation, etc) -> output files For a while
> I have been thinking about what should go into the XML intermediate
> format.
XML is fine with me. JSON is another text-based format that is incredibly
easy to parse by existing libraries in every language, and has the advantage
that it is slightly more human readable and it's MUCH SIMPLER to parse than
XML.
Even easier than those, however, is just the de facto standard UNIX text
record format with one data element per line. This allows for parsing with a
simple regular expression or tokenizer (or sed, awk, grep, etc), no JSON or
XML parsing libraries required.
Whatever we pick, it'll be easy enough to work with. My vote between those
would be:
1. Simple text
2. JSON
3. XML
> Should it contain a representation of the original PHDL
> instance list or should it contain the compiled netlist data format. I
> am leaning toward having the intermediate file contain both of these
> data representations. The netlist data would make the XML easy to use
> for people who want to write new netlist generators for different layout
> tools. The instance list data would make the XML good for people who
> want to write VHDL or schematic generation tools. XML is well understood
> by the open and closed source software community. Please comment. Pete
Since the purpose of the intermediate format is to be parsed and used easily
by programs, not humans, I think we don't want to duplicate any information.
Otherwise, programs have to do a lot of extra work to check that the two
different views are consistent. On the other hand, transforming from netlist
to instance list view internally in a program is trivial once you've read in
the complete information into an appropriate, simple, data structure. For
almost all *automated* purposes (backends, semantic checks, visualization,
etc), a netlist is more natural than an instance list.
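(To back up the "trivial" claim: here's a throwaway Java sketch, with invented names, that turns the netlist view -- net name mapped to a list of "instance.pin" connection strings -- back into an instance-centric view:)

    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    public class InstanceView {
        // netlist: net name -> connections written as "instanceName.pinName"
        // result:  instance name -> (pin name -> net name)
        static Map<String, Map<String, String>> fromNetlist(Map<String, List<String>> netlist) {
            Map<String, Map<String, String>> instances = new HashMap<String, Map<String, String>>();
            for (Map.Entry<String, List<String>> net : netlist.entrySet()) {
                for (String conn : net.getValue()) {
                    String[] parts = conn.split("\\.", 2);   // "R1.2" -> ["R1", "2"]
                    if (!instances.containsKey(parts[0])) {
                        instances.put(parts[0], new HashMap<String, String>());
                    }
                    instances.get(parts[0]).put(parts[1], net.getKey());
                }
            }
            return instances;
        }
    }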
Either way, the first thing we need to define is what things need to go into
the intermediate format. For instance, we need at least:
For each instance:
device name
instance name
attributes
for each pin:
name
attributes
For each net:
name
attributes
for each connection:
instance name
pin name
I don't think there needs to be anything else, as *attributes* are supposed to be
our generic carrier of meta information. (Pin types, as they currently
exist, make things much more complicated here, which is why I want to either
scrap them as hard-coded elements and use pin attributes instead).
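To pin down the shape of that, here is a rough Java sketch of the whole data model (illustrative only -- nothing about these class or field names is a proposal):

    import java.util.List;
    import java.util.Map;

    // Minimal intermediate-format model: instances, nets, and attributes.
    class Instance {
        String deviceName;
        String instanceName;
        Map<String, String> attributes;                 // REFPREFIX, FOOTPRINT, ...
        Map<String, Map<String, String>> pinAttributes; // pin name -> its attributes
    }

    class Connection {
        String instanceName;
        String pinName;
    }

    class Net {
        String name;
        Map<String, String> attributes;
        List<Connection> connections;
    }

    class Design {
        List<Instance> instances;
        List<Net> nets;
    }

A backend or checker really only needs to walk a structure like that.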
See anything I missed? Any other thoughts?
Once we agree on basic format and content, I volunteer to write up a first
draft of a spec that we can then review and modify together.
|
|
From: Wesley J. L. <wj...@ic...> - 2012-10-29 16:29:18
|
On Monday, October 29, 2012 03:22:17 Peter Dudley wrote: > Directional Pins, PinType - were added to support ERC as Josh indicates > in his comments. These were also introduced to allow VHDL output > generation. VHDL requires pin direction in the entity statement. To > produce complete and correct VHDL for a board you need the in's, out's > and inout's. :-)

Pin attributes (we already have net attributes) could be used for this in a more generic, flexible, and extensible way than hard-coding "in", "out", "io", "pwr", "sup", "oc", "oe", "tri", "pass", and "nc" into the language in a mutually exclusive way.

These types aren't very clear. If a port is "io" it pretty much has to be "tri", but a "tri" pin doesn't say if it's "in" or "out" from a device. You can't have it be both. Lots of "pwr" pins are going to be hooked to resistors and capacitors that just have "io" pins, which either should fail rule checking, in which case the checking is too heavy-handed to be useful, or will pass rule checking, in which case it's too lenient to catch much of any errors. There are all SORTS of problems I can see with the pin definitions we have right now, so I'm not even seeing what real value our checking is giving us right now.

As for VHDL, some of those types are impossible to translate to VHDL already (e.g. "pwr" (?), "oe" (what direction?), "nc" (direction?!), etc). Plus, these are all digital-perspective types; if we hard-code these, someone is going to wonder why we don't support analog types, etc, etc.

INSTEAD, just using attributes (already in the language, put in just for this kind of annotation) allows this kind of checking to be done as a separate semantic pass rather than something hard-coded in the language, where we are then completely painted into a corner.

Anyway, even if we want to have this feature 100% as-is in the language eventually, is this something we really want to put into our language spec right now? We can, but then we have to live with it basically forever once people start using it seriously. |
|
From: Peter D. <hd...@ho...> - 2012-10-29 09:45:44
|
Guys, Directional Pins, PinType - were added to support ERC as Josh indicates in his comments. These were also introduced to allow VHDL output generation. VHDL requires pin direction in the entity statement. To produce complete and correct VHDL for a board you need the in's, out's, and inout's. :-) Pete |
|
From: peter d. <hd...@ho...> - 2012-10-29 08:47:04
|
Guys, In response to Wes' email I have been thinking about the intermediate format again. Early on we considered just using XML as the intermediate format. The job of the PHDL compiler is to convert a few kilobytes of text into a few kilobytes of text. For the small amount of processing we do I think that XML would not introduce a noticeable performance hit. On the other hand, I remember that I was personally able to quickly define a data format (using a DTD) and parse XML files using that format.

I suggest the following compiler flow: PHDL input design -> PHDL front end compiler -> XML intermediate format -> Back end tools (netlist generation, etc) -> output files.

For a while I have been thinking about what should go into the XML intermediate format. Should it contain a representation of the original PHDL instance list or should it contain the compiled netlist data format? I am leaning toward having the intermediate file contain both of these data representations. The netlist data would make the XML easy to use for people who want to write new netlist generators for different layout tools. The instance list data would make the XML good for people who want to write VHDL or schematic generation tools. XML is well understood by the open and closed source software community. Please comment. Pete |
|
From: Wesley J. L. <wj...@ic...> - 2012-10-28 23:18:34
|
On 10/23/2012 10:57 AM, Brent Nelson wrote:
> Below is a BNF for PHDL. It was created from the 2.1 version of the tool by Josh.
>
> As has been pointed out, you need both a grammar spec as well as additional description to really understand a language. We have added comments to some of the rules to help describe them and also point out some of the discussion points of recent emails that show where we may have differences of opinions on various language features.
>
> Moving forward, it seems that the next steps include:
>
> 1. Everyone looking at it and deciding if this is what we want for language version 3.0 and, if not, how we might want to change it.
>
> 2. Create additional verbiage to help describe how various constructs are to be interpreted (the non-BNF stuff). This could be done with comments or we could adopt some other mechanism (ideas?)
I've gone ahead and put some initial comments inline here:
> Import ::= "import" QualifiedNameWithWildCard ";"
> // This is how you include packages of devices and subdesigns that
> // were previously defined in other files.
>
> Package ::= "package" ID "{" Import* ( Device | Design )* "}"
> // Packages can contain imports and device and design declarations
Currently, when we include packages, and those packages have imports in
them, do we inherit the imports? It would be best for encapsulation if
we did not.
> Attr ::= "attr" ID "=" STRING ";"
> // The ID and value associated with an attribute is strictly
> // user-defined. However, there are some required attributes
> // including: REFPREFIX, FOOTPRINT, and LIBRARY. By convention,
> // they are upper case but that is not required and they are not
> // case sensitive. There has been some discussion about whether
> // VALUE and Value should be the same or different attributes and
> // whether the user should be warned. The current tools give a
> // warning and then combine them.
Everything else is case-sensitive. Why are we treating attribute names
as a special case?
Also, let's talk about required attributes.
REFPREFIX doesn't necessarily need to be required. If not given, it
could just default to something sensible, like "U" (giving, U1, U2, U3
-- very common for the majority of IC devices), or even just "" (giving
simple numbers, 1, 2, 3, 4 -- maybe not what you WANT, but by default
it's as good as anything else). We can have specific semantic checks
that check for refprefixes that most people will want, but it doesn't
need to be a core part of the language.
FOOTPRINT doesn't necessarily need to be required. While most of the
time you'd want to have a footprint defined, it could also just be
defined later in the PCB tool. Obviously we'll want a semantic check
available to identify missing footprints accidentally left out in most
designs, but I'm not sure we should force this in the language itself.
LIBRARY is required, but so far we've just left it blank in most
designs. Why is this required? If it is useful, we can make a semantic
check available to look for it, but there isn't a lot of reason the core
language needs to enforce this, right?
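For example, a default-and-warn pass along these lines (a Java sketch only; the policy is just one possibility, not a proposal) would keep all of this out of the core language:

    import java.util.Map;

    public class AttributeChecks {
        // One possible policy: default a missing REFPREFIX, only warn on a
        // missing FOOTPRINT, and leave LIBRARY alone entirely.
        static void check(String instanceName, Map<String, String> attrs) {
            if (!attrs.containsKey("REFPREFIX")) {
                attrs.put("REFPREFIX", "U");
            }
            if (!attrs.containsKey("FOOTPRINT")) {
                System.err.println("warning: " + instanceName + " has no FOOTPRINT");
            }
        }
    }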
> Pin ::= PinType [Vector] PhdlID "=" "{" PhdlID [( "," PhdlID )*] "}" ";"
> // Pins can be single bits wide or can be vectors of multiple bits.
> // The values in { } are the physical pin names presumably taken
> // from the footprint in the library..
>
> PinType ::= "pin"
> | "inpin"
> | "outpin"
> | "iopin"
> | "pwrpin"
> | "suppin"
> | "ocpin"
> | "oepin"
> | "tripin"
> | "passpin"
> | "ncpin"
> // The purpose of the pin directions is to allow for ERC where it
> // checks for multiple outpins on a net, for example. To date, all
> // designs have just used "pin" but an ERC has been implemented in
> // the 2.1 tools.
I propose to drop directed pins in our first PHDL language spec. They
aren't widely used and I think they're trying to enforce too much in the
core language. We aren't using this on any real boards yet, I don't
think, and there are a number of problems with the pin types as defined
here.
A better (future!) idea would probably be the idea of generic pin
attributes that can be checked by optional semantic check passes/tools.
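As a sketch of what I mean, an ERC-style pass over pin attributes could be this small (Java, with invented "type"/"out" attribute names -- not a proposal for the real rule set):

    import java.util.List;
    import java.util.Map;

    public class DriverCheck {
        // netPins: net name -> one attribute map per pin on that net.
        // Flags nets with more than one pin carrying type=out, entirely
        // outside the core language.
        static void checkSingleDriver(Map<String, List<Map<String, String>>> netPins) {
            for (Map.Entry<String, List<Map<String, String>>> net : netPins.entrySet()) {
                int drivers = 0;
                for (Map<String, String> pinAttrs : net.getValue()) {
                    if ("out".equals(pinAttrs.get("type"))) {
                        drivers++;
                    }
                }
                if (drivers > 1) {
                    System.err.println("warning: net " + net.getKey()
                            + " has " + drivers + " driving pins");
                }
            }
        }
    }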
> Info ::= "info" "{" STRING "}"
> // The purpose of info is to allow the designer to attach
> // information to specific devices, nets, ports, designs/subdesigns,
> // and instances/subinstances. This gets propagated to the output
> // files and can be used to communicate with the layout engineer.
This feature overlaps with the purpose of comments and additionally
assumes that the layout engineer can't just read the PHDL. Should
we keep this?
> Design ::= ( "design" ID "{" DesignElement* "}" )
> | ( "subdesign" ID "{" SubDesignElement* "}" )
> // This rule encompasses both design declarations and subdesign declarations
>
> DesignElement ::= NetDeclaration | Instance | ConnectionAssign | Info
> // Designs can contain nets, instances, info statements, and
> // net-to-net assignments.
>
> SubDesignElement ::= NetDeclaration | Instance | ConnectionAssign | Info | PortDeclaration
> // Subdesigns can contain everything designs can plus port declarations.
>
> NetDeclaration ::= "net" [Vector] ConnectionName ( "," ConnectionName )*
> ( "{" NetElement* "}" ) | ";"
> NetElement ::= Attr | Info
> // Net declarations can be multiple bits wide. Muliple names in one
> // declaration are all of that type. If desired, attributes or info
> // can be attached to the nets as in:
> // net a, b, c { info { "These adsfgkjadsfljkas df" }}
Oh, we already have net attributes! Is anyone using these? How are they
exposed currently in the output? Are there any semantic checks
currently? I think they are a good idea, but like device attributes,
let's be careful about what we called required and how we use these.
> PortDeclaration ::= "port" [Vector] ConnectionName ( "," ConnectionName )* ( "{"
> Info* "}" ) | ";"
> // These are similar to net declarations but bring out signals for subdesigns
Ports can have attributes too? Am I reading that right?
> RefAttr ::= [Qualifier] ID "=" STRING ";"
> // This is when you refer to and modify an existing attribute on a
> // device. That is, you can over-ride the attribute value
> // specifiecd when the device was declared.
> // In constrast, the Attr rule allows you to create a new attribute
> // (either in a device decl or when instancing).
What if a device has a pin named "A1" and an attribute named "A1"? Is
this disambiguated based on what is assigned (pin vs. net)? Or am I
reading this wrong?
> Concatenation ::= ( "{" ConnectionRef ( "," ConnectionRef )* "}" ) |
> ( ConnectionRef ( "&" ConnectionRef )* ) |
> ( "<" ConnectionRef ">" ) |
> ( ConnectionRef "*" ) |
> ( "open" )
> // A concatenation can take a number of forms:
> // {a, b, c} or a & b & c which are similar to Verilog's - the signals are lined up left-to-right to make a bus
> // <a> or a* which make as many copies of the signal as needed for the LHS of the assignment statement
> // "open" means that the pin or port is purposely being left unconnected
Why are we supporting both {} and & concatenation? Is there a difference?
Why are we supporting both <> and * expansion? Is there a difference?
Also, I thought '*' was supposed to be an available character for pins,
since '*' is often used, like #, _n, and _b to indicate low-assertion.
> QualifiedNameWithWildCard ::= QualifiedName ["." "*"]
> // These are used when importing packages
What's the difference between:
import foo
and
import foo.*
Is it that one brings all the subdesigns and devices into the current
package scope?
Does the foo.* method inherit previous imports done with foo.*? What
about ones done with just foo?
> ID: ('a'..'z'|'A'..'Z'|'_') ('a'..'z'|'A'..'Z'|'_'|'0'..'9')*
> INT: ('0'..'9') | (('1'..'9') ('0'..'9')+ )
> PINNUM: ('0'..'9'|'a'..'z'|'A'..'Z'|'_'|'+'|'-'|'$'|'/'|'@'|'!')+
I thought PINNUM originally included '*'. Am I mistaken, or did we take
that out?
Also, we should consider supporting Unicode in the future, but maybe not
this iteration.
> STRING: '"' ( '\\' ( 'b'|'t'|'n'|'f'|'r'|'u'|'"'|"'"|'\\' ) | !( '\\'|'"' ) )* '"' |
> "'" ( '\\' ( 'b'|'t'|'n'|'f'|'r'|'u'|'"'|"'"|'\\' ) | !( '\\'|"'" ) )* "'"
Some of these escapes are rather obscure and arbitrary (e.g. why do we
have \b but not \v?) What is \u all about? We don't support Unicode
currently (although we ought to in a future version, and I will propose
that then). If our charset is ASCII or Latin1, we need at least an \x
escape.
|
|
From: Wesley J. L. <wj...@ic...> - 2012-10-28 21:14:50
|
On Wednesday, October 24, 2012 08:41:56 Brent Nelson wrote: > The BNF could be either posted on the wiki as a language spec or it could > be checked into SVN. There are advantages to both. BTW, we definitely at some point want our language spec to be something we can version control, so I think it needs to be in our VCS rather than just on the Wiki. Best would be in a textual format such as AsciiDoc. That doesn't mean that we can't *also* have the Wiki have a link and/or give a lot more information. Since we are *drafting* the first copy of our language spec, we could go ahead and do that right on the wiki, and then pull it out as a document to version control once we are done. In that way we'd be using the wiki simply as a collaborative editor. Starting either way seems fine to me. Probably the best way to choose is basically for someone to just start doing it one way or the other and invite everyone else to join in. Any volunteers to bootstrap the process?! =) |
|
From: Wesley J. L. <wj...@ic...> - 2012-10-28 21:10:03
|
On Thursday, October 25, 2012 19:19:12 Joshua Mangelson wrote: > I was wondering. Branches should be copies of the trunk used for > experimentation right? If used in SVN, branches should be made as copies of the trunk, e.g. svn cp <URL>/trunk <URL>/branches/<mybranch> > So Branches should be created and then committed to, in-order to test > different ideas which you don't necessarily want to change the main > development code. I just wanted to make sure that was right. > > I actually just created the branch folder again in the repository for > that purpose, let me know if I should change that back. Branches can be used for a lot of purposes, but basically, yes, you do work in branches, and when it's done, you either merge it into the trunk or another branch, or, if it was a bad idea, discard it. Branches are *MUCH*, *MUCH* easier to manage in Git, but since we haven't migrated yet, SVN branches will work fine as long as you're careful about how you create, merge, and remove them. |
|
From: Wesley J. L. <wj...@ic...> - 2012-10-28 21:06:03
|
Guys, Seems like we are making progress on getting a PHDL Language Definition created, starting with the BNF grammar that you guys just put together. We need to flesh that out with language semantics, etc. Another critically important thing we need to define is a PHDL Intermediate Format that can be put between front-ends and back-ends. I think this is one of the big things we're missing right now. The nice thing about having a well-defined intermediate format is how it can very easily decouple the pieces of our toolflow and allow the insertion of just about *anything* we can dream up in the middle.

This might sound like extra work, but it is *less* work overall, because we end up with small tools that can be used together. Here is an example of the kind of bite-sized tasks that you or I (or anybody) could tackle, completely in isolation from how anything else works, one by one, without any all-at-once effort:

Core Parser:
* Write a parser for PHDL that slurps up a set of .phdl files.
* Only do the error checking that the core language requires.
* You can use any data structures you want.
* Write it in any language you want.
* The output should just be our PHDL Intermediate Format.
* Our first cut could be based on our current Java+Antlr PHDL 2.0.

Semantic Checker(s):
* Read in PHDL Intermediate Format.
* Check that the design follows whatever additional rules we want.
* You can use any data structures you want.
* Write it in any language you want.
* The output should just be PHDL Intermediate Format or errors/warnings.
* Someone can write additional checkers very easily.
* Checkers specific to designs can be almost like unit tests.
* Our first tool could give all the warnings the PHDL 2.1 compiler gives.

Backend(s):
* Read in PHDL Intermediate Format.
* You can use any data structures you want.
* Write it in any language you want.
* The output is whatever this backend supports.
* Our existing backends could be adapted from our existing Java code.

Language API(s):
* Expose an API to read in PHDL Intermediate Format.
* Expose an API to read and/or modify the PHDL design programmatically.
* Expose an API to write it back out in PHDL Intermediate Format.
* You can use any data structures you want.
* Write it in any language you want.
* You can do this *easily* for multiple languages (C, C++, Java, Python).
* You can now use this API to write parsers, checkers, or backends.

Other Weird Things:
* Eclipse Plugins -- read in PHDL separately, because Eclipse makes you.
* VIM Plugin -- handle just the PHDL surface lexemes for syntax & folds.
* Convert PHDL to DxDesigner -- run a language API in Mentor Tcl directly.
* ... etc ...
* It all comes down to flexibility provided by a well-specified format.

The thing is, whether we write the core parsers in Java or C++, or whatever, people are going to want to write other tools in *whatever they feel like*. Our existing code is all in Java. I probably would write little checkers for a specific design or something that needs to just spit out Graphviz in Python, but more complicated reusable backends in C++. But no matter what we use, the key is that the interface to the user is the intermediate format. If and/or when we provide language APIs, it's just convenience around the defined intermediate format.

Anyway, I guess you can see what I think the most important thing we do first off is to define 1) the language itself and 2) an intermediate format. I think everything else falls right out of that.
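Just to show how small these pieces can be, here's a throwaway sketch (Java) of a checker that reads a made-up one-record-per-line format ("net <name>" and "conn <net> <instance> <pin>") from stdin and flags nets with fewer than two connections. The record syntax is invented purely for illustration -- the point is the size of the tool, not the format:

    import java.io.BufferedReader;
    import java.io.IOException;
    import java.io.InputStreamReader;
    import java.util.HashMap;
    import java.util.Map;

    public class SinglePinNetCheck {
        public static void main(String[] args) throws IOException {
            Map<String, Integer> connections = new HashMap<String, Integer>();
            BufferedReader in = new BufferedReader(new InputStreamReader(System.in));
            String line;
            while ((line = in.readLine()) != null) {
                String[] f = line.trim().split("\\s+");
                if (f.length >= 2 && f[0].equals("net") && !connections.containsKey(f[1])) {
                    connections.put(f[1], 0);              // net declared, no connections yet
                } else if (f.length >= 4 && f[0].equals("conn")) {
                    Integer n = connections.get(f[1]);
                    connections.put(f[1], (n == null ? 0 : n) + 1);
                }
            }
            for (Map.Entry<String, Integer> e : connections.entrySet()) {
                if (e.getValue() < 2) {
                    System.err.println("warning: net " + e.getKey() + " has only "
                            + e.getValue() + " connection(s)");
                }
            }
        }
    }

That's the whole tool, and you could write the same thing in a page of Python or C++ just as easily.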
I'm hoping that everyone will kind of jump into areas that they are interested in, but in the short term we need to plan a little so that we're not duplicating too much work and everyone knows what everyone else is planning. For my part, I'm sort of waiting for us to get a first cut of a language specification done before I do anything, but there are several (for example) visualization tools that I plan to start messing with, and I know both Pete and I have considered that long term we'd like to have a simpler core parser that is easy to build and maintain. Any thoughts? What do you guys see as the steps forward to make this happen? What is everyone working on or planning to work on? |
|
From: Joshua M. <jos...@gm...> - 2012-10-26 01:19:19
|
Hey Wes, I was wondering: branches should be copies of the trunk used for experimentation, right? So branches should be created and then committed to, in order to test different ideas without necessarily changing the main development code. I just wanted to make sure that was right. I actually just created the branch folder again in the repository for that purpose; let me know if I should change that back. Thanks, Josh |
|
From: Brent N. <bre...@by...> - 2012-10-24 14:42:15
|
Hi Pete, Good questions. The BNF could be either posted on the wiki as a language spec or it could be checked into SVN. There are advantages to both. Depending on the parser generator used, the input you give it can be very close to the BNF. But, usually it is in a slightly different form based on the requirements of the tool. For example, the language spec you give a parser generator will have "annotations" in it to control the building of parse trees and other associated data structures needed by later phases of the compiler. Some generators do much of that for you automatically (ANTLR is of this type but requires two grammars - one for the language and one to control the building of the parse tree). Others give you very little and require you to manually define the intermediate form data structure you want to build and include code mixed in with the rules to do that (lex/yacc and flex/bison are in this category). I have used both. The 2.0 compiler is based on Antlr. If you go to the old documentation page (phdl.sf.net) and select the old documentation and then go to the Legacy tab and then into Documentation you will see that we have a set of RR diagrams there for the 2.0 language. That was automatically generated by the Antlr tool. The reason the 2.1 grammar was generated by hand (by Josh) is that XText doesn't have a similar documentation producing function. Another consideration is that a grammar tells what is considered legal syntax but does not capture much about the language that can only be discovered later in the compilation process. One example is the requirement that a program variable be declared before use. The BNF tells how variables are legally declared and also how they can be used in expressions and assigned to. But, it does not capture the "declare before use" requirement. That is implemented by maintaining a symbol table with scoping capabilities in the compiler. When a variable is declared that declaration is put into the symbol table and when a variable is used the symbol can be interrogated to find its declaration. That is why there are inevitably additional descriptions (formal or otherwise) that go with a BNF to help capture some of this other information. Brent > Guys, > > Thank you for writing out the BNF for PHDL. I find it actually quite > readable with the comments in there. I have a few questions. > > Will that BNF be checked in somewhere into the repository? > > Will that BNF be true source code or is it just for documentation? > > Can that BNF be used as source to parser generators like bison, yacc or > Antlr? > > Pete > > > -----Original Message----- > From: Brent Nelson [mailto:bre...@by...] > Sent: Tuesday, October 23, 2012 6:57 PM > To: Wesley J. Landaker > Cc: <phd...@li...> > Subject: [phdl-devel] BNF for PHDL Language > > Hi All, > > Below is a BNF for PHDL. It was created from the 2.1 version of the tool by > Josh. > > As has been pointed out, you need both a grammar spec as well as additional > description to really understand a language. We have added comments to some > of the rules to help describe them and also point out some of the discussion > points of recent emails that show where we may have differences of opinions > on various language features. > > Moving forward, it seems that the next steps include: > > 1. Everyone looking at it and deciding if this is what we want for language > version 3.0 and, if not, how we might want to change it. > > 2. 
Create additional verbiage to help describe how various constructs are to > be interpreted (the non-BNF stuff). This could be done with comments or we > could adopt some other mechanism (ideas?) > > By of clarification and to answer Pete's questions: > > 1. This is what the Eclipse plugin version implements. > > 2. The 2.0 version of the language has two major differences and a few nits. > The major differences are: > - 2.1 has packages and imports instead of the text includes of 2.0 > - the 2.0 grammar doesn't have the RefTail rule (meaning you can't > reach arbitrarily deep into the hierarchy to modify attributes > Josh has documented all the differences between 2.0 and 2.1 and it is > in the SVN repository under /trunk/doc. > > Brent and Josh > > ////////////////////////////////////////////////////////////////////// > // Parser Rules > ////////////////////////////////////////////////////////////////////// > > PHDLDesign ::= Import* ( Package | Device | Design )* > // All imports need to come before the remaining declarations > > Import ::= "import" QualifiedNameWithWildCard ";" > // This is how you include packages of devices and subdesigns that > // were previously defined in other files. > > Package ::= "package" ID "{" Import* ( Device | Design )* "}" > // Packages can contain imports and device and design declarations > > Device ::= "device" PhdlID "{" DeviceElement* "}" > > DeviceElement ::= Attr | Pin | Info > > Attr ::= "attr" ID "=" STRING ";" > // The ID and value associated with an attribute is strictly > // user-defined. However, there are some required attributes > // including: REFPREFIX, FOOTPRINT, and LIBRARY. By convention, > // they are upper case but that is not required and they are not > // case sensitive. There has been some discussion about whether > // VALUE and Value should be the same or different attributes and > // whether the user should be warned. The current tools give a > // warning and then combine them. > > Pin ::= PinType [Vector] PhdlID "=" "{" PhdlID [( "," PhdlID )*] "}" ";" > // Pins can be single bits wide or can be vectors of multiple bits. > // The values in { } are the physical pin names presumably taken > // from the footprint in the library.. > > PinType ::= "pin" > | "inpin" > | "outpin" > | "iopin" > | "pwrpin" > | "suppin" > | "ocpin" > | "oepin" > | "tripin" > | "passpin" > | "ncpin" > // The purpose of the pin directions is to allow for ERC where it > // checks for multiple outpins on a net, for example. To date, all > // designs have just used "pin" but an ERC has been implemented in > // the 2.1 tools. > > Info ::= "info" "{" STRING "}" > // The purpose of info is to allow the designer to attach > // information to specific devices, nets, ports, designs/subdesigns, > // and instances/subinstances. This gets propagated to the output > // files and can be used to communicate with the layout engineer. > > Design ::= ( "design" ID "{" DesignElement* "}" ) > | ( "subdesign" ID "{" SubDesignElement* "}" ) > // This rule encompasses both design declarations and subdesign > declarations > > DesignElement ::= NetDeclaration | Instance | ConnectionAssign | Info > // Designs can contain nets, instances, info statements, and > // net-to-net assignments. > > SubDesignElement ::= NetDeclaration | Instance | ConnectionAssign | Info | > PortDeclaration > // Subdesigns can contain everything designs can plus port declarations. 
> > NetDeclaration ::= "net" [Vector] ConnectionName ( "," ConnectionName )* > ( "{" NetElement* "}" ) | ";" > NetElement ::= Attr | Info > // Net declarations can be multiple bits wide. Muliple names in one > // declaration are all of that type. If desired, attributes or info > // can be attached to the nets as in: > // net a, b, c { info { "These adsfgkjadsfljkas df" }} > > > PortDeclaration ::= "port" [Vector] ConnectionName ( "," ConnectionName )* ( > "{" > Info* "}" ) | ";" > // These are similar to net declarations but bring out signals for > subdesigns > > > Instance ::= ( "inst" [Array] ID "of" QualifiedName "{" > InstanceElement* "}" ) > | ( "subinst" [Array] ID "of" QualifiedName STRING "{" > SubInstanceElement* "}" ) > > InstanceElement ::= Attr | RefAttr | PinAssign | Info > > SubInstanceElement ::= Attr | SubAttr | PortAssign | Info > > RefAttr ::= [Qualifier] ID "=" STRING ";" > // This is when you refer to and modify an existing attribute on a > // device. That is, you can over-ride the attribute value > // specifiecd when the device was declared. > // In constrast, the Attr rule allows you to create a new attribute > // (either in a device decl or when instancing). > > PinAssign ::= CombinedAssign | QualifiedAssign > > CombinedAssign ::= "combine" "(" QualifiedReference ")" "=" Concatenation > ";" > > QualifiedAssign ::= QualifiedReference "=" Concatenation ";" > > QualifiedReference ::= [Qualifier] PhdlID [Slices] > > SubAttr ::= [Qualifier] ID [Indices] RefTail "=" STRING ";" > // This is how you reach down into the hierarchy (arbitrarily deep) > // and change an attribute value on an instance > > RefTail ::= "." ID [Indices] [RefTail] > > PortAssign ::= CombinedPortAssign | QualifiedPortAssign > // Port assignments closely parallel pin assignments > > CombinedPortAssign ::= "combine" "(" QualifiedPortReference ")" "=" > Concatenation ";" > > QualifiedPortAssign ::= QualifiedPortReference "=" Concatenation ";" > > QualifiedPortReference ::= [Qualifier] ConnectionName [Slices] > > ConnectionAssign ::= ConnectionName [Slices] "=" Concatenation ";" > > Concatenation ::= ( "{" ConnectionRef ( "," ConnectionRef )* "}" ) | > ( ConnectionRef ( "&" ConnectionRef )* ) | > ( "<" ConnectionRef ">" ) | > ( ConnectionRef "*" ) | > ( "open" ) > // A concatenation can take a number of forms: > // {a, b, c} or a & b & c which are similar to Verilog's - the signals are > lined up left-to-right to make a bus > // <a> or a* which make as many copies of the signal as needed for the LHS > of the assignment statement > // "open" means that the pin or port is purposely being left unconnected > > ConnectionRef ::= ConnectionName [Slices] > > ConnectionName ::= PhdlID > > Indices ::= "(" ( ( INT ":" INT ) | ( INT ( "," INT )* ) ) ")" > // Round braces are used for arrays of things (instances or subdesigns) > > Slices ::= "[" ( ( INT ":" INT ) | ( INT ( "," INT )* ) ) "]" > // Square braces are used for multi-bit things > > Vector ::= "[" INT ":" INT "]" > > Array ::= "(" INT ";" INT ")" > > Range ::= INT ":" INT > // Ranges for arrays, vectors, slices, and indices can be either > // ascending or descending. They imply a left-to-right ordering. > > Qualifier ::= "this" [Indices] "." > > QualifiedNameWithWildCard ::= QualifiedName ["." "*"] > // These are used when importing packages > > QualfiedName ::= PhdlID [( "." 
PhdlID )] > > PhdlID ::= INT | ID | PINUM > > //////////////////////////////////////////////////////////////////////////// > // Lexing Rules > //////////////////////////////////////////////////////////////////////////// > > ID: ('a'..'z'|'A'..'Z'|'_') ('a'..'z'|'A'..'Z'|'_'|'0'..'9')* > INT: ('0'..'9') | (('1'..'9') ('0'..'9')+ ) > PINNUM: ('0'..'9'|'a'..'z'|'A'..'Z'|'_'|'+'|'-'|'$'|'/'|'@'|'!')+ > > STRING: '"' ( '\\' ( 'b'|'t'|'n'|'f'|'r'|'u'|'"'|"'"|'\\' ) | !( '\\'|'"' ) > )* '"' | > "'" ( '\\' ( 'b'|'t'|'n'|'f'|'r'|'u'|'"'|"'"|'\\' ) | !( '\\'|"'" ) > )* "'" > > ML_COMMENT: '/*' - '*/' > SL_COMMENT: '//' !( '\n'|'\r' )* [( ['\r'] '\n' )] > WS: ( ' '|'\t'|'\r'|'\n' )+ > > > On Oct 22, 2012, at 4:51 PM, Wesley J. Landaker wrote: > >> On Monday, October 22, 2012 14:40:48 Joshua Mangelson wrote: >>> Just as a word of caution. The Railroad diagrams currently on the >>> website were generated before a few changes to the grammar were made, >>> so there are a few things that may be different or incorrect about >>> the railroad diagrams. >>> >>> We'll be mailing a BNF to everyone in just a bit. >> >> Okay, looking forward to it! =) |
|
From: Peter D. <hd...@ho...> - 2012-10-24 08:33:36
|
Guys,
Thank you for writing out the BNF for PHDL. I find it actually quite
readable with the comments in there. I have a few questions.
Will that BNF be checked in somewhere into the repository?
Will that BNF be true source code or is it just for documentation?
Can that BNF be used as source to parser generators like bison, yacc or
Antlr?
Pete
-----Original Message-----
From: Brent Nelson [mailto:bre...@by...]
Sent: Tuesday, October 23, 2012 6:57 PM
To: Wesley J. Landaker
Cc: <phd...@li...>
Subject: [phdl-devel] BNF for PHDL Language
Hi All,
Below is a BNF for PHDL. It was created from the 2.1 version of the tool by
Josh.
As has been pointed out, you need both a grammar spec as well as additional
description to really understand a language. We have added comments to some
of the rules to help describe them and also point out some of the discussion
points of recent emails that show where we may have differences of opinions
on various language features.
Moving forward, it seems that the next steps include:
1. Everyone looking at it and deciding if this is what we want for language
version 3.0 and, if not, how we might want to change it.
2. Create additional verbiage to help describe how various constructs are to
be interpreted (the non-BNF stuff). This could be done with comments or we
could adopt some other mechanism (ideas?)
By way of clarification and to answer Pete's questions:
1. This is what the Eclipse plugin version implements.
2. The 2.0 version of the language has two major differences and a few nits.
The major differences are:
- 2.1 has packages and imports instead of the text includes of 2.0
- the 2.0 grammar doesn't have the RefTail rule (meaning you can't
reach arbitrarily deep into the hierarchy to modify attributes)
Josh has documented all the differences between 2.0 and 2.1 and it is
in the SVN repository under /trunk/doc.
Brent and Josh
//////////////////////////////////////////////////////////////////////
// Parser Rules
//////////////////////////////////////////////////////////////////////
PHDLDesign ::= Import* ( Package | Device | Design )*
// All imports need to come before the remaining declarations
Import ::= "import" QualifiedNameWithWildCard ";"
// This is how you include packages of devices and subdesigns that
// were previously defined in other files.
Package ::= "package" ID "{" Import* ( Device | Design )* "}"
// Packages can contain imports and device and design declarations
Device ::= "device" PhdlID "{" DeviceElement* "}"
DeviceElement ::= Attr | Pin | Info
Attr ::= "attr" ID "=" STRING ";"
// The ID and value associated with an attribute is strictly
// user-defined. However, there are some required attributes
// including: REFPREFIX, FOOTPRINT, and LIBRARY. By convention,
// they are upper case but that is not required and they are not
// case sensitive. There has been some discussion about whether
// VALUE and Value should be the same or different attributes and
// whether the user should be warned. The current tools give a
// warning and then combine them.
Pin ::= PinType [Vector] PhdlID "=" "{" PhdlID [( "," PhdlID )*] "}" ";"
// Pins can be single bits wide or can be vectors of multiple bits.
// The values in { } are the physical pin names presumably taken
// from the footprint in the library.
PinType ::= "pin"
| "inpin"
| "outpin"
| "iopin"
| "pwrpin"
| "suppin"
| "ocpin"
| "oepin"
| "tripin"
| "passpin"
| "ncpin"
// The purpose of the pin directions is to allow for ERC where it
// checks for multiple outpins on a net, for example. To date, all
// designs have just used "pin" but an ERC has been implemented in
// the 2.1 tools.
Info ::= "info" "{" STRING "}"
// The purpose of info is to allow the designer to attach
// information to specific devices, nets, ports, designs/subdesigns,
// and instances/subinstances. This gets propagated to the output
// files and can be used to communicate with the layout engineer.
Design ::= ( "design" ID "{" DesignElement* "}" )
| ( "subdesign" ID "{" SubDesignElement* "}" )
// This rule encompasses both design declarations and subdesign
declarations
DesignElement ::= NetDeclaration | Instance | ConnectionAssign | Info
// Designs can contain nets, instances, info statements, and
// net-to-net assignments.
SubDesignElement ::= NetDeclaration | Instance | ConnectionAssign | Info |
PortDeclaration
// Subdesigns can contain everything designs can plus port declarations.
NetDeclaration ::= "net" [Vector] ConnectionName ( "," ConnectionName )*
( "{" NetElement* "}" ) | ";"
NetElement ::= Attr | Info
// Net declarations can be multiple bits wide. Muliple names in one
// declaration are all of that type. If desired, attributes or info
// can be attached to the nets as in:
// net a, b, c { info { "These adsfgkjadsfljkas df" }}
PortDeclaration ::= "port" [Vector] ConnectionName ( "," ConnectionName )* (
"{"
Info* "}" ) | ";"
// These are similar to net declarations but bring out signals for
subdesigns
Instance ::= ( "inst" [Array] ID "of" QualifiedName "{"
InstanceElement* "}" )
| ( "subinst" [Array] ID "of" QualifiedName STRING "{"
SubInstanceElement* "}" )
InstanceElement ::= Attr | RefAttr | PinAssign | Info
SubInstanceElement ::= Attr | SubAttr | PortAssign | Info
RefAttr ::= [Qualifier] ID "=" STRING ";"
// This is when you refer to and modify an existing attribute on a
// device. That is, you can over-ride the attribute value
// specifiecd when the device was declared.
// In constrast, the Attr rule allows you to create a new attribute
// (either in a device decl or when instancing).
PinAssign ::= CombinedAssign | QualifiedAssign
CombinedAssign ::= "combine" "(" QualifiedReference ")" "=" Concatenation
";"
QualifiedAssign ::= QualifiedReference "=" Concatenation ";"
QualifiedReference ::= [Qualifier] PhdlID [Slices]
SubAttr ::= [Qualifier] ID [Indices] RefTail "=" STRING ";"
// This is how you reach down into the hierarchy (arbitrarily deep)
// and change an attribute value on an instance
RefTail ::= "." ID [Indices] [RefTail]
PortAssign ::= CombinedPortAssign | QualifiedPortAssign
// Port assignments closely parallel pin assignments
CombinedPortAssign ::= "combine" "(" QualifiedPortReference ")" "="
Concatenation ";"
QualifiedPortAssign ::= QualifiedPortReference "=" Concatenation ";"
QualifiedPortReference ::= [Qualifier] ConnectionName [Slices]
ConnectionAssign ::= ConnectionName [Slices] "=" Concatenation ";"
Concatenation ::= ( "{" ConnectionRef ( "," ConnectionRef )* "}" ) |
( ConnectionRef ( "&" ConnectionRef )* ) |
( "<" ConnectionRef ">" ) |
( ConnectionRef "*" ) |
( "open" )
// A concatenation can take a number of forms:
// {a, b, c} or a & b & c which are similar to Verilog's - the signals are
lined up left-to-right to make a bus
// <a> or a* which make as many copies of the signal as needed for the LHS
of the assignment statement
// "open" means that the pin or port is purposely being left unconnected
ConnectionRef ::= ConnectionName [Slices]
ConnectionName ::= PhdlID
Indices ::= "(" ( ( INT ":" INT ) | ( INT ( "," INT )* ) ) ")"
// Round braces are used for arrays of things (instances or subdesigns)
Slices ::= "[" ( ( INT ":" INT ) | ( INT ( "," INT )* ) ) "]"
// Square braces are used for multi-bit things
Vector ::= "[" INT ":" INT "]"
Array ::= "(" INT ";" INT ")"
Range ::= INT ":" INT
// Ranges for arrays, vectors, slices, and indices can be either
// ascending or descending. They imply a left-to-right ordering.
Qualifier ::= "this" [Indices] "."
QualifiedNameWithWildCard ::= QualifiedName ["." "*"]
// These are used when importing packages
QualfiedName ::= PhdlID [( "." PhdlID )]
PhdlID ::= INT | ID | PINUM
////////////////////////////////////////////////////////////////////////////
// Lexing Rules
////////////////////////////////////////////////////////////////////////////
ID: ('a'..'z'|'A'..'Z'|'_') ('a'..'z'|'A'..'Z'|'_'|'0'..'9')*
INT: ('0'..'9') | (('1'..'9') ('0'..'9')+ )
PINNUM: ('0'..'9'|'a'..'z'|'A'..'Z'|'_'|'+'|'-'|'$'|'/'|'@'|'!')+
STRING: '"' ( '\\' ( 'b'|'t'|'n'|'f'|'r'|'u'|'"'|"'"|'\\' ) | !( '\\'|'"' )
)* '"' |
"'" ( '\\' ( 'b'|'t'|'n'|'f'|'r'|'u'|'"'|"'"|'\\' ) | !( '\\'|"'" )
)* "'"
ML_COMMENT: '/*' - '*/'
SL_COMMENT: '//' !( '\n'|'\r' )* [( ['\r'] '\n' )]
WS: ( ' '|'\t'|'\r'|'\n' )+
On Oct 22, 2012, at 4:51 PM, Wesley J. Landaker wrote:
> On Monday, October 22, 2012 14:40:48 Joshua Mangelson wrote:
>> Just as a word of caution. The Railroad diagrams currently on the
>> website were generated before a few changes to the grammar were made,
>> so there are a few things that may be different or incorrect about
>> the railroad diagrams.
>>
>> We'll be mailing a BNF to everyone in just a bit.
>
> Okay, looking forward to it! =)
> ----------------------------------------------------------------------
> -------- Everyone hates slow websites. So do we.
> Make your web apps faster with AppDynamics Download AppDynamics Lite
> for free today:
> http://p.sf.net/sfu/appdyn_sfd2d_oct__________________________________
> _____________
> phdl-devel mailing list
> phd...@li...
> https://lists.sourceforge.net/lists/listinfo/phdl-devel
----------------------------------------------------------------------------
--
Everyone hates slow websites. So do we.
Make your web apps faster with AppDynamics Download AppDynamics Lite for
free today:
http://p.sf.net/sfu/appdyn_sfd2d_oct
_______________________________________________
phdl-devel mailing list
phd...@li...
https://lists.sourceforge.net/lists/listinfo/phdl-devel
|
|
From: Brent N. <bre...@by...> - 2012-10-23 16:57:56
|
Hi All,
Below is a BNF for PHDL. Josh created it from the 2.1 version of the tool.
As has been pointed out, you need both a grammar spec and additional description to really understand a language. We have added comments to some of the rules to help describe them and to point out discussion points from recent emails where we may have differences of opinion on various language features.
Moving forward, it seems that the next steps include:
1. Everyone looking at it and deciding whether this is what we want for language version 3.0 and, if not, how we might want to change it.
2. Creating additional verbiage to describe how the various constructs are to be interpreted (the non-BNF material). This could be done with comments, or we could adopt some other mechanism (ideas?)
By way of clarification, and to answer Pete's questions:
1. This is what the Eclipse plugin version implements.
2. The 2.0 version of the language has two major differences and a few nits. The major differences are:
- 2.1 has packages and imports instead of the text includes of 2.0
- the 2.0 grammar doesn't have the RefTail rule (meaning you can't reach arbitrarily deep into the hierarchy to modify attributes)
Josh has documented all the differences between 2.0 and 2.1; that write-up is in the SVN repository under /trunk/doc.
Brent and Josh
//////////////////////////////////////////////////////////////////////
// Parser Rules
//////////////////////////////////////////////////////////////////////
PHDLDesign ::= Import* ( Package | Device | Design )*
// All imports need to come before the remaining declarations
Import ::= "import" QualifiedNameWithWildCard ";"
// This is how you include packages of devices and subdesigns that
// were previously defined in other files.
Package ::= "package" ID "{" Import* ( Device | Design )* "}"
// Packages can contain imports and device and design declarations
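// As a purely illustrative sketch (package, device, and pin names are
// invented), one file might declare:
//   package resistors {
//     device res0402 { attr REFPREFIX = "R"; pin a = {1}; pin b = {2}; }
//   }
// and a design file elsewhere could then pull it in with:
//   import resistors.*;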
Device ::= "device" PhdlID "{" DeviceElement* "}"
DeviceElement ::= Attr | Pin | Info
Attr ::= "attr" ID "=" STRING ";"
// The ID and value associated with an attribute are strictly
// user-defined. However, there are some required attributes
// including: REFPREFIX, FOOTPRINT, and LIBRARY. By convention,
// they are upper case but that is not required and they are not
// case sensitive. There has been some discussion about whether
// VALUE and Value should be the same or different attributes and
// whether the user should be warned. The current tools give a
// warning and then combine them.
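// A hypothetical device header (values invented) covering the required
// attributes might read:
//   attr REFPREFIX = "U";
//   attr FOOTPRINT = "SOIC8";
//   attr LIBRARY = "my_lib";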
Pin ::= PinType [Vector] PhdlID "=" "{" PhdlID [( "," PhdlID )*] "}" ";"
// Pins can be a single bit wide or can be vectors of multiple bits.
// The values in { } are the physical pin names, presumably taken
// from the footprint in the library.
PinType ::= "pin"
| "inpin"
| "outpin"
| "iopin"
| "pwrpin"
| "suppin"
| "ocpin"
| "oepin"
| "tripin"
| "passpin"
| "ncpin"
// The purpose of the pin directions is to allow ERC to check, for
// example, for multiple outpins on the same net. To date, all
// designs have just used "pin", but an ERC has been implemented in
// the 2.1 tools.
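// For illustration only (part and pin numbers invented; required
// attributes omitted), a device mixing pin types and a vector pin:
//   device buffer8 {
//     pwrpin vcc = {20};
//     pwrpin gnd = {10};
//     inpin  [7:0] a = {2, 3, 4, 5, 6, 7, 8, 9};
//     outpin [7:0] y = {18, 17, 16, 15, 14, 13, 12, 11};
//   }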
Info ::= "info" "{" STRING "}"
// The purpose of info is to allow the designer to attach
// information to specific devices, nets, ports, designs/subdesigns,
// and instances/subinstances. This gets propagated to the output
// files and can be used to communicate with the layout engineer.
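// For example (text invented), a note aimed at the layout engineer:
//   info { "Place the bypass caps within 5 mm of U1" }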
Design ::= ( "design" ID "{" DesignElement* "}" )
| ( "subdesign" ID "{" SubDesignElement* "}" )
// This rule encompasses both design declarations and subdesign declarations
DesignElement ::= NetDeclaration | Instance | ConnectionAssign | Info
// Designs can contain nets, instances, info statements, and
// net-to-net assignments.
SubDesignElement ::= NetDeclaration | Instance | ConnectionAssign | Info | PortDeclaration
// Subdesigns can contain everything designs can plus port declarations.
NetDeclaration ::= "net" [Vector] ConnectionName ( "," ConnectionName )*
( "{" NetElement* "}" ) | ";"
NetElement ::= Attr | Info
// Net declarations can be multiple bits wide. Multiple names in one
// declaration are all of that type. If desired, attributes or info
// can be attached to the nets as in:
// net a, b, c { info { "Notes for the layout engineer go here" }}
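// An invented example of a vector net alongside single-bit nets:
//   net [7:0] data;
//   net clk, rst_n;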
PortDeclaration ::= "port" [Vector] ConnectionName ( "," ConnectionName )*
                    ( "{" Info* "}" ) | ";"
// These are similar to net declarations but bring out signals for subdesigns
Instance ::= ( "inst" [Array] ID "of" QualifiedName "{"
InstanceElement* "}" )
| ( "subinst" [Array] ID "of" QualifiedName STRING "{"
SubInstanceElement* "}" )
InstanceElement ::= Attr | RefAttr | PinAssign | Info
SubInstanceElement ::= Attr | SubAttr | PortAssign | Info
RefAttr ::= [Qualifier] ID "=" STRING ";"
// This is when you refer to and modify an existing attribute on a
// device. That is, you can override the attribute value specified
// when the device was declared.
// In contrast, the Attr rule allows you to create a new attribute
// (either in a device decl or when instancing).
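// A sketch of an instance (names and values invented; assumes the device
// declared a VALUE attribute) that overrides one attribute and adds another:
//   inst R1 of resistors.res0402 {
//     VALUE = "4.7k";           // RefAttr: override the declared value
//     attr TOLERANCE = "1%";    // Attr: create a new attribute on this instance
//     a = vin;
//     b = vout;
//   }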
PinAssign ::= CombinedAssign | QualifiedAssign
CombinedAssign ::= "combine" "(" QualifiedReference ")" "=" Concatenation ";"
QualifiedAssign ::= QualifiedReference "=" Concatenation ";"
QualifiedReference ::= [Qualifier] PhdlID [Slices]
SubAttr ::= [Qualifier] ID [Indices] RefTail "=" STRING ";"
// This is how you reach down into the hierarchy (arbitrarily deep)
// and change an attribute value on an instance
RefTail ::= "." ID [Indices] [RefTail]
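// For instance (hierarchy names invented), inside a subinst block
//   power.C3.VALUE = "22uF";
// reaches through the instance "power" to the instance "C3" and
// overrides its VALUE attribute.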
PortAssign ::= CombinedPortAssign | QualifiedPortAssign
// Port assignments closely parallel pin assignments
CombinedPortAssign ::= "combine" "(" QualifiedPortReference ")" "=" Concatenation ";"
QualifiedPortAssign ::= QualifiedPortReference "=" Concatenation ";"
QualifiedPortReference ::= [Qualifier] ConnectionName [Slices]
ConnectionAssign ::= ConnectionName [Slices] "=" Concatenation ";"
Concatenation ::= ( "{" ConnectionRef ( "," ConnectionRef )* "}" ) |
( ConnectionRef ( "&" ConnectionRef )* ) |
( "<" ConnectionRef ">" ) |
( ConnectionRef "*" ) |
( "open" )
// A concatenation can take a number of forms:
// {a, b, c} or a & b & c which are similar to Verilog's - the signals are lined up left-to-right to make a bus
// <a> or a* which make as many copies of the signal as needed for the LHS of the assignment statement
// "open" means that the pin or port is purposely being left unconnected
ConnectionRef ::= ConnectionName [Slices]
ConnectionName ::= PhdlID
Indices ::= "(" ( ( INT ":" INT ) | ( INT ( "," INT )* ) ) ")"
// Parentheses are used for arrays of things (instances or subdesigns)
Slices ::= "[" ( ( INT ":" INT ) | ( INT ( "," INT )* ) ) "]"
// Square brackets are used for multi-bit things
Vector ::= "[" INT ":" INT "]"
Array ::= "(" INT ";" INT ")"
Range ::= INT ":" INT
// Ranges for arrays, vectors, slices, and indices can be either
// ascending or descending. They imply a left-to-right ordering.
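// Illustrative uses (names invented):
//   net [7:0] data;           // Vector: an 8-bit net declaration
//   q[3:0] = data[7:4];       // Slices: a descending four-bit slice
//   y[2:0] = data[0, 2, 4];   // Slices: an explicit index list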
Qualifier ::= "this" [Indices] "."
QualifiedNameWithWildCard ::= QualifiedName ["." "*"]
// These are used when importing packages
QualifiedName ::= PhdlID [( "." PhdlID )]
PhdlID ::= INT | ID | PINNUM
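// Pulling several rules together, a minimal (entirely invented) top level:
//   design blinky {
//     net vcc, gnd, drive;
//     inst R1 of resistors.res0402 {
//       a = vcc;
//       b = drive;
//     }
//   }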
////////////////////////////////////////////////////////////////////////////
// Lexing Rules
////////////////////////////////////////////////////////////////////////////
ID: ('a'..'z'|'A'..'Z'|'_') ('a'..'z'|'A'..'Z'|'_'|'0'..'9')*
INT: ('0'..'9') | (('1'..'9') ('0'..'9')+ )
PINNUM: ('0'..'9'|'a'..'z'|'A'..'Z'|'_'|'+'|'-'|'$'|'/'|'@'|'!')+
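// (For instance, pin names such as "+5V", "!RESET", or "A/B" match PINNUM
// but not ID.)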
STRING: '"' ( '\\' ( 'b'|'t'|'n'|'f'|'r'|'u'|'"'|"'"|'\\' ) | !( '\\'|'"' ) )* '"' |
"'" ( '\\' ( 'b'|'t'|'n'|'f'|'r'|'u'|'"'|"'"|'\\' ) | !( '\\'|"'" ) )* "'"
ML_COMMENT: '/*' - '*/'
SL_COMMENT: '//' !( '\n'|'\r' )* [( ['\r'] '\n' )]
WS: ( ' '|'\t'|'\r'|'\n' )+
On Oct 22, 2012, at 4:51 PM, Wesley J. Landaker wrote:
> On Monday, October 22, 2012 14:40:48 Joshua Mangelson wrote:
>> Just as a word of caution. The Railroad diagrams currently on the website
>> were generated before a few changes to the grammar were made, so there
>> are a few things that may be different or incorrect about the railroad
>> diagrams.
>>
>> We'll be mailing a BNF to everyone in just a bit.
>
> Okay, looking forward to it! =)
|
|
From: Wesley J. L. <wj...@ic...> - 2012-10-22 22:51:29
|
On Monday, October 22, 2012 14:40:48 Joshua Mangelson wrote:
> Just as a word of caution. The Railroad diagrams currently on the website
> were generated before a few changes to the grammar were made, so there
> are a few things that may be different or incorrect about the railroad
> diagrams.
>
> We'll be mailing a BNF to everyone in just a bit.
Okay, looking forward to it! =)
|