From: Stefan S. <se...@sy...> - 2004-06-04 13:06:02
Grzegorz Jakacki wrote:

>> * provide a much better regression test coverage on different levels so we can
>>   measure to what degree changes break compatibility (some will be unavoidable)
>
> This seems to be a lot of work.

but it's worth it, I believe.

[...ptree -> AST abstraction...]

> I can see potential problems where existing code keeps e.g. definitions and
> expressions on the same list, which now is a list of Ptree*, but could be no
> more if we want to inject more type information.

agreed. On the other hand, I'm not suggesting that we change the ptree structure, just that we make the ptree class tree richer (type- and API-wise) so that users *can* use the high-level type information and API. Walking the ptree via the 'untyped' Ptree nodes will still be possible.

>> * I'm looking into the 'ctool' code I imported into synopsis.
>>   http://synopsis.fresco.org/viewsvn/synopsis-Synopsis/trunk/Synopsis/Parsers/C/
>>   contains a class hierarchy for C, which shouldn't be that different from a C++ grammar,
>>   so it may serve as inspiration.
>
> Observe that existing Ptree classes already constitute a hierarchy
> (however the majority of the code flattens it by upcasting to Leaf/NonLeaf).

exactly. I realize there is an attempt to get more type info into the ptree, but it doesn't look complete. I think completeness could/should be defined by the criterion that I can write a Visitor that is able to *completely* traverse the ptree, recovering all the type info, without ever touching the methods 'Cdr()' and 'Car()'.

> Not at all. Node<IfStatement> is meant to work as a smart pointer to
> Ptree. It is to be used by value and not stored anywhere, e.g.:
>
>     Node<Definition> d = ParseDefinition("int main() {}");
>
>     string id = d.GetIdentifier(); // instead of d->Cdr()->Car()-> etc.

hmm, but then the user already knows that he is parsing a definition, so the type info isn't adding much value. Read: I want an *abstract* syntax tree, such that I can run 'StatementList *ast = parse(my_file);' and then inspect the returned 'ast' with some custom visitors. In particular, if I want to expose this AST to a scripting frontend such as Python, it is impractical to have these wrapper classes be temporary objects, as that would make the binding quite complex and slow.

> Example 1: You decide to change the structure of the Ptree nodes
> representing some C++ construct (e.g. to store more information in it).
> This will break all clients depending directly on Ptree, because
> they need to update Car/Cdr paths to reflect the new structure. However,
> if there is a Node<> iface in the middle, you update Car/Cdr paths
> only in Node<>.

But what about the Node's type? If my wrapped ptree is a declaration, but by simply modifying the ptree I change it to be a function call, the wrapper's type ('Node<Declaration>') would be wrong. Of course, if we expose the two APIs in parallel we'll always have this problem. Maybe the ptree API should not expose any modifiers, i.e. exclusively operate on const ptrees.

> Example 2: You add a new kind of node, e.g. for the "using" declaration.
> All existing clients of Node<> will break if you just add the new type to
> the Node visitor. However, you can branch the Node<> iface into two versions:

That's a good point. The Visitor pattern really assumes the type hierarchy of the visited objects to be stable.

[...]

>> Of course, such a move is not a simple decision. You really have to evaluate
>> what future directions you want to take, and whether that fits with the
>> synopsis framework. As I said, my interest in opencxx is in the context
>> of synopsis, so I'll probably do most of my work from there. Please take
>> this as an offer and invitation, not an attempt to fork your project.
>
> This requires some thought indeed.

There is no need for a quick decision. I'm just observing that right now we are each working on a separate branch, so merging them would be practical, and even more so if we are going to look into adapting qmtest as a unit testing framework. (Right now I don't do unit testing on the opencxx backend, just on the generated synopsis AST, which I dump.) The most important thing, I believe, is that we define what we each expect from opencxx (and synopsis) in the future, i.e. whether we are aiming at the same things, and whether the common goals suggest that we both work from a common code base, or whether the overlap is simply not large enough to be worth a merge.

>> It shouldn't be hard, and it should be possible to do it incrementally.
>> The important step is some basic restructuring of the Parser class into
>> an interface and an implementation of the statements that are common to
>> all flavours of C and C++, putting the rest into subclasses (K&R, ANSI C, C89, etc.)
>
> An important thing to consider here is whether we want OpenC++ to be validating.
> If not, then we don't need to bother that "class" occurs in C source.
> Personally I think that a validating parser/elaborator is much harder to write,
> and IMO this should not be our priority, since we have no chance of reaching
> the quality of validation that e.g. g++, MSVC or EDG present today.

I don't understand what you mean by 'validate'. opencxx (and ctool) are well able to indicate 'parse errors', even though the error messages are not very high level, as that would indeed require more language-specific semantics to be available to the parser (or to the object that is trying to issue a meaningful error message). What do you mean by 'bother that "class" occurs in C'? Right now the lexer will return a 'CLASS' token if it runs into the 'class' string. That doesn't make sense in C, where it would have to be an ordinary identifier. Similarly for all the other keywords. So removing those C++-specific keywords when scanning in C mode sounds like the (easy) first step towards C compatibility.

> However, we do have a chance to do something new and useful in providing
> a refactoring tool and frontend libraries, and I think this should be our
> focus now.

agreed. I'm just wondering how much effort it would be to extend opencxx's scope to C, as I can see a lot of use for a tool like that, for example for the GNOME folks.

Regards, Stefan
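To make the C-mode idea concrete, a minimal sketch of such keyword filtering could look as follows; the names (TokenKind, classify, the keyword table) are invented for illustration and do not correspond to OpenC++'s actual Lexer code.

    #include <cstddef>
    #include <cstring>

    // Hypothetical token kinds and language modes (not OpenC++'s real enums).
    enum TokenKind { TK_IDENTIFIER, TK_CLASS, TK_VIRTUAL, TK_TEMPLATE, TK_IF };
    enum Language  { LANG_C, LANG_CXX };

    struct Keyword { const char* text; TokenKind kind; bool cxx_only; };

    // 'class', 'virtual', 'template' are keywords only when scanning C++.
    static const Keyword keywords[] = {
        { "class",    TK_CLASS,    true  },
        { "virtual",  TK_VIRTUAL,  true  },
        { "template", TK_TEMPLATE, true  },
        { "if",       TK_IF,       false },
    };

    TokenKind classify(const char* text, Language lang)
    {
        for (std::size_t i = 0; i < sizeof(keywords) / sizeof(keywords[0]); ++i)
            if (std::strcmp(text, keywords[i].text) == 0)
                // In C mode, C++-only keywords degrade to ordinary identifiers.
                return (keywords[i].cxx_only && lang == LANG_C) ? TK_IDENTIFIER
                                                                : keywords[i].kind;
        return TK_IDENTIFIER;
    }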
From: Grzegorz J. <ja...@he...> - 2004-06-04 09:56:51
Hi!

On Fri, 4 Jun 2004, Stefan Seefeld wrote:

> Grzegorz Jakacki wrote:
>
>> Nevertheless, I think that *replacing* the Ptree hierarchy with a more typed
>> form will be extremely difficult, because:
>
> [...]
>
> I fully agree with your arguments. I believe that before we start to
> undertake any serious effort to refactor this stuff, we need to
>
> * document the existing API to be able to fully understand what the individual
>   types / methods do and what ramifications any change would have on those types.

Agreed. I have begun documenting Ptree and PtreeUtils; this is going to be my major focus after the merge.

> * provide a much better regression test coverage on different levels so we can
>   measure to what degree changes break compatibility (some will be unavoidable)

This seems to be a lot of work.

> Once we have a good grasp of the complete workflow involved in the implementation
> of the various use cases (parse tree / syntax tree construction,
> code generation, etc.) we can suggest migration paths that provide optimal control.
>
>> (1) Find out and write down mappings from the "less typed" AST
>>     to the "more typed" AST, e.g.:
>
> I have two sources of documentation:
>
> * documentation of the Parser API, such as:
>
>     /*
>       definition
>       : null.declaration
>       | typedef
>       | template.decl
>       | metaclass.decl
>       | linkage.spec
>       | namespace.spec
>       | namespace.alias
>       | using.declaration
>       | extern.template.decl
>       | declaration
>     */
>     bool Parser::rDefinition(Ptree*& p)
>     ...
>
> which indicates that the method 'rDefinition' returns a ptree that is a definition
> with the given subtypes as shown in the comment above. Wouldn't it be possible to
> go over all these comments and model a class tree that models this grammar?

Hm, don't know.

> Is that possible at all?

Why not? Each nonterminal translates into an abstract class, each production translates into a concrete class. However, it does not necessarily yield a reasonable AST. Some nonterminals in the grammar are just shortcuts, in particular those having just one production. You don't want them in the AST.

> It seems if the parser is currently able to construct
> a 'Definition' object as a ptree, it should be possible to do the same but with
> more typed objects instead...

I can see potential problems where existing code keeps e.g. definitions and expressions on the same list, which now is a list of Ptree*, but could be no more if we want to inject more type information.

> * I'm looking into the 'ctool' code I imported into synopsis.
>   http://synopsis.fresco.org/viewsvn/synopsis-Synopsis/trunk/Synopsis/Parsers/C/
>   contains a class hierarchy for C, which shouldn't be that different from a C++ grammar,
>   so it may serve as inspiration.

Observe that the existing Ptree classes already constitute a hierarchy (however the majority of the code flattens it by upcasting to Leaf/NonLeaf).

>>     template <>
>>     class Node<IfStatement>
>>     {
>>     public:
>>         Node(Node<Expr> c, Node<Statement> t, Node<Statement> e);
>>         Node<Expr> Cond() { return p_->Cdr()->Car(); }
>>         Node<Statement> Then() { ... }
>>         Node<Statement> Else() { ... }
>>     private:
>>         Node(Ptree* p) : p_(p) {}
>>         friend class AstFactory;
>>         Ptree* p_;
>>     };
>>
>>     class AstFactory
>>     {
>>     public:
>>         template <class T>
>>         static Node<T> Create<T>( /* ... */ );
>>     };
>
> Interesting idea! The price for this non-intrusiveness, however, would be that we
> essentially have to have two parsing stages, the first generating the parse tree,
> the second to add the AST as a superstructure on top. Right?

Not at all. Node<IfStatement> is meant to work as a smart pointer to Ptree. It is to be used by value and not stored anywhere, e.g.:

    Node<Definition> d = ParseDefinition("int main() {}");
    string id = d.GetIdentifier(); // instead of d->Cdr()->Car()-> etc.

or

    Node<Definition> d = AstFactory::Create<FunctionDefinition>(
        ParseType("void")
        , "main"
        , std::vector<Node<ArgDecl> >()
        , std::vector<Node<Statement> >() );

or

    void DoSomething(Node<Expr> n) { ... }

Node<>'s by themselves do not have links to other Node<>'s. Each contains just one wrapped Ptree* and that's it. The only purpose of a node is to keep the downcasting and Cdr/Car magic hidden from a client. As an extra bonus, the wrapped Ptree* could be replaced with boost::shared_ptr or Loki::SmartPtr or whatever without clients even knowing about it.

> I was really thinking of modifying the Parser itself so it only
> generates the typed objects (which could still derive from Ptree so we
> don't have to rewrite everything...)

In fact Ptrees are typed even today (see PtreeIfStatement). The issue is that the elaborator and translator do not use this type information much.

>> (5) At this point we will have a usable, type-safe interface.
>>     Moreover, clients not satisfied with this interface
>>     (e.g. those who prefer multiary "+") will have a chance
>>     to reuse it nonintrusively or just write another
>>     interface to the Ptree structure from scratch.
>
> That is a good point! The generated tree could be visited in terms
> of the ptree, *as well* as a typed syntax tree. A little like the other
> poster referring to the different DOM APIs...
>
>> (6) Having a wrapper interface atop the Ptree structure allows
>>     for changing the Ptree structure without affecting clients
>>     of the wrapper interface. (Changes in Ptree, e.g. addition
>>     of new nodes, can be compensated in the wrapper layer.)
>
> I'm not sure about that. Could you elaborate? In my mind an AST node
> would rely on the specific ptree structure it is wrapping, so if you can
> modify the parse tree underneath, wouldn't that invalidate the AST nodes,
> i.e. break some invariants?

Example 1: You decide to change the structure of the Ptree nodes representing some C++ construct (e.g. to store more information in it). This will break all clients depending directly on Ptree, because they need to update Car/Cdr paths to reflect the new structure. However, if there is a Node<> iface in the middle, you update the Car/Cdr paths only in Node<>.

Example 2: You add a new kind of node, e.g. for the "using" declaration. All existing clients of Node<> will break if you just add the new type to the Node visitor. However, you can branch the Node<> iface into two versions: (a) new, incompatible with existing clients, but exposing "using" in the visitor; (b) old (transitional), which hides the "using" node from clients (or e.g. aborts at an attempt to visit a Node<Declaration> that indeed represents "using"; or maybe the Node<> iface should provide for Node<Unknown> to handle such situations. In some cases there is a reasonable visitation for a new node, e.g. if it has a "decorative" character (in terms of the Decorator pattern), like e.g. a parentheses node.) Old clients would then be able to compile against (b) and work without regressions.

>> I second that. However, I think the interfaces should not be published as
>> they are now, because IMO they are not encapsulating enough. I think the parser
>> interface is OK, but the other interfaces should be seriously reviewed before we
>> commit to them. For example the program object model is very much coupled with
>> the translator. I think we should untangle them first and transform the translator
>> framework into an exemplary, nonprivileged client of the frontend
>> library (libraries).
>
> agreed. On the other hand the whole process of changing the APIs will be
> incremental and iterative, so as long as we don't commit to a fixed and
> stable API I don't see why we can not apply the changes 'in public'.

Everything happens "in public", but some parts are not "published", in the sense that I don't feel we are bound to keep them stable.

>>> As I'm already maintaining an opencxx 'branch' as part of the synopsis
>>> project, I'm experimenting with things there. Synopsis uses subversion, so
>>> directory-layout related refactoring is much simpler than with cvs.
>>
>> As for Subversion in OpenC++: I have heard very positive opinions about
>> Subversion, however I don't see an easy way to move OpenC++ development to
>> Subversion now. Currently we rely on SF.net, which provides CVS,
>> CompileFarm, mailing lists, shell+cron accounts and web hosting (and other
>> features which we don't use at the moment). I don't see any other
>> organization that would provide this level of service and commitment to
>> the OpenC++ project.
>
> Well, as I said, synopsis presently contains a branch of opencxx. As synopsis
> is my central focus, and as for me opencxx could well be part of synopsis
> (with re-defined scope), I could imagine moving development efforts there.
> The synopsis project is hosted by SPI ('Software in the Public Interest')
> and it has its own set of infrastructure tools ('roundup' issue tracker,
> 'qmtest' unit testing framework, 'subversion' configuration management tool,
> 'mailman' mailing lists, etc.)
> Of course, such a move is not a simple decision. You really have to evaluate
> what future directions you want to take, and whether that fits with the
> synopsis framework. As I said, my interest in opencxx is in the context
> of synopsis, so I'll probably do most of my work from there. Please take
> this as an offer and invitation, not an attempt to fork your project.

This requires some thought indeed.

>> I very much support extending the lexer/parser to support full C syntax,
>> however I don't know how much work it takes to get there. What issues do you
>> see?
>
> I see different steps:
>
> * optionally remove the tokens that are not keywords in ordinary C
>   ('class', 'virtual', ...)
>
> * find all expressions that are valid C but not C++, i.e. define
>   additions to the parser that have to be enabled for C but not C++
>
> * do the contrary, i.e. find expressions that are valid C++ but not C...
>
> It shouldn't be hard, and it should be possible to do it incrementally.
> The important step is some basic restructuring of the Parser class into
> an interface and an implementation of the statements that are common to
> all flavours of C and C++, putting the rest into subclasses (K&R, ANSI C, C89, etc.)

An important thing to consider here is whether we want OpenC++ to be validating. If not, then we don't need to bother that "class" occurs in C source. Personally I think that a validating parser/elaborator is much harder to write, and IMO this should not be our priority, since we have no chance of reaching the quality of validation that e.g. g++, MSVC or EDG present today.

However, we do have a chance to do something new and useful in providing a refactoring tool and frontend libraries, and I think this should be our focus now.

Best regards
Grzegorz

##################################################################
# Grzegorz Jakacki                  Huada Electronic Design      #
# Senior Engineer, CAD Dept.        1 Gaojiayuan, Chaoyang       #
# tel. +86-10-64365577 x2074        Beijing 100015, China        #
# Copyright (C) 2004 Grzegorz Jakacki, HED. All Rights Reserved. #
##################################################################
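As an illustration of the 'transitional interface' option (b) above, a visitor could give node kinds added later a generic fallback hook instead of breaking existing clients; this is only a sketch with invented names, not OpenC++ code.

    struct Declaration {};
    struct IfStatement {};
    struct UsingDeclaration {};   // node kind added after the interface was published

    class NodeVisitor
    {
    public:
        virtual ~NodeVisitor() {}
        virtual void visit(const Declaration&) {}
        virtual void visit(const IfStatement&) {}
        // Fallback hook: node kinds introduced after a client was written are
        // routed here by default, so the client keeps compiling and running.
        virtual void visit_unknown() {}
        virtual void visit(const UsingDeclaration&) { visit_unknown(); }
    };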
From: Stefan S. <se...@sy...> - 2004-06-04 05:10:18
Hi Grzegorz,

Grzegorz Jakacki wrote:

> Nevertheless, I think that *replacing* the Ptree hierarchy with a more typed
> form will be extremely difficult, because:

[...]

I fully agree with your arguments. I believe that before we start to undertake any serious effort to refactor this stuff, we need to

* document the existing API to be able to fully understand what the individual
  types / methods do and what ramifications any change would have on those types.

* provide a much better regression test coverage on different levels so we can
  measure to what degree changes break compatibility (some will be unavoidable)

Once we have a good grasp of the complete workflow involved in the implementation of the various use cases (parse tree / syntax tree construction, code generation, etc.) we can suggest migration paths that provide optimal control.

> (1) Find out and write down mappings from the "less typed" AST
>     to the "more typed" AST, e.g.:

I have two sources of documentation:

* documentation of the Parser API, such as:

    /*
      definition
      : null.declaration
      | typedef
      | template.decl
      | metaclass.decl
      | linkage.spec
      | namespace.spec
      | namespace.alias
      | using.declaration
      | extern.template.decl
      | declaration
    */
    bool Parser::rDefinition(Ptree*& p)
    ...

  which indicates that the method 'rDefinition' returns a ptree that is a definition with the given subtypes as shown in the comment above. Wouldn't it be possible to go over all these comments and model a class tree that models this grammar? Is that possible at all? It seems if the parser is currently able to construct a 'Definition' object as a ptree, it should be possible to do the same but with more typed objects instead...

* I'm looking into the 'ctool' code I imported into synopsis.
  http://synopsis.fresco.org/viewsvn/synopsis-Synopsis/trunk/Synopsis/Parsers/C/
  contains a class hierarchy for C, which shouldn't be that different from a C++ grammar, so it may serve as inspiration.

>     template <>
>     class Node<IfStatement>
>     {
>     public:
>         Node(Node<Expr> c, Node<Statement> t, Node<Statement> e);
>         Node<Expr> Cond() { return p_->Cdr()->Car(); }
>         Node<Statement> Then() { ... }
>         Node<Statement> Else() { ... }
>     private:
>         Node(Ptree* p) : p_(p) {}
>         friend class AstFactory;
>         Ptree* p_;
>     };
>
>     class AstFactory
>     {
>     public:
>         template <class T>
>         static Node<T> Create<T>( /* ... */ );
>     };

Interesting idea! The price for this non-intrusiveness, however, would be that we essentially have to have two parsing stages, the first generating the parse tree, the second adding the AST as a superstructure on top. Right?

I was really thinking of modifying the Parser itself so it only generates the typed objects (which could still derive from Ptree so we don't have to rewrite everything...)

> (5) At this point we will have a usable, type-safe interface.
>     Moreover, clients not satisfied with this interface
>     (e.g. those who prefer multiary "+") will have a chance
>     to reuse it nonintrusively or just write another
>     interface to the Ptree structure from scratch.

That is a good point! The generated tree could be visited in terms of the ptree, *as well* as a typed syntax tree. A little like the other poster referring to the different DOM APIs...

> (6) Having a wrapper interface atop the Ptree structure allows
>     for changing the Ptree structure without affecting clients
>     of the wrapper interface. (Changes in Ptree, e.g. addition
>     of new nodes, can be compensated in the wrapper layer.)

I'm not sure about that. Could you elaborate? In my mind an AST node would rely on the specific ptree structure it is wrapping, so if you can modify the parse tree underneath, wouldn't that invalidate the AST nodes, i.e. break some invariants?

> I second that. However, I think the interfaces should not be published as
> they are now, because IMO they are not encapsulating enough. I think the parser
> interface is OK, but the other interfaces should be seriously reviewed before we
> commit to them. For example the program object model is very much coupled with
> the translator. I think we should untangle them first and transform the translator
> framework into an exemplary, nonprivileged client of the frontend
> library (libraries).

agreed. On the other hand the whole process of changing the APIs will be incremental and iterative, so as long as we don't commit to a fixed and stable API I don't see why we can not apply the changes 'in public'.

>> As I'm already maintaining an opencxx 'branch' as part of the synopsis
>> project, I'm experimenting with things there. Synopsis uses subversion, so
>> directory-layout related refactoring is much simpler than with cvs.
>
> As for Subversion in OpenC++: I have heard very positive opinions about
> Subversion, however I don't see an easy way to move OpenC++ development to
> Subversion now. Currently we rely on SF.net, which provides CVS,
> CompileFarm, mailing lists, shell+cron accounts and web hosting (and other
> features which we don't use at the moment). I don't see any other
> organization that would provide this level of service and commitment to
> the OpenC++ project.

Well, as I said, synopsis presently contains a branch of opencxx. As synopsis is my central focus, and as for me opencxx could well be part of synopsis (with re-defined scope), I could imagine moving development efforts there. The synopsis project is hosted by SPI ('Software in the Public Interest') and it has its own set of infrastructure tools ('roundup' issue tracker, 'qmtest' unit testing framework, 'subversion' configuration management tool, 'mailman' mailing lists, etc.)

Of course, such a move is not a simple decision. You really have to evaluate what future directions you want to take, and whether that fits with the synopsis framework. As I said, my interest in opencxx is in the context of synopsis, so I'll probably do most of my work from there. Please take this as an offer and invitation, not an attempt to fork your project.

> I very much support extending the lexer/parser to support full C syntax,
> however I don't know how much work it takes to get there. What issues do you
> see?

I see different steps:

* optionally remove the tokens that are not keywords in ordinary C
  ('class', 'virtual', ...)

* find all expressions that are valid C but not C++, i.e. define
  additions to the parser that have to be enabled for C but not C++

* do the contrary, i.e. find expressions that are valid C++ but not C...

It shouldn't be hard, and it should be possible to do it incrementally. The important step is some basic restructuring of the Parser class into an interface and an implementation of the statements that are common to all flavours of C and C++, putting the rest into subclasses (K&R, ANSI C, C89, etc.)

Regards, Stefan
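For the rDefinition comment quoted above, the 'nonterminal becomes an abstract class, production becomes a concrete class' mapping might come out roughly like the sketch below; all class names are invented for illustration and are not existing OpenC++ types.

    // The nonterminal 'definition' becomes an abstract base; each production
    // alternative becomes a concrete subclass.
    class Definition
    {
    public:
        virtual ~Definition() {}
    };

    class Typedef            : public Definition {};
    class TemplateDecl       : public Definition {};
    class MetaclassDecl      : public Definition {};
    class LinkageSpec        : public Definition {};
    class NamespaceSpec      : public Definition {};
    class NamespaceAlias     : public Definition {};
    class UsingDeclaration   : public Definition {};
    class ExternTemplateDecl : public Definition {};
    class Declaration        : public Definition {};
    // 'null.declaration' is only a grammar shortcut, so it would probably be
    // dropped rather than modeled as a class of its own.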
From: Grzegorz J. <ja...@he...> - 2004-06-04 01:58:23
Hi Stefan and All,

On Tue, 1 Jun 2004, Stefan Seefeld wrote:

[snip]

> * I suggest the ptree hierarchy to be refactored into a more typed
>   form. That could simply mean that a big number of new 'Statement',
>   'Expression', and other classes should be derived from 'Ptree', or
>   it could be done in a different way, I don't know yet.
>   However, this would mean that it would be much more straightforward
>   to inspect an AST, as these types would be more or less self-explanatory
>   (ever wondered what 'node->Cdr()->Cdr()->Car()' represents??)

I agree that 'node->Cdr()->Cdr()->Car()' is unsafe and difficult to use. The same goes for creating new Ptree nodes. Moreover, the mapping of C++ syntax into combinations of Ptree nodes is not documented, which makes this area even more unclear.

Nevertheless, I think that *replacing* the Ptree hierarchy with a more typed form will be extremely difficult, because:

(1) The parser, type elaborator and translator all use Cdr/Car. Even if we ignore the translator for the moment, the parser and elaborator themselves are 20 KLOC of highly nontrivial code. Reworking this code is a huge job, especially if you do it part-time, and will be a wonderful source of bugs (and we don't have a decent regression testsuite). Moreover, I think that directly replacing the AST data structure is difficult, because it has to be done practically in one big step, mostly because the grammar is not a hierarchical system (there is a lot of recursion in the grammar, which means that you cannot start replacing things piece by piece going bottom-up, because the dependency graph between the different AST classes is not acyclic).

(2) Contrary to popular belief, creating an object model for an AST is a lot of conceptual work. Better yet, for a language like C++ there is no unique and optimal AST object model. Example: how should "a+b+c" be represented? Some clients would like to see two binary "+" nodes, others would rather take advantage of "+" being associative and view it as one ternary node. Some clients are interested in nodes representing parentheses, while others would rather treat them as textual decorations not belonging in the AST, etc.

Here are my suggestions on how to improve the usability of the AST gradually:

(1) Find out and write down mappings from the "less typed" AST to the "more typed" AST, e.g.:

        IfStatement:
          Cond ->Cdr()->Car();
          Then ->Cdr()->Cdr()->Cdr()->Car();
          Else ->Cdr()->Cdr()->Cdr()->Cdr()->Cdr()->Car();

(2) Use it to write or generate a set of wrappers that would encapsulate the AST in a typesafe interface:

        template <>
        class Node<IfStatement>
        {
        public:
            Node(Node<Expr> c, Node<Statement> t, Node<Statement> e);
            Node<Expr> Cond() { return p_->Cdr()->Car(); }
            Node<Statement> Then() { ... }
            Node<Statement> Else() { ... }
        private:
            Node(Ptree* p) : p_(p) {}
            friend class AstFactory;
            Ptree* p_;
        };

        class AstFactory
        {
        public:
            template <class T>
            static Node<T> Create<T>( /* ... */ );
        };

    This can be done non-intrusively, without touching the existing codebase (= without introducing bugs).

(3) Write a parser wrapper that would wrap the Ptree*'s returned from the parser in Node<>'s.

(4) Write an abstract walker for Node<>'s and make Node<>'s visitable.

(5) At this point we will have a usable, type-safe interface. Moreover, clients not satisfied with this interface (e.g. those who prefer multiary "+") will have a chance to reuse it nonintrusively or just write another interface to the Ptree structure from scratch.

(6) Having a wrapper interface atop the Ptree structure allows for changing the Ptree structure without affecting clients of the wrapper interface. (Changes in Ptree, e.g. addition of new nodes, can be compensated in the wrapper layer.)

(7) A typesafe wrapper interface would enable automatic generation of parser regression tests (e.g. from the gcc testsuite) that should be used as a safety net if we ever decide to refactor the parser so that it uses the type-safe interface.

(8) At this point we can think about moving the typesafe interface into the parser, elaborator and translator, and later about totally removing the Ptree classes and actually replacing Ptrees with a canonical implementation of the Composite pattern. However, I don't think this is the way to go, mainly because of (5). I believe it is better to have a low-level implementation plus high-level wrapper(s).

(Side note: In fact the implementation of the AST in OpenC++ is more tricky than just Leaf/NonLeaf, see e.g. PtreeIfStatement etc. Nevertheless this implementation still forces Cdr/Car on clients, and AFAIU this is something we want to escape from.)

> * I suggest to open up opencxx in a way that exposes the basic API (parser
>   / ptree generation, walkers / ptree transformation, metaclass and the
>   other introspection stuff) as a C++ library as well as a python module.
>   This means that the occ executable will be very much obsolete, or at
>   least it would only be a convenience for the most popular features, but
>   more fine-grained control would be accessible through the APIs, through
>   which users can customize opencxx to their needs. It also means that
>   all the platform-specific code to run subprocesses such as the
>   preprocessor as well as load metaclass plugins could be isolated such
>   that the backend library would be more platform neutral and robust.

I second that. However, I think the interfaces should not be published as they are now, because IMO they are not encapsulating enough. I think the parser interface is OK, but the other interfaces should be seriously reviewed before we commit to them. For example the program object model is very much coupled with the translator. I think we should untangle them first and transform the translator framework into an exemplary, nonprivileged client of the frontend library (libraries).

> As I'm already maintaining an opencxx 'branch' as part of the synopsis
> project, I'm experimenting with things there. Synopsis uses subversion, so
> directory-layout related refactoring is much simpler than with cvs.

As for Subversion in OpenC++: I have heard very positive opinions about Subversion, however I don't see an easy way to move OpenC++ development to Subversion now. Currently we rely on SF.net, which provides CVS, CompileFarm, mailing lists, shell+cron accounts and web hosting (and other features which we don't use at the moment). I don't see any other organization that would provide this level of service and commitment to the OpenC++ project.

AFAIU the biggest issue is moving files/dirs in a repo. I am using the standard CVS way (delete here, add there) and indeed it loses history and makes merges more difficult, but so far it has not been very painful. The loss of history can be mitigated by mentioning the old location of a file in the initial comment of the moved (= added) file.

> I've also a number of advanced features that I don't want to lose, such as
> preprocessor data integrated with the ast (synopsis records macro
> definitions and calls, file inclusion information, etc.).

The ability of OpenC++ to understand the preprocessor, so that code can be transformed without expanding preprocessor macros, would be very desirable. I support any effort in this direction. This is in general difficult in C++, but it is doable. (See the CRefactory project.) Together with Python scripting it would create a very powerful refactoring tool.

> I'm thus tempted to work off of my own opencxx branch, though I'm happily
> sharing my changes with opencxx. In particular, I'm thinking of a simple
> bootstrapping process, within which I would rework the ptree hierarchy,
> and then use opencxx itself to *generate* the C++-to-python binding to
> expose this class hierarchy to python.

This sounds exciting, but would that have any advantages over using SWIG? (I don't know, I have not used SWIG myself.)

> Once I have that, people can introspect and manipulate the source code
> from within python,

That would be really great.

> with a direct C++ API as a fallback, in case they find
> the python API unacceptable for various reasons (which I can't really
> imagine :-)

Some people may be concerned about performance (but still, I would love a Python API).

> Finally, I'm wondering whether it wouldn't be simpler for me to modify the
> opencxx lexer and parser to be able to parse C code (all the various
> flavours that still exist, such as K&R), so I can drop the ctool backend.
> A C parser / processor with the features of opencxx would in particular be
> useful to all those GNOME / Mono developers, with language binding
> generation being just one example usage.

I very much support extending the lexer/parser to support full C syntax, however I don't know how much work it takes to get there. What issues do you see?

Best regards
Grzegorz

##################################################################
# Grzegorz Jakacki                  Huada Electronic Design      #
# Senior Engineer, CAD Dept.        1 Gaojiayuan, Chaoyang       #
# tel. +86-10-64365577 x2074        Beijing 100015, China        #
# Copyright (C) 2004 Grzegorz Jakacki, HED. All Rights Reserved. #
##################################################################
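Step (4) is not spelled out above; one possible shape for an abstract walker over such Node<> wrappers is sketched below. Node, NodeKind and KindOf are assumptions made for the sketch, not existing OpenC++ interfaces.

    class Ptree;   // OpenC++'s existing parse-tree node (used only by pointer here)

    // Minimal stand-in for the Node<> wrapper sketched in steps (1)-(2).
    template <class Tag>
    class Node
    {
    public:
        explicit Node(Ptree* p) : p_(p) {}
        Ptree* ptree() const { return p_; }
    private:
        Ptree* p_;
    };

    struct IfStatement;    // tag types, as above
    struct Declaration;

    enum NodeKind { KIND_IF, KIND_DECLARATION, KIND_OTHER };

    // Assumed helper that classifies a raw Ptree node; a real version would
    // inspect the node, this stub just returns OTHER so the sketch links.
    inline NodeKind KindOf(Ptree*) { return KIND_OTHER; }

    class NodeWalker
    {
    public:
        virtual ~NodeWalker() {}
        // Dispatch on the wrapped Ptree's kind and hand out typed Node<> views,
        // so clients never touch Car()/Cdr() themselves.
        void walk(Ptree* p)
        {
            switch (KindOf(p)) {
            case KIND_IF:          visit(Node<IfStatement>(p)); break;
            case KIND_DECLARATION: visit(Node<Declaration>(p)); break;
            default:               visit_other(p);              break;
            }
        }
        virtual void visit(Node<IfStatement>) {}
        virtual void visit(Node<Declaration>) {}
        virtual void visit_other(Ptree*) {}
    };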
From: SF M. E. <el...@us...> - 2004-06-03 15:38:32
>> C++ source code can be mapped to XML. Superx++ comes near to such a solution.
>> http://en.wikipedia.org/wiki/Superx_Plus_Plus_programming_language
>
> Having just had little exposure to this, I fail to see the use of this mapping.
> Could you provide some rationale for such a binding? In particular, what's the
> relationship to introspection and metaprogramming?

If C++ code were mapped to XML text, existing infrastructure like XSLT, XQuery and the Document Object Model could be used to transform the sources. The mapping between DOM and IDL would be easier, too.

Examples:
- What should the interface for a constructor be?
- How do you design the IDL specification for a method or member function call?
From: Stefan S. <se...@sy...> - 2004-06-02 21:17:43
SF Markus Elfring wrote:

> C++ source code can be mapped to XML. Superx++ comes near to such a solution.
> http://en.wikipedia.org/wiki/Superx_Plus_Plus_programming_language

Having just had little exposure to this, I fail to see the use of this mapping. Could you provide some rationale for such a binding? In particular, what's the relationship to introspection and metaprogramming?

> An object-oriented mapping between the sources and the MOP transformation can be used
> as described in the section "1.1.4 Inheritance vs. Flattened Views of the API".
> http://www.w3.org/TR/DOM-Level-3-Core/core.html#ID-1CED5498
> - A node interface is available for OpenC++ at the moment.
> - Higher level abstractions and corresponding classes seem to be missing so far.
>   How do you propose to define them with the CORBA interface definition language,
>   as it is specified for HTML?
>   http://www.w3.org/TR/DOM-Level-2-HTML/idl-definitions.html

yes, that sounds like what I've been proposing, i.e. deriving more typed classes from 'Ptree' with a more high-level API to access the AST structure. As such an undertaking involves quite a bit of work, I encourage everybody to get involved.

Regards, Stefan
From: SF M. E. <el...@us...> - 2004-06-02 20:45:56
...

> This leads me to a couple of items on my wishlist, which I'd like
> to discuss / propose here:
>
> * I suggest the ptree hierarchy to be refactored into a more typed
>   form. That could simply mean that a big number of new 'Statement',
>   'Expression', and other classes should be derived from 'Ptree', or
>   it could be done in a different way, I don't know yet.
>   However, this would mean that it would be much more straightforward
>   to inspect an AST, as these types would be more or less self-explanatory
>   (ever wondered what 'node->Cdr()->Cdr()->Car()' represents??)
>
> * I suggest to open up opencxx in a way that exposes the basic API
>   (parser / ptree generation, walkers / ptree transformation, metaclass and
>   the other introspection stuff) as a C++ library as well as a python module.
>   This means that the occ executable will be very much obsolete, or at least
>   it would only be a convenience for the most popular features, but more fine-grained
>   control would be accessible through the APIs, through which users can customize
>   opencxx to their needs. It also means that all the platform-specific code
>   to run subprocesses such as the preprocessor as well as load metaclass plugins
>   could be isolated such that the backend library would be more platform neutral and robust.

...

I would like to contribute some ideas to this discussion thread. I tried to introduce the topic "C++ management extension".
http://sourceforge.net/mailarchive/message.php?msg_id=7576532

There are more approaches possible:

C++ source code can be mapped to XML. Superx++ comes near to such a solution.
http://en.wikipedia.org/wiki/Superx_Plus_Plus_programming_language

An object-oriented mapping between the sources and the MOP transformation can be used as described in the section "1.1.4 Inheritance vs. Flattened Views of the API".
http://www.w3.org/TR/DOM-Level-3-Core/core.html#ID-1CED5498

- A node interface is available for OpenC++ at the moment.
- Higher level abstractions and corresponding classes seem to be missing so far.
  How do you propose to define them with the CORBA interface definition language,
  as it is specified for HTML?
  http://www.w3.org/TR/DOM-Level-2-HTML/idl-definitions.html
From: <ch...@da...> - 2004-06-02 10:19:43
Hi Grzegorz,

Thanks for your comments, I'll try to clarify my earlier message. I want to use computation reconstruction for fault tolerance issues in multi-threaded programs, although it can be used in several fields like debugging.

Multi-threaded applications are non-deterministic by nature, so if an MT program fails it is not simple to reproduce the state previous to the failure, due to the thread scheduling. My main purpose is to be able to recover a state similar to the one that the MT application had before the failure and then continue with the computation.

The general idea is:
1) if an MT program is composed only of threads and shared objects,
2) the threads' states are checkpointed regularly, and
3) the order in which the objects' methods are invoked during the execution is logged;
then after a program failure we simply have to create a new process that recovers the threads' states from the latest checkpoints of the original process, re-plays the method invocations that were logged, in the original order, and continues the execution.

Of course there are a lot of issues that have to be considered in order to reach a consistent state after a failure and recovery, but I hope to have clarified my original question a little bit. By the way, for my purposes, objects' addresses are useless because they're valid only for the original process.

Best regards.
Carlos
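A tiny sketch of what such an invocation log and its replay could look like is given below; the record layout and names are hypothetical and independent of OpenC++. Note the use of a process-independent object id instead of a raw address, for the reason given above.

    #include <string>
    #include <vector>

    // Hypothetical record of one logged method invocation.
    struct InvocationRecord {
        unsigned long object_id;   // stable id assigned at object creation, not an address
        std::string   method;      // name of the invoked method
        std::string   args;        // serialized arguments
    };

    typedef void (*Dispatcher)(unsigned long id,
                               const std::string& method,
                               const std::string& args);

    // Replay the logged invocations in their original order after recovery;
    // 'dispatch' stands for application code that maps the record back to a
    // call on the recovered object.
    void replay(const std::vector<InvocationRecord>& log, Dispatcher dispatch)
    {
        for (std::vector<InvocationRecord>::const_iterator i = log.begin();
             i != log.end(); ++i)
            dispatch(i->object_id, i->method, i->args);
    }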
From: Stefan S. <se...@sy...> - 2004-06-01 17:17:53
Brian Kahne wrote:

> Yes, having the Walker act as a visitor, as you describe below, would be
> very nice. In addition to having typed ptree nodes, though, it would be
> nice to get the actual C++ type of the expression, e.g. if the
> expression is of the form "a + b * c", then be able to get a TypeInfo
> object back that says that the return type of the expression is class
> Foo. Is that possible today? It seems like the only way to get
> TypeInfo objects is by looking up a name in the Environment, whereas
> this requires figuring out that operator+() and operator*() are
> overloaded for this class, getting that operator's return type, etc.

I'm not sure about that. However, this shouldn't be much harder than constructing a call graph, where you have to use similar lookup rules to find the right (possibly overloaded) functions depending on the arguments' types and the C++ scoping rules. We do something similar already in synopsis when generating the cross-referenced source view I was talking about earlier, but I'd be very happy to see this functionality offered by opencxx directly. Maybe Grzegorz has more insight into how much work would be involved to add that.

Regards, Stefan
From: Stefan S. <se...@sy...> - 2004-06-01 15:03:18
Hi Brian,

Brian Kahne wrote:

> One feature that would be really nice (perhaps this is possible - please
> let me know if it is!) would be to get the type of an arbitrary
> expression. For example, if I wanted to implement a "let" block, e.g.
>
>     let (i = <expr>, j = <expr>) {
>
>     }
>
> it would be really nice if I could directly query the ptree object
> storing <expr> for its return type so that I could transform this code
> into normal C++ declarations.

yes, that would be possible with typed ptree nodes, as then the 'Walker' classes would not only act as a traversal, but also as a visitor, i.e. one could use the double-dispatch mechanism to resolve the type that is part of the ptree nodes being traversed. Right now the parent ptree node has to detect the sub-node's type by inspecting the node's topology, i.e. instead of

    if (node->Car()->IsLeaf()) do_something();

one would write

    if (if_statement->else_block)
        // access the *typed* 'else' statement, as 'this'
        // is a visitor with various 'visit_statement(Statement *)' methods
        if_statement->else_block->accept(this);

You get the idea...

> I think that releasing the code as a library would be very helpful, but
> keeping the occ executable would still be a good idea - it's a very
> convenient way to use the tool.

I agree. As a convenience tool it covers the majority of the use cases, so it surely has its use.

Regards, Stefan
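Filling in the fragment above, a minimal self-contained version of the double-dispatch arrangement might read as follows; the class and method names mirror the discussion and are not OpenC++'s current types.

    class Walker;

    class Statement
    {
    public:
        virtual ~Statement() {}
        virtual void accept(Walker* w) = 0;
    };

    class IfStatement : public Statement
    {
    public:
        IfStatement() : then_block(0), else_block(0) {}
        Statement* then_block;
        Statement* else_block;            // null when there is no 'else'
        virtual void accept(Walker* w);
    };

    class Walker
    {
    public:
        virtual ~Walker() {}
        virtual void visit_statement(Statement*) {}
        virtual void visit_if_statement(IfStatement* s)
        {
            if (s->else_block)            // typed access instead of Car()/Cdr()
                s->else_block->accept(this);
        }
    };

    inline void IfStatement::accept(Walker* w) { w->visit_if_statement(this); }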
From: Brian K. <bk...@mo...> - 2004-06-01 14:44:53
Hi,

I've been using opencxx for a few months now to implement parallel extensions to C++ similar to C. A. R. Hoare's CSP or the language Occam. I had to modify the parser a bit to handle extensions along the lines of:

    par {
      <statements>
    }

but that was quite easy to do. Use of this tool has definitely cut months off of my original development schedule.

I think you're right that a bit more structure in the AST might be nice - being able to select the different pieces of a for-loop block by calling individual methods would probably make things easier.

One feature that would be really nice (perhaps this is possible - please let me know if it is!) would be to get the type of an arbitrary expression. For example, if I wanted to implement a "let" block, e.g.

    let (i = <expr>, j = <expr>) {

    }

it would be really nice if I could directly query the ptree object storing <expr> for its return type so that I could transform this code into normal C++ declarations.

I think that releasing the code as a library would be very helpful, but keeping the occ executable would still be a good idea - it's a very convenient way to use the tool.

Regards,
Brian Kahne
Freescale Semiconductor

Stefan Seefeld wrote:
[snip]
From: Stefan S. <se...@sy...> - 2004-06-01 04:14:06
hi there,

while Grzegorz is still struggling with the merge of some refactoring we have been working on, I'd like to discuss some ideas about opencxx's evolution.

I'd very much appreciate comments from people using opencxx right now or being tempted to use it, so we can understand better what opencxx is already good at, and what it would be useful to work on.

I've started to use opencxx myself as a C++ parser backend for my synopsis framework, where I initially simply collected all declarations from source code together with the comments directly preceding them, to generate documentation.

Synopsis already had its own AST-like class hierarchy, so the task was 'simply' to traverse the opencxx ptree and map that to a synopsis AST.

Later we went some steps further and used the power of opencxx to generate 'cross referenced source code', i.e. html pages that display source files, but with variables and types being linked to their respective declarations.

For quite some time I have been pondering exposing a 'real' AST such as that from opencxx to python, so I could use my processor framework to manipulate the source code directly for code generation. However, I found the ptree stuff quite obscure, so this idea never really got off the ground.

I've recently started to integrate a C parser (from the 'ctool' project) into synopsis, and there the parse tree is much simpler to read, simply because it is more typed. Instead of just having specific ptree topologies for 'statements', 'declarations', etc., I have real classes 'Statement', 'Declaration', etc. That's much more pleasing to look at! :-)

On the other hand, the ctool doesn't preserve the tokens in their original form in the same way opencxx does, and doesn't tokenize the comments (something we have been working hard to add to synopsis' opencxx port).

This leads me to a couple of items on my wishlist, which I'd like to discuss / propose here:

* I suggest the ptree hierarchy to be refactored into a more typed
  form. That could simply mean that a big number of new 'Statement',
  'Expression', and other classes should be derived from 'Ptree', or
  it could be done in a different way, I don't know yet.
  However, this would mean that it would be much more straightforward
  to inspect an AST, as these types would be more or less self-explanatory
  (ever wondered what 'node->Cdr()->Cdr()->Car()' represents??)

* I suggest to open up opencxx in a way that exposes the basic API
  (parser / ptree generation, walkers / ptree transformation, metaclass and
  the other introspection stuff) as a C++ library as well as a python module.
  This means that the occ executable will be very much obsolete, or at least
  it would only be a convenience for the most popular features, but more fine-grained
  control would be accessible through the APIs, through which users can customize
  opencxx to their needs. It also means that all the platform-specific code
  to run subprocesses such as the preprocessor, as well as to load metaclass plugins,
  could be isolated such that the backend library would be more platform neutral and robust.

As I'm already maintaining an opencxx 'branch' as part of the synopsis project, I'm experimenting with things there. Synopsis uses subversion, so directory-layout related refactoring is much simpler than with cvs. I've also a number of advanced features that I don't want to lose, such as preprocessor data integrated with the ast (synopsis records macro definitions and calls, file inclusion information, etc.).

I'm thus tempted to work off of my own opencxx branch, though I'm happily sharing my changes with opencxx. In particular, I'm thinking of a simple bootstrapping process, within which I would rework the ptree hierarchy, and then use opencxx itself to *generate* the C++-to-python binding to expose this class hierarchy to python. Once I have that, people can introspect and manipulate the source code from within python, with a direct C++ API as a fallback, in case they find the python API unacceptable for various reasons (which I can't really imagine :-)

Finally, I'm wondering whether it wouldn't be simpler for me to modify the opencxx lexer and parser to be able to parse C code (all the various flavours that still exist, such as K&R), so I can drop the ctool backend. A C parser / processor with the features of opencxx would in particular be useful to all those GNOME / Mono developers, with language binding generation being just one example usage.

Now, please tell me what you think about these ideas, whether they make sense to you at all, whether you find them useful, or would even like to help.

Best regards,
Stefan
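For the first wishlist item, a typed layer over the parse tree might look roughly like the sketch below; apart from the name Ptree (whose real definition lives in OpenC++), everything here is hypothetical.

    // Stand-in for OpenC++'s real Ptree base, only to keep the sketch
    // self-contained; existing Car()/Cdr() traversal would keep working.
    class Ptree
    {
    public:
        virtual ~Ptree() {}
    };

    class Expression : public Ptree {};

    class Statement : public Ptree {};

    // New code gets self-explanatory accessors instead of Cdr()/Car() chains.
    class IfStatement : public Statement
    {
    public:
        IfStatement() : cond_(0), then_(0), else_(0) {}
        Expression* condition()   const { return cond_; }
        Statement*  then_branch() const { return then_; }
        Statement*  else_branch() const { return else_; }   // null if absent
    private:
        Expression* cond_;
        Statement*  then_;
        Statement*  else_;
    };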
From: Grzegorz J. <ja...@he...> - 2004-06-01 01:10:54
Hi,

On Mon, 31 May 2004 ch...@da... wrote:

> Hello all,
>
> I'm a newbie who has just discovered this great tool and I have a
> question: is it possible to identify an object of a class?
>
> Let me explain: I want to implement a "log-able" class which would be more
> or less like the BeforeClass in the examples but, additionally to identifying
> the methods that are called, I want to identify the parameters passed and
> the object that made the invocation in order to be able to
> reconstruct the computation.
>
> Imagine that I have a class A and two objects belonging to this class, let's
> say a1 and a2; how can I differentiate the invocation of the same method
> on each object? Can someone recommend a reference or give an idea?

(1) Do you want to add logging code at the caller site or at the callee site?

(2) Do you want to log the identity of the caller or the callee? (In the first paragraph you write about the caller object, in the second about the callee object.)

(1)-caller, (2)-caller: use 'this' at the call site.

(1)-caller, (2)-callee: add code to take the callee's address just before the call is made, e.g. instrument

    p[14].fun->Foo();

like this:

    __logging_utility_log(p[14].fun, "Foo", "a", 5)->Foo("a", 5);

and add

    template <class T>
    T* __logging_utility(T* p, /* other stuff */)
    {
        /* do actual logging, 'p' has the identity of the callee */
        return p;
    }

(1)-callee, (2)-caller: here you want the callee to know at runtime who the caller is; as far as I know this cannot be done portably in C++ itself, so you either need to add some code on the caller's side or you need to use non-portable code to inspect the stack and find out the address of the calling function (from which you can later figure out if it is a method of an object; if so, then it is likely to have the object address as one of its arguments; this is all highly non-portable in terms of compiler and hardware, and optimization in some cases may affect the stack layout).

(1)-callee, (2)-callee: use 'this' at the callee site.

One additional thing to watch for is that in class hierarchies with multiple inheritance or with virtual inheritance one object may have many valid addresses, so you have to normalize the address obtained from 'this' by casting to one of the least derived base classes. (Finding such a class is of course doable with OpenC++.)

I hope one of the above variants is what you meant. The thing you are trying to do (computation reconstruction) seems very interesting. Could you elaborate a little bit on the context where you want to apply it?

Please let me know if you need more info.

BR
Grzegorz

##################################################################
# Grzegorz Jakacki                  Huada Electronic Design      #
# Senior Engineer, CAD Dept.        1 Gaojiayuan, Chaoyang       #
# tel. +86-10-64365577 x2074        Beijing 100015, China        #
# Copyright (C) 2004 Grzegorz Jakacki, HED. All Rights Reserved. #
##################################################################
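A self-contained version of the caller-site variant might look like the following; it only illustrates the wrapper trick (log, then hand the callee pointer back so the call proceeds unchanged) and is not code from OpenC++ or its examples. The helper is named without the leading underscores of the sketch above, since double-underscore names are reserved in C++.

    #include <iostream>
    #include <string>

    // Log the callee's identity plus a description of the call, then return
    // the pointer so the original call expression proceeds unchanged.
    template <class T>
    T* logging_utility_log(T* callee, const std::string& method,
                           const std::string& args)
    {
        std::clog << "call " << method << "(" << args << ") on object @"
                  << static_cast<const void*>(callee) << std::endl;
        return callee;
    }

    struct Account
    {
        void Deposit(const char*, int) {}
    };

    int main()
    {
        Account a;
        Account* p = &a;
        // What an instrumented call site could look like after translation:
        logging_utility_log(p, "Deposit", "\"a\", 5")->Deposit("a", 5);
        return 0;
    }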
From: <ch...@da...> - 2004-05-31 10:40:34
Hello all,

I'm a newbie who has just discovered this great tool and I have a question: is it possible to identify an object of a class?

Let me explain: I want to implement a "log-able" class which would be more or less like the BeforeClass in the examples but, additionally to identifying the methods that are called, I want to identify the parameters passed and the object that made the invocation in order to be able to reconstruct the computation.

Imagine that I have a class A and two objects belonging to this class, let's say a1 and a2; how can I differentiate the invocation of the same method on each object? Can someone recommend a reference or give an idea?

Thanks a lot.
Carlos
From: Stefan S. <se...@sy...> - 2004-05-28 12:41:49
|
Grzegorz Jakacki wrote:

>> hmm, ok, once I understand how the TypeInfo class works ;-)
>> In synopsis I only use the Encoding class, and the docs state to
>> keep TypeInfo in sync whenever Encoding is changed, so I just
>> followed the example...
>> I'll have a closer look.
>
> I never had time to understand if this is really necessary. Looks like a
> gratuitous coupling.

Well, the dependencies aren't gratuitous. As we discussed, the 'mop'
classes all depend on the 'parser' classes, and that's particularly true
for the 'TypeInfo' / 'Encoding' relationship.

That said, it may be possible to better encapsulate the encoding details
inside the Encoding class, such that changes like the one I applied don't
require a code change in TypeInfo.

Regards,
Stefan |
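One possible reading of "better encapsulate the encoding stuff" is to let consumers iterate over already-decoded items instead of parsing the raw encoding string themselves. The sketch below is purely hypothetical: EncodingView and its character mapping are invented for illustration and are not the actual OpenC++ API.

#include <string>
#include <utility>
#include <vector>

// Hypothetical decoded item; if a consumer like TypeInfo only ever saw these,
// adding a new encoding letter (e.g. for wchar_t) would be a local change.
enum class TypeKind { Int, Char, WideChar, Pointer, Unknown };

class EncodingView   // invented facade over a raw encoding string
{
public:
    explicit EncodingView(std::string raw) : raw_(std::move(raw)) {}

    // Decode the raw string in exactly one place.
    std::vector<TypeKind> items() const
    {
        std::vector<TypeKind> result;
        for (char c : raw_)
            switch (c)            // mapping invented purely for illustration
            {
            case 'i': result.push_back(TypeKind::Int);      break;
            case 'c': result.push_back(TypeKind::Char);     break;
            case 'w': result.push_back(TypeKind::WideChar); break;
            case 'P': result.push_back(TypeKind::Pointer);  break;
            default:  result.push_back(TypeKind::Unknown);  break;
            }
        return result;
    }

private:
    std::string raw_;
};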
From: Grzegorz J. <ja...@he...> - 2004-05-28 08:26:24
|
Hi All,

This weekend I am planning to merge the sandbox_jakacki_frontend1 branch to
MAIN, so please do not rely on HEAD to be stable for the next few days.

BR
Grzegorz

##################################################################
# Grzegorz Jakacki                Huada Electronic Design        #
# Senior Engineer, CAD Dept.      1 Gaojiayuan, Chaoyang         #
# tel. +86-10-64365577 x2074      Beijing 100015, China          #
# Copyright (C) 2004 Grzegorz Jakacki, HED. All Rights Reserved. #
################################################################## |
From: Grzegorz J. <ja...@he...> - 2004-05-28 08:23:07
|
On Thu, 27 May 2004, Stefan Seefeld wrote:

> Grzegorz Jakacki wrote:
>
> >> I just committed a patch (to the 'sandbox_jakacki_frontend1' branch)
> >> to add wide character support to the Encoding and TypeInfo classes.
> >> It seems no opencxx user has noticed so far, only I did in synopsis.
> >> I backported it anyways, for completeness.
> >
> > Thanks, Stefan. Indeed that must have been an omission. Could you also
> > add a testcase?
>
> hmm, ok, once I understand how the TypeInfo class works ;-)
> In synopsis I only use the Encoding class, and the docs state to
> keep TypeInfo in sync whenever Encoding is changed, so I just
> followed the example...
> I'll have a closer look.
>
> Stefan

I never had time to understand if this is really necessary. Looks like a
gratuitous coupling.

BR
Grzegorz

##################################################################
# Grzegorz Jakacki                Huada Electronic Design        #
# Senior Engineer, CAD Dept.      1 Gaojiayuan, Chaoyang         #
# tel. +86-10-64365577 x2074      Beijing 100015, China          #
# Copyright (C) 2004 Grzegorz Jakacki, HED. All Rights Reserved. #
################################################################## |
From: Stefan S. <se...@sy...> - 2004-05-28 02:23:03
|
Grzegorz Jakacki wrote:

>> I just committed a patch (to the 'sandbox_jakacki_frontend1' branch)
>> to add wide character support to the Encoding and TypeInfo classes.
>> It seems no opencxx user has noticed so far, only I did in synopsis.
>> I backported it anyways, for completeness.
>
> Thanks, Stefan. Indeed that must have been an omission. Could you also add
> a testcase?

hmm, ok, once I understand how the TypeInfo class works ;-)
In synopsis I only use the Encoding class, and the docs state to
keep TypeInfo in sync whenever Encoding is changed, so I just
followed the example...
I'll have a closer look.

Stefan |
From: Grzegorz J. <ja...@he...> - 2004-05-28 00:44:26
|
On Thu, 27 May 2004, Stefan Seefeld wrote:

> Stefan Seefeld wrote:
> > hi there,
> >
> > I'm looking into the wide character support that has been
> > added to opencxx recently (?) as I try to merge an equivalent
> > patch into synopsis.
> >
> > The 'Encoding' class doesn't appear to account for wide characters.
> > Is that an omission or am I missing something ?
>
> I just committed a patch (to the 'sandbox_jakacki_frontend1' branch)
> to add wide character support to the Encoding and TypeInfo classes.
> It seems no opencxx user has noticed so far, only I did in synopsis.
> I backported it anyways, for completeness.

Thanks, Stefan. Indeed that must have been an omission. Could you also add
a testcase?

BR
Grzegorz

##################################################################
# Grzegorz Jakacki                Huada Electronic Design        #
# Senior Engineer, CAD Dept.      1 Gaojiayuan, Chaoyang         #
# tel. +86-10-64365577 x2074      Beijing 100015, China          #
# Copyright (C) 2004 Grzegorz Jakacki, HED. All Rights Reserved. #
################################################################## |
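For reference, a minimal example of the kind of input such a testcase might feed through the front end; it is just plain C++ exercising wide-character types in a few positions where types get encoded, and it says nothing about how the test harness would check the resulting encodings:

// hypothetical testcase input exercising wchar_t in several positions

wchar_t        wc        = L'x';
const wchar_t* wide_text = L"wide string literal";

wchar_t identity(wchar_t c) { return c; }

template <typename T> struct Box { T value; };
Box<wchar_t> boxed = { L'y' };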
From: Stefan S. <se...@sy...> - 2004-05-27 21:26:29
|
Stefan Seefeld wrote:

> hi there,
>
> I'm looking into the wide character support that has been
> added to opencxx recently (?) as I try to merge an equivalent
> patch into synopsis.
>
> The 'Encoding' class doesn't appear to account for wide characters.
> Is that an omission or am I missing something ?

I just committed a patch (to the 'sandbox_jakacki_frontend1' branch)
to add wide character support to the Encoding and TypeInfo classes.
It seems no opencxx user has noticed so far, only I did in synopsis.
I backported it anyways, for completeness.

Regards,
Stefan |
From: Stefan S. <se...@sy...> - 2004-05-27 15:41:50
|
hi there,

I'm looking into the wide character support that has been added to opencxx
recently (?) as I try to merge an equivalent patch into synopsis.

The 'Encoding' class doesn't appear to account for wide characters.
Is that an omission or am I missing something?

Regards,
Stefan |
From: Stefan S. <st...@fr...> - 2004-05-27 13:05:48
|
Hi there, I'm forwarding this little exchange between the guys that are behind the boost project (http://boost.org/) and myself. If we get opencxx up to speed to serve their purpose this might be a major boost for opencxx' own development. More on that later, Stefan |
From: Stefan S. <se...@sy...> - 2004-05-13 15:22:57
|
hi there,

I'm currently looking into ways to enhance how the parser can collaborate
with the preprocessor. I'm doing this in the context of my synopsis
project, where I generate cross-referenced source code which is easy to
navigate and introspect (see
http://synopsis.fresco.org/docs/Manual/occ/index.html for an example).

One of the challenges there is that the exact locations in the file that
should be linked aren't available to the parser, as macro expansion has
already taken place. I thus use a special preprocessor that provides me
with information about macro definitions and macro calls, so I can
'reverse apply' them to the data stream.

While preprocessing I create a set of 'SourceFile' objects that initially
contain just these macro definitions, plus links to each other reflecting
the file inclusion dependencies. Later more data is added when the C and
C++ parsers are run.

The problem I'm currently facing is related to the lookup of the SourceFile
objects during parsing. The parser (opencxx) can be queried for the
'current file and line number' associated with a ptree node. However,
that's not quite enough: as a file could be included more than once, and
worse, each time with a different set of macros defined [1], I need to
track exactly which inclusion this file comes from. Just think about
precompiled headers for a moment and all the challenges this implies...

Has anybody already thought about this problem, in particular in the
context of opencxx? As I said, while my specific interest is synopsis right
now, I'd really prefer to work on opencxx directly, as this work may be
useful to others, too.

Best regards,
Stefan

[1] My short term goal is to generate a full and correct reference manual
for the boost project (http://www.boost.org), and if you've already used
boost before, you know what kind of challenge that means :-) |
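One possible shape for that bookkeeping (the names are invented and this is not the synopsis SourceFile API) is to key lookups on the inclusion event rather than on the file name alone, so that the same header included twice, possibly with different macros in effect, yields two distinct records:

#include <memory>
#include <string>
#include <vector>

// Hypothetical bookkeeping: one Inclusion per '#include' event.
struct MacroDefinition
{
    std::string name;
    std::string replacement;
};

struct Inclusion
{
    std::string                  path;        // the file being entered
    const Inclusion*             included_by; // nullptr for the top-level file
    unsigned                     line;        // line of the #include directive
    std::vector<MacroDefinition> macros;      // macros in effect on entry
};

class InclusionTracker
{
public:
    // Called by the preprocessor each time a file is entered; parse-tree
    // nodes would then keep a pointer to the Inclusion they came from.
    const Inclusion* enter(std::string path, const Inclusion* parent,
                           unsigned line, std::vector<MacroDefinition> macros)
    {
        inclusions_.push_back(std::make_unique<Inclusion>(
            Inclusion{std::move(path), parent, line, std::move(macros)}));
        return inclusions_.back().get();
    }

private:
    std::vector<std::unique_ptr<Inclusion>> inclusions_;
};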
From: Grzegorz J. <ja...@he...> - 2004-05-11 00:42:52
|
On Mon, 10 May 2004, Aitor Garay-Romero wrote:

> Hi there!,
>
> Lately I have been thinking about how it could be possible to implement an
> iContract-like tool for C++. iContract is a Design by Contract tool for
> Java. The basic idea is to parse the C++ source code looking for special
> comment blocks that specify the preconditions/postconditions/invariants of
> the class. Then the input source code is extended with code that checks
> the contracts at runtime.
> There are a few options for implementing such a DbC tool for C++. I have
> come to OpenC++ and believe it is the most promising way to implement it
> "easily". Other alternatives I'm considering are implementing a standalone
> application that uses some freely available grammar, or maybe extending
> the AspectC compiler.
> I have no experience at all with OpenC++ and I have a few doubts.
>
> - do OpenC++ meta-programs have access to the source code comments?

Only at the class level, see Class::Comments(). However, the lexer
recognizes comments; you would need to tweak the parser to put the comments
into the parse tree.

> - are the meta-programs able to traverse the inheritance hierarchy of a
> given class?

Yes.

> - how well does OpenC++ handle macros?

OpenC++ does not see any macros; OpenC++ works on preprocessed sources.
AFAIU this is what you need. Some applications, however, need to understand
code with macros unexpanded (e.g. if you want to regenerate human-readable
source code). Asen Kovachev, Stefan Seefeld and myself are thinking about
this functionality, but it is definitely still in the "thinking" phase.

> And namespaces?

OpenC++ understands namespaces and qualified names, except namespace
aliases. Thanks to Stefan, namespace alias declarations can be parsed now,
but they are still not considered in lookup :-(

> - is it possible to introduce new methods in the generated classes?

Yes.

> Is it possible to do arbitrary transformations like nesting the body of a
> method inside some try/catch blocks?

Yes.

> Sorry for the long list of questions, it would be very helpful if I could
> get a rough idea of OpenC++'s possibilities before getting deep into it.
> Has someone heard about some similar effort of implementing DbC for C++?
> Any ideas?

(1) Browse the archives of this list to get more insight into what can be
done.

(2) There are two major modes of using OpenC++:

  * Deriving classes from "Class" in order to customize the
    source-to-source translation.
  * Taking the OpenC++ source code as a code base to build your own
    application on top of it (this usually means deriving from
    "Abstract...Walker" and/or making modifications to existing code).

(3) The most up-to-date version of OpenC++ is in sandbox_jakacki_frontend1;
Stefan and myself are now working on merging it to MAIN and releasing it
soon.

BR
Grzegorz

> Thanks!,
>
> /AITOR

##################################################################
# Grzegorz Jakacki                Huada Electronic Design        #
# Senior Engineer, CAD Dept.      1 Gaojiayuan, Chaoyang         #
# tel. +86-10-64365577 x2074      Beijing 100015, China          #
# Copyright (C) 2004 Grzegorz Jakacki, HED. All Rights Reserved. #
################################################################## |
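To make the intended source-to-source translation concrete, here is roughly the kind of output a DbC tool could generate for an annotated method, including the try/catch nesting asked about above. The contract comment syntax and the generated checks are invented for illustration; OpenC++ only provides the machinery with which such a rewrite could be implemented.

#include <iostream>
#include <stdexcept>

class Account
{
public:
    // original, annotated form the tool would read:
    //   /** @pre amount > 0   @post balance_ >= old_balance */
    //   void deposit(int amount) { balance_ += amount; }

    // generated form, with contract checks and the body nested in try/catch:
    void deposit(int amount)
    {
        if (!(amount > 0))
            throw std::logic_error("precondition violated: amount > 0");
        int old_balance = balance_;
        try
        {
            balance_ += amount;                      // original body
        }
        catch (...)
        {
            std::cerr << "deposit() exited via exception\n";
            throw;
        }
        if (!(balance_ >= old_balance))
            throw std::logic_error("postcondition violated: balance_ >= old_balance");
    }

    int balance() const { return balance_; }

private:
    int balance_ = 0;
};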
From: Stefan S. <se...@sy...> - 2004-05-10 19:03:46
|
hi Aitor,

Aitor Garay-Romero wrote:

> - do OpenC++ meta-programs have access to the source code comments?

the current opencxx code doesn't allow this. However, I'm using a branched
version of opencxx as part of the synopsis project
(http://synopsis.fresco.org) where we added support for comments
(specifically to document the code, but indeed any metadata could be
embedded and processed). I plan to bring synopsis and opencxx closer to
each other, as I believe both have important features that would be even
more powerful when used together. What you suggest sounds entirely possible
and is actually a very nice example of C++ meta-programming.

> - are the meta-programs able to traverse the inheritance hierarchy of a
> given class?

what do you mean by 'meta-program' here? Tools like occ and synopsis
certainly have access to that information, as they operate on the parse /
syntax trees of the original source code. So they could generate the
additional code you want. Note that this is a compile-time issue; there is
no runtime support for introspection.

> - how well does OpenC++ handle macros? And namespaces?

Macros are not handled at all by opencxx, I believe. Again, synopsis deals
with macros, and I'd like to find a way to give opencxx access to that,
too. Namespaces should be fine (I recently patched opencxx to support
namespace aliases, but that patch only fixes the parser, not the occ
runtime environment).

> - is it possible to introduce new methods in the generated classes? Is it
> possible to do arbitrary transformations like nesting the body of a method
> inside some try/catch blocks?

yes, any transformation of the parse tree is possible, at least in theory.
A special 'Walker' class that traverses the tree can find the nodes you are
looking for, and based on these you can then apply modifications. I don't
know how flexible the occ frontend is right now, i.e. how you would teach
occ what exactly to do. From personal experience I believe that a scripting
frontend to opencxx would be the most powerful approach for really
fine-grained control over the code manipulation. I don't know what
opencxx's current support for code generation is.

Regards,
Stefan |
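As an illustration of the "a Walker finds the nodes, then modifications are applied" idea, here is a toy, self-contained version of that pattern; the node and walker types below are invented stand-ins and are not OpenC++'s Ptree/Walker classes:

#include <iostream>
#include <memory>
#include <string>
#include <vector>

// Toy parse-tree nodes, invented purely for illustration.
struct Node
{
    virtual ~Node() = default;
    virtual void accept(struct Walker& w) = 0;
};

struct FunctionDef : Node
{
    std::string name;
    std::string body;
    FunctionDef(std::string n, std::string b) : name(std::move(n)), body(std::move(b)) {}
    void accept(struct Walker& w) override;
};

struct Walker
{
    virtual ~Walker() = default;
    virtual void visit(FunctionDef& f) = 0;
};

void FunctionDef::accept(Walker& w) { w.visit(*this); }

// A transformation pass: find every function definition and wrap its body.
struct WrapInTryCatch : Walker
{
    void visit(FunctionDef& f) override
    {
        f.body = "try { " + f.body + " } catch (...) { /* log */ throw; }";
    }
};

int main()
{
    std::vector<std::unique_ptr<Node>> tree;
    tree.push_back(std::make_unique<FunctionDef>("deposit", "balance += amount;"));

    WrapInTryCatch pass;
    for (auto& n : tree) n->accept(pass);

    std::cout << static_cast<FunctionDef&>(*tree[0]).body << '\n';
}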