From: <jal...@st...> - 2004-09-02 19:56:37
Hello Everyone,

I am having a lot of trouble with the exception handling in OpenC++ (version 2.5.12), because it does not correctly parse a method declaration with a throw clause. For example, the PersonExu constructor declaration in this header file generates a compiler error, even though it is valid C++:

    class PersonExu
    {
    public:
        PersonExu(int a) throw (int);
        int Age();
        int BirthdayComes();
        int age;
    };

However, a throw expression in the body of any method is fine. For example, this is accepted:

    PersonExu::PersonExu(int a) : age(a)
    {
        if (age == 8)
            throw 8;
    }

So I was wondering if there is any patch for this, or a newer version of OpenC++ that handles it.

Thanks.

John Altidor
From: <jal...@st...> - 2004-09-02 19:56:20
Hello Everyone again,

I have fixed the throw clause problem from my previous email by changing the body of the method bool Parser::optThrowDecl(Ptree*& throw_decl) in parser.cc (for OpenC++ 2.5.12, but each version of OpenC++ has the same implementation of this method). Here is what I changed this method to:

    bool Parser::optThrowDecl(Ptree*& throw_decl)
    {
        Token tk;
        int t;
        Ptree* p = nil;
        if(lex->LookAhead(0) == THROW){
            lex->GetToken(tk);
            p = Ptree::Snoc(p, new LeafReserved(tk));
            if(lex->GetToken(tk) != '(')
                return FALSE;
            p = Ptree::Snoc(p, new Leaf(tk));
            Ptree* q;
            t = lex->LookAhead(0);
            if(t == '\0')
                return FALSE;
            else if(rTempArgList(q)){
                if(lex->GetToken(tk) != ')')
                    return FALSE;
                p = Ptree::Nconc(p, Ptree::List(q, new Leaf(tk)));
            }
            else
                return FALSE;
        }
        throw_decl = p;
        return TRUE;
    }

Hope this helps anyone.

John Altidor
From: Grzegorz J. <ja...@ac...> - 2004-09-02 13:52:35
Stefan Seefeld wrote:
> Finally, I cleaned up the Encoding class such that PTree::Nodes now hold
> PTree::Encoding instances instead of 'char *'.

Does PTree now need to know about Encoding?

BR
Grzegorz
From: Stefan S. <se...@sy...> - 2004-09-02 02:43:56
Grzegorz Jakacki wrote:
> Majority of my time invested in 2.8 release went into
> solving problems related to libgc, libltdl, libtool,
> autoconf/automake (in this order).
>
> I suggest to minimize impact of these technologies
> on OpenC++ Core Lib, so that we can focus on templates
> and overloading, and not on dynamic linking.

I totally agree. I may be missing something, but I don't understand the usefulness of running occ to build a plugin in one run, then loading and using it in a second run. Is anybody actually using such a procedure? What is its purpose? Once we get rid of dynamic loading, we can eliminate libltdl.

In synopsis I use a stripped-down build system using make and autoconf (i.e. no automake and no libtool!). The Windows version is compiled using 'gcc -mno-cygwin', which is equivalent to mingw. I anticipate moving to scons (http://www.scons.org) in the not-so-distant future. That will simplify matters even further, especially concerning portability.

As far as libgc is concerned, I'm eliminating GC from all but the ptree types, as in all other cases ownership is well defined and can be dealt with by other means (pass-by-value, for example, for Encodings). We may consider ref counting in some cases, too.

Just some ideas...

Regards,
Stefan
From: Stefan S. <se...@sy...> - 2004-09-02 00:35:35
Gilles J. Seguin wrote:
> Those language constructs are the one of OpenC++, right.
> Do you have an idea/suggestion of the structure of the test directory.

I'd keep it simple. A valid, self-contained source file will always contain a mix of these types (statements and expressions). I'd just use meaningful filenames that tell people what the focus of the test is. Let's say, some files/tests for simple variable declarations with builtin types, some for struct / class declarations, some to test inheritance (testing 'virtual', 'public', 'private', MI, etc., etc.), some to test the various template types. You get the idea...

Regards,
Stefan
From: Stefan S. <se...@sy...> - 2004-09-02 00:30:17
Grzegorz Jakacki wrote:
>> As far as synopsis is concerned, I'm looking into exposing these APIs
>> to python and then being able to script on top of that.
>
> That would also facilitate unit testing, right?

Yes indeed.

Regards,
Stefan
From: Stefan S. <se...@sy...> - 2004-09-02 00:29:03
Grzegorz Jakacki wrote:
> Would that make sense to take this code as a base for OpenC++ Core Lib?

I certainly hope so. That's why I'd like everybody who's interested in OpenC++ development to review what I've been doing!

> Ideally Core Lib should be reused with no changes between Synopsis and
> the rest of OpenC++.

Obviously, yes.

Regards,
Stefan
From: Stefan S. <se...@sy...> - 2004-09-02 00:26:47
Grzegorz Jakacki wrote:
> Stefan Seefeld wrote:
>
>>> It is a must on my list. It is important and urgent.
>>> That is, duplicate the qmtest from synopsis,
>
> Is there any chance that we do not duplicate, just share the common part?

For the tests I'd actually want to duplicate. (No worries, setting up qmtest is quite simple once you've done it once or twice :-) The reason is that we really should assess the status quo so we can measure regressions. In the long run these tests will be obsolete, but on the road we'll very much appreciate such guidance.

> What modules do your tests exercise?

I've three test applets right now: one for the Lexer, one for the Parser (without translation), and one for encodings. As synopsis doesn't use the translation step, I've not yet set up the parser with translation as another test applet.

Please note that I'm talking about the testing framework. I still need to fill it with real tests, i.e. I should provide input files that cover all possible syntax. I thus hope that Gilles will help us do that systematically.

Regards,
Stefan
From: Stefan S. <se...@sy...> - 2004-09-02 00:19:08
Grzegorz Jakacki wrote:
>> However, I must admit that I don't quite like the current design.
>> As different contexts require different lookup rules, an IMO better
>> approach would have been to provide a polymorphic 'Scope' type
>> and then implement it differently for 'Function', 'Class', 'TemplateClass',
>> etc. Then each scope would know the rules of where to look if a symbol
>> isn't defined locally. This would be more flexible and in particular
>> allow to implement different lookup semantics depending on whether
>> we are parsing C++ or C. What do you think?
>
> Generally looks good. Questions:
>
> * Scopes would replace Environments, right?

Yes. 'Scope' would be an abstract base class with methods 'declare' and 'lookup', as well as a hash for the locally declared symbols. Subclasses would then implement 'lookup' to search not only locally but in 'outer' scopes, too, according to the specific lookup semantics of functions, classes, template classes, etc.

> * What would be the connection between Scopes and other objects
>   representing e.g. functions or classes?

Scopes are associated with those objects that are, well, scopes. Right now I would probably do this in a simple dictionary, but once we get into the AST stuff we could rethink this.

> * Who would own Scopes?

Good question. Who's using them? The parser, of course, would need them, and all the Encoding / TypeInfo instances need them. Maybe we need to maintain a 'scope graph' in parallel to the parse tree / syntax tree. As there are arbitrarily many references to any given scope, maybe it's a good idea to make scopes reference counted.

Regards,
Stefan
From: <se...@in...> - 2004-09-01 21:20:51
On Tue, 2004-08-31 at 20:13, Stefan Seefeld wrote:
> > One step that I have try to do is documenting the grammar.
> > My goal was to derive tests that verify that this grammar correspond
> > to what the program is doing. And also validate that changed codes
> > do not produce unforeseen effects.
>
> right, that's what regression tests are all about, aren't they ?
>
> > My suggestion will be to used grammar with the PCCTS style.
> > What I try to get from the PCCTS style grammar is,
> > the semantic|syntactic predicates.
> [...]
> > Can you qualify relevance of this grammar
> > (important, urgent, not useful, later)
>
> yes, any test with a somewhat good coverage of potential input will help
> us. If you can come up with either some test applets or simply a set of
> input files that feature all the important language constructs

Those language constructs are the ones of OpenC++, right?

Do you have an idea/suggestion for the structure of the test directory? For example, a test suite following ISO/IEC 14882:

    +- expressions
    +- statements
    +- declarations
    +- declarators
    +- classes
    +- derived classes
    +- member access control
    +- special member functions
    +- overloading
    +- templates
    |  +- template parameters
    |  +- template arguments
    +- exception handling

> we should test the parser against, that'll help us a lot.
From: Grzegorz J. <ja...@ac...> - 2004-09-01 14:18:49
Stefan Seefeld wrote:
> Gilles J. Seguin wrote:
>
>> Templates make a C++ parser at least 'x' times harder.
>> Can you explain how we can handle template specialization with current
>> parser.
>>
>> What I understand,
>> - specializing templates requires that the tokens for a template
>> declaration be saved so that the template can be reprocessed when the
>> actual template arguments are specified.
>
> I'm not sure whether that is necessary. It might be if you wanted to
> do some validation such as make sure that the types on which a template
> should be instantiated fulfill the requirements of the template.
> I don't think this is all that useful at the moment.

Unfortunately it all is necessary. Consider this:

    class Q : public Foo<int>::type
    {
    };

You need full template support in order to get the base class of Q. However, if you restrict yourself just to parsing, then you need only one thing: you need to know whether an identifier is a template (and this knowledge is necessary only when parsing expressions, to resolve whether '<' and '>' are angle brackets or relational operators).

> Existing difficulties in resolving ambiguities during template parsing
> aside, I believe everything we need is already in place. The parser
> can parse both template declarations as well as template specializations.
> We only need to look up the first when we run into the second. All we
> need is an environment...
>
>> - it requires the syntax phase to perform near-complete type, scope, and
>> expression analysis, including function overloading.
>
> Right, it would, if we tried to validate the code.
>
>> So my believed is that we need to rejuvenate the design.
>> But my question is how.
>
> Functionality-wise I believe the big issue is with ambiguities the current
> parser can't resolve due to the two-phase parsing. We should try to merge
> these two phases into one, as we discussed in another thread.

Agreed.

> Another issue is that of the code being quite monolithic. Now that I've
> had some first cut at a refactoring I'm more convinced than ever that
> it is possible to clear up the code substantially, and thus make it simpler
> to use and enhance (the old topic of a new AST API on top of the PTree
> stuff comes to mind!).
>
> As far as synopsis is concerned, I'm looking into exposing these APIs
> to python and then being able to script on top of that.

That would also facilitate unit testing, right?

BR
Grzegorz

> That would remove
> the need for a complete, portable C++ application frontend, i.e. I believe
> it's far easier to write an 'occ' application in python, where running
> subprocesses and loading plugins is far more simple and portable than
> doing the same in C/C++. Just imagine, we could completely drop all this
> libtool / lddl business...!
>
> Regards,
> Stefan
>
> -------------------------------------------------------
> This SF.Net email is sponsored by BEA Weblogic Workshop
> FREE Java Enterprise J2EE developer tools!
> Get your free copy of BEA WebLogic Workshop 8.1 today.
> http://ads.osdn.com/?ad_id=5047&alloc_id=10808&op=click
> _______________________________________________
> Opencxx-users mailing list
> Ope...@li...
> https://lists.sourceforge.net/lists/listinfo/opencxx-users
From: Grzegorz J. <ja...@ac...> - 2004-09-01 14:13:05
Stefan Seefeld wrote:
>>> It is a must on my list. It is important and urgent.
>>> That is, duplicate the qmtest from synopsis,

Is there any chance that we do not duplicate, just share the common part?

>>> Stephan is this what you are referring to, confirm.
>
> yep, it is. In fact, with some luck I could check in some basic test
> harness into opencxx mainline still tonight, within which we then can
> add test applets as well as input files (as the ones you are proposing
> above).

What modules do your tests exercise?
From: Grzegorz J. <ja...@ac...> - 2004-09-01 14:11:12
Gilles J. Seguin wrote:
[...]
> Templates make a C++ parser at least 'x' times harder.
> Can you explain how we can handle template specialization with current
> parser.

Almost none in HEAD (there is some code in the 'exp_templates' branch, but it is dated and perhaps not worth dragging in).

> What I understand,
> - specializing templates requires that the tokens for a template
> declaration be saved so that the template can be reprocessed when the
> actual template arguments are specified.

More than that. Types in a template definition that do not depend on template arguments must be bound at the point of definition; the remaining types are bound at the point of instantiation. This means that either you have to bind the types after you read the definition, or you need to store an environment "snapshot" from the point of definition, to be able to bind appropriately at the point of instantiation.

> - it requires the syntax phase to perform near-complete type, scope, and
> expression analysis, including function overloading.

Not really. First of all, if you just want to *parse*, then the parser does not have to perform all this analysis; for the parser it is enough to know whether a given identifier denotes a template or not.

Second, even if you want to perform all this analysis during parsing, you don't have to (and IMO you should not) put the implementation into the parser. The design I am suggesting is that Parser stores a pointer to AbstractStaticAnalyzer. AbstractStaticAnalyzer is an interface that Parser uses to inform the rest of the system that it parsed certain constructs. One implementation of AbstractStaticAnalyzer may, based on the information from Parser, maintain data structures such as representations of scopes, classes or templates. However, another implementation of AbstractStaticAnalyzer may just do nothing, and it can be used for testing or in applications that are not concerned with full template processing (e.g. a pretty-printer).

> Present design predate such change to the language.
> In my vocabulary, present design is "C-Front style".

That's right.

> So my believed is that we need to rejuvenate the design.
> But my question is how.

Let me know if the description above makes sense for you.

BR
Grzegorz
From: Grzegorz J. <ja...@ac...> - 2004-09-01 14:00:50
Gilles J. Seguin wrote:
> On Mon, 2004-08-30 at 20:44, Stefan Seefeld wrote:
>
>> Gilles J. Seguin wrote:
>>
>>> What do we do with synopsis,
>>> - must we try to merge,
>>>   1) Stephan, do you have suggestions.
>>
>> not at this point. I'm still (as you may see if you follow my mails ;-)
>> learning my way through opencxx, cleaning up and refactoring the code
>> in 'my branch'.
>
> About 'my branch', I am puzzled.

Let's agree on a common repo ASAP. Stefan, what are the benefits of Fresco? Can OpenC++ get space there? Do we need to move the repo there, or would you move your branch of OpenC++ to SF?

BR
Grzegorz

> Does this branch live in opencxx or synopsis?
> If in opencxx, I was under the impression that modifications in CVS
> were sending email to ope...@li....
> What is the name of the branch? I may not be aware of a lot of
> work you have been doing.
> I am tracking through syn...@fr...
>
>> We should, however, discuss to find out whether we can agree on common
>> goals, and if so, on the means to get there. If that is done, we may
>> think about whether it is worth to merge, and how.
>> Let's do one step at a time.
>
> One step that I have tried to do is documenting the grammar.
> My goal was to derive tests that verify that this grammar corresponds
> to what the program is doing, and also validate that changed code
> does not produce unforeseen effects.
>
> My suggestion would be to use a grammar in the PCCTS style.
> What I try to get from the PCCTS-style grammar is
> the semantic|syntactic predicates.
>
> A semantic predicate is a parsing decision, on the choice of alternative
> rules, based on information not available to a pure LL(k) parser.
> For example, bool isTypeSpecifier() is the implementation of a
> syntactic predicate.
>
>     a := (A)* B
>     a := (A)* C
>
> is now
>
>     a := ( (A)* B )? (A)* B
>     a := (A)* C
>
> More information in "Language Translation Using PCCTS and C++",
> see <http://www.antlr.org/book.pcctsbk.pdf>, page 28.
>
> Can you qualify the relevance of this grammar
> (important, urgent, not useful, later)?
>
>> I believe the first step should be an assessment of the current
>> functionality.
>
> Agree. The previous comment was about documenting internals.
>
>> This involves a lot of reverse-engineering (for me at least,
>> as my understanding of opencxx is still limited),
>> as well as the setup of lots of applets to demonstrate how opencxx
>> is working (unit tests are the best means for that).
>> I've been doing this with synopsis over the last couple of weeks, and
>> I hope I can help you do the same with opencxx, so we can compare.
>
> It is a must on my list. It is important and urgent.
> That is, duplicate the qmtest from synopsis.
> Stephan, is this what you are referring to, confirm.
>
>> Does this make sense ?
>
> A lot of sense.
From: Grzegorz J. <ja...@ac...> - 2004-09-01 13:58:17
Stefan Seefeld wrote:
> Grzegorz Jakacki wrote:
>
>> I don't pretend to understand all the details of how it works, but I
>> think there is more to it than you describe. Look at
>> 'SearchBaseOrUsing()', which to my understanding handles the 'forks'
>> in the acyclic graph.
>
> you are right. I was puzzled by the 'GetOuterEnvironment()' call...
>
> However, I must admit that I don't quite like the current design.
> As different contexts require different lookup rules, an IMO better
> approach would have been to provide a polymorphic 'Scope' type
> and then implement it differently for 'Function', 'Class', 'TemplateClass',
> etc. Then each scope would know the rules of where to look if a symbol
> isn't defined locally. This would be more flexible and in particular
> allow to implement different lookup semantics depending on whether
> we are parsing C++ or C. What do you think?

Generally looks good. Questions:

* Scopes would replace Environments, right?
* What would be the connection between Scopes and other objects
  representing e.g. functions or classes?
* Who would own Scopes?

>> Also note that OpenC++ is non-validating, consequently it rightfully
>> ignores, say, ambiguities in multiple inheritance.
>
> yeah, I notice that the current implementation isn't complete.
>
>>> Does symbol lookup currently work in such cases ?
>>
>> Why not check empirically?... :-)
>
> heh, right. That brings me back to the other point: unit testing.
> Now that the release is out (congratulations !) we should try to
> cover as much as possible with unit tests so we can a) make sure
> changes don't incur regressions and b) learn how occ works in detail.

That's right. I suggest, however, to cut off the core part of OpenC++ and for now focus on unit tests for this part only.

BR
Grzegorz
From: Grzegorz J. <ja...@ac...> - 2004-09-01 13:57:07
Stefan Seefeld wrote:
> hi there,
>
> as I have been getting into the opencxx internals over the last weeks,
> I've started to refactor / redesign the code. Here are some thoughts
> about this effort.
>
> I renamed some classes to make their goal clearer, and I changed the
> method names to be more consistent with the rest of synopsis' naming
> conventions. Anyways, here is the meat:
[...]
> That's about where I am right now.

Cool! Would it make sense to take this code as a base for OpenC++ Core Lib? Ideally, Core Lib should be reused with no changes between Synopsis and the rest of OpenC++.

BR
Grzegorz
From: Grzegorz J. <ja...@ac...> - 2004-09-01 13:47:23
Hi,

Stefan very reasonably wrote about first agreeing on goals. So my goal is:

===============================================
To have an open-source refactoring tool for C++.
===============================================

And my vision of how to get there is:

==================================
First focus on OpenC++ Core Lib.
==================================

OpenC++ includes several modules of different complexity and different reuse potential. For example, the driver calling compilers and linkers is quite complex, but its reuse potential is low. IMO the most important modules of OpenC++ are the lexer, parser, program object model and static analyzer, and we should focus on them. I suggest to:

(1) Take these modules out and call them "OpenC++ Core Lib".

(2) Focus on Core Lib only, forgetting about compatibility with the rest of OpenC++ for some time.

(3) Implement templates and overloading in Core Lib, along with appropriate unit tests; document it honestly.

(4) Go back to the rest of OpenC++ and reimplement it using the Core Lib API; this will be the proof of concept that the API is complete.

How to do this:

(1) Have one repository. Let's put the code base of OpenC++ Core Lib in one repo and work in it. Let it be hosted on SF or on Fresco or wherever, but let's have ONE repo for OpenC++ Core Lib. The present situation, with Gilles here and Stefan there, only brings confusion and problems with merging.

(2) Minimize the impact of trouble-makers. The majority of my time invested in the 2.8 release went into solving problems related to libgc, libltdl, libtool, and autoconf/automake (in this order). I suggest to minimize the impact of these technologies on OpenC++ Core Lib, so that we can focus on templates and overloading, and not on dynamic linking.

(3) Keep OpenC++ Core Lib easy to compile and test. Several potential volunteers approached OpenC++ during the last year, but were put off by problems with compiling on WinXP. This shrinks the pool of potential volunteers.

Please let me know what you think. If you find this agenda in common with your goals, then let's try to get down to more technical terms and maybe some timelines or a schedule.

BR
Grzegorz
From: Stefan S. <se...@sy...> - 2004-09-01 00:27:35
Gilles J. Seguin wrote:
> Templates make a C++ parser at least 'x' times harder.
> Can you explain how we can handle template specialization with current
> parser.
>
> What I understand,
> - specializing templates requires that the tokens for a template
> declaration be saved so that the template can be reprocessed when the
> actual template arguments are specified.

I'm not sure whether that is necessary. It might be if you wanted to do some validation, such as making sure that the types on which a template is instantiated fulfill the requirements of the template. I don't think this is all that useful at the moment.

Existing difficulties in resolving ambiguities during template parsing aside, I believe everything we need is already in place. The parser can parse both template declarations and template specializations. We only need to look up the first when we run into the second. All we need is an environment...

> - it requires the syntax phase to perform near-complete type, scope, and
> expression analysis, including function overloading.

Right, it would, if we tried to validate the code.

> So my believed is that we need to rejuvenate the design.
> But my question is how.

Functionality-wise, I believe the big issue is with ambiguities the current parser can't resolve due to the two-phase parsing. We should try to merge these two phases into one, as we discussed in another thread.

Another issue is that of the code being quite monolithic. Now that I've had a first cut at a refactoring, I'm more convinced than ever that it is possible to clean up the code substantially, and thus make it much simpler to use and enhance (the old topic of a new AST API on top of the PTree stuff comes to mind!).

As far as synopsis is concerned, I'm looking into exposing these APIs to python and then being able to script on top of that. That would remove the need for a complete, portable C++ application frontend, i.e. I believe it's far easier to write an 'occ' application in python, where running subprocesses and loading plugins is far more simple and portable than doing the same in C/C++. Just imagine, we could completely drop all this libtool / lddl business...!

Regards,
Stefan
From: Stefan S. <se...@sy...> - 2004-09-01 00:16:28
Gilles J. Seguin wrote:
> About 'my branch', I am puzzle.
> Does this branch is in opencxx or synopsis.
> If in opencxx, I was under the impression that modification in CVS
> where sending email to ope...@li....
> What is the name of the branch. I may not be aware of a lot of
> work you have been doing.
> I am tracking through syn...@fr...

yeah, I'm talking about synopsis. Sorry if that wasn't clear. And again, half of my checkins don't show up on the synopsis-changes list, as the diffs are so big that it wouldn't make sense to mail them. I'll patch the commit-email.pl script some day to skip the diffs if they exceed some size...

>> We should, however, discuss to find out whether we can agree on common
>> goals, and if so, on the means to get there. If that is done, we may
>> think about whether it is worth to merge, and how.
>> Let's do one step at a time.
>
> One step that I have try to do is documenting the grammar.
> My goal was to derive tests that verify that this grammar correspond
> to what the program is doing. And also validate that changed codes
> do not produce unforeseen effects.

right, that's what regression tests are all about, aren't they?

> My suggestion will be to used grammar with the PCCTS style.
> What I try to get from the PCCTS style grammar is,
> the semantic|syntactic predicates.
>
> Semantic predicate is a parsing decision, on choice of alternative rules
> based, from information not available to a pure LL(k) parser.
> For example, bool isTypeSpecifier() is the implementation of a
> syntactic predicate.
>
>     a := (A)* B
>     a := (A)* C
>
> is now
>
>     a := ( (A)* B )? (A)* B
>     a := (A)* C
>
> more information from "Language Translation Using PCCTS and C++"
> see <http://www.antlr.org/book.pcctsbk.pdf> page 28
>
> Can you qualify relevance of this grammar
> (important, urgent, not useful, later)

yes, any test with somewhat good coverage of potential input will help us. If you can come up with either some test applets, or simply a set of input files that feature all the important language constructs we should test the parser against, that will help us a lot.

>> This involves a lot of reverse-engineering (for me at least,
>> as my understanding of opencxx is still limited),
>> as well as the setup of lots of applets to demonstrate how opencxx
>> is working (unit tests are the best means for that).
>> I've been doing this with synopsis over the last couple of weeks, and
>> I hope I can help you do the same with opencxx, so we can compare.
>
> It is a must on my list. It is important and urgent.
> That is, duplicate the qmtest from synopsis,
> Stephan is this what you are referring to, confirm.

yep, it is. In fact, with some luck I could check in some basic test harness into opencxx mainline still tonight, within which we then can add test applets as well as input files (such as the ones you are proposing above).

Regards,
Stefan
From: <se...@in...> - 2004-08-31 16:03:15
On Tue, 2004-08-31 at 02:10, Stefan Seefeld wrote:
> hi there,
>
> as I have been getting into the opencxx internals over the last weeks,
> I've started to refactor / redesign the code. Here are some thoughts
> about this effort.
>
> I renamed some classes to make their goal clearer, and I changed the
> method names to be more consistent with the rest of synopsis' naming
> conventions. Anyways, here is the meat:
[...]
> That's about where I am right now. The PTree namespace looks quite clean
> already. One issue I'm not quite satisfied with is the design of Encoding
> and TypeInfo (and Environment, just to name all classes that touch this
> design): I'd like to get back to this in the not-so-far future.
>
> Next comes the 'Environment' class. As I said in my last mail, that looks
> a bit monolithic, i.e. instead of putting specific cases into subclasses
> (class scope with lookup in base classes, to name one example), it's all
> contained in this single class.
>
> Once these points are addressed, I believe it would be a good time to
> start thinking about the Parser and how the symbol lookup could be
> integrated right into the parse stage (instead of doing a second pass
> over the ptree via ClassWalker).
>
> What do you think about these changes ? Whether we decide to work on the
> parser or keep it as it is,

Templates make a C++ parser at least 'x' times harder. Can you explain how we can handle template specialization with the current parser?

What I understand:
- specializing templates requires that the tokens for a template declaration be saved, so that the template can be reprocessed when the actual template arguments are specified.
- it requires the syntax phase to perform near-complete type, scope, and expression analysis, including function overloading.

The present design predates such changes to the language. In my vocabulary, the present design is "C-Front style". So my belief is that we need to rejuvenate the design. But my question is how.

> I believe the changes I have applied already provide an important
> cleanup of the design, and thus make it much more easy to maintain
> and extend the existing functionality.
From: <se...@in...> - 2004-08-31 14:50:42
|
On Mon, 2004-08-30 at 20:44, Stefan Seefeld wrote:
> Gilles J. Seguin wrote:
>
> > What do we do with synopsis,
> > - must we try to merge,
> > 1) Stefan, do you have suggestions.
>
> not at this point. I'm still (as you may see if you follow my mails ;-)
> learning my way through opencxx, cleaning up and refactoring the code
> in 'my branch'.

About 'my branch', I am puzzled. Is this branch in opencxx or in
synopsis? If it is in opencxx, I was under the impression that
modifications in CVS were sending email to ope...@li.... What is the
name of the branch? I may not be aware of a lot of the work you have
been doing. I am tracking through syn...@fr....

> We should, however, discuss to find out whether we can agree on common
> goals, and if so, on the means to get there. If that is done, we may
> think about whether it is worth to merge, and how.
> Let's do one step at a time.

One step that I have tried to take is documenting the grammar. My goal
was to derive tests that verify that this grammar corresponds to what
the program is doing, and also to validate that changed code does not
produce unforeseen effects.

My suggestion would be to use a grammar in the PCCTS style. What I am
trying to get from the PCCTS-style grammar is the semantic and syntactic
predicates. A semantic predicate is a parsing decision, a choice among
alternative rules, based on information not available to a pure LL(k)
parser. For example, bool isTypeSpecifier() is the implementation of a
syntactic predicate:

  a := (A)* B
  a := (A)* C

is now

  a := ( (A)* B )? (A)* B
  a := (A)* C

More information is in "Language Translation Using PCCTS and C++", see
<http://www.antlr.org/book.pcctsbk.pdf>, page 28.

Can you qualify the relevance of this grammar (important, urgent, not
useful, later)?

> I believe the first step should be an assessment of the current
> functionality.

Agreed. My previous comment was about documenting the internals.
> This involves a lot of reverse-engineering (for me at least,
> as my understanding of opencxx is still limited),
> as well as the setup of lots of applets to demonstrate how opencxx
> is working (unit tests are the best means for that).
> I've been doing this with synopsis over the last couple of weeks, and
> I hope I can help you do the same with opencxx, so we can compare.

It is a must on my list; it is important and urgent. That is, duplicate
the qmtest setup from synopsis. Stefan, is this what you are referring
to? Please confirm.

> Does this make sense ?

A lot of sense.
|
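[Editorial note: the syntactic-predicate idea in the mail above — try an alternative speculatively, as bool isTypeSpecifier() does, and rewind on failure — can be sketched like this. The class name and the single-character grammar are invented for illustration; this is not OpenC++ code.]

```cpp
#include <cassert>
#include <cstddef>
#include <string>
#include <utility>

// A toy recursive-descent parser demonstrating a syntactic predicate:
// the rules  a := (A)* B  and  a := (A)* C  share an arbitrarily long
// prefix (A)*, so no fixed lookahead k can choose between them. The
// predicate parses the first alternative speculatively and always
// restores the input position afterwards.
class PredicateParser
{
    std::string input_;
    std::size_t pos_ = 0;

    bool match(char c)
    {
        if (pos_ < input_.size() && input_[pos_] == c)
        {
            ++pos_;
            return true;
        }
        return false;
    }

public:
    explicit PredicateParser(std::string s) : input_(std::move(s)) {}

    // Predicate ( (A)* B )? : succeed iff the input starts with zero or
    // more 'A' followed by 'B'. The position is rewound in every case,
    // because a predicate is pure lookahead, not consumption.
    bool looksLikeAStarB()
    {
        std::size_t saved = pos_;   // save the input position
        while (match('A'))          // consume the common prefix (A)*
            ;
        bool ok = match('B');
        pos_ = saved;               // rewind unconditionally
        return ok;
    }
};
```

After the predicate answers, the real parse of the chosen alternative re-consumes the prefix, which is exactly the doubled `(A)* B` in the transformed rule above.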
From: Stefan S. <se...@sy...> - 2004-08-31 06:13:55
|
hi there,

as I have been getting into the opencxx internals over the last weeks,
I've started to refactor / redesign the code. Here are some thoughts
about this effort.

I renamed some classes to make their goal clearer, and I changed the
method names to be more consistent with the rest of synopsis' naming
conventions. Anyway, here is the meat:

* reduce unnecessary abstraction: The 'Program' classes are gone and
  replaced by a single 'Buffer' class, which holds the data the ptree is
  constructed on top of. If you want to read from a file, a string, or
  the console, just use the appropriate streambuf type and pass that to
  the Buffer's constructor.

* rewrite the Lexer: enhance the Token and Lexer classes. In particular,
  consider the Buffer data const, i.e. make all methods take
  'const char *' instead of 'char *'. Simplify the Lexer code to use
  std::map and std::queue instead of its own types. The performance
  penalty is tolerable (< 10%). The benefit is that individual keywords
  can be added/removed at runtime, which allows for 'cross-compilation'
  as well as support for both C++ and C.

* clean up the parse tree types: Some type renaming aside, I cleaned up
  the 'PTree::Node' (former 'Ptree') interface substantially: I
  introduced a generic 'Visitor' class and added 'accept' methods to all
  ptree node types. I then reimplemented the 'display' and 'write'
  methods as Visitor subclasses. I did the same for the 'type_of' and
  'translate' methods, so now PTree nodes no longer know about either
  TypeInfo or Walker. (Phew !!) Finally, I cleaned up the Encoding class
  such that PTree::Nodes now hold PTree::Encoding instances instead of
  'char *'.

That's about where I am right now. The PTree namespace looks quite clean
already. One issue I'm not quite satisfied with is the design of
Encoding and TypeInfo (and Environment, just to name all the classes
that touch this design): I'd like to get back to this in the not-so-far
future.

Next comes the 'Environment' class.
As I said in my last mail, that looks a bit monolithic, i.e. instead of
putting specific cases into subclasses (class scope with lookup in base
classes, to name one example), it's all contained in this single class.

Once these points are addressed, I believe it would be a good time to
start thinking about the Parser and how the symbol lookup could be
integrated right into the parse stage (instead of doing a second pass
over the ptree via ClassWalker).

What do you think about these changes? Whether we decide to work on the
parser or keep it as it is, I believe the changes I have applied already
provide an important cleanup of the design, and thus make it much easier
to maintain and extend the existing functionality.

Comments are highly appreciated,

Stefan
|
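[Editorial note: the Visitor refactoring described in the mail above — moving 'display' and friends out of the node classes into Visitor subclasses, with double dispatch via accept() — follows a well-known shape. A minimal sketch with invented names, not the actual PTree interface:]

```cpp
#include <cassert>
#include <string>
#include <vector>

struct Leaf;
struct List;

// One visit overload per concrete node type; operations on the tree
// become Visitor subclasses instead of virtual methods on the nodes.
struct Visitor
{
    virtual ~Visitor() {}
    virtual void visit(Leaf &) = 0;
    virtual void visit(List &) = 0;
};

struct Node
{
    virtual ~Node() {}
    virtual void accept(Visitor &v) = 0;   // double dispatch entry point
};

struct Leaf : Node
{
    std::string text;
    explicit Leaf(std::string t) : text(std::move(t)) {}
    void accept(Visitor &v) override { v.visit(*this); }
};

struct List : Node
{
    std::vector<Node *> children;          // non-owning, for brevity
    void accept(Visitor &v) override { v.visit(*this); }
};

// 'display' reimplemented as a Visitor: the node classes know nothing
// about printing any more.
struct Display : Visitor
{
    std::string out;
    void visit(Leaf &l) override { out += l.text + " "; }
    void visit(List &l) override
    {
        for (Node *c : l.children)
            c->accept(*this);
    }
};
```

The same pattern applies to 'type_of' and 'translate': each becomes a Visitor carrying its own state, which is how the nodes shed their TypeInfo and Walker dependencies.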
From: Stefan S. <se...@sy...> - 2004-08-31 00:47:31
|
Gilles J. Seguin wrote:

> What do we do with synopsis,
> - must we try to merge,
> 1) Stefan, do you have suggestions.

not at this point. I'm still (as you may see if you follow my mails ;-)
learning my way through opencxx, cleaning up and refactoring the code in
'my branch'.

We should, however, discuss to find out whether we can agree on common
goals, and if so, on the means to get there. If that is done, we may
think about whether it is worth merging, and how. Let's do one step at a
time.

I believe the first step should be an assessment of the current
functionality. This involves a lot of reverse-engineering (for me at
least, as my understanding of opencxx is still limited), as well as the
setup of lots of applets to demonstrate how opencxx is working (unit
tests are the best means for that). I've been doing this with synopsis
over the last couple of weeks, and I hope I can help you do the same
with opencxx, so we can compare.

Does this make sense?

Regards,
Stefan

PS: I'll write down some thoughts on the design in another mail...
|
From: Stefan S. <se...@sy...> - 2004-08-31 00:34:43
|
Grzegorz Jakacki wrote:

> I don't pretend to understand all the details of how it works, but I
> think there is more to it than you describe. Look at
> 'SearchBaseOrUsing()', which to my understanding handles the 'forks'
> in the acyclic graph.

you are right. I was puzzled by the 'GetOuterEnvironment()' call...

However, I must admit that I don't quite like the current design. As
different contexts require different lookup rules, an IMO better
approach would have been to provide a polymorphic 'Scope' type and then
implement it differently for 'Function', 'Class', 'TemplateClass', etc.
Then each scope would know the rules of where to look if a symbol isn't
defined locally. This would be more flexible, and in particular would
allow us to implement different lookup semantics depending on whether we
are parsing C++ or C. What do you think?

> Also note that OpenC++ is non-validating; consequently it rightfully
> ignores, say, ambiguities in multiple inheritance.

yeah, I noticed that the current implementation isn't complete.

>> Does symbol lookup currently work in such cases ?
>
> Why not check empirically?... :-)

heh, right. That brings me back to the other point: unit testing. Now
that the release is out (congratulations !) we should try to cover as
much as possible with unit tests, so we can a) make sure changes don't
incur regressions and b) learn how occ works in detail.

Regards,
Stefan
|
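[Editorial note: the polymorphic 'Scope' proposal in the mail above could look roughly like the following, with a ClassScope that searches base-class scopes before falling back to the enclosing scope. The names and the string-based symbol table are simplifications for illustration, not the OpenC++ design.]

```cpp
#include <cassert>
#include <map>
#include <string>
#include <vector>

// Base Scope: plain lexical lookup, falling back to the enclosing scope.
// Each scope kind overrides lookup() with its own rules, instead of one
// monolithic Environment handling every case.
struct Scope
{
    std::map<std::string, std::string> symbols;  // name -> type (toy)
    Scope *outer = nullptr;

    virtual ~Scope() {}

    virtual const std::string *lookup(const std::string &name) const
    {
        std::map<std::string, std::string>::const_iterator it =
            symbols.find(name);
        if (it != symbols.end())
            return &it->second;
        return outer ? outer->lookup(name) : nullptr;
    }
};

// Class scope: members first, then base classes (the 'forks' in the
// acyclic graph), and only then the enclosing scope.
struct ClassScope : Scope
{
    std::vector<const Scope *> bases;

    const std::string *lookup(const std::string &name) const override
    {
        std::map<std::string, std::string>::const_iterator it =
            symbols.find(name);
        if (it != symbols.end())
            return &it->second;
        for (const Scope *b : bases)
            if (const std::string *t = b->lookup(name))
                return t;
        return outer ? outer->lookup(name) : nullptr;
    }
};
```

A C mode would simply never create ClassScope instances, which is the flexibility the mail argues for.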
From: Grigorenko D. <pos...@na...> - 2004-08-30 20:38:09
|
Hello

> Dmitriy, can you help with the Cygwin compatibility?

With what? With Synopsis? I am not sure. In any case I can try to
compile it and post results.
|