From: Stefan S. <se...@sy...> - 2004-08-17 04:47:45

Grzegorz Jakacki wrote:
> Encoding no longer depends on TypeInfo, offending methods have been
> promoted to EncodingUtil.

yeah, I noticed. But is that really a solution? The Encoding class should
encapsulate the representation of declarator types/names, and moving half
of it out into 'EncodingUtil' breaks that encapsulation, i.e. even though
EncodingUtil is now a separate piece of code, it very much depends on the
intrinsics of the Encoding class. In fact even the TypeInfo class depends
to a large degree on Encoding, so I wonder whether we shouldn't try to
refactor these classes.

Another issue is that of TypeInfo's expanded vs. non-expanded form...

>>> Typeinfo --- this class is intended to represent types, however the
>>> representation is not unique, because:
>>>
>>> - typedefs are expanded lazily (thus many operations need
>>>   an environment, to be able to look up typedef definitions),
>>> - dereferences ('*') are applied lazily.
>>
>> yeah. Is this laziness really necessary? Does it gain performance?
>> I'm just wondering because without it (i.e. putting 'Normalize' into
>> the constructor), most methods could become 'const'...
>> Or alternatively the inner representation (encoding) could become
>> 'mutable', such that normalizing a TypeInfo keeps the invariants
>> intact and thus can semantically be considered a no-op.
>
> This is a profound question.
>
> In theory this laziness is not necessary. However in practice it may be
> very useful (for the users, not implementors). 'gcc' is very much
> praised for expanding typedefs lazily, which results in much more
> readable error messages (especially if one uses lots of templates). My
> suggestion is not to remove it unless it is really really really
> troublesome to maintain it.
>
> The solution with 'mutable' has a drawback: the const operations will
> actually change the observable state of the object, because
> 'FullTypeName()' and 'MakePtree()' will not necessarily return the same
> result after normalization.

Speaking of which, what does 'MakePtree' actually generate? Or, to push
the question further, what do the two encodings of a declarator contain?
What is a 'type encoding', and what is a 'name encoding'? Why isn't one
enough? A declarator either declares a type or an instance (let's count a
function as an instance for now), and in the first case I need the type
encoding, in the second the name encoding, no? What am I missing?

Thanks for any clarifications,

Stefan
From: Grzegorz J. <ja...@ac...> - 2004-08-16 15:01:04

Stefan Seefeld wrote:

[...]

> Hmm, but to understand the matter better I have to see encodings and
> typeinfos in use. This may even be a good candidate for some unit
> tests:
>
> * parse some declarations
> * inspect (dump) the generated encodings
> * regenerate new ptrees from these encodings
> * compare these with the original declarations
>
> That would be a very good demonstration / documentation as well as a
> good test case.

Definitely. I can try to tackle it, at least to build several model unit
tests (perhaps with QMTest), however I would like to get 2.8 out first.
I will try to get down to these unit tests within three weeks from now.

BR
Grzegorz

> Regards,
> Stefan
>
> -------------------------------------------------------
> SF.Net email is sponsored by Shop4tech.com - Lowest price on Blank Media
> 100pk Sonic DVD-R 4x for only $29 - 100pk Sonic DVD+R for only $33
> Save 50% off Retail on Ink & Toner - Free Shipping and Free Gift.
> http://www.shop4tech.com/z/Inkjet_Cartridges/9_108_r285
> _______________________________________________
> Opencxx-users mailing list
> Ope...@li...
> https://lists.sourceforge.net/lists/listinfo/opencxx-users
From: Grzegorz J. <jak...@tr...> - 2004-08-16 11:42:26

Hi,

This is a fragment of some libocc_mop.la file created by libtool 1.5.6:

# The name that we can dlopen(3).
dlname='libocc_mop.0'
# Names of this library.
library_names='libocc_mop.0.0.0 libocc_mop.0 libocc_mop'
# The name of the static archive.
old_library='libocc_mop.a'

Why is there no '.so' in dlname and library_names?

Thanks
Grzegorz

---
Free C++ frontend library: http://opencxx.sourceforge.net
China from the inside: http://www.staryhutong.com
Sender of this e-mail: http://www.dziupla.net/gj/cv
From: Stefan S. <se...@sy...> - 2004-08-16 01:12:37

Grzegorz Jakacki wrote:
> Hm... Maybe we really should not bother with this laziness for now...
> However we may have problems later, since e.g. in refactoring, when
> using some existing type to produce new declarations, it is very
> desirable to have it in the least expanded form. Could you briefly show
> what exactly would be substantially easier?

well, right now it is for me still a matter of understanding, not
implementing. Your hint at intelligible error messages using the
un-normalized name is very valuable. It suggests that normalizing is not
just an internal optimization detail, but rather that it may be useful
to make this more explicit, i.e. make it possible for the user to access
a typeinfo both in its expanded and non-expanded form.

Hmm, but to understand the matter better I have to see encodings and
typeinfos in use. This may even be a good candidate for some unit tests:

* parse some declarations
* inspect (dump) the generated encodings
* regenerate new ptrees from these encodings
* compare these with the original declarations

That would be a very good demonstration / documentation as well as a
good test case.

Regards,
Stefan
From: Grzegorz J. <ja...@ac...> - 2004-08-14 10:43:49

Hi,

Stefan Seefeld wrote:
> Hi Grzegorz,
>
> thanks for taking this up :-)

No problem.

> I'll try to rephrase the documentation. Let's
> iterate over it until it is correct.
>
> Once everything is covered and correct, it should
> be put into the code, so synopsis can extract it
> into a reference manual :-)

That's what I had in mind.

> Grzegorz Jakacki wrote:
>
>>> * Encoding::GetBaseName()
>>
>> This is now EncodingUtil::GetBaseName(). Comment says:
>>
>> // GetBaseName() returns "Foo" if ENCODE is "Q[2][1]X[3]Foo", for
>> // example.
>> // If an error occurs, the function returns 0.
>>
>> Let me know what is unclear.
>
> Here is my rephrased doc:
>
> The result of GetBaseName is a substring of the 'encoding' argument.
> [but what is 'base'?]
> The Environment will point to the scope containing the declaration.

Preconditions:
- (encode, len@pre) is the encoding of a type T

Postconditions:
- U is type T with qualification stripped, assuming all qualifying
  typedefs and templates were looked up starting from environment
  'env'@pre
- 'env'@post is the environment in which U is declared
- (return_value, len@post) is the encoding of U

Example: assuming (encode, len@pre) is "Q[2][1]X[3]Foo" (i.e. 'X::Foo'),
(return_value, len@post) is "[3]Foo" (i.e. 'Foo').

>>> * Encoding::ResolveTypedefName()
>>
>> Also has been moved to EncodingUtil::ResolveTypedefName().
>>
>> Takes environment 'env' and typedef encoding ('name' & 'len'), returns
>> the environment in which the given typedef is defined.
>
> so the precondition is that 'name' is a typedef?

I think the precondition is that (name, len) is a name (not sure what to
call it really), e.g. a typedef name or a class name, but not e.g. a
pointer type.

> By the way, I wonder whether this method can be reimplemented without
> TypeInfo.

Looks like so, but I am not 100% sure. Perhaps it is enough to replace

    bind->GetType(tinfo, env);
    c = tinfo.ClassMetaobject();

with

    c = bind->ClassMetaobject();

['env' is anyway reset to 0 in a moment.]

> AFAICS it's the only place where Encoding depends on TypeInfo, thus
> introducing a circular dependency.

Encoding no longer depends on TypeInfo, offending methods have been
promoted to EncodingUtil.

>> Typeinfo --- this class is intended to represent types, however the
>> representation is not unique, because:
>>
>> - typedefs are expanded lazily (thus many operations need
>>   an environment, to be able to look up typedef definitions),
>> - dereferences ('*') are applied lazily.
>
> yeah. Is this laziness really necessary? Does it gain performance?
> I'm just wondering because without it (i.e. putting 'Normalize' into
> the constructor), most methods could become 'const'...
> Or alternatively the inner representation (encoding) could become
> 'mutable', such that normalizing a TypeInfo keeps the invariants
> intact and thus can semantically be considered a no-op.

This is a profound question.

In theory this laziness is not necessary. However in practice it may be
very useful (for the users, not implementors). 'gcc' is very much
praised for expanding typedefs lazily, which results in much more
readable error messages (especially if one uses lots of templates). My
suggestion is not to remove it unless it is really really really
troublesome to maintain it.

The solution with 'mutable' has a drawback: the const operations will
actually change the observable state of the object, because
'FullTypeName()' and 'MakePtree()' will not necessarily return the same
result after normalization.

A quick solution allowing us to keep laziness, yet introduce constness
(and preserve the correct behaviour of 'FullTypeName()' and co.) is
e.g.:

(1) Privatize the mutating member function 'Foo()' and rename it to,
    say, 'MutatingFoo()'
(2) Introduce a public

    bool TypeInfo::Foo() const
    {
        TypeInfo tmp(*this);
        return tmp.MutatingFoo();
    }

What do you think?

[...]

>>> * TypeInfo::Normalize()
>>
>> [I fail to understand this function clearly; my findings:]
>> - strips top-level cv-qualifiers
>> - if *this represents a dereferenced function pointer, changes *this
>>   so that it represents the function return type,
>> - if *this represents a dereferenced typedef, expands the typedef and
>>   tries to proceed with dereferencing
>>
>> The name suggests that the actual represented type should not change,
>> although its representation may (e.g. typedefs are expanded if
>> necessary). However some details are mysterious (member function
>> pointers for instance).

Another mysterious detail is why cv-qualifiers are stripped.

> This method is indeed the most puzzling: it only operates
> on temporary variables, so what does it *really* do?
> It looks like the lazy evaluation of (de)reference and typedefs,
> but the result is never used, as the member variables aren't reset
> to the values of the local variables...?!?

It calls ResolveTypedef(), and that is a mutating member function.

>>> * TypeInfo::SkipCv()
>>
>> Strips all top-level cv-qualifiers from the type represented by
>> *this; if after stripping *this represents a typedef, expands the
>> typedef and proceeds.
>
> good. (though here again: all the complexity stems from the fact that
> refs and typedefs are handled lazily. If they weren't, the
> implementation would become *much* simpler!)

Hm... Maybe we really should not bother with this laziness for now...
However we may have problems later, since e.g. in refactoring, when
using some existing type to produce new declarations, it is very
desirable to have it in the least expanded form. Could you briefly show
what exactly would be substantially easier?

Best regards
Grzegorz

>>> * TypeInfo::SkipName(encode, e)
>>
>> Preconditions:
>> - encode points into a cstring which encodes a typeinfo.
>> - encode points to the beginning of a class or template name within
>>   the encoded typeinfo cstring
>> Postcondition:
>> - the return value points immediately after the class or template
>>   name in the encoded typeinfo cstring; if the cstring contains
>>   typedef names, they are expanded in environment e
>
>>> * TypeInfo::ResolveTypedef(e, ptr, resolvable)
>>
>> Preconditions:
>> - ptr points to the beginning of a typedef name in an encoding
>> - e is an environment in which the typedef occurs (lookup begins
>>   in this environment)
>> Postconditions:
>> - if the typedef can be correctly looked up, *this represents the
>>   same type, but with the typedef name expanded.
>> - otherwise *this is unchanged if 'resolvable == false', or
>>   becomes unknown (a special state) otherwise.
>
> nice explanation!
>
>> Hope this helps.
>
> It does quite a lot. Thanks!
>
> Best regards,
> Stefan
From: Stefan S. <sse...@ar...> - 2004-08-13 17:22:52

Hi Grzegorz,

thanks for taking this up :-)

I'll try to rephrase the documentation. Let's iterate over it until it
is correct.

Once everything is covered and correct, it should be put into the code,
so synopsis can extract it into a reference manual :-)

Grzegorz Jakacki wrote:
>> * Encoding::GetBaseName()
>
> This is now EncodingUtil::GetBaseName(). Comment says:
>
> // GetBaseName() returns "Foo" if ENCODE is "Q[2][1]X[3]Foo", for
> // example.
> // If an error occurs, the function returns 0.
>
> Let me know what is unclear.

Here is my rephrased doc:

The result of GetBaseName is a substring of the 'encoding' argument.
[but what is 'base'?]
The Environment will point to the scope containing the declaration.

>> * Encoding::ResolveTypedefName()
>
> Also has been moved to EncodingUtil::ResolveTypedefName().
>
> Takes environment 'env' and typedef encoding ('name' & 'len'), returns
> the environment in which the given typedef is defined.

so the precondition is that 'name' is a typedef?

By the way, I wonder whether this method can be reimplemented without
TypeInfo. AFAICS it's the only place where Encoding depends on TypeInfo,
thus introducing a circular dependency.

> Typeinfo --- this class is intended to represent types, however the
> representation is not unique, because:
>
> - typedefs are expanded lazily (thus many operations need
>   an environment, to be able to look up typedef definitions),
> - dereferences ('*') are applied lazily.

yeah. Is this laziness really necessary? Does it gain performance?
I'm just wondering because without it (i.e. putting 'Normalize' into
the constructor), most methods could become 'const'...
Or alternatively the inner representation (encoding) could become
'mutable', such that normalizing a TypeInfo keeps the invariants intact
and thus can semantically be considered a no-op.

>> * TypeInfo::Reference()
>
> Precondition: *this represents some type T
> Postcondition: *this represents type T*

fine

>> * TypeInfo::Dereference(t)
>
> Precondition: *this represents some type equivalent to T*
> Postcondition: *this represents type T

fine

>> * TypeInfo::Normalize()
>
> [I fail to understand this function clearly; my findings:]
> - strips top-level cv-qualifiers
> - if *this represents a dereferenced function pointer, changes *this
>   so that it represents the function return type,
> - if *this represents a dereferenced typedef, expands the typedef and
>   tries to proceed with dereferencing
>
> The name suggests that the actual represented type should not change,
> although its representation may (e.g. typedefs are expanded if
> necessary). However some details are mysterious (member function
> pointers for instance).

This method is indeed the most puzzling: it only operates on temporary
variables, so what does it *really* do? It looks like the lazy
evaluation of (de)reference and typedefs, but the result is never used,
as the member variables aren't reset to the values of the local
variables...?!?

>> * TypeInfo::SkipCv()
>
> Strips all top-level cv-qualifiers from the type represented by *this;
> if after stripping *this represents a typedef, expands the typedef and
> proceeds.

good. (though here again: all the complexity stems from the fact that
refs and typedefs are handled lazily. If they weren't, the
implementation would become *much* simpler!)

>> * TypeInfo::SkipName(encode, e)
>
> Preconditions:
> - encode points into a cstring which encodes a typeinfo.
> - encode points to the beginning of a class or template name within
>   the encoded typeinfo cstring
> Postcondition:
> - the return value points immediately after the class or template name
>   in the encoded typeinfo cstring; if the cstring contains typedef
>   names, they are expanded in environment e

>> * TypeInfo::ResolveTypedef(e, ptr, resolvable)
>
> Preconditions:
> - ptr points to the beginning of a typedef name in an encoding
> - e is an environment in which the typedef occurs (lookup begins
>   in this environment)
> Postconditions:
> - if the typedef can be correctly looked up, *this represents the same
>   type, but with the typedef name expanded.
> - otherwise *this is unchanged if 'resolvable == false', or
>   becomes unknown (a special state) otherwise.

nice explanation!

> Hope this helps.

It does quite a lot. Thanks!

Best regards,
Stefan
From: Grzegorz J. <ja...@ac...> - 2004-08-13 16:44:21

Stefan Seefeld wrote:
> hi there,
>
> I'm still struggling with the internal APIs as I'm
> trying to make them more type-safe and const-correct.
>
> Here are some specific questions:
>
> What's the semantics (goal in general, invariants, etc.)
> of the following methods:

Below is what I managed to figure out quickly. Please verify. If
anything does not make sense, let me know. If it does, *please* insert
it as a comment.

> * Encoding::GetBaseName()

This is now EncodingUtil::GetBaseName(). Comment says:

// GetBaseName() returns "Foo" if ENCODE is "Q[2][1]X[3]Foo", for
// example.
// If an error occurs, the function returns 0.

Let me know what is unclear.

> * Encoding::ResolveTypedefName()

Also has been moved to EncodingUtil::ResolveTypedefName().

Takes environment 'env' and typedef encoding ('name' & 'len'), returns
the environment in which the given typedef is defined.

Typeinfo --- this class is intended to represent types, however the
representation is not unique, because:

- typedefs are expanded lazily (thus many operations need
  an environment, to be able to look up typedef definitions),
- dereferences ('*') are applied lazily.

> * TypeInfo::Reference()

Precondition: *this represents some type T
Postcondition: *this represents type T*

> * TypeInfo::Dereference(t)

Precondition: *this represents some type equivalent to T*
Postcondition: *this represents type T

> * TypeInfo::Normalize()

[I fail to understand this function clearly; my findings:]
- strips top-level cv-qualifiers
- if *this represents a dereferenced function pointer, changes *this
  so that it represents the function return type,
- if *this represents a dereferenced typedef, expands the typedef and
  tries to proceed with dereferencing

The name suggests that the actual represented type should not change,
although its representation may (e.g. typedefs are expanded if
necessary). However some details are mysterious (member function
pointers for instance).

> * TypeInfo::SkipCv()

Strips all top-level cv-qualifiers from the type represented by *this;
if after stripping *this represents a typedef, expands the typedef and
proceeds.

> * TypeInfo::SkipName(encode, e)

Preconditions:
- encode points into a cstring which encodes a typeinfo.
- encode points to the beginning of a class or template name within
  the encoded typeinfo cstring
Postcondition:
- the return value points immediately after the class or template name
  in the encoded typeinfo cstring; if the cstring contains typedef
  names, they are expanded in environment e

> * TypeInfo::ResolveTypedef(e, ptr, resolvable)

Preconditions:
- ptr points to the beginning of a typedef name in an encoding
- e is an environment in which the typedef occurs (lookup begins
  in this environment)
Postconditions:
- if the typedef can be correctly looked up, *this represents the same
  type, but with the typedef name expanded.
- otherwise *this is unchanged if 'resolvable == false', or
  becomes unknown (a special state) otherwise.

> Thanks for any clarification !

Hope this helps.

Best regards
Grzegorz

> Stefan
From: Stefan S. <sse...@ar...> - 2004-08-13 13:51:44

Hi Gilles,

Gilles J. Seguin wrote:
> On Thu, 2004-08-12 at 21:42, Stefan Seefeld wrote:
>> What's the semantics (goal in general, invariants, etc.)
>> of the following methods:
>>
>> * Encoding::GetBaseName()
>> * Encoding::ResolveTypedefName()
>> * TypeInfo::Reference()
>> * TypeInfo::Dereference()
>> * TypeInfo::Normalize()
>> * TypeInfo::SkipCv()
>> * TypeInfo::SkipName()
>> * TypeInfo::ResolveTypedef()
>>
>> Thanks for any clarification !
>
> Allo Stefan,
>
> One word: mangling.

yeah, I know. I do understand the principles of type encoding and name
mangling in general. My problem is with the specifics of the above
methods.

To explain my question a bit better: right now everything in opencxx is
done with garbage-collected character buffers. That's almost as bad as
passing void pointers around. I'd like to replace 'char *' by
'const char *' where ptrees point into the file buffer that is being
parsed. For the encodings I'm using std::basic_string<unsigned char>.

Now, there are public methods that generate new 'derived' encodings from
old ones (the same way you can derive types). However, some methods are
merely used to parse an encoding, i.e. they don't need to create a new
encoding but simply return a pointer *into* an existing encoding (I'm of
course using iterators for that).

Take the 'SkipCv' method: it sounds as if it returns the sub-encoding
with potential 'cv' tags stripped off. An ideal candidate for an
iterator, one would think. However, if you look into the code, you
realize that there's quite a bit more going on. And that's exactly what
I'd like to understand in detail... :-)

Regards,
Stefan
From: Stefan S. <se...@sy...> - 2004-08-13 01:45:48

hi there,

I'm still struggling with the internal APIs as I'm trying to make them
more type-safe and const-correct.

Here are some specific questions:

What's the semantics (goal in general, invariants, etc.) of the
following methods:

* Encoding::GetBaseName()
* Encoding::ResolveTypedefName()
* TypeInfo::Reference()
* TypeInfo::Dereference()
* TypeInfo::Normalize()
* TypeInfo::SkipCv()
* TypeInfo::SkipName()
* TypeInfo::ResolveTypedef()

Thanks for any clarification !

Stefan
From: Grzegorz J. <ja...@ac...> - 2004-08-12 14:31:24

Gilles J. Seguin wrote:
> On Wed, 2004-08-11 at 11:27, Grzegorz Jakacki wrote:
>
>> Gilles J. Seguin wrote:
>>
>>> Grzegorz,
>>>
>>> Can you verify this.
>>> The function is called Second() but returns the third element
>>> of the list.
>
> in opencxx/parser/PtreeBrace.cc on line 59, we have
>
>     Ptree* Second(Ptree* p)
>
> This function returns the third element. In fact I do not care which
> element number it is, I want the element containing "body".
>
> The impact is huge because "body" is also in the rule
>
>     namespace identifier '{' body '}'
>
>> I am not sure if I understand. PtreeUtil::Second(), as it stands
>
> Why are you talking about the PtreeUtil.cc file?
> Grzegorz, the change is in opencxx/parser/PtreeBrace.cc

Sorry! Please disregard what I wrote in the previous e-mail.

The function Opencxx::Detail::Second() declared and defined in
opencxx/parser/PtreeBrace.cc returns the list with the two initial
elements stripped (or NULL if the original list is too short), e.g. for

    NonLeaf
     /    \
    A    NonLeaf
          /    \
         D      E

it will return E. After your patch it will return

    NonLeaf
     /    \
    D      E

Observe that this function is used *only* in PtreeBrace::Print().
Moreover, it is not part of any interface. If the current implementation
is buggy, please go ahead and fix it. You don't have to worry about
other clients of this function, because they don't exist. Moreover, as
this function is local to opencxx/parser/PtreeBrace.cc, changing its
name does not have any impact on external clients. In such situations
feel free to rename (the only request is that you stick to the naming
convention of the neighbouring code, e.g. 'GetBody', not 'get_body').

Again, sorry for the confusion I caused by my previous e-mail.

BR
Grzegorz

>> both in HEAD and rel_2_8,
>
> The patch says against version 1.2.2.1,
> which is branch rel_2_8.
>
>> returns the second element of the list (or
>> NULL if it does not exist). After applying the patch from your
>> e-mail (shown below), it will indeed return the third element.
>> This is a patch against what?
>
> against CVS
>
>     $ cd opencxx/parser
>     $ cvs diff -u PtreeBrace.cc
>
>>> Since the results from qmtest are hardcoded,
>>>     $ qmtest -o myresults.qmr run   # lowercase 'o'
>>> on future tests do
>>>     $ qmtest -O myresults.qmr run   # uppercase 'O'
>>
>> Where is the version with qmtest?
>
> the link to the file is <http://www.infonet.ca/segg/occ.tgz>
> Since the patch modifies the output of "occ",
> nearly all tests fail.
> The previous commands allow generating valid new output to retest
> against.
>
>>> diff -u -r1.2.2.1 PtreeBrace.cc
>>> --- PtreeBrace.cc 8 Jul 2004 11:57:13 -0000 1.2.2.1
>>> +++ PtreeBrace.cc 10 Aug 2004 13:13:03 -0000
>>> @@ -59,10 +59,10 @@
>>>  Ptree* Second(Ptree* p)
>>>  {
>>>      if (p) {
>>> -        p=p->Cdr();
>>> -        if (p) {
>>> +        //p=p->Cdr();
>>> +        //if (p) {
>>>          return p->Cdr();
>>> -        }
>>> +        //}
>>>      }
>>>      return p;
>>>  }
From: <se...@in...> - 2004-08-12 07:24:19

On Wed, 2004-08-11 at 11:27, Grzegorz Jakacki wrote:
> Gilles J. Seguin wrote:
>> Grzegorz,
>>
>> Can you verify this.
>> The function is called second() but returns the third element
>> of the list.

in opencxx/parser/PtreeBrace.cc on line 59, we have

    Ptree* Second(Ptree* p)

This function returns the third element. In fact I do not care which
element number it is, I want the element containing "body".

The impact is huge because "body" is also in the rule

    namespace identifier '{' body '}'

> I am not sure if I understand. PtreeUtil::Second(), as it stands

Why are you talking about the PtreeUtil.cc file?
Grzegorz, the change is in opencxx/parser/PtreeBrace.cc

> both in HEAD and rel_2_8,

The patch says against version 1.2.2.1, which is branch rel_2_8.

> returns the second element of the list (or
> NULL if it does not exist). After applying the patch from your
> e-mail (shown below), it will indeed return the third element.
> This is a patch against what?

against CVS

    $ cd opencxx/parser
    $ cvs diff -u PtreeBrace.cc

> BR
> Grzegorz
>
>> Since the results from qmtest are hardcoded,
>>     $ qmtest -o myresults.qmr run   # lowercase 'o'
>> on future tests do
>>     $ qmtest -O myresults.qmr run   # uppercase 'O'
>
> Where is the version with qmtest?

the link to the file is <http://www.infonet.ca/segg/occ.tgz>
Since the patch modifies the output of "occ", nearly all tests fail.
The previous commands allow generating valid new output to retest
against.

> diff -u -r1.2.2.1 PtreeBrace.cc
> --- PtreeBrace.cc 8 Jul 2004 11:57:13 -0000 1.2.2.1
> +++ PtreeBrace.cc 10 Aug 2004 13:13:03 -0000
> @@ -59,10 +59,10 @@
>  Ptree* Second(Ptree* p)
>  {
>      if (p) {
> -        p=p->Cdr();
> -        if (p) {
> +        //p=p->Cdr();
> +        //if (p) {
>          return p->Cdr();
> -        }
> +        //}
>      }
>      return p;
>  }
From: Stefan S. <se...@sy...> - 2004-08-11 22:45:21

Grzegorz Jakacki wrote:
> Stefan, could we steer toward integration of what you have in Synopsis
> with what is currently in opencxx.sf.net? I.e. could you integrate
> Gilles' development in a way that can be backported to (or merged
> with) the opencxx from SF CVS?

sure,

Stefan
From: Grzegorz J. <ja...@ac...> - 2004-08-11 15:32:29

-------- Original Message --------
Subject: Re: [Opencxx-users] parser/PtreeBrace.cc
Date: Wed, 11 Aug 2004 23:10:03 +0800
From: Grzegorz Jakacki <ja...@ac...>
To: Gilles J. Seguin <se...@in...>
References: <109...@se...>

Gilles J. Seguin wrote:
> Grzegorz,
>
> Can you verify this.
> The function is called second() but returns the third element
> of the list.
> get_body() would have been a better name.

Hi,

I am not sure if I understand. PtreeUtil::Second(), as it stands both in
HEAD and rel_2_8, returns the second element of the list (or NULL if it
does not exist). After applying the patch from your e-mail (shown
below), it will indeed return the third element. This is a patch against
what?

BR
Grzegorz

> Since the results from qmtest are hardcoded,
>     $ qmtest -o myresults.qmr run   # little 'o'
> on future tests do
>     $ qmtest -O myresults.qmr run   # uppercase 'O'

Where is the version with qmtest?

BR
Grzegorz

> diff -u -r1.2.2.1 PtreeBrace.cc
> --- PtreeBrace.cc 8 Jul 2004 11:57:13 -0000 1.2.2.1
> +++ PtreeBrace.cc 10 Aug 2004 13:13:03 -0000
> @@ -59,10 +59,10 @@
>  Ptree* Second(Ptree* p)
>  {
>      if (p) {
> -        p=p->Cdr();
> -        if (p) {
> +        //p=p->Cdr();
> +        //if (p) {
>          return p->Cdr();
> -        }
> +        //}
>      }
>      return p;
>  }
From: Grzegorz J. <ja...@ac...> - 2004-08-11 15:30:35
|
Stefan Seefeld wrote: > Gilles J. Seguin wrote: > >> Tests for the occ binary on the PATH environment variable >> <http://www.infonet.ca/segg/occ.tgz> >> will unpack in occ directory. >> $ cd occ >> $ qmtest run # will run everything. >> $ qmtest run parse # will do the same as 'qmtest run' >> since we have only one directory. >> $ qmtest run parse.access1 # will run only one test. >> >> The tests examine results from 'occ -p -s myfile.cc' > > > nice ! Some suggestions, though: > > you hardcode the path '/usr/bin/occ' in each single test descriptor. > I'v just changed one to see that it works, and I'm not going to edit > all these files :-) > > What I'd suggest to do instead is defining a new test (python) class > that is then registered with the database and used for all tests. > Then you can configure paths and other stuff right into this class > *once*. > > Second I think it would be good to substructure the tests. In fact > I'd suggest to write a new 'test database', and then use the subdivision > into test suites to set up test types for each suite separately. > > That will allow to tailor the test types, for example one suite/folder > to test the Encoding class, one to test ptree generation / manipulation, > etc., etc. > > Note that this is exactly the approach I'v taken in synopsis, i.e. > I wrote my own database and a handfull of test classes to do various > things (one test class compiles a C++ file and executes it, another > runs a particular synopsis processor on a specific input file, you > get the idea...) > > That said, I think what you'v done is a nice starting point ! Absolutely. > May > I integrate it into my existing testing harness ? Stefan, could we steer toward integration of what you have in Synopsis with what is currently in opencxx.sf.net ? I.e. could you integrate Gilles' development it in a way that can be backported to (or merged with) the opencxx from SF CVS? 
Thanks Grzegorz > > Regards, > Stefan |
From: <se...@in...> - 2004-08-11 02:25:50
|
On Tue, 2004-08-10 at 19:16, Stefan Seefeld wrote: > Gilles J. Seguin wrote: > > > Tests for the occ binary on the PATH environment variable > > <http://www.infonet.ca/segg/occ.tgz> > > will unpack in occ directory. > > $ cd occ > > $ qmtest run # will run everything. > > $ qmtest run parse # will do the same as 'qmtest run' > > since we have only one directory. > > $ qmtest run parse.access1 # will run only one test. > > > > The tests examine results from 'occ -p -s myfile.cc' > > nice ! Some suggestions, though: > > you hardcode the path '/usr/bin/occ' in each single test descriptor. > I've just changed one to see that it works, and I'm not going to > edit all these files :-) > > What I'd suggest to do instead is to define a new test (python) class > that is then registered with the database and used for all tests. > Then you can configure paths and other stuff right into this class > *once*. > > Second I think it would be good to substructure the tests. In fact > I'd suggest to write a new 'test database', and then use the subdivision > into test suites to set up test types for each suite separately. The tests are taken from gcc/gcc/testsuite/g++.dg (branch tag gcc-3_4-rhl-branch) and follow the same structure. The tar file was a copy of a test directory running occ on the uninstalled binary and library, to which I applied a sed command on every file. > That will allow you to tailor the test types, for example one suite/folder > to test the Encoding class, one to test ptree generation / manipulation, > etc., etc. What I am trying to do now is to generate source files based on rules (rProgram, rDeclaration, rClassSpec). Do they produce valid ASTs ? First step: document the grammar. > Note that this is exactly the approach I've taken in synopsis, i.e. 
> I wrote my own database and a handful of test classes to do various > things (one test class compiles a C++ file and executes it, another > runs a particular synopsis processor on a specific input file, you > get the idea...) > > That said, I think what you've done is a nice starting point ! May > I integrate it into my existing testing harness ? > > Regards, > Stefan |
From: Stefan S. <se...@sy...> - 2004-08-10 23:19:37
|
Gilles J. Seguin wrote: > Tests for the occ binary on the PATH environment variable > <http://www.infonet.ca/segg/occ.tgz> > will unpack in occ directory. > $ cd occ > $ qmtest run # will run everything. > $ qmtest run parse # will do the same as 'qmtest run' > since we have only one directory. > $ qmtest run parse.access1 # will run only one test. > > The tests examine results from 'occ -p -s myfile.cc' nice ! Some suggestions, though: you hardcode the path '/usr/bin/occ' in each single test descriptor. I've just changed one to see that it works, and I'm not going to edit all these files :-) What I'd suggest to do instead is to define a new test (python) class that is then registered with the database and used for all tests. Then you can configure paths and other stuff right into this class *once*. Second I think it would be good to substructure the tests. In fact I'd suggest to write a new 'test database', and then use the subdivision into test suites to set up test types for each suite separately. That will allow you to tailor the test types, for example one suite/folder to test the Encoding class, one to test ptree generation / manipulation, etc., etc. Note that this is exactly the approach I've taken in synopsis, i.e. I wrote my own database and a handful of test classes to do various things (one test class compiles a C++ file and executes it, another runs a particular synopsis processor on a specific input file, you get the idea...) That said, I think what you've done is a nice starting point ! May I integrate it into my existing testing harness ? Regards, Stefan |
From: <se...@in...> - 2004-08-10 14:43:01
|
On Fri, 2004-08-13 at 06:01, Gilles J. Seguin wrote: > Grzegorz, > > Can you verify this. > function is call second() but return the third element > of the list. > get_body() would have being a better name. > > Since the result from qmtest are hardcoded, > $ qmtest -o myresults.qmr run # little 'o' $ qmtest run -o myresults.qmr # little 'o' > on future test do > $ qmtest -O myresults.qmr run # uppercase 'O' $ qmtest run -O myresults.qmr # uppercase 'O' |
From: <se...@in...> - 2004-08-10 14:39:48
|
On Fri, 2004-08-13 at 06:47, Gilles J. Seguin wrote: > On Fri, 2004-08-13 at 06:08, Gilles J. Seguin wrote: > > Tests for the occ binary on the PATH environment variable > > <http://www.infonet.ca/segg/occ.tgz> > > will unpack in occ directory. > > $ cd occ > > for attachment, untar here. We need the attachment, and I should not reply to myself, duh. > will add directories rtti.qms and rtti_src > $ qmtest run rtti > > > $ qmtest run # will run everything. > > $ qmtest run parse # will do the same as 'qmtest run' > > since we have only one directory. > > $ qmtest run parse.access1 # will run only one test. > > > > The tests examine results from 'occ -p -s myfile.cc' |
From: <se...@in...> - 2004-08-10 14:15:45
|
On Fri, 2004-08-13 at 06:08, Gilles J. Seguin wrote: > Tests for the occ binary on the PATH environment variable > <http://www.infonet.ca/segg/occ.tgz> > will unpack in occ directory. > $ cd occ For the attachment, untar it here. It will add directories rtti.qms and rtti_src $ qmtest run rtti > $ qmtest run # will run everything. > $ qmtest run parse # will do the same as 'qmtest run' > since we have only one directory. > $ qmtest run parse.access1 # will run only one test. > > The tests examine results from 'occ -p -s myfile.cc' |
From: <se...@in...> - 2004-08-10 13:39:26
|
On Tue, 2004-08-10 at 04:58, Grzegorz Jakacki wrote: > Stefan Seefeld wrote: > > Hi Gilles, > > > > Gilles J. Seguin wrote: > > > >> On Fri, 2004-07-30 at 22:09, Stefan Seefeld wrote: > >> > >>> Thanks. In fact, I'm working on the C++ parser right now, > >>> to be able to expose a much more rich AST to python to allow > >>> real code inspection and following that code generation. > >>> Drop in if you have ideas or even want to help :-) > >> > >> > >> > >> Hello Stefan, > >> > >> First a little introduction. > >> I have offered Grzegorz Jakacki, from the opencxx project > >> to update the configuration of opencxx. That step is done. > >> The change are visible on the branch rel_2_8. > >> Grzegorz is targetting third week of august for releasing 2.8 . > > > > > > yeah, I know :-) > > > >> Next project in opencxx, increase test coverage. > >> Any idea are welcome. > > > > > > I'm doing that inside synopsis right now, i.e. I'm adding tests > > to the opencxx backend inside synopsis. My tests are all organized > > using the qmtest framework (www.qmtest.com), and I offered Grzegorz > > to put something similar into place for his project. > > Just to clarify: I have never rejected it and this is still open. I > tried to understand how Synopsis uses qmtest, but for misc. reasons I > failed. Tests for the occ binary on the PATH environment variable <http://www.infonet.ca/segg/occ.tgz> will unpack in occ directory. $ cd occ $ qmtest run # will run everything. $ qmtest run parse # will do the same as 'qmtest run' since we have only one directory. $ qmtest run parse.access1 # will run only one test. The tests examine results from 'occ -p -s myfile.cc' > Stefan, if you have running examples that do qmtesting of > OpenC++, I would love to have a look. And generally, could you post > instructions how to check out your OpenC++ "branch" from Synopsis repo? 
> > And last but not least: I think that the first thing that should be > established if we get down to increasing coverage is a mechanism to > measure line coverage conveniently (using e.g. gcov). Otherwise speaking > about increasing coverage is a little bit hand waving. If any of you > guys want to take it, please do, I will be busy with other things at > least till the end of August (samples, 2.8, merging to main, docs). > > > That being said, I still hope (as you may know) that we can at some > > point merge both projects. Right now I'm doing most work on synopsis, > > offering from time to time to merge back fixes into the OpenC++ project. > > This is certainly suboptimal. However, I don't see any straightforward > solution at the moment. I would suggest that we go through options > again, so that we can have more convenient setup, say, in mid-September. > > BR > Grzegorz > > >> The main point of this email, how can I see the work that > >> you are doing on the C++ parser. > >> I have done 'svn checkout', so I have revision 1332. > > > > > > I'm just starting my work on the backend. My plans are (in that order): > > > > * rewrite some low level classes (Buffer, Lexer, basically) > > > > * make the existing API more clean (const-correct, more typed, etc.) > > > > * try to understand how the Parser needs to be changed in order to > > correctly understand the tokens, i.e. build a correct parse tree / AST > > (see recent posts on the opencxx list). > > > > * build a true high-level AST API on top (no more Car() / Cdr() calls !) > > > > * export that AST API publicly through C++ as well as python > > > > * get rid of the occ executable, provide a python script instead (for > > backward compatibility only) > > > > * solidify the C++ / python APIs to let users build their domain-specific > > applications either as python scripts or C++ programs > > > > I realize this is a lot of work. I'm not sure what the timeframe is to get > > all of this done. 
I've almost finished the first point (the Buffer is > > committed, > > the Lexer will follow one of these days). > > If you are interested in getting involved, let me know. > > Kind regards, > > Stefan |
From: <se...@in...> - 2004-08-10 13:32:34
|
Grzegorz, Can you verify this? The function is called second() but returns the third element of the list. get_body() would have been a better name. Since the results from qmtest are hardcoded, $ qmtest -o myresults.qmr run # little 'o' on future tests do $ qmtest -O myresults.qmr run # uppercase 'O' diff -u -r1.2.2.1 PtreeBrace.cc --- PtreeBrace.cc 8 Jul 2004 11:57:13 -0000 1.2.2.1 +++ PtreeBrace.cc 10 Aug 2004 13:13:03 -0000 @@ -59,10 +59,10 @@ Ptree* Second(Ptree* p) { if (p) { - p=p->Cdr(); - if (p) { + //p=p->Cdr(); + //if (p) { return p->Cdr(); - } + //} } return p; } |
From: Stefan S. <se...@sy...> - 2004-08-10 11:53:49
|
Grzegorz Jakacki wrote: > Just to clarify: I have never rejected it and this is still open. I > tried to understand how Synopsis uses qmtest, but for misc. reasons I > failed. Stefan, if you have running examples that do qmtesting of > OpenC++, I would love to have a look. And generally, could you post > instructions how to check out your OpenC++ "branch" from Synopsis repo? There are no OpenC++ tests checked in yet, I hope to check in something useful this week. I did write some code to test the Buffer, Lexer, Parser, and Encoding classes, though. As to the code in general, just download a Synopsis snapshot from http://synopsis.fresco.org/download/, the OpenC++ stuff is in Synopsis/Parsers/Cxx (may be a bit unstable these days as I'm heavily refactoring / rewriting). > And last but not least: I think that the first thing that should be > established if we get down to increasing coverage is a mechanism to > measure line coverage conveniently (using e.g. gcov). Otherwise speaking > about increasing coverage is a little bit hand waving. If any of you > guys want to take it, please do, I will be busy with other things at > least till the end of August (samples, 2.8, merging to main, docs). I don't quite agree. There are currently no unit tests, so 'coverage' could just mean to write them starting with the most central classes first (Buffer, Lexer, PTree classes, Encoding, Environment, Parser). Regards, Stefan |
From: Stefan S. <se...@sy...> - 2004-08-10 11:40:02
|
Grzegorz Jakacki wrote: > One factor that makes this pattern messy in OpenC++ is that Walker > (ClassWalker, strictly speaking) performs not only context-dependent > analysis, but also source-to-source translation. yeah, and this is indeed where my confusion started. It would already be less confusing if there were multiple Walkers, one to finish the parsing, and others to do the mop stuff. >> Now, I believe it would be important to try to move this to a one-step >> parse, i.e. integrate the Environment building into the parser such that >> the parser can correctly deal with corner cases that used to be >> interpreted >> incorrectly. > > > This would be great, but I am afraid it is impossible. Consider: > > class A > { > void f() { g(); } > void g() {}; > }; > > So we are doomed to perform some form of two-pass analysis, in > particular binding cannot be performed together with parsing. Hmm, you are right. I'll have to look again into the gcc sources to see how they do it... > Apart from that, I still oppose integrating too much into the parser. I > suggest a functionally equivalent solution of making higher-level services > (e.g. environment creation, identifier lookup) available to the parser > through an abstract interface. That's what I'm thinking about indeed. The parser would have a reference to a 'SymbolTable', and each time it enters a scope it calls table->new_scope(), each time it runs into a declaration it pushes it into the current scope, etc. I'm still considering what it takes to make occ accept C code. The lexer is easy, as we can at runtime mask some keywords (i.e. report them as identifiers). The parser may be relatively easy, too, as we only have to exclude a number of productions and maybe add one or two (C is almost a subset of C++). The SymbolTable finally would need some flexibility, too, as name lookup doesn't follow the same rules (well, there are far fewer rules to start with :-) Regards, Stefan |
From: Grzegorz J. <ja...@ac...> - 2004-08-10 09:40:51
|
Stefan Seefeld wrote: > Grzegorz Jakacki wrote: > >> Hi, >> >> I checked in my attempt at reverse-engineering docs of OpenC++ >> architecture on the class level. The document is incomplete and most >> likely in flux, since the architecture is changing now, but at least it >> describes the status quo. Please review, fix, give feedback. >> >> The docs are in opencxx/doc/architecture.html > > > wouldn't it be a good idea to publish these documents on the website ? I am putting it on my TODO list. However, anybody with developer's access is capable of doing it too, so if anybody feels like it, please do (check out module > I'm working my way through the code as I'm cleaning up the 'synopsis > branch'. > Also, our latest discussions with Chiba have helped me a lot to understand > the current approach... > > Something that has to be discussed quite in detail for anybody to > understand > the design of the parser (notably the Walker stuff) is the two-stage > parsing > that is done, i.e. the Parser building up a preliminary parse tree and then > the Walker running over it creating the various 'Environments'. (Up to our > discussion a while ago I just couldn't figure out why the Walker modified > the ptree it was traversing. Now I see and understand that it needs to, in order to > fill in the things the parser's first pass couldn't figure out yet.) > A simple example would help to illustrate this: > > ----- > struct Foo {}; > void bar(const Foo &); > ----- > > creating a parse / syntax tree for this using occ is done in two steps: > > * the first builds a preliminary ptree via Lexer -> Parser > * the second refines the ptree using the Walker to build an 'Environment' > containing the declaration of 'Foo', which is needed to properly encode > the name/type of 'bar' (in the first pass the parser has to *guess* some > details about the function arguments, as no Environment object is > available > for it to inspect). > > I hope this makes some sense... 
This is the classical way of doing analysis in compilation, see "Compilers: Principles, Techniques and Tools" by Aho, Sethi and Ullman, chapter 1.2. I don't have the English version at hand, so I cannot quote, sorry. The only catch is that the C++ grammar (as per the ISO Standard) is not context-free, so the AST resulting from parsing is indeed an AST with "gray areas", which are clarified later during context-dependent analysis. One factor that makes this pattern messy in OpenC++ is that Walker (ClassWalker, strictly speaking) performs not only context-dependent analysis, but also source-to-source translation. > Now, I believe it would be important to try to move this to a one-step > parse, i.e. integrate the Environment building into the parser such that > the parser can correctly deal with corner cases that used to be interpreted > incorrectly. This would be great, but I am afraid it is impossible. Consider: class A { void f() { g(); } void g() {}; }; So we are doomed to perform some form of two-pass analysis, in particular binding cannot be performed together with parsing. Apart from that, I still oppose integrating too much into the parser. I suggest a functionally equivalent solution of making higher-level services (e.g. environment creation, identifier lookup) available to the parser through an abstract interface. That way the parser is not coupled to the rest of the code. (Perhaps this is what you meant when you wrote about "integrating the Environment building into parser" and my comment is unnecessary, but I wanted to be certain we have closure on that.) Best regards Grzegorz > Regards, > Stefan |
From: Grzegorz J. <ja...@ac...> - 2004-08-10 09:02:58
|
Stefan Seefeld wrote: > Hi Gilles, > > Gilles J. Seguin wrote: > >> On Fri, 2004-07-30 at 22:09, Stefan Seefeld wrote: >> >>> Thanks. In fact, I'm working on the C++ parser right now, >>> to be able to expose a much more rich AST to python to allow >>> real code inspection and following that code generation. >>> Drop in if you have ideas or even want to help :-) >> >> >> >> Hello Stefan, >> >> First a little introduction. >> I have offered Grzegorz Jakacki, from the opencxx project >> to update the configuration of opencxx. That step is done. >> The changes are visible on the branch rel_2_8. >> Grzegorz is targeting the third week of August for releasing 2.8 . > > > yeah, I know :-) > >> Next project in opencxx, increase test coverage. >> Any ideas are welcome. > > > I'm doing that inside synopsis right now, i.e. I'm adding tests > to the opencxx backend inside synopsis. My tests are all organized > using the qmtest framework (www.qmtest.com), and I offered Grzegorz > to put something similar into place for his project. Just to clarify: I have never rejected it and this is still open. I tried to understand how Synopsis uses qmtest, but for misc. reasons I failed. Stefan, if you have running examples that do qmtesting of OpenC++, I would love to have a look. And generally, could you post instructions on how to check out your OpenC++ "branch" from the Synopsis repo? And last but not least: I think that the first thing that should be established if we get down to increasing coverage is a mechanism to measure line coverage conveniently (using e.g. gcov). Otherwise speaking about increasing coverage is a little bit hand waving. If any of you guys want to take it, please do, I will be busy with other things at least till the end of August (samples, 2.8, merging to main, docs). > That being said, I still hope (as you may know) that we can at some > point merge both projects. 
Right now I'm doing most work on synopsis, > offering from time to time to merge back fixes into the OpenC++ project. This is certainly suboptimal. However, I don't see any straightforward solution at the moment. I would suggest that we go through options again, so that we can have a more convenient setup, say, in mid-September. BR Grzegorz >> The main point of this email: how can I see the work that >> you are doing on the C++ parser. >> I have done 'svn checkout', so I have revision 1332. > > > I'm just starting my work on the backend. My plans are (in that order): > > * rewrite some low level classes (Buffer, Lexer, basically) > > * make the existing API more clean (const-correct, more typed, etc.) > > * try to understand how the Parser needs to be changed in order to > correctly understand the tokens, i.e. build a correct parse tree / AST > (see recent posts on the opencxx list). > > * build a true high-level AST API on top (no more Car() / Cdr() calls !) > > * export that AST API publicly through C++ as well as python > > * get rid of the occ executable, provide a python script instead (for > backward compatibility only) > > * solidify the C++ / python APIs to let users build their domain-specific > applications either as python scripts or C++ programs > > I realize this is a lot of work. I'm not sure what the timeframe is to get > all of this done. 
I've almost finished the first point (the Buffer is > > committed, > the Lexer will follow one of these days). > If you are interested in getting involved, let me know. > > Kind regards, > Stefan |