cedet-devel Mailing List for CEDET (Page 276)
From: Berndl, K. <kla...@sd...> - 2003-01-29 16:31:11
Hi,

I have checked in a new c.bnf with some new parsing features and some fixes (see below), a newly generated semantic-c.el, and also a new tests/test.cpp which tests the latest fixes. wchar_t and string sequences like "str" "str" now work, but the L"string" and L'c' forms still fail... Hmm, maybe Eric could fix this best?

> You have added many additional capabilities to the parser now. Are
> you getting the correct types of output in ECB? Occasionally in the
> past I've added support for a stray & or const in type definitions for
> variables. Later, when one of the token->text functions is called,
> it shows up in the wrong place. For C this has been one of the more
> challenging aspects of translating C into a token, then back out
> again. When last I ran tests, it was pretty good, but missing the
> finer points of * and & placement. With all your new template and
> const work it would be good to assess the situation.

Done! You were right, Eric: in the function arguments, correct pointer counting and const recognition were disrupted by my latest changes. This means parsing was OK, but the output was wrong. I have now fixed all this and it works again. All the output from the new parsing capabilities is correct, at least for the cases I have tested ;-)

> You can easily check what the outputs are with the command
> `semantic-test-all-token->text-functions' while the cursor is on a
> definition in your C code.

Yes, this is a very helpful command for this stuff...

Ciao,
Klaus
From: David P. <Dav...@wa...> - 2003-01-29 14:51:06
[...]
>> I can check these changes in, if no objection.
>
> That looks good to me.

Done.

> Don't forget that you can modify an EDE project via custom with the
> command ede-customize-project or ede-customize-current-target.
>
> If I recall, there are a couple of cases where it fails, but it works
> great for these little modifications.

Thanks for the tip! I need to study EDE a little bit more ;-)

>>> EL> I also checked in the following semantic files:
>>> EL>
>>> EL> semantic-c.el: support #elif
>>> EL>
>>> EL> This may need porting back to v1p4
>>
>> OK, I will do that.

Done.

David

P.S. I also removed the obsolete semantic-mode.el file from CVS.
From: Eric M. L. <er...@si...> - 2003-01-29 14:18:55
>>> "David PONCE" <Dav...@wa...> seems to think that:
[ ... ]
>> I can run "make" in the semantic directory successfully after
>> updating the new Project.ede from the CVS db, running
>> ede-proj-regenerate, then deleting the mention of
>> semanticdb-obj.el from the resulting Makefile.
>
> Same here.
>
> I also removed the ":partofall 'nil" option from the "doc" target of
> wisent/Project.ede, so wisent.info files are built too (like
> semantic.info is).
>
> Finally, in wisent/Project.ede I switched the targets "languages" and
> "wisent" to compile the core wisent library first. So the compilation of
> LALR automatons will be done by (faster) byte-compiled code.
>
> Attached you will find a global patch for the new Project.ede and
> generated Makefiles.
>
> I checked the "make" process and all seems fine :-)
>
> I can check these changes in, if no objection.

That looks good to me.

Don't forget that you can modify an EDE project via custom with the command ede-customize-project or ede-customize-current-target.

If I recall, there are a couple of cases where it fails, but it works great for these little modifications.

>> EL> I also checked in the following semantic files:
>> EL>
>> EL> semantic-c.el: support #elif
>> EL>
>> EL> This may need porting back to v1p4
>
> OK I will do that.

Thanks
Eric

-- 
Eric Ludlam: za...@gn..., er...@si...
Home: http://www.ludlam.net            Siege: www.siege-engine.com
Emacs: http://cedet.sourceforge.net    GNU: www.gnu.org
From: Eric M. L. <er...@si...> - 2003-01-29 14:13:46
The L"wide string" notation should probably be handled in the lexical analyzer. This could be an easy piece in semantic 2, but in 1.4 you need to modify semantic-c-flex-extensions with something like this:

    "L\\s\""

which would then need the existing string logic hidden in the lexer to finish the task.

As a rule, you could define a token L, and make a string rule that had an optional L in front, but I'm not sure if it would mess up if a user tried to define a function or variable named L. For example, the token FLOAT "float" confused the situation where the user wrote:

    #include <float.h>

which required some goofy stuff in c.bnf.

Eric

>>> "Berndl, Klaus" <kla...@sd...> seems to think that:
> Hi,
>
> see below the posting that arrived yesterday. I have already fixed c.bnf of
> v1p4 so now it can parse sequences of strings like "klaus" "berndl"
> correctly and also the builtin type wchar_t (BTW: Is there also a wint_t type?).
>
> But what would be the best way to parse L"a wide-string" and L'a'
> (a wide-char literal) correctly? Could this be done with c.bnf (my first tries
> had no success), or should this be done with semantic-flex-extensions, or what
> other ways exist?
>
> Thanks for help,
> Klaus
>
> -----Original Message-----
> From: Markus Schöpflin [mailto:mar...@gi...]
> Sent: Tuesday, January 28, 2003 11:52 AM
> To: ced...@li...
> Subject: [cedet-semantic] Parsing of C string constants
>
> Currently, C string constants are not always parsed correctly. Here is
> an example.
>
>     char const *p = "";           // ok
>     char const *q = "" "";        // not ok
>
>     wchar_t const *wp = L"";      // not ok
>     wchar_t const *wq = L"" L"";  // not ok
>
> All examples are legal according to the C++ standard.
>
> Here is the definition from the standard:
>
>     string-literal:
>         "s-char-sequence(opt)"
>         L"s-char-sequence(opt)"
>
>     s-char-sequence:
>         s-char
>         s-char-sequence s-char
>
>     s-char:
>         any member of the source character set except the double quote ",
>           backslash \, or newline character
>         escape-sequence
>         universal-character-name
>
> 2.3.14(3): In translation phase 6 (2.1), adjacent narrow string
> literals are concatenated and adjacent wide string literals
> are concatenated. [...]
>
> 2.3.14(5): Escape sequences and universal-character-names in string
> literals have the same meaning as in character literals (2.13.2),
> except that the single quote ' is representable either by itself or by
> the escape sequence \', and the double quote " shall be preceded by a
> \. [...]
>
> A universal character name is defined as:
>
>     hex-quad:
>         hexadecimal-digit hexadecimal-digit hexadecimal-digit hexadecimal-digit
>         (four hex digits in a row)
>
>     universal-character-name:
>         \u hex-quad
>         \U hex-quad hex-quad
>
> Escape sequences are defined in the usual way; there are \n, \r, ...,
> octal escape sequences (like \123, length from 1 to 3) and hex
> sequences (like \xabcd, length from 1 to unlimited).
>
> HTH, Markus

[ ... ]

-- 
Eric Ludlam: za...@gn..., eric@siege-engine.com
Home: http://www.ludlam.net            Siege: www.siege-engine.com
Emacs: http://cedet.sourceforge.net    GNU: www.gnu.org
From: David P. <Dav...@wa...> - 2003-01-29 13:22:46
Hi Richard,

> I would like to ask your help in figuring out why the python grammar does
> not seem to work as I expected. I include the full copy of
> wisent-python.wy at the end, but here are the three NTs of interest.
[...]
>
> This is my attempt at adding function arguments to 'function tokens.
> As you can see, the code above is very similar to what you have in
> wisent-java-tag.wy. The last NT, function_parameters, is a very
> simplified version since it only matches the NAME terminal.
[...]
> The problem I can't solve now is that if I add
>
>     %start function_parameters
>
> then rebuild the tables and re-bovinate, then no token is generated
> from the same input lines! It seems like adding this one line
> breaks the parser. What is going on?

In fact, the result of an EXPANDFULL clause nonterminal must be a valid semantic token; otherwise the iterative parser just returns nil. As your NAME rule just returns a string (the value of NAME), it is not considered a valid token and is ignored.

After some minor changes in your grammar I managed to successfully parse your small example below:

    def ttt(x):
        i = 1

That produced this parse tree:

    (("ttt" function nil
      (("x" variable nil nil nil nil
        ((reparse-symbol . function_parameters))
        #<overlay from 9 to 10 in test.py>))
      nil #<overlay from 1 to 23 in test.py>))

I attached the new grammar. Using diff you will easily see what has changed. In particular, I had to remove the NEWLINE following compound_stmt in the goal nonterminal, as it appeared to me that the lexical stream never terminates with a NEWLINE lexical token!

Hope it helps. Thanks for working on that!

David
From: Eric M. L. <er...@si...> - 2003-01-29 13:07:43
Hi Klaus,

You certainly have a dilemma. It is exactly this problem that made me attempt to make the new incremental parser more robust. I would also someday like to make the full reparse go back and synchronize against old overlays and cons cells.

Anyway, what you need to do here is, whenever you "cache" a token, make a marker object and position it at the token start:

    (let ((m (make-marker)))
      (move-marker m (semantic-token-start T))
      ...)

When you need to get back to the old token, you can do this:

    (save-excursion
      (goto-char m)
      (semantic-current-nonterminal))

and this will work perhaps 90% of the time. It will not work if T was deleted, or cut from one location to another. To protect against that, you need to do name compares or something similar.

Good Luck
Eric

>>> "Berndl, Klaus" <kla...@sd...> seems to think that:
> Hi guys,
>
> Suppose semantic 1.4.2.
>
> Further suppose that I have a semantic token T (e.g. a function token of an
> elisp file F). T contains all the information of a function token and therefore,
> among other things, a valid overlay.
>
> Now suppose that I store this token in a cache of my own. Then a full reparse
> is done (either automatically or manually triggered) because I have changed something else
> in file F; the elisp code to which token T belongs (i.e. the function) is unchanged.
> But after the full reparse, the overlay of the token T stored in my own cache (I need it
> for certain purposes, at least the overlay information of the tokens) is invalid.
>
> (Eric, you remember we discussed a related topic a long time ago?)
>
> OK, it is understandable that after a reparse the old overlays are invalid, because
> they are replaced by the new ones (generated by the bovination/parsing run).
>
> But now I have a problem: I want to use the overlay information of my cached token T
> (I need overlay-buffer and overlay-start/end), but the overlay of T is now invalid because
> of the reparse, and there is a new token T' with a new valid overlay.
>
> Now I'm wondering how to resync my cache with the new tokens (OK, I could throw away
> the whole cache after a reparse, but this would be bad because this cache stores a
> "navigation" history, similar to the back and next buttons of a web browser :-( ),
> or at least resync it with the new overlays of the tokens.
>
> But for this I must recognize that the newly generated token T' is equivalent to the old
> token T (equivalent in the sense that it is the token of the same unchanged function, has
> the same argument list, etc.), but I cannot compare T and T', for example with
> semantic-equivalent-tokens-p, because this function needs the overlays for comparison
> and, remember, token T doesn't contain a valid overlay!
>
> To make a long story short: are there any sensible or practical ways to check for token
> equivalence if one of the tokens contains an INVALID overlay? AFAICS name and type are
> not enough because, for example, in C++ files there can be a method declaration in the
> class and also the method implementation outside the class in the same file... hmm, this
> could be checked with the adopted attribute....
>
> OK, anyway, thoughts? Or was my problem not described understandably?
>
> Thanks,
> Klaus

-- 
Eric Ludlam: za...@gn..., er...@si...
Home: http://www.ludlam.net            Siege: www.siege-engine.com
Emacs: http://cedet.sourceforge.net    GNU: www.gnu.org
From: Berndl, K. <kla...@sd...> - 2003-01-29 12:48:31
Hi,

See below the posting that arrived yesterday. I have already fixed c.bnf of
v1p4 so now it can parse sequences of strings like "klaus" "berndl"
correctly and also the builtin type wchar_t (BTW: Is there also a wint_t type?).

But what would be the best way to parse L"a wide-string" and L'a'
(a wide-char literal) correctly? Could this be done with c.bnf (my first tries
had no success), or should this be done with semantic-flex-extensions, or what
other ways exist?

Thanks for help,
Klaus

-----Original Message-----
From: Markus Schöpflin [mailto:mar...@gi...]
Sent: Tuesday, January 28, 2003 11:52 AM
To: ced...@li...
Subject: [cedet-semantic] Parsing of C string constants

Currently, C string constants are not always parsed correctly. Here is
an example.

    char const *p = "";           // ok
    char const *q = "" "";        // not ok

    wchar_t const *wp = L"";      // not ok
    wchar_t const *wq = L"" L"";  // not ok

All examples are legal according to the C++ standard.

Here is the definition from the standard:

    string-literal:
        "s-char-sequence(opt)"
        L"s-char-sequence(opt)"

    s-char-sequence:
        s-char
        s-char-sequence s-char

    s-char:
        any member of the source character set except the double quote ",
          backslash \, or newline character
        escape-sequence
        universal-character-name

2.3.14(3): In translation phase 6 (2.1), adjacent narrow string
literals are concatenated and adjacent wide string literals
are concatenated. [...]

2.3.14(5): Escape sequences and universal-character-names in string
literals have the same meaning as in character literals (2.13.2),
except that the single quote ' is representable either by itself or by
the escape sequence \', and the double quote " shall be preceded by a
\. [...]

A universal character name is defined as:

    hex-quad:
        hexadecimal-digit hexadecimal-digit hexadecimal-digit hexadecimal-digit
        (four hex digits in a row)

    universal-character-name:
        \u hex-quad
        \U hex-quad hex-quad

Escape sequences are defined in the usual way; there are \n, \r, ...,
octal escape sequences (like \123, length from 1 to 3) and hex
sequences (like \xabcd, length from 1 to unlimited).

HTH, Markus
From: Eric M. L. <er...@si...> - 2003-01-29 12:34:01
Aha, one of my many incomplete semanticdb projects. That can be safely removed from the project file.

My work in progress is an attempt to map a semanticdb database on top of a .so file. Many .so files with debug info (STABS) can be dumped with all the info a debugger needs, which could prove useful as a source of API calls. I was hoping that one subclass would deal with .so files, and another with .class or .jar files. In the C case, header files are probably better. In the java case, I think the .jar file is the equivalent of a C header file, yes?

In the java case, I think javap could be used to extract the information, and if not, a custom java file that used introspection, like the beanshell in JDE, could be used to query the class file directly, mapping standard semanticdb-search.el functions directly to code in the introspection program.

Eric

>>> ry...@ds... seems to think that:
> Hi Eric,
>
> It seems like the mention of semanticdb-obj.el needs to be
> deleted from semantic/Project.ede, because I can't find that
> file anywhere.
>
> I can run "make" in the semantic directory successfully after
> updating the new Project.ede from the CVS db, running
> ede-proj-regenerate, then deleting the mention of
> semanticdb-obj.el from the resulting Makefile.
>
>>>>>> "EL" == Eric M Ludlam <er...@si...> writes:
> EL> Hi,
> EL> As David pointed out, the Makefiles checked into semantic could not
> EL> be regenerated via EDE. To my dismay I found a plethora of changes
> EL> waiting to be checked in, so I did. Anyone who gets the new EDE files
> EL> should be able to rebuild the Makefiles in semantic in a way that
> EL> they will come close to what is in CVS. I'm getting clean builds on
> EL> both EDE and Semantic now.
> EL>
> EL> I think the Project/Makefile may need to be double checked. I'll
> EL> look closer when I get a chance.
> EL>
> EL> I also checked in the following semantic files:
> EL>
> EL> semantic-c.el: support #elif
> EL>
> EL> This may need porting back to v1p4
> EL>
> EL> semantic-el.el: support defvar-mode-local
> EL>
> EL> This is a nifty change that makes any language support file (like
> EL> semantic-el.el, for instance) look cool in speedbar, treating the mode
> EL> much like a class with externally defined methods in C++.
> EL>
> EL> Despite David's request, I did not check in newly regenerated
> EL> Makefiles. I'm hoping for confirmation from others that I didn't
> EL> really mess something up in EDE first.
> EL>
> EL> Enjoy
> EL> Eric

-- 
Eric Ludlam: za...@gn..., er...@si...
Home: http://www.ludlam.net            Siege: www.siege-engine.com
Emacs: http://cedet.sourceforge.net    GNU: www.gnu.org
From: Berndl, K. <kla...@sd...> - 2003-01-29 11:23:27
Hi guys,

Suppose semantic 1.4.2.

Further suppose that I have a semantic token T (e.g. a function token of an
elisp file F). T contains all the information of a function token and therefore,
among other things, a valid overlay.

Now suppose that I store this token in a cache of my own. Then a full reparse
is done (either automatically or manually triggered) because I have changed something else
in file F; the elisp code to which token T belongs (i.e. the function) is unchanged.
But after the full reparse, the overlay of the token T stored in my own cache (I need it
for certain purposes, at least the overlay information of the tokens) is invalid.

(Eric, you remember we discussed a related topic a long time ago?)

OK, it is understandable that after a reparse the old overlays are invalid, because
they are replaced by the new ones (generated by the bovination/parsing run).

But now I have a problem: I want to use the overlay information of my cached token T
(I need overlay-buffer and overlay-start/end), but the overlay of T is now invalid because
of the reparse, and there is a new token T' with a new valid overlay.

Now I'm wondering how to resync my cache with the new tokens (OK, I could throw away
the whole cache after a reparse, but this would be bad because this cache stores a
"navigation" history, similar to the back and next buttons of a web browser :-( ),
or at least resync it with the new overlays of the tokens.

But for this I must recognize that the newly generated token T' is equivalent to the old
token T (equivalent in the sense that it is the token of the same unchanged function, has
the same argument list, etc.), but I cannot compare T and T', for example with
semantic-equivalent-tokens-p, because this function needs the overlays for comparison
and, remember, token T doesn't contain a valid overlay!

To make a long story short: are there any sensible or practical ways to check for token
equivalence if one of the tokens contains an INVALID overlay? AFAICS name and type are
not enough because, for example, in C++ files there can be a method declaration in the
class and also the method implementation outside the class in the same file... hmm, this
could be checked with the adopted attribute....

OK, anyway, thoughts? Or was my problem not described understandably?

Thanks,
Klaus
From: David P. <Dav...@wa...> - 2003-01-29 10:35:50
Hi Klaus,

> Yes, indeed, but wait for a moment; I want to take a look at the string-parsing
> stuff sent yesterday to the semantic mailing list... maybe this can be fixed quickly...

There is no hurry ;-) I am waiting for your GO...

Thanks!
David
From: Berndl, K. <kla...@sd...> - 2003-01-29 10:15:20
>> If I am not too busy, I will put a new distrib. on SF this week!
>> Eric, do you agree with that?
[ ... ]
> If you have time, this would be great.

Yes, indeed, but wait for a moment; I want to take a look at the string-parsing
stuff sent yesterday to the semantic mailing list... maybe this can be fixed quickly...

Ciao,
Klaus
From: David P. <Dav...@wa...> - 2003-01-29 08:55:51
Hi Eric & Richard,

> EL> Hi,
> EL> As David pointed out, the Makefiles checked into semantic
> EL> could not be regenerated via EDE. To my dismay I found a
> EL> plethora of changes waiting to be checked in, so I did.
> EL> Anyone who gets the new EDE files should be able to rebuild
> EL> the Makefiles in semantic in a way that they will come close
> EL> to what is in CVS. I'm getting clean builds on both EDE and
> EL> Semantic now.
> EL>
> EL> I think the Project/Makefile may need to be double checked.
> EL> I'll look closer when I get a chance.

Thanks Eric!

> It seems like the mention of semanticdb-obj.el needs to be
> deleted from semantic/Project.ede, because I can't find that
> file anywhere.
>
> I can run "make" in the semantic directory successfully after
> updating the new Project.ede from the CVS db, running
> ede-proj-regenerate, then deleting the mention of
> semanticdb-obj.el from the resulting Makefile.

Same here.

I also removed the ":partofall 'nil" option from the "doc" target of
wisent/Project.ede, so wisent.info files are built too (like
semantic.info is).

Finally, in wisent/Project.ede I switched the targets "languages" and
"wisent" to compile the core wisent library first. So the compilation of
LALR automatons will be done by (faster) byte-compiled code.

Attached you will find a global patch for the new Project.ede and
generated Makefiles.

I checked the "make" process and all seems fine :-)

I can check these changes in, if no objection.

> EL> I also checked in the following semantic files:
> EL>
> EL> semantic-c.el: support #elif
> EL>
> EL> This may need porting back to v1p4

OK, I will do that.

> EL> semantic-el.el: support defvar-mode-local
> EL>
> EL> This is a nifty change that makes any language support file
> EL> (like semantic-el.el, for instance) look cool in speedbar,
> EL> treating the mode much like a class with externally defined
> EL> methods in C++.

I tried it, and it is a really cool feature. I like it! Thanks!

David
From: <ry...@ds...> - 2003-01-29 06:01:47
Hi David,

I would like to ask your help in figuring out why the python grammar does
not seem to work as I expected. I include the full copy of
wisent-python.wy at the end, but here are the three NTs of interest.

    ;; funcdef: 'def' NAME parameters ':' suite
    funcdef
      : DEF NAME function_parameter_list COLON suite
        (wisent-token $2 'function nil $3)
      ;

    function_parameter_list
      : PAREN_BLOCK
        (EXPANDFULL $1 function_parameters)
      ;

    ;; parameters: '(' [varargslist] ')'
    function_parameters
      : LPAREN
        ()
      | RPAREN
        ()
      | NAME
      ;

This is my attempt at adding function arguments to 'function tokens.
As you can see, the code above is very similar to what you have in
wisent-java-tag.wy. The last NT, function_parameters, is a very
simplified version since it only matches the NAME terminal.

The following is from the top of test.py:

    def ttt(x):
        i = 1

which results in the following token:

    ("ttt" function nil
     (("expr" code nil nil
       ((reparse-symbol . function_parameters))
       #<overlay from 48 to 49 in test.py>))
     nil #<overlay from 40 to 216 in test.py>)

The overlay fences the "x" character. The token is not quite right yet,
because what I want is something like:

    ("ttt" function nil ("x") nil #<overlay from 40 to 216 in test.py>)

The problem I can't solve now is that if I add

    %start function_parameters

then rebuild the tables and re-bovinate, then no token is generated
from the same input lines! It seems like adding this one line
breaks the parser. What is going on?

;;; wisent-python.wy -- LALR grammar for Python
;;
;; Copyright (C) 2002 Richard Kim
;;
;; Author: Richard Kim <ry...@ds...>
;; Maintainer: Richard Kim <ry...@ds...>
;; Created: June 2002
;; Keywords: syntax
;; X-RCS: $Id: wisent-python.wy,v 1.18 2003/01/25 06:13:44 emacsman Exp $
;;
;; This file is not part of GNU Emacs.
;;
;; This program is free software; you can redistribute it and/or
;; modify it under the terms of the GNU General Public License as
;; published by the Free Software Foundation; either version 2, or (at
;; your option) any later version.
;;
;; This software is distributed in the hope that it will be useful,
;; but WITHOUT ANY WARRANTY; without even the implied warranty of
;; MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
;; General Public License for more details.
;;
;; You should have received a copy of the GNU General Public License
;; along with GNU Emacs; see the file COPYING.  If not, write to the
;; Free Software Foundation, Inc., 59 Temple Place - Suite 330,
;; Boston, MA 02111-1307, USA.

;;; Commentary:
;;
;; See comments in wisent-python.el.

;; --------
;; Settings
;; --------
%outputfile    "wisent-python.el"
%parsetable    wisent-python-parser-tables
%keywordtable  wisent-python-keywords
%tokentable    wisent-python-tokens
%languagemode  python-mode
%setupfunction wisent-python-default-setup

%start goal

;; Customize `wisent-flex' match algorithm
;; - Use string comparison for:
;; %put {open-paren close-paren symbol} string t

;; Customize `wisent-flex' match algorithm
;; - Use string comparison for:
;; - An operator can be made of multiple successive punctuations
;; %put punctuation {string t multiple t}

%{
(setq
 ;; Character used to separate a parent/child relationship
 semantic-type-relation-separator-character '(".")
 semantic-command-separation-character ";"
 ;; Init indentation stack
 wisent-python-lexer-indent-stack '(0)
 semantic-lex-analyzer #'semantic-python-lexer
 semantic-lex-depth 0
 )
%}

%token <newline> NEWLINE

;; ---------------------
;; Parenthesis terminals
;; ---------------------
%token <open-paren>  LPAREN "("
%token <close-paren> RPAREN ")"
%token <open-paren>  LBRACE "{"
%token <close-paren> RBRACE "}"
%token <open-paren>  LBRACK "["
%token <close-paren> RBRACK "]"

;; ---------------
;; Block terminals
;; ---------------
%token <semantic-list> PAREN_BLOCK "^("
%token <semantic-list> BRACE_BLOCK "^{"
%token <semantic-list> BRACK_BLOCK "^\\["

;; ------------------
;; Operator terminals
;; ------------------
%token <punctuation> LTLTEQ    "<<="
%token <punctuation> GTGTEQ    ">>="
%token <punctuation> EXPEQ     "**="
%token <punctuation> DIVDIVEQ  "//="
%token <punctuation> DIVDIV    "//"
%token <punctuation> LTLT      "<<"
%token <punctuation> GTGT      ">>"
%token <punctuation> EXPONENT  "**"
%token <punctuation> EQ        "=="
%token <punctuation> GE        ">="
%token <punctuation> LE        "<="
%token <punctuation> PLUSEQ    "+="
%token <punctuation> MINUSEQ   "-="
%token <punctuation> MULTEQ    "*="
%token <punctuation> DIVEQ     "/="
%token <punctuation> MODEQ     "%="
%token <punctuation> AMPEQ     "&="
%token <punctuation> OREQ      "|="
%token <punctuation> HATEQ     "^="
%token <punctuation> LTGT      "<>"
%token <punctuation> NE        "!="
%token <punctuation> HAT       "^"
%token <punctuation> LT        "<"
%token <punctuation> GT        ">"
%token <punctuation> AMP       "&"
%token <punctuation> MULT      "*"
%token <punctuation> DIV       "/"
%token <punctuation> MOD       "%"
%token <punctuation> PLUS      "+"
%token <punctuation> MINUS     "-"
%token <punctuation> PERIOD    "."
%token <punctuation> TILDE     "~"
%token <punctuation> BAR       "|"
%token <punctuation> COLON     ":"
%token <punctuation> SEMICOLON ";"
%token <punctuation> COMMA     ","
%token <punctuation> ASSIGN    "="
%token <punctuation> BACKQUOTE "`"
%token <charquote>   BACKSLASH "\\"
%put charquote string t

;; -----------------
;; Literal terminals
;; -----------------
%token <string> STRING_LITERAL
%token <number> NUMBER_LITERAL
%token <symbol> NAME
%token INDENT
%token DEDENT

;; -----------------
;; Keyword terminals
;; -----------------
%token AND "and"
%put AND summary "Logical AND binary operator ..."
%token ASSERT "assert"
%put ASSERT summary "Raise AssertionError exception if <expr> is false"
%token BREAK "break"
%put BREAK summary "Terminate 'for' or 'while' loop"
%token CLASS "class"
%put CLASS summary "Define a new class"
%token CONTINUE "continue"
%put CONTINUE summary "Skip to the next iteration of enclosing for or while loop"
%token DEF "def"
%put DEF summary "Define a new function"
%token DEL "del"
%put DEL summary "Delete specified objects, i.e., undo what assignment did"
%token ELIF "elif"
%put ELIF summary "Shorthand for 'else if' following an 'if' statement"
%token ELSE "else"
%put ELSE summary "Start the 'else' clause following an 'if' statement"
%token EXCEPT "except"
%put EXCEPT summary "Specify exception handlers along with 'try' keyword"
%token EXEC "exec"
%put EXEC summary "Dynamically execute python code"
%token FINALLY "finally"
%put FINALLY summary "Specify code to be executed after 'try' statements whether or not an exception occurred"
%token FOR "for"
%put FOR summary "Start a 'for' loop"
%token FROM "from"
%put FROM summary "Modify behavior of 'import' statement"
%token GLOBAL "global"
%put GLOBAL summary "Declare one or more symbols as global symbols"
%token IF "if"
%put IF summary "Start 'if' conditional statement"
%token IMPORT "import"
%put IMPORT summary "Load specified modules"
%token IN "in"
%put IN summary "Part of 'for' statement"
%token IS "is"
%put IS summary "Binary operator that tests for object equality"
%token LAMBDA "lambda"
%put LAMBDA summary "Create anonymous function"
%token NOT "not"
%put NOT summary "Unary boolean negation operator"
%token OR "or"
%put OR summary "Binary logical 'or' operator"
%token PASS "pass"
%put PASS summary "Statement that does nothing"
%token PRINT "print"
%put PRINT summary "Print each argument to standard output"
%token RAISE "raise"
%put RAISE summary "Raise an exception"
%token RETURN "return"
%put RETURN summary "Return from a function"
%token TRY "try"
%put TRY summary "Start of statements protected by exception handlers"
%token WHILE "while"
%put WHILE summary "Start a 'while' loop"
%token YIELD "yield"
%put YIELD summary "Create a generator function"

%%

;;;****************************************************************************
;;;@ goal
;;;****************************************************************************

# simple_stmt are statements that do not involve INDENT tokens
# compound_stmt are statements that involve INDENT tokens
goal
  : NEWLINE
  | simple_stmt
  | compound_stmt NEWLINE
    (identity $1)
  ;

;;;****************************************************************************
;;;@ simple_stmt
;;;****************************************************************************

;; simple_stmt: small_stmt (';' small_stmt)* [';'] NEWLINE
simple_stmt
  : small_stmt_list semicolon_opt NEWLINE
    (identity $1)
  ;

;; small_stmt (';' small_stmt)*
small_stmt_list
  : small_stmt
  | small_stmt_list SEMICOLON small_stmt
    (identity $1)
  ;

small_stmt
  : expr_stmt
  | print_stmt
  | del_stmt
  | pass_stmt
  | flow_stmt
  | import_stmt
  | global_stmt
  | exec_stmt
  | assert_stmt
  ;

;;;============================================================================
;;;@@ print_stmt
;;;============================================================================

;; print_stmt: 'print' [ test (',' test)* [','] ]
;;           | '>>' test [ (',' test)+ [','] ]
print_stmt
  : PRINT print_stmt_trailer
    (wisent-token $1 'code nil nil)
  ;

;; [ test (',' test)* [','] ] | '>>' test [ (',' test)+ [','] ]
print_stmt_trailer
  : test_list_opt
    (or $1 "")
  | GTGT test trailing_test_list_with_opt_comma_opt
    (identity $2)
  ;

;; [ (',' test)+ [','] ]
trailing_test_list_with_opt_comma_opt
  : ;;EMPTY
  | trailing_test_list comma_opt
    ()
  ;

;; (',' test)+
trailing_test_list
  : COMMA test
    ()
  | trailing_test_list COMMA test
    ()
  ;

;;;============================================================================
;;;@@ expr_stmt
;;;============================================================================

;; expr_stmt: testlist (augassign testlist | ('=' testlist)*)
expr_stmt
  : testlist expr_stmt_trailer
    (wisent-token "expr" 'code nil nil)
  ;

;; Could be EMPTY because of eq_testlist_zom.
;; (augassign testlist | ('=' testlist)*)
expr_stmt_trailer
  : augassign testlist
    ()
  | eq_testlist_zom
  ;

;; Could be EMPTY!
;; ('=' testlist)*
eq_testlist_zom
  : ;;EMPTY
  | eq_testlist_zom ASSIGN testlist
    ()
  ;

;; augassign: '+=' | '-=' | '*=' | '/=' | '%=' | '&=' | '|=' | '^='
;;          | '<<=' | '>>=' | '**=' | '//='
augassign
  : PLUSEQ | MINUSEQ | MULTEQ | DIVEQ | MODEQ | AMPEQ
  | OREQ | HATEQ | LTLTEQ | GTGTEQ | EXPEQ | DIVDIVEQ
  ;

;;;============================================================================
;;;@@ del_stmt
;;;============================================================================

;; del_stmt: 'del' exprlist
del_stmt
  : DEL exprlist
    (wisent-token $1 'code nil nil)
  ;

;; exprlist: expr (',' expr)* [',']
exprlist
  : expr_list comma_opt
    (identity $1)
  ;

;; expr (',' expr)*
expr_list
  : expr
  | expr_list COMMA expr
    (format "%s, %s" $1 $3)
  ;

;;;============================================================================
;;;@@ pass_stmt
;;;============================================================================

;; pass_stmt: 'pass'
pass_stmt
  : PASS
    (wisent-token $1 'code nil nil)
  ;

;;;============================================================================
;;;@@ flow_stmt
;;;============================================================================

flow_stmt
  : break_stmt
  | continue_stmt
  | return_stmt
  | raise_stmt
  | yield_stmt
  ;

;; break_stmt: 'break'
break_stmt
  : BREAK
    (wisent-token $1 'code nil nil)
  ;

;; continue_stmt: 'continue'
continue_stmt
  : CONTINUE
    (wisent-token $1 'code nil nil)
  ;

;; return_stmt: 'return' [testlist]
return_stmt
  : RETURN testlist_opt
    (wisent-token $1 'code nil nil)
  ;

;; [testlist]
testlist_opt
  : ;;EMPTY
  | testlist
  ;

;; yield_stmt: 'yield' testlist
yield_stmt
  : YIELD testlist
    (wisent-token $1 'code nil nil)
  ;

;; raise_stmt: 'raise' [test [',' test [',' test]]]
raise_stmt :
RAISE zero_one_two_or_three_tests (wisent-token $1 'code nil nil) ; ;; [test [',' test [',' test]]] zero_one_two_or_three_tests : ;;EMPTY (identity "") | test zero_one_or_two_tests (identity $1) ; ;; [',' test [',' test]] zero_one_or_two_tests : ;;EMPTY | COMMA test zero_or_one_comma_test () ; ;; [',' test] zero_or_one_comma_test : ;;EMPTY | COMMA test () ; ;;;============================================================================ ;;;@@ import_stmt ;;;============================================================================ ;; import_stmt : 'import' dotted_as_name (',' dotted_as_name)* ;; | 'from' dotted_name 'import' ;; ('*' | import_as_name (',' import_as_name)*) import_stmt : IMPORT dotted_as_name_list (wisent-token $2 'import nil nil) | FROM dotted_name IMPORT star_or_import_as_name_list (wisent-token $2 'import nil nil) ; ;; dotted_as_name (',' dotted_as_name)* dotted_as_name_list : dotted_as_name | dotted_as_name_list COMMA dotted_as_name (identity $1) ; ;; ('*' | import_as_name (',' import_as_name)*) star_or_import_as_name_list : MULT () | import_as_name_list () ; ;; import_as_name (',' import_as_name)* import_as_name_list : import_as_name () | import_as_name_list COMMA import_as_name () ; ;; import_as_name: NAME [NAME NAME] import_as_name : NAME name_name_opt () ; ;; dotted_as_name: dotted_name [NAME NAME] dotted_as_name : dotted_name name_name_opt (identity $1) ; ;; [NAME NAME] name_name_opt : ;;EMPTY | NAME NAME () ; ;; dotted_name: NAME ('.' 
NAME)* dotted_name : NAME | dotted_name PERIOD NAME (format "%s.%s" $1 $3) ; ;;;============================================================================ ;;;@@ global_stmt ;;;============================================================================ ;; global_stmt: 'global' NAME (',' NAME)* global_stmt : GLOBAL comma_sep_name_list (wisent-token $1 'code nil nil) ; ;; NAME (',' NAME)* comma_sep_name_list : NAME | comma_sep_name_list COMMA NAME (format "%s" $1) ; ;;;============================================================================ ;;;@@ exec_stmt ;;;============================================================================ ;; exec_stmt: 'exec' expr ['in' test [',' test]] exec_stmt : EXEC expr exec_trailer (wisent-token $1 'code nil nil) ; ;; ['in' test [',' test]] exec_trailer : ;;EMPTY | IN test comma_test_opt () ; ;; [',' test] comma_test_opt : ;;EMPTY | COMMA test () ; ;;;============================================================================ ;;;@@ assert_stmt ;;;============================================================================ ;; assert_stmt: 'assert' test [',' test] assert_stmt : ASSERT test comma_test_opt (wisent-token $1 'code nil nil) ; ;;;**************************************************************************** ;;;@ compound_stmt ;;;**************************************************************************** compound_stmt : if_stmt | while_stmt | for_stmt | try_stmt | funcdef | classdef ; ;;;============================================================================ ;;;@@ if_stmt ;;;============================================================================ ;; if_stmt: 'if' test ':' suite ('elif' test ':' suite)* ['else' ':' suite] if_stmt : IF test COLON suite elif_suite_pair_list else_suite_pair_opt (wisent-token $1 'code nil nil) ; ;; ('elif' test ':' suite)* elif_suite_pair_list : ;;EMPTY | elif_suite_pair_list ELIF test COLON suite () ; ;; ['else' ':' suite] else_suite_pair_opt : ;;EMPTY | ELSE COLON suite () ; ;; 
This NT follows the COLON token for most compound statements. ;; suite: simple_stmt | NEWLINE INDENT stmt+ DEDENT suite : simple_stmt (list $1) | NEWLINE INDENT stmt_oom DEDENT (nreverse $3) ; ;; stmt+ oom stands for One Or More stmt_oom : stmt (list $1) | stmt_oom stmt (cons $2 $1) ; ;; stmt: simple_stmt | compound_stmt stmt : simple_stmt | compound_stmt ; ;;;============================================================================ ;;;@@ while_stmt ;;;============================================================================ ;; while_stmt: 'while' test ':' suite ['else' ':' suite] while_stmt : WHILE test COLON suite else_suite_pair_opt (wisent-token $1 'code nil nil) ; ;;;============================================================================ ;;;@@ for_stmt ;;;============================================================================ ;; for_stmt: 'for' exprlist 'in' testlist ':' suite ['else' ':' suite] for_stmt : FOR exprlist IN testlist COLON suite else_suite_pair_opt (wisent-token $1 'code nil nil) ; ;;;============================================================================ ;;;@@ try_stmt ;;;============================================================================ ;; try_stmt: ('try' ':' suite (except_clause ':' suite)+ #diagram:break ;; ['else' ':' suite] | 'try' ':' suite 'finally' ':' suite) try_stmt : TRY COLON suite except_clause_suite_pair_list else_suite_pair_opt (wisent-token $1 'code nil nil) | TRY COLON suite FINALLY COLON suite (wisent-token $1 'code nil nil) ; ;; (except_clause ':' suite)+ except_clause_suite_pair_list : except_clause COLON suite (concat "except_clause_suite_pair_list") | except_clause_suite_pair_list except_clause COLON suite (concat "except_clause_suite_pair_list") ; ;; # NB compile.c makes sure that the default except clause is last ;; except_clause: 'except' [test [',' test]] except_clause : EXCEPT zero_one_or_two_test ; ;; [test [',' test]] zero_one_or_two_test : ;;EMPTY | test zero_or_one_comma_test ; 
;;;============================================================================ ;;;@@ funcdef ;;;============================================================================ ;; funcdef: 'def' NAME parameters ':' suite funcdef : DEF NAME function_parameter_list COLON suite (wisent-token $2 'function nil $3) ; function_parameter_list : PAREN_BLOCK (EXPANDFULL $1 function_parameters) ; ;; parameters: '(' [varargslist] ')' function_parameters : LPAREN () | RPAREN () | NAME ; ;;;============================================================================ ;;;@@ classdef ;;;============================================================================ ;; classdef: 'class' NAME ['(' testlist ')'] ':' suite classdef : CLASS NAME paren_testlist_opt COLON suite (wisent-token $2 'type $1 $5 nil) ; ;; ['(' testlist ')'] paren_testlist_opt : ;;EMPTY ;;| LPAREN testlist RPAREN | PAREN_BLOCK ; ;;;**************************************************************************** ;;;@ test ;;;**************************************************************************** ;; test: and_test ('or' and_test)* | lambdef test : test_test | lambdef ; ;; and_test ('or' and_test)* test_test : and_test | test_test OR and_test (format "%s %s %s" $1 $2 $3) ; ;; and_test: not_test ('and' not_test)* and_test : not_test | and_test AND not_test (format "%s %s %s" $1 $2 $3) ; ;; not_test: 'not' not_test | comparison not_test : NOT not_test (format "NOT %s" $1) | comparison ; ;; comparison: expr (comp_op expr)* comparison : expr | comparison comp_op expr (format "%s %s %s" $1 $2 $3) ; ;; comp_op: '<'|'>'|'=='|'>='|'<='|'<>'|'!='|'in'|'not' 'in'|'is'|'is' 'not' comp_op : LT | GT | EQ | GE | LE | LTGT | NE | IN | NOT IN | IS | IS NOT ; ;; expr: xor_expr ('|' xor_expr)* expr : xor_expr | expr BAR xor_expr (format "%s %s %s" $1 $2 $3) ; ;; xor_expr: and_expr ('^' and_expr)* xor_expr : and_expr | xor_expr HAT and_expr (format "%s %s %s" $1 $2 $3) ; ;; and_expr: shift_expr ('&' shift_expr)* and_expr : shift_expr | 
and_expr AMP shift_expr (format "%s %s %s" $1 $2 $3) ; ;; shift_expr: arith_expr (('<<'|'>>') arith_expr)* shift_expr : arith_expr | shift_expr shift_expr_operators arith_expr (format "%s %s %s" $1 $2 $3) ; ;; ('<<'|'>>') shift_expr_operators : LTLT | GTGT ; ;; arith_expr: term (('+'|'-') term)* arith_expr : term | arith_expr plus_or_minus term (format "%s %s %s" $1 $2 $3) ; ;; ('+'|'-') plus_or_minus : PLUS | MINUS ; ;; term: factor (('*'|'/'|'%'|'//') factor)* term : factor | term term_operator factor (format "%s %s %s" $1 $2 $3) ; term_operator : MULT | DIV | MOD | DIVDIV ; ;; factor: ('+'|'-'|'~') factor | power factor : prefix_operators factor (format "%s %s" $1 $2) | power ; ;; ('+'|'-'|'~') prefix_operators : PLUS | MINUS | TILDE ; ;; power: atom trailer* ('**' factor)* power : atom trailer_zom exponent_zom (concat $1 (if $2 (concat " " $2 " ") "") (if $3 (concat " " $3) "") ) ; trailer_zom : ;;EMPTY | trailer_zom trailer (format "(%s %s)" (or $1 "") $2) ; exponent_zom : ;;EMPTY | exponent_zom EXPONENT factor (format "(%s ** %s)" (or $1 "") $3) ; ;; trailer: '(' [arglist] ')' | '[' subscriptlist ']' | '.' NAME trailer : PAREN_BLOCK | BRACK_BLOCK | PERIOD NAME (concat "." $2) ; ;; atom: '(' [testlist] ')' | '[' [listmaker] ']' | '{' [dictmaker] '}' ;; | '`' testlist '`' | NAME | NUMBER | STRING+ atom : PAREN_BLOCK (format "%s" $1) | BRACK_BLOCK (format "%s" $1) | BRACE_BLOCK (format "%s" $1) | BACKQUOTE testlist BACKQUOTE () | NAME | NUMBER_LITERAL (concat $1) | one_or_more_string ; test_list_opt : ;;EMPTY | testlist ; ;; testlist: test (',' test)* [','] testlist : comma_sep_test_list comma_opt (identity $1) ; ;; test (',' test)* comma_sep_test_list : test | comma_sep_test_list COMMA test (format "%s, %s" $1 $3) ; ;; (read $1) and (read $2) were done before to peel away the double quotes. ;; However that does not work for single quotes, so it was taken out. 
one_or_more_string : STRING_LITERAL (wisent-python-truncate-string $1) ;; limit string to 16 chars | one_or_more_string STRING_LITERAL (wisent-python-truncate-string (concat $1 $2)) ;; limit string to 16 chars ; ;;;**************************************************************************** ;;;@ lambdef ;;;**************************************************************************** ;; lambdef: 'lambda' [varargslist] ':' test lambdef : LAMBDA varargslist_opt COLON test (format "%s %s %s" $1 $2 $3) ; ;; [varargslist] varargslist_opt : ;;EMPTY | varargslist ; ;; varargslist: (fpdef ['=' test] ',')* ('*' NAME [',' '**' NAME] | '**' NAME) ;; | fpdef ['=' test] (',' fpdef ['=' test])* [','] varargslist : fpdef_opt_test_list_comma_zom rest_args (nconc $2 $1) | fpdef_opt_test_list comma_opt ; ;; ('*' NAME [',' '**' NAME] | '**' NAME) rest_args : MULT NAME multmult_name_opt () ;;(wisent-token $2 'variable nil nil) | EXPONENT NAME () ;;(wisent-token $2 'variable nil nil) ; ;; [',' '**' NAME] multmult_name_opt : ;;EMPTY | COMMA EXPONENT NAME (wisent-token $3 'variable nil nil) ; fpdef_opt_test_list_comma_zom : ;;EMPTY | fpdef_opt_test_list_comma_zom fpdef_opt_test COMMA (nconc $2 $1) ; ;; fpdef ['=' test] (',' fpdef ['=' test])* fpdef_opt_test_list : fpdef_opt_test | fpdef_opt_test_list COMMA fpdef_opt_test (nconc $3 $1) ; ;; fpdef ['=' test] fpdef_opt_test : fpdef eq_test_opt (identity $1) ; ;; fpdef: NAME | '(' fplist ')' fpdef : NAME (list (wisent-token $1 'variable nil nil)) | LPAREN fplist RPAREN (identity $2) ; ;; fplist: fpdef (',' fpdef)* [','] fplist : fpdef_list comma_opt (identity $1) ; ;; fpdef (',' fpdef)* fpdef_list : fpdef | fpdef_list COMMA fpdef (nconc $3 $1) ; ;; ['=' test] eq_test_opt : ;;EMPTY | ASSIGN test () ; ;;;**************************************************************************** ;;;@ Misc ;;;**************************************************************************** ;; [','] comma_opt : ;;EMPTY | COMMA ; ;; [';'] semicolon_opt : ;;EMPTY | 
SEMICOLON ; ;;; wisent-python.wy ends here |
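The grammar's goal rule above splits statements into simple_stmt (no INDENT tokens) and compound_stmt (whose `suite` rule consumes NEWLINE INDENT stmt+ DEDENT). As a rough illustration of the offside-rule tokens that suite rule expects, here is a minimal Python sketch (not part of the original Wisent lexer, and ignoring blank lines and tabs) that turns indentation changes into INDENT/DEDENT markers:

```python
def indent_tokens(lines):
    """Emit NEWLINE/INDENT/DEDENT markers the way an offside-rule lexer
    feeding the grammar's `suite' rule would (simplified sketch)."""
    stack = [0]          # open indentation levels, outermost first
    out = []
    for line in lines:
        width = len(line) - len(line.lstrip(" "))
        if width > stack[-1]:            # deeper indent opens a block
            stack.append(width)
            out.append("INDENT")
        while width < stack[-1]:         # shallower indent closes blocks
            stack.pop()
            out.append("DEDENT")
        out.append(line.strip())
        out.append("NEWLINE")
    while len(stack) > 1:                # close any blocks still open at EOF
        stack.pop()
        out.append("DEDENT")
    return out
```

For example, `indent_tokens(["if x:", "  y = 1", "z = 2"])` yields the INDENT before `y = 1` and the DEDENT before `z = 2` that let the parser match `NEWLINE INDENT stmt_oom DEDENT`.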
From: <ry...@ds...> - 2003-01-29 04:58:26
|
Hi Eric, It seems like the mention of semanticdb-obj.el needs to be deleted from semantic/Project.ede, because I can't find that file anywhere. I can run "make" in the semantic directory successfully after updating the new Project.ede from the CVS repository, running ede-proj-regenerate, and then deleting the mention of semanticdb-obj.el from the resulting Makefile. >>>>> "EL" == Eric M Ludlam <er...@si...> writes: EL> EL> Hi, EL> As David pointed out, the Makefiles checked into semantic could not EL> be regenerated via EDE. To my dismay I found a plethora of changes EL> waiting to be checked in, so I did. Anyone who gets the new EDE files EL> should be able to rebuild the Makefiles in semantic in a way that EL> they will come close to what is in CVS. I'm getting clean builds on EL> both EDE and Semantic now. EL> EL> I think the Project/Makefile may need to be double checked. I'll EL> look closer when I get a chance. EL> EL> I also checked in the following semantic files: EL> EL> semantic-c.el: support #elif EL> EL> This may need porting back to v1p4 EL> EL> semantic-el.el: support defvar-mode-local EL> EL> This is a nifty change that makes any language support file (like EL> semantic-el.el, for instance) look cool in speedbar, treating the mode EL> much like a class with externally defined methods in C++. EL> EL> Despite David's request, I did not check in newly regenerated EL> Makefiles. I'm hoping for confirmation from others that I didn't EL> really mess something up in EDE first. EL> EL> Enjoy EL> Eric EL> EL> -- EL> Eric Ludlam: za...@gn..., er...@si... EL> Home: http://www.ludlam.net Siege: www.siege-engine.com EL> Emacs: http://cedet.sourceforge.net GNU: www.gnu.org EL> EL> EL> ------------------------------------------------------- EL> This SF.NET email is sponsored by: EL> SourceForge Enterprise Edition + IBM + LinuxWorld = Something 2 See! EL> http://www.vasoftware.com EL> _______________________________________________ EL> Cedet-devel mailing list EL> Ced...@li... 
EL> https://lists.sourceforge.net/lists/listinfo/cedet-devel |
From: <ry...@ds...> - 2003-01-29 04:16:15
|
Hi all, I added the following to NEWS file. Please let me know what you think. *** NEWS.~1.6.~ Tue Jan 28 20:02:02 2003 --- NEWS Tue Jan 28 20:11:09 2003 *************** *** 237,242 **** --- 237,247 ---- * Languages. + ** Python parser added. + The LALR grammar is based on the official grammar with slight + modifications. All top-level expressions produce a semantic token. + Further work is planned to improve tokens for functions and classes. + TBD. |
From: Eric M. L. <er...@si...> - 2003-01-29 03:47:28
|
Hi, As David pointed out, the Makefiles checked into semantic could not be regenerated via EDE. To my dismay I found a plethora of changes waiting to be checked in, so I did. Anyone who gets the new EDE files should be able to rebuild the Makefiles in semantic in a way that they will come close to what is in CVS. I'm getting clean builds on both EDE and Semantic now. I think the Project/Makefile may need to be double checked. I'll look closer when I get a chance. I also checked in the following semantic files: semantic-c.el: support #elif This may need porting back to v1p4 semantic-el.el: support defvar-mode-local This is a nifty change that makes any language support file (like semantic-el.el, for instance) look cool in speedbar, treating the mode much like a class with externally defined methods in C++. Despite David's request, I did not check in newly regenerated Makefiles. I'm hoping for confirmation from others that I didn't really mess something up in EDE first. Enjoy Eric -- Eric Ludlam: za...@gn..., er...@si... Home: http://www.ludlam.net Siege: www.siege-engine.com Emacs: http://cedet.sourceforge.net GNU: www.gnu.org |
From: Eric M. L. <er...@si...> - 2003-01-29 03:30:21
|
>>> "David PONCE" <Dav...@wa...> seems to think that: [ ... ] >I think our first priority now is to release Semantic 1.4.3, so people >can benefit quickly from your great improvement of C/C++ parsing. > >If I am not too busy, I will put a new distrib. on SF this week! >Eric, do you agree with that? [ ... ] If you have time, this would be great. Eric -- Eric Ludlam: za...@gn..., er...@si... Home: http://www.ludlam.net Siege: www.siege-engine.com Emacs: http://cedet.sourceforge.net GNU: www.gnu.org |
From: <ry...@ds...> - 2003-01-29 03:24:48
|
Hi Eric, I'll mention python in the NEWS file in the next few days. The python parser seems to be able to generate proper tokens, although I would like the tokens for functions and classes to include more information such as function arguments. For the past few days, I tried to add code to generate tokens for function arguments, but I have not been successful. Most likely I don't understand how to use EXPANDFULL. I'll plug away for a few more days. Either I'll be successful or I'll cry for help. We'll see. >>>>> "EL" == Eric M Ludlam <er...@si...> writes: EL> >>>> ry...@ds... seems to think that: EL> [ ... ] DP> Finally, it would be nice if you could add something DP> in the NEWS file DP> about python into the "* Languages." topic ;-) >> >> I would be happy to do that except that I was waiting to see >> if I could get the python parser to a decent state first. >> I wouldn't want to announce something if it is not too >> useful. I'll try hard to fix as many loose ends as possible >> before 2.0 beta whenever that is. EL> [ ... ] EL> EL> You should download semantic 1.0 sometime. It really wanted to be EL> useful, but was not. ;) EL> EL> I often fall victim of thinking "just fix a few more bugs first", EL> which is a habit I need to get out of for my Emacs Projects. I have EL> found that I meet really cool people when I release buggy Emacs EL> code. ;) EL> EL> Eric |
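The function-argument tokens Richard is after come from the grammar's function_parameter_list rule, which re-parses the PAREN_BLOCK via EXPANDFULL and matches one NAME per parameter. A hypothetical Python sketch (purely illustrative, not part of semantic or wisent) of the information that pass is meant to extract from a `def` line:

```python
def param_names(def_line):
    """Pull the parameter names out of a 'def' line -- roughly what the
    grammar's EXPANDFULL pass over the PAREN_BLOCK should yield as
    'variable' tokens. Hypothetical helper; defaults are stripped and
    nested parentheses are not handled."""
    inner = def_line[def_line.index("(") + 1 : def_line.rindex(")")]
    return [p.split("=")[0].strip() for p in inner.split(",") if p.strip()]
```

For example, `param_names("def area(w, h=1):")` returns the two names `w` and `h`, dropping the default value.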
From: <ry...@ds...> - 2003-01-29 03:18:42
|
David, Thanks for your reply. I knew what the error messages meant, i.e., I know what shift/reduce conflicts mean, but I didn't know how to find out which non-terminals were the culprit. (setq wisent-verbose-flag t) is what I needed to know. Thanks again. >>>>> "DP" == David Ponce <dav...@wa...> writes: DP> DP> Hi Richard, >> I noticed the following lines in my *Messages* buffer: >> >> Grammar in wisent-python.el contains 1 useless nonterminals and 1 DP> useless rules >> Grammar in wisent-python.el contains 2 shift/reduce conflicts DP> DP> The first message indicates unused rules that can probably be safely DP> removed from the grammar. DP> DP> The second message indicates that certain states of the automaton DP> result in two different actions (shift or reduce, reduce or reduce) DP> that conflict. Normally an LALR automaton must be fully deterministic DP> (1 state -> 1 action). I encourage you to read the Bison manual to DP> learn more about LALR parsers. In most cases shift/reduce conflicts are DP> not errors and can be safely ignored. It is probably the case for the DP> python grammar :-) DP> DP> You can do M-: (setq wisent-verbose-flag t) before compiling your DP> grammar to get detailed information on these issues in a DP> "*wisent-log*" buffer. wisent.info should explain the content of DP> that buffer (see chapter "Grammar/Compiling a grammar/Conflicts", DP> then menu item "Grammar Debugging"). DP> >> *** `<no-type>' default matching spec DEDENT redefined as INDENT >> *** `symbol' default matching spec INDENT redefined as NAME >> *** `number' default matching spec NAME redefined as NUMBER_LITERAL >> *** `string' default matching spec NUMBER_LITERAL redefined as DP> STRING_LITERAL >> *** `newline' default matching spec STRING_LITERAL redefined as NEWLINE DP> DP> The above messages are warnings issued when building the table of DP> terminal tokens (from %token declarations). You can safely ignore DP> them. 
I will have a look at the code that builds the token table to DP> see if I can improve these obscure messages! DP> DP> Hope it helps! DP> DP> Thanks. DP> David |
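For readers unfamiliar with the shift/reduce conflicts David describes: when the automaton can either shift the next token or reduce by a completed rule, the two choices can build different parse trees. A toy Python sketch (purely illustrative, unrelated to Wisent's internals) using the ambiguous grammar E -> E '-' E | NUM shows the effect on "1 - 2 - 3":

```python
def parse_reduce_first(nums):
    """Resolve the conflict by reducing: left-associative, ((1 - 2) - 3)."""
    acc = nums[0]
    for n in nums[1:]:
        acc = acc - n
    return acc

def parse_shift_first(nums):
    """Resolve the conflict by shifting: right-associative, (1 - (2 - 3))."""
    acc = nums[-1]
    for n in reversed(nums[:-1]):
        acc = n - acc
    return acc
```

On [1, 2, 3] the two strategies evaluate to -4 and 2 respectively, which is why a conflict is only "safe to ignore" when the parser generator's default resolution (Wisent, like Bison, prefers shift) happens to be the parse you want.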
From: David P. <Dav...@wa...> - 2003-01-28 12:08:51
|
Hi Eric, In the process of preparing a future Semantic 2.0beta1 release, I noticed that the current wisent/Makefile seems not synchronized with wisent/Project.ede. Maybe you have the right Project.ede in your local area? If so, could you please check it in. Thanks! David |
From: David P. <Dav...@wa...> - 2003-01-28 11:43:46
|
Hi All, Maybe it's time to do some administrative tasks ;-) The following Semantic bugs are still open on SF: ID Summary Date ------ -------------------------------------------- ---------------- 605541 Parses some C function decls incorrectly 2002-09-06 14:28 595868 C Parser cannot handle __THROW 2002-08-16 08:18 562371 semantic brings Emacs down? 2002-05-30 16:17 230808 incremental reparse fixes 2001-02-02 02:35 230788 one token in buffer, a partial reparse fails 2001-02-02 01:46 230784 Support function pointers for C 2001-02-02 01:42 I suspect that some of them have been fixed by recent changes, and perhaps others can be closed too. Would you like to have a look at them and close those that are no longer pertinent? Thank you for your time! David |
From: David P. <Dav...@wa...> - 2003-01-28 10:29:33
|
> BTW: What's your estimation when to public release first beta > of semantic 2.0? Please do not misunderstand me, I do not want to put > pressure on you but just for my interest.... I hope it will be soon ;-) I think our first priority now is to release Semantic 1.4.3, so people can benefit quickly from your great improvement of C/C++ parsing. If I am not too busy, I will put a new distrib. on SF this week! Eric, do you agree with that? >>Thanks for your great work! > > > You're welcome...it was just a very small piece of a really great > tool. Semantic is IMHO the most important (of course after our ECB > ;-)) work for making Emacs a really (language independent) IDE > comparable with commercial (language dependent) IDEs. Thanks! It is always good to receive compliments ;-) David |
From: Berndl, K. <kla...@sd...> - 2003-01-28 10:04:11
|
Hi David, >> just checked in new c.bnf, semantic-c.el (just newly generated) and >> tests/test.cpp. Now all c++-parsing stuff we discussed yesterday >> works fine. >I synchronized Semantic 2.0 with your latest enhancements. Fine, thanks! BTW: What's your estimation when to public release first beta of semantic 2.0? Please do not misunderstand me, I do not want to put pressure on you but just for my interest.... I hope it will be soon ;-) >Thanks for your great work! You're welcome...it was just a very small piece of a really great tool. Semantic is IMHO the most important (of course after our ECB ;-)) work for making Emacs a really (language independent) IDE comparable with commercial (language dependent) IDEs. Ciao, Klaus P.S. My remark about ECB was a joke :-) |
From: David P. <Dav...@wa...> - 2003-01-28 09:57:23
|
Hi Klaus, > just checked in new c.bnf, semantic-c.el (just newly generated) and > tests/test.cpp. Now all c++-parsing stuff we discussed yesterday > works fine. I synchronized Semantic 2.0 with your latest enhancements. Thanks for your great work! David |
From: Berndl, K. <kla...@sd...> - 2003-01-28 08:25:54
|
Hi guys, just checked in new c.bnf, semantic-c.el (just newly generated) and tests/test.cpp. Now all c++-parsing stuff we discussed yesterday works fine. I will write a short message in the semantic mailing list so users can download from CVS the new semantic-c.el. Ciao, Klaus |