I prepared a NEWS update for the new auto-generated single analyzer
feature. You will find the patch at the end.
Before committing it, I would appreciate it if you could review it.
I am not sure my English is correct and clear enough ;-)
Concerning the "single lexical analyzer" terminology, I find it a
little confusing alongside "lexical analyzer".  The former actually
denotes a single lexer matching rule, whereas the latter denotes the
actual lexical analyzer as a whole.
What do you think of adopting the "lexical rule" terminology already
used by Flex, instead of "single lexical analyzer"?  Here is an
excerpt of what the Flex manual says:
The rules section of the flex input contains a series of rules of the
form `pattern action'.
The patterns in the input are written using an extended set of
regular expressions.
Each pattern in a rule has a corresponding action, which can be
any arbitrary C statement.
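For instance, a typical flex rule pairs a regexp pattern with a C
action, like this (a classic textbook example, not taken from the
manual excerpt above):

    [0-9]+   { return NUMBER; }

Each of our auto-generated single analyzers plays essentially the
same pattern/action role.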
It looks like this description also fits our single analyzers well ;-)
To enforce adoption of the "lexical rule" terminology we could rename
these macros like this:
define-lex-analyzer -> define-lex-rule
define-lex-regex-analyzer -> define-lex-regex-rule
define-lex-simple-regex-analyzer -> define-lex-simple-regex-rule
define-lex-block-analyzer -> define-lex-block-rule
define-lex-keyword-type-analyzer -> define-lex-keyword-type-rule
define-lex-sexp-type-analyzer -> define-lex-sexp-type-rule
define-lex-regex-type-analyzer -> define-lex-regex-type-rule
define-lex-string-type-analyzer -> define-lex-string-type-rule
define-lex-block-type-analyzer -> define-lex-block-type-rule
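For reference, here is roughly what a definition looks like with the
current macro names (a hypothetical sketch: the analyzer name, regexp,
and token class are made up for illustration):

    (define-lex-regex-analyzer my-lex-number
      "Detect and create number tokens."
      "[0-9]+"
      ;; Push a token of class NUMBER covering the match.
      (semantic-lex-push-token
       (semantic-lex-token 'NUMBER (match-beginning 0) (match-end 0))))

Under the proposed terminology the same code would simply use
define-lex-regex-rule, with no change in behavior.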
RCS file: /cvsroot/cedet/cedet/semantic/NEWS,v
retrieving revision 1.15
diff -c -r1.15 NEWS
*** NEWS 9 Jan 2004 03:08:18 -0000 1.15
--- NEWS 20 Jan 2004 10:29:19 -0000
*** 265,270 ****
--- 265,290 ----
Language specific human written code must call the automatically
generated setup function.
+ *** Auto-generation of single lexical analyzers
+ The new %type statement, combined with %token statements, makes it
+ possible to declare a lexical type and to associate with it the
+ patterns that define how to match lexical tokens of that type.
+ The grammar construction process can take advantage of %type and
+ %token declarations to automatically generate the definition of a
+ single lexical analyzer for each explicitly declared lexical type.
+ Useful default values are provided for common well-known types such
+ as <keyword>, <symbol>, <string>, <number>, <punctuation>, and <block>.
+ For those types, assuming that the correct patterns are provided by
+ %token statements, a simple "%type <type>" declaration should
+ generally suffice to auto-generate a suitable single analyzer.
+ It is then easy to combine predefined and auto-generated single
+ analyzers to build ad hoc lexical analyzers.  Examples are available
+ among the grammars included in the distribution.
*** Bovine grammar
A file FOO.by will create the file FOO-by.el, and FOO-by.elc