If you create a lexer with no block analyzers, then it will parse
all the characters. If you are re-using an existing lexer, though, that
is a bit of a problem. Looking through the code, I don't see a magic
switch, so passing in a really big number is probably the way to go
for a quick hack.
Here is another cool trick. If you pass in 1.0e+INF, the current
depth will always compare less than it, so the limit is never reached.
I don't know if that read syntax is unique to Emacs 23 CVS (which is
what I'm running) or not.
>>> Dmitry Dzhus <mail@...> seems to think that:
>I need to read all lexemes of a given text (Semantic tag body) and find
>those that have `'symbol` or `'NAME` token symbol.
>For now I've got the following line in my code:
> (dolist (lexem (semantic-lex from to 100) result)
>What argument may I pass to `semantic-lex` to make it process the text
>with _arbitrary_ parsing depth? Of course specifying something about
>10000 would work for most cases, but I wonder if there's some special
Eric Ludlam: zappo@..., eric@...
Home: http://www.ludlam.net Siege: http://www.siege-engine.com
Emacs: http://cedet.sourceforge.net GNU: http://www.gnu.org