Re: [cedet-semantic] Arbitrary parsing depth with `semantic-lex`
From: Eric M. L. <er...@si...> - 2007-10-28 12:25:45
Hi,

If you create a lexer with no block analyzers, then it will parse all the
characters. If you are re-using an existing lexer, that would be a bit of
a problem. Looking in the code, I don't see a magic switch. I think using
a really big number would be the way to go for a quick hack.

Here is another cool trick: if you pass in the number 1.0e+INF, the
current depth will always be less than it, so you will never reach it. I
don't know whether that number is unique to Emacs 23 CVS (which is what
I'm running) or not.

Good Luck
Eric

>>> Dmitry Dzhus <ma...@sp...> seems to think that:
>
> I need to read all lexemes of a given text (Semantic tag body) and find
> those that have the `'symbol` or `'NAME` token symbol.
>
> For now I've got the following line in my code:
>
>   (dolist (lexem (semantic-lex from to 100) result)
>
> What argument may I pass to `semantic-lex` to make it process the text
> with _arbitrary_ parsing depth? Of course specifying something around
> 10000 would work for most cases, but I wonder if there's some special
> switch.

--
Eric Ludlam:  za...@gn..., er...@si...
Home: http://www.ludlam.net  Siege: www.siege-engine.com
Emacs: http://cedet.sourceforge.net  GNU: www.gnu.org
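The 1.0e+INF trick above could be sketched roughly like this -- a minimal,
untested example in the spirit of Dmitry's `dolist` snippet. The function
name `my-collect-symbol-lexemes` is hypothetical, and it assumes the usual
CEDET lexeme shape of (CLASS START . END), where CLASS is the token class
symbol:

```elisp
;; A sketch of the infinite-depth trick.  Passing 1.0e+INF as the
;; DEPTH argument means the current nesting depth can never exceed it,
;; so all blocks are descended into.
;; NOTE: `my-collect-symbol-lexemes' is a made-up name for illustration.

(defun my-collect-symbol-lexemes (from to)
  "Return lexemes between FROM and TO whose class is `symbol' or `NAME'."
  (let (result)
    (dolist (lexeme (semantic-lex from to 1.0e+INF) (nreverse result))
      ;; Each lexeme looks like (CLASS START . END); the class symbol
      ;; is in the car.
      (when (memq (car lexeme) '(symbol NAME))
        (push lexeme result)))))
```

Whether floats are accepted everywhere a depth is compared would depend on
the lexer's internals (`<` in Emacs Lisp does accept mixed integer/float
arguments), so as Eric says, a really big integer is the safe quick hack.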