pyparsing-users Mailing List for Python parsing module (Page 5)
Brought to you by:
ptmcg
From: Asif I. <va...@gm...> - 2014-05-25 07:00:28
|
On Sun, May 25, 2014 at 1:46 AM, Paul McGuire <pt...@au...> wrote: > Pyparsing only reports one because you only told it to get one. Change to: > > result = OneOrMore(expr).parseString(data).asList() > I am using expr.scanString(data) instead to make sure only to pickup user block like user = foo { .. {..} .. } and not get stuck if sees group = foo {.. {..} .. } Here is a sample of how a real tac_plus.conf looks like and I am only parsing out the user blocks Here are some example tac_plus.conf file https://github.com/mkouhei/tacacs-plus/blob/master/debian/tac_plus.conf https://github.com/mirek186/BackUp/blob/master/tacProject/tac_plus.conf > and you'll process the rest of the file too. > > Also, you will find your results easier to process if you wrap expr in a > Group, as in: > > expr = Group(Word(alphas) + '=' + Word(alphanums) + > Optional(nestedCurlies)) > > -- Paul > > > -----Original Message----- > From: Asif Iqbal [mailto:va...@gm...] > Sent: Sunday, May 25, 2014 12:08 AM > To: pyp...@li... > Subject: [Pyparsing] parsing and extracting tac_plus.conf user data > > Hi All, > > I am trying to use pyparse to extract user data and it only picks up the > first block. > > Any idea what I am doing wrong? 
I am using python 2.7.6 on ubuntu trusty > > #!/usr/bin/python > import pprint, sys > #from pyparsing import Word, Literal, Forward, Group, ZeroOrMore, alphas > from pyparsing import * > > f = sys.argv[1] > > data = open(f,'r').read() > > nestedCurlies = nestedExpr('{','}') > > expr = Word(alphas) + '=' + Word(alphanums) + Optional(nestedCurlies) > > expr.ignore("#" + restOfLine) > result = expr.parseString(data).asList() > > pprint.pprint(result) > > Here is sample data file: > > user = aa06591 { > pap = PAM > login = PAM > member = readonly > > ## temporary commands so John can adjust resolver > ## uplink speed/duplex on SVCS routers > > cmd = interface { > permit "Ethernet" > deny .* > } > > cmd = speed { > permit .* > } > cmd = duplex { > permit .* > } > cmd = default { > permit speed > permit duplex > deny .* > } > cmd = write { > deny ^erase > permit .* > } > } > user = lukesd { > pap = des 11uGIcdXQ6v9E > login = file /etc/tacacs-passwd > member = readonly > } > user = curryc { > pap = PAM > login = PAM > member = implementation > } > user = rhodesw { > pap = PAM > login = PAM > member = implementation > } > user = aa68442 { > pap = PAM > login = PAM > member = implementation > } > user = jdimayu { > pap = PAM > login = PAM > member = readonly > } > > > Here is the output, and it only displays the first block > > ['user', > '=', > 'aa60591', > ['pap', > '=', > 'PAM', > 'login', > '=', > 'PAM', > 'member', > '=', > 'readonly', > 'cmd', > '=', > 'interface', > ['permit', '"Ethernet"', 'deny', '.*'], > 'cmd', > '=', > 'speed', > ['permit', '.*'], > 'cmd', > '=', > 'duplex', > ['permit', '.*'], > 'cmd', > '=', > 'default', > ['permit', 'speed', 'permit', 'duplex', 'deny', '.*'], > 'cmd', > '=', > 'write', > ['deny', '^erase', 'permit', '.*']]] > > > -- > Asif Iqbal > PGP Key: 0xE62693C5 KeyServer: pgp.mit.edu > A: Because it messes up the order in which people normally read text. > Q: Why is top-posting such a bad thing? 
-- Asif Iqbal PGP Key: 0xE62693C5 KeyServer: pgp.mit.edu
A: Because it messes up the order in which people normally read text.
Q: Why is top-posting such a bad thing? |
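For reference, the scanString approach described above can be sketched on a trimmed, made-up config (the group/user names here are illustrative, not the real tac_plus.conf contents):

```python
from pyparsing import Word, alphanums, nestedExpr, Group, Literal, restOfLine

sample = """
group = admins {
    default service = permit
}
user = lukesd {
    pap = des 11uGIcdXQ6v9E
    member = readonly
}
user = curryc {
    login = PAM
    member = implementation
}
"""

nestedCurlies = nestedExpr('{', '}')
# Anchoring on the literal 'user' keyword skips the 'group = ...' blocks
user_block = Group(Literal('user') + '=' + Word(alphanums) + nestedCurlies)
user_block.ignore('#' + restOfLine)

# scanString yields (tokens, start, end) for each match found in the input
users = [tokens[0] for tokens, start, end in user_block.scanString(sample)]
```

Because scanString skips over text that does not match, the `group = ...` block is simply passed by instead of derailing the parse.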
From: Paul M. <pt...@au...> - 2014-05-25 05:46:42
|
Pyparsing only reports one because you only told it to get one. Change to: result = OneOrMore(expr).parseString(data).asList() and you'll process the rest of the file too. Also, you will find your results easier to process if you wrap expr in a Group, as in: expr = Group(Word(alphas) + '=' + Word(alphanums) + Optional(nestedCurlies)) -- Paul -----Original Message----- From: Asif Iqbal [mailto:va...@gm...] Sent: Sunday, May 25, 2014 12:08 AM To: pyp...@li... Subject: [Pyparsing] parsing and extracting tac_plus.conf user data Hi All, I am trying to use pyparse to extract user data and it only picks up the first block. Any idea what I am doing wrong? I am using python 2.7.6 on ubuntu trusty #!/usr/bin/python import pprint, sys #from pyparsing import Word, Literal, Forward, Group, ZeroOrMore, alphas from pyparsing import * f = sys.argv[1] data = open(f,'r').read() nestedCurlies = nestedExpr('{','}') expr = Word(alphas) + '=' + Word(alphanums) + Optional(nestedCurlies) expr.ignore("#" + restOfLine) result = expr.parseString(data).asList() pprint.pprint(result) Here is sample data file: user = aa06591 { pap = PAM login = PAM member = readonly ## temporary commands so John can adjust resolver ## uplink speed/duplex on SVCS routers cmd = interface { permit "Ethernet" deny .* } cmd = speed { permit .* } cmd = duplex { permit .* } cmd = default { permit speed permit duplex deny .* } cmd = write { deny ^erase permit .* } } user = lukesd { pap = des 11uGIcdXQ6v9E login = file /etc/tacacs-passwd member = readonly } user = curryc { pap = PAM login = PAM member = implementation } user = rhodesw { pap = PAM login = PAM member = implementation } user = aa68442 { pap = PAM login = PAM member = implementation } user = jdimayu { pap = PAM login = PAM member = readonly } Here is the output, and it only displays the first block ['user', '=', 'aa60591', ['pap', '=', 'PAM', 'login', '=', 'PAM', 'member', '=', 'readonly', 'cmd', '=', 'interface', ['permit', '"Ethernet"', 'deny', '.*'], 
'cmd', '=', 'speed', ['permit', '.*'], 'cmd', '=', 'duplex', ['permit', '.*'], 'cmd', '=', 'default', ['permit', 'speed', 'permit', 'duplex', 'deny', '.*'], 'cmd', '=', 'write', ['deny', '^erase', 'permit', '.*']]] -- Asif Iqbal PGP Key: 0xE62693C5 KeyServer: pgp.mit.edu A: Because it messes up the order in which people normally read text. Q: Why is top-posting such a bad thing? ---------------------------------------------------------------------------- -- "Accelerate Dev Cycles with Automated Cross-Browser Testing - For FREE Instantly run your Selenium tests across 300+ browser/OS combos. Get unparalleled scalability from the best Selenium testing platform available Simple to use. Nothing to install. Get started now for free." http://p.sf.net/sfu/SauceLabs _______________________________________________ Pyparsing-users mailing list Pyp...@li... https://lists.sourceforge.net/lists/listinfo/pyparsing-users --- This email is free from viruses and malware because avast! Antivirus protection is active. http://www.avast.com |
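Paul's OneOrMore/Group suggestion can be sketched end-to-end on a shortened, illustrative version of the sample data:

```python
from pyparsing import (Word, alphas, alphanums, nestedExpr, Optional,
                       Group, OneOrMore, restOfLine)

data = """
user = lukesd {
    member = readonly
}
user = curryc {
    member = implementation
}
"""

nestedCurlies = nestedExpr('{', '}')
# Group() wraps each name/value/block triple in its own sublist
expr = Group(Word(alphas) + '=' + Word(alphanums) + Optional(nestedCurlies))
expr.ignore('#' + restOfLine)

# OneOrMore keeps parsing past the first match instead of stopping
result = OneOrMore(expr).parseString(data).asList()
```

Each user block now comes back as its own sublist instead of one flat token stream.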
From: Asif I. <va...@gm...> - 2014-05-25 05:42:35
|
On Sun, May 25, 2014 at 1:07 AM, Asif Iqbal <va...@gm...> wrote: > Hi All, > > I am trying to use pyparse to extract user data and it only picks up the > first block. > > Any idea what I am doing wrong? I am using python 2.7.6 on ubuntu trusty > > #!/usr/bin/python > import pprint, sys > #from pyparsing import Word, Literal, Forward, Group, ZeroOrMore, alphas > from pyparsing import * > > f = sys.argv[1] > > data = open(f,'r').read() > > nestedCurlies = nestedExpr('{','}') > > expr = Word(alphas) + '=' + Word(alphanums) + Optional(nestedCurlies) > > expr.ignore("#" + restOfLine) > result = expr.parseString(data).asList() > using scanString was the trick. result = list(expr.scanString(data)) did the trick. > > pprint.pprint(result) > > Here is sample data file: > > user = aa06591 { > pap = PAM > login = PAM > member = readonly > > ## temporary commands so John can adjust resolver > ## uplink speed/duplex on SVCS routers > > cmd = interface { > permit "Ethernet" > deny .* > } > > cmd = speed { > permit .* > } > cmd = duplex { > permit .* > } > cmd = default { > permit speed > permit duplex > deny .* > } > cmd = write { > deny ^erase > permit .* > } > } > user = lukesd { > pap = des 11uGIcdXQ6v9E > login = file /etc/tacacs-passwd > member = readonly > } > user = curryc { > pap = PAM > login = PAM > member = implementation > } > user = rhodesw { > pap = PAM > login = PAM > member = implementation > } > user = aa68442 { > pap = PAM > login = PAM > member = implementation > } > user = jdimayu { > pap = PAM > login = PAM > member = readonly > } > > > Here is the output, and it only displays the first block > > ['user', > '=', > 'aa60591', > ['pap', > '=', > 'PAM', > 'login', > '=', > 'PAM', > 'member', > '=', > 'readonly', > 'cmd', > '=', > 'interface', > ['permit', '"Ethernet"', 'deny', '.*'], > 'cmd', > '=', > 'speed', > ['permit', '.*'], > 'cmd', > '=', > 'duplex', > ['permit', '.*'], > 'cmd', > '=', > 'default', > ['permit', 'speed', 'permit', 'duplex', 'deny', '.*'], 
> 'cmd', > '=', > 'write', > ['deny', '^erase', 'permit', '.*']]] > > > -- > Asif Iqbal > PGP Key: 0xE62693C5 KeyServer: pgp.mit.edu > A: Because it messes up the order in which people normally read text. > Q: Why is top-posting such a bad thing? > > -- Asif Iqbal PGP Key: 0xE62693C5 KeyServer: pgp.mit.edu A: Because it messes up the order in which people normally read text. Q: Why is top-posting such a bad thing? |
From: Asif I. <va...@gm...> - 2014-05-25 05:08:10
|
Hi All,

I am trying to use pyparsing to extract user data, and it only picks up the first block. Any idea what I am doing wrong? I am using Python 2.7.6 on Ubuntu trusty.

    #!/usr/bin/python
    import pprint, sys
    #from pyparsing import Word, Literal, Forward, Group, ZeroOrMore, alphas
    from pyparsing import *

    f = sys.argv[1]
    data = open(f, 'r').read()

    nestedCurlies = nestedExpr('{', '}')
    expr = Word(alphas) + '=' + Word(alphanums) + Optional(nestedCurlies)
    expr.ignore("#" + restOfLine)

    result = expr.parseString(data).asList()
    pprint.pprint(result)

Here is sample data file:

    user = aa06591 {
        pap = PAM
        login = PAM
        member = readonly

        ## temporary commands so John can adjust resolver
        ## uplink speed/duplex on SVCS routers

        cmd = interface {
            permit "Ethernet"
            deny .*
        }
        cmd = speed {
            permit .*
        }
        cmd = duplex {
            permit .*
        }
        cmd = default {
            permit speed
            permit duplex
            deny .*
        }
        cmd = write {
            deny ^erase
            permit .*
        }
    }
    user = lukesd {
        pap = des 11uGIcdXQ6v9E
        login = file /etc/tacacs-passwd
        member = readonly
    }
    user = curryc {
        pap = PAM
        login = PAM
        member = implementation
    }
    user = rhodesw {
        pap = PAM
        login = PAM
        member = implementation
    }
    user = aa68442 {
        pap = PAM
        login = PAM
        member = implementation
    }
    user = jdimayu {
        pap = PAM
        login = PAM
        member = readonly
    }

Here is the output, and it only displays the first block:

    ['user',
     '=',
     'aa60591',
     ['pap', '=', 'PAM',
      'login', '=', 'PAM',
      'member', '=', 'readonly',
      'cmd', '=', 'interface', ['permit', '"Ethernet"', 'deny', '.*'],
      'cmd', '=', 'speed', ['permit', '.*'],
      'cmd', '=', 'duplex', ['permit', '.*'],
      'cmd', '=', 'default', ['permit', 'speed', 'permit', 'duplex', 'deny', '.*'],
      'cmd', '=', 'write', ['deny', '^erase', 'permit', '.*']]]

-- Asif Iqbal PGP Key: 0xE62693C5 KeyServer: pgp.mit.edu
A: Because it messes up the order in which people normally read text.
Q: Why is top-posting such a bad thing? |
From: Werner <wer...@gm...> - 2014-05-24 15:04:34
|
Hi Diez,

On 5/23/2014 13:45, Diez B. Roggisch wrote:
> Hi,
>
> for that problem (if it's that "simple", meaning one line, strict conventions) I wouldn't bother using pyparsing.
>
> Just readline, and string-methods.

Good point, will do that, but I am still intrigued why the code in my last post does not handle the non-matching lines correctly, i.e. I would like them to simply be ignored.

Werner |
From: Werner <wer...@gm...> - 2014-05-24 12:43:32
|
Hi,

I've made a bit of progress; the following works for my test string, but not yet when I parse files.

    tagStart = pp.Literal("# Tags:").setDebug()
    otherStuff = pp.lineStart + pp.restOfLine

    def aTagLineAction(s, l, t):
        return 'test'

    aTagLine = tagStart + pp.restOfLine
    aTagLine.setParseAction(aTagLineAction)

    allLines = pp.OneOrMore(otherStuff | aTagLine)
    result = allLines.parseString(test)
    print(result)

Werner |
From: Werner <wer...@gm...> - 2014-05-23 11:40:12
|
Hi,

I would like to parse many .py files and check whether any of the following is present:

    test = """# Tags: phoenix-port, unittest, documented, py3-port"""

A file might or might not have this comment line somewhere near the top, and it might have one or more of the tags. I would like to report the file name and which tags are present in it; the use of this is a checklist of which modules have been converted/been done. On the above test string I tried this, but it only reports the first one:

    allTags = pp.Literal("# Tags:") +\
              pp.Literal("phoenix-port").setResultsName('phoenix') |\
              pp.FollowedBy("unittest").setResultsName('test') |\
              pp.FollowedBy("py3-port").setResultsName('py3') |\
              pp.FollowedBy("documented").setResultsName('doc')

    result = allTags.parseString(test)
    print(result)

The other problem I have when using 'parseFile' is how to tell it to ignore everything before or after the '# Tags:' line, or even the whole file if that line is not present. Hopefully someone can push me in the right direction.

Werner |
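One possible approach for this (a sketch, not necessarily what Werner ended up with) is to search the file for the tag line and split out the comma-separated tags, rather than OR-ing together one expression per tag:

```python
from pyparsing import Literal, Word, alphanums, delimitedList

tag = Word(alphanums + '-_')
# Suppress the marker itself; delimitedList handles the comma-separated tags
tag_line = Literal('# Tags:').suppress() + delimitedList(tag)

source = """
import wx

# Tags: phoenix-port, unittest, documented
class Foo:
    pass
"""

# searchString scans the whole text, so anything before or after the
# tag line is ignored; no match at all yields an empty result
matches = tag_line.searchString(source)
tags = matches[0].asList() if matches else []
```

This also answers the parseFile concern: searchString simply returns an empty result for files with no '# Tags:' line.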
From: Paul M. <pt...@au...> - 2014-05-06 17:47:43
|
Did you check out the ANTLR grammar parser in the pyparsing examples? (You have to download either the source zip or tarball.) -- Paul -----Original Message----- From: Athanasios Anastasiou [mailto:ath...@gm...] Sent: Tuesday, May 06, 2014 12:07 PM To: pyp...@li... Subject: [Pyparsing] Bringing together ANTLR and PyParsing Hello everybody I have been using PyParsing for some time now and after i came across ANTLR, i thought about bringing the two together. That is, specify a parser in ANTLR's meta-language and then have that translated to a set of PyParsing objects. If you have not seen ANTLR before, it looks like this: input: list+; list: SYM_SBL atom (SYM_COMA atom)* SYM_SBR; //Tokens and rules separated by comas effectively means concatenation. atom:number|string; //The usual OR number:NUM; //number is a rule, NUM is a token (Anything that starts with a capital letter is a token else is a rule) string:STR; SYM_SBL:'['; SYM_RBL:']'; SYM_COMA:','; NUM:[0-9]+; STR:'\'' .*? '\''; A preliminary attempt at this transformation can be found at: https://bitbucket.org/aanastasiou/antlr2pyparsing The relevant discussion over at ANTLR's group is available at: https://groups.google.com/forum/#!topic/antlr-discussion/feRPZhfMcpU ANTLR will be generating parsers in Python soon as it seems. The relevant discussion is available at: https://groups.google.com/forum/#!topic/antlr-discussion/sFH5Y0QO4HA But in any case, this transformation would be very useful to have for PyParsing too. I am just wondering if we could perhaps even augment ANTLR's syntax to accommodate PyParsing features (like Group or Suppress) that ANTLR does not know about. What do you think? All the best AA ---------------------------------------------------------------------------- -- Is your legacy SCM system holding you back? 
|
From: Athanasios A. <ath...@gm...> - 2014-05-06 17:07:07
|
Hello everybody,

I have been using PyParsing for some time now, and after I came across ANTLR I thought about bringing the two together. That is, specify a parser in ANTLR's meta-language and then have that translated to a set of PyParsing objects. If you have not seen ANTLR before, it looks like this:

    input: list+;
    list: SYM_SBL atom (SYM_COMA atom)* SYM_SBR; // Tokens and rules separated by commas effectively means concatenation.
    atom: number|string; // The usual OR
    number: NUM; // number is a rule, NUM is a token (anything that starts with a capital letter is a token, otherwise a rule)
    string: STR;
    SYM_SBL: '[';
    SYM_SBR: ']';
    SYM_COMA: ',';
    NUM: [0-9]+;
    STR: '\'' .*? '\'';

A preliminary attempt at this transformation can be found at:
https://bitbucket.org/aanastasiou/antlr2pyparsing

The relevant discussion over at ANTLR's group is available at:
https://groups.google.com/forum/#!topic/antlr-discussion/feRPZhfMcpU

ANTLR will be generating parsers in Python soon, as it seems. The relevant discussion is available at:
https://groups.google.com/forum/#!topic/antlr-discussion/sFH5Y0QO4HA

But in any case, this transformation would be very useful to have for PyParsing too. I am just wondering if we could perhaps even augment ANTLR's syntax to accommodate PyParsing features (like Group or Suppress) that ANTLR does not know about. What do you think?

All the best
AA |
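To illustrate the intended mapping, the sample ANTLR grammar above translates by hand to roughly the following pyparsing (a sketch, not output of the converter):

```python
from pyparsing import (Word, nums, QuotedString, Suppress, Group,
                       OneOrMore, delimitedList)

# Tokens (NUM, STR, SYM_SBL, SYM_SBR, SYM_COMA in the ANTLR grammar)
NUM = Word(nums)
STR = QuotedString("'")
SBL, SBR = Suppress('['), Suppress(']')

# Rules: atom is number|string; list is '[' atom (',' atom)* ']'
atom = NUM | STR
list_rule = Group(SBL + delimitedList(atom) + SBR)
input_rule = OneOrMore(list_rule)

result = input_rule.parseString("[1,'a',23] ['b']").asList()
```

Note how Group and Suppress appear naturally in the pyparsing version but have no direct ANTLR-side counterpart, which is exactly the augmentation question raised above.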
From: Marc B. <mar...@gm...> - 2014-04-09 19:16:31
|
Hi All - I am building a parser for queries. I have the following structure:

    list(foo.bar, bang)

I'd like to end up with something like this:

    actions: [ {'action': 'foo', 'attrib': 'bar'}, {'action': 'bang'} ]

For starters, I have:

    self.action = Combine(oneOf(self.action_names) + \
                          Optional("." + self.action_attrib))

That gives me something like this:

    actions: ['foo.bar', 'bang']

I'm not sure how to create the named parts that would let me specify the actions and the optional attribs. Any pointers or suggestions would be appreciated.

Thanks,
Marc |
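One way to get the named parts (a sketch; the results names 'action' and 'attrib' simply mirror the dict keys requested above, and 'list' stands in for the real action names):

```python
from pyparsing import Word, alphas, Suppress, Optional, Group, delimitedList

# Group each action so its results names stay local to that action;
# dropping Combine keeps the name and attrib as separate tokens
action = Group(Word(alphas)('action')
               + Optional(Suppress('.') + Word(alphas)('attrib')))
query = (Suppress('list') + Suppress('(')
         + delimitedList(action)('actions') + Suppress(')'))

r = query.parseString('list(foo.bar, bang)')
# asDict() turns each grouped action into the requested dict shape
actions = [a.asDict() for a in r.actions]
```

The key change from the Combine version is Group plus per-field results names, so each action carries its own little namespace.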
From: Paul M. <pt...@au...> - 2014-04-09 18:38:37
|
In place of: ident = p.Combine(name('id') + p.Optional(p.oneOf('- +'))) use: ident = p.Group(name('id') + p.Optional(p.oneOf('- +'))) The Group class is used to create sub-structures within a parsed result. Otherwise, the default behavior for pyparsing is to just create a single list of strings. The reason you see all the id's in the internal directory for the ParseResults is that it is possible to define a results name that is actually an accumulator of all matching strings. When using the full call syntax, add the listAllMatches argument, as in: expr.setResultsName("id", listAllMatches=True) When using the abbreviated call syntax as you have done, use: expr("id*") But based on what you have so far, I think Group is really the way to go. I would also add a name for the optional trailing '+' or '-' sign, as in: ident = p.Group(name('id') + p.Optional(p.oneOf('- +')('sign'))) Then you can write: for pt in r.points: if 'sign' in pt: # or "if pt.sign:" # do special sign stuff with pt.sign -- Paul -----Original Message----- From: Loïc Berthe [mailto:lo...@li...] Sent: Wednesday, April 09, 2014 12:08 PM To: pyp...@li... Subject: [Pyparsing] delimitedList Results Hi, I have some questions on how to use properly parseResults associated with a delimitedList. 
Here is the kind of line I would like to parse : [code] string = 'line 22 width=2;type=1 V101-,V103,V99+,V12 L102' [/code] And here is the parser I've built to extract information from this string: [code] import pyparsing as p integer = p.Word(p.nums) name = p.Word(p.alphas, p.alphanums+'_') ident = p.Combine(name('id') + p.Optional(p.oneOf('- +'))) lineParser = ( 'line' + integer('num') + 'width=' + integer('width')+';' + 'type=' + integer('type') + p.delimitedList(ident)('points') + name('name') ) r = lineParser.parseString(string) [/code] I can access to simple keys : >>> print r.name L102 but with delimitedList, it seems to be a bit more trickier as the associated parseResults can not be used as a list of ParseResults but rather as a list of strings: >>> for pt in r.points: >>> print type(pt), pt <type 'str'> V101- <type 'str'> V103 <type 'str'> V99+ <type 'str'> V12 I can get the list of point directly with r.points.asList(), but I don't know how to access to the list of ids. I was expecting that I could use classical __getitem__ method to list all ids with something like [ pt.id for pt in r.points] Is there any way to do this ? Besides, I don't fully understand the structure of this parseResult : >>> print repr(r.points) (['V101-', 'V103', 'V99+', 'V12'], {'id': [('V101', 0), ('V103', 1), ('V99', 2), ('V12', 3)]}) it seems that the id key is associated with the list that I m looking for, but when I try to retrieve it I only get one element of that list: >>> r.points.id V12 Have you got any clue? Regards -- Loïc ---------------------------------------------------------------------------- -- Put Bad Developers to Shame Dominate Development with Jenkins Continuous Integration Continuously Automate Build, Test & Deployment Start a new project now. Try Jenkins in the cloud. http://p.sf.net/sfu/13600_Cloudbees _______________________________________________ Pyparsing-users mailing list Pyp...@li... 
|
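Putting the Group suggestion from the reply above together, the per-point names then behave as hoped:

```python
import pyparsing as p

integer = p.Word(p.nums)
name = p.Word(p.alphas, p.alphanums + '_')
# Group (instead of Combine) keeps each point as its own sub-result,
# and naming the sign makes the trailing +/- easy to test for
ident = p.Group(name('id') + p.Optional(p.oneOf('- +')('sign')))

points = p.delimitedList(ident)('points')
r = points.parseString('V101-,V103,V99+,V12')

ids = [pt.id for pt in r.points]
# a missing results name reads back as '' on a ParseResults
signs = [pt.sign for pt in r.points]
```

Each element of r.points is now a ParseResults with its own 'id' and optional 'sign', rather than a plain string.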
From: Loïc B. <lo...@li...> - 2014-04-09 17:27:29
|
Hi, I have some questions on how to use properly parseResults associated with a delimitedList. Here is the kind of line I would like to parse : [code] string = 'line 22 width=2;type=1 V101-,V103,V99+,V12 L102' [/code] And here is the parser I've built to extract information from this string: [code] import pyparsing as p integer = p.Word(p.nums) name = p.Word(p.alphas, p.alphanums+'_') ident = p.Combine(name('id') + p.Optional(p.oneOf('- +'))) lineParser = ( 'line' + integer('num') + 'width=' + integer('width')+';' + 'type=' + integer('type') + p.delimitedList(ident)('points') + name('name') ) r = lineParser.parseString(string) [/code] I can access to simple keys : >>> print r.name L102 but with delimitedList, it seems to be a bit more trickier as the associated parseResults can not be used as a list of ParseResults but rather as a list of strings: >>> for pt in r.points: >>> print type(pt), pt <type 'str'> V101- <type 'str'> V103 <type 'str'> V99+ <type 'str'> V12 I can get the list of point directly with r.points.asList(), but I don't know how to access to the list of ids. I was expecting that I could use classical __getitem__ method to list all ids with something like [ pt.id for pt in r.points] Is there any way to do this ? Besides, I don't fully understand the structure of this parseResult : >>> print repr(r.points) (['V101-', 'V103', 'V99+', 'V12'], {'id': [('V101', 0), ('V103', 1), ('V99', 2), ('V12', 3)]}) it seems that the id key is associated with the list that I m looking for, but when I try to retrieve it I only get one element of that list: >>> r.points.id V12 Have you got any clue? Regards -- Loïc |
From: Will M. <wil...@gm...> - 2014-04-04 17:15:28
|
Hi, I have an expression parser that is working nicely, and I'd like to add a 'call' syntax. Basically something like this: <operand>(<operand>) Which should be simple to implement, I have an index operator <operand>[<operand>] which works, but when I use round brackets the parser gets stuck in an infinite loop. I think this may be because operatorPrecendance already uses brackets for parenthesis. Any idea how I would implement this? Here's my grammar definition: integer = Word(nums) real = Combine(Word(nums) + "." + Word(nums)) constant = oneOf('True False None yes no') + WordEnd() variable = Regex(r'([a-zA-Z0-9\._]+)') explicit_variable = '$' + Regex(r'([a-zA-Z0-9\._]+)') string = QuotedString('"', escChar="\\") | QuotedString('\'', escChar="\\") regexp = QuotedString('/', escChar=None) timespan = Combine(Word(nums) + oneOf('ms s m h d')) variable_operand = variable explicit_variable_operand = explicit_variable integer_operand = integer real_operand = real number_operand = real | integer string_operand = string operand = variable | real | integer | string assignop = Literal('=') groupop = Literal(',') signop = oneOf('+ -') multop = oneOf('* / // %') filterop = oneOf('|') plusop = oneOf('+ -') notop = Literal('not') rangeop = Literal('..') exclusiverangeop = Literal('...') ternaryop = ('?', ':') variable_operand.setParseAction(EvalVariable) explicit_variable_operand.setParseAction(EvalExplicitVariable) integer_operand.setParseAction(EvalInteger) real_operand.setParseAction(EvalReal) string_operand.setParseAction(EvalString) constant.setParseAction(EvalConstant) regexp.setParseAction(EvalRegExp) timespan.setParseAction(EvalTimespan) expr = Forward() modifier = Combine(Word(alphas + nums) + ':') callop = Group(Suppress('(') + expr + Suppress(')')) operand = (timespan | real_operand | integer_operand | string_operand | regexp | constant | explicit_variable_operand | variable_operand ) comparisonop = (oneOf("< <= > >= != == ~= ^= $=") | (Literal('is not') + WordEnd()) 
| (oneOf("is in instr lt lte gt gte matches fnmatches") + WordEnd()) | (Literal('not in') + WordEnd()) | (Literal('not instr') + WordEnd())) logicop = oneOf("and or") + WordEnd() logicopOR = Literal('or') + WordEnd() logicopAND = Literal('and') + WordEnd() formatop = Literal('::') expr << operatorPrecedence(operand, [ (signop, 1, opAssoc.RIGHT, EvalSignOp), (exclusiverangeop, 2, opAssoc.LEFT, EvalExclusiveRangeOp), (rangeop, 2, opAssoc.LEFT, EvalRangeOp), (callop, 2, opAssoc.LEFT, EvalCallOp), (index, 1, opAssoc.LEFT, EvalIndexOp), (modifier, 1, opAssoc.RIGHT, EvalModifierOp), (formatop, 2, opAssoc.LEFT, EvalFormatOp), (multop, 2, opAssoc.LEFT, EvalMultOp), (plusop, 2, opAssoc.LEFT, EvalAddOp), (assignop, 2, opAssoc.LEFT, EvalAssignOp), (groupop, 2, opAssoc.LEFT, EvalGroupOp), (filterop, 2, opAssoc.LEFT, EvalFilterOp), (comparisonop, 2, opAssoc.LEFT, EvalComparisonOp), (notop, 1, opAssoc.RIGHT, EvalNotOp), #(logicop, 2, opAssoc.LEFT, EvalLogicOp), (logicopOR, 2, opAssoc.LEFT, EvalLogicOpOR), (logicopAND, 2, opAssoc.LEFT, EvalLogicOpAND), (ternaryop, 3, opAssoc.LEFT, EvalTernaryOp), ]) Thanks, Will -- Will McGugan http://www.willmcgugan.com |
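One way to avoid the infinite loop (a minimal standalone sketch, not Will's full grammar) is to treat the whole parenthesized argument list as a 1-ary postfix operator, so the arguments are consumed as part of the operator expression itself; note the grammar above registers callop with arity 2, which does not match its postfix shape:

```python
from pyparsing import (Word, alphas, nums, Suppress, Group, Optional,
                       delimitedList, Forward, infixNotation, opAssoc)

expr = Forward()
operand = Word(nums) | Word(alphas)

# The argument list is the "operator": a 1-ary LEFT (postfix) operator,
# so infixNotation sees operand-followed-by-callop, like the index case
callop = Group(Suppress('(') + Optional(delimitedList(expr)) + Suppress(')'))

expr <<= infixNotation(operand, [
    (callop, 1, opAssoc.LEFT),
])
```

Because callop consumes its own opening '(' before recursing into expr, it cannot collide with the parenthesis handling that infixNotation/operatorPrecedence adds internally.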
From: James H. <jam...@gm...> - 2014-02-14 13:43:31
|
Hello,

I am trying to parse a block of text lines where the lines of interest always begin with a keyword from a set of known keywords. All other lines can be ignored, even if they contain a keyword somewhere other than at the start of the line. The code that follows almost works, but stops when it meets an unwanted line containing one of the known keywords ('kw1' and 'kw2').

    from pyparsing import *

    def main():
        # A test string where I want to match all lines starting with
        # a keyword 'kw1' or 'kw2'.
        # Other lines should not be matched.
        test_string_1 = """
    An unwanted line can contain anything
    kw2 par1
    kw1 par1 2
    another unwanted line
    kw1 opt 1
    another unwanted line that contains a kw1
    kw2 h1
    yet another unwanted line
    """

        kw1 = Literal("kw1")
        kw2 = Literal("kw2")
        keywords = (kw1 | kw2)

        kw1_record = (kw1 + Word(alphanums) + Word(nums) +
                      restOfLine.suppress() + LineEnd().suppress())
        kw2_record = (kw2 + Word(alphanums) +
                      restOfLine.suppress() + LineEnd().suppress())
        valid_records = (kw1_record | kw2_record)

        record = Group(SkipTo(keywords, include=False, ignore=None,
                              failOn=None).suppress() + valid_records)
        all_records = ZeroOrMore(record)

        res = all_records.parseString(test_string_1)
        for entry in res:
            print entry

    if __name__ == '__main__':
        main()

The output from this code is:

    ['kw2', 'par1']
    ['kw1', 'par1', '2']
    ['kw1', 'opt', '1']

What is missing from the output is:

    ['kw2', 'h1']

I am new to pyparsing, so I am probably missing something obvious. Is there a way to correct my code so that it does what I want? Or is there a better way to achieve my aims? I would be grateful for any suggestions.

Thanks in advance,
James |
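One robust workaround (a sketch using the record expressions from the question) is to sidestep SkipTo entirely and try the record parsers line by line, so a keyword in the middle of an unwanted line can never anchor a match:

```python
from pyparsing import Literal, Word, alphanums, nums, restOfLine, ParseException

kw1 = Literal('kw1')
kw2 = Literal('kw2')
kw1_record = kw1 + Word(alphanums) + Word(nums) + restOfLine.suppress()
kw2_record = kw2 + Word(alphanums) + restOfLine.suppress()
valid_records = kw1_record | kw2_record

text = """\
An unwanted line can contain anything
kw2 par1
kw1 par1 2
another unwanted line
kw1 opt 1
another unwanted line that contains a kw1
kw2 h1
yet another unwanted line
"""

records = []
for line in text.splitlines():
    try:
        # parseString always starts at the beginning of the line, so a
        # keyword appearing mid-line can never begin a record
        records.append(valid_records.parseString(line).asList())
    except ParseException:
        pass  # not a record line -- ignore it
```

The SkipTo version fails because SkipTo happily stops at a keyword in the middle of an unwanted line; per-line parsing removes that ambiguity.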
From: Paul M. <pt...@au...> - 2014-01-30 23:37:44
|
Check out the matchPreviousExpr and matchPreviousLiteral methods.

-- Paul

-----Original Message-----
From: Loïc Berthe [mailto:lo...@li...]
Sent: Thursday, January 30, 2014 3:56 PM
To: pyp...@li...
Subject: [Pyparsing] Pyparsing equivalent to regex \1, \2 ... patterns

Hi,

I would like to define a parser to detect strings containing repeated terms:
- it should match 'A123A', 'B123B'
- but not 'A123B'

With the Python re module, I would define a parser like this:

    parser = re.compile(r'(\w+)(\d+)\1')

Is there a pyparsing equivalent to re's \1 and \2 patterns?

-- Loïc |
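A minimal sketch of matchPreviousLiteral as the pyparsing analogue of the regex \1 backreference:

```python
from pyparsing import Word, alphas, nums, matchPreviousLiteral, ParseException

first = Word(alphas)
# matchPreviousLiteral re-matches the exact text that 'first' matched
# earlier in the same parse -- the equivalent of \1 in the regex above
repeated = first + Word(nums) + matchPreviousLiteral(first)

def matches(s):
    try:
        repeated.parseString(s)
        return True
    except ParseException:
        return False
```

matchPreviousExpr works similarly but re-applies the expression and then compares the matched tokens, which matters when the repeated text could be tokenized differently.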
From: Loïc B. <lo...@li...> - 2014-01-30 22:15:46
|
Hi,

I would like to define a parser to detect strings containing repeated terms:
- it should match 'A123A', 'B123B'
- but not 'A123B'

With the Python re module, I would define a parser like this:

    parser = re.compile(r'(\w+)(\d+)\1')

Is there a pyparsing equivalent to re's \1 and \2 patterns?

-- Loïc |
From: Marcin C. <mc...@gm...> - 2014-01-08 15:18:10
|
As I understand it, line http://sourceforge.net/p/pyparsing/code/255/tree/trunk/src/pyparsing.py#l1843 has a bug introduced in version 2.0.1.

Previously, the line was

    self.escCharReplacePattern = re.escape(self.escChar)+"(.)"

Now, it is

    self.escCharReplacePattern = re.escape(self.escChar)+("([%s])" % charset)

Note that the character class is [%s]. I suppose it should be [^%s]. Then, this test passes:

    import pyparsing
    quoted_string = pyparsing.QuotedString('"', unquoteResults=True, escChar='\\')
    assert quoted_string.parseString(r'"\\foo"').asList()[0] == r'\foo'

-- Marcin |
From: Paul M. <pt...@au...> - 2013-11-06 12:55:12
|
You might look at one of the variations on parsing that pyparsing expressions can do. The typical parser case is one in which the parser handles all the input text. It requires the most work because it has to handle everything in the input. You can also write a pyparsing parser that only matches part of the input file, and then scan or search for just those parts. I think this may be suitable for your case.

Look over the following code and see how searchString and scanString return the matching lines, and how with scanString (which returns a Python generator - if you're not familiar with these, look it up), you can pull out the text between parses, since scanString returns not only the matching text, but also the start and end locations.

-- Paul

from pyparsing import *

line_of_words = OneOrMore(Word(alphas))

inputText = """\
sldjf lskjflsja lasdfljsdf owiuerowue ndf

122
1203 080182 0123 1023021 013802
02108

aslkjweoiur olsuaperu lsfiwuer kfdsldf

293749237
029 927397 2979 29793732974
9237
o 82739

sjfdhhwl oewr lwkejrlj wlehrnmb

34982 9392
"""

# find all groups of words using searchString
for line in line_of_words.searchString(inputText):
    print line
# prints:
# ['sldjf', 'lskjflsja', 'lasdfljsdf', 'owiuerowue', 'ndf']
# ['aslkjweoiur', 'olsuaperu', 'lsfiwuer', 'kfdsldf']
# ['o']
# ['sjfdhhwl', 'oewr', 'lwkejrlj', 'wlehrnmb']

# find all groups and their start/end locations using scanString
for line,start,end in line_of_words.scanString(inputText):
    print line
# prints:
# ['sldjf', 'lskjflsja', 'lasdfljsdf', 'owiuerowue', 'ndf']
# ['aslkjweoiur', 'olsuaperu', 'lsfiwuer', 'kfdsldf']
# ['o']
# ['sjfdhhwl', 'oewr', 'lwkejrlj', 'wlehrnmb']

# use scanString to associate intervening text with matched line
parsedData = []
scanner = line_of_words.scanString(inputText)
lastLine,lastStart,lastEnd = next(scanner)
for line, start, end in scanner:
    parsedData.append((lastLine, inputText[lastEnd:start].splitlines()))
    lastLine,lastEnd = line,end
# add final group after last parsed line
parsedData.append((lastLine, inputText[lastEnd:].splitlines()))

for line,data in parsedData:
    print '-', ' '.join(line)
    for d in data:
        print ' ', d
# prints
#- sldjf lskjflsja lasdfljsdf owiuerowue ndf
#
#  122
#  1203 080182 0123 1023021 013802
#  02108
#
#- aslkjweoiur olsuaperu lsfiwuer kfdsldf
#
#  293749237
#  029 927397 2979 29793732974
#  9237
#- o
#   82739
#
#- sjfdhhwl oewr lwkejrlj wlehrnmb
#
#  34982 9392
#

-----Original Message-----
From: Hanchel Cheng [mailto:han...@br...]
Sent: Tuesday, November 05, 2013 7:15 PM
To: pyp...@li...
Subject: [Pyparsing] Using grammar as a condition for loop

Hello!

I have a text file in a structure like this:

######start#######
[line1 matching grammar]
#[text]
#[text]
[text]

[line2 matching grammar]
#[text]
[etc.]
#######end#######

There can be N amounts of lines with or without the # under each indent with a line that matches the grammar.

I'm checking for the grammar, then I would like to check all the lines until the next line that follows the grammar.

Something like...

for line in text_file:
    if not(line matches grammar):
        do something

Can pyparsing do this? If not, any suggestions? I can give more info if necessary.

I really appreciate the help!

Kind regards,
Hanchel

----------------------------------------------------------------------------
November Webinars for C, C++, Fortran Developers
Accelerate application performance with scalable programming models. Explore techniques for threading, error checking, porting, and tuning. Get the most from the latest Intel processors and coprocessors. See abstracts and register
http://pubads.g.doubleclick.net/gampad/clk?id=60136231&iu=/4140/ostg.clktrk |
From: Mario R. O. <nim...@gm...> - 2013-11-06 03:32:34
|
> Can pyparsing do this?

Saying yes is pretty much an understatement, but that's what you want to hear I guess, so yes it can.

Dtb/Gby
=======
Mario R. Osorio
"... Begin with the end in mind ..."
http://www.google.com/profiles/nimbiotics

On Tue, Nov 5, 2013 at 8:14 PM, Hanchel Cheng <han...@br...> wrote:
> Hello!
>
> I have a text file in a structure like this:
> ######start#######
> [line1 matching grammar]
> #[text]
> #[text]
> [text]
>
> [line2 matching grammar]
> #[text]
> [etc.]
> #######end#######
> There can be N amounts of lines with or without the # under each indent
> with a line that matches the grammar.
>
> I'm checking for the grammar, then I would like to check all the lines
> until the next line that follows the grammar.
>
> Something like...
> for line in text_file:
>     if not(line matches grammar):
>         do something
>
> Can pyparsing do this? If not, any suggestions? I can give more info if
> necessary.
>
> I really appreciate the help!
>
> Kind regards,
> Hanchel |
From: Hanchel C. <han...@br...> - 2013-11-06 01:14:54
|
Hello!

I have a text file in a structure like this:

######start#######
[line1 matching grammar]
#[text]
#[text]
[text]

[line2 matching grammar]
#[text]
[etc.]
#######end#######

There can be N amounts of lines with or without the # under each indent with a line that matches the grammar.

I'm checking for the grammar, then I would like to check all the lines until the next line that follows the grammar.

Something like...

for line in text_file:
    if not(line matches grammar):
        do something

Can pyparsing do this? If not, any suggestions? I can give more info if necessary.

I really appreciate the help!

Kind regards,
Hanchel |
From: <pt...@au...> - 2013-10-29 18:50:13
|
This might be a good time to invoke some level of moderation on the discussion. There is already confusion over who said what, who assumed what, and so on.

The original post definitely asked a question in an area in which pyparsing is a sledgehammer to swat a fly. In the case of pyparsing, some people do start with flies as simple test cases, or choose to publish those in online support forums as a distilled-down example of some larger problem. (If the latter, I *really* appreciate it when someone can post a short specific issue instead of a 60-line parser with the issue buried somewhere in it.) As it happens, this particular example points up a bit of a pitfall in pyparsing, that of looking for TABs in your parser - the call to parseWithTabs is easily overlooked.

I don't think it is at all out of line to suggest non-pyparsing solutions to particular posts here, and I've found that investigating alternative solutions is always of value: sometimes I find a better way, and sometimes I get better insights into the originally selected option. But this *is* a pyparsing mailing list, not a general Python support list, so if there is a bias to pyparsing-based solutions, I think that could be forgiven.

-- Paul

(I *would* prefer though that we avoid discourteous language - nobody is being paid to respond to these emails, I think most are posted in good faith and with good intentions, and usually with some amount of personal investment. I think there is room to disagree and still maintain civility and respect.) |
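[Editor's note] To make the parseWithTabs pitfall concrete, here is a minimal sketch (my own illustration, not code from the thread) of the tab-counting question that started it. By default, parseString expands tabs to spaces before matching, so an expression looking for literal '\t' characters never sees them; calling parseWithTabs() first keeps the tabs intact, and White('\t', exact=N) then matches an exact run of them:

```python
from pyparsing import Word, alphanums, White, StringEnd, ParseException

field = Word(alphanums)
# White('\t', exact=N) matches exactly N tab characters
row = (field + White('\t', exact=1).suppress()
       + field + White('\t', exact=2).suppress()
       + field + StringEnd())
row.parseWithTabs()  # without this, tabs are expanded to spaces before parsing

print(row.parseString('name\tdate\t\tlocation').asList())
# ['name', 'date', 'location']

# too few tabs is rejected:
try:
    row.parseString('name\tdate\tlocation')
except ParseException:
    print('rejected: only one tab before location')
```

One caveat: extra tabs beyond the expected run would still be skipped as ordinary leading whitespace by the following Word; chaining .leaveWhitespace() onto the trailing fields closes that loophole if you need strict counting in both directions.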
From: Mark L. <bre...@ya...> - 2013-10-29 16:24:07
|
On 29/10/2013 15:46, Mario R. Osorio wrote:
> Mark,
>
> Only two mistakes were made here:
>
> 1. In lue of more detailed background, you *assumed* Roggisch is trying
> to use pyparsing just to look for "\t"'s, and
> 2. The rest of us stupids here (me included of course) *assumed*
> Roggisch just asked a very specific question pertaining a more complex
> issue from which he might have wanted to keep us away either because he
> doesn't need help on anything else or because this is the first such issue
> he finds where he needs help.
>
> See the two mistakes? neither do I ... The one and only mistake here is
> that everyone assumed what each wanted to assume
>
> Now, in my particular case, I can give you a reason for that (which I do
> not pretend to use as an excuse): "I prefer to think everyone has a certain
> level of knowledge, though not necessarily of experience, otherwise you
> wouldn't be here but probably asking your teacher at school". That is of
> course, another assumption but, hey! I'm just one more stupid human being
> trusting other human beings!!.
>
> The only *sheer unadulterated rubbish* here have been your comments. Please
> be more of a boy!

What are you on about? The OP was Hanchel Cheng, not Diez B. Roggisch.

The OP said "My question is pretty basic: I have a string 'name\tdate\t\tlocation'. What would I do to ensure that between the 'name' and 'date' there is exactly one tab and between 'date' and 'location' there is exactly two tabs?" What is there to assume about that?

Diez replied "I would forego pyparsing and use single string-functions" and later "yes, you can do it with pyparsing, but IMHO it's overkill". D* then wrote his piece to which I replied, obviously very much agreeing with Diez. Have I missed something obvious?

--
Python is the second best programming language in the world.
But the best has yet to be invented. Christian Tismer

Mark Lawrence |
From: Mario R. O. <nim...@gm...> - 2013-10-29 15:46:42
|
Mark,

Only two mistakes were made here:

1. In lieu of more detailed background, you *assumed* Roggisch is trying to use pyparsing just to look for "\t"'s, and
2. The rest of us stupids here (me included of course) *assumed* Roggisch just asked a very specific question pertaining to a more complex issue from which he might have wanted to keep us away, either because he doesn't need help on anything else or because this is the first such issue he finds where he needs help.

See the two mistakes? Neither do I ... The one and only mistake here is that everyone assumed what each wanted to assume.

Now, in my particular case, I can give you a reason for that (which I do not pretend to use as an excuse): "I prefer to think everyone has a certain level of knowledge, though not necessarily of experience, otherwise you wouldn't be here but probably asking your teacher at school". That is of course, another assumption but, hey! I'm just one more stupid human being trusting other human beings!!

The only *sheer unadulterated rubbish* here have been your comments. Please be more of a boy!

Dtb/Gby
=======
Mario R. Osorio
"... Begin with the end in mind ..."
http://www.google.com/profiles/nimbiotics

On Tue, Oct 29, 2013 at 11:18 AM, Mark Lawrence <bre...@ya...> wrote:

> IMHO sheer unadulterated rubbish. What you're saying is that when you
> want to crack a nut you skip the sledge hammer stage and use a steam
> roller. Here string methods are perfectly adequate for the task so use them
>
> Kindest regards.
>
> Mark Lawrence.
>
> On Tuesday, 29 October 2013, 15:08, d* <d*@y23.org> wrote:
>
> >Hi,
> >I see nothing wrong with using pyparsing. There are actually many ways
> >to solve the problem here. If you expect to be using pyparsing more in
> >the future and expect to have multiple users maintaining the code I'd
> >keep it simple and just stick to one paradigm and stay in the pyparsing
> >realm.
> >
> >I'm no expert at pyparsing yet, and find myself still continuing to
> >learn it as I go. I've become a fan of pair programming as well where
> >two of us are learning pyparsing at the same time.
> >
> >There is nothing wrong with regex. I use it where its needed, but in
> >general, I have found I don't mix it with the pyparsing code modules.
> >
> >And by doing so I've managed to get several different 'grammars' now
> >robustly working.
> >
> >I don't see the overkill argument. When one walks into a factory and
> >looks at the machinery for example. Is overkill that a machine the size
> >of a truck cuts the same pattern 2000 times a day to an endless supply
> >of steel? I doubt it. The work probably was done in the past by less
> >refined machines and more than likely you are looking at the latest and
> >newest production model for that type of work. I see it as, "Why do
> >extra work, when you can have code that has been tested and end up
> >doing less work?"
> >
> >Good luck on which ever method(s) you choose to implement. And don't
> >forget to have some fun while you are doing it :)
> >David
> >
> >> -------Original Message-------
> >> From: Diez B. Roggisch <de...@we...>
> >> To: Hanchel Cheng <han...@br...>
> >> Cc: pyp...@li... <pyp...@li...>
> >> Subject: Re: [Pyparsing] Check for tabs
> >> Sent: Oct 29 '13 03:09
> >>
> >> On Oct 29, 2013, at 12:41 AM, Hanchel Cheng <han...@br...> wrote:
> >>
> >> > Regardless of "all [I] have," I'd like to know if pyparser can check
> >> > for a specific number of tabs between alphanumeric strings. If there
> >> > are not two tabs between the 2nd and 3rd word, I'd like to error
> >> > out. Is pyparsing truly overkill for this task?
> >>
> >> I think by now you have your answer: yes, you can do it with
> >> pyparsing, but IMHO it's overkill, if that's all you ask it to do.
> >> Probably even using a regex would be more opaque than necessary.
> >>
> >> I've used pyparsing happily quite a few times to e.g. parse CSS or
> >> small DSLs. But for this kind of thing, I'd use string-methods.
> >>
> >> Diez
> >>
> >> ------------------------------------------------------------------------------
> >> Android is increasing in popularity, but the open development platform that
> >> developers love is also attractive to malware creators. Download this white
> >> paper to learn more about secure code signing practices that can help keep
> >> Android apps secure.
> >> http://pubads.g.doubleclick.net/gampad/clk?id=65839951&iu=/4140/ostg.clktrk
> >> _______________________________________________
> >> Pyparsing-users mailing list
> >> Pyp...@li...
> >> https://lists.sourceforge.net/lists/listinfo/pyparsing-users |