pygccxml-development Mailing List for C++ Python language bindings (Page 71)
Brought to you by: mbaas, roman_yakovenko
From: Allen B. <al...@vr...> - 2006-03-06 20:12:09
|
Matthias Baas wrote:

> Allen Bierbaum wrote:
>
>> Toward this goal (and following up on Matthias's comments from last
>> week), I started looking at what it would take to generate
>> documentation using epydoc and restructuredtext. I added a script in
>> "experimental/docs/generate_all_docs.py" that will call epydoc with
>> what I think are the correct parameters to generate html
>> documentation. I also started converting pypp_api over to use
>> restructured text for documentation. The only problem is that it
>> looks like the tags/fields I am using are not being recognized
>> correctly (ex: params, args, etc). According to the docs I am doing
>> things correctly but it is just not working. If either of you have
>> experience with this tool please take a look and let me know if I am
>> doing something wrong.
>
> We're using epydoc for the PyODE manual, but we're actually using it
> with epytext markup, not rest. But anyway, I'm no expert with epydoc,
> I'm more used to doxygen.
>
> The "experimental" directory is no package, so I had to change the
> "experimental" entry in generate_all_docs.py to
> "experimental/pypp_api.py".

My bad. I forgot to check in the __init__.py file I was using. :(

> epydoc actually generates errors on pypp_api.py. Your doc strings
> usually begin with blanks, so epydoc thinks the text is indented and
> generates errors. To test epydoc I changed the doc string for
> ModuleBuilder.__init__ as follows:
>
>     """Initialize module.
>
>     :Parameters:
>       - `workingDir`: Directory to start processing. (default:
>         current working dir)
>       - `includePaths`: List of paths to tell gccxml to search
>         for header files.
>       - `gccxmlPath`: path to gccxml. If not set then attempt
>         to find it in system path.
>       - `defines`: set of symbols to add as defined when compiling.
>       - `undefines`: set of symbols to add as undefined when compiling.
>       - `cacheFile`: name of file to use for caching parse data
>         for this module.
>       - `verbose`: if true output status and command information
>         as building the module.
>     """
>
> That is, I moved the first line to the beginning of the string, I
> changed the apostrophes of the list entries into those "backticks" (or
> what are they called?) and I split the docs for define/undefine into
> two entries.
>
> With those changes, epydoc seems to recognize the stuff but the result
> looks somewhat weird. On the generated page each line describing an
> argument begins on a new line (i.e. the argument name and its
> documentation are on two *separate* lines). This looks particularly
> weird when I also add the type information. This rather looks like a
> bug to me.
>
> I also tested with epytext markup and then the results look normal. So
> maybe we should consider switching to that markup...?

I am fine with using epytext (or even javadoc). I was trying to use rest since Roman already used it, but I agree that it looks like there are still some bugs processing it. I am more interested in getting good documentation in place than I am in working around bugs and picky parsers.

>> Does this seem like a good direction for documentation? I know we
>> can document classes and methods using this, do we want to document
>> the usage of the modules like this as well? (it is supported but do
>> we want that type of documentation in the code?)
>
> Well, I guess a short usage information in the module or class doc
> string doesn't do any harm. But I guess as there is a dedicated "docs"
> directory already existing in pyplusplus this would probably be a
> better place for adding any tutorial/introduction style thing. But I
> leave that up to Roman how he wants the documentation to be organized.

Agreed. I will go in whatever direction you both think is best.

-Allen
|
From: Matthias B. <ba...@ir...> - 2006-03-06 19:00:44
|
Allen Bierbaum wrote:

> Toward this goal (and following up on Matthias's comments from last
> week), I started looking at what it would take to generate documentation
> using epydoc and restructuredtext. I added a script in
> "experimental/docs/generate_all_docs.py" that will call epydoc with what
> I think are the correct parameters to generate html documentation. I
> also started converting pypp_api over to use restructured text for
> documentation. The only problem is that it looks like the tags/fields I
> am using are not being recognized correctly (ex: params, args, etc).
> According to the docs I am doing things correctly but it is just not
> working. If either of you have experience with this tool please take a
> look and let me know if I am doing something wrong.

We're using epydoc for the PyODE manual, but we're actually using it with epytext markup, not rest. But anyway, I'm no expert with epydoc, I'm more used to doxygen.

The "experimental" directory is no package, so I had to change the "experimental" entry in generate_all_docs.py to "experimental/pypp_api.py".

epydoc actually generates errors on pypp_api.py. Your doc strings usually begin with blanks, so epydoc thinks the text is indented and generates errors. To test epydoc I changed the doc string for ModuleBuilder.__init__ as follows:

    """Initialize module.

    :Parameters:
      - `workingDir`: Directory to start processing. (default:
        current working dir)
      - `includePaths`: List of paths to tell gccxml to search
        for header files.
      - `gccxmlPath`: path to gccxml. If not set then attempt
        to find it in system path.
      - `defines`: set of symbols to add as defined when compiling.
      - `undefines`: set of symbols to add as undefined when compiling.
      - `cacheFile`: name of file to use for caching parse data
        for this module.
      - `verbose`: if true output status and command information
        as building the module.
    """

That is, I moved the first line to the beginning of the string, I changed the apostrophes of the list entries into those "backticks" (or what are they called?) and I split the docs for define/undefine into two entries.

With those changes, epydoc seems to recognize the stuff but the result looks somewhat weird. On the generated page each line describing an argument begins on a new line (i.e. the argument name and its documentation are on two *separate* lines). This looks particularly weird when I also add the type information. This rather looks like a bug to me.

I also tested with epytext markup and then the results look normal. So maybe we should consider switching to that markup...?

> Does this seem like a good direction for documentation? I know we can
> document classes and methods using this, do we want to document the
> usage of the modules like this as well? (it is supported but do we want
> that type of documentation in the code?)

Well, I guess a short usage information in the module or class doc string doesn't do any harm. But I guess as there is a dedicated "docs" directory already existing in pyplusplus this would probably be a better place for adding any tutorial/introduction style thing. But I leave that up to Roman how he wants the documentation to be organized.

- Matthias -
|
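[Editor's note: since the thread leans toward switching to epytext, the following is a minimal, hypothetical sketch of the same ModuleBuilder.__init__ docstring rewritten in epydoc's epytext markup with @param fields. The class body is a stub for illustration; only the markup convention is the point.]

```python
class ModuleBuilder:
    """Hypothetical stub showing epytext-style documentation."""

    def __init__(self, workingDir=None, includePaths=None, gccxmlPath=None,
                 defines=None, undefines=None, cacheFile=None, verbose=False):
        """Initialize module.

        @param workingDir: Directory to start processing
            (default: current working dir).
        @param includePaths: List of paths to tell gccxml to search
            for header files.
        @param gccxmlPath: Path to gccxml. If not set, attempt to find
            it in the system path.
        @param defines: Set of symbols to add as defined when compiling.
        @param undefines: Set of symbols to add as undefined when compiling.
        @param cacheFile: Name of file to use for caching parse data
            for this module.
        @param verbose: If true, output status and command information
            while building the module.
        """
        # Stub body -- the real initialization lives in pypp_api.
        self.workingDir = workingDir
```

With epytext, each @param field is parsed as a single unit, so the argument name and its description render together rather than on separate lines as reported for the reST :Parameters: field above.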
From: Allen B. <al...@vr...> - 2006-03-06 16:21:53
|
Matthias Baas wrote:

> Hi,
>
> this mail is a general reply to the previous mails posted. I think
> there's still some confusion among me and Allen about the last changes
> to pyplusplus (module_builder_t, decl_wrappers, etc). So I'd just like
> to mention a few points and check if we've already agreed on them or
> if there are still different views (which may have led to the confusion):
>
> - The final version of pyplusplus will have two "separate" APIs, a low
> level/internal API and a high level API. Is this already general
> consensus?

I agree with this.

> - The internal API is what pyplusplus already had when I first tried
> it out a few weeks ago. A user could just use this API and create
> bindings. In this case, his driver script may be somewhat verbose but
> he has full control over pyplusplus.

This is my understanding.

> - The high level API is a mere convenience for the user to express
> things more concisely. This API refers to the internal API to
> implement its stuff. Currently, Allen's pypp_api module constitutes
> the high level API.
> If I've understood Roman correctly, I believe his new module_builder_t
> and decl_wrapper classes should already be seen as high level API,
> replacing pypp_api, right?

I am confused about this as well. It seems that these classes mix two ideas:

1. They provide some of the interface that my high-level API needs to modify the way creators are created (the injected flags patch I submitted).
2. They also seem to be attempting to draw in some (but not all) of the capabilities that we were talking about providing to the users in the high-level API.

It seems reasonable that there will be some duplication here because the high-level API will use some helpers in the low-level API, but I don't think it is a good idea to keep moving things across to the low-level API. I agree with Matthias's comments below about separation being a good thing.

> What I liked before those classes was the strict separation between
> low level API and high level API. It was clear whenever I was using
> something "low level" (a class with _t suffix) and something "high
> level" (from pypp_api or initially also from my own version). This
> distinction has become somewhat blurred and the high level and low
> level stuff have become somewhat intermingled. As there is no
> documentation yet, the only hint would be the sources themselves but
> they don't provide any information whether something is considered to
> be part of the low level or the high level API.
>
> So what I'm missing is documentation about the low level API that
> Allen and I could rely on to experiment with various high level APIs.
> Maybe some ideas will lead to modifications to the low level part of
> pyplusplus but when I was creating my own API version I was quite
> pleased to see that the previous internal API has already allowed
> expressing almost everything I needed. So it was not that bad at all.

Agreed. I like the idea of you and I refining and expanding the high-level API while we pepper Roman with ideas, requests, and refinements for the low-level API. (And as my status update message describes, I think one of the first things I would like to see is documentation and comments. :)

In summary, I agree with you, Matthias.

-Allen

> - Matthias -
>
> -------------------------------------------------------
> This SF.Net email is sponsored by xPML, a groundbreaking scripting
> language that extends applications into web and mobile media. Attend
> the live webcast and join the prime developer group breaking into this
> new coding territory!
> http://sel.as-us.falkag.net/sel?cmd=lnk&kid=110944&bid=241720&dat=121642
> _______________________________________________
> pygccxml-development mailing list
> pyg...@li...
> https://lists.sourceforge.net/lists/listinfo/pygccxml-development
|
From: Matthias B. <ba...@ir...> - 2006-03-06 15:49:26
|
Hi,

this mail is a general reply to the previous mails posted. I think there's still some confusion among me and Allen about the last changes to pyplusplus (module_builder_t, decl_wrappers, etc). So I'd just like to mention a few points and check if we've already agreed on them or if there are still different views (which may have led to the confusion):

- The final version of pyplusplus will have two "separate" APIs, a low level/internal API and a high level API. Is this already general consensus?

- The internal API is what pyplusplus already had when I first tried it out a few weeks ago. A user could just use this API and create bindings. In this case, his driver script may be somewhat verbose but he has full control over pyplusplus.

- The high level API is a mere convenience for the user to express things more concisely. This API refers to the internal API to implement its stuff. Currently, Allen's pypp_api module constitutes the high level API. If I've understood Roman correctly, I believe his new module_builder_t and decl_wrapper classes should already be seen as high level API, replacing pypp_api, right?

What I liked before those classes was the strict separation between low level API and high level API. It was clear whenever I was using something "low level" (a class with _t suffix) and something "high level" (from pypp_api or initially also from my own version). This distinction has become somewhat blurred and the high level and low level stuff have become somewhat intermingled. As there is no documentation yet, the only hint would be the sources themselves but they don't provide any information whether something is considered to be part of the low level or the high level API.

So what I'm missing is documentation about the low level API that Allen and I could rely on to experiment with various high level APIs. Maybe some ideas will lead to modifications to the low level part of pyplusplus but when I was creating my own API version I was quite pleased to see that the previous internal API has already allowed expressing almost everything I needed. So it was not that bad at all.

- Matthias -
|
From: Allen B. <al...@vr...> - 2006-03-06 15:28:22
|
Matthias Baas wrote:

> Roman Yakovenko wrote:
>
>> On 3/6/06, Allen Bierbaum <al...@vr...> wrote:
>>
>>> Second try. Roman: you are on the pygccxml-development list correct?
>>
>> Yes.
>
> The SF site only lists 2 subscribers. I am subscribed and I do receive
> the mails from the list (e.g. I got Allen's mail).
> So who else did subscribe and who didn't yet?
> Please post a quick reply if you got this email (I have posted it to
> the list only, so this is already a check if you are properly
> subscribed or not).
>
> - Matthias -

I am getting the messages as well. It looks like Roman is not subscribed.

-Allen
|
From: Allen B. <al...@vr...> - 2006-03-06 15:27:16
|
Matthias Baas wrote:

> Allen Bierbaum wrote:
>
>> Interesting. I would like to hear more about that. I was never able
>> to figure out what ALL_AT_ONCE did, let alone get it to work. I have
>> been creating my own temporary file in the wrapper generation
>> script. Please tell me more. :)
>
> The parser() function has an option "compilation_mode" which can be
> set to pygccxml.parser.COMPILATION_MODE.ALL_AT_ONCE (default is
> FILE_BY_FILE):
>
>     decls = pygccxml.parser.parse(
>         ...<your options here>...,
>         compilation_mode = pygccxml.parser.COMPILATION_MODE.ALL_AT_ONCE
>     )
>
> With this option pyplusplus generates a temporary header (it seems
> this is even done in memory so no temporary file is used, but I
> haven't inspected the sources that closely, so I could be wrong) and
> invokes gccxml only once. So it would relieve you from creating the
> temporary header yourself. The drawback is that this compilation mode
> totally ignores the cache. :(

Ah. That is probably why I didn't end up using it. Using temporary files for headers with the md5-based cache is really fairly magical in some cases. (For example, I use this to force template instantiation in the module builder.) :)

-Allen
|
From: Matthias B. <ba...@ir...> - 2006-03-06 15:21:09
|
Allen Bierbaum wrote:

> Interesting. I would like to hear more about that. I was never able to
> figure out what ALL_AT_ONCE did, let alone get it to work. I have been
> creating my own temporary file in the wrapper generation script. Please
> tell me more. :)

The parser() function has an option "compilation_mode" which can be set to pygccxml.parser.COMPILATION_MODE.ALL_AT_ONCE (default is FILE_BY_FILE):

    decls = pygccxml.parser.parse(
        ...<your options here>...,
        compilation_mode = pygccxml.parser.COMPILATION_MODE.ALL_AT_ONCE
    )

With this option pyplusplus generates a temporary header (it seems this is even done in memory so no temporary file is used, but I haven't inspected the sources that closely, so I could be wrong) and invokes gccxml only once. So it would relieve you from creating the temporary header yourself. The drawback is that this compilation mode totally ignores the cache. :(

- Matthias -
|
From: Allen B. <al...@vr...> - 2006-03-06 15:02:25
|
Matthias Baas wrote:

> Allen Bierbaum wrote:
>
>> I don't know if this would help you or not, but one thing I found
>> that helped my cache performance greatly (and was one of the reasons
>> I refactored the code to use md5 signatures) was that I could create
>> a temporary header file that included all the headers I wanted to
>> parse. Then I included that file instead of using a full "project"
>> of header files. This made it so gccxml only ran once and it removed
>> all the redundancy of seeing the same decls from multiple included
>> headers. This was able to help me take a parse that was around 2
>> hours down to about 1.5 minutes. Overall a very good improvement in
>> speed. :)
>
> Wow, you're right! Using a single temporary header I get the following
> timings:
>
>     Without cache: 12s
>     File cache:     4s (cache size: 3.1MB)
>     Dir cache:      2s (cache size: 2.4MB)
>
> 2 seconds for parsing 222 header files is absolutely ok with me. :)
>
> And now I finally understand what the compilation mode ALL_AT_ONCE as
> opposed to FILE_BY_FILE actually does (unfortunately, it doesn't use
> the cache, so it's still more efficient to use FILE_BY_FILE and create
> a temporary header file manually. This should be fixed eventually and
> once this is fixed, ALL_AT_ONCE should probably be the default).

Interesting. I would like to hear more about that. I was never able to figure out what ALL_AT_ONCE did, let alone get it to work. I have been creating my own temporary file in the wrapper generation script. Please tell me more. :)

-A
|
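[Editor's note: the md5-signature caching Allen mentions can be illustrated with a small stdlib-only sketch. This is hypothetical code, not pygccxml's actual cache implementation: it hashes each header's contents and reuses cached parse results only while the digest matches, so touching a file's timestamp alone does not invalidate the cache.]

```python
import hashlib
import pickle


def file_signature(path):
    """Return an md5 hex digest of a file's contents."""
    with open(path, "rb") as f:
        return hashlib.md5(f.read()).hexdigest()


class ParseCache:
    """Toy cache mapping a header's content signature to parse results."""

    def __init__(self, cache_file):
        self.cache_file = cache_file
        try:
            with open(cache_file, "rb") as f:
                self.entries = pickle.load(f)
        except (OSError, pickle.PickleError):
            # Missing or corrupt cache file: start empty.
            self.entries = {}

    def lookup(self, header):
        """Return cached results, or None if the header changed."""
        return self.entries.get(file_signature(header))

    def store(self, header, decls):
        """Record results under the header's current signature."""
        self.entries[file_signature(header)] = decls
        with open(self.cache_file, "wb") as f:
            pickle.dump(self.entries, f)
```

The key property is that the cache key is derived from content, not modification time, which is what makes the temporary-header trick safe: regenerating an identical temporary header produces the same signature and hits the cache.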
From: Matthias B. <ba...@ir...> - 2006-03-06 14:56:16
|
Roman Yakovenko wrote:

> On 3/6/06, Allen Bierbaum <al...@vr...> wrote:
>> Second try. Roman: you are on the pygccxml-development list correct?
>
> Yes.

The SF site only lists 2 subscribers. I am subscribed and I do receive the mails from the list (e.g. I got Allen's mail). So who else did subscribe and who didn't yet? Please post a quick reply if you got this email (I have posted it to the list only, so this is already a check if you are properly subscribed or not).

- Matthias -
|
From: Matthias B. <ba...@ir...> - 2006-03-06 14:42:17
|
Allen Bierbaum wrote:

> I don't know if this would help you or not, but one thing I found that
> helped my cache performance greatly (and was one of the reasons I
> refactored the code to use md5 signatures) was that I could create a
> temporary header file that included all the headers I wanted to parse.
> Then I included that file instead of using a full "project" of header
> files. This made it so gccxml only ran once and it removed all the
> redundancy of seeing the same decls from multiple included headers.
> This was able to help me take a parse that was around 2 hours down to
> about 1.5 minutes. Overall a very good improvement in speed. :)

Wow, you're right! Using a single temporary header I get the following timings:

    Without cache: 12s
    File cache:     4s (cache size: 3.1MB)
    Dir cache:      2s (cache size: 2.4MB)

2 seconds for parsing 222 header files is absolutely ok with me. :)

And now I finally understand what the compilation mode ALL_AT_ONCE as opposed to FILE_BY_FILE actually does (unfortunately, it doesn't use the cache, so it's still more efficient to use FILE_BY_FILE and create a temporary header file manually. This should be fixed eventually and once this is fixed, ALL_AT_ONCE should probably be the default).

Now my new cache class is not as useful anymore as I thought, but as it's still smaller and faster than the file cache I committed it anyway.

- Matthias -
|
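[Editor's note: the temporary-header trick discussed above can be sketched in plain Python. The function name is hypothetical; the idea is to generate one header that #includes every project header, then hand only that file to the parser, so gccxml runs once while the per-file cache still applies.]

```python
import os


def write_aggregate_header(header_paths, out_path):
    """Write a single header that #includes every given header.

    Parsing this one file makes gccxml run once instead of once per
    header, and removes the redundancy of re-seeing the same decls
    from headers that include each other.
    """
    lines = ["// auto-generated aggregate header\n"]
    for path in header_paths:
        # Use absolute, forward-slash paths so the generated header
        # works regardless of the directory gccxml is invoked from.
        abs_path = os.path.abspath(path).replace("\\", "/")
        lines.append('#include "%s"\n' % abs_path)
    with open(out_path, "w") as f:
        f.writelines(lines)
    return out_path
```

The returned path would then be passed as the single input file to pygccxml's parse call (with the default FILE_BY_FILE mode, so the cache is consulted), rather than listing all 222 headers individually.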
From: Allen B. <al...@vr...> - 2006-03-06 14:31:35
|
Roman Yakovenko wrote:

> On 3/6/06, Allen Bierbaum <al...@vr...> wrote:
>
>> Allen Bierbaum wrote:
>>
>>>> 5. I think that the ModuleBuilder interface is a little bit messy.
>>>> There are so many properties and functions. Some configuration is
>>>> done using properties, other using function arguments. I am not
>>>> saying that this is wrong, but rather confusing. Also I did not
>>>> get the guide-line when and what approach to use.
>>>> I think it would be much better to have 2 configuration properties
>>>> on module builder:
>>>>   1. parser_configuration
>>>>   2. code_generator_configuration
>>>>
>>>> Then code would look like this:
>>>>   mb = module_builder_t( module_name, files, [optional] parser configuration )
>>>>   mb.parser_configuration.defines.append( ... )
>>>>   mb.code_generator_configuration.license = ...
>>
>> I was thinking about this a little more and took a look at the code.
>> As far as I can tell all configuration is currently done by passing
>> parameters to the ModuleBuilder init when things get started up and
>> then to the writeModule() method when the user wants to finally
>> generate the code. I don't see anything done directly with
>> properties. Am I missing something here?
>
> I think yes. In order to set the licence you need to modify mLicense.
>
> Also, why does __init__ take the parsing configuration and not the
> parse function? (I think I know the answer, but still.)
>
> I think that in __init__, parse or writeModule we should only ask for
> the parameters we cannot do without. All others should be set using
> properties.
>
>> Are you just saying that you would like to see the internal properties
>> grouped into two namespaces so they are separated for users? (that
>> looks to be what the interface example above would do)
>
> Yes, I feel that this is a little better way - you say to the user
> what property configures what. (I hope I said this right :-) )
>
> Don't you think so?

Maybe. I think it is just a matter of getting the right combination of usability and power. What I was going for in the current interface was to have reasonable defaults and let the user override them by passing their options to the initialization and write methods. This seemed like a simple interface because they could just look at the documentation for a couple of methods to know everything they need to set.

There may be a middle ground here where we could allow the "simple" options to be configured when initializing the module and then allow advanced users to set attributes of the builder to configure the advanced settings. I would not use properties, since what we are talking about is really publicly accessible attributes of the builder. Once I get everything working again I will give this idea a try.

-Allen
|
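[Editor's note: Roman's proposed split into two configuration namespaces could be prototyped roughly as below. All class and attribute names here are hypothetical, modeled only on the example in the quoted mail; this is a sketch of the shape of the API, not pyplusplus code.]

```python
class ParserConfiguration:
    """Everything that influences how headers are parsed."""

    def __init__(self):
        self.defines = []        # symbols defined when compiling
        self.undefines = []      # symbols undefined when compiling
        self.include_paths = []  # extra header search paths


class CodeGeneratorConfiguration:
    """Everything that influences the generated binding code."""

    def __init__(self):
        self.license = None      # license text prepended to output
        self.output_dir = "."    # where generated files are written


class ModuleBuilder:
    """Sketch: only required parameters in __init__, the rest grouped."""

    def __init__(self, module_name, files, parser_configuration=None):
        self.module_name = module_name
        self.files = files
        self.parser_configuration = parser_configuration or ParserConfiguration()
        self.code_generator_configuration = CodeGeneratorConfiguration()


# Usage, mirroring the example in Roman's mail:
mb = ModuleBuilder("my_module", ["a.hpp"])
mb.parser_configuration.defines.append("NDEBUG")
mb.code_generator_configuration.license = "// BSD license"
```

This also illustrates Allen's "middle ground": the required arguments stay in __init__, while everything optional is an ordinary attribute grouped by what it configures.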
From: Roman Y. <rom...@gm...> - 2006-03-06 14:13:37
|
On 3/6/06, Allen Bierbaum <al...@vr...> wrote:

> Second try. Roman: you are on the pygccxml-development list correct?

Yes.

> -Allen
>
> -------- Original Message --------
> Subject: [pygccxml-development] Updated status
> Date: Sat, 04 Mar 2006 19:03:32 -0600
> From: Allen Bierbaum <al...@vr...>
> To: pyg...@li...
>
> I apologize in advance, this e-mail is going to be a little random
> because it is based on my notes from this afternoon. I started trying
> to move the pypp_api interface over to the new codebase and these are
> the things I have run into right away.
>
> - Calling the parser changed a bit.
>
> There seems to be a new interface to calling the project parser. I
> think I just need to add a decl wrapper factory but I am not sure.
> Roman?
>
> - How am I supposed to use the decl_wrappers?
>
> I am trying to parse with the project reader and then I was expecting
> that the returned decls would actually be the wrappers/decorators for
> the decls. Is this how it is supposed to work?
>
> - Code complexity: The new decorators look like a LOT of additional
> code for what could be done with some grafted flags. Not really a
> question, more of a comment. I really think the codebase could be
> simplified a lot. For me at least all the property usage seems like a
> lot of extra code to read through and maintain for very little payoff.
>
> - What is module_builder_t for? Do I have to use it? It looks like
> Roman's stab at the higher-level interface, but what do I need to copy
> from there into pypp_api?
>
> - Why did call policies move to the wrapper module? Those still seem
> like code_creators.
>
> - Decl wrappers don't seem to support everything I needed/added with
> the decorators.
>
> Specifically, how can I add custom creators? In my flag-based version
> I could add a list of code creators that would automatically be added
> on when the creator class built the code creators for a decl.
>
> - What are all the options that have been added to the wrappers? (see
> documentation below)
>
> - I think Logger should probably use info() for most of the current
> output. Why put time and level name in log output?
>
> My original goal with this output was to have verbose output for
> users, not to provide a debug log to a log file on errors. Any
> objections to making the output look cleaner by making the info()
> filter just output the text instead of all the header information?
>
> - Comments: We *really* need comments and documentation for the
> classes and methods in pygccxml and pyplusplus. I think Matthias and I
> could help out by pointing to areas of the code and saying in our very
> nicest voice "Roman, please please document this".
>
> Toward this goal (and following up on Matthias's comments from last
> week), I started looking at what it would take to generate
> documentation using epydoc and restructuredtext. I added a script in
> "experimental/docs/generate_all_docs.py" that will call epydoc with
> what I think are the correct parameters to generate html
> documentation. I also started converting pypp_api over to use
> restructured text for documentation. The only problem is that it looks
> like the tags/fields I am using are not being recognized correctly
> (ex: params, args, etc). According to the docs I am doing things
> correctly but it is just not working. If either of you have experience
> with this tool please take a look and let me know if I am doing
> something wrong.
>
> Does this seem like a good direction for documentation? I know we can
> document classes and methods using this, do we want to document the
> usage of the modules like this as well? (it is supported but do we
> want that type of documentation in the code?)
>
> That is about all I have for now. Until I find out more about the
> decl_wrappers and their use I am at a standstill. :(
>
> Thanks,
> Allen

--
Roman Yakovenko
C++ Python language binding
http://www.language-binding.net/
|
From: Roman Y. <rom...@gm...> - 2006-03-06 14:13:03
|
On 3/6/06, Allen Bierbaum <al...@vr...> wrote:

> Allen Bierbaum wrote:
>
>>> 5. I think that the ModuleBuilder interface is a little bit messy.
>>> There are so many properties and functions. Some configuration is
>>> done using properties, other using function arguments. I am not
>>> saying that this is wrong, but rather confusing. Also I did not get
>>> the guide-line when and what approach to use.
>>> I think it would be much better to have 2 configuration properties
>>> on module builder:
>>>   1. parser_configuration
>>>   2. code_generator_configuration
>>>
>>> Then code would look like this:
>>>   mb = module_builder_t( module_name, files, [optional] parser configuration )
>>>   mb.parser_configuration.defines.append( ... )
>>>   mb.code_generator_configuration.license = ...
>
> I was thinking about this a little more and took a look at the code.
> As far as I can tell all configuration is currently done by passing
> parameters to the ModuleBuilder init when things get started up and
> then to the writeModule() method when the user wants to finally
> generate the code. I don't see anything done directly with properties.
> Am I missing something here?

I think yes. In order to set the licence you need to modify mLicense.

Also, why does __init__ take the parsing configuration and not the parse function? (I think I know the answer, but still.)

I think that in __init__, parse or writeModule we should only ask for the parameters we cannot do without. All others should be set using properties.

> Are you just saying that you would like to see the internal properties
> grouped into two namespaces so they are separated for users? (that
> looks to be what the interface example above would do)

Yes, I feel that this is a little better way - you say to the user what property configures what. (I hope I said this right :-) )

Don't you think so?

> -Allen

--
Roman Yakovenko
C++ Python language binding
http://www.language-binding.net/
|
From: Allen B. <al...@vr...> - 2006-03-06 14:12:47
|
>> I need to look into this once I can get the code working again. We
>> also need to make a list of all the things that we would want to be
>> able to configure and set. This would help us to understand how it
>> may be best presented to a user.
>
> Yes, this is what I will try to do this evening.

Could you summarize the results and post them on the wiki for discussion? I think a good place to start would be the parameters that are currently passed to the init() and writeModule() methods of the ModuleBuilder. From there we/I need to look at what configuration options Matthias is using and add the unique ones to the list of things to support. After that we can start discussing ideas for new options that may be needed (I can already think of a few that would be helpful).

>> When you refer to module_builder, are you talking about some new
>> class you have added in pyplusplus?
>
> No. I just switched to the pyplusplus coding convention. I refer to
> the class that you and Matthias will create. I am waiting for this. I
> took a look at the ModuleBuilder class/module and moved some
> functionality to the right place. That's all.

OK. The "moved some functionality to the right place" is the part that surprised me. I think we need to make sure we are all on the same page here, because right now it seems like you (Roman) are looking to extend the internal pyplusplus APIs to not only support the new functionality but also to have users call them directly. My goal at least was that users would only need to use pypp_api and would never need to know about or call the internal APIs. I think Matthias was working in a similar direction, but we will have to ask him.

We should all make sure we are clear on what the goal is we are trying to accomplish so we don't end up with duplicated effort and simply a more powerful internal API for pyplusplus.

-Allen
|
From: Allen B. <al...@vr...> - 2006-03-06 14:05:52
|
Second try. Roman: you are on the pygccxml-development list correct? -Allen -------- Original Message -------- Subject: [pygccxml-development] Updated status Date: Sat, 04 Mar 2006 19:03:32 -0600 From: Allen Bierbaum <al...@vr...> To: pyg...@li... A apologize in advance, this e-mail is going to be a little random because it is based on my notes from this afternoon. I started trying to move the pypp_api interface over the the new codebase and these are the things I have run into right away. - Calling parser changed a bit. There seems to be a new interface to calling the project parser. I think I just need to add a decl wrapper factory but I am not for sure. Roman? - How am I supposed to use the decl_wrappers? I am trying to parse with the project reader and then I was expecting that the returned decls would actually be the wrappers/decorators for the decls. Is this how it is supposed to work? - Code complexity: The new decorators does look like a LOT of additional code for what could be done with some grafted flags. Not really a question, more of a comment. I really think the codebase could be simplified a lot. For me at least all the property usage seems like a lot of extra code to read through and maintain for very little payoff. - What is module_builder_t for? Do I have to use it? It looks like Roman's stab at the higher-level interface but what do I need to copy from there into pypp_api. - Why did call policies move to the wrapper module? Those still seem like code_creators. - Decl wrappers don't seem to support everything I needed/added with the decorators. Specifically, how can I add custom creators? In my flag-based version I could add a list of code creators that would automatically be added on when the creator class build the code creators for a decl. - What are all the options that have been added to the wrappers? (see documentation below) - I think Logger should probably use info() for most of the current output. Why put time and level name in log output? 
My original goal with this output was to have verbose output for users not to provide a debug log to a log file on errors. Any objections to making the output look cleaner by making the info() filter just output the text instead of all the header information? - Comments: We *really* need comments and documentation for the classes and methods in pygccxml and pyplusplus. I think Matthias and I could help out by pointing to areas of the code and saying in our very nicest voice "Roman, please please document this". Toward this goal (and following up on Matthias's comments from last week), I started looking at what it would take to generate documentation using epydoc and restructuredtext. I added a script in "experimental/docs/generate_all_docs.py" that will call epydoc with what I think are the correct parameters to generate html documentation. I also started converting pypp_api over to use restructured text for documentation. The only problem is that it looks like the tags/fields I am using are not being recognized correctly (ex: params, args, etc). According to the docs I am doing things correctly but it is just not working. If either of you have experience with this tool please take a look and let me know if I am doing something wrong. Does this seem like a good direction for documentation? I know we can document classes and methods using this, do we want to document the usage of the modules like this as well? (it is supported but do we want that type of documentation in the code?) That is about all I have for now. Until I find out more about the decl_wrapper's and their use I am at a stand still. :( Thanks, Allen ------------------------------------------------------- This SF.Net email is sponsored by xPML, a groundbreaking scripting language that extends applications into web and mobile media. Attend the live webcast and join the prime developer group breaking into this new coding territory! 
http://sel.as-us.falkag.net/sel?cmd=lnk&kid=110944&bid=241720&dat=121642 _______________________________________________ pygccxml-development mailing list pyg...@li... https://lists.sourceforge.net/lists/listinfo/pygccxml-development |
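Regarding the epydoc/reStructuredText problem described above: a docstring shaped like the following is what epydoc's reST mode expects - the text starts immediately after the opening quotes (no leading blanks) and the `:Parameters:` field body is indented consistently. The function below is a hypothetical stand-in for illustration, not the real pypp_api signature.

```python
def parse(header_file, cache_file=None):
    """Parse a header file and return its declarations.

    :Parameters:
      `header_file` : str
        Path of the header to run gccxml on.
      `cache_file` : str
        Optional name of a file used to cache parse results.

    :return: list of declaration objects (empty in this sketch).
    """
    return []
```

Docstrings that begin with blank-prefixed lines are interpreted by epydoc as indented blocks, which is the likely source of the "fields not recognized" errors.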
From: Allen B. <al...@vr...> - 2006-03-06 14:04:31
|
Allen Bierbaum wrote: > >> 5. I think that ModuleBuilder interface is a little bit messy. There >> are so many properties >> and functions. Some configuration is done using properties, other >> using function >> arguments. I am not saying that this is wrong, but rather >> confusing. Also I did not >> get the guide-line when and what approach to use. >> I think it would be much better to have 2 configuration properties >> on module builder >> 1. parser_configuration >> 2. code_generator_configuration >> >> Then could would look like this: >> mb = module_builder_t( module_name, files, [optional] parser >> configuration ) >> mb.parser_configuration.defines.append( ... ) >> mb.code_generator_configuration.license = ... >> I was thinking about this a little more and took a look at the code. As far as I can tell all configuration is currently done by passing parameters to the ModuleBuilder init when things get started up and then to the writeModule() method when the user wants to finally generate the code. I don't see anything done directly with properties. Am I missing something here? Are you just saying that you would like to see the internal properties grouped into two namespaces so they are separated for users? (that looks to be what the interface example above would do) -Allen |
From: Roman Y. <rom...@gm...> - 2006-03-06 13:53:14
|
On 3/6/06, Allen Bierbaum <al...@vr...> wrote: > The current interface is not polished yet and doesn't even work. I > really need answers to the questions posed in the message titled > "Updated status" from 03/04/2006 07:03 PM on this list. Please take a > look at that message so we can start discussing it. I'd like to, but it seems that I do not have such a message. Can you post it again? > Roman Yakovenko wrote: > > >2. I could be wrong, but pyplusplus has now all functionality provided by > > DeclWrapper and MultiDeclWrapper. May be they should be removed? > > > > > It was never really my intention to remove these since the idea is that > DeclWrapper will add an easy to use user-level API for interfacing with > the decls. (ie. users are not required to use the internal APIs or > understand anything about pyplusplus) There are still several things in > the interface that I can't do with the pyplusplus wrappers (if I could > get access to them). (see message above) I don't have that message, I could tell more when I read it. > >3. iterdecls - this functionality already exists. Please see > > pygccxml/declarations/algorithm.py function __make_flatten_generator. > > It has all functionality provided by iterdecls. Also I did not use > >it because of > > performance, it is slower than creating a list and then iterating over it. > > But I could be wrong. > > > > > I will check into this. If they are the same I would say I like the > name iterdecls better at least because it sounds more like something > that would iterate over a tree of decls. __make_flatten_generator > sounds more like something that creates a new list. I wrote it and tested it. I did not like the performance, so I renamed it to be private. You can rename it to iterdecls. > >4. I created 2 new printers: decl_printer_t in pygccxml/declarations package > > and decl_wrapper_printer_t in pyplusplus/decl_wrappers, that derives from > > decl_printer_t.
As a bonus, decl_wrapper_printer_t prints all > >relevant properties from > > declaration wrapper classes. > > > > > Does one of these have the modifications that I had in the helper > printer in the pyppapi? I had modified that printer to print a > less verbose tree in a way that I think would be of help to users trying > to wrap an API. If not then this is a bug. I left all the functionality you created. > I need to look into this once I can get the code working again. We also > need to make a list of all the things that we would want to be able to > configure and set. This would help us to understand how it may be best > presented to a user. Yes, this is what I will try to do this evening. > When you refer to module_builder are you talking about some new class > you have added in pyplusplus? No. I just switched to the pyplusplus coding convention. I refer to the class that you and Matthias will create. I am waiting for this. I took a look at the ModuleBuilder class\module and moved some functionality to the right place. That's all. > -Allen -- Roman Yakovenko C++ Python language binding http://www.language-binding.net/ |
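For illustration, the two traversal approaches being compared (a generator like iterdecls versus building a flat list as __make_flatten_generator's replacement does) might look roughly like this; the `declarations` child attribute and the tiny decl class are assumptions for the sketch, not the pygccxml implementation:

```python
def iterdecls(decl):
    # Depth-first generator over a declaration tree: yield the
    # declaration itself, then recurse into its children.
    yield decl
    for child in getattr(decl, "declarations", []):
        for nested in iterdecls(child):
            yield nested

def flatten(decl):
    # The list-building alternative: build the whole flat list up
    # front and iterate over that instead (Roman found this faster).
    result = [decl]
    for child in getattr(decl, "declarations", []):
        result.extend(flatten(child))
    return result

class decl_t(object):
    # Minimal stand-in for a pygccxml declaration (illustrative).
    def __init__(self, name, declarations=()):
        self.name = name
        self.declarations = list(declarations)

tree = decl_t("ns", [decl_t("klass", [decl_t("method")]), decl_t("func")])
assert [d.name for d in iterdecls(tree)] == ["ns", "klass", "method", "func"]
assert [d.name for d in flatten(tree)] == ["ns", "klass", "method", "func"]
```

Both produce the same pre-order walk; the difference is purely whether the flat sequence is materialized before iteration.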
From: Allen B. <al...@vr...> - 2006-03-06 13:40:26
|
The current interface is not polished yet and doesn't even work. I really need answers to the questions posed in the message titled "Updated status" from 03/04/2006 07:03 PM on this list. Please take a look at that message so we can start discussing it. Roman Yakovenko wrote: >Hi. I did small review to pypp_api. > >1. cleanTemplateName this function already exists: > decl_wrapper/algorithm.py - create_valid_name. > I think you can use it. > > I will need to look at this again. I think I had a slight modification, but I will check. >2. I could be wrong, but pyplusplus has now all functionality provided by > DeclWrapper and MultiDeclWrapper. May be they should be removed? > > It was never really my intention to remove these since the idea is that DeclWrapper will add an easy to use user-level API for interfacing with the decls. (ie. users are not required to use the internal APIs or understand anything about pyplusplus) There are still several things in the interface that I can't do with the pyplusplus wrappers (if I could get access to them). (see message above) >3. iterdecls - this functionality already exists. Please see > pygccxml/declarations/algorithm.py function __make_flatten_generator. > It has all functionality provided by iterdecls. Also I did not use >it because of > performance, it is slowly then creating a list and then iterating over it. > But I could be wrong. > > I will check into this. If they are the same I would say I like the name iterdecls better at least because it sounds more like something that would iterate over a tree of decls. __make_flatten_generator sounds more like something that creates a new list. >4. I created 2 new printers: decl_printer_t in pygccxml/declarations package > and decl_wrapper_printer_t in pyplusplus/decl_wrappers, that derives from > decl_printer_t. As a bonus, decl_wrapper_printer_t prints all >relevant properties from > declaration wrappper classes. 
> > Does one of these have the modifications that I had in the helper printer in the pyppapi? I had modified that printer to print less a less verbose tree in a way that I think would be of help to users trying to wrap an API. >5. I think that ModuleBuilder interface is a little bit messy. There >are so many properties > and functions. Some configuration is done using properties, other >using function > arguments. I am not saying that this is wrong, but rather >confusing. Also I did not > get the guide-line when and what approach to use. > > I think it would be much better to have 2 configuration properties >on module builder > 1. parser_configuration > 2. code_generator_configuration > > Then could would look like this: > mb = module_builder_t( module_name, files, [optional] parser configuration ) > mb.parser_configuration.defines.append( ... ) > mb.code_generator_configuration.license = ... > > This way we could remove some code from module_builder. Interface >and code will > gain readability. > >What do you think? > > I need to look into this once I can get the code working again. We also need to make a list of all the things that we would want to be able to configure and set. This would help us to understand how it may be best presented to a user. When you refer to module_builder are you talking about some new class you have added in pyplusplus? -Allen >-- >Roman Yakovenko >C++ Python language binding >http://www.language-binding.net/ > > >------------------------------------------------------- >This SF.Net email is sponsored by xPML, a groundbreaking scripting language >that extends applications into web and mobile media. Attend the live webcast >and join the prime developer group breaking into this new coding territory! >http://sel.as-us.falkag.net/sel?cmd=k&kid0944&bid$1720&dat1642 >_______________________________________________ >pygccxml-development mailing list >pyg...@li... >https://lists.sourceforge.net/lists/listinfo/pygccxml-development > > > |
From: Roman Y. <rom...@gm...> - 2006-03-06 05:37:06
|
Hi. I did a small review of pypp_api. 1. cleanTemplateName - this function already exists: decl_wrapper/algorithm.py - create_valid_name. I think you can use it. 2. I could be wrong, but pyplusplus has now all functionality provided by DeclWrapper and MultiDeclWrapper. May be they should be removed? 3. iterdecls - this functionality already exists. Please see pygccxml/declarations/algorithm.py function __make_flatten_generator. It has all functionality provided by iterdecls. Also I did not use it because of performance, it is slower than creating a list and then iterating over it. But I could be wrong. 4. I created 2 new printers: decl_printer_t in the pygccxml/declarations package and decl_wrapper_printer_t in pyplusplus/decl_wrappers, that derives from decl_printer_t. As a bonus, decl_wrapper_printer_t prints all relevant properties from declaration wrapper classes. 5. I think that the ModuleBuilder interface is a little bit messy. There are so many properties and functions. Some configuration is done using properties, other using function arguments. I am not saying that this is wrong, but rather confusing. Also I did not get the guide-line when and what approach to use. I think it would be much better to have 2 configuration properties on module builder: 1. parser_configuration 2. code_generator_configuration Then code would look like this: mb = module_builder_t( module_name, files, [optional] parser configuration ) mb.parser_configuration.defines.append( ... ) mb.code_generator_configuration.license = ... This way we could remove some code from module_builder. Interface and code will gain readability. What do you think? -- Roman Yakovenko C++ Python language binding http://www.language-binding.net/ |
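Roman's two-namespace proposal could be sketched roughly as below. All class and attribute names are either taken from his example or invented for illustration; this is not the actual pyplusplus API.

```python
class parser_configuration_t(object):
    # Settings consumed while parsing the headers (sketch).
    def __init__(self):
        self.defines = []
        self.undefines = []
        self.include_paths = []
        self.gccxml_path = None

class code_generator_configuration_t(object):
    # Settings consumed while writing the module out (sketch).
    def __init__(self):
        self.license = None

class module_builder_t(object):
    def __init__(self, module_name, files, parser_configuration=None):
        # __init__ only asks for what it cannot do without; everything
        # else is reached through the two configuration namespaces.
        self.module_name = module_name
        self.files = files
        self.parser_configuration = parser_configuration or parser_configuration_t()
        self.code_generator_configuration = code_generator_configuration_t()

mb = module_builder_t("my_module", ["my_header.hpp"])
mb.parser_configuration.defines.append("MY_DEFINE")
mb.code_generator_configuration.license = "// license text"
```

The appeal of this layout is that each property group tells the user which phase (parsing or code generation) a setting affects.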
From: Roman Y. <rom...@gm...> - 2006-03-06 05:09:44
|
Good morning. I created a multiple declarations wrapper class - mdecl_wrapper_t. I think you will like it. Sometimes I don't like Python being such a fully dynamic language, but sometimes I can not live without it. Here is the code:

class call_redirector_t( object ):
    def __init__( self, name, decls ):
        object.__init__( self )
        self.name = name
        self.decls = decls

    def __call__( self, *arguments, **keywords ):
        for d in self.decls:
            callable = getattr( d, self.name )
            callable( *arguments, **keywords )

class mdecl_wrapper_t( object ):
    def __init__( self, decls ):
        object.__init__( self )
        self.__dict__['_decls'] = decls

    def __len__( self ):
        return len( self._decls )

    def __getitem__( self, index ):
        return self._decls[index]

    def __ensure_attribute( self, name ):
        invalid_decls = filter( lambda d: not hasattr( d, name ), self._decls )
        if invalid_decls:
            raise RuntimeError( "Not all declarations have '%s' attribute." % name )

    def __setattr__( self, name, value ):
        self.__ensure_attribute( name )
        for d in self._decls:
            setattr( d, name, value )

    def __getattr__( self, name ):
        return call_redirector_t( name, self._decls )

Usage:

mdw = decl_wrappers.mdecl_wrapper_t( classes )
mdw.always_expose_using_scope = True
mdw.include()
for decl in mdw:
    print decl.name

Comments and suggestions are welcome. I felt pretty good about this implementation, so I committed it. -- Roman Yakovenko C++ Python language binding http://www.language-binding.net/ |
From: Roman Y. <rom...@gm...> - 2006-03-06 04:43:18
|
On 3/6/06, Allen Bierbaum <al...@vr...> wrote: > Matthias: > > I really like anything that can help increase performance. :) I did not see the code, but my feeling is that Matthias did pretty good work. > I don't know if this would help you or not, but one thing I found that > helped my cache performance greatly (and was one of the reasons I > refactored the code to use md5 signatures) was that I could create a > temporary header file that included all the headers I wanted to parse. > Then I included that file instead of using a full "project" of header > files. This made it so gccxml only ran once and it removed all the > redundancy of seeing the same decls from multiple included headers. > This was able to help me take a parse that was around 2 hours down to > about 1.5 minutes. Overall a very good improvement in speed. :) My experience is a little bit different: I learned that instead of using a cache, it is better to leave the gccxml-created files and to parse them again. But now with all your improvements I should re-check my approach. > > -Allen -- Roman Yakovenko C++ Python language binding http://www.language-binding.net/ |
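Allen's combined-header trick can be sketched as below: collapse the project into one generated header so gccxml is invoked a single time. File names and the helper name are illustrative; the real pypp_api/pygccxml entry points are not shown.

```python
import os

def write_combined_header(header_paths, out_path):
    # Emit one header that includes every project header, so gccxml
    # runs once and each declaration is seen only once instead of
    # being re-parsed through every transitive include.
    with open(out_path, "w") as out:
        out.write("// auto-generated combined header\n")
        for header in header_paths:
            out.write('#include "%s"\n' % os.path.abspath(header))
    return out_path
```

The generated file is then handed to the parser in place of the full list of project headers.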
From: Roman Y. <rom...@gm...> - 2006-03-06 04:37:54
|
On 3/5/06, Matthias Baas <ba...@ir...> wrote: > Roman, is it ok when I commit that cache into the pygccxml directory? > The implementation consists of one single file "directory_cache.py" > which I would put into the pygccxml.parser directory. The new class is > called "directory_cache_t" (because the user has to specify a directory > name instead of a file name). There are also a few internal helper > classes, but those aren't meant to be instantiated by the user. I was > using your naming conventions as far as I could figure them out (lower > case class/method names with underscores between words and the classes > have a "_t" suffix. Private methods have a leading underscore). Doc > strings are available. I haven't modified any other file, so everything > will still work as before (and the file cache is still the default). To > activate the directory cache the user currently has to instantiate a > class himself and pass this instance to the parse() method. > I can also email you the file first if you want to have a look at it > before it is actually added to the repository. Please commit your changes to repository. Don't forget to put license within the file. Also, I am sure that you tested your directory cache class, right? Can you create also unit test for it? If not I will do that. Can you save your numbers somewhere? I think we will need them for release notes. Thanks > There's still one more question I have regarding the > source_reader_t.__parse_gccxml_created_file() method. What is the exact > meaning of the returned file list? All files that have been parsed > The update() method of a cache class > receives this list as "included_files" argument, so one might think the > list only contains the files that were included from the corresponding > header file. But I noticed that the list also contains the header file > itself. Is this intentional (and can I rely on that behavior) or is this > a bug and the header file does not belong into this list? 
It is not intentional, but we can say that from now on this is a protocol. > > 2. I updated setup file before release, when I have something like > > "feature freeze" period. > > I suppose, that every one who use CVS will be able to use it > > without setup. I could be > > wrong, but right now I prefer to concentrate my attention on something else > > Well, it's only two lines that need to be added to setup_pyplusplus.py: > > , 'pyplusplus.decl_wrappers' > , 'pyplusplus.module_builder' > > If you want I can commit that change back into the repository myself. Yes, go ahead please. > >> Well, you could fill in some extra words (such as class_wrapper_t) but > >> as Python already organizes code in a hierarchy I guess it would be a > >> better idea to use decl_wrapper.class_t explicitly instead of importing > >> the classes into the namespace of another module. Then it's also clear > >> to the reader which class is being referred to. > > > > So, basically you would like to stay with name I proposed, but user > > is forced to use "fully qualified" names, am I right? > > No. I don't want the user having to deal with all those classes anyway, > so the user should only need *one* such class (of each kind) which he > might import into his namespace if he wishes to. > My suggestion to use the fully qualified names only refers to the > internal implementation (of course, this is entirely up to you what > conventions you want to use as you're the maintainer of the package. > I > was just thinking that it might prevent confusion among those people who > also want to have a look at the sources of pyplusplus... :) So I don't understand. Within the code I always use the fully qualified name - package.class . Even within a package I use module.class. Do you think that this is a right approach? > > May I give you a small advice? You can combine between power of pyplusplus and > > power of C++. I think that using creating single template for every > > group is better solution.
> > I'm not sure if I understand what you mean. The above classes aren't > implemented by myself, they are part of the Maya SDK and, of course, I'm > not in the position to change that SDK. Those classes have the same interface; maybe you can create boost python wrappers using templates? > - Matthias - -- Roman Yakovenko C++ Python language binding http://www.language-binding.net/ |
From: Allen B. <al...@vr...> - 2006-03-05 23:05:47
|
Matthias: I really like anything that can help increase performance. :) I don't know if this would help you or not, but one thing I found that helped my cache performance greatly (and was on of the reasons I refactored the code to use md5 signatures) was that I could create a temporary header file that included all the headers I wanted to parse. Then I included that file instead of using a full "project" of header files. This made it so gccxml only ran once and it removed all the redundancy of seeing the same decls from multiple included headers. This was able to help me take a parse that was around 2 hours down to about 1.5 minutes. Overall a very good improvement in speed. :) -Allen Matthias Baas wrote: > Roman Yakovenko wrote: > >>> first needs to be resolved before I can finish the class (so is >>> source_reader_t.__parse_gccxml_created_file() supposed to return the >>> files as a dictionary or as a list?) >> >> >> May be I missed something, but what cache do you implement? >> Any way the answer: list, I already fixed this bug. > > > I've implemented a new cache class because I had some issues with the > file_cache_t class: > > - The cache file is about 39MB and on a machine with 512MB main memory > I couldn't use that cache anymore. The machine started memory > thrashing while parsing the headers (when the cache was already there) > and CPU usage dropped to around 1%. > > - When I was creating wrappers for only a few selected classes using > the cache took much more time than without cache (because the cache > always loads all cached declarations, no matter if they are required > or not). > > That's why I had a look at the caching mechanism and implemented my > own class that fixed the above issues. Instead of one single cache > file the cache now uses a directory and stores individual files (one > file per header). 
Here is a comparison of the time it takes to do the > parse() step for 222 headers from the Maya SDK (the table is best > viewed with a fixed pitch font):
>
>                       | Parsing time | Cache size | Parameters
>                       | (min:sec)    | (MB)       |
> ----------------------+--------------+------------+--------------
> Without cache         | 2:53         | -          |
>                       |              |            |
> File cache (initial)  | 4:12         | 39.1       |
> File cache (cached)   | 1:58         | 39.1       |
>                       |              |            |
> Dir cache (initial)   | 3:40         | 38.4       | -compression
> Dir cache (cached)    | 0:34         | 38.4       | -compression
> Dir cache (initial)   | 4:03         | 11.8       | +compression
> Dir cache (cached)    | 2:18         | 11.8       | +compression
> ----------------------+--------------+------------+--------------
>
> The "initial" rows refer to the cases when the cache didn't exist yet > and had to be built. But of course, this only has to be done once, so > the "cached" rows are the more important ones. The directory cache has > an option to compress the cache files which was used in the last two > rows (so in my case, compression isn't really useful for me). > > Memory usage of the directory cache is much lower, so I could also use > that cache on the machine with "only" 512MB main memory. There's also > no disadvantage anymore when only a few headers are parsed while the > cache actually contains a lot more headers. Cached declarations that > are not requested by the main program are never touched. > > Roman, is it ok when I commit that cache into the pygccxml directory? > The implementation consists of one single file "directory_cache.py" > which I would put into the pygccxml.parser directory. The new class is > called "directory_cache_t" (because the user has to specify a > directory name instead of a file name). There are also a few internal > helper classes, but those aren't meant to be instantiated by the user. > I was using your naming conventions as far as I could figure them out > (lower case class/method names with underscores between words and the > classes have a "_t" suffix.
Private methods have a leading > underscore). Doc strings are available. I haven't modified any other > file, so everything will still work as before (and the file cache is > still the default). To activate the directory cache the user currently > has to instantiate a class himself and pass this instance to the > parse() method. > I can also email you the file first if you want to have a look at it > before it is actually added to the repository. > > There's still one more question I have regarding the > source_reader_t.__parse_gccxml_created_file() method. What is the > exact meaning of the returned file list? The update() method of a > cache class receives this list as "included_files" argument, so one > might think the list only contains the files that were included from > the corresponding header file. But I noticed that the list also > contains the header file itself. Is this intentional (and can I rely > on that behavior) or is this a bug and the header file does not belong > into this list? > >> 2. I updated setup file before release, when I have something like >> "feature freeze" period. >> I suppose, that every one who use CVS will be able to use it >> without setup. I could be >> wrong, but right now I prefer to concentrate my attention on >> something else > > > Well, it's only two lines that need to be added to setup_pyplusplus.py: > > , 'pyplusplus.decl_wrappers' > , 'pyplusplus.module_builder' > > If you want I can commit that change back into the repository myself. > >>> Well, you could fill in some extra words (such as class_wrapper_t) but >>> as Python already organizes code in a hierarchy I guess it would be a >>> better idea to use decl_wrapper.class_t explicitly instead of importing >>> the classes into the namespace of another module. Then it's also clear >>> to the reader which class is being referred to. >> >> >> So, basically you would like to stay with name I proposed, but user >> is forced to use "fully qualified" names, am I right? 
> > > No. I don't want the user having to deal with all those classes > anyway, so the user should only need *one* such class (of each kind) > which he might import into his namespace if he wishes to. > My suggestion to use the fully qualified names only refers to the > internal implementation (of course, this is entirely up to you what > conventions you want to use as you're the maintainer of the package. I > was just thinking that it might prevent confusion among those people > who also want to have a look at the sources of pyplusplus... :) > >>> In the Maya SDK there are a couple of related classes that basically >>> have the same interface (e.g. vector, float vector, point, float point, >>> color, then the same thing for array versions etc). When decorating >>> those classes I can treat every class of such a group the same and >>> apply >>> the same operations. This is where being able to select stuff from >>> several classes at once can be quite handy. >> >> >> May I give you a small advice? You can combine between power of >> pyplusplus and >> power of C++. I think that using creating single template for every >> group is better solution. > > > I'm not sure if I understand what you mean. The above classes aren't > implemented by myself, they are part of the Maya SDK and, of course, > I'm not in the position to change that SDK. > > - Matthias - > |
From: Matthias B. <ba...@ir...> - 2006-03-05 18:41:53
|
Roman Yakovenko wrote: >> first needs to be resolved before I can finish the class (so is >> source_reader_t.__parse_gccxml_created_file() supposed to return the >> files as a dictionary or as a list?) > > May be I missed something, but what cache do you implement? > Any way the answer: list, I already fixed this bug. I've implemented a new cache class because I had some issues with the file_cache_t class: - The cache file is about 39MB and on a machine with 512MB main memory I couldn't use that cache anymore. The machine started memory thrashing while parsing the headers (when the cache was already there) and CPU usage dropped to around 1%. - When I was creating wrappers for only a few selected classes using the cache took much more time than without cache (because the cache always loads all cached declarations, no matter if they are required or not). That's why I had a look at the caching mechanism and implemented my own class that fixed the above issues. Instead of one single cache file the cache now uses a directory and stores individual files (one file per header). Here is a comparison of the time it takes to do the parse() step for 222 headers from the Maya SDK (the table is best viewed with a fixed pitch font):

                      | Parsing time | Cache size | Parameters
                      | (min:sec)    | (MB)       |
----------------------+--------------+------------+--------------
Without cache         | 2:53         | -          |
                      |              |            |
File cache (initial)  | 4:12         | 39.1       |
File cache (cached)   | 1:58         | 39.1       |
                      |              |            |
Dir cache (initial)   | 3:40         | 38.4       | -compression
Dir cache (cached)    | 0:34         | 38.4       | -compression
Dir cache (initial)   | 4:03         | 11.8       | +compression
Dir cache (cached)    | 2:18         | 11.8       | +compression
----------------------+--------------+------------+--------------

The "initial" rows refer to the cases when the cache didn't exist yet and had to be built. But of course, this only has to be done once, so the "cached" rows are the more important ones.
The directory cache has an option to compress the cache files which was used in the last two rows (so in my case, compression isn't really useful for me). Memory usage of the directory cache is much lower, so I could also use that cache on the machine with "only" 512MB main memory. There's also no disadvantage anymore when only a few headers are parsed while the cache actually contains a lot more headers. Cached declarations that are not requested by the main program are never touched. Roman, is it ok when I commit that cache into the pygccxml directory? The implementation consists of one single file "directory_cache.py" which I would put into the pygccxml.parser directory. The new class is called "directory_cache_t" (because the user has to specify a directory name instead of a file name). There are also a few internal helper classes, but those aren't meant to be instantiated by the user. I was using your naming conventions as far as I could figure them out (lower case class/method names with underscores between words and the classes have a "_t" suffix. Private methods have a leading underscore). Doc strings are available. I haven't modified any other file, so everything will still work as before (and the file cache is still the default). To activate the directory cache the user currently has to instantiate a class himself and pass this instance to the parse() method. I can also email you the file first if you want to have a look at it before it is actually added to the repository. There's still one more question I have regarding the source_reader_t.__parse_gccxml_created_file() method. What is the exact meaning of the returned file list? The update() method of a cache class receives this list as "included_files" argument, so one might think the list only contains the files that were included from the corresponding header file. But I noticed that the list also contains the header file itself. 
Is this intentional (and can I rely on that behavior), or is this a bug
and the header file does not belong in this list?

> 2. I updated the setup file before release, when I have something
>    like a "feature freeze" period. I suppose that everyone who uses
>    CVS will be able to use it without setup. I could be wrong, but
>    right now I prefer to concentrate my attention on something else.

Well, it's only two lines that need to be added to setup_pyplusplus.py:

    , 'pyplusplus.decl_wrappers'
    , 'pyplusplus.module_builder'

If you want, I can commit that change back into the repository myself.

>> Well, you could fill in some extra words (such as class_wrapper_t),
>> but as Python already organizes code in a hierarchy I guess it would
>> be a better idea to use decl_wrapper.class_t explicitly instead of
>> importing the classes into the namespace of another module. Then
>> it's also clear to the reader which class is being referred to.
>
> So, basically you would like to stay with the names I proposed, but
> the user is forced to use "fully qualified" names, am I right?

No. I don't want the user to have to deal with all those classes
anyway, so the user should only need *one* such class (of each kind),
which he might import into his namespace if he wishes to. My suggestion
to use the fully qualified names only refers to the internal
implementation. (Of course, this is entirely up to you; what
conventions you use is your decision as you're the maintainer of the
package. I was just thinking that it might prevent confusion among
those people who also want to have a look at the sources of
pyplusplus... :)

>> In the Maya SDK there are a couple of related classes that basically
>> have the same interface (e.g. vector, float vector, point, float
>> point, color, then the same thing for array versions, etc.). When
>> decorating those classes I can treat every class of such a group the
>> same and apply the same operations.
>> This is where being able to select stuff from several classes at
>> once can be quite handy.
>
> May I give you a small advice? You can combine the power of
> pyplusplus with the power of C++. I think that creating a single
> template for every group is a better solution.

I'm not sure I understand what you mean. The above classes aren't
implemented by myself; they are part of the Maya SDK and, of course,
I'm not in a position to change that SDK.

- Matthias
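The two-line setup_pyplusplus.py change quoted earlier amounts to extending the `packages` list passed to `setup()`. The excerpt below is hypothetical: only the two entries marked "new" come from the message; the rest of the list is illustrative context.

```python
# Hypothetical excerpt of setup_pyplusplus.py. Only the two entries
# marked "new" are taken from the message; other names are illustrative.
packages = [
    'pyplusplus'
    , 'pyplusplus.decl_wrappers'   # new entry
    , 'pyplusplus.module_builder'  # new entry
]
```

With those entries present, `distutils` installs the two sub-packages alongside the rest, so a plain `python setup.py install` works without a CVS checkout.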
From: Roman Y. <rom...@gm...> - 2006-03-05 06:53:19
On 2/27/06, Allen Bierbaum <abi...@gm...> wrote:
> - Currently pyplusplus generates invalid bindings on classes with
>   virtual methods and static methods. It tries to add the static
>   methods to the wrapper and things just don't work out well. IIRC
>   the code doesn't even compile.
>
> - Type traits: I noticed when using the pygccxml type traits to test
>   a type for const'ness that the current code only tests the
>   outermost type wrapper and thus doesn't really tell you if the
>   entire type is const.
>
>   For example, a type in pygccxml may end up looking like:
>   ref( const( ptr( float ) ) )
>
>   This type would only return true when tested for is_reference;
>   is_const would fail. This is inconvenient and doesn't reflect the
>   way boost.type_traits works on types. I think it could be solved by
>   making the type traits methods process the type recursively.
>
> - Finalizing: Are there known issues with finalizing a class? I have
>   run into multiple cases where I was able to finalize a class with
>   Pyste but pyplusplus returns an error when attempting to finalize
>   it. (I don't have a self-contained example right now but I may be
>   able to come up with one if needed.)

Allen, I would like to fix those bugs. Can you create simple test cases
that reproduce those problems? Thank you. Also, I am waiting for your
answer about "finalizing". Do you agree with me?

Thanks

--
Roman Yakovenko
C++ Python language binding
http://www.language-binding.net/
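The recursive type-traits fix Allen proposes above can be sketched with a toy model of nested type wrappers. The class names echo pygccxml's `_t` naming convention, but the model itself is illustrative and is not pygccxml's actual declarations API.

```python
class type_t:
    """Base of a toy type model: each compound type wraps a base type."""
    def __init__(self, base=None):
        self.base = base

class float_t(type_t): pass
class const_t(type_t): pass
class pointer_t(type_t): pass
class reference_t(type_t): pass


def is_const_shallow(t):
    # Mirrors the reported behaviour: only the outermost wrapper is tested,
    # so const buried inside a reference or pointer is missed.
    return isinstance(t, const_t)


def has_const(t):
    # Recursive variant: walk the whole chain of wrappers, as suggested.
    while t is not None:
        if isinstance(t, const_t):
            return True
        t = t.base
    return False


# The example from the message: ref( const( ptr( float ) ) )
ty = reference_t(const_t(pointer_t(float_t())))
```

With this model, `is_const_shallow(ty)` is false while `has_const(ty)` is true, which is exactly the mismatch Allen describes between pygccxml's check and the behaviour one would expect from boost.type_traits.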