pygccxml-development Mailing List for C++ Python language bindings (Page 54)
Brought to you by:
mbaas,
roman_yakovenko
From: Roman Y. <rom...@gm...> - 2006-06-21 20:07:45

Good evening. I need some help. I don't know what the preferred behaviour
of py++ is in the following situation. In order to use the indexing suite,
class X should define operator==. What should py++ do when it finds a usage
of std::vector< X >? The generated code

    class_< vector< X > >( "Xs" )
        .def( vector_indexing_suite< vector< X > >() );

will not compile. There are two options:

1. py++ does not generate that code. This is a very bad option, because the
   user will get the wrong feeling that the created bindings just work,
   while all functions that use vector< X > are non-callable from Python.

2. Generate the code anyway. The bad side of this approach is the compiler
   error message - it is pretty big and scary. It can confuse any user, but
   at least he will be aware of the problem. py++ can generate a comment
   within the source code that will help the user understand the problem
   quickly.

Thoughts?

--
Roman Yakovenko
C++ Python language binding
http://www.language-binding.net/
From: Lakin W. <lak...@gm...> - 2006-06-21 18:47:53

On 6/21/06, Roman Yakovenko <rom...@gm...> wrote:
> On 6/21/06, Lakin Wecker <lak...@gm...> wrote:
> > What about generating:
> >     #define BOOST_MAX_ARITY 19
> > at the top of the file which contains those methods that are too long.
> > I'm not sure about this particular option, but maybe it is possible to
> > set this option on a per-file basis.
>
> I think that this is a good idea. pyplusplus can generate some kind of
> configuration file and include it in every generated file. Thus the user
> will have a single place to configure all bindings.

I like this idea.

Lakin Wecker
From: Roman Y. <rom...@gm...> - 2006-06-21 18:22:58

On 6/21/06, Lakin Wecker <lak...@gm...> wrote:
> What about generating:
>     #define BOOST_MAX_ARITY 19
> at the top of the file which contains those methods that are too long.
> I'm not sure about this particular option, but maybe it is possible to
> set this option on a per-file basis.

I think that this is a good idea. pyplusplus can generate some kind of
configuration file and include it in every generated file. Thus the user
will have a single place to configure all bindings.

Thoughts?

--
Roman Yakovenko
C++ Python language binding
http://www.language-binding.net/
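The shared-configuration-file idea discussed in this exchange can be sketched in a few lines of Python: emit one header that every generated source file includes, so settings such as the argument limit live in a single place. This is an illustrative generator, not pyplusplus code; the file and guard names are made up, while BOOST_PYTHON_MAX_ARITY is the real Boost.Python macro.

```python
# Hypothetical sketch of the configuration-header idea: one header,
# included by every generated file, holding the shared settings.
# Guard/file names are illustrative, not pyplusplus output.

def make_config_header(max_arity=19):
    """Return the text of a shared bindings configuration header."""
    return "\n".join([
        "#ifndef BINDINGS_CONFIG_HPP",
        "#define BINDINGS_CONFIG_HPP",
        "",
        "// Raise Boost.Python's default argument limit before any",
        "// boost/python.hpp include sees it.",
        "#define BOOST_PYTHON_MAX_ARITY %d" % max_arity,
        "",
        "#endif // BINDINGS_CONFIG_HPP",
        "",
    ])

if __name__ == "__main__":
    print(make_config_header(19))
```

The key constraint the sketch encodes is that the macro must be defined before boost/python.hpp is first included, which is why a header included at the top of every generated file works where a per-method define would not.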
From: Lakin W. <lak...@gm...> - 2006-06-21 16:26:38

On 6/21/06, Lakin Wecker <lak...@gm...> wrote:
> On 6/21/06, Matthias Baas <ba...@ir...> wrote:
> > but it's more serious that I missed the warnings about that (if I had
> > seen them I wouldn't have tried compiling the code in the first place).
> > The problem is that a command line tool basically only has one single
> > channel (stdout) to communicate with the user (having stderr as well is
> > only a slight improvement, as they both refer to the same "channel" by
> > default, namely the console window). My suggestion would be to write
> > the really important messages (like the max arity thing) into a
> > separate log file in addition to writing to stdout (or actually it
> > should be stderr). In my opinion this should already be the default
> > behavior, so a user can check any time after the tool has run whether
> > there have been important issues. Before quitting, pyplusplus could
> > even check if anything has been written into that log file and print a
> > final message that there have been critical errors and the user should
> > refer to that log file.
>
> I don't mind the current behavior, as it would be easy for the user to
> hook into the logging module and write out messages above a certain
> importance to a separate log file. As a reference:
> http://docs.python.org/lib/multiple-destinations.html

If pyplusplus is reporting this as a warning, then I think it's
appropriate and that pyplusplus has done its job.

On a side note, it would be nice to have a small wiki somewhere to
aggregate all of these best practices for pyplusplus, such as setting up
multiple loggers in order to capture _all_ output somewhere for debugging
purposes, and capturing only the important pyplusplus messages to the
console or another file for general success feedback purposes.

Lakin
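The multiple-destinations setup referenced above can be done entirely with the standard logging module: everything goes to a debug log file, while only warnings and errors reach the console. A minimal sketch; the logger name and file name are illustrative and assume nothing about pyplusplus's actual logger hierarchy.

```python
# Two-destination logging, as suggested in the thread: a full debug log
# on disk for post-run inspection, and a quiet console that only shows
# the important messages.  Names are illustrative.
import logging

def configure_logging(logfile="pyplusplus_debug.log"):
    logger = logging.getLogger("pyplusplus")
    logger.setLevel(logging.DEBUG)

    # Full record of everything, for debugging after the run.
    file_handler = logging.FileHandler(logfile, mode="w")
    file_handler.setLevel(logging.DEBUG)

    # The console only sees warnings and errors.
    console = logging.StreamHandler()
    console.setLevel(logging.WARNING)

    fmt = logging.Formatter("%(levelname)s: %(message)s")
    file_handler.setFormatter(fmt)
    console.setFormatter(fmt)

    logger.addHandler(file_handler)
    logger.addHandler(console)
    return logger
```

With this in place, a final "check the log file" summary is a matter of reading the file back after the run, which is essentially what Matthias proposed.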
From: Lakin W. <lak...@gm...> - 2006-06-21 13:24:54

On 6/21/06, Matthias Baas <ba...@ir...> wrote:
> Roman Yakovenko wrote:
> > I am aware of this problem (lots of output), so the following code
> > will write only the important things:
> >
> >     import logging
> >     from pyplusplus import module_builder
> >     module_builder.set_logger_level( logging.INFO )
>
> I have that already in my script, but I still get a line for each
> individual file that is written. Maybe these messages should be debug
> messages instead of info messages (I think a summary of the time it took
> to write the files, and maybe how many files were actually updated,
> would be enough in the standard case). (But apart from that, there is
> still a lot of output from my own script that would drown the message
> anyway.)
>
> >> But as pyplusplus already knows that a particular member won't
> >> compile, shouldn't it then refuse to write it to the output file (as
> >> was the case before)? Or even abort or something, so that the user
> >> definitely knows that some action is required on his side?
> >
> > Now that you have had a bad experience (and I am sorry for that), you
> > can give advice. In my opinion pyplusplus did the job, but of course I
> > could be wrong.
>
> Well, in general I'd say that knowingly producing code that won't
> compile is bad practice. In this particular case I can live with it as
> it is, because 1) there actually is a warning message (even though it
> might pass unnoticed) and 2) once you know what the problem is, it is
> easy to either ignore those methods or modify Boost.Python accordingly.
> For now, I just ignored them, which was only one additional line in my
> script. In my opinion, what should actually be "fixed" here is the way
> important information is passed to the user. I think it's not so much of
> a problem that pyplusplus generated invalid code

I know that I'm just arguing semantics, but I disagree. The code _does_
compile and is therefore _not_ invalid. It just requires you to pass an
extra switch to the compiler, in this case -DBOOST_PYTHON_MAX_ARITY=19 or
some such.

> but it's more serious that I missed the warnings about that (if I had
> seen them I wouldn't have tried compiling the code in the first place).
> The problem is that a command line tool basically only has one single
> channel (stdout) to communicate with the user (having stderr as well is
> only a slight improvement, as they both refer to the same "channel" by
> default, namely the console window). My suggestion would be to write the
> really important messages (like the max arity thing) into a separate log
> file in addition to writing to stdout (or actually it should be stderr).
> In my opinion this should already be the default behavior, so a user can
> check any time after the tool has run whether there have been important
> issues. Before quitting, pyplusplus could even check if anything has
> been written into that log file and print a final message that there
> have been critical errors and the user should refer to that log file.

I don't mind the current behavior, as it would be easy for the user to
hook into the logging module and write out messages above a certain
importance to a separate log file. However, having some sort of summary
printed at the end of all the interesting things that a user may have to
take care of would be nice.

> >> In the code the maximum number (10) is hard-coded, shouldn't that at
> >> least be user-settable
> >
> > Yes, I fixed and committed.
> >
> >> (or, if possible, automatically determined).
> >
> > It is not worth that, too much work (make files, auto configuration,
> > scons, bjam, prj ...)
>
> Yes, I agree. If it's not inside an official header file then it's not
> worth trying to get hold of the config files and parse them.

What about generating:

    #define BOOST_MAX_ARITY 19

at the top of the file which contains those methods that are too long?
I'm not sure about this particular option, but maybe it is possible to
set this option on a per-file basis.

Lakin
From: Neal B. <ndb...@gm...> - 2006-06-21 12:27:33

On Wednesday 21 June 2006 7:56 am, Roman Yakovenko wrote:
> On 6/21/06, Neal Becker <ndb...@gm...> wrote:
> > [nbecker@nbecker4 pyplusplus_dev]$ python setup.py doc
> > running doc
> > Setting PYTHONPATH to :/usr/local/src/pygccxml/pygccxml_dev
> > Traceback (most recent call last):
> >   File "setup.py", line 105, in ?
> >     cmdclass = {"doc" : doc_cmd}
> >   File "/usr/lib64/python2.4/distutils/core.py", line 149, in setup
> >     dist.run_commands()
> >   File "/usr/lib64/python2.4/distutils/dist.py", line 946, in run_commands
> >     self.run_command(cmd)
> >   File "/usr/lib64/python2.4/distutils/dist.py", line 966, in run_command
> >     cmd_obj.run()
> >   File "setup.py", line 79, in run
> >     generate_doc()
> >   File "setup.py", line 48, in generate_doc
> >     from epydoc.docbuilder import build_doc_index
> > ImportError: No module named docbuilder
> >
> > rpm -q epydoc
> > epydoc-2.1-4
> >
> > Do I need a newer epydoc (3.0 alpha?)
>
> Yes: Epydoc 3.0 alpha 2

I made an SRPM for Fedora 5 and put it here:
http://nbecker.dyndns.org:8080/~nbecker/epydoc-3.0alpha2-1.src.rpm
From: Roman Y. <rom...@gm...> - 2006-06-21 11:56:25

On 6/21/06, Neal Becker <ndb...@gm...> wrote:
> [nbecker@nbecker4 pyplusplus_dev]$ python setup.py doc
> running doc
> Setting PYTHONPATH to :/usr/local/src/pygccxml/pygccxml_dev
> Traceback (most recent call last):
>   File "setup.py", line 105, in ?
>     cmdclass = {"doc" : doc_cmd}
>   File "/usr/lib64/python2.4/distutils/core.py", line 149, in setup
>     dist.run_commands()
>   File "/usr/lib64/python2.4/distutils/dist.py", line 946, in run_commands
>     self.run_command(cmd)
>   File "/usr/lib64/python2.4/distutils/dist.py", line 966, in run_command
>     cmd_obj.run()
>   File "setup.py", line 79, in run
>     generate_doc()
>   File "setup.py", line 48, in generate_doc
>     from epydoc.docbuilder import build_doc_index
> ImportError: No module named docbuilder
>
> rpm -q epydoc
> epydoc-2.1-4
>
> Do I need a newer epydoc (3.0 alpha?)

Yes: Epydoc 3.0 alpha 2

--
Roman Yakovenko
C++ Python language binding
http://www.language-binding.net/
From: Neal B. <ndb...@gm...> - 2006-06-21 11:53:23

[nbecker@nbecker4 pyplusplus_dev]$ python setup.py doc
running doc
Setting PYTHONPATH to :/usr/local/src/pygccxml/pygccxml_dev
Traceback (most recent call last):
  File "setup.py", line 105, in ?
    cmdclass = {"doc" : doc_cmd}
  File "/usr/lib64/python2.4/distutils/core.py", line 149, in setup
    dist.run_commands()
  File "/usr/lib64/python2.4/distutils/dist.py", line 946, in run_commands
    self.run_command(cmd)
  File "/usr/lib64/python2.4/distutils/dist.py", line 966, in run_command
    cmd_obj.run()
  File "setup.py", line 79, in run
    generate_doc()
  File "setup.py", line 48, in generate_doc
    from epydoc.docbuilder import build_doc_index
ImportError: No module named docbuilder

rpm -q epydoc
epydoc-2.1-4

Do I need a newer epydoc (3.0 alpha?)
From: Roman Y. <rom...@gm...> - 2006-06-21 10:22:41

On 6/21/06, Matthias Baas <ba...@ir...> wrote:
> Roman Yakovenko wrote:
> > I am aware of this problem (lots of output), so the following code
> > will write only the important things:
> >
> >     import logging
> >     from pyplusplus import module_builder
> >     module_builder.set_logger_level( logging.INFO )
>
> I have that already in my script, but I still get a line for each
> individual file that is written. Maybe these messages should be debug
> messages instead of info messages (I think a summary of the time it took
> to write the files, and maybe how many files were actually updated,
> would be enough in the standard case). (But apart from that, there is
> still a lot of output from my own script that would drown the message
> anyway.)

Do you want/have time to make pyplusplus messages really useful? Can you
fix the current situation?

> In my opinion, what should actually be "fixed" here is the way important
> information is passed to the user.

I agree with you. The amount of information that is written by pyplusplus
is so big that users just ignore it. This is the opposite of what I want.

> I think it's not so much of a problem that pyplusplus generated invalid
> code, but it's more serious that I missed the warnings about that (if I
> had seen them I wouldn't have tried compiling the code in the first
> place).

Would it be helpful if pyplusplus dumped the information to the generated
source files, too? So when you see a compilation error you can also read
an explanation.

> The problem is that a command line tool basically only has one single
> channel (stdout) to communicate with the user (having stderr as well is
> only a slight improvement, as they both refer to the same "channel" by
> default, namely the console window). My suggestion would be to write the
> really important messages (like the max arity thing) into a separate log
> file in addition to writing to stdout (or actually it should be stderr).

What is your definition/guideline of "really important"? Also, I agree
with you.

> In my opinion this should already be the default behavior, so a user can
> check any time after the tool has run whether there have been important
> issues. Before quitting, pyplusplus could even check if anything has
> been written into that log file and print a final message that there
> have been critical errors and the user should refer to that log file.

Writing the log to a file is quite an improvement. At the top of the file
we can write a small guide: search for the word WARNING or ERROR. Or
something like this.

> >> What if the user actually did increase BOOST_PYTHON_MAX_ARITY?
> >
> > Then he will update decl_wrappers.calldef_t.BOOST_PYTHON_MAX_ARITY to
> > the actual number and will get rid of the warning :-)
>
> Aha, there we are... :)
> (though I'd recommend to add functionality to the high level API to
> read/write this value, so that the details of where the attribute is
> actually stored in pyplusplus are encapsulated)

:-) I will add a new property to module_builder_t. That is what you
meant, right?

--
Roman Yakovenko
C++ Python language binding
http://www.language-binding.net/
From: Matthias B. <ba...@ir...> - 2006-06-21 09:41:35

Roman Yakovenko wrote:
> I am aware of this problem (lots of output), so the following code will
> write only the important things:
>
>     import logging
>     from pyplusplus import module_builder
>     module_builder.set_logger_level( logging.INFO )

I have that already in my script, but I still get a line for each
individual file that is written. Maybe these messages should be debug
messages instead of info messages (I think a summary of the time it took
to write the files, and maybe how many files were actually updated, would
be enough in the standard case).
(But apart from that, there is still a lot of output from my own script
that would drown the message anyway.)

>> But as pyplusplus already knows that a particular member won't compile,
>> shouldn't it then refuse to write it to the output file (as was the
>> case before)? Or even abort or something, so that the user definitely
>> knows that some action is required on his side?
>
> Now that you have had a bad experience (and I am sorry for that), you
> can give advice. In my opinion pyplusplus did the job, but of course I
> could be wrong.

Well, in general I'd say that knowingly producing code that won't compile
is bad practice. In this particular case I can live with it as it is,
because 1) there actually is a warning message (even though it might pass
unnoticed) and 2) once you know what the problem is, it is easy to either
ignore those methods or modify Boost.Python accordingly. For now, I just
ignored them, which was only one additional line in my script.

In my opinion, what should actually be "fixed" here is the way important
information is passed to the user. I think it's not so much of a problem
that pyplusplus generated invalid code, but it's more serious that I
missed the warnings about that (if I had seen them I wouldn't have tried
compiling the code in the first place).

The problem is that a command line tool basically only has one single
channel (stdout) to communicate with the user (having stderr as well is
only a slight improvement, as they both refer to the same "channel" by
default, namely the console window). My suggestion would be to write the
really important messages (like the max arity thing) into a separate log
file in addition to writing to stdout (or actually it should be stderr).
In my opinion this should already be the default behavior, so a user can
check any time after the tool has run whether there have been important
issues. Before quitting, pyplusplus could even check if anything has been
written into that log file and print a final message that there have been
critical errors and the user should refer to that log file.

>> In the code the maximum number (10) is hard-coded, shouldn't that at
>> least be user-settable
>
> Yes, I fixed and committed.
>
>> (or, if possible, automatically determined).
>
> It is not worth that, too much work (make files, auto configuration,
> scons, bjam, prj ...)

Yes, I agree. If it's not inside an official header file then it's not
worth trying to get hold of the config files and parse them.

>> What if the user actually did increase BOOST_PYTHON_MAX_ARITY?
>
> Then he will update decl_wrappers.calldef_t.BOOST_PYTHON_MAX_ARITY to
> the actual number and will get rid of the warning :-)

Aha, there we are... :)
(though I'd recommend to add functionality to the high level API to
read/write this value, so that the details of where the attribute is
actually stored in pyplusplus are encapsulated)

- Matthias -
From: Roman Y. <rom...@gm...> - 2006-06-20 21:17:50

On 6/20/06, Allen Bierbaum <al...@vr...> wrote:
> I ended up having to do something like this:
>
>     td.type.declaration.type.declaration

If I understand you right, you have a typedef to another typedef, right?
In this case you can use the function remove_alias.

> Once I get done with the binding generation I can point you at the
> script I am using, so you can see what I have to do to get to this type
> information.

--
Roman Yakovenko
C++ Python language binding
http://www.language-binding.net/
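The behaviour Roman describes, chasing a chain of typedefs down to the underlying declaration, can be modelled without pygccxml. The classes below are toy stand-ins for pygccxml's declaration objects, purely for illustration; in real code you would call pygccxml.declarations.remove_alias on the type instead of writing this loop yourself.

```python
# A toy model of typedef-chasing: follow typedef nodes until a
# non-typedef declaration is reached.  These classes are illustrative
# stand-ins, not pygccxml's real declaration types.

class class_t:
    def __init__(self, name):
        self.name = name

class typedef_t:
    def __init__(self, name, target):
        self.name = name
        self.target = target  # the declaration this typedef aliases

def resolve_alias(decl):
    """Chase typedefs until the underlying declaration is found."""
    seen = set()
    while isinstance(decl, typedef_t):
        if id(decl) in seen:  # guard against a cyclic typedef chain
            raise ValueError("cyclic typedef chain")
        seen.add(id(decl))
        decl = decl.target
    return decl

# typedef really_complex<...> simple_t;  typedef simple_t simpler_t;
real = class_t("really_complex< nested<int>, ugly<long>, 56 >")
simple_t = typedef_t("simple_t", real)
simpler_t = typedef_t("simpler_t", simple_t)

assert resolve_alias(simpler_t) is real
```

This is also why the manual chain td.type.declaration.type.declaration from the message works: each .declaration step peels off exactly one typedef level, whereas an alias-removing helper peels off all of them at once.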
From: Allen B. <al...@vr...> - 2006-06-20 16:39:27

Roman Yakovenko wrote:
> [...]
>
>> .type is the correct type, but it is not the decl_wrapper for the type
>> that the typedef aliases.
>>
>> For example:
>>
>>     typedef really_complex< nested<int>, ugly<long>, 56> simple_t;
>>
>>     td = ns.typedef("simple_t")
>>
>> td is a pygccxml.declarations.cpptypes.declared_t and td.declaration is
>> a pyplusplus.decl_wrapper.typedef_wrapper.typedef_t, but what I want is
>> the class_wrapper for "really_complex< nested<int>, ugly<long>, 56>".
>> How do I get to this object?
>
> td.type.declaration ?
>
> Maybe it is time to send me a small script, so I will be able to provide
> you a solution. It seems to me that we have a small misunderstanding.

I ended up having to do something like this:

    td.type.declaration.type.declaration

Once I get done with the binding generation I can point you at the script
I am using, so you can see what I have to do to get to this type
information.

-Allen

>> I probably won't need it until this weekend or the beginning of next
>> week, but if you are going to get busy I wanted to ask before you have
>> better things to do than answer e-mails from pesky people like me. :)
>
> :-)
From: Roman Y. <rom...@gm...> - 2006-06-20 13:35:31

On 6/20/06, Allen Bierbaum <al...@vr...> wrote:
> It would also be helpful if the message contained the name of the
> method, so the user knew which method was causing the problem.

You are right; fixed and committed.

--
Roman Yakovenko
C++ Python language binding
http://www.language-binding.net/
From: Roman Y. <rom...@gm...> - 2006-06-20 13:31:50

On 6/20/06, Allen Bierbaum <al...@vr...> wrote:
> Summary: How is the list of header files in the generated module code
> determined?

The code I am going to describe is in the module_creator/creator.py file;
see the _create_includes method. Basically, I extract all the files where
declarations have been declared, and then I add includes for those files
to the generated code.

> Details:
>
> I have my generation script running now, but it is producing code that
> will not compile. The problem is that the list of header files in the
> generated code contains file names that I never told the module builder
> to parse directly. The "extra" files are not meant to be included
> directly and cause the compiler to error out (they are .inl files that
> are included by the associated class .hpp file).
>
> It seems a little strange to me that the generated code would not just
> contain the same list of headers that I passed to the module builder
> originally. Those headers were used to find all the symbols, so it would
> make sense that those are the only headers that would be needed in the
> generated code.

You are absolutely right, and this should be fixed.

> Is there some way that I can override this behavior and tell pyplusplus
> to only put my original list of headers into the generated code?

Yes:

    mb = module_builder_t( ... )
    mb.build_code_creator( ... )
    mb.code_creator.replace_included_headers( <list of your includes, as plain strings> )

--
Roman Yakovenko
C++ Python language binding
http://www.language-binding.net/
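Independently of replace_included_headers shown in the reply above, the "keep internal .inl files out of the include list" step is a one-line filter. A small sketch; the helper name and the suffix list are illustrative, not part of any pyplusplus API.

```python
# Hypothetical helper: filter an extracted include list so that internal
# .inl files (included only via their .hpp) never reach the generated
# code.  Name and suffixes are illustrative.

def public_headers(extracted, allowed_suffixes=(".h", ".hpp")):
    """Keep only headers meant for direct inclusion."""
    return [h for h in extracted if h.endswith(allowed_suffixes)]

includes = ["matrix.hpp", "matrix.inl", "vector.h", "vector.inl"]
print(public_headers(includes))  # ['matrix.hpp', 'vector.h']
```

A list filtered this way could then be handed to replace_included_headers, rather than maintaining the header list by hand.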
From: Roman Y. <rom...@gm...> - 2006-06-20 13:20:20

On 6/20/06, Matthias Baas <ba...@ir...> wrote:
> Ah, I see. Sorry, actually there was that warning message, but it got
> drowned in the output, so I didn't notice it.

I am aware of this problem (lots of output), so the following code will
write only the important things:

    import logging
    from pyplusplus import module_builder
    module_builder.set_logger_level( logging.INFO )

> But as pyplusplus already knows that a particular member won't compile,
> shouldn't it then refuse to write it to the output file (as was the case
> before)? Or even abort or something, so that the user definitely knows
> that some action is required on his side?

Now that you have had a bad experience (and I am sorry for that), you can
give advice. In my opinion pyplusplus did the job, but of course I could
be wrong.

> In the code the maximum number (10) is hard-coded, shouldn't that at
> least be user-settable

Yes, I fixed and committed.

> (or, if possible, automatically determined).

It is not worth that, too much work (make files, auto configuration,
scons, bjam, prj ...).

> What if the user actually did increase BOOST_PYTHON_MAX_ARITY?

Then he will update decl_wrappers.calldef_t.BOOST_PYTHON_MAX_ARITY to the
actual number and will get rid of the warning :-)

--
Roman Yakovenko
C++ Python language binding
http://www.language-binding.net/
From: Allen B. <al...@vr...> - 2006-06-20 13:19:10

Matthias Baas wrote:
> Roman Yakovenko wrote:
> > On 6/19/06, Matthias Baas <ba...@ir...> wrote:
> >> So what is wrong with this method? (are there too many arguments? are
> >> there some required policies missing?)
> >> And another question is why this was ignored in previous versions.
> >> Which modification of pyplusplus has changed that?
> >
> > This should explain it to you:
> >
> > http://svn.sourceforge.net/viewcvs.cgi/pygccxml/pyplusplus_dev/pyplusplus/decl_wrappers/calldef_wrapper.py?r1=195&r2=214
>
> Ah, I see. Sorry, actually there was that warning message, but it got
> drowned in the output, so I didn't notice it.
>
> But as pyplusplus already knows that a particular member won't compile,
> shouldn't it then refuse to write it to the output file (as was the case
> before)? Or even abort or something, so that the user definitely knows
> that some action is required on his side?

It would also be helpful if the message contained the name of the method,
so the user knew which method was causing the problem.

-Allen

> In the code the maximum number (10) is hard-coded, shouldn't that at
> least be user-settable (or, if possible, automatically determined)? What
> if the user actually did increase BOOST_PYTHON_MAX_ARITY?
>
> - Matthias -
>
> _______________________________________________
> pygccxml-development mailing list
> pyg...@li...
> https://lists.sourceforge.net/lists/listinfo/pygccxml-development
From: Allen B. <al...@vr...> - 2006-06-20 13:15:31

Summary: How is the list of header files in the generated module code
determined?

Details:

I have my generation script running now, but it is producing code that
will not compile. The problem is that the list of header files in the
generated code contains file names that I never told the module builder
to parse directly. The "extra" files are not meant to be included
directly and cause the compiler to error out (they are .inl files that
are included by the associated class .hpp file).

It seems a little strange to me that the generated code would not just
contain the same list of headers that I passed to the module builder
originally. Those headers were used to find all the symbols, so it would
make sense that those are the only headers that would be needed in the
generated code.

Is there some way that I can override this behavior and tell pyplusplus
to only put my original list of headers into the generated code?

-Allen
From: Matthias B. <ba...@ir...> - 2006-06-20 13:03:18

Roman Yakovenko wrote:
> On 6/19/06, Matthias Baas <ba...@ir...> wrote:
>> So what is wrong with this method? (are there too many arguments? are
>> there some required policies missing?)
>> And another question is why this was ignored in previous versions.
>> Which modification of pyplusplus has changed that?
>
> This should explain it to you:
>
> http://svn.sourceforge.net/viewcvs.cgi/pygccxml/pyplusplus_dev/pyplusplus/decl_wrappers/calldef_wrapper.py?r1=195&r2=214

Ah, I see. Sorry, actually there was that warning message, but it got
drowned in the output, so I didn't notice it.

But as pyplusplus already knows that a particular member won't compile,
shouldn't it then refuse to write it to the output file (as was the case
before)? Or even abort or something, so that the user definitely knows
that some action is required on his side?

In the code the maximum number (10) is hard-coded, shouldn't that at
least be user-settable (or, if possible, automatically determined)? What
if the user actually did increase BOOST_PYTHON_MAX_ARITY?

- Matthias -
From: Roman Y. <rom...@gm...> - 2006-06-20 04:27:34

On 6/20/06, Allen Bierbaum <al...@vr...> wrote:
> Roman Yakovenko wrote:
> > On 5/31/06, Allen Bierbaum <al...@vr...> wrote:
> >> Roman:
> >>
> >> Taking into account that you will be out of contact soon, there is
> >> one feature I know I am going to need that I used in the old
> >> prototype API. I must apologize that I don't remember the exact
> >> details, but what it boiled down to was being able to look up a fully
> >> resolved type from a typedef.
> >>
> >> So if I had something like:
> >>
> >>     typedef really_complex< nested<int>, ugly<long>, 56> simple_t;
> >>
> >> I would need to be able to look it up from simple_t to get the type
> >> information. It is the mTypeDefMap inside the old pypp_api (it is
> >> still in experimental).
> >>
> >> The code to look at is in parse() at the end:
> >>
> >>     typedef_decls = declarations.make_flatten(parsed_decls)
> >>     typedef_decls = decls = filter(
> >>         lambda x: (isinstance( x, declarations.typedef_t ) and
> >>                    not x.name.startswith('__') and
> >>                    x.location.file_name != "<internal>"),
> >>         typedef_decls )
> >>
> >>     self.mTypeDefMap = {}
> >>     for d in typedef_decls:
> >>         type_def_name = d.name
> >>         full_name = declarations.full_name(d)
> >>         if full_name.startswith("::"):  # Remove the base namespace
> >>             full_name = full_name[2:]
> >>         real_type_name = d.type.decl_string
> >>         if real_type_name.startswith("::"):  # Remove base namespace
> >>             real_type_name = real_type_name[2:]
> >>         self.mTypeDefMap[full_name] = real_type_name
> >>
> >> So my question is, do I have another "easy" way to do this with the
> >> current API?
> >
> > The typedef class has 2 properties: name and type. I believe type is
> > what you are looking for.
>
> .type is the correct type, but it is not the decl_wrapper for the type
> that the typedef aliases.
>
> For example:
>
>     typedef really_complex< nested<int>, ugly<long>, 56> simple_t;
>
>     td = ns.typedef("simple_t")
>
> td is a pygccxml.declarations.cpptypes.declared_t and td.declaration is
> a pyplusplus.decl_wrapper.typedef_wrapper.typedef_t, but what I want is
> the class_wrapper for "really_complex< nested<int>, ugly<long>, 56>".
> How do I get to this object?

td.type.declaration ?

Maybe it is time to send me a small script, so I will be able to provide
you a solution. It seems to me that we have a small misunderstanding.

> -Allen
>
> >> I probably won't need it until this weekend or the beginning of next
> >> week, but if you are going to get busy I wanted to ask before you
> >> have better things to do than answer e-mails from pesky people like
> >> me. :)
> >
> > :-)

--
Roman Yakovenko
C++ Python language binding
http://www.language-binding.net/
From: Allen B. <al...@vr...> - 2006-06-19 22:10:39
|
Roman Yakovenko wrote:
> On 5/31/06, Allen Bierbaum <al...@vr...> wrote:
>> Roman:
>>
>> Taking into account that you will be out of contact soon, there is one
>> feature I know I am going to need that I used in the old prototype API.
>> I must apologize that I don't remember the exact details, but what it
>> boiled down to was being able to look up a fully resolved type from a
>> typedef.
>>
>> So if I had something like:
>>
>>   typedef really_complex< nested<int>, ugly<long>, 56> simple_t;
>>
>> I would need to be able to look it up from simple_t to get the type
>> information. It is the mTypeDefMap inside the old pypp_api (it is
>> still in experimental).
>>
>> The code to look at is in parse() at the end:
>>
>>   typedef_decls = declarations.make_flatten(parsed_decls)
>>   typedef_decls = decls = filter(
>>       lambda x: (isinstance(x, declarations.typedef_t) and
>>                  not x.name.startswith('__') and
>>                  x.location.file_name != "<internal>"),
>>       typedef_decls)
>>
>>   self.mTypeDefMap = {}
>>   for d in typedef_decls:
>>       type_def_name = d.name
>>       full_name = declarations.full_name(d)
>>       if full_name.startswith("::"):  # Remove the base namespace
>>           full_name = full_name[2:]
>>       real_type_name = d.type.decl_string
>>       if real_type_name.startswith("::"):  # Remove base namespace
>>           real_type_name = real_type_name[2:]
>>       self.mTypeDefMap[full_name] = real_type_name
>>
>> So my question is, do I have another "easy" way to do this with the
>> current API?
>
> Typedef class has 2 properties: name and type. I believe type is what
> you are looking for.

.type is the correct type, but it is not the decl_wrapper for the type
that the typedef aliases. For example:

  typedef really_complex< nested<int>, ugly<long>, 56> simple_t;

  td = ns.typedef("simple_t")

td is a pygccxml.declarations.cpptypes.declared_t and td.declaration is
a pyplusplus.decl_wrapper.typedef_wrapper.typedef_t, but what I want is
the class_wrapper for "really_complex< nested<int>, ugly<long>, 56>".

How do I get to this object?

-Allen

>> I probably won't need it until this weekend or the beginning of next
>> week, but if you are going to get busy I wanted to ask before you have
>> better things to do than answer e-mails from pesky people like me. :)
>
> :-)
|
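For readers hitting the same question: the missing hop is from the typedef's type node back to its declaration, repeated until the declaration is no longer a typedef. The snippet below is a self-contained model of that traversal; the Class/DeclaredType/Typedef classes are simplified, hypothetical stand-ins for pygccxml's class_t/declared_t/typedef_t, not the real API.

```python
class Class:
    """Stand-in for pygccxml's class_t: the declaration we want to reach."""
    def __init__(self, name):
        self.name = name

class DeclaredType:
    """Stand-in for declared_t: a type node pointing back at a declaration."""
    def __init__(self, declaration):
        self.declaration = declaration

class Typedef:
    """Stand-in for typedef_t: has a name and an aliased type."""
    def __init__(self, name, type_):
        self.name = name
        self.type = type_

def resolve_typedef(decl):
    """Follow typedef -> type -> declaration links until a non-typedef
    declaration (e.g. a class) is reached."""
    while isinstance(decl, Typedef):
        decl = decl.type.declaration
    return decl

# really_complex< nested<int>, ugly<long>, 56 > aliased twice:
cls = Class("really_complex< nested<int>, ugly<long>, 56 >")
simple_t = Typedef("simple_t", DeclaredType(cls))
alias_t = Typedef("alias_t", DeclaredType(simple_t))

assert resolve_typedef(simple_t) is cls
assert resolve_typedef(alias_t) is cls  # chained typedefs resolve too
```

With real pygccxml objects the loop body would be the same `decl = decl.type.declaration` hop; pygccxml's actual attribute layout may differ, so treat this purely as a sketch of the idea.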
From: Roman Y. <rom...@gm...> - 2006-06-19 17:43:31
|
On 6/19/06, Allen Bierbaum <al...@vr...> wrote:
> How are people generating bindings against template classes using the
> module_builder interface?

It has nothing to do with module_builder. Please read this:
http://www.language-binding.net/pygccxml/design.html

> The old experimental pypp api had a helper method of the ModuleBuilder
> that allowed the user to pass a template type and an alias name. Then
> when the module builder parse() method was run the module builder would
> automatically generate the needed code to instantiate the template type
> and create a typedef from the template type to the alias so the user
> could use the alias to look up the type later.

You can still achieve the behaviour you want by creating a convenience
function on top of the create_text_fc function.

> If you are currently exposing template classes with pyplusplus, are you
> currently just manually coding something like this?

Yes. Why force the user to learn a new API in order to do something he
already has a better way to do? I think the better way is to create a
C++ header and then just include it. You can even generate it before you
invoke pyplusplus. Obviously you think differently; do you mind sharing
your use case?

> Does anyone have any code that could be turned into something reusable
> to put into pyplusplus?

You have a chance to contribute to pygccxml :-).

--
Roman Yakovenko
C++ Python language binding
http://www.language-binding.net/
|
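Roman's "create a C++ header and include it" approach is easy to automate with a few lines of Python. The helper below is a hypothetical sketch (generate_instantiation_header is not part of pyplusplus or pygccxml); it emits a header that typedefs an alias for the template instantiation and references the alias through sizeof so the compiler actually instantiates the template:

```python
def generate_instantiation_header(template_type, alias, guard=None):
    """Emit C++ header text that typedefs `alias` to `template_type` and
    forces the template to be instantiated, so GCC-XML will see the class."""
    guard = guard or ("ALIASES_%s_HPP" % alias.upper())
    lines = [
        "#ifndef %s" % guard,
        "#define %s" % guard,
        "",
        "typedef %s %s;" % (template_type, alias),
        "// sizeof() forces instantiation without generating object code",
        "inline int __instantiate_%s() { return sizeof(%s); }" % (alias, alias),
        "",
        "#endif // %s" % guard,
    ]
    return "\n".join(lines)

header = generate_instantiation_header(
    "really_complex< nested<int>, ugly<long>, 56 >", "simple_t")
print(header)
```

Writing the returned string to a file and adding that file to the set of parsed headers gives the alias lookup Allen described, without any new module_builder API.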
From: Roman Y. <rom...@gm...> - 2006-06-19 17:31:47
|
On 6/19/06, Matthias Baas <ba...@ir...> wrote:
> So what is wrong with this method? (Are there too many arguments? Are
> there some required policies missing?)
> And another question is why was this ignored in previous versions? Which
> modification of pyplusplus has changed that?

This should explain it to you:
http://svn.sourceforge.net/viewcvs.cgi/pygccxml/pyplusplus_dev/pyplusplus/decl_wrappers/calldef_wrapper.py?r1=195&r2=214

--
Roman Yakovenko
C++ Python language binding
http://www.language-binding.net/
|
From: Allen B. <al...@vr...> - 2006-06-19 15:46:48
|
How are people generating bindings against template classes using the
module_builder interface?

The old experimental pypp api had a helper method of the ModuleBuilder
that allowed the user to pass a template type and an alias name. Then,
when the module builder's parse() method was run, the module builder
would automatically generate the needed code to instantiate the template
type and create a typedef from the template type to the alias, so the
user could use the alias to look up the type later.

If you are currently exposing template classes with pyplusplus, are you
currently just manually coding something like this? Does anyone have any
code that could be turned into something reusable to put into
pyplusplus?

-Allen
|
From: Matthias B. <ba...@ir...> - 2006-06-19 15:04:32
|
Roman Yakovenko wrote:
>> Is this a bug in pyplusplus or do I have to specify anything related to
>> this in my driver script?
>
> Cache !!! :-(((( We need to implement some mechanism that will prevent
> this in the future.

Argh, yes, sorry! (Maybe it would be possible to store some sort of
signature of the classes (such as the number of attributes) and compare
this signature with the signature of the imported classes...?)

Now the script runs fine, but there's an error during compilation. What
puzzles me is that the methods that seem to trigger the error were not
wrapped by the previous version of pyplusplus (even though they should
have been; I was not ignoring them). For example, it's a method like
this:

  bool MFnMesh::closestIntersection(
      const MFloatPoint&     raySource,
      const MFloatVector&    rayDirection,
      const MIntArray*       faceIds,
      const MIntArray*       triIds,
      bool                   idsSorted,
      MSpace::Space          space,
      float                  maxParam,
      bool                   testBothDirections,
      MMeshIsectAccelParams* accelerator,
      MFloatPoint&           hitPoint,
      float*                 hitRayParam,
      int*                   hitFace,
      int*                   hitTriangle,
      float*                 hitBary1,
      float*                 hitBary2,
      float                  tolerance = 1e-6,
      MStatus*               ReturnStatus = NULL )

which is now turned into the following code:

  MFnMesh_exposer.def("closestIntersection"
      , &::MFnMesh::closestIntersection
      , ( bp::arg("raySource"), bp::arg("rayDirection"), bp::arg("faceIds")
        , bp::arg("triIds"), bp::arg("idsSorted"), bp::arg("space")
        , bp::arg("maxParam"), bp::arg("testBothDirections")
        , bp::arg("accelerator"), bp::arg("hitPoint"), bp::arg("hitRayParam")
        , bp::arg("hitFace"), bp::arg("hitTriangle"), bp::arg("hitBary1")
        , bp::arg("hitBary2")
        , bp::arg("tolerance")=9.99999999999999954748111825886258685613938723691e-7
        , bp::arg("ReturnStatus")=bp::object() )
      , bp::default_call_policies() );

Compiling this results in a lengthy and somewhat difficult to read error
message. The actual error goes something like this:

  /sw/i386_linux-2.0_glibc2/boost-1.33.1/include/boost-1_33_1/boost/python/class.hpp:536:
  error: no matching function for call to `get_signature(bool
  (MFnMesh::*&)(const MFloatPoint&, const MFloatVector&, const MIntArray*,
  const MIntArray*, bool, MSpace::Space, float, bool,
  MMeshIsectAccelParams*, MFloatPoint&, float*, int*, int*, float*, float*,
  float, MStatus*), MFnMesh*)'

  /sw/i386_linux-2.0_glibc2/boost-1.33.1/include/boost-1_33_1/boost/python/class.hpp:
  In member function `void boost::python::class_<T, X1, X2,
  X3>::def_impl(T*, const char*, Fn, const Helper&, ...) [with T =
  MFnMesh, Fn = bool (MFnMesh::*)(const MFloatPoint&, const MFloatVector&,
  const MIntArray*, const MIntArray*, bool, MSpace::Space, float, bool,
  MMeshIsectAccelParams*, bool, MFloatPointArray&, MFloatArray*,
  MIntArray*, MIntArray*, MFloatArray*, MFloatArray*, float, MStatus*),
  Helper = boost::python::detail::def_helper<
  boost::python::detail::keywords<18>,
  boost::python::default_call_policies,
  boost::python::detail::not_specified,
  boost::python::detail::not_specified>, W = MFnMesh_wrapper, X1 =
  boost::python::bases<MFnDagNode, mpl_::void_, mpl_::void_, mpl_::void_,
  mpl_::void_, mpl_::void_, mpl_::void_, mpl_::void_, mpl_::void_,
  mpl_::void_>, X2 = boost::python::detail::not_specified, X3 =
  boost::python::detail::not_specified]':

So what is wrong with this method? (Are there too many arguments? Are
there some required policies missing?) And another question: why was
this ignored in previous versions? Which modification of pyplusplus has
changed that?

- Matthias -
|
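One plausible answer to "are there too many arguments?": Boost.Python only generates get_signature overloads up to BOOST_PYTHON_MAX_ARITY parameters, which defaults to 15, and closestIntersection takes 17. If that is the cause, the options are to define BOOST_PYTHON_MAX_ARITY to a larger value before including boost/python, or to detect and exclude such methods at generation time. The filter below is a hedged sketch of the latter; FakeCalldef is a made-up stand-in for a pyplusplus call-def wrapper, assumed only to expose an `arguments` list:

```python
MAX_ARITY = 15  # Boost.Python's default BOOST_PYTHON_MAX_ARITY

def find_wide_calldefs(calldefs, max_arity=MAX_ARITY):
    """Return the call-defs whose argument count exceeds Boost.Python's
    arity limit, so the caller can exclude them (or raise the limit by
    defining BOOST_PYTHON_MAX_ARITY before including boost/python)."""
    return [c for c in calldefs if len(c.arguments) > max_arity]

class FakeCalldef:
    """Hypothetical stand-in for a pyplusplus call-def wrapper."""
    def __init__(self, name, nargs):
        self.name = name
        self.arguments = [None] * nargs

decls = [FakeCalldef("closestIntersection", 17),
         FakeCalldef("numVertices", 0)]
too_wide = find_wide_calldefs(decls)
print([c.name for c in too_wide])  # -> ['closestIntersection']
```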
From: Roman Y. <rom...@gm...> - 2006-06-19 08:52:19
|
On 6/19/06, Matthias Baas <ba...@ir...> wrote:
> Hi,

Good morning. Good to hear from you again.

> after being too busy the last couple of weeks to check out the new
> pyplusplus stuff (and move the experimental stuff to the contrib
> directory) I finally did an update on the code and now I get the below
> exception when I try to create the Maya bindings. My script is still
> unchanged and has worked with previous versions. I also haven't
> specified anything that has to do with the indexing suite, so I would
> have expected not to trigger anything that is related to this feature.
>
> Is this a bug in pyplusplus or do I have to specify anything related to
> this in my driver script?

Cache !!! :-(((( We need to implement some mechanism that will prevent
this in the future.

> - Matthias -

--
Roman Yakovenko
C++ Python language binding
http://www.language-binding.net/
|
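The signature idea Matthias suggests later in the thread (store a fingerprint of what was parsed and compare it on load) can be modelled in a few lines. The sketch below is not pygccxml's actual cache code; it keys the cached declarations on a hash of the parser configuration plus each header's path and modification time, and re-parses whenever the signature no longer matches:

```python
import hashlib
import os
import pickle

def cache_signature(header_files, config_repr):
    """Hash everything the parse result depends on: the configuration
    string and each header's path and last-modification time."""
    h = hashlib.md5()
    h.update(config_repr.encode("utf-8"))
    for path in sorted(header_files):
        h.update(path.encode("utf-8"))
        h.update(str(os.path.getmtime(path)).encode("utf-8"))
    return h.hexdigest()

def load_or_parse(cache_file, header_files, config_repr, parse):
    """Reuse the cached declarations only while the signature matches;
    otherwise call parse() and refresh the cache."""
    sig = cache_signature(header_files, config_repr)
    if os.path.exists(cache_file):
        with open(cache_file, "rb") as f:
            cached_sig, decls = pickle.load(f)
        if cached_sig == sig:
            return decls               # cache hit: signatures agree
    decls = parse()                    # cache miss or stale: re-parse
    with open(cache_file, "wb") as f:
        pickle.dump((sig, decls), f)
    return decls
```

Touching a header (or changing the configuration) changes the signature, so a stale cache can never silently shadow new declarations, which is exactly the failure mode Matthias ran into.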