From: Mike808 <mi...@ne...> - 2000-08-07 04:49:48
In the maintenance branch, I found several instances of the following kind of code:

    %{$obj->{hashref}} = (%{$obj->{hashref}}, $name => $value);

On the face of it, this appears to perform a copy of the entire hash on every add. It would seem to me that the more direct coding of

    $obj->{hashref}->{$name} = $value;

would result in faster execution times. I ran some benchmarks, and found this to be the case. The function new_obj() is just an initializer to set up a dummy object data structure of some fixed complexity.

    perl xslt-test1.pl 1000
               Rate    orig  nocopy new_obj
    orig     14.7/s      --    -97%    -97%
    nocopy    444/s   2924%      --     -7%
    new_obj   478/s   3156%      8%      --

So you can see that avoiding the copy-on-add is almost a 30x speedup. Is there some reason I should *not* implement this throughout the module?

Mike808

Here's the code I threw together to test this out:

    #!D:\Perl\bin\perl
    use Benchmark qw( timethese cmpthese );

    my %subs;
    sub add_a_sub { $subs{$_[0]} = \&{"$_[0]"}; }

    # Arguments: Benchmark_iterations, items_to_preload, new_items_to_add
    my ($iterations, $preload, $count) = @ARGV;
    $iterations ||= 100;
    $preload    ||= 100;
    $count      ||= 100;

    # Set up the dummy object
    sub new_obj {
        my $obj = {};
        foreach (1..$preload) { $obj->{"M-$_"} = [0..10]; }
        $obj->{hashref} = {};
        foreach (1..$preload) { $obj->{hashref}->{"K-$_"} = "V-$_"; }
        return $obj;
    }
    add_a_sub("new_obj");

    sub orig {
        my $obj = new_obj;
        foreach (1..$count) {
            %{$obj->{hashref}} = (%{$obj->{hashref}}, "NK-$_" => "NV-$_");
        }
    }
    add_a_sub("orig");

    sub nocopy {
        my $obj = new_obj;
        foreach (1..$count) { $obj->{hashref}->{"NK-$_"} = "NV-$_"; }
    }
    add_a_sub("nocopy");

    # Let's see how they fare
    {
        local *STDOUT;
        open(STDOUT, ">NUL") or die;
        $results = timethese( $iterations, \%subs, 'none' );
    }
    cmpthese($results);

    1;
From: Bruce L. (c) <bl...@sa...> - 2000-08-04 05:39:23
Is there any way of getting the file name of the XML source/input file as a variable inside an XSLT style sheet?

Thanks,
Bruce.

Bruce Long (c)
Sapient | Architects for the New Economy
Level 13, 60 Castlereagh Street    Phone: +612 9210.2067
Sydney, NSW 2000                   Fax:   +612 9210.2500
Australia                          Email: <bl...@sa...>
From: <ma...@ev...> - 2000-08-01 02:06:52
>>>>> "M" == Mike808 <mi...@ne...> writes:

M> "Mark A. Hershberger" wrote:
>> With the `use constant' available, the code remains
>> maintainable: $Self->{debug} becomes $Self->[DEBUG] is all.

M> Yeah, but maintaining subclasses and inheritance is a
M> nightmare. Talk to Michael Schwern or Damian Conway on
M> pseudo-hashes as to why.

Ahh... subclasses... Now I see.

Mark.
From: Mike808 <mi...@ne...> - 2000-07-31 23:27:01
"Mark A. Hershberger" wrote:
> With the `use constant' available, the code remains maintainable:
>     $Self->{debug}
> becomes
>     $Self->[DEBUG]
> is all.

Yeah, but maintaining subclasses and inheritance is a nightmare. Talk to Michael Schwern or Damian Conway on pseudo-hashes as to why.

> Now... How can I speed up XML::DOM... ?

Um, maybe 'package XML::DOM; # rewrite here' ... Too bad we don't have programming by contract and TMTOWTDI in CPAN instead of first-come-gets-the-namespace. Oops. Wrong rant...

Mike808
From: Mike808 <mi...@ne...> - 2000-07-31 23:23:26
> I don't care even on 5.005 (since that version inlines as well). Are you
> sure 5.004 doesn't inline? (I'm 20% convinced that it does).

Er, the earliest the constant stuff started working right was 5.003_96. This is from Tom Phoenix's require of that version in Constant.pm. I worked with him extensively on the improvements for 5.6 that were a direct result of my porting the Java XSLT engine to Perl (where there are loads of constants, not just XML::DOM's). I'm half-tempted to bring over XSLConstant.pm, since I found it pretty useful: it makes the code much easier to read, and you can flip back and forth between function call and scalar, depending on how much of a hurry you're in.

> However, while we are at the topic: It is nice to support stone-age perls
> like 5.004, but there shouldn't be speed penalties for everybody else "just"
> because 5.004 is slower.

Sure, but there are massive problems with array implementations of objects and using constants to make the code readable. The short of it is that the road to inheritance is wrought with much peril, and maintenance quickly assimilates all resources to keep everything from breaking. See recent discussions of fields.pm, base.pm, and friends with Michael Schwern for some clues.

> In Gimp, for example, 5.004 _works_, for those people who need it, but
> when there is a choice I always go for "works best with latest perl".

I'm kind of wondering if XML::DOM or XML::Parser don't have requirements that would supersede ours. But then theirs may be _very_ undocumented. It's a shame, since all it takes is a little 'require $PERL_VERSION_REQUIRED;' in the code. When we decide what our LCD is, it will be in XML::XSLT and clearly documented. So far 5.004 seems reasonable, and gets us out of trouble with constant.pm.

> *grumble* :() I'd rather like to contribute something with more
> substance... hmm..

Gift certificates or hardware would be nice.... :)

> > filtering the line ending cruft - yes that was a mess. I don't know
> > what they were thinking)
> They were going for portability, no doubt ;)

But \n *is* portable (except for the cases that the code block I ripped from CGI.pm takes care of - namely where the HTTP spec explicitly calls for CR LF CR LF (\015\012\015\012) to separate headers from the message, and Macs and VMS and EBCDIC machines get weird on us).

> Maybe speeding up XML::DOM

I've preached extensively on Perl/CPAN's inability to support multiple implementations for namespaces that represent a program-by-contract (i.e. an API implementation). And Enno's been taking lots of flak on DOM's bloat in many other forums besides ours. I just don't have the time to start off another XML::DOM project. But if you start one, feel free to claim the XML::DOM namespace (just don't name the bundle that). Maybe those who insist that competition for namespaces is bad (due to the limitations of CPAN being unable to handle TMTOWTDI), and that therefore we shouldn't compete, will finally get a clue. But I digress.

> Ah, and adding convenience methods (that could even be fast) into
> XML::DOM or some superclass might also be interesting...

There's nothing stopping us from simply 'package XML::DOM;'-ing and doing what we like.

Thanks for the comments.

Mike808
From: <ma...@ev...> - 2000-07-31 17:19:47
>>>>> "ML" == Marc Lehmann <pc...@go...> writes:

>> Now... How can I speed up XML::DOM... ?

ML> Basically by not using it.

Agreed. Bron, I assume you've looked over what I've done by now. How soon do you think you can share your new implementation with us? (Assuming I haven't totally screwed up the module.)

ML> I do not say that XML::Grove (which has its own problems)
ML> should be used,

What problems do you know of in XML::Grove?

ML> Another road that is usually very worthwhile to pursue is to
ML> compile a stylesheet (that supposedly is used often) into a
ML> perl program.

I was thinking about this as well.

ML> I do that in PApp::XML,

Wow! Thanks for mentioning your module. It looks almost exactly like what I was thinking of building. I've been playing around with HTML::Mason, but it doesn't do the code-separating thing so well. Some way to mix XSLT and perl (via namespaces) is perfect, and I'm glad to see that you have done some work implementing this.

One question: have you tried using this with mod_perl?

Thanks,
Mark.
From: Marc L. <pc...@go...> - 2000-07-31 14:53:12
On Mon, Jul 31, 2000 at 09:09:58AM -0500, "Mark A. Hershberger" <ma...@ev...> wrote:
> We haven't heard from anyone using anything earlier than 5.004 and it
> supports `use constant', so I'm going to stick with my arrayref

... and any optimizations did and will go into constant.pm.

> Now... How can I speed up XML::DOM... ?

Basically by not using it. Look at all this array-copying for example:

    @children = $current_xml_node->getChildNodes;
    $children = \@children;
    ...
    foreach my $child (@$children) {
        my $node_type = $child->getNodeType;

and calling methods for each and everything is a major slow-down. Using XML::Grove, the above would be something like:

    my $children = $current_xml_node->{Contents};
    for my $child (@$children) {
        my $node_type = ref $child;
    }

I do not say that XML::Grove (which has its own problems) should be used, just that DOM is a very slow way to do things in perl. Enhancing XML::DOM (to create a kind of superdom ;) to make its use more perl-like would speed up processing a lot, at the cost of not being able to translate XSLT.pm 1-1 into C (jepp ;).

So, one should compare the advantages of DOM (the only one coming to my mind is that it is a standard) to other forms of representation (e.g. perl-like, as XML::Grove is). Ah, well. The chance that other modules XSLT could use are based on XML::DOM is also high.

Another road that is usually very worthwhile to pursue is to compile a stylesheet (that supposedly is used often) into a perl program. I do that in PApp::XML, for xml fragments that can contain executable code, to create something that approaches the speed of a simple print. I think compiling (e.g.) an xpath search into native perl code has a lot of room for improvement; however, care should be taken to keep the number of subs created to a minimum (function calling overhead is relatively large).

> Mark.

--
Marc Lehmann <pc...@op...>
The choice of a GNU generation
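[Editor's sketch: the "compile a stylesheet into a perl program" idea above can be illustrated with a toy template compiler. This is not PApp::XML's actual mechanism; the placeholder syntax and helper name are made up for illustration. The point is that the template is walked once at "compile" time, producing a plain perl sub that runs at near-print speed.]

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical helper: "compile" a {name}-style template into a perl sub
# once, then call the sub repeatedly instead of re-parsing the template.
sub compile_template {
    my ($template) = @_;
    # Split on placeholders, turning literals into q{} strings and
    # placeholders into hash lookups, then eval the generated code.
    my $body = join ' . ',
        map { /^\{(\w+)\}$/ ? "\$v{$1}" : "q{$_}" }
        grep { length } split /(\{\w+\})/, $template;
    my $sub = eval "sub { my %v = \@_; return $body; }" or die $@;
    return $sub;
}

my $render = compile_template("Hello, {who}! You have {n} messages.\n");
print $render->(who => "world", n => 3);
```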
From: <ma...@ev...> - 2000-07-31 14:10:09
>>>>> "ML" == Marc Lehmann <pc...@go...> writes:

ML> I don't care even on 5.005 (since that version inlines as
ML> well). Are you sure 5.004 doesn't inline? (I'm 20% convinced
ML> that it does).

From the benchmarks I posted a while back, it does appear to. We haven't heard from anyone using anything earlier than 5.004, and it supports `use constant', so I'm going to stick with my arrayref implementation for the sake of whatever minor speedup it gives us. With `use constant' available, the code remains maintainable:

    $Self->{debug}

becomes

    $Self->[DEBUG]

is all.

Now... How can I speed up XML::DOM... ?

Mark.
From: Marc L. <pc...@go...> - 2000-07-31 08:50:08
On Sun, Jul 30, 2000 at 02:43:19PM -0500, Mike808 <mi...@ne...> wrote:
> > Any reason why my patch has been ignored twice?
>
> I don't think it's been ignored Mark. The line endings have been
> fixed (at least in the maint tree). I've removed the extraneous /i
> on matches - I left in questionable ones (or ones I don't understand
> yet) just in case.

Hmm, sorry, I didn't see it in the ChangeLog and didn't receive a comment like "hey, you S.O.B., we already did that!" ;)

> I can't speak to Mark H's speed on getting them into his 0.30 dev tree.

No problem... you have to understand that my patch was applied a long time ago, and was missing in CVS, so I was wondering. And since I didn't receive any complaints I even doubted whether my e-mail was received at all!

> We've been discussing how to handle the constant issue, but it's not

Oh, I didn't want to touch the constant issue.

> cut and dry, since older versions of perl didn't inline things, and
> function calls are slower than scalars which are a tad slower than
> lexicals. I know you don't care, 'cuz you're on 5.6, but there

I don't care even on 5.005 (since that version inlines as well). Are you sure 5.004 doesn't inline? (I'm 20% convinced that it does).

However, while we are at the topic: it is nice to support stone-age perls like 5.004, but there shouldn't be speed penalties for everybody else "just" because 5.004 is slower. In Gimp, for example, 5.004 _works_, for those people who need it, but when there is a choice I always go for "works best with latest perl". IMHO this is very reasonable ;) Although I admit more people might be forced to use XML::XSLT in "hostile" environments (no control over perl version, and bofhs administrating the server ;)

> using mod_perl and XML::XSLT, and last I heard, mod_perl and 5.6
> aren't stable yet.

Never found a problem. But mod_perl *is* stable with 5.005.

> And just for bitching, you go in the CREDITS file. :)

*grumble* :() I'd rather like to contribute something with more substance... hmm..

> filtering the line ending cruft - yes that was a mess. I don't know
> what they were thinking)

They were going for portability, no doubt ;)

> functional. But, that's the beauty of Perl in that you can be as OO
> as you want to be.

Well, I am astonished that it works that well. Together with *very* aggressive caching it is even useful. However, the biggest ugliness IMHO is the DOM concept, which just does not fit nicely into perl. And trees also do not fit nicely into perl.

Maybe speeding up XML::DOM (by writing key routines in XS, or even putting part of the data structures into XS) might be more worthwhile, since, ugly as it might be, DOM is the standard and is used by most xml processors as a standardized internal representation. Ah, and adding convenience methods (that could even be fast) into XML::DOM or some superclass might also be interesting, since DOM requires a lot of method calls, and method calls are a performance penalty.

But I'm drifting away. Maybe I should bitch at some other developers elsewhere...

--
Marc Lehmann <pc...@op...>
The choice of a GNU generation
From: Bron G. <br...@ne...> - 2000-07-31 03:29:46
I'm getting:

    brong@sma~/xslcvs/XML-XSLT>cvs update -r cpan-0_30
    br...@cv...'s password:
    cvs [server aborted]: cannot write /cvsroot/xmlxslt/CVSROOT/val-tags: Permission denied
    brong@sma~/xslcvs/XML-XSLT>

This is somewhat annoying, since it means that I can't actually get at these files. Grr. Unless I'm using the wrong name?

--
Bron ( going to merge the stuff I've been playing with for XPath into it )
From: <ma...@ev...> - 2000-07-31 02:09:53
>>>>> "ML" == Marc Lehmann <pc...@go...> writes:

ML> Any reason why my patch has been ignored twice?

Marc,

I assume you are referring to the patch to Makefile.PL and to the patch to XSLT.pm. I've incorporated your patch for Makefile.PL to the main branch (rev 1.5 in CVS).

Also, in looking over your patch for XSLT.pm, I confess that I didn't take time to incorporate it. I am, however, guilty of re-inventing your wheel. There are a few elements to your patch:

    Your Patch                         My Status
    1. replace $/ with \n.             now done. (I've created debug and
                                       warn methods, so the change was
                                       one or two lines.)
    2. make REs case sensitive         now done. (I knew this needed to
                                       be done, but you goaded me into
                                       action.)
    3. replace REs with stringwise     re-invented.
       matches in _evaluate_element

Part of the reason I've taken so long with this is because my codebase has changed so much that a simple patch is not possible. Please let me know if I've missed any parts of your patch.

Thanks,
Mark.
From: Bron G. <br...@ne...> - 2000-07-31 01:32:59
On Tue, Jul 25, 2000 at 06:45:18PM +0200, Pavel Nejedly wrote:
> On Tue, Jul 18, 2000 at 11:45:51PM +1000, Bron Gondwana wrote:
> # > well... could you add me to the developer list? I'm nejedly@sourceforge
> # Sure, I'll check with the other developers first ( politeness and all that )
> # then do so.
>
> I agree... thank you

Ok guys - sorry I took so long about this. Very slack I am. This message is CC'd to xml...@li... - if anyone on here has any problems with signing up Pavel as a developer by this time tomorrow (midday Aus Eastern Time) we'll continue discussion, otherwise I'll go off and learn the Sourceforge interface.

> # > I'm interested in adding XPath... the current matching algorithm contains
> # > bugs (rule "p1" matches "pp1" as well) and is not much extensible for
> # > 'real' XPath statements...
> # > But I don't have much time this week, but I hope I can start next week...
> # Ahh.. do you have any plans for how yet? I'm also looking at that using lex,
> # yacc and XS to get a proper lexical DFA rather than regex NFA based parse.
>
> perhaps we need evaluator more than parser... :) but we can do it this
> way, if you wish... however XS won't help much with evaluating as we
> can't use XML::DOM internal representation... :( But we can build our
> own non-OO structure which would be faster... But perhaps we should
> try XML::XPath first, did you?

I had a look at it - but it doesn't seem very easy to add caching. I have a framework for XPath type stuff which I've been working on, and plan to try and splice into probably the 0.30 code today. I'm just about to check out a copy.

--
Bron ( sorry, been out of circulation for a bit.. but I have a job again now, yay! )
From: <ma...@ev...> - 2000-07-31 01:26:59
>>>>> "AK" == Alex Kremer <kr...@cs...> writes:

AK> there is a small bug which breaks the code.

AK> it seems that XML::DOM uses not hashes but arrays for its data
AK> ($self is an array reference, not a hash reference).

This is a known problem. We should be making a release soon to fix this.

Mark.
From: Mike808 <mi...@ne...> - 2000-07-30 19:49:29
Marc Lehmann wrote:
> Any reason why my patch has been ignored twice?

I don't think it's been ignored Mark. The line endings have been fixed (at least in the maint tree). I've removed the extraneous /i on matches - I left in questionable ones (or ones I don't understand yet) just in case. I can't speak to Mark H's speed on getting them into his 0.30 dev tree.

We've been discussing how to handle the constant issue, but it's not cut and dry, since older versions of perl didn't inline things, and function calls are slower than scalars, which are a tad slower than lexicals. I know you don't care, 'cuz you're on 5.6, but there are apparently a significant group of people out there that are using mod_perl and XML::XSLT, and last I heard, mod_perl and 5.6 aren't stable yet.

And just for bitching, you go in the CREDITS file. :)

I might have time real soon now to go through the patch (after filtering the line ending cruft - yes that was a mess. I don't know what they were thinking) and see what's left. Never mind. I worked in your speedup on the evaluate method.

I'm still going through the code trying to grok it. It's so ... um, functional. But, that's the beauty of Perl in that you can be as OO as you want to be.

Mike808
From: <ma...@ev...> - 2000-07-29 19:30:39
[Cc'ing Geert on this since he probably has more insight. Geert, it may be helpful if you join the xmlxslt-devel mailing list.]

Pavel offered up a patch to fix the CDATA problem on __string__. In looking over the code, I began to puzzle about something he apparently was confused about as well:

    my $ref = (ref ($node) || "ARRAY") if $parser->{debug};
    print " "x$parser->{indent},"stripping child nodes ($ref):$/"
        if $parser->{debug};

    $parser->{indent} += $parser->{indent_incr};
    if (ref $node eq "ARRAY") {
        # added ref, however this if is always false in the module
        $result = $parser->__string__ ($$node[0],$depth) if @$node;
        # check for existence of at least one node
        # replaced return with assignment because indent wasn't
        # decremented
    } else {

It would seem that instead of adding `ref' to the `if ($node eq "ARRAY")', we would want to replace $node with $ref and remove the `if $parser->{debug}'. That way, there is a possibility of the conditional being true, and it seems to reflect the original author's intent.

Of course, this doesn't get into what the code is actually supposed to /do/. I don't see __string__ called with an array ref for an arg anywhere (or even an array). Is this conditional actually needed? I'm thinking not.

Comments?

Mark.
From: <ma...@ev...> - 2000-07-29 19:07:41
This past weekend, while working on porting Matt Sergeant's cv xml, I implemented <xsl:element>. Mike, it should be trivial to backport this to the maintenance release. I'd like to get others to try it if possible.

Thanks,
Mark.
From: Mike808 <mi...@ne...> - 2000-07-27 04:29:36
"Mark A. Hershberger" wrote:
> That is, is this a case where we should educate the users to use
> URIs and not their native file scheme? Or should we just do what is
> expedient and try to guess what the user means?"

Now I see where you're coming from. I vote for the former (educate users about URIs).

I went back and looked at the specifics of the $OS usage and here's what I found. We still need it. When spitting out a document for a webserver, you must send two CR LF pairs. Not \r\n or \n\n, because those depend on your perl implementation and OS and EBCDIC usage. Lincoln's comments spell this out. And I find it just as annoying, but how I feel about these minor Perl / protocol / platform dependencies doesn't matter.

If we are writing to streams or files, particularly UNICODE content, on certain platforms we need to remember to open the file in binmode, as there is still some distinction between text/binary modes on the filehandles in MS OSes. The file separator code was a freebie in the CGI.pm code.

So, I think we are maybe fighting the wrong fight over the filepath separator character stuff. I'll agree to revisit this later when I get to checking where we spit out documents, get documents from URIs, and access filesystem or stream objects.

Mike.

PS - I haven't worked on stuff this week cuz I'm swamped at work and my WinCVS is busted. I think IE5.5 broke it. Something related to MS having numerous undocumented versions of one 'MFC42.DLL' file. Thank you Microsoft. :(
From: <ma...@ev...> - 2000-07-27 04:11:25
>>>>> "M" == Mike808 <mi...@ne...> writes:

>> I think this is more of a philosophical issue (education
>> vs. expediency) than a technical one right now, though. At
>> least at this stage in the game.

M> Yup. I'd sleep better knowing URI::file was doing things vs a
M> hack from CGI.pm. Expediency won this round.

Interesting. What I meant was "At this point (XML::XSLT being alpha-quality), all these heuristics seem to be unnecessary, and discussing which way is the right implementation seems more philosophical than anything else right now. That is, is this a case where we should educate the users to use URIs and not their native file scheme? Or should we just do what is expedient and try to guess what the user means?"

Mark.
From: Mike808 <mi...@ne...> - 2000-07-27 02:11:25
"Mark A. Hershberger" wrote:
> >>>>> "M" == Mike808 <mi...@ne...> writes:
> M> Well, it helps when dealing with file paths in particular,
> M> since Macs, VMS, and Win32 in particular are a bit weird in this
> M> regard.
> Is <xsl:include href="my:Mac:File"> or <xsl:include
> href="my\windows\file"> really valid (see
> http://www.ietf.org/rfc/rfc2396.txt)? Or is this not what you are
> talking about?

I was thinking more like <xsl:include href="file:///DRIVE|/dir/file">. At *some* point XSLT will have to talk to the OS to open this file. Knowing how to construct a filepath is helped with $OS. Read the "Mapping Notes" of URI::file. $OS was a quick and dirty approach, and it came for free in CGI.pm.

> I think this is more of a philosophical issue (education
> vs. expediency) than a technical one right now, though. At least at
> this stage in the game.

Yup. I'd sleep better knowing URI::file was doing things vs a hack from CGI.pm. Expediency won this round.

Mike.
From: <ma...@ev...> - 2000-07-26 18:04:37
>>>>> "M" == Mike808 <mi...@ne...> writes:

M> Well, it helps when dealing with file paths in particular,
M> since Macs, VMS, and Win32 in particular are a bit weird in this
M> regard.

Is <xsl:include href="my:Mac:File"> or <xsl:include href="my\windows\file"> really valid (see http://www.ietf.org/rfc/rfc2396.txt)? Or is this not what you are talking about?

M> the real world and aren't feeding it nice well-manicured
M> ISO-8859-1 US-centric Intel-based XSL transforms.

I bet!

I think this is more of a philosophical issue (education vs. expediency) than a technical one right now, though. At least at this stage in the game.

M> Also, as the comment says, that whole block was ripped from
M> CGI.pm.

Hmmm... Missed the comment, I guess.

--
The worst thing about new books is that they keep us from reading the old ones.
    -Joseph Joubert (1754-1824)
From: Mike808 <mi...@ne...> - 2000-07-26 04:54:31
"Mark A. Hershberger" wrote:
> Mike,
> I was looking over your changes and saw the following:
>
> =head1 LICENSE
>
> When included as part of the XML::XSLT package, or as part of its complete
> documentation whether printed or otherwise, this work may be distributed only
> under the terms of Perl's Artistic License. Any distribution of this file or
> derivatives thereof outside of or separate from this package require that
> special arrangements be made with copyright holder.
>
> This seems much more onerous than the conventional "same terms as
> Perl". Is there a reason this is needed?

I saw blurbs like that in Tom Christiansen's perltoot and perltootc documents that are part of the Perl CORE. And I don't think it's 'onerous' in the least. It's quite clear. Either you distribute this package *in its entirety* or you talk to whoever wrote the code to get a special license to do what you want with it.

Keep in mind that this isn't set in stone yet. I also want to take a look at how Matt Sergeant is licensing his AxKit package, since I very much agree with what he is trying to do with his licensing terms.

The meat of the license is that if you want to use it in the "perl way", we've got no issues. None. But if you want to derive a commercial product from our hard work and effort and profit from it, you need to clear that with those who put forth the effort.

Where things get complicated is how we assign copyright. Since the SourceForge project isn't a legal entity, we can't transfer copyrights to it (like Apache contributors do to the Apache Software Foundation). And I don't know how to designate things like "you wrote this, Brong wrote that, Egon wrote this, I wrote that, etc." in a clear manner. We could just go for a list, but then I think we should have a mechanism that 'floats/transfers' the copyright to the currently active members of the project, should one of us lose interest and 'drop out'.

Neither do I want the licensing to get hung up in confusion over copyright ownership. Because if we're confused about it, then a judge somewhere will be, and will rule against asserting or protecting XSLT and giving the contributors some control over 'side projects' that use XSLT and maybe don't feel like they need to share the benefit they receive from us doing all the hard R&D work for no cost. Or we can give it away. But it's an important distinction that we retain the right to choose how XSLT will be used under conditions outside of the Artistic License.

Like I said, the license wording is a bit fluid now, and there might not be any really usable verbiage other than the standard 'same terms as Perl' verbiage. Or maybe we switch to a GPL license. Who knows? By all means, we need to discuss this part of XSLT in detail and reach consensus, because, unfortunately, the legal ramifications can be crucial, particularly in this developing technology. This also means we will probably have to include some disclaimer about accidental patent or copyright infringement and a mechanism to remove the offending code.

I plan to post some queries on the SourceForge licensing forum about copyright ownership and transfer in a SourceForge project where there is no central legal entity for the project.

Mike.
From: Mike808 <mi...@ne...> - 2000-07-26 04:27:45
"Mark A. Hershberger" wrote:
> [A lot of questions, I know]
> Mike,
>
> More curiosity: Why Time::HiRes? Devel::DProf is bundled with Perl
> 5.6 and can be used without changing the source. Meanwhile,
> Time::HiRes requires extra work to use. What is the benefit?

This was from Bron's fork. I was just taking a stab at merging in his stuff. But I'm with you if Devel::DProf is in the CORE now. Could be that Bron just didn't know about it. It's something I'll look into. If Bron doesn't tackle it first.

> Also, I notice that you've added some code on a per-OS basis. There
> is stuff for EBCDIC, VMS, etc. Why all the platform code? Isn't XML
> supposed to emit UTF encoded text?

Well, it helps when dealing with file paths in particular, since Macs, VMS, and Win32 in particular are a bit weird in this regard. There's been some discussion over in Xalan-land about chaining and includes of stylesheets, and using URLs and local file paths interchangeably when referring to some external 'thing'. Having these around just helps a bit if you need them, since Xalan seems to be stumbling as people start using it in the real world and aren't feeding it nice well-manicured ISO-8859-1 US-centric Intel-based XSL transforms. It also could come in handy for file-based caching of pre-parsed DOMs and XSL stylesheet trees.

Also, as the comment says, that whole block was ripped from CGI.pm. If it turns out later on that we don't need to know the OS to help parse filepaths, then I'm OK with removing it. Wouldn't be the first dead code to get removed when it's no longer useful. :)

I hope that answers some questions (and maybe even prompts some more!)

Mike.
From: Mike808 <mi...@ne...> - 2000-07-26 04:02:39
"Mark A. Hershberger" wrote:
> I'd like to help incorporate the API I published to the list into your
> code. Also, it seems to me that your changes are not maintenance but
> development changes and, as such, should be committed to the main branch.
>
> What do you think needs to be done to accomplish this?

What changes do you like? I didn't mean them to be development changes, in the sense that broken stuff was fixed (or at least made more consistent so that the logic bugs would stand out more readily to readers) and no new functionality was implemented.

The only real structural change I made was moving the empty tag policy implementation outboard to a separate module. I felt it distracted from the task at hand (processing XSL sheets), and could be componentized pretty readily, with all related interaction encapsulated within the module. It seemed like a slam dunk. I can put it back if you think it is too 'developmental'. I thought otherwise, since it didn't add or extend any functionality of the existing module.

On an unrelated topic, I've also been tracking the Xalan project, where they've proposed a major overhaul in v2.0. It seems XSL is being pulled to a dead-end as they bolt on more and more things that smell like a programming language. I don't think we should go there. We have a perfectly good programming language (Perl) to embed into stylesheet processing if we need one. Contriving a new programming language, IMHO, is a waste of effort that doesn't really help anything get better. And it reeks of the usual design-by-committee that pretty much dooms attempts at real progress.

Sorry about the rant. Just thought y'all might want a heads-up on what's going on in other XSLT developments.

Mike.
From: <ma...@ev...> - 2000-07-25 19:15:48
Mike,

I was looking over your changes and saw the following:

    =head1 LICENSE

    When included as part of the XML::XSLT package, or as part of its complete
    documentation whether printed or otherwise, this work may be distributed only
    under the terms of Perl's Artistic License. Any distribution of this file or
    derivatives thereof outside of or separate from this package require that
    special arrangements be made with copyright holder.

    See the file 'LICENSE' in this distribution for details of copy and
    distribution terms. It is a copy of the file 'Artistic' in the
    distribution of Perl 5.002 or later.

This seems much more onerous than the conventional "same terms as Perl". Is there a reason this is needed?

Mark.

--
The worst thing about new books is that they keep us from reading the old ones.
    -Joseph Joubert (1754-1824)
From: <ma...@ev...> - 2000-07-25 01:36:49
Mike,

I'd like to help incorporate the API I published to the list into your code. Also, it seems to me that your changes are not maintenance but development changes and, as such, should be committed to the main branch.

What do you think needs to be done to accomplish this?

Mark.

--
The worst thing about new books is that they keep us from reading the old ones.
    -Joseph Joubert (1754-1824)