From: Christian H. <ch...@ng...> - 2004-07-02 01:30:23
On 2004-07-01, at 20.52, JT Smith wrote:

[...]

> 3) At the bare minimum we need to change the current page tree
> mechanism to not cache the entire page tree all at once. But I think
> I'd prefer to just replace the page tree system.

I agree; replacing the tree model seems to be the smartest choice.

> 4) After reading what's been posted to this list, and some external
> resources on Nested Sets, I think that they are the best way to go.
> I'd like to see the page tree implemented using nested sets. And as we
> go forward, I'd like to see the whole Tree::DAG_Node and
> WebGUI::Persistent system go away.

Sounds good.

> 5) If we need to implement a page tree caching system under the new
> nested set method, it cannot cache the entire tree, but rather
> segments of the tree due to scalability problems with huge page trees.
> I have recommended caching the parts of the tree associated with each
> navigation that is defined, but if someone else has a better plan then
> I'm all ears.

I don't think that caching the tree will be necessary. What if fetching
all the pages required to build the navigation cost six SQL statements?
Would caching still be worth it? Here is a crude example, to give an idea:

    my $root    = Class->root;
    my $current = Class->node_by_id(1234);

    # main menu
    for my $child ( $root->children ) {
        print $child->title;
        unless ( $root->is_right_child($child) ) {
            print ' | ';
        }
    }

    # sub menu
    my ($ancestor) = $current->ancestors( depth => 1 );
    for my $descendant ( $ancestor->descendants( max_depth => 3 ) ) {
        printf( "%s %s", " " x ( $descendant->depth - 2 ), $descendant->title );
    }

    # crumbtrail
    print join( " > ", map { $_->title } reverse $current->ancestors ), "\n";

This will cost:

- one statement for $root
- one statement for $current
- one statement for $root->children
- one statement for $ancestor
- one statement for $ancestor->descendants
- one statement for $current->ancestors

Six statements in total; that's cheap IMO.

I have some other ideas for caching, but I'll have to dig into the WebGUI
source first to see if they're possible. I don't want to make a fool of
myself ;)

> Now finally, to my question:
>
> Is anybody currently working on, or willing to work on doing this
> conversion?

I'll be happy to help out as far as I can. I'm in a weird legal situation
right now, so I'll get back within a couple of days on what I can
contribute.

[...]

--
Christian Hansen
nGmedia
+46 40 660 17 50
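[Editor's sketch: why each of the six calls above can be a single interval
query under the nested set model. Everything here is invented for
illustration (SQLite, a toy page table, hypothetical function names), not
WebGUI's actual schema or API.]

```python
import sqlite3

# Toy nested-set page table: each node owns a (lft, rgt) interval that
# encloses all of its descendants' intervals.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE page (id INTEGER PRIMARY KEY, title TEXT, lft INT, rgt INT)")
conn.executemany("INSERT INTO page VALUES (?, ?, ?, ?)", [
    (1, "root",     1, 12),
    (2, "home",     2,  7),
    (3, "about",    3,  4),
    (4, "contact",  5,  6),
    (5, "products", 8, 11),
    (6, "widgets",  9, 10),
])

def bounds(page_id):
    return conn.execute("SELECT lft, rgt FROM page WHERE id = ?", (page_id,)).fetchone()

def ancestors(page_id):
    # One statement: an ancestor's interval strictly encloses the node's.
    lft, rgt = bounds(page_id)
    return [t for (t,) in conn.execute(
        "SELECT title FROM page WHERE lft < ? AND rgt > ? ORDER BY lft", (lft, rgt))]

def descendants(page_id):
    # One statement: a descendant's interval nests inside the node's.
    lft, rgt = bounds(page_id)
    return [t for (t,) in conn.execute(
        "SELECT title FROM page WHERE lft > ? AND rgt < ? ORDER BY lft", (lft, rgt))]

# crumbtrail for "widgets" (id 6)
print(" > ".join(ancestors(6) + ["widgets"]))   # root > products > widgets
```

No per-node recursion is needed on the application side; the depth of
quoting in the tree never changes the number of statements.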
From: Christian H. <ch...@ng...> - 2004-07-02 00:57:11
On 2004-07-01, at 22.14, Martin Kamerbeek wrote:

> Christian Hansen wrote:
>
>> This sounds wrong. Can you tell us more about how you did the
>> benchmark? Some code would be nice too.
>
> Sure, I just hook both things up to the database, time how long it
> takes to get all pages out of the database, repeat that 99 times and
> take the average.

I'm sorry, I misread your previous mail. Try doing the same benchmark but
pull a subtree from the middle of the tree; that would probably be a
fairer benchmark. Please also consider using cmpthese from the
Benchmark.pm module that ships with Perl; it reports some valuable CPU
stats. If you don't have the time, I could whip something up by the
weekend.

[...]

>>> I've got some other suggestions:
>>> - While some properties are very easily extracted from nested set
>>> nodes (like hasDaughter, isDescendant, etc.), others (like
>>> isTopLevel, depth, isChild, etc.) require some kind of traversal.
>>> For a big part this can be avoided by including adjacency list
>>> properties like parentId and depth in the table. I therefore think
>>> it would be a good idea to keep those properties if we were to go
>>> for nested sets.
>>
>> If isTopLevel is the same as root, it comes for free. The root node in
>> a nested set always has lft == 1.
>
> Well, it's not. You could see the page tree as a forest held together
> by a dummy root. So if you also count the dummy root, a top-level page
> is a child of a (WebGUI) root page, which in turn is a child of the
> dummy root (page id: 0).

Is there a reason to keep the "forest" model if we are going for the
nested set model?

>> Depth also comes for free: just count the nodes whose lft lies between
>> the ancestors' lft and rgt.
>
> Which would take a traversal of some kind.

Yes.

>> isChild would require a statement if you use the "original" nested
>> set.
>>
>> Keeping parentId would only buy you easier comparison for isSibling,
>> isChild and isParent.
>
> It was the whole idea to make those things easier and faster, so this
> would be a good thing ;)

I agree :)

--
Christian Hansen
nGmedia
+46 40 660 17 50
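[Editor's sketch: the depth and isChild computations discussed above each
reduce to one aggregate SQL statement under nested sets, rather than an
application-side traversal. Table, data, and function names are
hypothetical; SQLite is used for brevity.]

```python
import sqlite3

# Minimal nested-set table: root (1,8) -> a (2,7) -> {a1 (3,4), a2 (5,6)}.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE page (id INTEGER PRIMARY KEY, lft INT, rgt INT)")
conn.executemany("INSERT INTO page VALUES (?, ?, ?)",
                 [(1, 1, 8), (2, 2, 7), (3, 3, 4), (4, 5, 6)])

def depth(page_id):
    # Depth == number of enclosing intervals (i.e. of ancestors):
    # a single COUNT query, no tree walk in the application.
    lft, rgt = conn.execute("SELECT lft, rgt FROM page WHERE id = ?",
                            (page_id,)).fetchone()
    return conn.execute("SELECT COUNT(*) FROM page WHERE lft < ? AND rgt > ?",
                        (lft, rgt)).fetchone()[0]

def is_child(child_id, parent_id):
    # "isChild": containment plus a depth difference of exactly one.
    if depth(child_id) != depth(parent_id) + 1:
        return False
    return conn.execute(
        "SELECT COUNT(*) FROM page a, page b WHERE a.id = ? AND b.id = ? "
        "AND a.lft > b.lft AND a.rgt < b.rgt",
        (child_id, parent_id)).fetchone()[0] == 1

print(depth(3), is_child(3, 2))
```

Keeping a parentId column would make is_child a plain equality test, which
is the trade-off Martin and Christian are weighing.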
From: Dan C. P. <dp...@ml...> - 2004-07-01 23:35:49
<big_motha_of_a_snip>

I will release a new version of DBIx::Tree::NestedSet this weekend, one
that:

1) Abstracts some of the SQL to make it more DBD-independent, a la
CGI::Session (the only currently implemented driver is MySQL, though a
Postgresql driver would be 5-10 lines of code. Volunteers? PLEASE contact
me off-list.)

2) Gives you control over what the name of the id column is. Dammit,
what's wrong with "id"?

3) Doesn't disconnect the $dbh. Oops. Dumb.

4) Anything else?

PLEASE don't let DBIx::Tree::NestedSet hold up WebGUI. Here are my
thoughts on the whole matter: no matter what engine you use to create
trees, the SQL is going to be the slowest part of your app. It almost
always is in any mod_perl app. That means that, no matter what, you're
going to need to implement caching of your page tree, and the only way to
do it right is to have a granular cache. So let's play with
DBIx::Tree::NestedSet, by all means, but not get too hung up on it. If the
tree stuff can be abstracted enough to make DBIx::Tree::NestedSet easy to
drop in, that'd be great: it gives us the benefit of a quick release with
a little more breathing room to play with DBIx::Tree::NestedSet in the
WebGUI environment.

Thanks. I'm glad you're all interested and I'm glad I can contribute
something to the goodness that is WebGUI.

On a completely different note, I just launched a new WebGUI site at:

http://www.newvotersproject.org

I'm not completely happy with the design (that's not my bag anyway; I host
it and did all the template stuff) and it'll probably be changing, but the
client is very happy so far.

-DJCP

--
Daniel Collis Puro
CTO and Lead Developer, MassLegalServices.org
Massachusetts Law Reform Institute
99 Chauncy St., Suite 500
Boston, MA 02111
617-357-0019 ext. 342
dp...@ml...
http://www.masslegalservices.org
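[Editor's sketch of the "granular cache" idea: cache rendered navigation
fragments per subtree instead of one monolithic tree, so a page edit only
evicts the segments whose nested-set interval contains the edited node.
All names and the API here are illustrative, not WebGUI's.]

```python
class SegmentCache:
    """Per-subtree cache of rendered navigation fragments."""

    def __init__(self):
        # segment root id -> (lft, rgt, rendered fragment)
        self._segments = {}

    def get(self, root_id, lft, rgt, build):
        # Build the fragment once per segment; later calls hit the cache.
        entry = self._segments.get(root_id)
        if entry is None:
            entry = (lft, rgt, build())
            self._segments[root_id] = entry
        return entry[2]

    def evict_for_edit(self, node_lft, node_rgt):
        # Drop only the segments whose interval contains the edited node;
        # unrelated parts of a huge page tree stay cached.
        for rid, (l, r, _) in list(self._segments.items()):
            if l <= node_lft and node_rgt <= r:
                del self._segments[rid]

cache = SegmentCache()
nav = cache.get(5, 8, 11, lambda: "<ul>...</ul>")   # built on first request
cache.evict_for_edit(9, 10)                          # an edit inside segment 5
```

The point is the eviction granularity, not the data structure; a shared
cache (file, memcached, etc.) would work the same way keyed by segment.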
From: Martin K. <maj...@gm...> - 2004-07-01 20:12:47
JT Smith wrote:

> I know I haven't been very involved in this particular conversation,
> but I wanted to quickly give a little feedback.
>
> 1) There isn't much left to do in the 6.1 release except for fixing
> the page tree issue. Everything else will be completed after this
> weekend. So the faster we can decide on an implementation and
> implement it the better.

True, I'm gonna work on it this weekend.

> 3) At the bare minimum we need to change the current page tree
> mechanism to not cache the entire page tree all at once. But I think
> I'd prefer to just replace the page tree system.

That's what I was planning to do.

> 4) After reading what's been posted to this list, and some external
> resources on Nested Sets, I think that they are the best way to go.
> I'd like to see the page tree implemented using nested sets. And as we
> go forward, I'd like to see the whole Tree::DAG_Node and
> WebGUI::Persistent system go away.

The way the nested set module is built, this would be the best way.

> 5) If we need to implement a page tree caching system under the new
> nested set method, it cannot cache the entire tree, but rather
> segments of the tree due to scalability problems with huge page trees.
> I have recommended caching the parts of the tree associated with each
> navigation that is defined, but if someone else has a better plan then
> I'm all ears.

I think this is the way to go. Another possibility is to let the page
tree system handle the caching of requested tree parts.

> Now finally, to my question:
>
> Is anybody currently working on, or willing to work on doing this
> conversion?

I'm gonna do it this weekend.

Martin

> I can do it, but I'm not as familiar with Nested Sets and I don't want
> to duplicate efforts. If someone else is doing it, or wants to do it
> then that's fine by me.
>
> If not, I'll start on it next week.
>
> We need to get this done so we can move on to the next release.
> On Thu, 1 Jul 2004 19:57:25 +0200
> Christian Hansen <ch...@ng...> wrote:
>
>> [...]
>
> JT ~ Plain Black
>
> Create like a god, command like a king, work like a slave.
>
> _______________________________________________
> Pbwebgui-development mailing list
> Pbw...@li...
> https://lists.sourceforge.net/lists/listinfo/pbwebgui-development
From: Martin K. <maj...@gm...> - 2004-07-01 20:04:25
Christian Hansen wrote:

> This sounds wrong. Can you tell us more about how you did the
> benchmark? Some code would be nice too.

Sure, I just hook both things up to the database, time how long it takes
to get all pages out of the database, repeat that 99 times and take the
average. The code I used:

/--- benchmark script for WebGUI::Page ---/

#!/usr/bin/perl

our ($webguiRoot, $configFile);

BEGIN {
    $configFile = "WebGUI-zes.conf";
    $webguiRoot = "/data/domains/zes";
    unshift (@INC, $webguiRoot."/lib");
}

#-----------------DO NOT MODIFY BELOW THIS LINE--------------------
#use CGI::Carp qw(fatalsToBrowser);
use strict;
use WebGUI;
use WebGUI::Session;
use WebGUI::Page;
use Tree::DAG_Node;
use WebGUI::SQL;
use Time::HiRes qw(gettimeofday);
use DBIx::Tree::NestedSet;

WebGUI::Session::open($webguiRoot, $configFile);

my $total;
for (1..100) {
    my $start = gettimeofday;
    WebGUI::Page->getPage;
    my $stop = gettimeofday;
    $total += ($stop - $start);
    print "Executed $_ times: runlength = ".($stop-$start)."; avg = ".($total/$_)."\n";
}
print "\n\n". ($total / 100) . "\n\n";

/--- benchmark script for DBIx::Tree::NestedSet ---/

#!/usr/bin/perl

our ($webguiRoot, $configFile);

BEGIN {
    $configFile = "WebGUI-zes.conf";
    $webguiRoot = "/data/domains/zes";
    unshift (@INC, $webguiRoot."/lib");
}

#-----------------DO NOT MODIFY BELOW THIS LINE--------------------
#use CGI::Carp qw(fatalsToBrowser);
use strict;
use WebGUI;
use WebGUI::Session;
use WebGUI::Page;
use Tree::DAG_Node;
use WebGUI::SQL;
use Time::HiRes qw(gettimeofday);
use DBIx::Tree::NestedSet;

WebGUI::Session::open($webguiRoot, $configFile);

my $total;
for (1..100) {
    my $start = gettimeofday;
    my $ns = DBIx::Tree::NestedSet->new(
        dbh               => $session{dbh},
        left_column_name  => 'lft',
        right_column_name => 'rgt',
        table_name        => 'page',
    );
    my $ds = $ns->get_self_and_children_flat(id => $ns->get_root);
    my $stop = gettimeofday;
    $total += ($stop - $start);
    print "Executed $_ times: runlength = ".($stop-$start)."; avg = ".($total/$_)."\n";
}
print "\n\n". ($total / 100) . "\n\n";

>> I've got some other suggestions:
>> - While some properties are very easily extracted from nested set
>> nodes (like hasDaughter, isDescendant, etc.), others (like isTopLevel,
>> depth, isChild, etc.) require some kind of traversal. For a big part
>> this can be avoided by including adjacency list properties like
>> parentId and depth in the table. I therefore think it would be a good
>> idea to keep those properties if we were to go for nested sets.
>
> If isTopLevel is the same as root, it comes for free. The root node in
> a nested set always has lft == 1.

Well, it's not. You could see the page tree as a forest held together by
a dummy root. So if you also count the dummy root, a top-level page is a
child of a (WebGUI) root page, which in turn is a child of the dummy root
(page id: 0).

> Depth also comes for free: just count the nodes whose lft lies between
> the ancestors' lft and rgt.

Which would take a traversal of some kind.

> isChild would require a statement if you use the "original" nested set.
>
> Keeping parentId would only buy you easier comparison for isSibling,
> isChild and isParent.

It was the whole idea to make those things easier and faster, so this
would be a good thing ;)
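[Editor's sketch: the repeat-and-average methodology of the scripts above,
plus Christian's suggestion of pulling a mid-tree subtree, mimicked with a
Python/SQLite stand-in. This is purely illustrative; it measures nothing
about the original WebGUI::Page or DBIx::Tree::NestedSet code.]

```python
import sqlite3
import timeit

# A 400-node chain in a toy nested-set table: node i encloses node i+1,
# so lft = i and rgt = 801 - i.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE page (id INTEGER PRIMARY KEY, lft INT, rgt INT)")
conn.executemany("INSERT INTO page VALUES (?, ?, ?)",
                 [(i, i, 801 - i) for i in range(1, 401)])

def full_tree():
    # What the scripts above time: fetch every page.
    return conn.execute("SELECT * FROM page ORDER BY lft").fetchall()

def subtree(lft=200, rgt=601):
    # The fairer benchmark: fetch only a subtree from the middle.
    return conn.execute(
        "SELECT * FROM page WHERE lft >= ? AND rgt <= ? ORDER BY lft",
        (lft, rgt)).fetchall()

# Run each fetch 100 times and report the per-fetch average, as above.
for name, fn in [("full tree", full_tree), ("subtree", subtree)]:
    avg = timeit.timeit(fn, number=100) / 100
    print(f"{name}: {avg:.6f} sec per fetch ({len(fn())} rows)")
```

A comparison harness like Benchmark.pm's cmpthese would additionally
report relative rates and CPU time rather than wall-clock averages.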
From: JT S. <jt...@pl...> - 2004-07-01 19:38:41
I know I haven't been very involved in this particular conversation, but
I wanted to quickly give a little feedback.

1) There isn't much left to do in the 6.1 release except for fixing the
page tree issue. Everything else will be completed after this weekend. So
the faster we can decide on an implementation and implement it the better.

2) As a result of this conversation I realized that there are a lot of
people on this list that are way smarter than me. =) I knew that there
were some that are a little smarter than me, but holy crap guys, you've
blown me away. Thanks for that.

3) At the bare minimum we need to change the current page tree mechanism
to not cache the entire page tree all at once. But I think I'd prefer to
just replace the page tree system.

4) After reading what's been posted to this list, and some external
resources on Nested Sets, I think that they are the best way to go. I'd
like to see the page tree implemented using nested sets. And as we go
forward, I'd like to see the whole Tree::DAG_Node and WebGUI::Persistent
system go away.

5) If we need to implement a page tree caching system under the new
nested set method, it cannot cache the entire tree, but rather segments
of the tree, due to scalability problems with huge page trees. I have
recommended caching the parts of the tree associated with each navigation
that is defined, but if someone else has a better plan then I'm all ears.

Now finally, to my question:

Is anybody currently working on, or willing to work on doing this
conversion? I can do it, but I'm not as familiar with Nested Sets and I
don't want to duplicate efforts. If someone else is doing it, or wants to
do it, then that's fine by me. If not, I'll start on it next week. We
need to get this done so we can move on to the next release.

On Thu, 1 Jul 2004 19:57:25 +0200
Christian Hansen <ch...@ng...> wrote:

> [...]

JT ~ Plain Black

Create like a god, command like a king, work like a slave.
From: Christian H. <ch...@ng...> - 2004-07-01 17:58:29
On 2004-07-01, at 18.59, Martin Kamerbeek wrote:

> I did some benchmarking. Nothing spiffy, just checking how fast
> WebGUI::Page and DBIx::Tree::NestedSet can fetch entire trees from the
> database. Both were measured without caching.
>
> On small trees (I took the WebGUI 6.0.3 default tree) nested sets are
> faster:
>
> WG::Tree:  0.016 sec per tree
> NestedSet: 0.006 sec per tree
>
> On larger (about 400 pages) trees there's no difference:
>
> WG::Tree:  0.555 sec per tree
> NestedSet: 0.556 sec per tree
>
> At first sight this doesn't look too good, but remember that we're
> gonna cache only the nav trees, and those consist of probably less
> than 400 nodes.

This sounds wrong. Can you tell us more about how you did the benchmark?
Some code would be nice too.

> I've got some other suggestions:
> - While some properties are very easily extracted from nested set
> nodes (like hasDaughter, isDescendant, etc.), others (like isTopLevel,
> depth, isChild, etc.) require some kind of traversal. For a big part
> this can be avoided by including adjacency list properties like
> parentId and depth in the table. I therefore think it would be a good
> idea to keep those properties if we were to go for nested sets.

If isTopLevel is the same as root, it comes for free. The root node in a
nested set always has lft == 1.

Depth also comes for free: just count the nodes whose lft lies between
the ancestors' lft and rgt.

isChild would require a statement if you use the "original" nested set.

Keeping parentId would only buy you easier comparison for isSibling,
isChild and isParent.

> - There's no good traversal method included for now. I'm not sure if
> we need one for WebGUI though, so that might not prove to be too
> difficult.
>
> To conclude this mail, I've found two issues with the code:
> - You can't set the name of the id column; it's always called 'id'.
> - In the DESTROY method the database connection is closed
> ($dbh->disconnect). This is strange behaviour, since you have to pass
> a dbh to the new method. I think disconnecting a dbh should be done by
> whoever initiated it in the first place.
>
> Martin
>
> Dan Collis Puro wrote:

--
Christian Hansen
nGmedia
+46 40 660 17 50
From: Martin K. <ma...@pr...> - 2004-07-01 16:59:33
I did some benchmarking. Nothing spiffy, just checking how fast
WebGUI::Page and DBIx::Tree::NestedSet can fetch entire trees from the
database. Both were measured without caching.

On small trees (I took the WebGUI 6.0.3 default tree) nested sets are
faster:

WG::Tree:  0.016 sec per tree
NestedSet: 0.006 sec per tree

On larger (about 400 pages) trees there's no difference:

WG::Tree:  0.555 sec per tree
NestedSet: 0.556 sec per tree

At first sight this doesn't look too good, but remember that we're gonna
cache only the nav trees, and those consist of probably less than 400
nodes.

I've got some other suggestions:

- While some properties are very easily extracted from nested set nodes
(like hasDaughter, isDescendant, etc.), others (like isTopLevel, depth,
isChild, etc.) require some kind of traversal. For a big part this can be
avoided by including adjacency list properties like parentId and depth in
the table. I therefore think it would be a good idea to keep those
properties if we were to go for nested sets.

- There's no good traversal method included for now. I'm not sure if we
need one for WebGUI though, so that might not prove to be too difficult.

To conclude this mail, I've found two issues with the code:

- You can't set the name of the id column; it's always called 'id'.
- In the DESTROY method the database connection is closed
($dbh->disconnect). This is strange behaviour, since you have to pass a
dbh to the new method. I think disconnecting a dbh should be done by
whoever initiated it in the first place.

Martin

Dan Collis Puro wrote:

> Folks,
>
> I've released the nested set tree module discussed recently on this
> list to CPAN.
>
> http://cpan.uwinnipeg.ca/~djcp/DBIx-Tree-NestedSet
>
> It may not have propagated to all CPAN mirrors yet. I'm also
> soliciting feedback on the module at www.perlmonks.org and I plan on
> getting review from a few other sources as well.
>
> The full RFC is at:
>
> http://www.perlmonks.org/index.pl?node_id=370205
>
> Thanks!
>
> -DJCP
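[Editor's sketch of the whole-tree fetch being benchmarked here: under
nested sets a single ORDER BY lft statement returns the tree in
depth-first order, and a stack of pending rgt values recovers each node's
depth while streaming the rows. Toy data and SQLite, not WebGUI's schema.]

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE page (id INTEGER PRIMARY KEY, title TEXT, lft INT, rgt INT)")
conn.executemany("INSERT INTO page VALUES (?, ?, ?, ?)", [
    (1, "root", 1, 8), (2, "a", 2, 5), (3, "a1", 3, 4), (4, "b", 6, 7),
])

# One statement fetches the whole tree in depth-first order.
tree, stack = [], []
for title, lft, rgt in conn.execute("SELECT title, lft, rgt FROM page ORDER BY lft"):
    while stack and stack[-1] < lft:   # pop ancestors we have already left
        stack.pop()
    tree.append((len(stack), title))   # depth == number of open ancestors
    stack.append(rgt)

print(tree)   # [(0, 'root'), (1, 'a'), (2, 'a1'), (1, 'b')]
```

Since the flat fetch is one round trip either way, the interesting
benchmark is the mid-tree subtree case, where nested sets can restrict the
WHERE clause instead of walking parent pointers.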
From: Frank D. <fld...@ms...> - 2004-07-01 01:38:21
I'll put the 6.1+ version there. I figured the developers here could help
give the beta version a good workout =). I'm not sure if this is going to
work for other databases, so I'm curious to see some feedback if anyone
would like to try it.

----Original Message Follows----
From: JT Smith <jt...@pl...>
Reply-To: pbw...@li...
To: pbw...@li...
Subject: Re: [Pbwebgui-development] The end of the create/drop script nightmare
Date: Wed, 30 Jun 2004 18:50:53 -0500

If you haven't already, you should put this in the user contribs area.

On Wed, 30 Jun 2004 18:45:46 -0500
"Frank Dillon" <fld...@ms...> wrote:

> [...]

JT ~ Plain Black

Create like a god, command like a king, work like a slave.
From: JT S. <jt...@pl...> - 2004-07-01 00:37:22
If you haven't already, you should put this in the user contribs area.

On Wed, 30 Jun 2004 18:45:46 -0500
"Frank Dillon" <fld...@ms...> wrote:

> [...]

JT ~ Plain Black

Create like a god, command like a king, work like a slave.
From: Andy G. <an...@hy...> - 2004-07-01 00:03:15
Awesome, I definitely plan to use this. Thanks! :)

-Andy

Frank Dillon wrote:

> For any of you who have ever created a WebGUI Wobject and then had to
> write create and drop scripts for it, you are well aware of the
> headache that goes into tracking down all of the data from the
> database and putting it into one script that successfully installs
> your wobject.
>
> [...]
From: Frank D. <fld...@ms...> - 2004-06-30 23:45:56
For any of you who have ever created a WebGUI Wobject and then had to write create and drop scripts for it, you are well aware of the headache that goes into tracking down all of the data from the database and putting it into one script that successfully installs your wobject.

To streamline this process, I have written (and attached) a utility that will take the namespace of any Wobject and make these create and drop scripts for it:

<webguiroot>/docs/create-<namespace>.sql
<webguiroot>/docs/drop-<namespace>.sql

-All create statements for tables beginning with the namespace provided as a parameter
-All drop statements for tables beginning with the namespace provided as a parameter
-All rows of data from the international table, ordered by languageId, internationalId desc, that have the namespace provided
-A drop statement which deletes all international rows with the namespace passed in
-All rows of data from the incrementer table that have incrementerId beginning with the namespace provided
-A drop statement that deletes all incrementer rows that have incrementerId beginning with the namespace passed in
-All rows of data from the help table that have the namespace provided
-A drop statement that deletes all rows of data from the help table that have the namespace passed in
-All rows of data from the template table that have namespace beginning with the namespace provided
-A drop statement that deletes all rows of data from the template table that have a namespace beginning with the namespace passed in.

Also, this script backs up your old create scripts (if they have the same name as the new one) and puts a datestamp on them.

Simply put the attached file in your sbin folder and run it like so:

perl exportWobject.pl --conf=<conf file> --namespace=<wobject namespace>

ex:
perl exportWobject.pl --conf=WebGUI.conf --namespace=Article

This will create the files:

<webguiroot>/docs/create-Article.sql
<webguiroot>/docs/drop-Article.sql

As soon as I get time, I plan on adding support for a delta file that will compare the old create script against the new data in the database and generate <webguiroot>/docs/upgrades/upgrade-<namespace>-timestamp.sql

PS. I just realized that this utility will be obsolete with WG versions 6.1 and above due to some of the changes. This version will work for WG versions 6.03 and lower. I'll send a modified version when I have time to make one.
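For illustration, the selection rule Frank describes — keep only the tables (and incrementerIds, and template namespaces) whose names begin with the requested namespace — boils down to a prefix match. A minimal sketch of just that step, with an invented table list and a hypothetical helper (not the script's actual code):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical table list, as SHOW TABLES might return it for a site
# with an Article wobject installed alongside unrelated tables.
my @tables = qw(Article Article_discussion Poll Poll_answer users);

# Keep only tables whose names begin with the requested namespace --
# the "beginning with the namespace" rule from the post above.
sub tables_for_namespace {
    my ($namespace, @all) = @_;
    return grep { /^\Q$namespace\E/ } @all;
}

# Emit the drop side of the pair; the create side would come from
# something like SHOW CREATE TABLE on each match.
for my $table ( tables_for_namespace('Article', @tables) ) {
    print "DROP TABLE $table;\n";
}
```

The `\Q...\E` guards against regex metacharacters in the namespace, which matters if a namespace ever contains a dot or other special character.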
From: Jay R. A. <jr...@ba...> - 2004-06-30 15:54:38
On Wed, Jun 30, 2004 at 12:13:44AM -0700, Doug wrote:
> On Tue, 2004-06-29 at 15:01, JT Smith wrote:
> > There is another problem though, but it isn't technical, and that's what makes this such
> > a crazy idea. People have been trained for 20 years to think hierarchically. They think
> > about folders and files. So do you think it will throw them off to think "meta"?
>
> Makes sense to me - but it *will* throw them off, for sure.
>
> Best IMHO to provide a hierarchical "filing system" plus "subject
> catalogs". This matches the callnumber/author/subject library catalogs
> that most folks are familiar with.
>
> The ironic thing about this is that most people think they understand
> and can manage a hierarchical filing system but very few people actually
> do it competently in practice.

Indeed: search solves the "I don't know where I put it" problem. But you still have to *have* a "where to put it", and de-hierarchicalizing the storage paradigm (oh, my *ghod*; did I just say that? :-) penalizes those who *do* know what the hell they're doing; something I've always *HATED*.

Cheers,
-- jra
--
Jay R. Ashworth                        jr...@ba...
Designer            Baylink            RFC 2100
Ashworth & Associates    The Things I Think    '87 e24
St Petersburg FL USA    http://baylink.pitas.com    +1 727 647 1274
2004 Stanley Cup Champion Tampa Bay Lightning
From: JT S. <jt...@pl...> - 2004-06-30 14:38:28
Thanks to everyone who has replied. It's pretty clear that both the search and hierarchical versions need to exist.

On Wed, 30 Jun 2004 12:40:22 +0200 Martin Kamerbeek <ma...@pr...> wrote:
>JT Smith wrote:
>
>> There is another problem though, but it isn't technical, and that's
>> what makes this such a crazy idea. People have been trained for 20
>> years to think hierarchically. They think about folders and files. So
>> do you think it will throw them off to think "meta"?
>>
>Maybe I've been reading this the wrong way, but to me it seems that meta-data is
>useful for searching data, while a hierarchical approach is best suited for browsing
>through data in a structured manner. Since I'm one of those people who doesn't always
>know what to search for I'd certainly like to keep a (hierarchical) browsing interface.
>That's not to say that I dislike the idea of metadata, but I think it's best to combine
>both.
>
>Martin

JT
~ Plain Black
Create like a god, command like a king, work like a slave.
From: Frank D. <fld...@ms...> - 2004-06-30 14:29:41
"There is another problem though, but it isn't technical, and that's what makes this such a crazy idea. People have been trained for 20 years to think heirarchically. They think about folders and files. So do you think it will throw them off to think "meta"?" I'm all for assigning meta data to content to make searching for content easier, but I can't really envision a content storage mechanism with no logical hierarchy. If we as developers aren't comfortable, I'm fairly certain the end users won't be either. I think it's a good idea, but it needs to be implemented to allow for hierachical browsing. |
From: Martin K. <ma...@pr...> - 2004-06-30 10:40:28
JT Smith wrote:
> There is another problem though, but it isn't technical, and that's
> what makes this such a crazy idea. People have been trained for 20
> years to think hierarchically. They think about folders and files. So
> do you think it will throw them off to think "meta"?
>
Maybe I've been reading this the wrong way, but to me it seems that meta-data is useful for searching data, while a hierarchical approach is best suited for browsing through data in a structured manner. Since I'm one of those people who doesn't always know what to search for I'd certainly like to keep a (hierarchical) browsing interface. That's not to say that I dislike the idea of metadata, but I think it's best to combine both.

Martin
From: David S. <dp...@di...> - 2004-06-30 08:30:36
JT Smith wrote:
> I think that the concept of folders is not terribly useful going forward. Instead I
> think that when the user is presented with the asset manager UI s/he should get a search
> interface. S/he could search based on date uploaded or modified, user who uploaded,
> filename, title, url, id, and most importantly some sort of metadata. Call the metadata
> keywords or categories, or whatever. It's not important what it's called, but more
> important that it exists. For instance, if a user creates an article with a title of "EU
> Bans Software Patents", and attaches an image to it, then the image would get tagged
> with the keywords of "Article" and "EU Bans Software Patents" automatically.

I don't know what sort of an image might go with that article, but if someone was searching for "image patents", would this create a hit?

> There is another problem though, but it isn't technical, and that's what makes this such
> a crazy idea. People have been trained for 20 years to think hierarchically. They think
> about folders and files. So do you think it will throw them off to think "meta"?
>
> Anyway, it's a thought. Let me know what you think. You have some time as I won't start
> working on it until 6.1 is released.

My thoughts ... I'm not sure this is an accurate statement. Rather, consider that people have been trained to think within the problem domains they're experienced with. Outside of those familiar domains, most folks are not very organized in terms of how they perform their searches. At that point, the big "impedance mismatch" between people who file things by specifying keywords and the folks who search for them is that, unless the parties share the same "domain knowledge", the likelihood of the searcher getting "quality hits" plummets. Ironically, when the parties DO share the same domain knowledge, it's usually easier to impose a domain-specific organization on the data and let them drill down through it to find what they need.

But, just because you've got a hierarchy doesn't mean it's "well-organized", only that it's "organized". So a valid question might be: who's the audience you have in mind here? Is this a scheme that is only going to be used by "insiders" (i.e., some kind of admin or content manager)? Or will it be accessible by anybody visiting the site?

My personal preference is that I like the option of being able to specify my own organizational structure (i.e., views into the data), and hierarchical seems to be the preferred method these days. It seems like whenever I've encountered a hierarchically organized tree of data that someone else has created, it takes me quite a while to figure out how to navigate through it. Searches help find known keywords, but not synonyms, misspellings, allegories, and other stuff that most people tend to generate on their own. It's a tough problem.

If you have time, do some reading on "facet-based categorization schemes" (see, now there's a word you probably would *NEVER* have searched for!). Interestingly, one of the top hits this phrase turned up in Google is this:

http://www.awprofessional.com/articles/article.asp?p=102609

-David
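As an aside, the facet-based categorization David points to can be approximated as filtering over independent attribute axes: each facet narrows the result set on its own, and an asset must satisfy all requested facets. A toy sketch under that assumption (facet names, values, and the helper are all invented for illustration):

```perl
use strict;
use warnings;

# Toy asset pool; each asset carries a value on several independent
# facets (type, project). All names and values here are invented.
my @assets = (
    { name => 'logo.png', type => 'image', project => 'spring2005' },
    { name => 'spec.txt', type => 'doc',   project => 'spring2005' },
    { name => 'old.png',  type => 'image', project => 'archive'    },
);

# An asset matches when it satisfies every requested facet; each facet
# independently narrows the pool, which is the core of the scheme.
sub facet_search {
    my ($criteria, @pool) = @_;
    my @matches = @pool;
    for my $facet ( keys %$criteria ) {
        @matches = grep { ($_->{$facet} || '') eq $criteria->{$facet} } @matches;
    }
    return @matches;
}

my @hits = facet_search({ type => 'image', project => 'spring2005' }, @assets);
print $_->{name}, "\n" for @hits;   # logo.png
```

In a real system each facet would map to an indexed column or join, but the narrowing logic is the same.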
From: Doug <dco...@ab...> - 2004-06-30 07:13:54
On Tue, 2004-06-29 at 15:01, JT Smith wrote:
> There is another problem though, but it isn't technical, and that's what makes this such
> a crazy idea. People have been trained for 20 years to think hierarchically. They think
> about folders and files. So do you think it will throw them off to think "meta"?

Makes sense to me - but it *will* throw them off, for sure.

Best IMHO to provide a hierarchical "filing system" plus "subject catalogs". This matches the callnumber/author/subject library catalogs that most folks are familiar with.

The ironic thing about this is that most people think they understand and can manage a hierarchical filing system but very few people actually do it competently in practice.

--
doug.
From: JT S. <jt...@pl...> - 2004-06-29 22:48:03
As you know (if you've looked at the roadmap) one of the next things on the list is the new asset management system, which will replace WebGUI::Node, WebGUI::Attachment, WebGUI::Collateral, WebGUI::CollateralFolder, and WebGUI::Operation::Collateral. Every file that ever gets uploaded to WebGUI will go through the asset management system, whether it be what's in today's collateral management system, or files attached to an article, or files attached to a user submission system, or files generated by the export of the Data Form. Every file will have privileges associated with it, and will be versioned. Each file will also have a property to determine whether it will be displayed in the new user interface (the thing we now call the collateral manager).

Right now we have the concept of folders in the collateral manager. That serves the purpose of organizing the files on smaller sites. But it becomes quite difficult to deal with if you've got thousands of files. You start seeing 20 folders at the root level and folders 5 or 10 levels deep.

I think that the concept of folders is not terribly useful going forward. Instead I think that when the user is presented with the asset manager UI s/he should get a search interface. S/he could search based on date uploaded or modified, user who uploaded, filename, title, url, id, and most importantly some sort of metadata. Call the metadata keywords or categories, or whatever. It's not important what it's called, but more important that it exists. For instance, if a user creates an article with a title of "EU Bans Software Patents", and attaches an image to it, then the image would get tagged with the keywords of "Article" and "EU Bans Software Patents" automatically. In addition, the user could add new keywords through the asset manager. So if s/he is uploading a bunch of files related to a new site design for the spring of 2005, s/he could add "site design spring 2005" to the keywords.

If you like, we could parameterize the data so it would be entered in name/value pairs like:

wobject=Article
title=EU Bans Software Patents

I think that might be overkill, but who knows?

The only technical problem I see that this whole idea presents is in path-based integration. For instance, if we add an FTP server or WebDAV server to the asset manager, then we don't have an easy /path/to/get/the/user/where/they/need/to/go. So as a compromise, maybe we allow both views, folder and search, with the default being search.

There is another problem though, but it isn't technical, and that's what makes this such a crazy idea. People have been trained for 20 years to think hierarchically. They think about folders and files. So do you think it will throw them off to think "meta"?

Anyway, it's a thought. Let me know what you think. You have some time as I won't start working on it until 6.1 is released.

JT
~ Plain Black
Create like a god, command like a king, work like a slave.
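The automatic tagging JT describes — a file attached to an article inheriting the wobject type and the article's title as keywords — could be sketched roughly like this (all structures and helper names here are hypothetical, not WebGUI's actual API):

```perl
use strict;
use warnings;

# Assets carry a keyword set; tagging merges new keywords into it.
sub tag_asset {
    my ($asset, @keywords) = @_;
    $asset->{keywords}{$_} = 1 for @keywords;
}

# Attaching a file to a wobject tags it with the wobject type and
# title, as in the "EU Bans Software Patents" example above.
sub attach_to_wobject {
    my ($asset, $wobject) = @_;
    tag_asset($asset, $wobject->{type}, $wobject->{title});
}

my $image   = { name => 'patents.png', keywords => {} };
my $article = { type => 'Article', title => 'EU Bans Software Patents' };

attach_to_wobject($image, $article);

# A keyword search is then just a set lookup.
print "hit\n" if $image->{keywords}{'Article'};
```

In practice the keyword set would live in a database table keyed by asset id, but the search-instead-of-browse idea only needs this kind of asset-to-keyword mapping plus the user-supplied keywords added through the asset manager.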
From: Dan C. P. <dp...@ml...> - 2004-06-28 16:08:42
Folks,

I've released the nested set tree module discussed recently on this list to CPAN:

http://cpan.uwinnipeg.ca/~djcp/DBIx-Tree-NestedSet

It may not have propagated to all CPAN mirrors yet. I'm also soliciting feedback on the module at www.perlmonks.org and I plan on getting review from a few other sources as well. The full RFC is at:

http://www.perlmonks.org/index.pl?node_id=370205

Thanks!

-DJCP
--
*-._.-*^*-._.-*^*-._.-*^*-._.-*^*-._.-*^*-._.-*^*-._.-*^*-._.-*
Daniel Collis Puro
CTO and Lead Developer, MassLegalServices.org
Massachusetts Law Reform Institute
99 Chauncy St., Suite 500
Boston, MA 02111
617-357-0019 ext. 342
dp...@ml...
http://www.masslegalservices.org
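For readers new to the technique behind the module: a nested set stores each node's left/right boundaries, so "all descendants of X" becomes a single range test (in SQL, one `SELECT ... WHERE lft > ? AND rgt < ?`). Here is a self-contained sketch of that range test, independent of the module's actual API:

```perl
use strict;
use warnings;

# A small tree encoded as nested sets: each node owns the (lft, rgt)
# interval that encloses all of its descendants.
#
#   root(1,8) -> a(2,5) -> b(3,4)
#             -> c(6,7)
my %nodes = (
    root => { lft => 1, rgt => 8 },
    a    => { lft => 2, rgt => 5 },
    b    => { lft => 3, rgt => 4 },
    c    => { lft => 6, rgt => 7 },
);

# Descendants of $parent are exactly the nodes strictly inside its
# interval; in SQL this is: WHERE lft > ? AND rgt < ?
sub descendants {
    my ($parent) = @_;
    my ($l, $r) = @{ $nodes{$parent} }{qw(lft rgt)};
    return sort grep { $nodes{$_}{lft} > $l && $nodes{$_}{rgt} < $r }
        keys %nodes;
}

print join(',', descendants('root')), "\n";   # a,b,c
print join(',', descendants('a')),    "\n";   # b
```

The trade-off, relative to the old parent-pointer tree, is that reads like this are one query while inserts and moves must renumber the lft/rgt values — which suits a page tree that is read far more often than it is restructured.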
From: JT S. <jt...@pl...> - 2004-06-28 14:10:41
>everything OK? You're quite back on track with your webgui programming,
>like you promised. It's good to see your devotion and speed programming
>back in the development of WG.

I'm not as back on track as I had hoped. Too much stuff going on, but at the end of July everything will be better. =)

>I'm analyzing the new template-cache mechanism you implemented. There
>are some things that aren't all that clear to me:
>1. The HTML::Template system stores the processed templates. Are these
>totally processed or does the system do some rendering on the fly?

There are two levels of cache. The first level caches the raw template from the database to the filesystem. The second caches the parsed template (not processed, but parsed). Once the new filesystem is in place, the templates will be stored directly to the filesystem so there won't be a reason to have the dual caching scenario.

>2. Where did you build in that templates aren't to be cached in admin
>mode? I can't find any directives for this in the code, but it seems to
>work.

They are cached while in admin mode. The difference is that each time the admin edits the template, the cache is destroyed and therefore the admin doesn't know it's been cached.

>3. how does it handle user specific templates (like in USS/discussion
>system)?

The templates aren't user specific, only the data is. I'm not caching the processed template, only the parsed and raw forms of it.

JT
~ Plain Black
Create like a god, command like a king, work like a slave.
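The two cache levels JT describes — raw template from the database, parsed template on top — plus the destroy-on-edit behavior can be sketched as follows. The fetch and parse functions here are stand-ins, not WebGUI's actual code:

```perl
use strict;
use warnings;

my %raw_cache;     # level 1: raw template text, as fetched from the db
my %parsed_cache;  # level 2: parsed (not processed) template structure

sub fetch_from_db { my ($id) = @_; return "template-body-$id" }   # stand-in
sub parse_raw     { my ($raw) = @_; return [ split /-/, $raw ] }  # stand-in

# Fill each cache level only on a miss; a hit at level 2 skips both
# the db fetch and the parse.
sub get_parsed_template {
    my ($id) = @_;
    $raw_cache{$id}    = fetch_from_db($id)     unless exists $raw_cache{$id};
    $parsed_cache{$id} = parse_raw($raw_cache{$id})
        unless exists $parsed_cache{$id};
    return $parsed_cache{$id};
}

# Editing a template destroys both cache levels, so the next request
# re-fetches and re-parses -- which is why admins never see stale
# output even though caching stays on in admin mode.
sub invalidate {
    my ($id) = @_;
    delete $raw_cache{$id};
    delete $parsed_cache{$id};
}

my $t = get_parsed_template(42);
print scalar(@$t), " tokens\n";   # 3 tokens
```

Note that only the parsed form is cached, never the processed output, which is why user-specific data (as in the USS/discussion question above) is unaffected.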
From: Christian H. <ch...@ng...> - 2004-06-25 17:40:51
On 2004-06-25, at 17.46, JT Smith wrote:

>>> -It can't do HTML.
>>
>> That's good, because this data has very little to do with presentation.
>
> I agree in a normal context, but we also put help in the
> internationalization system, and WebGUI help contains a lot of HTML.

Documentation and l10n are, IMHO, two separate things. Why not split them apart?

>>> - You have to be careful with special characters.
>>
>> If you keep your data in a unicode format, like utf-8, it won't be an
>> issue.
>
> We have 3 years of existing WebGUI content out in the field without
> utf-8. It's simply not possible to convert it all to utf-8
> immediately. If we were starting from scratch today then utf-8 is
> where I'd be, and it is where I want to go, but it isn't going to
> happen overnight.

I see.

>>> - It won't be backwards compatible with our current system so all of
>>> the work the translators have done would have to be redone, and all
>>> of the code in WebGUI would have to go through and be replaced.
>>
>> That's probably a hit you'll have to take sooner or later.
>
> Why? If I can make it backward compatible, why not?

There are several reasons to go with a "standard" format:

- Translation shops know how to handle this format.
- There are several open source and closed source applications to aid you in translation.
- Reusable translations.
- Lexicons.

>>> - It has a lot of binary prereqs. (I don't like adding those if I
>>> can help it.)
>>
>> Why is that bad?
>
> Because not everyone is you. One of the reasons I picked perl is for
> its cross-platform compatibility. However, this is complicated every
> time I use a module with C bindings. It makes the install more
> difficult. Therefore using a bunch of binary prereqs is bad.

I understand. Locale::Maketext and Locale::Maketext::Lexicon are both "pure" perl modules.

>>> - It doesn't provide as big of a performance increase as what I'm
>>> about to suggest.
>>
>> Nothing prevents you from preloading the data/classes when Apache starts.
>
> I haven't read anything that says you can preload the gettext
> datafiles.

We do it in several apps, both from po and mo files.

>>> lib/WebGUI/i18n/<language>/<namespace>.pm
>>
>> This is a common mistake; l10n has more to do with culture and
>> language than language alone. Once your app is i18n'd you can add l10ns.
>
> l10n does, i18n does not.

I'm not sure I understand what you mean.

> There are lots of nitpicky points we could argue back and forth on
> this particular issue, but to be honest, I don't have any interest in
> arguing them.

I'm not here for the sake of arguing; that would be a waste of time and resources.

>> A good resource is: http://www.i18nguy.com/
>>
>> Have a look at Locale::Maketext::Lexicon on CPAN; it can make your
>> life easier.
>
> Thanks for your viewpoints. I would have liked to have read them a
> long time ago when I posted this message. Now the new system is
> already done. It will be checked into CVS in a couple of hours.

You're welcome; too bad I didn't reply sooner.

--
Christian Hansen
nGmedia
+46 40 660 17 50
From: JT S. <jt...@pl...> - 2004-06-25 16:33:02
>> -It can't do HTML.
>
>That's good, because this data has very little to do with presentation.

I agree in a normal context, but we also put help in the internationalization system, and WebGUI help contains a lot of HTML.

>> - You have to be careful with special characters.
>
>If you keep your data in a unicode format, like utf-8, it won't be an issue.

We have 3 years of existing WebGUI content out in the field without utf-8. It's simply not possible to convert it all to utf-8 immediately. If we were starting from scratch today then utf-8 is where I'd be, and it is where I want to go, but it isn't going to happen overnight.

>> - It won't be backwards compatible with our current system so all of
>> the work the translators have done would have to be redone, and all of
>> the code in WebGUI would have to go through and be replaced.
>
>That's probably a hit you'll have to take sooner or later.

Why? If I can make it backward compatible, why not?

>> - It has a lot of binary prereqs. (I don't like adding those if I can
>> help it.)
>
>Why is that bad?

Because not everyone is you. One of the reasons I picked perl is for its cross-platform compatibility. However, this is complicated every time I use a module with C bindings. It makes the install more difficult. Therefore using a bunch of binary prereqs is bad.

>> - It doesn't provide as big of a performance increase as what I'm
>> about to suggest.
>
>Nothing prevents you from preloading the data/classes when Apache starts.

I haven't read anything that says you can preload the gettext datafiles.

>> lib/WebGUI/i18n/<language>/<namespace>.pm
>
>This is a common mistake; l10n has more to do with culture and language
>than language alone. Once your app is i18n'd you can add l10ns.

l10n does, i18n does not. There are lots of nitpicky points we could argue back and forth on this particular issue, but to be honest, I don't have any interest in arguing them.

>A good resource is: http://www.i18nguy.com/
>
>Have a look at Locale::Maketext::Lexicon on CPAN; it can make your life easier.

Thanks for your viewpoints. I would have liked to have read them a long time ago when I posted this message. Now the new system is already done. It will be checked into CVS in a couple of hours.

JT
~ Plain Black
Create like a god, command like a king, work like a slave.
From: Christian H. <ch...@ng...> - 2004-06-25 15:45:46
On 2004-06-18, at 05.41, JT Smith wrote:

> The following is my recommendation for the new internationalization
> format.
>
> Before I get too far I should say that I've investigated the old
> standby: gettext. It won't work for WebGUI for these reasons:
>
> -It can't do HTML.

That's good, because this data has very little to do with presentation.

> - You have to be careful with special characters.

If you keep your data in a unicode format, like utf-8, it won't be an issue.

> - It won't be backwards compatible with our current system so all of
> the work the translators have done would have to be redone, and all of
> the code in WebGUI would have to go through and be replaced.

That's probably a hit you'll have to take sooner or later.

> - It has a lot of binary prereqs. (I don't like adding those if I can
> help it.)

Why is that bad?

> - It doesn't provide as big of a performance increase as what I'm
> about to suggest.

Nothing prevents you from preloading the data/classes when Apache starts.

> lib/WebGUI/i18n/<language>/<namespace>.pm

This is a common mistake; l10n has more to do with culture and language than language alone. Once your app is i18n'd you can add l10ns.

A good resource is: http://www.i18nguy.com/

Have a look at Locale::Maketext::Lexicon on CPAN; it can make your life easier.

--
Christian Hansen
nGmedia
+46 40 660 17 50
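The lib/WebGUI/i18n/&lt;language&gt;/&lt;namespace&gt;.pm layout JT proposes amounts to one Perl package per language/namespace pair holding a message table. A rough sketch of the lookup side, with an English fallback for missing translations (the hash contents and helper are illustrative only — the real implementation may differ):

```perl
use strict;
use warnings;

# In the proposed layout each inner hash would live as package data in
# a file like lib/WebGUI/i18n/English/Article.pm, preloadable at
# Apache startup like any other Perl module.
my %lexicon = (
    English => {
        Article => {
            'title'     => 'Title',
            'read.more' => 'Read more...',
            'help'      => 'Help',
        },
    },
    Dutch => {
        Article => {
            'title'     => 'Titel',
            'read.more' => 'Lees verder...',
        },
    },
);

# Look up a message, falling back to English when the requested
# language has no translation for the key yet.
sub get_message {
    my ($lang, $namespace, $key) = @_;
    return $lexicon{$lang}{$namespace}{$key}
        // $lexicon{English}{$namespace}{$key};
}

print get_message('Dutch', 'Article', 'title'), "\n";   # Titel
```

Because the messages are plain Perl data, they can hold HTML freely — one of the gettext objections above — at the cost of the standard-format benefits Christian lists.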
From: Leendert B. <lee...@un...> - 2004-06-25 13:32:23
On Thu 2004-06-24, at 17:44, JT Smith wrote:

> I'll be working on caching parts of the user session shortly. There must be a bug in the
> isInGroup function if it's actually querying group 3 that many times.

Probably.

> If you've got implementation ideas then by all means share them.

I'll work them out and share them somewhere next week.

> Apparently you didn't
> look at the template caching mechanism very long though, because it IS caching as long
> as possible. It never updates the cache unless:
>
> a) the cache doesn't exist
> b) the template has been modified

(I have the idea that content will change more often than the template itself. That's why I think that this control isn't enough.)

I'm very happy to see so many innovative initiatives. They'll definitely bring WebGUI to a higher level! Keep up the good work guys.

-leendert