From: Colin K. <ck...@st...> - 2005-06-12 02:12:21
I'm getting caught up with the Admin Console documentation and am working on the Authentication section. I have a couple of questions and will need a little help.

1) Can multiple (different) LDAP servers be used to authenticate on 1 webgui site?

2) How much time do I have to work on them before the 6.6.2 release? I'd like to know so that I can button up whatever I'm working on, then bite off another big chunk for the next release.

3) Would someone who is familiar with LDAP help me with some form field definitions?

Thanks,
Colin
From: JT S. <jt...@pl...> - 2004-07-11 18:39:16
The feature freeze for WebGUI 6.1 starts right now. I'm working on fixing the bugs that are listed in the bug list, and as soon as all of those are done I'll be putting out 6.1.0 and 5.5.7. They should both be out tonight. Starting tomorrow the feature freeze will be lifted and work can begin on 6.2.0.

JT ~ Plain Black
Create like a god, command like a king, work like a slave.
From: JT S. <jt...@pl...> - 2004-07-11 18:33:05
>True, but not always practical. In my view, replication is the
>solution for a) people whose needs exceed one motherhog server (that's

That is true. I personally have only ever worked on one system that large. And, from my very limited experience, the fact that content was a few seconds or minutes out of date is the least of the concerns and problems.

>a cool adjective; can I steal it? :-) and b) people who want physical

Sure, it's not mine. I picked it up growing up on a farm. I'm sure it's been around for 1000 years. On the farm there's not much that's bigger or more badass than a mother hog. She will weigh several hundred lbs and is liable to bite your leg off if you go near her or her piglets. =) Cows are bigger, but certainly not as mean.

>distribution to avoid disaster outages.

That's something that we still need to add. The system currently doesn't deal with slave failures, or with slave-to-master conversion (failover) when the master dies.

>Not at the moment... but it's good to think about the final target
>market at design time.
>
>Like David, I'm not a target customer for some time... just a hacker,
>kibitzing. ;-)

I'd like to encourage you and David and everyone else to think about the future. All I ask is that the conversation states that it's such a talk rather than an attempt to change an ongoing implementation.

JT ~ Plain Black
Create like a god, command like a king, work like a slave.
From: Jay R. A. <jr...@ba...> - 2004-07-11 17:45:13
On Sun, Jul 11, 2004 at 11:52:46AM -0500, JT Smith wrote:
> The reason I can do this from an application environment is 2 fold.
>
> 1) The interval that the slaves use to update from the master is determined largely by the configuration of the slaves to their master.
>
> 2) As a developer, you control whether your app is reading directly from the master or from the slave, and presumably you know what your app needs to do to work properly.

True.

> From a content viewing point of view, yes the data can be out of date for a little bit, but that's the whole purpose of caching in a content environment. Let the data go out of date for a little bit in order to not stress out the machine. If you're in an environment where cachable reads need to not be cached, then instead of buying 2 small database servers and doing replication, buy one motherhog database server and serve all requests from there.

True, but not always practical. In my view, replication is the solution for a) people whose needs exceed one motherhog server (that's a cool adjective; can I steal it? :-) and b) people who want physical distribution to avoid disaster outages.

> I'm not trying to be a prick, shun anyone's ideas, or anything else dickheadish. It's just that:
>
> 1) This isn't an easy problem to solve in any way that doesn't cause other problems.

You're correct, it's not.

> 2) I don't think that the amount of time required to solve this particular problem is a good investment given that it's not even a problem yet.

Not at the moment... but it's good to think about the final target market at design time.

Like David, I'm not a target customer for some time... just a hacker, kibitzing. ;-)

Cheers,
-- jra
--
Jay R. Ashworth                                  jr...@ba...
Designer                       Baylink                          RFC 2100
Ashworth & Associates     The Things I Think                     '87 e24
St Petersburg FL USA   http://baylink.pitas.com          +1 727 647 1274

  "You know: I'm a fan of photosynthesis as much as the next guy,
   but if God merely wanted us to smell the flowers, he wouldn't
   have invented a 3GHz microprocessor and a 3D graphics board."
     -- Luke Girardi
From: JT S. <jt...@pl...> - 2004-07-11 17:42:16
>Well, this seems to depend entirely on the ratio of editing work to site
>visitors that you get; I don't think that's a reasonable assumption at
>all.

But visitors *can edit* stuff too. This isn't an admin mode thing. Visitors can post to polls, and publish USS stuff and post messages to a message board. Visitor sessions are updated in the databases just like regular users. That's why I'm saying that using replicated databases in this way is not very useful. You're right that it would work out for content managers, but they just aren't the only ones editing.

>On this point, while that's not *always* the safest design approach, I
>believe you're probably correct.

If I felt there was any danger in losing data here, I would be worried and change my outlook on this problem. But it's not about losing data. In this case it's about content managers not necessarily seeing their edits published immediately to the site when they log out. It's a convenience thing, and something that can easily be handled through training.

JT ~ Plain Black
Create like a god, command like a king, work like a slave.
From: JT S. <jt...@pl...> - 2004-07-11 17:38:04
The reason I can do this from an application environment is 2 fold.

1) The interval that the slaves use to update from the master is determined largely by the configuration of the slaves to their master.

2) As a developer, you control whether your app is reading directly from the master or from the slave, and presumably you know what your app needs to do to work properly.

From a content viewing point of view, yes the data can be out of date for a little bit, but that's the whole purpose of caching in a content environment. Let the data go out of date for a little bit in order to not stress out the machine. If you're in an environment where cachable reads need to not be cached, then instead of buying 2 small database servers and doing replication, buy one motherhog database server and serve all requests from there.

I'm not trying to be a prick, shun anyone's ideas, or anything else dickheadish. It's just that:

1) This isn't an easy problem to solve in any way that doesn't cause other problems.

2) I don't think that the amount of time required to solve this particular problem is a good investment given that it's not even a problem yet.

On Sun, 11 Jul 2004 12:35:24 -0400 "Jay R. Ashworth" <jr...@ba...> wrote:
>On Sat, Jul 10, 2004 at 11:05:51PM -0500, JT Smith wrote:
>> >I hate to sound like a pissant, but that doesn't help precisely the
>> >audience who care: those people who might, in fact, need to use it.
>> >
>> >So I guess the question is, how much impact will it have on them?
>>
>> 1) I have no idea what you're even asking.
>>
>> 2) You sound angry, and if you are, why?
>
>No, I wasn't.
>
>The issue, as I understand it, is this: if I have multiple WebGUI servers, running from replicated databases, one of which is defined as the 'master' instance, how do we handle reads of data which is -- because we're in admin mode -- *required* to be current.
>
>My recommendation originally was to allow the read to be to whatever instance was local, and force a cache flush at that point.
>
>You, if I understood you correctly (and vice versa), said that wasn't efficient enough, and that it was better to short circuit the read in the code directly to the master.
>
>Now, as I recap things, I wonder how you *could*, since they might be on different machines with no cross-connects, but perhaps I've misunderstood the issue completely.
>
>But in any event, if you *don't* force cache flushes (by which, in this case, I actually mean forcing an update between the replicated databases -- but from 30,000 feet, you can treat those replicated databases as a 'local cache', I think), then the slaves will be giving out bad data...
>
>and since your stated target for WebGUI is as an applications development environment -- notwithstanding that the only application being developed by a large part of its userbase is "my website" :-) -- then it seems to me that you can't just brush off the question he asked... which is what it seemed to *me* like you were doing.
>
>But I might be wrong. :-)
>
>Cheers,
>-- jra
>
>-------------------------------------------------------
>This SF.Net email sponsored by Black Hat Briefings & Training.
>Attend Black Hat Briefings & Training, Las Vegas July 24-29 -
>digital self defense, top technical experts, no vendor pitches,
>unmatched networking opportunities. Visit www.blackhat.com
>_______________________________________________
>Pbwebgui-development mailing list
>Pbw...@li...
>https://lists.sourceforge.net/lists/listinfo/pbwebgui-development

JT ~ Plain Black
Create like a god, command like a king, work like a slave.
From: David S. <dp...@di...> - 2004-07-11 17:32:08
Sorry, I was just trying to forward the discussion.

On 7/11/2004 8:35:24 AM, pbw...@li... wrote:
> I'm not saying that your point is unreasonable, but this side effect is part of using replicated databases. I cannot think of a way to implement this in a reasonable manner that will also solve the problem you bring up.
>
> I'm not sure it was you, but someone mentioned that if the user ever edits something during their session, then for the rest of their session they should never get anything from a slave again. At that point you might as well not even use slaves, cuz they'll hardly ever be used.
>
> I also think that there's not much of a reason to solve a problem that does not yet exist. Before we spend a ton of time engineering a solution to a problem that doesn't exist, let's see if the people that actually use this functionality find this or other things to be a problem. Could we do that?
>
> On Sun, 11 Jul 2004 08:52:01 -0700 David Schwartz <dp...@di...> wrote:
> >The discussion thus far has reached the point where y'all have figured out a way to divert requests for multiple databases to just the master, but on a session-based granularity. I've brought up a legitimate scenario when someone would be working in such an
From: Jay R. A. <jr...@ba...> - 2004-07-11 17:05:51
On Sun, Jul 11, 2004 at 10:35:24AM -0500, JT Smith wrote:
> I'm not saying that your point is unreasonable, but this side effect is part of using replicated databases. I cannot think of a way to implement this in a reasonable manner that will also solve the problem you bring up.
>
> I'm not sure it was you, but someone mentioned that if the user ever edits something during their session, then for the rest of their session they should never get anything from a slave again. At that point you might as well not even use slaves, cuz they'll hardly ever be used.

Well, this seems to depend entirely on the ratio of editing work to site visitors that you get; I don't think that's a reasonable assumption at all.

> I also think that there's not much of a reason to solve a problem that does not yet exist. Before we spend a ton of time engineering a solution to a problem that doesn't exist, let's see if the people that actually use this functionality find this or other things to be a problem. Could we do that?

On this point, while that's not *always* the safest design approach, I believe you're probably correct.

Cheers,
-- jra
From: Jay R. A. <jr...@ba...> - 2004-07-11 17:02:27
On Sat, Jul 10, 2004 at 11:05:51PM -0500, JT Smith wrote:
> >I hate to sound like a pissant, but that doesn't help precisely the
> >audience who care: those people who might, in fact, need to use it.
> >
> >So I guess the question is, how much impact will it have on them?
>
> 1) I have no idea what you're even asking.
>
> 2) You sound angry, and if you are, why?

No, I wasn't.

The issue, as I understand it, is this: if I have multiple WebGUI servers, running from replicated databases, one of which is defined as the 'master' instance, how do we handle reads of data which is -- because we're in admin mode -- *required* to be current.

My recommendation originally was to allow the read to be to whatever instance was local, and force a cache flush at that point.

You, if I understood you correctly (and vice versa), said that wasn't efficient enough, and that it was better to short circuit the read in the code directly to the master.

Now, as I recap things, I wonder how you *could*, since they might be on different machines with no cross-connects, but perhaps I've misunderstood the issue completely.

But in any event, if you *don't* force cache flushes (by which, in this case, I actually mean forcing an update between the replicated databases -- but from 30,000 feet, you can treat those replicated databases as a 'local cache', I think), then the slaves will be giving out bad data...

and since your stated target for WebGUI is as an applications development environment -- notwithstanding that the only application being developed by a large part of its userbase is "my website" :-) -- then it seems to me that you can't just brush off the question he asked... which is what it seemed to *me* like you were doing.

But I might be wrong. :-)

Cheers,
-- jra
From: JT S. <jt...@pl...> - 2004-07-11 16:20:49
I'm not saying that your point is unreasonable, but this side effect is part of using replicated databases. I cannot think of a way to implement this in a reasonable manner that will also solve the problem you bring up.

I'm not sure it was you, but someone mentioned that if the user ever edits something during their session, then for the rest of their session they should never get anything from a slave again. At that point you might as well not even use slaves, cuz they'll hardly ever be used.

I also think that there's not much of a reason to solve a problem that does not yet exist. Before we spend a ton of time engineering a solution to a problem that doesn't exist, let's see if the people that actually use this functionality find this or other things to be a problem. Could we do that?

On Sun, 11 Jul 2004 08:52:01 -0700 David Schwartz <dp...@di...> wrote:
>The discussion thus far has reached the point where y'all have figured out a way to divert requests for multiple databases to just the master, but on a session-based granularity. I've brought up a legitimate scenario when someone would be working in such an environment and make some changes, then close the session and want to view their changes "live" before proceeding with more changes. I don't think this scenario is unreasonable or even very rare.
>
>The implication in your reply is that the WG content admin would have some idea of how long it would take to propagate the changes based on the internal server architecture. I think you are making an assumption here where the guy who's the "Admin" of the WG content is also the "admin" of the servers. For a single-server scenario, that's probably valid. But when you start to venture off into sites that are sophisticated enough to require replicated databases, it's increasingly likely that the site content admin is not the same person who maintains the operation of the servers themselves.
>
>It's probably no skin off my back because I can't imagine running replicated databases, not for a while at least. But, in the spirit of contributing to the discussion, I'm just suggesting a reasonable usage scenario that you might want to take into consideration in your present design.
>
>-David
>
>JT Smith wrote:
>
>> If you are running multiple databases, which you probably won't be, yes it could take a few seconds or even minutes to see your changes. It really depends upon how busy your servers are as to how quickly they are resynced.
>>
>> 98% of WebGUI users will never use multiple databases, and therefore won't even know that this feature exists, and won't be affected by it.
>>
>> On Sat, 10 Jul 2004 00:48:09 -0700 David Schwartz <dp...@di...> wrote:
>> >When I modify a WG site (in Admin mode), I tend to put some stuff on a page, then have a look at it from the "outside" (after logging out) to be sure I got all of my security settings correct. What happens in this situation? If it takes "a while" (are we talking seconds, minutes, or what?) for the slaves to get updated, my interpretation of this discussion implies that it's fairly likely that none of the changes made in Admin mode will actually be visible for "a while". Or am I missing something?
>> >
>> >-David

JT ~ Plain Black
Create like a god, command like a king, work like a slave.
From: David S. <dp...@di...> - 2004-07-11 16:01:14
The discussion thus far has reached the point where y'all have figured out a way to divert requests for multiple databases to just the master, but on a session-based granularity. I've brought up a legitimate scenario when someone would be working in such an environment and make some changes, then close the session and want to view their changes "live" before proceeding with more changes. I don't think this scenario is unreasonable or even very rare.

The implication in your reply is that the WG content admin would have some idea of how long it would take to propagate the changes based on the internal server architecture. I think you are making an assumption here where the guy who's the "Admin" of the WG content is also the "admin" of the servers. For a single-server scenario, that's probably valid. But when you start to venture off into sites that are sophisticated enough to require replicated databases, it's increasingly likely that the site content admin is not the same person who maintains the operation of the servers themselves.

It's probably no skin off my back because I can't imagine running replicated databases, not for a while at least. But, in the spirit of contributing to the discussion, I'm just suggesting a reasonable usage scenario that you might want to take into consideration in your present design.

-David

JT Smith wrote:
> If you are running multiple databases, which you probably won't be, yes it could take a few seconds or even minutes to see your changes. It really depends upon how busy your servers are as to how quickly they are resynced.
>
> 98% of WebGUI users will never use multiple databases, and therefore won't even know that this feature exists, and won't be affected by it.
>
> On Sat, 10 Jul 2004 00:48:09 -0700 David Schwartz <dp...@di...> wrote:
> >When I modify a WG site (in Admin mode), I tend to put some stuff on a page, then have a look at it from the "outside" (after logging out) to be sure I got all of my security settings correct. What happens in this situation? If it takes "a while" (are we talking seconds, minutes, or what?) for the slaves to get updated, my interpretation of this discussion implies that it's fairly likely that none of the changes made in Admin mode will actually be visible for "a while". Or am I missing something?
> >
> >-David
From: JT S. <jt...@pl...> - 2004-07-11 04:52:14
>I hate to sound like a pissant, but that doesn't help precisely the
>audience who care: those people who might, in fact, need to use it.
>
>So I guess the question is, how much impact will it have on them?

1) I have no idea what you're even asking.

2) You sound angry, and if you are, why?

JT ~ Plain Black
Create like a god, command like a king, work like a slave.
From: Roy J. <RJo...@sp...> - 2004-07-10 22:31:40
Attached are the modified versions of HTMLForm.pm and Form.pm that allow sorting by value of the options hash for the selectList method. I tested these on our dev site (6.0.3) and they worked fine. I hope that someone else may find these mods useful as well. Special thanks to Andy for the info on Tie::IxHash.

Should I post this to the user contributions section as well, or is the mailing list sufficient?

Thanks,
Roy

>>> RJo...@sp... 07/10/04 11:57AM >>>
> Hello,
>
> I realized that when using HTMLForm::selectList, the items in the list were returned in an unsorted order. For a particular form I'm creating, I needed the items in the list to be sorted by the 'value' of the options hash. Attached is some code that does this. (alphabetically ascending, case-insensitive)
>
> Basically, when you call selectList from Form.pm you pass an additional parameter:
>
> sortByValue => 1
>
> I haven't actually used this in a WebGUI site yet because I wanted to run this by you guys before I got too excited. The attached script simply prints the generated HTML to stdout and demonstrates a simple example.
>
> Thanks,
>
> Roy

>> It's a good idea to include this functionality in the actual selectList method, but I think you can make it a bit shorter. Check out the Tie::IxHash module which is actually used all over the place in WebGUI already to sort select lists. I don't think anyone thought to include it in the actual selectList method yet though for some reason. :) Check out the attached script.
>>
>> -Andy

Okay, I didn't know it was possible to make a hash keep elements in any kind of order. All of my Perl books have been beating the idea into my head that 'Hashes have no order except the order that perl likes' so I appreciate your feedback.

I will re-implement my idea using your suggested method as it will be cleaner and easier to follow in the code. This time I will also update the POD and test on our development site. If all goes well, I will repost.

Thanks again,
Roy
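[Archive note: the core of the sortByValue idea above is simply keeping the options in an order sorted case-insensitively by display value, since a plain hash has no defined iteration order; Tie::IxHash is the Perl module that preserves insertion order. A rough Python sketch of the same sort, using hypothetical data rather than the attached script:]

```python
# Options hash for a select list: form value => display label.
options = {"tx": "Texas", "al": "Alabama", "wy": "Wyoming", "ny": "new york"}

# Sort by display label, case-insensitively, keeping key/label pairs
# together -- the equivalent of loading a Tie::IxHash in sorted order.
sorted_options = dict(sorted(options.items(), key=lambda kv: kv[1].lower()))

print(list(sorted_options.values()))
# ['Alabama', 'new york', 'Texas', 'Wyoming']
```

Python dicts (3.7+) preserve insertion order, so, like a tied hash in Perl, the sorted order survives later iteration when the select options are rendered.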
From: Jay R. A. <jr...@ba...> - 2004-07-10 21:40:00
On Sat, Jul 10, 2004 at 09:13:26AM -0500, JT Smith wrote:
> If you are running multiple databases, which you probably won't be, yes it could take a few seconds or even minutes to see your changes. It really depends upon how busy your servers are as to how quickly they are resynced.
>
> 98% of WebGUI users will never use multiple databases, and therefore won't even know that this feature exists, and won't be affected by it.

I hate to sound like a pissant, but that doesn't help precisely the audience who care: those people who might, in fact, need to use it.

So I guess the question is, how much impact will it have on them?

Cheers,
-- jra
From: Roy J. <RJo...@sp...> - 2004-07-10 15:57:36
> Hello,
>
> I realized that when using HTMLForm::selectList, the items in the list were returned in an unsorted order. For a particular form I'm creating, I needed the items in the list to be sorted by the 'value' of the options hash. Attached is some code that does this. (alphabetically ascending, case-insensitive)
>
> Basically, when you call selectList from Form.pm you pass an additional parameter:
>
> sortByValue => 1
>
> I haven't actually used this in a WebGUI site yet because I wanted to run this by you guys before I got too excited. The attached script simply prints the generated HTML to stdout and demonstrates a simple example.
>
> Thanks,
>
> Roy

>> It's a good idea to include this functionality in the actual selectList method, but I think you can make it a bit shorter. Check out the Tie::IxHash module which is actually used all over the place in WebGUI already to sort select lists. I don't think anyone thought to include it in the actual selectList method yet though for some reason. :) Check out the attached script.
>>
>> -Andy

Okay, I didn't know it was possible to make a hash keep elements in any kind of order. All of my Perl books have been beating the idea into my head that 'Hashes have no order except the order that perl likes' so I appreciate your feedback.

I will re-implement my idea using your suggested method as it will be cleaner and easier to follow in the code. This time I will also update the POD and test on our development site. If all goes well, I will repost.

Thanks again,
Roy
From: JT S. <jt...@pl...> - 2004-07-10 14:58:46
If you are running multiple databases, which you probably won't be, yes it could take a few seconds or even minutes to see your changes. It really depends upon how busy your servers are as to how quickly they are resynced.

98% of WebGUI users will never use multiple databases, and therefore won't even know that this feature exists, and won't be affected by it.

On Sat, 10 Jul 2004 00:48:09 -0700 David Schwartz <dp...@di...> wrote:
>When I modify a WG site (in Admin mode), I tend to put some stuff on a page, then have a look at it from the "outside" (after logging out) to be sure I got all of my security settings correct. What happens in this situation? If it takes "a while" (are we talking seconds, minutes, or what?) for the slaves to get updated, my interpretation of this discussion implies that it's fairly likely that none of the changes made in Admin mode will actually be visible for "a while". Or am I missing something?
>
>-David
>
>Christian Zoellin wrote:
>
>> Exactly, and this is what the tag on the session should take care of (in my suggested solution). As soon as someone updates data (to the master), he gets the most up2date version for all following reads. That way people see the results of THEIR modifications immediately. Turning on Admin mode would also set the tag on the session.
>>
>> Transparent and correct. That way, also things like adding USS would work correctly (in the sense that no single user would notice any difference). With your solution, the USS wobject will not be able to use any replicants. At least not without the user noticing.
>>
>> I will not need replication in the near future, so how it is implemented is not important, as long as it doesn't break existing wobjects (that don't want to use the feature).
>>
>> Just my 2 cents.
>>
>> Chris
>>
>> JT Smith wrote:
>>
>> > It's already implemented so this is somewhat of a lame duck discussion, but I think it's important that everyone understand what's going on.
>> >
>> > Using replicated databases isn't as easy as just all writes go to the master and all reads come from the slave. There is a such thing as a high-priority read. Let me give you an example.
>> >
>> > Let's say I'm editing my user profile (I'm not in admin mode). The data is read from a slave and then when I hit save it's written to the master. Now it may be a few minutes (or even longer) before the slave is updated. Let's say I forgot to update my phone number as well. I immediately go back and edit my profile, which reads from the slave. The problem here is that my new email address hasn't been written to the slave yet, so when the form is populated, it gets populated with stale data; then when I hit save again, my new email address is overwritten with the old one, but my phone number is up to date.
>> >
>> > Therefore you have reads, writes, and high-priority reads. The latter two need to come from the master, and only the reads can come from the slave.
>> >
>> > Then you might say, "Well then, anything that deals with editing must come from the master." That's also easier said than done. For instance if you have getters and setters: the getters are used both to populate edit forms, and to display data to the user.
>> >
>> > Anyway, all I'm trying to get across is that this isn't a simple flag to be thrown. It's a decision that must be made for each query by a human at coding time. That's the reason why I implemented it in the way I did.
>> >
>> > On Fri, 09 Jul 2004 22:53:27 +0200 Christian Zoellin <chr...@zo...> wrote:
>> >
>> >> So, why not transparently use the replicants until an UPDATE, CREATE, DELETE etc. statement is issued. After that, there should be a switch on the user session to tie the session to the master db. That way, users that just read from the database will use the replicants and users that change content will always see the most up2date version. Only updates to the database for statistics (we always have those for the page statistic) would have to be treated differently in that they don't set the switch.
>> >>
>> >> I think that would be as transparent as it gets.
>> >>
>> >> Chris
>> >>
>> >> JT Smith wrote:
>> >>
>> >>>> 1) Aren't there cases when people will want to update a central database but not have admin on? For instance, posting to a "public" message board. I'm assuming the RDBMS takes care of keeping this in sync? I haven't had a need to set up replicated DBs yet.
>> >>>
>> >>> Yes there are, but in those cases the developer will use the main database and not one of the replicants.
>> >>>
>> >>> In fact, all writes should use the master and not a slave.
>> >>>
>> >>>> 2) Transparency == good, normally. The more difficult path for you is probably the simplest for everyone.
>> >>>
>> >>> Could you explain what you mean by that?
>> >>>
>> >>>> What will adding replication do to performance for (an assumed majority) of WebGUI users that run only one database?
>> >>>
>> >>> It will do nothing to the performance of the majority. They won't have to set anything up differently or anything else.
>> >>>
>> >>> In fact, I should have mentioned this before: if no slaves are defined then the master will always be used regardless.
>> >>>
>> >>>> On Fri, 2004-07-09 at 10:49, JT Smith wrote:
>> >>>>
>> >>>>> I've been struggling over the last week with how to implement the ability to use a replicated database as a read source. I've experimented with about 10 different ways of doing this and all either didn't work or were way too convoluted to be useful. I've come down to two ways to implement this, but I need about 2 seconds of feedback from a couple of people on this list to make sure I'm on the right track.
>> >>>>>
>> >>>>> Way #1: Refactor WebGUI::SQL. Change WebGUI::SQL to not use all those class methods, but instead instance methods. So we'd add several constructors and remove the ability to pass in a new database handler on each separate method. The new constructors would be:
>> >>>>>
>> >>>>> new ( [ dbh ] )
>> >>>>>
>> >>>>> This would default to the WebGUI db handler, but you could override it by passing in a dbh. This would be the common one everyone would use.
>> >>>>>
>> >>>>> newReplicant ( )
>> >>>>>
>> >>>>> This would use a random one of the defined replicated databases, except if the user was in admin mode.
>> >>>>>
>> >>>>> newHandler ( dsn, user, pass [ , options ] )
>> >>>>>
>> >>>>> This would create a new DBH and set it in the object instance. And would require the use of a disconnect() method to destroy it.
>> >>>>>
>> >>>>> So to use any of these you'd do something like this:
>> >>>>>
>> >>>>>   my $db = WebGUI::SQL->newReplicant;
>> >>>>>   my @arr = $db->quickArray($sql);
>> >>>>>
>> >>>>> Way #2: Progie does more work, but gets more control. We leave WebGUI::SQL as is, except for adding one method like:
>> >>>>>
>> >>>>> WebGUI::SQL->getReplicant()
>> >>>>>
>> >>>>> Then the programmer in his code would write something like (when he wants to use a replicant):
>> >>>>>
>> >>>>>   my $dbh = $session{dbh};
>> >>>>>   unless ($session{var}{adminOn} || $shouldntUseReplicant) {
>> >>>>>       $dbh = WebGUI::SQL->getReplicant;
>> >>>>>   }
>> >>>>>   my @arr = WebGUI::SQL->quickArray($sql, $dbh);
>> >>>>>
>> >>>>> Way #1 is cleaner, but requires refactoring all of WebGUI. (which I'll do if you guys think this is the best way to go)
>> >>>>>
>> >>>>> Way #2 is not nearly as clean, but requires no change to the rest of WebGUI.
>> >>>>>
>> >>>>> I hope to implement one of these in a few hours, so quick feedback is appreciated.
>> >>>>>
>> >>>>> JT ~ Plain Black
>> >>>>> Create like a god, command like a king, work like a slave.
>> >>>>> https://lists.sourceforge.net/lists/listinfo/pbwebgui-development >> >>>> >> >>>> >> >>>> -- >> >>>> *-._.-*^*-._.-*^*-._.-*^*-._.-*^*-._.-*^*-._.-*^*-._.-*^*-._.-* >> >>>> >> >>>> Daniel Collis Puro >> >>>> CTO and Lead Developer, MassLegalServices.org >> >>>> Massachusetts Law Reform Institute >> >>>> 99 Chauncy St., Suite 500 >> >>>> Boston, MA 02111 >> >>>> 617-357-0019 ext. 342 >> >>>> dp...@ml... >> >>>> http://www.masslegalservices.org >> >>>> >> >> ------------------------------------------------------- >> This SF.Net email sponsored by Black Hat Briefings & Training. >> Attend Black Hat Briefings & Training, Las Vegas July 24-29 - >> digital self defense, top technical experts, no vendor pitches, >> unmatched networking opportunities. Visit www.blackhat.com >> _______________________________________________ >> Pbwebgui-development mailing list >> Pbw...@li... >> https://lists.sourceforge.net/lists/listinfo/pbwebgui-development > > > >------------------------------------------------------- >This SF.Net email sponsored by Black Hat Briefings & Training. >Attend Black Hat Briefings & Training, Las Vegas July 24-29 - >digital self defense, top technical experts, no vendor pitches, >unmatched networking opportunities. Visit www.blackhat.com >_______________________________________________ >Pbwebgui-development mailing list >Pbw...@li... >https://lists.sourceforge.net/lists/listinfo/pbwebgui-development JT ~ Plain Black Create like a god, command like a king, work like a slave. |
From: David S. <dp...@di...> - 2004-07-10 08:03:53
|
When I modify a WG site (in Admin mode), I tend to put some stuff on a page, then have a look at it from the "outside" (after logging out) to be sure I got all of my security settings correct. What happens in this situation? If it takes "a while" (are we talking seconds, minutes, or what?) for the slaves to get updated, my interpretation of this discussion implies that it's fairly likely that none of the changes made in Admin mode will actually be visible for "a while". Or am I missing something? -David Christian Zoellin wrote: > Exactly, and this is what the tag on the session should take care of > (in my suggested solution). > As soon as someone updates data (to the master), he gets the most > up-to-date version for all following reads. That way people see the results > of THEIR modifications immediately. Turning on Admin mode would also set > the tag on the session. > > Transparent and correct. > That way, also things like adding USS would work correctly (in the sense > that no single user would notice any difference). With your solution, > the USS wobject will not be able to use any replicants. At least not > without the user noticing. > > I will not need replication in the near future, so how it is implemented > is not important, as long as it doesn't break existing wobjects (that > don't want to use the feature). > > Just my 2 cents. > > Chris > > JT Smith wrote: > > > It's already implemented so this is somewhat of a lame-duck discussion, > > but I think it's important that everyone understand what's going on. > > > > Using replicated databases isn't as easy as just all writes go to the > > master and all reads come from the slave. There is such a thing as a > > high-priority read. Let me give you an example. > > > > Let's say I'm editing my user profile (I'm not in admin mode). The data > > is read from a slave and then when I hit save it's written to the > > master. Now it may be a few minutes (or even longer) before the slave is > > updated. 
Let's say I forgot to update my phone number as well. I > > immediately go back and edit my profile, which reads from the slave. The > > problem here is that my new email address hasn't been written to the > > slave yet, so when the form is populated, it get's populated with stale > > data, then when I hit save again, my new email address is overwritten > > with the old one, but my phone number is up to date. > > > > Therefore you have reads, writes, and high-priority reads. The latter > > two need to come from the master, and only the reads can come from the > > slave. > > > > Then you might say, "Well then, anything that deals with editing must > > come from the master." That's also easier said than done. For instance > > if you have getter's and setters. The getters are used both to populate > > edit forms, and to display data to the user. > > > > Anyway, all I'm trying to get across is that this isn't a simple flag to > > be thrown. It's a decision, that must be made for each query by a human > > at coding time. That's the reason why I implemented it in the way I did. > > > > > > > > > > > > On Fri, 09 Jul 2004 22:53:27 +0200 > > Christian Zoellin <chr...@zo...> wrote: > > > >> So, why not transparently use the replicants until an UPDATE, CREATE, > >> DELETE etc. statement is issued. After that, there should be a switch > >> on the user session to tie the session to the master db. > >> That way, users that just read from the database will use the > >> replicants and users that change content will always see the most > >> up2date version. Only updates to the database for statistics (we > >> always have those for the page statistic) would have to be treated > >> different in that they don't set the switch. > >> > >> I think, that would be as transparent as it gets. > >> > >> Chris > >> > >> > >> JT Smith wrote: > >> > >>>> 1) Aren't there cases when people will want to update a central > >>>> database > >>>> but not have admin on? 
For instance, posting to a "public" message > >>>> board. I'm assuming the RDBMS takes care of keeping this in sync? I > >>>> haven't had a need to set up replicated DBs yet. > >>> > >>> > >>> > >>> > >>> Yes there are, but in those cases the developer will the main > >>> database and not one of the replicants. > >>> > >>> In fact, all writes should use the master and not a slave. > >>> > >>> > >>>> 2) Transparency == good, normally. The more difficult path for you is > >>>> probably the simplest for everyone. > >>> > >>> > >>> > >>> > >>> Could you explain what you mean by that? > >>> > >>>> What will adding replication do to performance for (an assumed > >>>> majority) > >>>> of WebGUI users that run only one database? > >>> > >>> > >>> > >>> It will do nothing to the performance of the majority. They won't > >>> have to set anything up differently or anything else. > >>> > >>> > >>> In fact, I should have mentioned this before, if no slaves are > >>> defined then the master will always be used regardless. > >>> > >>> > >>> > >>>> On Fri, 2004-07-09 at 10:49, JT Smith wrote: > >>>> > >>>>> I've been struggling over the last week with how to implement the > >>>>> ability to use a replicated database as a read source. I've > >>>>> experimented with about 10 different ways of doing this and all > >>>>> either didn't work or were way too convoluted to be useful. I've > >>>>> come down to two ways to implement this, but I need about 2 seconds > >>>>> of feedback from a couple of people on this list to make sure I'm > >>>>> on the right track. > >>>>> > >>>>> Way #1: Refactor WebGUI::SQL Change WebGUI::SQL to not use all > >>>>> those class methods, but instead instance methods. So we'd add > >>>>> several constructors and remove the ability to pass in a new > >>>>> database handler on each seperate method. 
The new constructors > >>>>> would be: > >>>>> > >>>>> new ( [ dbh ] ) > >>>>> > >>>>> This would default to the WebGUI db handler, but you could override > >>>>> it by passing in a dbh. This would be the common one everyone would > >>>>> use. > >>>>> > >>>>> newReplicant ( ) > >>>>> > >>>>> This would use a random one of the defined replicated databases, > >>>>> except if the user was in admin mode. > >>>>> > >>>>> > >>>>> newHandler ( dsn, user, pass [ , options ] ) > >>>>> > >>>>> This would create a new DBH and set it in the object instance. And > >>>>> would require the use of a disconnect() method to destroy it. > >>>>> > >>>>> > >>>>> So to use any of these you'd do something like this: > >>>>> > >>>>> > >>>>> my $db = WebGUI::SQL->newReplicant; > >>>>> my @arr = $db->quickArray($sql); > >>>>> > >>>>> > >>>>> > >>>>> > >>>>> Way #2: Progie does more work, but gets more control We leave > >>>>> WebGUI::SQL as is, except for adding one method like: > >>>>> > >>>>> WebGUI::SQL->getReplicant() > >>>>> > >>>>> > >>>>> Then the programmer in his code would write something like (when he > >>>>> wants to use a replicant): > >>>>> > >>>>> my $dbh = $session{dbh}; > >>>>> unless ($session{var}{adminOn} || $shouldntUseReplicant) { > >>>>> $dbh = WebGUI::SQL->getReplicant; > >>>>> } > >>>>> my @arr = WebGUI::SQL->quickAarry($sql,$dbh); > >>>>> > >>>>> > >>>>> > >>>>> Way #1 is cleaner, but requires refactoring all of WebGUI. (which > >>>>> I'll do if you guys think this is the best way to go) > >>>>> > >>>>> Way #2 is not nearly as clean, but requires no change to the rest > >>>>> of WebGUI. > >>>>> > >>>>> > >>>>> I hope to implement one of these in a few hours, so quick feedback > >>>>> is appreciated. > >>>>> > >>>>> > >>>>> JT ~ Plain Black > >>>>> > >>>>> Create like a god, command like a king, work like a slave. > >>>>> > >>>>> ------------------------------------------------------- > >>>>> This SF.Net email sponsored by Black Hat Briefings & Training. 
> >>>>> Attend Black Hat Briefings & Training, Las Vegas July 24-29 - > >>>>> digital self defense, top technical experts, no vendor pitches, > >>>>> unmatched networking opportunities. Visit www.blackhat.com > >>>>> _______________________________________________ > >>>>> Pbwebgui-development mailing list > >>>>> Pbw...@li... > >>>>> https://lists.sourceforge.net/lists/listinfo/pbwebgui-development > >>>> > >>>> > >>>> -- > >>>> *-._.-*^*-._.-*^*-._.-*^*-._.-*^*-._.-*^*-._.-*^*-._.-*^*-._.-* > >>>> > >>>> Daniel Collis Puro > >>>> CTO and Lead Developer, MassLegalServices.org > >>>> Massachusetts Law Reform Institute > >>>> 99 Chauncy St., Suite 500 > >>>> Boston, MA 02111 > >>>> 617-357-0019 ext. 342 > >>>> dp...@ml... > >>>> http://www.masslegalservices.org > >>>> > > ------------------------------------------------------- > This SF.Net email sponsored by Black Hat Briefings & Training. > Attend Black Hat Briefings & Training, Las Vegas July 24-29 - > digital self defense, top technical experts, no vendor pitches, > unmatched networking opportunities. Visit www.blackhat.com > _______________________________________________ > Pbwebgui-development mailing list > Pbw...@li... > https://lists.sourceforge.net/lists/listinfo/pbwebgui-development |
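[Editor's note] JT's lost-update scenario above (an edit form populated from a lagging slave silently clobbers a write that only reached the master) is easy to simulate. The following is a hedged, language-agnostic sketch in Python, not WebGUI code; all names are illustrative. It models the master and a lagging slave as dicts and shows why the second, "high-priority" read must come from the master.

```python
# Toy model of master/slave replication lag and the "high-priority read"
# problem described in the thread. Illustrative only, not WebGUI code.

class ReplicatedDB:
    def __init__(self):
        self.master = {}
        self.slave = {}  # lags behind the master until sync() runs

    def write(self, key, value):
        self.master[key] = value  # all writes go to the master

    def read(self, key, high_priority=False):
        # A high-priority read must come from the master, because the
        # slave may not have replicated the latest write yet.
        src = self.master if high_priority else self.slave
        return src.get(key)

    def sync(self):
        # Replication catching up (asynchronous in a real RDBMS).
        self.slave = dict(self.master)

db = ReplicatedDB()
db.write("email", "old@example.com")
db.sync()

# First profile edit: change the email address. The slave has not
# caught up yet.
db.write("email", "new@example.com")

# Second edit immediately afterwards: populating the form from the
# slave yields stale data, which would overwrite the new address on
# save. Reading from the master avoids the lost update.
stale = db.read("email")                      # -> "old@example.com"
fresh = db.read("email", high_priority=True)  # -> "new@example.com"
```

The same reasoning explains David's question: content saved in Admin mode exists only on the master until replication catches up, which is why admin sessions read from the master.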
From: Andy G. <an...@hy...> - 2004-07-10 02:14:50
|
Roy Johnson wrote: > Hello, > > I realized that when using HTMLForm::selectList that the items in the list were returned in an unsorted order. For a particular form I'm creating, I needed the items in the list to be sorted by the 'value' of the options hash. Attached is some code that does this. (alphabetically ascending case-insensitive) > > Basically, when you call selectList from Form.pm you pass an additional parameter: > > sortByValue => 1 > > I haven't actually used this in a WebGUI site yet because I wanted to run this by you guys before I got too excited. The attached script simply prints the generated HTML to stdout and demonstrates a simple example. > > Thanks, > > Roy > It's a good idea to include this functionality in the actual selectList method, but I think you can make it a bit shorter. Check out the Tie::IxHash module which is actually used all over the place in WebGUI already to sort select lists. I don't think anyone thought to include it in the actual selectList method yet though for some reason. :) Check out the attached script. -Andy |
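[Editor's note] Roy's proposed sortByValue behaviour, which Andy suggests implementing with Tie::IxHash, boils down to sorting the options hash by its display values, ascending and case-insensitive. The actual patch is Perl; the following is a hypothetical Python rendering of the idea for readers skimming the thread.

```python
def sorted_options(options):
    """Return (key, value) pairs sorted by display value,
    ascending and case-insensitive -- the sortByValue behaviour."""
    return sorted(options.items(), key=lambda kv: kv[1].lower())

# Mixed-case values sort together regardless of capitalization.
opts = {"c": "Cherry", "a": "apple", "b": "Banana"}
pairs = sorted_options(opts)
# -> [("a", "apple"), ("b", "Banana"), ("c", "Cherry")]
```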
From: Roy J. <RJo...@sp...> - 2004-07-10 01:46:01
|
Hello, I realized that when using HTMLForm::selectList, the items in the list were returned in unsorted order. For a particular form I'm creating, I needed the items in the list to be sorted by the 'value' of the options hash. Attached is some code that does this. (alphabetically ascending, case-insensitive) Basically, when you call selectList from Form.pm you pass an additional parameter: sortByValue => 1 I haven't actually used this in a WebGUI site yet because I wanted to run this by you guys before I got too excited. The attached script simply prints the generated HTML to stdout and demonstrates a simple example. Thanks, Roy |
From: Jay R. A. <jr...@ba...> - 2004-07-10 00:15:44
|
On Fri, Jul 09, 2004 at 04:29:02PM -0500, JT Smith wrote: > >Would it be safer here to say "cacheable reads" and "non-cacheable > >reads" and get them all from the slaves, forcing an update on the latter > >types? That way you get your coherency for free. > > Do you mean forcing the slave to update itself from the master? I see > two problems with that: > > 1) It could take a while for that to happen and therefore be slow. > > 2) The only way I know how to do that is specific to each database > implementation and involves issuing commands to both the master and > slave. > > If I missed your point, please let me know. I'm not sure if you missed my point. Maybe I missed yours. Merely forcing the read to the master will indeed force you to get the correct data, but it leaves the slave still out of date. Given the limitations on forcing the slave to become up to date that you outline, though, I guess trying to piggyback the updating of the slaves onto the read isn't practical. Cheers, -- jra -- Jay R. Ashworth jr...@ba... Designer Baylink RFC 2100 Ashworth & Associates The Things I Think '87 e24 St Petersburg FL USA http://baylink.pitas.com +1 727 647 1274 "You know: I'm a fan of photosynthesis as much as the next guy, but if God merely wanted us to smell the flowers, he wouldn't have invented a 3GHz microprocessor and a 3D graphics board." -- Luke Girardi |
From: Christian Z. <chr...@zo...> - 2004-07-09 23:59:02
|
Now change "reads" to "users" and it will make sense. We can easily decide on a user-by-user basis. As soon as a user updates something, he is a non-cacheable user for the rest of his session. Things like forcing or broadcasting updates would be bad anyway. JT Smith wrote: >> Would it be safer here to say "cacheable reads" and "non-cacheable >> reads" and get them all from the slaves, forcing an update on the latter >> types? That way you get your coherency for free. > > Do you mean forcing the slave to update itself from the master? I see > two problems with that: > > 1) It could take a while for that to happen and therefore be slow. > > 2) The only way I know how to do that is specific to each database > implementation and involves issuing commands to both the master and slave. > > If I missed your point, please let me know. > > > JT ~ Plain Black > > Create like a god, command like a king, work like a slave. |
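[Editor's note] Christian's session-tagging idea above (a user becomes "non-cacheable" for the rest of his session as soon as he writes anything) can be sketched as a routing rule. This is a hedged Python illustration of the concept only, not the WebGUI implementation; the class and method names are invented for the example.

```python
# Sketch of the session-tagging idea: reads use a replicant until the
# session performs a write, after which the session is pinned to the
# master so the user always sees his own changes. Illustrative only.

class Session:
    WRITE_VERBS = ("UPDATE", "INSERT", "DELETE", "CREATE")

    def __init__(self, master, replicant):
        self.master = master
        self.replicant = replicant
        self.pinned = False  # set once the user writes anything

    def route(self, sql):
        """Return which database would handle this statement."""
        if sql.split()[0].upper() in self.WRITE_VERBS:
            self.pinned = True  # tag the session for the rest of its life
            return self.master
        # Reads go to the master once pinned, otherwise to a replicant.
        return self.master if self.pinned else self.replicant

s = Session(master="master-db", replicant="slave-db")
first = s.route("SELECT * FROM users")   # -> "slave-db"
s.route("UPDATE users SET alias='x'")    # pins the session to the master
second = s.route("SELECT * FROM users")  # -> "master-db"
```

Statistics-only writes (like the page counter Christian mentions) would need to bypass the pinning step, which is exactly the exception he calls out.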
From: Christian Z. <chr...@zo...> - 2004-07-09 23:55:05
|
Exactly, and this is what the tag on the session should take care of (in my suggested solution). As soon as someone updates data (to the master), he gets the most up-to-date version for all following reads. That way people see the results of THEIR modifications immediately. Turning on Admin mode would also set the tag on the session. Transparent and correct. That way, also things like adding USS would work correctly (in the sense that no single user would notice any difference). With your solution, the USS wobject will not be able to use any replicants. At least not without the user noticing. I will not need replication in the near future, so how it is implemented is not important, as long as it doesn't break existing wobjects (that don't want to use the feature). Just my 2 cents. Chris JT Smith wrote: > It's already implemented so this is somewhat of a lame-duck discussion, > but I think it's important that everyone understand what's going on. > > Using replicated databases isn't as easy as just all writes go to the > master and all reads come from the slave. There is such a thing as a > high-priority read. Let me give you an example. > > Let's say I'm editing my user profile (I'm not in admin mode). The data > is read from a slave and then when I hit save it's written to the > master. Now it may be a few minutes (or even longer) before the slave is > updated. Let's say I forgot to update my phone number as well. I > immediately go back and edit my profile, which reads from the slave. The > problem here is that my new email address hasn't been written to the > slave yet, so when the form is populated, it gets populated with stale > data, then when I hit save again, my new email address is overwritten > with the old one, but my phone number is up to date. > > Therefore you have reads, writes, and high-priority reads. The latter > two need to come from the master, and only the reads can come from the > slave. > > Then you might say, "Well then, anything that deals with editing must > come from the master." That's also easier said than done. For instance > if you have getters and setters. The getters are used both to populate > edit forms, and to display data to the user. > > Anyway, all I'm trying to get across is that this isn't a simple flag to > be thrown. It's a decision that must be made for each query by a human > at coding time. That's the reason why I implemented it in the way I did. > > > > > > On Fri, 09 Jul 2004 22:53:27 +0200 > Christian Zoellin <chr...@zo...> wrote: > >> So, why not transparently use the replicants until an UPDATE, CREATE, >> DELETE etc. statement is issued? After that, there should be a switch >> on the user session to tie the session to the master db. >> That way, users that just read from the database will use the >> replicants and users that change content will always see the most >> up-to-date version. Only updates to the database for statistics (we >> always have those for the page statistic) would have to be treated >> differently in that they don't set the switch. >> >> I think that would be as transparent as it gets. >> >> Chris >> >> >> JT Smith wrote: >> >>>> 1) Aren't there cases when people will want to update a central >>>> database >>>> but not have admin on? For instance, posting to a "public" message >>>> board. I'm assuming the RDBMS takes care of keeping this in sync? I >>>> haven't had a need to set up replicated DBs yet. >>> >>> >>> >>> >>> Yes there are, but in those cases the developer will use the main >>> database and not one of the replicants. >>> >>> In fact, all writes should use the master and not a slave. >>> >>> >>>> 2) Transparency == good, normally. The more difficult path for you is >>>> probably the simplest for everyone. >>> >>> >>> >>> >>> Could you explain what you mean by that? 
>>> >>>> What will adding replication do to performance for (an assumed >>>> majority) >>>> of WebGUI users that run only one database? >>> >>> >>> >>> It will do nothing to the performance of the majority. They won't >>> have to set anything up differently or anything else. >>> >>> >>> In fact, I should have mentioned this before, if no slaves are >>> defined then the master will always be used regardless. >>> >>> >>> >>>> On Fri, 2004-07-09 at 10:49, JT Smith wrote: >>>> >>>>> I've been struggling over the last week with how to implement the >>>>> ability to use a replicated database as a read source. I've >>>>> experimented with about 10 different ways of doing this and all >>>>> either didn't work or were way too convoluted to be useful. I've >>>>> come down to two ways to implement this, but I need about 2 seconds >>>>> of feedback from a couple of people on this list to make sure I'm >>>>> on the right track. >>>>> >>>>> Way #1: Refactor WebGUI::SQL Change WebGUI::SQL to not use all >>>>> those class methods, but instead instance methods. So we'd add >>>>> several constructors and remove the ability to pass in a new >>>>> database handler on each seperate method. The new constructors >>>>> would be: >>>>> >>>>> new ( [ dbh ] ) >>>>> >>>>> This would default to the WebGUI db handler, but you could override >>>>> it by passing in a dbh. This would be the common one everyone would >>>>> use. >>>>> >>>>> newReplicant ( ) >>>>> >>>>> This would use a random one of the defined replicated databases, >>>>> except if the user was in admin mode. >>>>> >>>>> >>>>> newHandler ( dsn, user, pass [ , options ] ) >>>>> >>>>> This would create a new DBH and set it in the object instance. And >>>>> would require the use of a disconnect() method to destroy it. 
>>>>> >>>>> >>>>> So to use any of these you'd do something like this: >>>>> >>>>> >>>>> my $db = WebGUI::SQL->newReplicant; >>>>> my @arr = $db->quickArray($sql); >>>>> >>>>> >>>>> >>>>> >>>>> Way #2: Progie does more work, but gets more control We leave >>>>> WebGUI::SQL as is, except for adding one method like: >>>>> >>>>> WebGUI::SQL->getReplicant() >>>>> >>>>> >>>>> Then the programmer in his code would write something like (when he >>>>> wants to use a replicant): >>>>> >>>>> my $dbh = $session{dbh}; >>>>> unless ($session{var}{adminOn} || $shouldntUseReplicant) { >>>>> $dbh = WebGUI::SQL->getReplicant; >>>>> } >>>>> my @arr = WebGUI::SQL->quickArray($sql,$dbh); >>>>> >>>>> >>>>> >>>>> Way #1 is cleaner, but requires refactoring all of WebGUI. (which >>>>> I'll do if you guys think this is the best way to go) >>>>> >>>>> Way #2 is not nearly as clean, but requires no change to the rest >>>>> of WebGUI. >>>>> >>>>> >>>>> I hope to implement one of these in a few hours, so quick feedback >>>>> is appreciated. >>>>> >>>>> >>>>> JT ~ Plain Black >>>>> >>>>> Create like a god, command like a king, work like a slave. >>>>> >>>>> ------------------------------------------------------- >>>>> This SF.Net email sponsored by Black Hat Briefings & Training. >>>>> Attend Black Hat Briefings & Training, Las Vegas July 24-29 - >>>>> digital self defense, top technical experts, no vendor pitches, >>>>> unmatched networking opportunities. Visit www.blackhat.com >>>>> _______________________________________________ >>>>> Pbwebgui-development mailing list >>>>> Pbw...@li... >>>>> https://lists.sourceforge.net/lists/listinfo/pbwebgui-development >>>> >>>> >>>> -- >>>> *-._.-*^*-._.-*^*-._.-*^*-._.-*^*-._.-*^*-._.-*^*-._.-*^*-._.-* >>>> >>>> Daniel Collis Puro >>>> CTO and Lead Developer, MassLegalServices.org >>>> Massachusetts Law Reform Institute >>>> 99 Chauncy St., Suite 500 >>>> Boston, MA 02111 >>>> 617-357-0019 ext. 342 >>>> dp...@ml... 
>>>> http://www.masslegalservices.org >>>> |
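[Editor's note] The newReplicant() selection rule JT describes above (pick a random one of the defined replicated databases, except in admin mode, and always use the master when no slaves are defined) can be summarized as a small routing function. This is a hedged Python sketch of the described semantics, not the actual WebGUI::SQL code.

```python
import random

# Sketch of the newReplicant() selection rule from the thread
# (assumed semantics, illustrative names): pick a random replicant,
# but fall back to the master when none are configured or when the
# user is in admin mode.

def pick_read_source(master, replicants, admin_mode=False):
    if admin_mode or not replicants:
        return master  # master is always used if no slaves are defined
    return random.choice(replicants)

# With no slaves configured, behavior is identical to a single-database
# setup -- which is why the majority of sites see no performance change.
assert pick_read_source("master", []) == "master"
assert pick_read_source("master", ["s1"], admin_mode=True) == "master"
assert pick_read_source("master", ["s1", "s2"]) in ("s1", "s2")
```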
From: JT S. <jt...@pl...> - 2004-07-09 22:14:27
|
>Would it be safer here to say "cacheable reads" and "non-cacheable >reads" and get them all from the slaves, forcing an update on the latter >types? That way you get your coherency for free. Do you mean forcing the slave to update itself from the master? I see two problems with that: 1) It could take a while for that to happen and therefore be slow. 2) The only way I know how to do that is specific to each database implementation and involves issuing commands to both the master and slave. If I missed your point, please let me know. JT ~ Plain Black Create like a god, command like a king, work like a slave. |