From: Don S. <do...@se...> - 2002-10-22 17:05:54
I guess what you "think" doesn't count for much. I would require valid
numbers. I'll assume you are using mysql with postnuke. Have you tested
with postgres or mssql or oracle or anything else? It doesn't sound like
it. I personally would want real evidence of quasi-scientific testing.
Just setting up a page to directly call the mysql functions and then
making the same calls through PEAR would be enough. Perhaps the PEAR
folks have done such things...

Don.

On Tue, 22 Oct 2002, Eloi George wrote:

> www.actionfigure.com is actually a postnuke site, but is a pretty good
> representation of what we may hope that one of our (or our client's) sites
> can achieve. Looking at his stats at
> http://www.action-figure.com/modules.php?op=modload&name=Stats&file=index
> it looks like the site has averaged 570,000 hits per month for the last 9
> months. That's 13 hits per second.
>
> Postnuke doesn't have any load-balancing features, so all this is handled
> on one server. If we don't count the main page (13 modules), the average
> page on the site looks like it uses 6 modules including banner rotations &
> user stats. If we assume just 1 query per module, then we end up with an
> average of 78 queries per second!
>
> Hmm. I don't think any db's can handle that much...
>
> -Eloi-
>
> ----- Original Message -----
> From: "Don Seiler" <do...@se...>
> To: "Eloi George" <el...@re...>
> Cc: <ma...@tu...>; <php...@li...>
> Sent: Tuesday, October 22, 2002 10:49 AM
> Subject: Re: [Phpwebsite-developers] PEAR DB and auto_increment
>
> > > I know I'm coming in too late in the game, but it'd be great if phpWS
> > > didn't try to support every db in the known world. That way there
> > > wouldn't be a bunch of middle-layer generalized db access protocols
> > > to slow everything down.
> >
> > If we use the PEAR standards, then phpWS doesn't need to worry about
> > supporting things, the PEAR team will have done all that legwork, and
> > I'm confident that the PEAR team will have done it the most efficient
> > way possible.
> >
> > Even if phpWS only wanted to support two or three, once you support
> > more than one you might as well use PEAR and support them all, rather
> > than write the functions yourself.
> >
> > I'd be interested in seeing some benchmarks about how the pear DB layer
> > affects performance. I don't think the hit would be that bad. What is
> > the largest-scale site running phpWS, btw?
> >
> > Don.
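For illustration, the side-by-side page Don describes could be as simple as
the sketch below, in the PHP 4 style of the day. The connection details,
database name, and the mod_users query are made-up placeholders, and the
printed overhead figure would depend entirely on the setup being tested.

<?php
// Illustrative timing page only: run the same SELECT N times through the
// raw mysql_* functions, then N times through PEAR DB, and compare.
// Credentials, database, and table are placeholders.
require_once 'DB.php';

function getmicrotime()
{
    list($usec, $sec) = explode(' ', microtime());
    return (float)$usec + (float)$sec;
}

$n   = 1000;
$sql = 'SELECT user_id, username FROM mod_users';

// Direct mysql_* calls.
$link = mysql_connect('localhost', 'user', 'pass');
mysql_select_db('phpws', $link);
$start = getmicrotime();
for ($i = 0; $i < $n; $i++) {
    $res = mysql_query($sql, $link);
    while ($row = mysql_fetch_assoc($res)) {
        // just drain the result set
    }
}
$direct = getmicrotime() - $start;

// The same queries through PEAR DB.
$db = DB::connect('mysql://user:pass@localhost/phpws');
if (DB::isError($db)) {
    die($db->getMessage());
}
$start = getmicrotime();
for ($i = 0; $i < $n; $i++) {
    $res = $db->query($sql);
    while ($row = $res->fetchRow(DB_FETCHMODE_ASSOC)) {
        // drain it the PEAR way
    }
}
$pear = getmicrotime() - $start;

printf("direct: %.3fs   pear: %.3fs   overhead: %.1f%%\n",
       $direct, $pear, ($pear / $direct - 1) * 100);
?>

Running it a few times and averaging the output would give the kind of
quasi-scientific number being asked for.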
From: Eloi G. <el...@re...> - 2002-10-22 18:57:30
> I guess what you "think" doesn't count for much.

<chuckle> You're right, it doesn't. When I say "I think", what I'm actually
saying is that I'm making a totally unqualified guess and I'm hoping that
someone else who knows the answer will step up. Incidentally, I just found
a post at http://www.phpbuilder.net/annotate/message.php3?id=1002047 which
said that 200 queries/sec is doable. So I was wrong on that assumption.

> I would require valid numbers. I'll assume you are using mysql with
> postnuke. Have you tested with postgres or mssql or oracle or anything
> else? It doesn't sound like it.

I guess I forgot to mention that that is -not- my site. It was a heavy-use
site mentioned in a post on postnuke.com back in August. I have no idea
what the db backend is. For all I know the stats page could have been made
up to sell advertising. That's why I used it as a -representation- of what
we hope to achieve with our own sites.

There is a benchmarking suite available on the PEAR site. An old benchmark
summary found on http://www.phplens.com/lens/adodb/ indicated a 152-176%
overhead. Of course results will vary on different configurations &
versions. <grin>

-Eloi-
From: Don S. <do...@se...> - 2002-10-22 19:03:26
Thanks. Perhaps the phpWS team wants to do some updated benchmarking tests
if they have the time. ;)

Sorry if I sounded caustic before. It just appeared to me that you were
throwing around statements without any backing, troll-style.

Thanks again,
Don.

On Tue, 22 Oct 2002, Eloi George wrote:

> > I guess what you "think" doesn't count for much.
>
> <chuckle> You're right, it doesn't. When I say "I think", what I'm
> actually saying is that I'm making a totally unqualified guess and I'm
> hoping that someone else who knows the answer will step up. Incidentally,
> I just found a post at
> http://www.phpbuilder.net/annotate/message.php3?id=1002047 which said
> that 200 queries/sec is doable. So I was wrong on that assumption.
>
> > I would require valid numbers. I'll assume you are using mysql with
> > postnuke. Have you tested with postgres or mssql or oracle or anything
> > else? It doesn't sound like it.
>
> I guess I forgot to mention that that is -not- my site. It was a
> heavy-use site mentioned in a post on postnuke.com back in August. I
> have no idea what the db backend is. For all I know the stats page could
> have been made up to sell advertising. That's why I used it as a
> -representation- of what we hope to achieve with our own sites.
>
> There is a benchmarking suite available on the PEAR site. An old
> benchmark summary found on http://www.phplens.com/lens/adodb/ indicated
> a 152-176% overhead. Of course results will vary on different
> configurations & versions. <grin>
>
> -Eloi-
From: Adam M. <ad...@tu...> - 2002-10-23 13:48:38
I think our decision to stick with the PEAR standard of storing the
sequence numbers is the best idea. The team here all felt like we were
having a debate that had been done before (by the pear team).

I will try doing some benchmarks later this week on the DB abstraction
stuff and post the real numbers to the dev list. I am just as curious as
everyone else and it sounds like a neat little side project...break the
database, Adam! :)

I also agree with Don that if we use one PEAR package we should go ahead
and use as many as we can instead of re-inventing the wheel.

Cheers!

Adam

---------------------------------
Adam Morton
Developer - Web Technology Group
Appalachian State University
http://phpwebsite.appstate.edu
From: Adam M. <ad...@tu...> - 2002-10-21 21:40:50
I'm with the single table idea for all sequence numbers.

This way we can have one select on init that loads all sequence numbers
into a session array. Then on inserts we only have to update the
core_sequencer table instead of select AND update it (thinking of reducing
database accesses).

Also, with all the sequence numbers in the core table, we can get rid of
the current _seq tables, getting our table total back down to around 23 :)
(organizational +).

Another plus? A quick re-write of the sqlmaxid function can simply return
the id in the sessioned sequence array (less database action=good...there
may be more functions that can take advantage).

We can continue to rely on the pear package, we just need to try and get
the best of all worlds.

Adam

---------------------------------
Adam Morton
Developer - Web Technology Group
Appalachian State University
http://phpwebsite.appstate.edu
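A rough sketch of what Adam is describing, with invented names: the
core_sequencer table and its table_name/current_id columns are placeholders
rather than anything in the phpWebsite core, and $db is assumed to be a
connected PEAR DB handle.

<?php
// Invented sketch of the "one core_sequencer table" idea.
require_once 'DB.php';
session_start();

// On init: a single select caches every sequence value for this session.
function loadSequences(&$db)
{
    $_SESSION['seq'] = $db->getAssoc(
        'SELECT table_name, current_id FROM core_sequencer');
}

// On insert: bump the cached value and push only an UPDATE back to the
// database, instead of a SELECT followed by an UPDATE.
function nextSeq(&$db, $table)
{
    $id = ++$_SESSION['seq'][$table];
    $db->query('UPDATE core_sequencer SET current_id = ? WHERE table_name = ?',
               array($id, $table));
    return $id;
}
?>

The obvious wrinkle is that two sessions caching stale values could hand out
the same id unless that UPDATE is made atomic.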
From: Steven L. <st...@tu...> - 2002-10-22 13:49:13
+1 to one table.

> I'm with the single table idea for all sequence numbers.
>
> This way we can have one select on init that loads all sequence numbers
> into a session array. Then on inserts we only have to update the
> core_sequencer table instead of select AND update it (thinking of
> reducing database accesses).
>
> Also, with all the sequence numbers in the core table, we can get rid of
> the current _seq tables, getting our table total back down to around 23
> :) (organizational +).
>
> Another plus? A quick re-write of the sqlmaxid function can simply
> return the id in the sessioned sequence array (less database
> action=good...there may be more functions that can take advantage).
>
> We can continue to rely on the pear package, we just need to try and get
> the best of all worlds.
>
> Adam
>
> ---------------------------------
> Adam Morton
> Developer - Web Technology Group
> Appalachian State University
> http://phpwebsite.appstate.edu

--
Steven Levin
Electronic Student Services
Appalachian State University
Phone: 828.262.2431
PhpWebsite Development Team
URL: http://phpwebsite.appstate.edu
Email: st...@NO...
From: Adam M. <ad...@tu...> - 2002-10-22 14:49:26
OK, after meeting with everyone here I've decided to change my vote to
sticking with the PEAR standard. This will require more tables but will
also reduce the "bottleneck" effect.

So...+1 on PEAR standard

Adam

> I'm with the single table idea for all sequence numbers.
>
> This way we can have one select on init that loads all sequence numbers
> into a session array. Then on inserts we only have to update the
> core_sequencer table instead of select AND update it (thinking of
> reducing database accesses).
>
> Also, with all the sequence numbers in the core table, we can get rid of
> the current _seq tables, getting our table total back down to around 23
> :) (organizational +).
>
> Another plus? A quick re-write of the sqlmaxid function can simply
> return the id in the sessioned sequence array (less database
> action=good...there may be more functions that can take advantage).
>
> We can continue to rely on the pear package, we just need to try and get
> the best of all worlds.
>
> Adam
>
> ---------------------------------
> Adam Morton
> Developer - Web Technology Group
> Appalachian State University
> http://phpwebsite.appstate.edu

---------------------------------
Adam Morton
Developer - Web Technology Group
Appalachian State University
http://phpwebsite.appstate.edu
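For comparison, the PEAR-standard route being voted for here looks roughly
like the following. The DSN and table names are placeholders; nextId()
creates the per-table sequence table on demand.

<?php
// PEAR-standard sequences: one emulated sequence per table.
require_once 'DB.php';

$db = DB::connect('mysql://user:pass@localhost/phpws');
if (DB::isError($db)) {
    die($db->getMessage());
}

// Pulls the next value from the mod_users sequence, creating it if needed.
$id = $db->nextId('mod_users');

$db->query('INSERT INTO mod_users (user_id, username) VALUES (?, ?)',
           array($id, 'example'));
?>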
From: Don S. <do...@se...> - 2002-10-21 13:56:43
Assuming your sqlInsert function uses nextId(), it looks fine. I have
reservations about your naming of the parameters "maxColumn" and
"returnMax". Perhaps naming them "idColumn" and "returnId" would be better.
I realize it's just names and doesn't really matter, but I might as well
mention it before it's written.

Don.

On Mon, 21 Oct 2002, Matthew McNaney wrote:

> Ok, I figured out what I did wrong. I did not set my test table's id
> column as primary; it was merely indexed. I thought all my id columns
> were primary. Anyway...
>
> I got it to work. Thanks Don. I actually found your letters to php.net
> through Google. Man that thing is fast :)
>
> As you said (and Bob's -1 explained) it appears to increment correctly.
> I am naming the sequence by the table name, as it will be unique per
> table.
>
> I have written the sqlInsert in my copy but I need a definitive
> parameter list. Please vote on the final outcome.
>
> function sqlInsert ($db_array, $table_name, $maxColumn=NULL,
> $check_dup=FALSE, $returnMax=FALSE, $show_sql=FALSE)
>
> db_array : associative array of columns=>values
>
> table_name : self-explanatory
>
> maxColumn : the id column to increment
>
> check_dup : does not insert a row if a duplicate db_array is found (will
> ignore the id for checking purposes)
>
> returnMax : returns the max id if TRUE.
>
> show_sql : shows the sql string for error checking.
>
> ---------------------------------
> User example:
> $this->user_id = $GLOBALS["core"]->sqlInsert($sql_array, "mod_users",
> "user_id", 1);
>
> I would like to get this voted on as I am anxious to update my code.
>
> Please post :)
>
> Thanks again,
> Matthew McNaney
> Internet Systems Architect
> Electronic Student Services
> Email: ma...@tu...
> URL: http://phpwebsite.appstate.edu
> Phone: 828-262-6493
> ICQ: 141057403
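Purely as a strawman for the vote, here is one guess at how the wrapper
could be wired up using nextId() and the suggested idColumn/returnId names.
This is not Matthew's actual code: $db is assumed to be a connected PEAR DB
object passed in, and the check_dup handling is left out.

<?php
// Strawman sqlInsert(): invented implementation for discussion only.
require_once 'DB.php';

function sqlInsert(&$db, $db_array, $table_name, $idColumn = NULL,
                   $check_dup = FALSE, $returnId = FALSE, $show_sql = FALSE)
{
    $id = NULL;
    if (!is_null($idColumn)) {
        // PEAR emulates sequences per table; name the sequence after it.
        $id = $db->nextId($table_name);
        if (DB::isError($id)) {
            return $id;
        }
        $db_array[$idColumn] = $id;
    }

    // ($check_dup handling omitted here for brevity.)

    $columns = implode(', ', array_keys($db_array));
    $marks   = implode(', ', array_fill(0, count($db_array), '?'));
    $sql     = "INSERT INTO $table_name ($columns) VALUES ($marks)";

    if ($show_sql) {
        echo $sql;
    }

    $result = $db->query($sql, array_values($db_array));
    if (DB::isError($result)) {
        return $result;
    }

    return $returnId ? $id : TRUE;
}
?>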