From: David O. <da...@qc...> - 2013-11-18 16:26:04
|
Hi, Can anyone tell us if it's possible to use the NaviServer nsdbpg driver to connect directly via SSL to an SSL-enabled PostgreSQL database? If not, I think we'd be able to encrypt communication between an application server and DB server using stunnel, but wondered if this is something nsdbpg supports directly? Regards, -- David |
From: Ian H. <har...@gm...> - 2013-12-05 23:44:53
|
Anyone? I am experimenting with Amazon RDS and it appears to require SSL. My naviserver nsdbipg module seems to barf on it. psql connects fine. |
From: Stephen D. <sd...@gm...> - 2013-12-06 00:58:27
|
On Thu, Dec 5, 2013 at 11:44 PM, Ian Harding <har...@gm...> wrote: > My naviserver nsdbipg module seems to barf on it. psql connects fine.

nsdbipg is configured with a datasource param which lets you pass any key=value pairs directly through to libpq. According to http://www.postgresql.org/docs/current/static/libpq-connect.html#LIBPQ-PARAMKEYWORDS the default sslmode is "prefer", which apparently means "don't bother trying". You actually need sslmode=require.

nsdbpg unfortunately implements its own datasource parsing, so you're stuck with user:host:db.

Are you also using nsssl? Looks like some modifications are required: http://www.postgresql.org/docs/9.3/static/libpq-ssl.html#LIBPQ-SSL-INITIALIZE

"If your application initializes libssl and/or libcrypto libraries and libpq is built with SSL support, you should call PQinitOpenSSL to tell libpq that the libssl and/or libcrypto libraries have been initialized by your application, so that libpq will not also initialize those libraries."

But you could try connecting to the db via SSL with nsssl unloaded, to confirm that it works. Not sure what the best way is to coordinate with nsssl over who should init the OpenSSL library. |
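[Since nsdbpg's own user:host:db parsing blocks passing sslmode through the datasource, one hedged workaround sketch (not from this thread, and untested here) relies on libpq also reading connection parameters from the environment. PGSSLMODE is a documented libpq environment variable; the nsd path below is a placeholder.]

```shell
# Force SSL for every libpq connection the server opens, without
# touching nsdbpg's user:host:db datasource string.
export PGSSLMODE=require

# Then start the server as usual (placeholder path):
# /usr/local/ns/bin/nsd -f -t /usr/local/ns/conf/nsd.tcl
```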
From: John B. <jo...@ma...> - 2014-01-01 17:57:55
|
Hello navifans, I'm in the process of trying to move to Naviserver from Aolserver. I'm finding that naviserver is not launching new threads, and is just queueing requests up to be handled by a single thread, despite maxthreads being set high. I can get around that problem if I change

ns_section "ns/server/${servername}"
ns_param minthreads 1

to

ns_section "ns/server/${servername}"
ns_param minthreads 100

and then the server *does* use multiple threads. That shouldn't be needed, though. Is there a config parameter I need to twiddle to enable on-demand thread creation? I also have the following two other problems: 1) if I try a minthreads of 50 or larger, nsd crashes on startup 2) apachebench with a concurrency of 50 causes nsd to crash (at 20 threads) I'm running debian squeeze, with a number of production aolservers that also run stably on the same box. Any thoughts? -john |
From: Wolfgang W. <wol...@di...> - 2014-01-02 07:39:22
|
Hello! We had crashes on startup as well. Here is a description and the solution of the problem: http://sourceforge.net/p/tcl/bugs/5238/ There is very good information on why you should do this, even when you experience no crashes, here: https://next-scripting.org/xowiki/docs/misc/thread-mallocs/index1

Concerning thread creation: use naviserver 4.99.5 from the repository, it has some significant improvements. Then check your server section in the configuration file. Ours looks like this:

ns_section "ns/server/${servername}"
ns_param directoryfile $directoryfile
ns_param pageroot $pageroot
ns_param enabletclpages true ;# Parse *.tcl files in pageroot.
# Maximum number of connection structures
ns_param maxconnections 240 ;# determines queue size as well (good number: 100 + maxthreads)
# Minimal and maximal number of connection threads
ns_param minthreads 20
ns_param maxthreads 40
# Connection thread lifetime management
ns_param connsperthread 10000 ;# Number of connections (requests) handled per thread
ns_param threadtimeout [expr 30*60] ;# Timeout for idle threads
# Connection thread creation eagerness
#ns_param lowwatermark 10 ;# create additional threads above this queue-full percentage
#ns_param highwatermark 100 ;# 80; allow concurrent creates above this queue-full percentage

The interesting bit is the "maxconnections" parameter. This is the maximum queue length. In the above example, naviserver would put a maximum of 240 requests into the queue before it returns errors to the clients. As there is a maximum of 40 threads, the real queue length is 200. When the number of requests in the queue reaches the lowwatermark (you can see the value on startup), threads are created. When it reaches the highwatermark, threads are created in parallel. With "ns_server stats" you get some statistics which tell you how many requests have been queued. There is also a page in nsstats (nsstats?@page=process) where you get the value in percent. If these values are very high, you should consider a higher value for minthreads. You can get much more information from here: https://next-scripting.org/xowiki/docs/misc/naviserver-connthreadqueue/index1

With naviserver 4.99.5, tcmalloc and the settings above, we have a very stable system which handles 200 pages/sec over several hours, up to 1500 pages/sec.

regards, Wolfgang

-- Wolfgang Winkler, digital concepts Novak Winkler OG, Software & Design, Landstraße 68, 4020 Linz, www.digital-concepts.com |
From: John B. <jo...@ma...> - 2014-01-02 11:20:36
|
> > We had crashes on startup as well. Here is a description and the solution of the problem: > http://sourceforge.net/p/tcl/bugs/5238/

Hi Wolfgang, thanks for the tip. I don't see anything in your bug log indicating that you had a _startup_ problem, but rather a problem under load, which appears to be a multithreading bug with the tcl malloc. I'm going to rebuild Tcl with a 3rd party malloc, and I was thinking of using TCMalloc http://goog-perftools.sourceforge.net/doc/tcmalloc.html I *think* I built Naviserver using ActiveState's Tcl build, and so I'll rebuild using a from-source Tcl, with your malloc swap, and see if that helps. Apachebench reports almost 2x the performance with Naviserver vs Aolserver (250 pgs/sec vs 150 pgs/sec) on a simple [clock seconds] page, so it's very much worth my time to try to get naviserver stable! -john |
From: Wolfgang W. <wol...@di...> - 2014-01-02 16:21:26
|
Am 2014-01-02 12:20, schrieb John Buckman: > I don't see anything in your bug log indicating that you had a _startup_ problem, but rather a problem under load, which appears to be a multithreading bug with the tcl malloc.

Hi John! The problem showed under heavy thread creation. When starting up with a high minthreads value, the server crashed 1 out of 3 times. When hitting the server with ab or siege after it had started up and a lot of new threads were created, the server crashed as well. The stack trace was similar. Both problems were fixed with tcmalloc. wolfgang |
From: John B. <jo...@ma...> - 2014-01-02 19:59:05
|
After rebuilding tcl and naviserver (current) from source, with -ltcmalloc, my naviserver is stable.

One small thing: I can crash naviserver on ctrl-c exit fairly easily: [02/Jan/2014:11:41:01][15491.7fee7ed75700][-conn:magnatune3:8] Fatal: nsthreads: pthread_join failed in Ns_ThreadJoin: Invalid argument

I conducted two stress tests that went well.

------ A trivial stress test with 200 threads in naviserver, calling a page that just does: <%= [clock seconds] %> ab -n 2000 -c 200 http://magnatune.com:9000/timer2.adp yields Requests per second: 150.44 [#/sec] (mean) Performance did not substantially change as I upped concurrency, nor does it look all that different with tcmalloc linked in. However, this tcl test might be too simple to see tcmalloc performance effects.

----- A "long page load" test, of: ab -n 2000 -c 200 http://magnatune.com:9000/timer.adp against: <%= [after 5000] %> <%= [clock seconds] %> was made to see what would happen if requests came in faster than they could be fulfilled. A minthreads of 5 was set, with maxthreads of 200. Good news: naviserver did indeed spawn new threads to deal with the backlog. ---- |
From: Gustaf N. <ne...@wu...> - 2014-01-02 13:53:05
|
Hi John, we are using naviserver on 20-30 installations with quite different configurations without the symptoms you are describing, so I would expect a problem in the configuration file. Is your setup heavy-weight (like OpenACS) or rather light-weight? If one is running a heavy-weight installation with minthreads 100 (as you are indicating) with limited resources, the OS might kill the process (e.g. the oom-killer).

Find sample configuration files here: sample-config.tcl https://bitbucket.org/naviserver/naviserver/src/ada2e2dd98fde3bd8bfdb4e7ca3531306116705f/sample-config.tcl.in?at=default openacs-config.tcl https://bitbucket.org/naviserver/naviserver/src/ada2e2dd98fde3bd8bfdb4e7ca3531306116705f/openacs-config.tcl Do you experience the "startup problem" with these config files as well? What exactly is this problem? Is it the case that the server starts and runs fine, but you do not see multiple connection threads created? Maybe the server is delivering the files so fast that it does not need multiple connection threads?

When you use NaviServer 4.99.5 or newer, the request queue management is quite different. Under normal operations, incoming requests are not queued but directly dispatched to connection threads. Only when all connection threads are busy are requests added to the queue. For more details, see: https://next-scripting.org/xowiki/docs/misc/naviserver-connthreadqueue/ In order to control the eagerness of connection thread creation, please read the document above or read the comments in the mentioned config files.

What is the reason for altering minthreads from 1 to 100 and not to a more sane value like e.g. 5 or 10? What version of naviserver + tcl are you using? Are you experiencing the problem with real web clients or with a synthetic benchmark? Is it possible to post the used configuration file here?

best regards -gustaf neumann

-- Univ.Prof. Dr. Gustaf Neumann, WU Vienna, Institute of Information Systems and New Media, Welthandelsplatz 1, A-1020 Vienna, Austria |
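[The thread-creation eagerness Gustaf mentions maps to the lowwatermark/highwatermark parameters shown commented out in Wolfgang's config earlier in the thread. A minimal sketch of a server section using them; the values are illustrative, not recommendations:]

```tcl
ns_section "ns/server/${servername}"
ns_param   minthreads     5   ;# connection threads kept alive even when idle
ns_param   maxthreads    40   ;# upper bound on connection threads
ns_param   lowwatermark  10   ;# create an additional thread above this queue-full percentage
ns_param   highwatermark 80   ;# allow concurrent thread creates above this percentage
```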
From: John B. <jo...@ma...> - 2014-01-02 18:03:02
|
Hi Gustaf, I found today your great article on alternative mallocs: https://next-scripting.org/xowiki/docs/misc/thread-mallocs/index1 Could you possibly let me know what the correct way is to build Tcl and Naviserver with tcmalloc? My way to do it was to modify the Tcl Makefile like so:

LDFLAGS = $(LDFLAGS_DEBUG) -Wl,--export-dynamic -ltcmalloc

but that way doesn't carry the -ltcmalloc through to tclConfig.sh, and so loadable extensions aren't built with tcmalloc, so I suspect that my way wasn't the right way to do it. Or should I just hand-modify the Naviserver makefile and tclConfig.sh to have that -ltcmalloc line on it? As to your question about stability of naviserver, I see now that I had previously built Naviserver using the Tcl binaries from ActiveTcl, and I suspect that's the problem. I'm in the process of rebuilding entirely from source. I *have* successfully been using naviserver as a staging server, and for development, for the past 6 months. There's lots to love. My config file is lightweight, just a slight mod of one of the samples I found from naviserver. My hunch of the source of the problem is the binary ActiveTcl distribution. I'm using Tcl 8.5.15. -john |
From: Gustaf N. <ne...@wu...> - 2014-01-03 14:27:44
|
Am 02.01.14 19:02, schrieb John Buckman: > could you possibly let me know what the correct way is to build Tcl > and Naviserver with tcmalloc?

In order to avoid tcl's zippy malloc, one has to patch the tcl sources, since practically every malloc from tcl and naviserver happens finally through Tcl's ckalloc(), which does not call the system malloc() but its own implementation (in a threaded build, this is zippy malloc). Therefore a patch is required to make tcl call malloc(), and then one can use either the "system malloc" shipped with your OS or one of the various malloc() implementations out there "on the market". I'll send you a patch for tcl 8.5 in a separate mail. Apply this to the tcl sources, recompile and install, and use the resulting tcl library for linking against naviserver. Then one can use e.g. export LD_PRELOAD=/usr/lib64/libtcmalloc.so to use TCMalloc for naviserver. all the best -gustaf |
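[Gustaf's LD_PRELOAD suggestion can be sketched as a small launch script. The library and server paths below are assumptions; check where your distribution actually installs libtcmalloc and nsd.]

```shell
# Preload tcmalloc so the patched Tcl's malloc() calls resolve to it
# instead of the system allocator.
export LD_PRELOAD=/usr/lib64/libtcmalloc.so

# Optional sanity check and server start (commented placeholder paths):
# ldd /usr/local/ns/bin/nsd | grep -i tcmalloc
# /usr/local/ns/bin/nsd -f -t /usr/local/ns/conf/nsd.tcl
```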
From: David O. <da...@qc...> - 2013-12-06 09:09:26
|
Amazon Postgresql RDS? That was exactly why we were asking. (slightly off-topic - apologies) We found that Postgresql RDS SSL was optional though.. we could happily connect directly to the RDS instance via psql with "sslmode=disable"... and also via nsdbpg, but we think this connection wouldn't be encrypted. If you want to try stunnel, we had to use the following client config to connect successfully directly to a Postgresql RDS instance (we haven't heavily tested this yet, be warned):

[postgresql]
protocol = pgsql
client = yes
accept = localhost:5432
connect = pg_rds_server:5432
options = NO_TICKET

-- David |
From: Gustaf N. <ne...@wu...> - 2014-01-03 14:46:43
|
Am 02.01.14 20:53, schrieb John Buckman: > After rebuilding tcl and naviserver (current) from source, with > -ltcmalloc, my naviserver is stable.

Great. The tip version of naviserver already has some modifications to help against concurrency bugs in tcl (e.g. reducing the frequency of interp create operations, serializing these etc.). The tcl-alloc bug that Wolfgang Winkler has mentioned is avoided by using a different malloc(). Unfortunately I was not able to reproduce the problem reliably by using e.g. just a C program + tcl, to provide the tcl-core team with a test-bed for fixing the problem.

> One small thing: I can crash naviserver on ctrl-c exit fairly easily: > [02/Jan/2014:11:41:01][15491.7fee7ed75700][-conn:magnatune3:8] Fatal: > nsthreads: pthread_join failed in Ns_ThreadJoin: Invalid argument

I am aware of this. The same problem can appear on server shutdowns. As far as I can tell this is tcl-version dependent; there is as well some work going on in the tcl-core community to make tcl shutdown cleaner, and my hope is that these changes will cause this annoyance to "go away". -gustaf |
From: John B. <jo...@ma...> - 2014-01-04 20:35:15
|
>> After rebuilding tcl and naviserver (current) from source, with -ltcmalloc, my naviserver is stable. > great. the tip version of naviserver already has some modifications > to help against concurrency bugs in tcl (e.g. reducing the frequency of interp > create operations, serializing these etc.).

With Gustaf's patch in place, I was able to test tcmalloc vs jemalloc vs zippy in naviserver. Both tcmalloc and jemalloc gave me a 20% speedup on a simple [clock seconds] adp page. Both gave me about a 40% speedup on a much more complicated, unoptimized Tcl index.adp page. Once I received an OpenSSL assertion failure on startup with jemalloc and "minthreads 100" but was not able to recreate it. tcmalloc under 20 threads worked well, but performance was *terrible* at "minthreads 100" (about a 90% slowdown, even with apachebench -c 10). Neither zippy nor jemalloc had this problem. There's something about having many naviserver threads launched that makes tcmalloc not work well.

Conclusion: For now, I'm using jemalloc, though the OpenSSL assertion failure I had makes me nervous. tcmalloc's terrible performance with 100 threads makes me not want to use it. At the same time, I don't want to stick with zippy, as I've experienced the "bloat" problem that Gustaf talks about, solving it at BookMooch.com with 64GB of RAM (!) but that's not a very good solution, so it's worth my while to try other mallocs. -john |
From: Wolfgang W. <wol...@di...> - 2014-01-05 14:27:33
|
I was wondering why you'd want to have 100 threads, because the number seems a little high to me. So I've just conducted some tests on two of our development systems. A test similar to your simple adp page with maxthreads of 5 yields between 1500 and 2500 requests per second, including session handling in the preauth filter. With 24 maxthreads I get between 6800 and 7500 reqs/sec. For a real-world, uncached CMS page on a 6-core hyperthreaded CPU we still get more than 500 pages/sec with 5 maxthreads, 770-780 with 12 threads, and 820-830 up to 950 with 24 maxthreads. From there it doesn't change much, until performance degrades slightly again at around 100 maxthreads (780 reqs/sec). I used ab with -n 20000 -c 50|200|500|1000

My conclusion: having a higher number of maxthreads than the number of cores helps performance; more than twice the amount doesn't help anymore. It might be different when you've got some long-running threads, where you have to pull information from other servers and the threads are idling. This was all done with tcmalloc. If I find the time, I'll rerun the tests with jemalloc as well. wolfgang |
From: John B. <jo...@ma...> - 2014-01-06 13:37:03
|
On Jan 5, 2014, at 2:27 PM, Wolfgang Winkler <wol...@di...> wrote: > I was wondering, why you'll want to have 100 threads, because the number seems a little high to me. So I've just conducted some test on two of our development system. You're right, 100 is high, but I do use SQL quite a lot, and so long running threads, where the cpu is blocking, waiting for a SQL server response, can generate a lot of pending threads. I wanted enough threads to be around to handle short running page requests too. But besides that, it's very odd that tcmalloc had this huge decrease in performance at 100 threads. It's perhaps not enough of a reason to avoid tcmalloc, but it's a cause to worry, nonetheless. Gustaf's tests didn't show that same slowdown result, and the big slowdown I have at 100 threads only happens with tcmalloc, not with jemalloc or zippy, with a simple [clock seconds] ADP page. -john |
From: Gustaf N. <ne...@wu...> - 2014-01-06 15:05:44
|
The "best" number of threads depends on many things, including the hardware (e.g. number of cores). For most cases, I recommend not more than 4 threads/core. With a high number of threads and a low number of cores, context switching might degrade performance.

Concerning 100 threads vs. tcmalloc: you might try to increase TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES (defaults to 16MB). For more details, see: http://gperftools.googlecode.com/svn/trunk/doc/tcmalloc.html

Concerning waiting time in threads: a good number of threads can be determined by observing the queuing time and the number of queued requests recorded by naviserver (see the "process" page in nsstats, from bitbucket). The best number of threads is the lowest number where the number of queued requests is low (e.g. less than 0.1%) or the queuing time is acceptable (e.g. less than 0.1 ms). The numbers certainly depend on the application, but when one has essentially no queued requests with 30 connection threads, then there is no benefit in increasing the number of connection threads. -gustaf |
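[The queue statistics Gustaf refers to can be inspected from any server-side Tcl page. A minimal sketch; ns_server stats is the command mentioned earlier in the thread, but the exact keys in its output may vary by NaviServer version:]

```tcl
# Log the per-server connection statistics, which include counts of
# queued requests; compare queued against total requests to estimate
# the queued percentage Gustaf mentions.
set stats [ns_server stats]
ns_log notice "connection stats: $stats"
```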
From: John B. <jo...@ma...> - 2014-01-06 17:17:50
|
Hi Gustaf, thanks for the suggestion. I tried adding a zero, and then another zero, to

export TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=167772160

and still had terrible naviserver performance at 100 minthreads. When I dropped a digit, i.e.:

export TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=1677721

naviserver failed to work at all, which serves as good proof that the setting was taking effect. That being said, I went with your 4x-CPU-cores suggestion and set minthreads at 32 to test again. At that setting, performance was the same as at minthreads=10, so I can live with that! -john |