From: Gustaf N. <ne...@wu...> - 2023-11-19 17:18:02
Dear all,

This is a follow-up to my own mail. The problems due to the changed billing rules on Bitbucket (see below) are now somewhat sorted out; I have access to the repository again, but many of the former contributors had to be removed from the "naviserver" group to get it functioning. There was also a change concerning "managed accounts", which complicated the problem for me further. In total, I count 31 emails with Atlassian support to sort these things out.

Anyhow, these changes are a sign that Atlassian does not give high priority to open source projects with free accounts. We discussed this with Zoran and came to the conclusion that it is best to move the NaviServer repositories to GitHub:

https://github.com/orgs/naviserver-project/repositories

We now have a GitHub organization "naviserver-project", which contains the 54 sub-repositories. This is essentially the same structure we had before. We also have some hope that the move to GitHub will improve the visibility of NaviServer.

All the newest commits were done on these new repositories. The plan is to move completely to GitHub and to delete the repositories on Bitbucket to avoid confusion. Please adjust your install/update scripts to point to the new location.

All the best
-g

On 02.11.23 15:30, I wrote:
> So far, these changes are only available on SourceForge, since I have
> lost write access to the repository at Bitbucket. The people at
> Atlassian seem to have changed some account types, and - on top of
> this - they announced via a blog post on September 27, 2023, that the
> billing model changed (where they also refer to
> "unified-user-management"). It took me a while to figure out what
> happened. The blog post states:
>
>   From October 31st, 2023, Bitbucket Cloud will begin counting all
>   workspace members as a billable user. ...
>
>   Free plans: If you're on a free plan and your billable user count
>   is higher than 5 as per the new definition of billable user, all
>   repositories in your workspace will become read-only until you
>   remove some users or upgrade your workspace to a paid plan.
>
> It seems that the users of the "naviserver" group are now counted as
> "billable users", and it contains 19 users. According to support, we
> have to reduce this number to 5, otherwise nobody will be able to
> commit anything.
>
> Due to the ability to use PRs, I think the reduction will be possible
> without too much loss in functionality. If nobody objects, I will go
> back in history and reduce the number of commit members based on the
> most recent direct commits. I hope that none of the "old members" will
> be offended by this. One other option would be to upgrade to a paid
> plan - but I am not sure who is going to pay for this.
From: Brian F. <bri...@ai...> - 2023-11-10 16:46:04
I enabled 2 driver threads each on both nsssl and nssock on our Docker system, and I think it's working, based on these driver stats:

thread nsssl:1 module nsssl received 5K spooled 0 partial 5K errors 0
thread nsssl:0 module nsssl received 5K spooled 0 partial 5K errors 0
thread nssock:1 module nssock received 135 spooled 0 partial 135 errors 0
thread nssock:0 module nssock received 136 spooled 0 partial 136 errors 0

Still seeing a huge amount of these in the log though:

[10/Nov/2023:16:08:08][37.7fb2857fa640][-driver:nsssl:0-] Notice: ... sockAccept accepted 2 connections
[10/Nov/2023:16:08:16][37.7fb286ffd640][-driver:nsssl:1-] Notice: ... sockAccept accepted 2 connections
[10/Nov/2023:16:08:17][37.7fb2857fa640][-driver:nsssl:0-] Notice: ... sockAccept accepted 2 connections
[10/Nov/2023:16:08:21][37.7fb286ffd640][-driver:nsssl:1-] Notice: ... sockAccept accepted 2 connections
[10/Nov/2023:16:08:26][37.7fb2857fa640][-driver:nsssl:0-] Notice: ... sockAccept accepted 2 connections

I'll leave it running anyway and see how it goes.

thanks!
Brian

________________________________
From: Gustaf Neumann <ne...@wu...>
Sent: Thursday 9 November 2023 3:27 pm
To: nav...@li... <nav...@li...>
Subject: Re: [naviserver-devel] A selection of messages appearing in our error logs

On 08.11.23 15:42, Brian Fenton wrote:
> Also regarding configuring a second driver thread - just to be clear,
> are you referring to this mail you sent back in 2016
> https://sourceforge.net/p/naviserver/mailman/message/35502664/ i.e.
> enable reuseport and set driverthreads = 2? It's unclear to me if this
> has benefit when running NaviServer in a Docker container - definitely
> beyond my level of comprehension. See
> https://github.com/moby/moby/issues/7536#issuecomment-1039443599 for
> discussion of SO_REUSEPORT and Docker.

SO_REUSEPORT allows multiple sockets on the same host to bind to the same port at the same time; therefore, multiple driver threads can listen at the same time on the same port. I do not see why this should not be possible in a container ... but I am not a container expert. You can try a configuration with two driver threads; if it does not reduce the number of entries, then undo it.

On a configuration with e.g. two spooler threads, it should look like the following, distributing the requests more or less evenly:

thread nsssl:1 module nsssl received 890K spooled 422 partial 437K errors 0
thread nsssl:0 module nsssl received 935K spooled 27K partial 491K errors 0

If the load distribution does not work on your Docker machine (only one driver thread is doing all the work), then a configuration with two driver threads is indeed useless. Since the driver threads work on different cores, this improves concurrency and scalability. Giving cores something to do is always preferable. But even when one defines two driver threads for the same port, don't expect that the messages will go away completely.

> 3. "exiting: exceeded max connections per thread" - actually we used
> to have "connsperthread" set to 10000, but I reduced it to 1000 when
> we moved to Linux, as I noticed that was the value in the latest
> OpenACS config, even though the default is still 10000
> https://bitbucket.org/naviserver/naviserver/src/main/openacs-config.tcl
> Is 1000 the recommended value for OpenACS?

The value of 1000 is probably a "traditional" value, which is forgiving for sloppy developers not cleaning up their garbage after a request. I have commented out the line in the repository so that users don't draw wrong conclusions.

-g
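The two-driver-thread setup discussed above (enable reuseport, set driverthreads to 2, per the 2016 mail referenced in the thread) could be sketched in the NaviServer Tcl configuration roughly as follows. The server name "default" and the module section path are assumptions about your particular config layout:

```tcl
# Sketch: two driver (accept) threads for the nsssl driver.
# SO_REUSEPORT lets both threads bind the same port (Linux >= 3.9).
ns_section ns/server/default/module/nsssl
ns_param address       0.0.0.0
ns_param port          443
ns_param driverthreads 2     ;# run two accept threads for this driver
ns_param reuseport     true  ;# required so both can listen on one port
```

As Gustaf notes, if only one driver thread ends up doing all the work in your container, the second thread brings no benefit and the change can simply be reverted.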
From: Gustaf N. <ne...@wu...> - 2023-11-09 15:28:03
On 08.11.23 15:42, Brian Fenton wrote:
> Also regarding configuring a second driver thread - just to be clear,
> are you referring to this mail you sent back in 2016
> https://sourceforge.net/p/naviserver/mailman/message/35502664/ i.e.
> enable reuseport and set driverthreads = 2? It's unclear to me if this
> has benefit when running NaviServer in a Docker container - definitely
> beyond my level of comprehension. See
> https://github.com/moby/moby/issues/7536#issuecomment-1039443599 for
> discussion of SO_REUSEPORT and Docker.

SO_REUSEPORT allows multiple sockets on the same host to bind to the same port at the same time; therefore, multiple driver threads can listen at the same time on the same port. I do not see why this should not be possible in a container ... but I am not a container expert. You can try a configuration with two driver threads; if it does not reduce the number of entries, then undo it.

On a configuration with e.g. two spooler threads, it should look like the following, distributing the requests more or less evenly:

thread nsssl:1 module nsssl received 890K spooled 422 partial 437K errors 0
thread nsssl:0 module nsssl received 935K spooled 27K partial 491K errors 0

If the load distribution does not work on your Docker machine (only one driver thread is doing all the work), then a configuration with two driver threads is indeed useless. Since the driver threads work on different cores, this improves concurrency and scalability. Giving cores something to do is always preferable. But even when one defines two driver threads for the same port, don't expect that the messages will go away completely.

> 3. "exiting: exceeded max connections per thread" - actually we used
> to have "connsperthread" set to 10000, but I reduced it to 1000 when
> we moved to Linux, as I noticed that was the value in the latest
> OpenACS config, even though the default is still 10000
> https://bitbucket.org/naviserver/naviserver/src/main/openacs-config.tcl
> Is 1000 the recommended value for OpenACS?

The value of 1000 is probably a "traditional" value, which is forgiving for sloppy developers not cleaning up their garbage after a request. I have commented out the line in the repository so that users don't draw wrong conclusions.

-g
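The SO_REUSEPORT behaviour described in the message above can be demonstrated outside NaviServer. A minimal Python sketch (Linux-only; the port number is arbitrary):

```python
import socket

def make_listener(port: int) -> socket.socket:
    """Create a TCP listener with SO_REUSEPORT set, so that several
    sockets (analogous to one per driver thread) can bind the same port."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT, 1)
    s.bind(("127.0.0.1", port))
    s.listen(16)
    return s

if __name__ == "__main__":
    a = make_listener(18432)
    b = make_listener(18432)  # second bind succeeds only with SO_REUSEPORT
    print("same address on both sockets:", a.getsockname() == b.getsockname())
    a.close()
    b.close()
```

Without the `setsockopt` call, the second `bind` would raise "Address already in use"; with it, the kernel distributes incoming connections across the listening sockets, which is what makes multiple driver threads on one port possible.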
From: Brian F. <bri...@ai...> - 2023-11-08 14:42:26
Hi Gustaf

thank you so much for such a comprehensive reply. I feel like I just increased my IQ by a few percentage points! A couple of follow-up questions, if I may:

1. "sockAccept accepted 2 connections" - this was very useful to know, I will start keeping an eye out for values > 10. It would be very useful to be able to configure the threshold at which it appears in the log; as I mentioned, we see a huge amount of these (over 2000 in the last few hours on our test server, almost all of them for 2 connections). Also regarding configuring a second driver thread - just to be clear, are you referring to this mail you sent back in 2016 https://sourceforge.net/p/naviserver/mailman/message/35502664/ i.e. enable reuseport and set driverthreads = 2? It's unclear to me if this has benefit when running NaviServer in a Docker container - definitely beyond my level of comprehension. See https://github.com/moby/moby/issues/7536#issuecomment-1039443599 for discussion of SO_REUSEPORT and Docker.

2. "ns_cache create entry collision cache util_memoize key '__util_memoize_installed_p'" - again very helpful. We actually have been partitioning our caches and try not to add anything new to the util_memoize cache. In fact, this particular message about '__util_memoize_installed_p' is the only cache collision notice we are seeing. Your comment helped me track the issue down to the "util_memoize_initialized_p" and "apm_package_installed_p" procs, and I can see the new improved versions in later OpenACS versions, so I will try those out.

3. "exiting: exceeded max connections per thread" - actually we used to have "connsperthread" set to 10000, but I reduced it to 1000 when we moved to Linux, as I noticed that was the value in the latest OpenACS config, even though the default is still 10000 https://bitbucket.org/naviserver/naviserver/src/main/openacs-config.tcl Is 1000 the recommended value for OpenACS?

thanks again
Brian

________________________________
From: Gustaf Neumann <ne...@wu...>
Sent: Wednesday 8 November 2023 9:00 am
To: nav...@li... <nav...@li...>
Subject: Re: [naviserver-devel] A selection of messages appearing in our error logs

Hi Brian,

In general, we try to follow the logging-severity guideline [1]: "Notice" is informational, "warning" is something unexpected/unwanted (hitting some limits, got some data that looks like an attack, ...), "error" is something that should be looked at by an engineer.

[1] https://openacs.org/xowiki/logging-conventions

Below, I comment inline.

On 07.11.23 18:28, Brian Fenton wrote:
> Hi
>
> with the goal of improving signal-to-noise in our error logs, and also
> to increase my understanding of NaviServer internals, I have a few
> questions about various messages we are seeing regularly in our logs.
>
> 1. [06/Nov/2023:14:58:08][77320.7f7b9a7fd640][-driver:nsssl:0-]
> Notice: ... sockAccept accepted 2 connections
> this is by far the most common entry in our logs - we see a LOT of
> these. Are they merely informative or indicative of some tuning action
> we could take? I don't see any way to disable them. I would love to
> remove these.

This indicates some stress situation of the server, where multiple requests come in at the same time. The value of 2 is not a concern; if you see values like 10 or 20, you should consider a change in your configuration (e.g., configuring a second driver thread). It would be possible to define a configuration value for the network driver to set the threshold at which the notice is written to the log file.

> 2. [06/Nov/2023:14:57:28][77320.7f7b63fff640][-conn:openacs:default:6:6292-]
> Notice: ns_cache create entry collision cache util_memoize key
> '__util_memoize_installed_p', no timeout
> We also see quite a few of these, always for
> '__util_memoize_installed_p', and always with 'no timeout'. From
> looking at cache.c I see that this means that NaviServer failed to
> create the cache entry. I checked the 'util_memoize' cache utilization
> in case the cache was full, but it's quite low. What other cause could
> there be for this?

This means that concurrent operations are executed to obtain the same cache value, and that the second request (here conn:openacs:default:6:6292) has to wait until the locking process has finished. The problem can become serious when the locking process "hangs", i.e. takes a very long time to finish. This means that more and more caching requests for the same entry will pile up, potentially until a restart, since no timeout was specified. The warning was introduced to give the developer a hint why the server is suddenly hanging (before the message was introduced, the cause was very hard to determine).

This message tells us something else as well:

* It looks as if the installation is based on a very old version of openacs-core (including in particular acs-tcl). For at least 9 years, the code has performed this test just at startup. Upgrading to a recent version will improve performance and increase security.

* The message also tells me that the application puts stress on the util_memoize cache. Old versions of OpenACS used this "util_memoize" cache as a kitchen-sink cache for everything. I've seen instances having 300K+ entries in this cache. The size itself is not the problem, but there were/are many operations in OpenACS that obtain all keys from the cache. That means a lock is acquired, and a list with 300K+ entries is created and handed to the interpreter before the cache is released. In the meantime, *all* caching requests for this cache have to be halted ... causing degraded performance and limited concurrency in high-load situations. I would not be surprised if you see high lock times on this cache. There are caches besides the util_memoize cache where cache collisions might happen. See e.g. [2] for more background. Cache partitioning is in this situation the instrument for scaling.

[2] https://openacs.org/xowiki/cache-size-tuning

> 3. These 5 appear usually as a group:
> [07/Nov/2023:13:46:07][38.7f352cff9640][-conn:openacs:default:3:4088-] Notice: exiting: exceeded max connections per thread
> [07/Nov/2023:13:46:07][38.7f3525b84640][-driver:nsssl:0-] Notice: NsEnsureRunningConnectionThreads wantCreate 1 waiting 1 idle 0 current 2
> [07/Nov/2023:13:46:07][38.7f352d7fa640][-conn:openacs:default:5:0-] Notice: start update interpreter openacs to epoch 1, concurrent 1
> [07/Nov/2023:13:46:08][38.7f352d7fa640][-conn:openacs:default:5:0-] Notice: update interpreter openacs to epoch 1 done, trace none, time 0.373006 secs concurrent 1
> [07/Nov/2023:13:46:08][38.7f352d7fa640][-conn:openacs:default:5:0-] Notice: thread initialized (0.388878 secs)
> We see these pretty regularly, over 50 per day on a local development
> server. "maxconnections" is set to the default value, which is 100, I
> believe. "maxthreads" and "minthreads" are also set to default values.
> What could be causing these so regularly on a system with 1 logged-in
> user?

The message tells you that a connection thread has reached its end of life, as defined by the "connsperthread" parameter. When this happens, the thread is stopped and, depending on the configuration parameters, a new thread is created either immediately or on demand. The reason for stopping connection threads from time to time is to clean up interpreter-specific data accumulated by some applications. With modern versions of OpenACS and with well-behaved packages, this is not an issue, and the value can be increased substantially. The relevant counter does not depend on the number of different users, but on the number of requests handled by a particular connection thread.

> 4. These 3 are related to each other, I presume:
> [07/Nov/2023:15:36:24][38.7f3536b86640][-conn:openacs:default:28:28082-] Notice: nsdb: closing old handle in pool 'pool1'
> [07/Nov/2023:15:45:39][38.7f3536b86640][-conn:openacs:default:31:31287-] Notice: nsdb: closing idle handle in pool 'pool1'
> [07/Nov/2023:15:45:39][38.7f3536b86640][-conn:openacs:default:31:31287-] Notice: dbdrv: opening database 'nsoracle:192.168.1.1/ORACLE'
> These are quite frequent too. Am I correct in thinking that we can
> tune these with the pool "maxidle" and "maxopen" params mentioned here
> https://naviserver.sourceforge.io/n/nsdb/files/ns_db.html ? Is there
> any reason not to turn off the automatic closing feature?

There might be some reasons for keeping the number of db connections low (e.g. many servers using the same database with a limited number of client connections, etc.), but for at least 20 years there has been no technical reason for the common db drivers not to turn off the automatic closing feature. Reconnecting to the database can be costly when the db is running on a different host and/or the connection is via HTTPS.

hope this helps a little
all the best
-g
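The tuning knobs mentioned in the exchange above ("maxidle"/"maxopen" for the db pool, "connsperthread" for thread recycling) could be set in the NaviServer Tcl configuration roughly as sketched below. The pool name "pool1" comes from the log messages; the section paths follow standard NaviServer conventions, and the values are illustrative, not recommendations:

```tcl
# Sketch: keep db handles open instead of closing idle/old ones.
ns_section ns/db/pool/pool1
ns_param maxidle 0      ;# assumed: 0 disables closing of idle handles
ns_param maxopen 0      ;# assumed: 0 disables closing of aged handles

# Sketch: raise the connection-thread recycle limit back toward the
# default of 10000 mentioned in the thread.
ns_section ns/server/default
ns_param connsperthread 10000
```

Whether 0 fully disables the automatic closing should be verified against the ns_db documentation linked in the mail before relying on it in production.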
From: Gustaf N. <ne...@wu...> - 2023-11-08 09:30:40
On 07.11.23 22:56, Andrew Piskorski wrote:
> On the current NaviServer head, that problem has gone away! The
> ns_config-7.4.1 test now runs fine for me on Windows.

You see: time heals all wounds!

-g
From: Gustaf N. <ne...@wu...> - 2023-11-08 09:00:41
Hi Brian,

In general, we try to follow the logging-severity guideline [1]: "Notice" is informational, "warning" is something unexpected/unwanted (hitting some limits, got some data that looks like an attack, ...), "error" is something that should be looked at by an engineer.

[1] https://openacs.org/xowiki/logging-conventions

Below, I comment inline.

On 07.11.23 18:28, Brian Fenton wrote:
> Hi
>
> with the goal of improving signal-to-noise in our error logs, and also
> to increase my understanding of NaviServer internals, I have a few
> questions about various messages we are seeing regularly in our logs.
>
> 1. [06/Nov/2023:14:58:08][77320.7f7b9a7fd640][-driver:nsssl:0-]
> Notice: ... sockAccept accepted 2 connections
> this is by far the most common entry in our logs - we see a LOT of
> these. Are they merely informative or indicative of some tuning action
> we could take? I don't see any way to disable them. I would love to
> remove these.

This indicates some stress situation of the server, where multiple requests come in at the same time. The value of 2 is not a concern; if you see values like 10 or 20, you should consider a change in your configuration (e.g., configuring a second driver thread). It would be possible to define a configuration value for the network driver to set the threshold at which the notice is written to the log file.

> 2. [06/Nov/2023:14:57:28][77320.7f7b63fff640][-conn:openacs:default:6:6292-]
> Notice: ns_cache create entry collision cache util_memoize key
> '__util_memoize_installed_p', no timeout
> We also see quite a few of these, always for
> '__util_memoize_installed_p', and always with 'no timeout'. From
> looking at cache.c I see that this means that NaviServer failed to
> create the cache entry. I checked the 'util_memoize' cache utilization
> in case the cache was full, but it's quite low. What other cause could
> there be for this?

This means that concurrent operations are executed to obtain the same cache value, and that the second request (here conn:openacs:default:6:6292) has to wait until the locking process has finished. The problem can become serious when the locking process "hangs", i.e. takes a very long time to finish. This means that more and more caching requests for the same entry will pile up, potentially until a restart, since no timeout was specified. The warning was introduced to give the developer a hint why the server is suddenly hanging (before the message was introduced, the cause was very hard to determine).

This message tells us something else as well:

* It looks as if the installation is based on a very old version of openacs-core (including in particular acs-tcl). For at least 9 years, the code has performed this test just at startup. Upgrading to a recent version will improve performance and increase security.

* The message also tells me that the application puts stress on the util_memoize cache. Old versions of OpenACS used this "util_memoize" cache as a kitchen-sink cache for everything. I've seen instances having 300K+ entries in this cache. The size itself is not the problem, but there were/are many operations in OpenACS that obtain all keys from the cache. That means a lock is acquired, and a list with 300K+ entries is created and handed to the interpreter before the cache is released. In the meantime, *all* caching requests for this cache have to be halted ... causing degraded performance and limited concurrency in high-load situations. I would not be surprised if you see high lock times on this cache. There are caches besides the util_memoize cache where cache collisions might happen. See e.g. [2] for more background. Cache partitioning is in this situation the instrument for scaling.

[2] https://openacs.org/xowiki/cache-size-tuning

> 3. These 5 appear usually as a group:
> [07/Nov/2023:13:46:07][38.7f352cff9640][-conn:openacs:default:3:4088-] Notice: exiting: exceeded max connections per thread
> [07/Nov/2023:13:46:07][38.7f3525b84640][-driver:nsssl:0-] Notice: NsEnsureRunningConnectionThreads wantCreate 1 waiting 1 idle 0 current 2
> [07/Nov/2023:13:46:07][38.7f352d7fa640][-conn:openacs:default:5:0-] Notice: start update interpreter openacs to epoch 1, concurrent 1
> [07/Nov/2023:13:46:08][38.7f352d7fa640][-conn:openacs:default:5:0-] Notice: update interpreter openacs to epoch 1 done, trace none, time 0.373006 secs concurrent 1
> [07/Nov/2023:13:46:08][38.7f352d7fa640][-conn:openacs:default:5:0-] Notice: thread initialized (0.388878 secs)
> We see these pretty regularly, over 50 per day on a local development
> server. "maxconnections" is set to the default value, which is 100, I
> believe. "maxthreads" and "minthreads" are also set to default values.
> What could be causing these so regularly on a system with 1 logged-in
> user?

The message tells you that a connection thread has reached its end of life, as defined by the "connsperthread" parameter. When this happens, the thread is stopped and, depending on the configuration parameters, a new thread is created either immediately or on demand. The reason for stopping connection threads from time to time is to clean up interpreter-specific data accumulated by some applications. With modern versions of OpenACS and with well-behaved packages, this is not an issue, and the value can be increased substantially. The relevant counter does not depend on the number of different users, but on the number of requests handled by a particular connection thread.

> 4. These 3 are related to each other, I presume:
> [07/Nov/2023:15:36:24][38.7f3536b86640][-conn:openacs:default:28:28082-] Notice: nsdb: closing old handle in pool 'pool1'
> [07/Nov/2023:15:45:39][38.7f3536b86640][-conn:openacs:default:31:31287-] Notice: nsdb: closing idle handle in pool 'pool1'
> [07/Nov/2023:15:45:39][38.7f3536b86640][-conn:openacs:default:31:31287-] Notice: dbdrv: opening database 'nsoracle:192.168.1.1/ORACLE'
> These are quite frequent too. Am I correct in thinking that we can
> tune these with the pool "maxidle" and "maxopen" params mentioned here
> https://naviserver.sourceforge.io/n/nsdb/files/ns_db.html ? Is there
> any reason not to turn off the automatic closing feature?

There might be some reasons for keeping the number of db connections low (e.g. many servers using the same database with a limited number of client connections, etc.), but for at least 20 years there has been no technical reason for the common db drivers not to turn off the automatic closing feature. Reconnecting to the database can be costly when the db is running on a different host and/or the connection is via HTTPS.

hope this helps a little
all the best
-g
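The collision scenario described above — a second request blocking on a cache key while the first is still computing it — can be illustrated with a generic per-key locking sketch. This is plain Python for illustration only, not NaviServer's actual ns_cache implementation:

```python
import threading

class PerKeyCache:
    """Illustration of a cache where concurrent misses on the same key
    serialize: the first caller computes the value, later callers block
    on the per-key lock (the 'create entry collision'). Without a
    timeout, a hung compute would block all of them indefinitely."""

    def __init__(self):
        self._data = {}
        self._locks = {}
        self._mutex = threading.Lock()  # guards _data and _locks

    def get_or_create(self, key, compute):
        with self._mutex:
            if key in self._data:
                return self._data[key]          # fast path: cache hit
            lock = self._locks.setdefault(key, threading.Lock())
        with lock:                               # collision: waiters block here
            with self._mutex:
                if key in self._data:            # filled while we waited
                    return self._data[key]
            value = compute()                    # only one thread computes
            with self._mutex:
                self._data[key] = value
            return value
```

This also shows why the log notice matters: every waiter is parked inside `with lock:`, so one slow or hanging `compute` makes requests for that key pile up exactly as the mail describes.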
From: Brian F. <bri...@ai...> - 2023-11-07 23:02:54
Hi

with the goal of improving signal-to-noise in our error logs, and also to increase my understanding of NaviServer internals, I have a few questions about various messages we are seeing regularly in our logs.

1. [06/Nov/2023:14:58:08][77320.7f7b9a7fd640][-driver:nsssl:0-] Notice: ... sockAccept accepted 2 connections
This is by far the most common entry in our logs - we see a LOT of these. Are they merely informative or indicative of some tuning action we could take? I don't see any way to disable them. I would love to remove these.

2. [06/Nov/2023:14:57:28][77320.7f7b63fff640][-conn:openacs:default:6:6292-] Notice: ns_cache create entry collision cache util_memoize key '__util_memoize_installed_p', no timeout
We also see quite a few of these, always for '__util_memoize_installed_p', and always with 'no timeout'. From looking at cache.c I see that this means that NaviServer failed to create the cache entry. I checked the 'util_memoize' cache utilization in case the cache was full, but it's quite low. What other cause could there be for this?

3. These 5 appear usually as a group:
[07/Nov/2023:13:46:07][38.7f352cff9640][-conn:openacs:default:3:4088-] Notice: exiting: exceeded max connections per thread
[07/Nov/2023:13:46:07][38.7f3525b84640][-driver:nsssl:0-] Notice: NsEnsureRunningConnectionThreads wantCreate 1 waiting 1 idle 0 current 2
[07/Nov/2023:13:46:07][38.7f352d7fa640][-conn:openacs:default:5:0-] Notice: start update interpreter openacs to epoch 1, concurrent 1
[07/Nov/2023:13:46:08][38.7f352d7fa640][-conn:openacs:default:5:0-] Notice: update interpreter openacs to epoch 1 done, trace none, time 0.373006 secs concurrent 1
[07/Nov/2023:13:46:08][38.7f352d7fa640][-conn:openacs:default:5:0-] Notice: thread initialized (0.388878 secs)
We see these pretty regularly, over 50 per day on a local development server. "maxconnections" is set to the default value, which is 100, I believe. "maxthreads" and "minthreads" are also set to default values. What could be causing these so regularly on a system with 1 logged-in user?

4. These 3 are related to each other, I presume:
[07/Nov/2023:15:36:24][38.7f3536b86640][-conn:openacs:default:28:28082-] Notice: nsdb: closing old handle in pool 'pool1'
[07/Nov/2023:15:45:39][38.7f3536b86640][-conn:openacs:default:31:31287-] Notice: nsdb: closing idle handle in pool 'pool1'
[07/Nov/2023:15:45:39][38.7f3536b86640][-conn:openacs:default:31:31287-] Notice: dbdrv: opening database 'nsoracle:192.168.1.1/ORACLE'
These are quite frequent too. Am I correct in thinking that we can tune these with the pool "maxidle" and "maxopen" params mentioned here https://naviserver.sourceforge.io/n/nsdb/files/ns_db.html ? Is there any reason not to turn off the automatic closing feature?

many thanks in advance for any tips
Brian
From: Andrew P. <at...@pi...> - 2023-11-07 21:56:30
On Wed, Mar 15, 2023 at 03:57:40PM -0400, Andrew Piskorski wrote:
> On Tue, Mar 14, 2023 at 04:44:21PM -0400, Andrew Piskorski wrote:
>
> > Debug Error!
> > Program: C:\web\ns-fork-pub\naviserver\nsd\libnsd.dll
> > Run-Time Check Failure #2 - Stack around the variable 'filter' was corrupted.
> >
> > I never see the usual "all.tcl" test summary output. Maybe nsd is
> > hitting the above "Debug Error!" before getting there?
>
> On Windows, running the "ns_config.test" tests triggers that one. All
> the tests through ns_config-7.4.0 pass, then it stops with no further
> output. Yep, the next test, ns_config-7.4.1, is sufficient to trigger
> the problem all by itself.

On the current NaviServer head, that problem has gone away! The ns_config-7.4.1 test now runs fine for me on Windows.

--
Andrew Piskorski <at...@pi...>
From: Gustaf N. <ne...@wu...> - 2023-11-06 18:36:18
|
Hi Brian,

The proper solution is not to skip the error message, but to skip the full connection output operations in error situations. Since this involves many commands (ns_return*, ns_write, ns_cookie*, ...), it is a larger change. I will look into it in the coming days; it should be doable with moderate effort.

all the best
-g

PS: The Bitbucket recovery still takes time; I am now at more than 15 email messages back and forth with Atlassian/Bitbucket support.

On 06.11.23 15:11, Brian Fenton wrote:
> Hi Gustaf
>
> my apologies, I hadn't realised that silencing the log would lead to
> different behaviour. If this is a bigger job than expected, please
> feel free to revert to the previous version. It's a nice-to-have
> feature for us.
>
> The attached script reproduces the issue when the parameter is set to
> false.
>
> thanks
> Brian
|
From: Brian F. <bri...@ai...> - 2023-11-06 14:11:35
|
Hi Gustaf

my apologies, I hadn't realised that silencing the log would lead to different behaviour. If this is a bigger job than expected, please feel free to revert to the previous version. It's a nice-to-have feature for us.

The attached script reproduces the issue when the parameter is set to false.

thanks
Brian

________________________________
From: Gustaf Neumann <ne...@wu...>
Sent: Monday 6 November 2023 1:59 pm
To: nav...@li... <nav...@li...>
Subject: Re: [naviserver-devel] NaviServer 4.99.29 available

Hi Brian,

as stated several times, the right action is to fix your script (as you did) rather than "silencing" NaviServer. I am not surprised that attempts to write on detached connections can lead to error conditions on several occasions (generating errors avoids this).

But since we offer this silencing parameter, I do agree that the crashing is harsh. If you could send a short script triggering the problem, it would help to work on such cases.

all the best
-g
|
From: Gustaf N. <ne...@wu...> - 2023-11-06 13:59:48
|
Hi Brian,

as stated several times, the right action is to fix your script (as you did) rather than "silencing" NaviServer. I am not surprised that attempts to write on detached connections can lead to error conditions on several occasions (generating errors avoids this).

But since we offer this silencing parameter, I do agree that the crashing is harsh. If you could send a short script triggering the problem, it would help to work on such cases.

all the best

-g

On 06.11.23 14:25, Brian Fenton wrote:
> Hi Gustaf
>
> I just built and ran some tests on the "rejectalreadyclosedconn"
> parameter to see how it handles code that triggers the "connection
> socket is detached" error.
>
> If I set "rejectalreadyclosedconn" to false, and browse to a page that
> triggers the "connection socket is detached" error, NaviServer crashes
> with the following error message:
>
> [06/Nov/2023:13:13:21][39.7f3489fb9640][-conn:openacs:default:1:30-]
> Warning: NsWriterQueue: called without sockPtr size 414 bufs 1 flags
> 1030431 stream 000000 chan (nil) fd -1
> [06/Nov/2023:13:13:21][39.7f3489fb9640][-conn:openacs:default:1:30-]
> Fatal: received fatal signal 11
>
> If I then fix the code that was triggering the "connection socket is
> detached" error, by adding the missing "return" after the offending
> "ad_returnredirect", everything works fine.
>
> Let me know if you need more info to help reproduce this.
> thanks,
> Brian
|
From: Brian F. <bri...@ai...> - 2023-11-06 13:40:34
|
Hi Gustaf

I just built and ran some tests on the "rejectalreadyclosedconn" parameter to see how it handles code that triggers the "connection socket is detached" error.

If I set "rejectalreadyclosedconn" to false, and browse to a page that triggers the "connection socket is detached" error, NaviServer crashes with the following error message:

[06/Nov/2023:13:13:21][39.7f3489fb9640][-conn:openacs:default:1:30-] Warning: NsWriterQueue: called without sockPtr size 414 bufs 1 flags 1030431 stream 000000 chan (nil) fd -1
[06/Nov/2023:13:13:21][39.7f3489fb9640][-conn:openacs:default:1:30-] Fatal: received fatal signal 11

If I then fix the code that was triggering the "connection socket is detached" error, by adding the missing "return" after the offending "ad_returnredirect", everything works fine.

Let me know if you need more info to help reproduce this.
thanks,
Brian
|
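The missing-"return" pattern described above can be sketched roughly as follows. This is a hypothetical OpenACS page fragment: the predicate `page_authorized_p` and the page logic are illustrative inventions; only `ad_returnredirect` and the need for a following `return` come from the thread.

```tcl
# Buggy version: ad_returnredirect arranges the redirect and the
# connection ends up detached, but execution falls through to the
# ns_return below, which then tries to write to a connection that
# is no longer attached.
if {![page_authorized_p]} {
    ad_returnredirect "/register"
    ;# BUG: missing "return" here
}
ns_return 200 text/html $page_content

# Fixed version: abort the script right after issuing the redirect,
# so nothing else writes to the connection.
if {![page_authorized_p]} {
    ad_returnredirect "/register"
    return
}
ns_return 200 text/html $page_content
```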
From: Andrew P. <at...@pi...> - 2023-11-02 17:25:33
|
On Thu, Nov 02, 2023 at 03:30:43PM +0100, Gustaf Neumann wrote: > So far, these changes are only available on sourceforge, since i have > lost write access to the repository at bitbucket. Wow. Yes, please remove whatever group members are necessary to restore write access, and let us know when the public NaviServer Git repository is again up to date. (As you say, we can always send you patches via forks in our personal repositories on Bitbucket, if necessary.) -- Andrew Piskorski <at...@pi...> |
From: Brian F. <bri...@ai...> - 2023-11-02 17:22:03
|
Dear Gustaf

thank you for including my requested enhancement - a pleasant surprise! I have a few more thoughts and questions about reducing the volume of log entries - I will put that together and send it on in the near future.

thanks again
Brian
|
From: Gustaf N. <ne...@wu...> - 2023-11-02 14:30:58
|
Dear all,

I am glad to announce that the release of NaviServer 4.99.29 is available at SourceForge [1]. This is a pure bug-fix and maintenance release, which fixes a potentially serious memory leak when working with PostgreSQL and large text contents. Furthermore, the release contains a small enhancement requested by Brian not very long ago on this list.

See below for a summary of the changes.

So far, these changes are only available on SourceForge, since I have lost write access to the repository at Bitbucket. The people at Atlassian seem to have changed some account types, and - on top of this - they announced via a blog post on September 27, 2023 [2] that the billing model changed (where they also refer to the "unified-user-management"). It took me a while to figure out what happened. The blog post states:

    From October 31st, 2023, Bitbucket Cloud will begin counting all
    workspace members as a billable user. ....

    Free plans: If you're on a free plan and your billable user count
    is higher than 5 as per the new definition of billable user, all
    repositories in your workspace will become read-only until you
    remove some users or upgrade your workspace to a paid plan.

It seems that the users of the "naviserver" group are now counted as "billable users", and the group contains 19 users. According to support, we have to reduce this number to 5, otherwise nobody will be able to commit anything.

Given the ability to work via PRs, I think the reduction will be possible without too much loss of functionality. If nobody objects, I will go back in history and reduce the number of commit members based on the most recent direct commits. I hope that none of the "old members" will be offended by this. One other option would be to upgrade to a paid plan - but I am not sure who is going to pay for this.

All the best!
-gustaf neumann

[1] https://sourceforge.net/projects/naviserver/files/naviserver/4.99.29/
[2] https://bitbucket.org/blog/billing-model-change

=======================================
NaviServer 4.99.29, released 2023-11-01
=======================================

37 files changed, 261 insertions(+), 132 deletions(-)

New Features:
-------------
- Eased configuration of simple setups:
  * No longer requires specifying a "defaultserver" when a single
    server is in use.
  * Reduced warnings for per-server network drivers. This configuration
    is possible but not recommended; global network drivers should be
    used.

- The configuration option "rejectalreadyclosedconn", which warns about
  attempts to send data to the web client when the connection is no
  longer available, is now applied to both closed and detached
  connections. Before, it was only applied to closed connections,
  potentially causing many warnings for legacy applications.

Bug Fixes:
----------
- Fixed a potential memory leak introduced two releases ago (in 4.99.27).
- Fixed a potential compilation problem with glibc 2.38 or newer
  (released 31 Jul 2023).
- Fixed reloading of certificates for mass virtual hosting.

Code Maintenance:
-----------------
- Fixed typos.
- Fixed enum/int conversions flagged by gcc 13.

Modules:
--------
The following list contains the most important changes:

- module nsdbpg: fixed the memory leak mentioned above.
|
From: Brian F. <bri...@ai...> - 2023-10-27 18:29:53
|
Hi all

Recently, during a client security audit, the "Server: NaviServer/4.99.28" response header was flagged as an issue. The client has asked us to remove the header, if possible. The RFC indicates that the "Server:" header is optional, so I believe this should be OK to remove:
https://www.rfc-editor.org/rfc/rfc7231#section-7.4.2

We would like to propose a new boolean config file parameter "showserverheader" with default true. Ns_ConnConstructHeaders in return.c could then check this parameter before outputting the "Server:" header, e.g. something like this:

    if (Ns_ConfigBool(path, "showserverheader", NS_TRUE) == NS_TRUE) {
        Ns_DStringVarAppend(dsPtr, "Server: ", Ns_InfoServerName(), "/",
                            Ns_InfoServerVersion(), "\r\n");
    }

Thoughts? Alternatives?

thanks
Brian
|
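From the configuration side, Brian's proposal would presumably be used along these lines. This is a sketch only: "showserverheader" is the proposed parameter name and does not (yet) exist in NaviServer, and the per-server section path is an assumption.

```tcl
# Hypothetical usage of the proposed parameter: suppress the "Server:"
# response header for this server. The section path "ns/server/$server"
# is an assumption; "showserverheader" is not a real NaviServer
# parameter at the time of this thread.
ns_section "ns/server/$server" {
    ns_param showserverheader false
}
```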
From: Brian F. <bri...@ai...> - 2023-10-02 10:21:08
|
Hi Gustaf

thanks for the explanation and the commit. I got confused because the discussion at https://openacs.org/forums/message-view?message_id=5659253 was about "connection socket is detached", when, as you say, that distinction came later.

This kind of parameter is handy with clients running older versions of our application, where it's not feasible to upgrade them immediately. With the parameter, we could theoretically reduce the noise in the error logs while the client awaits an upgrade.

thanks as always
Brian

_______________________________________________
naviserver-devel mailing list
nav...@li...
https://lists.sourceforge.net/lists/listinfo/naviserver-devel
|
From: Gustaf N. <ne...@wu...> - 2023-09-30 10:09:13
|
On 29.09.23 12:56, Brian Fenton wrote:
> Hi
>
> this discussion here
> https://openacs.org/forums/message-view?message_id=5659253 suggests
> that the "rejectalreadyclosedconn" parameter will prevent the
> "connection socket is detached" message from showing in the logs.
> However, this doesn't appear to work (tried in 4.99.28). Looking at
> "NsConnRequire" in conn.c, it looks like
> "reject_already_closed_connection" is only used in the "connection
> already closed" case.
> https://bitbucket.org/naviserver/naviserver/src/b21b6d75a8f2ff41cd00dd40cb4c143d9a8a55ce/nsd/conn.c#lines-2748
>
> It would be good to be able to turn off the "connection socket is
> detached" (understanding of course that we should in the meantime
> resolve the source of those messages). Have I misunderstood something
> or is this not possible?
>
> thanks
> Brian

--
Univ.Prof. Dr. Gustaf Neumann
Head of the Institute of Information Systems and New Media
of Vienna University of Economics and Business
Program Director of MSc "Information Systems"
|
From: Gustaf N. <ne...@wu...> - 2023-09-30 10:08:40
|
Hi Brian,

The parameter "rejectalreadyclosedconn" does what it is supposed to do (it controls the error messages produced when someone tries to write to a connection that was already closed). The parameter was introduced at a time before the distinction between closed and detached connections existed in the code. So, I read your request as asking to extend the meaning of the parameter to cover detached connections as well.

> Have I misunderstood something or is this not possible?

There is no hard reason rendering this impossible. One could either (a) introduce another parameter "rejectalreadydetachedconn", or (b) make "rejectalreadyclosedconn" slightly misleading in its name by covering the detached cases as well. Since the parameter is just for legacy applications that did not care about the semantics of writing to a connection which is not available anymore (leading to potentially complex debugging questions), the updated code [1] follows approach (b).

I see the potential for the parameter to be misused to encourage lazy coding practices, but I also understand code maintainers who see large amounts of errors in the log files of busy sites and do not have the resources right now to address these...

-gn

[1] https://bitbucket.org/naviserver/naviserver/commits/725cf9475fe78f495918ed4549bf434cf2daaf96
|
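For reference, a configuration sketch of the parameter being discussed. The parameter name "rejectalreadyclosedconn" comes from the thread; the section path "ns/server/$server" is an assumption and should be checked against the NaviServer documentation.

```tcl
# Sketch, assuming the parameter lives in the per-server section:
# setting it to false suppresses the errors raised on writes to
# closed (and, after the change discussed here, detached)
# connections -- intended only as a stopgap for legacy code.
ns_section "ns/server/$server" {
    ns_param rejectalreadyclosedconn false
}
```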
From: Brian F. <bri...@ai...> - 2023-09-29 14:30:21
Hi

this discussion here
https://openacs.org/forums/message-view?message_id=5659253 suggests that
the "rejectalreadyclosedconn" parameter will prevent the "connection
socket is detached" message from showing in the logs. However, this
doesn't appear to work (tried in 4.99.28). Looking at "NsConnRequire" in
conn.c, it looks like "reject_already_closed_connection" is only used in
the "connection already closed" case.
https://bitbucket.org/naviserver/naviserver/src/b21b6d75a8f2ff41cd00dd40cb4c143d9a8a55ce/nsd/conn.c#lines-2748

It would be good to be able to turn off the "connection socket is
detached" message (understanding of course that we should in the
meantime resolve the source of those messages). Have I misunderstood
something or is this not possible?

thanks
Brian
From: Gustaf N. <ne...@wu...> - 2023-09-26 18:27:42
Dear NaviServer users,

the updated version of nsstats now supports viewing SMTP log files in a
similar way to the HTTP client log files (see the presentation from the
last EuroTcl/OpenACS conference [1] for an example). The SMTP sent log
visualizes the performance, frequency, and result codes of the SMTP send
commands when the nssmtpd module is used. See below for a sample
configuration of the nssmtpd module with logging enabled.

all the best
-gn

ns_section "ns/server/$server/module/nssmtpd" {
    ns_param port             $smtpdport
    ns_param address          $nssmtpd_address ;# 127.0.0.1
    ns_param relay            localhost:25
    ns_param spamd            localhost
    ns_param initproc         smtpd::init
    ns_param rcptproc         smtpd::rcpt
    ns_param dataproc         smtpd::data
    ns_param errorproc        smtpd::error
    ns_param relaydomains     "localhost"
    ns_param localdomains     "localhost"
    ns_param logging          on ;# default: off
    ns_param logfile          ${logroot}/smtpsend.log
    ns_param logrollfmt       %Y-%m-%d ;# format appended to log filename
    #ns_param logmaxbackup    100  ;# 10, max number of backup log files
    #ns_param logroll         true ;# true, should server roll log files automatically
    #ns_param logrollonsignal true ;# false, perform roll on a sighup
    #ns_param logrollhour     0    ;# 0, specify at which hour to roll
}

[1] https://openacs.org/conf2023/info/download/file/openacs-conf-2023-naviserver.pdf
From: Maksym Z. <siq...@gm...> - 2023-09-17 18:35:54
Hi Gustaf,

thank you for your explanation and link.

On Fri, Sep 15, 2023 at 9:19 AM Gustaf Neumann <ne...@wu...> wrote:
> Dear Maksym,
>
> On 15.09.23 01:09, Maksym Zinchenko wrote:
> > Hello, I've been struggling with a problem for a few days now. First
> > I thought it's something with my understanding of NX, but I think
> > its has to do something with Naviserver.
>
> actually, the problem is not with NaviServer, but it is rooted in the
> XOTcl/NX serializer. When a NaviServer blueprint is created,
> NaviServer performs "::Serializer all" to serialize the current
> workspace. Actually, it does not serialize the full workspace, since
> the workspace contains as well the NX internal class definitions.
> Therefore, the serializer has "ignore patterns", including everything
> from, e.g., the "::nsf" namespace [1].
>
> When objects are created with "new", these are named by default
> "::nsf::__#1", "::nsf::__#2". These names match the ignore pattern,
> and therefore, these names are excluded from the blueprint.
>
> If you create the objects in your code with the "-childof" flag (see
> below), you can place these outside the "::nsf" namespace, and
> everything should be fine
>
> Hope this helps
>
> -g
>
> [1] https://github.com/gustafn/nsf/blob/master/library/serialize/serializer.tcl#L868
>
> nx::Class create B {
>     :method init {} {
>         ns_log notice "Im class [self]"
>     }
> }
>
> nx::Class create C {
>     :method init {} {
>         ns_log notice "Im class [self]"
>     }
> }
>
> nx::Class create A {
>     :property -accessor public inBObj:object
>     :property -accessor public inCObj:object
>
>     :method init {} {
>         set :inBObj [B new -childof [::nsf::current object]]
>         set :inCObj [C new -childof [::nsf::current object]]
>     }
>
>     :public method myObjs {} {
>         ns_log notice "My B obj is: [${:inBObj} info name]"
>         ns_log notice "My C obj is: [${:inCObj} info name]"
>     }
> }
From: Gustaf N. <ne...@wu...> - 2023-09-15 10:19:20
Dear Maksym,

On 15.09.23 01:09, Maksym Zinchenko wrote:
> Hello, I've been struggling with a problem for a few days now. First I
> thought it's something with my understanding of NX, but I think it has
> to do something with Naviserver.

actually, the problem is not with NaviServer, but it is rooted in the
XOTcl/NX serializer. When a NaviServer blueprint is created, NaviServer
performs "::Serializer all" to serialize the current workspace.
Actually, it does not serialize the full workspace, since the workspace
also contains the NX internal class definitions. Therefore, the
serializer has "ignore patterns", excluding everything from, e.g., the
"::nsf" namespace [1].

When objects are created with "new", they are named by default
"::nsf::__#1", "::nsf::__#2", etc. These names match the ignore pattern,
and therefore these objects are excluded from the blueprint.

If you create the objects in your code with the "-childof" flag (see
below), you can place them outside the "::nsf" namespace, and everything
should be fine.

Hope this helps

-g

[1] https://github.com/gustafn/nsf/blob/master/library/serialize/serializer.tcl#L868

nx::Class create B {
    :method init {} {
        ns_log notice "Im class [self]"
    }
}

nx::Class create C {
    :method init {} {
        ns_log notice "Im class [self]"
    }
}

nx::Class create A {
    :property -accessor public inBObj:object
    :property -accessor public inCObj:object

    :method init {} {
        set :inBObj [B new -childof [::nsf::current object]]
        set :inCObj [C new -childof [::nsf::current object]]
    }

    :public method myObjs {} {
        ns_log notice "My B obj is: [${:inBObj} info name]"
        ns_log notice "My C obj is: [${:inCObj} info name]"
    }
}
From: Maksym Z. <siq...@gm...> - 2023-09-14 23:10:12
Hello, I've been struggling with a problem for a few days now. First I
thought it was something with my understanding of NX, but I think it has
to do with Naviserver.

I have a global library named "oodz" in my tcl folder; inside I have
some subfolders with my NX classes, plus an init.tcl file. According to
the documentation, this "init.tcl" file is executed first. The first
thing I do there is loop through my subfolders and source my NX classes.
That works as expected. Next I'm creating some instances of the classes
I need, inside "init.tcl".

I made a simple example to show the problem. I have NX classes like
this:

nx::Class create b {
    :method init {} {
        :say
    }

    :public method say {} {
        puts "Im class B"
    }
}

nx::Class create c {
    :method init {} {
        :say
    }

    :public method say {} {
        puts "Im class C"
    }
}

nx::Class create a {
    :property -accessor public inBObj:object
    :property -accessor public inCObj:object

    :method init {} {
        set :inBObj [b new]
        set :inCObj [c new]
    }

    :public method myObjs {} {
        puts "My B obj is: [${:inBObj} info name]"
        puts "My C obj is: [${:inCObj} info name]"
    }
}

In my "init.tcl" I have the line:

a create aObj

Here's what's happening:

1. I can see in the server log that aObj was created; I see the lines
   "Im class B" and "Im class C".

2. If I open the nsshell console and run cget on aObj:

   aObj cget -inBObj
   ::nsf::__#3
   aObj cget -inCObj
   ::nsf::__#4

3. But when I try to run myObjs, I get an error:

   aObj myObjs
   invalid command name "::nsf::__#3"

The same code works fine in a tcl shell. Is this related to Naviserver,
or am I doing something wrong with NX?

Thank you
From: Gustaf N. <ne...@wu...> - 2023-09-10 12:42:54
Dear all,

Part of the announcement of NaviServer 4.99.28 was the update of the
nsdbbdb module (Berkeley DB driver via nsdb). I did a few tests of its
performance that might interest a few people here. The result for this
module configured with LMDB is quite impressive (see below). For
comparison, accesses to nsv, ns_cache and similar are also included. All
timings are from my notebook (Apple Silicon M1). This is just a
micro-benchmark and measures buffered access through the NaviServer DB
driver infrastructure. More detailed information about LMDB in
comparison to Berkeley DB concerning large applications, plus more
micro-benchmarks, is in [1].

all the best
-gn

      183 ns  nsv_set foo x 1; time {nsv_get foo x} 100000
      213 ns  time {ns_cache_eval ns:memoize 1 {set x 1}} 100000
      253 ns  time {array set x {a 1 b 2 c 3}} 100000
      308 ns  ns_urlspace set -key foo1 /*.adp A; time {ns_urlspace get -key foo1 /static/test.adp} 100000
  >>> 511 ns  set ::db [ns_db gethandle lmdb]; time {ns_db 0or1row $::db "GET key1"} 100000
     3633 ns  time {parameter::get -package_id [ad_conn subsite_id] -parameter DefaultMaster -default "x"} 100000
    36893 ns  time {xo::dc get_value dbqd..qn {select title from acs_objects where object_id=221}} 100000
    42523 ns  time {::mongo::collection::query $::mongoColl {name string Gustaf}} 100000

Times are in nanoseconds; the "slowest" entries here are the accesses to
MongoDB (via the nsf package) and PostgreSQL. The PostgreSQL access
takes roughly 37 microseconds, which means one can run ~27K of those per
second. But with LMDB it is possible to get nearly 2 million DB accesses
per second.

[1] http://www.lmdb.tech/media/20130329-devox-MDB.pdf
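[Editor's note] A note on reading the numbers above: Tcl's "time"
command reports the average elapsed time per iteration in microseconds,
and throughput is simply its reciprocal. The sketch below uses a plain
Tcl operation (no NaviServer commands), so the absolute per-call cost
will differ from the table; it only illustrates the measurement pattern
and the arithmetic behind "~27K per second".

```tcl
# Micro-benchmark pattern used in the posting, shown with plain "incr".
# "time" returns e.g. "0.21 microseconds per iteration"; the first list
# element is the average cost per call.
set x 1
set report [time {incr x} 100000]
set usPerCall [lindex $report 0]

# Throughput = 1 second / per-call cost (per-call cost in microseconds).
set callsPerSecond [expr {1.0e6 / $usPerCall}]
puts "per call: $usPerCall us, ~[format %.0f $callsPerSecond] calls/sec"

# Sanity check of the PostgreSQL figure: 36893 ns/call.
puts [format %.0f [expr {1.0e9 / 36893}]] ;# prints 27105, i.e. ~27K calls/sec
```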