From: Robert S. (C. <rs...@fr...> - 2004-10-03 01:32:58
|
With the recent addition of the ability to set the buffer size for UDP and TCP sockets in 5.2, it seems like a good time to revisit the question of what the default value should be. The current behavior is to set them to 128k, which seems rather large. The rationale, from the CVS log message:

- set SO_SNDBUF and SO_RCVBUF to 128Kb for newly-opened UDP sockets, to enable large PDUs to be sent and received. Some implementations default very low (Solaris 2.7 8Kb, Linux 2.4 64Kb).

The other important bit of information about socket size is that it affects the number of packets that can be received while a process is busy. The original author of the patch to allow increased buffer sizes was motivated by the fact that his snmptrapd was losing packets due to an insufficient receive buffer.

The new patch allows one to set independent buffer sizes for client vs server, and send vs receive. Given that SNMP_MAX_PDU_SIZE is less than 64k, a default buffer size of 128k seems excessive.

I suggest that:

1) snmpd and snmptrapd both set the default receive buffer size to at least 128k, if not more, to minimize the chances of missing packets.

2) if no buffer size is specified, the default be to use the default size specified by the OS. I'm guessing that the average PDU is pretty small (<1k), so the OS defaults are probably very safe.

Thoughts, opinions and arguments welcomed.

-- 
Robert Story; NET-SNMP Junkie <http://www.net-snmp.org/> <irc://irc.freenode.net/#net-snmp>
Archive: <http://sourceforge.net/mailarchive/forum.php?forum=net-snmp-coders>
You are lost in a twisty maze of little standards, all different. |
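[For context: the 128k default under discussion is applied with setsockopt() when a transport socket is opened. A minimal standalone sketch of that mechanism - not the actual net-snmp code, and the helper name is illustrative - showing why reading the value back matters: the kernel may round, double, or clamp the request to a system-wide limit.]

```c
#include <netinet/in.h>
#include <sys/socket.h>
#include <sys/types.h>

/* Illustrative helper (not the net-snmp API): request a SO_SNDBUF or
 * SO_RCVBUF size and return what the kernel actually granted, or -1
 * on error. The granted value, not the request, is what limits how
 * many packets can queue while the process is busy. */
static int set_buf(int fd, int opt, int requested)
{
    int granted = 0;
    socklen_t len = sizeof(granted);

    if (setsockopt(fd, SOL_SOCKET, opt, &requested, sizeof(requested)) < 0)
        return -1;
    if (getsockopt(fd, SOL_SOCKET, opt, &granted, &len) < 0)
        return -1;
    return granted;
}
```

e.g. set_buf(fd, SO_RCVBUF, 128 * 1024) on a freshly opened UDP socket; on Linux the value reported back is typically double the request, which is why reading it back is the only reliable check.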
From: Geert De P. <ge...@de...> - 2004-10-03 07:24:30
|
The recent addition only allows tweaking of the UDP buffers.

First, a note on why I "forgot" about TCP. On the TCP side I looked at the code and saw the return code of the send is intercepted. Therefore I made the (wrong) assumption that this return code would be interpreted appropriately and a failed PDU would be resent, instead of freed, as I think it is right now :-(. Haven't looked at the required changes for fixing this on the TCP level too, but if guaranteed delivery (to the socket layer) is needed for TCP, probably an internal buffer should be kept and the sender should maintain its registration in the select() loop until the packet has been delivered. (I have to admit I have probably not spent enough time in that piece of code to make more assumptions here.) So for TCP buffers I have concerns about whether "we" should tune those on the OS level like we do with UDP (yes, with UDP there is no other alternative).

This patch to allow UDP buffer tuning was written because it was assumed a 1<<17 (128K) buffer is good enough for everyone. Sadly enough it sometimes isn't, and there was no way to change this hardcoded limit (besides changing the code). The new default behaves "exactly" how it used to be. This means you will normally end up with a 128K receive and send buffer (yes, normally, because the tuning parameters in the OS have to allow this size). I'm in favour of not changing default behavior unless there is a reason for it, but have to agree that this might be a bit excessive for most get/set operations.

As you said, SNMP_MAX_PDU_SIZE is 64K; that means that every application that does a synchronous network call (as far as I can see this is every snmp client application) will not need more than 64K (ever). Therefore the default send and receive buffer should be 64K for client apps using synch network calls. For server applications I would keep the default SNMP_MAX_PDU_SIZE * 2 but recommend higher values (if memory is not an issue).
The patch allows everyone to tune their send and receive buffers anyway. If people are really short on memory then they can even change the buffer below SNMP_MAX_PDU_SIZE.

So my recommendation is:

1) use SNMP_MAX_PDU_SIZE for send/receive buffers of client apps
2) use SNMP_MAX_PDU_SIZE * 2 for send/receive buffers of server apps

This would save you some memory on the client apps (while still staying in the safe range). On the server side I wouldn't change anything by default. High volume trap receivers should be aware of potential buffer overflow issues and should tune their buffers accordingly to minimize loss (SNMP_MAX_PDU_SIZE * 16 for example, which is close to the default Sun suggests for their high performance networking - HPPI/P) ... I'm sure one size won't fit all in this area.

Cheers,
-- Geert |
From: Geert De P. <ge...@de...> - 2004-10-03 08:09:03
|
My previous post made me realize that the current patch might still not be optimal in some situations...

In case someone has a default socket buffer size which is greater than our default (currently 128K) then we should probably respect that. At the moment, if the end user doesn't override the buffer size in their configuration, then regardless of the OS-defined buffer size we will change the buffer size to our default, which could result in downsizing the buffer. Let's say someone has a high performance network and sized their udp buffers to 1Mb (as part of tuning the OS); then unless the administrator put 1Mb in the buffer configuration for snmp, we will downsize it to 128K.

So my suggestion (which combines the previous posting and this posting) is:

if (server)
    DEFAULT_BUFFER = SNMP_MAX_PDU_SIZE * 2;
else
    DEFAULT_BUFFER = SNMP_MAX_PDU_SIZE;

if (valid snmp udp buffer size has been specified in config file) {
    change udp buffer to the specified size
    (this could mean upsize/downsize ... or Super Size for only $.40 more ;-)
} else if (current OS udp buffer < DEFAULT_BUFFER) {
    upsize udp buffer to the DEFAULT_BUFFER size
}

This leaves an OS buffer which is bigger than the DEFAULT_BUFFER untouched. The code change required for this is minimal ... and I could code it up if you think it makes sense.

-- Geert |
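[For context: Geert's "only upsize, never downsize" pseudocode maps onto a small amount of C. A sketch under stated assumptions - the function name and the hardcoded SNMP_MAX_PDU_SIZE are illustrative, not the actual patch:]

```c
#include <netinet/in.h>
#include <sys/socket.h>
#include <sys/types.h>

#define SNMP_MAX_PDU_SIZE (64 * 1024)

/* Illustrative policy, not the real net-snmp code: an explicitly
 * configured size always wins; otherwise only *raise* the OS default
 * up to our floor, and leave a larger OS-tuned buffer untouched.
 * Returns the size the kernel reports afterwards, or -1 on error. */
static int adjust_udp_buf(int fd, int opt, int configured, int is_server)
{
    int floor_size = is_server ? SNMP_MAX_PDU_SIZE * 2 : SNMP_MAX_PDU_SIZE;
    int current = 0;
    socklen_t len = sizeof(current);

    if (getsockopt(fd, SOL_SOCKET, opt, &current, &len) < 0)
        return -1;

    if (configured > 0)                 /* explicit config: up- or downsize */
        setsockopt(fd, SOL_SOCKET, opt, &configured, sizeof(configured));
    else if (current < floor_size)      /* OS default too small: upsize */
        setsockopt(fd, SOL_SOCKET, opt, &floor_size, sizeof(floor_size));
    /* else: OS buffer already bigger than our default - leave it alone */

    len = sizeof(current);
    if (getsockopt(fd, SOL_SOCKET, opt, &current, &len) < 0)
        return -1;
    return current;
}
```

One caveat for a real implementation: on Linux the value getsockopt() reports is double the setsockopt() request, so the comparison against the floor would have to account for that.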
From: Robert S. (C. <rs...@fr...> - 2004-10-03 14:22:53
|
On Sun, 3 Oct 2004 10:07:25 +0200 Geert wrote:

GDP> In case someone has a default socket buffer size which is greater than our
GDP> default (currently 128K) then we should probably respect that.

I think this is another good argument for leaving the OS default alone, unless a buffer size is explicitly set in the conf files (or at application startup).

GDP> This leaves an OS buffer which is bigger than the DEFAULT_BUFFER
GDP> untouched.

Except that it is conceivable that an application actually wants a smaller buffer size than the default.

-- 
Robert Story; NET-SNMP Junkie <http://www.net-snmp.org/> <irc://irc.freenode.net/#net-snmp>
Archive: <http://sourceforge.net/mailarchive/forum.php?forum=net-snmp-coders>
You are lost in a twisty maze of little standards, all different. |
From: Geert De P. <ge...@de...> - 2004-10-03 14:37:19
|
Technically I agree with leaving the UDP buffers alone (unless they are explicitly set in the configuration) like you suggest.

However, the consequences will be that more traps will get lost and bigger packets might not make it when people upgrade to version 5.2, just because their OS default happens to be smaller than our originally hardcoded 128K. Not a problem for me at all, but definitely not easy for an end user to debug why traps are lost "with this new version".

What is the "upgrade/backward compatibility" strategy for the net-snmp project? If such a change in operational behaviour is acceptable, then I'm all for it.

-- Geert |
From: Robert S. (C. <rs...@fr...> - 2004-10-03 14:46:18
|
On Sun, 3 Oct 2004 16:35:58 +0200 Geert wrote: GDP> Technically I agree with leaving the UDP buffers alone (unless they are GDP> explicitly set in the configuration) like you suggest. GDP> However, the consequences will be that more traps will get lost and bigger GDP> packets might not make it when people upgrade to version 5.2, just because GDP> their OS default happens to be smaller than our originally hardcoded 128K. Note that my original reply did explicitly specify upping the buffer size for snmptrapd and snmpd. -- Robert Story; NET-SNMP Junkie <http://www.net-snmp.org/> <irc://irc.freenode.net/#net-snmp> Archive: <http://sourceforge.net/mailarchive/forum.php?forum=net-snmp-coders> You are lost in a twisty maze of little standards, all different. |
From: Robert S. (C. <rs...@fr...> - 2004-10-03 18:07:09
|
On Sun, 3 Oct 2004 18:49:17 +0200 Geert wrote:

GDP> OK ... Replying off-list [...]

Replying on-list, because everyone should see all the arguments (eg this isn't solely my decision, so you need to present your arguments to everyone).

GDP> You did mention in your original reply to up the buffers of the server
GDP> apps. Now what about an snmpset with a huge payload (which might work,
GDP> because send buffers could be treated differently than receive buffers)
GDP> ... Or even snmpbulkget, snmpget which will have to return a large amount
GDP> of data. You could even have a MIB with unrealistically big OID values
GDP> (dynamic MIBs for example), in that case the amount of data won't even
GDP> matter ...

Yes, those are all considerations.

GDP> So in that case when the OS has a default buffer that is too small for the
GDP> PDU, the reply won't fit in the udp buffer, the message will never be
GDP> received (I think the client app will time out).

Well, that depends on the client side, and we can't assume that the client side has the same buffer size as us anyways. Not everyone uses net-snmp! ;-) The minimum size required in the RFCs is just 484 bytes.

GDP> I feel this is breaking with backward compatibility (because in the past,
GDP> you had 128K at your disposal), but my definition of backward compatible
GDP> might be off from yours.

I think that backwards compatibility applies to the APIs, not the internals.

GDP> I feel that a client app should therefore at least have a
GDP> SNMP_MAX_PDU_SIZE buffer available on its sockets (which was a suggestion
GDP> I sent in the very first reply).

I'm not convinced. I still think we shouldn't mess with OS defaults. If someone wants their default to be larger, they now have a mechanism to increase it, without tuning the kernel or recompiling the application.

Another possibility (if you want to submit another patch) would be to add a configure option to specify the default at configure time.
-- Robert Story; NET-SNMP Junkie <http://www.net-snmp.org/> <irc://irc.freenode.net/#net-snmp> Archive: <http://sourceforge.net/mailarchive/forum.php?forum=net-snmp-coders> You are lost in a twisty maze of little standards, all different. |
From: Simon L. <si...@li...> - 2004-10-07 19:42:14
|
Geert De Peuter writes:
> If (server)
>     DEFAULT_BUFFER = SNMP_MAX_PDU_SIZE * 2;
> else
>     DEFAULT_BUFFER = SNMP_MAX_PDU_SIZE;

Hm... when I try to find a reasonable buffer size, I don't think I would care much about SNMP_MAX_PDU_SIZE, but rather about the EXPECTED traffic that I want the OS to buffer in times of congestion, i.e. about what's a reasonable number of trap (or request) PDUs to buffer, and what is the TYPICAL size.

As long as SNMP_MAX_PDU_SIZE is 65536, your formulas happen to yield reasonable results... but I would probably use something more along the lines of

If (server)
    DEFAULT_BUFFER = SNMP_TYPICAL_PDU_SIZE * 200;

or, if you want to make sure we can receive maximally large PDUs,

If (server) {
    DEFAULT_BUFFER = SNMP_TYPICAL_PDU_SIZE * 200;
    if (SNMP_MAX_PDU_SIZE > DEFAULT_BUFFER)
        DEFAULT_BUFFER = SNMP_MAX_PDU_SIZE;
}

-- Simon. |
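[For context: Simon's second variant is easy to check in a few lines of C. SNMP_TYPICAL_PDU_SIZE is hypothetical - it is not an existing net-snmp constant - and the 1500-byte value (roughly one Ethernet frame) is assumed purely for illustration:]

```c
#define SNMP_MAX_PDU_SIZE     (64 * 1024)
#define SNMP_TYPICAL_PDU_SIZE 1500   /* assumed value, see note above */

/* Size the server buffer for ~200 typical PDUs of backlog, but never
 * below one maximally large PDU. */
static long default_server_buffer(void)
{
    long buf = (long)SNMP_TYPICAL_PDU_SIZE * 200;   /* 300000 bytes */
    if (buf < SNMP_MAX_PDU_SIZE)
        buf = SNMP_MAX_PDU_SIZE;
    return buf;
}
```

With these assumed numbers the typical-traffic term dominates (300000 > 65536); the max-PDU clause only kicks in when the typical size is set below about 328 bytes (65536 / 200).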
From: Geert De P. <ge...@de...> - 2004-10-07 20:45:50
|
SL> Hm... when I try to find a reasonable buffer size, I don't think
SL> I would care much about SNMP_MAX_PDU_SIZE, but rather about the
SL> EXPECTED traffic that I want the OS to buffer in times of
SL> congestion, i.e. about what's a reasonable number of trap
SL> (or request) PDUs to buffer, and what is the TYPICAL size.

What I wanted to say in my email was that the buffer should NEVER be less than MAX_PDU_SIZE. I tried to bring this up as a counterargument against using the default system buffer size (which could be in the "couple of K" range). The rationale behind this: I feel we shouldn't miss a perfectly legit packet when we are not in a bursting scenario...

SL> If (server)
SL>     DEFAULT_BUFFER = SNMP_TYPICAL_PDU_SIZE * 200;

Your formula is based on two empirical values. The definition of "TYPICAL" is probably "something that usually works". The 200 is probably based on your experience. The product of these two is "something that usually works based on your experience" ;-)

SL> if (SNMP_MAX_PDU_SIZE > DEFAULT_BUFFER)
SL>     DEFAULT_BUFFER = SNMP_MAX_PDU_SIZE;

So why do you want to add this extra check? I also have the feeling you seem to agree with the assumption that the DEFAULT_BUFFER should never be less than the MAX_PDU_SIZE ... so somehow you do seem to care about SNMP_MAX_PDU_SIZE, contrary to what you wrote earlier.

Anyway, I think the discussion slowed down after it was felt it would be better to: "Use the system default buffer size and don't touch the UDP buffers, unless someone has it specified in the conf files". This gives flexibility to the system administrators to set the UDP buffers the way they want and gives corrective power in the snmpd.conf in case the default values are not good enough for the snmp apps.
An incremental patch has been written for this behaviour (it also adds some compile-time configure options for allowing a hardcoded minimal default buffer if needed):

<http://sourceforge.net/tracker/index.php?func=detail&aid=1022787&group_id=12694&atid=312694>

The latest status of this patch is "Pending" (I think because a consensus has not been reached yet).

Cheers,
-- Geert |
From: Wes H. <har...@us...> - 2004-10-08 16:41:26
|
>>>>> On Thu, 7 Oct 2004 22:45:27 +0200, "Geert De Peuter" <ge...@de...> said: Geert> The latest status of this patch is "Pending" (I think because a Geert> consensus has not been reached yet) Hmm... No it should have been "closed". I think it was a submit bug (IE, I changed it to pending when I meant closed). So is patch3 now an additional patch? If so, is it against current CVS (where your patch was applied) or against pre-your-patch? -- Wes Hardaker Sparta |
From: Robert S. (C. <rs...@fr...> - 2004-10-08 18:04:59
|
On Fri, 08 Oct 2004 09:41:21 -0700 Wes wrote:

WH> Geert> The latest status of this patch is "Pending" (I think because a
WH> Geert> consensus has not been reached yet)
WH>
WH> Hmm...
WH>
WH> No it should have been "closed". I think it was a submit bug (IE, I
WH> changed it to pending when I meant closed).

Have we reached consensus? I don't remember seeing your opinion...

-- 
Robert Story; NET-SNMP Junkie <http://www.net-snmp.org/> <irc://irc.freenode.net/#net-snmp>
Archive: <http://sourceforge.net/mailarchive/forum.php?forum=net-snmp-coders>
You are lost in a twisty maze of little standards, all different. |
From: Wes H. <har...@us...> - 2004-10-08 20:35:31
|
>>>>> On Fri, 8 Oct 2004 14:04:47 -0400, Robert Story (Coders) <rs...@fr...> said:

WH> Hmm...
WH>
WH> No it should have been "closed". I think it was a submit bug (IE, I
WH> changed it to pending when I meant closed).

Robert> Have we reached consensus? I don't remember seeing your
Robert> opinion...

No, I meant I closed it (err... pendinged it) before the argument even started.

-- Wes Hardaker Sparta |
From: Michael J. S. <sl...@be...> - 2004-10-03 13:41:19
|
Robert Story (Coders) wrote:
> With the recent addition of the ability to set the buffer size for UDP and TCP
> sockets in 5.2, it seems like a good time to revisit the question of what the
> default value should be. [...]
>
> Thoughts, opinions and arguments welcomed.

I may be overreacting. You get to decide.

Are there not O.S. platform specific mechanisms for buffer tuning that exist already? Would those be better documented and well known? Would network administrators know the interactions of application-specific tuning with the system level tuning?

Would Net-SNMP users, in general very nice people, be also crack network programmers or administrators that can triage tuning inefficiencies? A number of them are.
Would Net-SNMP developers, given incomplete information, provide sufficient guidance to assist in the successful deployment of some performance-tuned application scenario? I can see that happen.

With the greatest respect to the author of the patch, I think the default should be "do not set this". Better yet, I think the patch should be removed, and left in the "This Works For Me" kind of patches that we collect, and not incorporated into the project. If the community really wants the patch, well, I will reconsider my arguments in that light.

Other ways more pertinent to this discussion may be to reconsider how some of the logic is implemented. I think there are several well-thought-out, low-impact performance improvements that are somewhere in the bug, patch, mail archive. [Finding them is not a trivial exercise!] This one, for instance, claims improved response to traps through more efficient use of the snmp_oid_compare() call:

Patch 1022941 Speed up adding a row to a table

There may be a number of other methods worth dis-covering.

Best Regards,
-Mike Slifcak |
From: Geert De P. <ge...@de...> - 2004-10-03 14:25:47
|
Just as a quick FYI. If "it is decided" to pull the patch then the network admins/system = admins would not be able to tweak anything anymore, because the situation as it = was "before the patch" reset both the send and receive buffer to 128K (hardcoded). That means, even before the patch all net-snmp apps were changing buffers to make at least the PDU fit in the OS assigned buffer. If your suggestion is to remove all buffer handling code from net-snmp, = then that might leave us with the situation that a user will only get an 8k buffer (default on Solaris, as mentioned by Robert). In that case = snmptrapd would simply not be able to receive traps more than 8k, and it wouldn't = take a lot of traps before they get lost in the UDP stack (for snmptrapd). Furthermore, in case a user is smart enough he has encountered a buffer issue (and therefore he missed traps) then the network/sysadmin would = only be able to change the default buffer size os a system wide setting. So in case the admin changes the UDP default buffer size to 128K by = default, every process that creates a UDP socket would end up with 128K of kernel memory... The question would remain: how can we satisfy the administrator who = wants a big receive buffer for snmptrapd (because those are network-buffer = demanding applications), but use the system default for all the rest of the UDP sockets ? I think the patch is the only way to give an administrator that = flexibility - and I'm not saying that because I happen to write the patch ;-) Of course if anyone can think of a way to only give a certain process a bigger send/receive buffer (and not as a system wide setting), then it = would definitely be interesting to check it out as an alternative. Cheers, -- Geert PS : Would love to see patch 1022941 added too ! -----Original Message----- From: net...@li... [mailto:net...@li...] On Behalf Of = Michael J. Slifcak Sent: Sunday, October 03, 2004 3:40 PM To: net...@li... 
Cc: John Naylon Subject: Re: default sock buffer size: what should it be? Robert Story (Coders) wrote: > With the recent addition of the ability to set the buffers size for=20 > UDP and TCP sockets in 5.2, it seems like a good time to revisit the=20 > question of what the default value should be. The current behavior is=20 > to set them to 128k, which seems rather large. The rational, from the=20 > CVS log message: >=20 > - set SO_SNDBUF and SO_RCVBUF to 128Kb for newly-opened UDP sockets, > to enable large PDUs to be sent and received. Some > implementations default very low (Solaris 2.7 8Kb, Linux 2.4 > 64Kb). >=20 > The other important bit of information about socket size is that if=20 > affects the number of packets that can be received while a process is=20 > busy. The original author of the patch to allow increased buffers=20 > sizes was motivated by the fact that his snmptrapd was losing packets=20 > due to an insufficient receive buffer. >=20 > The new patch allows on to set independent buffer sizes for client vs=20 > server, and send vs receive. Given that SNMP_MAX_PDU_SIZE is less than = > 64k, a default buffer size of 128k seems excessive. >=20 >=20 > I suggest that: >=20 >=20 > 1) snmpd and snmptraps both set the default receive buffers size to at = > least 128k, if not more, to minimize the changes of missing packets. >=20 > 2) if no buffer size is specified, that the default be to use the=20 > default size specified by the OS. I'm guessing that the average PDU is = > pretty small (<1k), so the OS defaults are probably very safe. >=20 >=20 > Thoughts, opinions and arguments welcomed. >=20 >=20 I may be overacting. You get to decide. Are there not O.S. platform specific mechanisms for buffer tuning that = exist already ? Would those be better documented and well known ? Would = network administrators know the interactions of application=20 specific tuning with the system level tuning ? 
Would Net-SNMP users, in general very nice people, also be crack network programmers or administrators who can triage tuning inefficiencies ? A number of them are. Would Net-SNMP developers, given incomplete information, provide sufficient guidance to assist in the successful deployment of some performance-tuned application scenario ? I can see that happening.

With the greatest respect to the author of the patch, I think the default should be "do not set this". Better yet, I think the patch should be removed, and left in the "This Works For Me" kind of patches that we collect, and not incorporated into the project.

If the community really wants the patch, well, I will reconsider my arguments in that light.

Other ways more pertinent to this discussion may be to reconsider how some of the logic is implemented.

I think there are several well-thought-out, low-impact performance improvements that are somewhere in the bug, patch, and mail archives. [Finding them is not a trivial exercise!]

This one, for instance, claims improved response to traps through more efficient use of the snmp_oid_compare() call:

Patch 1022941 Speed up adding a row to a table

There may be a number of other methods worth discovering.

Best Regards,
-Mike Slifcak

_______________________________________________
Net-snmp-coders mailing list
Net...@li...
https://lists.sourceforge.net/lists/listinfo/net-snmp-coders
|
From: Robert S. (C. <rs...@fr...> - 2004-10-03 14:44:49
|
On Sun, 03 Oct 2004 09:39:52 -0400 Michael wrote:

MJS> Robert Story (Coders) wrote:
MJS> > The new patch allows one to set independent buffer sizes for client vs
MJS> > server, and send vs receive. Given that SNMP_MAX_PDU_SIZE is less than
MJS> > 64k, a default buffer size of 128k seems excessive.
MJS> >
MJS> > I suggest that:
MJS> >
MJS> > 1) snmpd and snmptrapd both set the default receive buffer size to at
MJS> > least 128k, if not more, to minimize the chances of missing packets.
MJS> >
MJS> > 2) if no buffer size is specified, that the default be to use the
MJS> > default size specified by the OS. I'm guessing that the average PDU is
MJS> > pretty small (<1k), so the OS defaults are probably very safe.
MJS>
MJS> Are there not O.S. platform-specific mechanisms for buffer tuning
MJS> that exist already ?

Yes, for the default buffer size for the whole system. Some applications may have different needs.

MJS> With the greatest respect to the author of the patch,
MJS> I think the default should be "do not set this".

It's not just about the patch. For the past 3.5 years, the default has set a fixed buffer size of 128k. From the tone of your message, I infer that you would eliminate this fixed size too (e.g. #2 in my list).

MJS> I think the patch should be removed, and left in the "This Works For Me"
MJS> kind of patches that we collect, and not incorporated into the project.

I disagree here. I think it is very useful to be able to resize the buffers without having to patch the code. In particular, it's very useful for snmptrapd, which may receive lots of traps very quickly, and isn't particularly efficient about processing them.

MJS> I think there are several well-thought-out, low-impact performance
MJS> improvements that are somewhere in the bug, patch, and mail archives.
MJS> [Finding them is not a trivial exercise!]
MJS>
MJS> This one, for instance, claims improved response to traps through
MJS> more efficient use of the snmp_oid_compare() call.
MJS>
MJS> Patch 1022941 Speed up adding a row to a table

Good one! I'll be applying it shortly.

Actually, it would probably be worth changing table_data to use netsnmp_container instead of a linked list. That would improve the performance of walking large tables too. Maybe for 5.3.

--
Robert Story; NET-SNMP Junkie <http://www.net-snmp.org/> <irc://irc.freenode.net/#net-snmp>
Archive: <http://sourceforge.net/mailarchive/forum.php?forum=net-snmp-coders>

You are lost in a twisty maze of little standards, all different.
|
From: Michael J. S. <sl...@be...> - 2004-10-03 14:41:10
|
Michael J. Slifcak wrote:
>
> With the greatest respect to the author of the patch,
> I think the default should be "do not set this". Better yet,
> I think the patch should be removed, and left in the "This Works For Me"
> kind of patches that we collect, and not incorporated into the project.
>
> If the community really wants the patch, well, I will reconsider
> my arguments in that light.

I reread this and thought "what a pompous ass I've become." Please don't give my opinion much weight.

I was concerned with the additional overhead and how that might make managing Net-SNMP deployments more difficult. That's all.

> Other ways more pertinent to this discussion may be to reconsider
> how some of the logic is implemented.
>
> I think there are several well-thought-out, low-impact performance
> improvements that are somewhere in the bug, patch, and mail archives.
> [Finding them is not a trivial exercise!]
>
> This one, for instance, claims improved response to traps through
> more efficient use of the snmp_oid_compare() call.
>
> Patch 1022941 Speed up adding a row to a table
>
> There may be a number of other methods worth discovering.
>
> Best Regards,
> -Mike Slifcak
|
From: Simon L. <si...@li...> - 2004-10-07 19:33:45
|
Michael J Slifcak writes:
> With the greatest respect to the author of the patch,
> I think the default should be "do not set this". Better yet,
> I think the patch should be removed, and left in the "This Works For Me"
> kind of patches that we collect, and not incorporated into the project.
> If the community really wants the patch, well, I will reconsider
> my arguments in that light.

I have some experience with UDP-packet-consuming applications, and I think that tunable receive buffers for the UDP sockets in snmpd and snmptrapd are an EXCELLENT idea.

I also think that 128 KB is a very reasonable default value. I wouldn't worry too much about memory waste, because typically one has very few snmpds and snmptrapds listening for UDP packets.

Having the system buffer a burst of 128 KB worth of traps while snmptrapd is somehow busy is neat, in particular when you consider that people can run scripts off their snmptrapd.conf. And an snmptrapd will often get traps from many devices, sometimes almost at once as the result of a failure (e.g. when a router that acts as a BGP Route Reflector is rebooted and all route reflector clients send adjacency traps).

For snmpd I'm not sure how much incoming traffic you really want the OS to buffer - that probably depends on how many independent pollers you want to support. I guess that the buffer requirement would typically be somewhat lower than for snmptrapd, but 128 KB seems reasonable here, too.

I don't really have an opinion on the UDP send buffers. For the sake of symmetry, it's nice to have them configurable, too, but I cannot think of good defaults. So it seems to make sense to leave the system-wide defaults in place when nothing is specified here.

--
Simon.
|
From: Robert S. (C. <rs...@fr...> - 2004-10-08 18:02:12
|
On Thu, 07 Oct 2004 21:33:08 +0200 Simon wrote:

SL> I also think that 128 KB is a very reasonable default value. I
SL> wouldn't worry too much about memory waste, because typically one has
SL> very few snmpds and snmptrapds listening for UDP packets.

But these defaults are in the library, so they affect anyone who uses the library. Someone with an asynchronous app polling 1,000 devices could be using quite a few sockets (depending on how the app is coded).

Again, I think the applications (snmpd, snmptrapd) should explicitly request a larger buffer, but leave the defaults to the OS.

--
Robert Story; NET-SNMP Junkie <http://www.net-snmp.org/> <irc://irc.freenode.net/#net-snmp>
Archive: <http://sourceforge.net/mailarchive/forum.php?forum=net-snmp-coders>

You are lost in a twisty maze of little standards, all different.
|