From: Simon P. <sim...@th...> - 2005-04-26 15:15:34
I am beginning to use Ecore in an embedded-device application, and using ecore_con to implement a TCP server. The nature of the application and the resource limitations of the device mean that I can only afford to service one client at a time. In a traditional Unix server, I would accomplish this by not forking for each incoming connection, and not calling accept() again until I have finished servicing the previous connection. I could then rely on the TCP subsystem to queue incoming connections for me.

I'm struggling to understand the sequence of events in _ecore_main_loop_iterate_internal clearly enough to work out if there is a way of using Ecore_Con that achieves this effect. I think there isn't, but I may be wrong.

If there isn't, then would a patch of the following nature do the trick?

1. Add field "char allow_concurrent_connections : 1;" to _Ecore_Con_Server
   (ecore_con_private.h)
2. At the start of ecore_con_svr_handler (ecore_con.c), if
   allow_concurrent_connections == 0 and the svr->clients list is not empty,
   return without calling accept().

If I get confirmation that this isn't currently supported and this patch will work, then I will code and submit...

-- 
Simon Poole
www.appliancestudio.com
From: Nathan I. <nin...@gm...> - 2005-04-26 15:58:23
It looks like the current scheme requires you to wait for an ECORE_CON_CLIENT_ADD event on the server, at which point you could check the number of active connections and call ecore_con_client_del if the allowed number is exceeded. So it has to accept the connection, set up the descriptors, and iterate through the event loop before the teardown can occur. Would this be too much overhead in your case?

Rather than a single flag allowing concurrent connections, maybe a limit on the number of current connections, with -1 (default?) indicating unlimited, would be more useful. For your case you could set it to 1 and get the desired effect.

On 4/26/05, Simon Poole <sim...@th...> wrote:
> ...snip...
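In code, the scheme Nathan describes comes down to a handler along these lines. This is a minimal sketch against the 2005-era Ecore/Ecore_Con API (old-style int-returning event handlers); the MAX_CLIENTS constant and the client_count bookkeeping are illustrative, not part of ecore_con:

#include <Ecore.h>
#include <Ecore_Con.h>

#define MAX_CLIENTS 1   /* illustrative limit */

static int client_count = 0;

/* old-style Ecore event handler: return 1 to keep the handler installed */
static int
client_add_cb(void *data, int type, void *event)
{
   Ecore_Con_Event_Client_Add *e = event;

   if (client_count >= MAX_CLIENTS)
     {
        /* already busy - disconnect the excess client immediately */
        ecore_con_client_del(e->client);
        return 1;
     }
   client_count++;
   /* ... start servicing e->client ... */
   return 1;
}

static int
client_del_cb(void *data, int type, void *event)
{
   /* guard against going negative if a del event fires for a client
      we already rejected ourselves - a detail the sketch glosses over */
   if (client_count > 0) client_count--;
   return 1;
}

/* registered once at startup:
   ecore_event_handler_add(ECORE_CON_EVENT_CLIENT_ADD, client_add_cb, NULL);
   ecore_event_handler_add(ECORE_CON_EVENT_CLIENT_DEL, client_del_cb, NULL); */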
From: Carsten H. (T. R.) <ra...@ra...> - 2005-04-27 03:06:40
On Tue, 26 Apr 2005 10:58:17 -0500 Nathan Ingersoll <nin...@gm...> babbled:

> It looks like the current scheme requires you to wait for an
> ECORE_CON_CLIENT_ADD event on the server, at which point you could
> check the number of active connections and call ecore_con_client_del
> if the allowed number is exceeded. So it has to accept the connection,
> set up the descriptors, and iterate through the event loop before the
> teardown can occur. Would this be too much overhead in your case?

i doubt it would :) that's not much overhead at all... it is convenient as you have fine-grained control over who to accept and deny - like being able to prioritize clients based on whether they are authenticated "users", and then maybe even different users have different privilege levels: some may be registered users but not paying users, some may pay and register, some may be anonymous, for example. so if you're too busy you deny the anonymous first; if still too busy, deny non-paying users, etc. :)

my original idea was to just make ecore_con very dumb and always be friendly, and let the callbacks and app decide policy from there (as an accept is fairly trivial in terms of overhead... compared to the rest of the program).

> Rather than a single flag allowing concurrent connections, maybe a
> limit on the number of current connections, with -1 (default?)
> indicating unlimited, would be more useful. For your case you could set
> it to 1 and get the desired effect.

damn.. just what i said! :)

> On 4/26/05, Simon Poole <sim...@th...> wrote:
> > ...snip...

-- 
------------- Codito, ergo sum - "I code, therefore I am" --------------
The Rasterman (Carsten Haitzler)    ra...@ra...
裸好多                              ra...@de...
Tokyo, Japan (東京 日本)
From: Carsten H. (T. R.) <ra...@ra...> - 2005-04-27 03:03:00
On Tue, 26 Apr 2005 16:13:38 +0100 Simon Poole <sim...@th...> babbled:

> I am beginning to use Ecore in an embedded-device application, and using
> ecore_con to implement a TCP server.
>
> The nature of the application and the resource limitations of the device
> mean that I can only afford to service one client at a time. In a
> traditional Unix server, I would accomplish this by not forking for each
> incoming connection, and not calling accept() again until I have
> finished servicing the previous connection. I could then rely on the
> TCP subsystem to queue incoming connections for me.

THAT limited? or is the nature of the servicing itself very heavy (eg xferring lots of encoded video data) etc.?

> I'm struggling to understand the sequence of events in
> _ecore_main_loop_iterate_internal clearly enough to work out if there is
> a way of using Ecore_Con that achieves this effect. I think there
> isn't, but I may be wrong.

ok - ecore_con accepts everyone. it's a very friendly little fellow. very accepting. it's up to you what you do. it won't fork per connection - it kind of expects the app to multi-task itself here. whenever a new client gets accepted you get a new event, ECORE_CON_CLIENT_ADD - this gives you the client handle, and you may choose at this point what to do. you can delete the client (ecore_con_client_del()) to instantly disconnect it - or you could simply queue it. it ends up as just a client handle. it will read data sent and give it to you - but you can choose to just ignore it for a while, or whatever you please.

there is no control in ecore_con over handling accepts, as ecore is a "batching system" that does things in stages in a pipeline. basically it will sit in select() waiting for a timeout or for data to wake it up on an fd. in the case of clients trying to connect, the fd for the listening socket will wake up and then ecore_con will do some accepts - as many as needed to accept all currently pending clients. it generates client structures and stores them, then generates client add events and puts them in a queue. it will keep running along, handling any other fd data that needs handling, timeouts that may have now expired, etc. THEN it gets to processing the event queue - where it loops over it, calling the callbacks registered for each event type.

this ends up quite efficient, as it promotes batching and allows code to batch-process stuff easily, and even to easily weed out pointless operations. ("move X to A, move X to B" in sequence can, depending on the semantics of the system, be compressed to "move X to B", as the time spent at A is of no consequence - assuming the nature of the task allows this. eg mouse move events work quite well this way, as you really are interested in the latest position, and in positions on certain boundaries (enter and leave) - but if you are "too slow" to read the 200 move events in the middle, then you need to play catch-up by skipping events.)

> If there isn't, then would a patch of the following nature do the trick?
> 1. Add field "char allow_concurrent_connections : 1;" to
>    _Ecore_Con_Server (ecore_con_private.h)

sure - or add a client count limiter as well. the problem with this is that ecore_con will still accept connections until control is in your callbacks, so it may accept 2 or 3 clients if they connect at the same time. the intent of the client_add events was to allow filtering by instant disconnection or by some form of authentication (anonymous clients may now be disconnected and authenticated ones kept). i would make 2 flags. 1. a client count limiter (negative == unlimited, 0 == too busy to accept any clients atm, 1+ == max number of concurrent clients) and 2. an excess client accept policy: do they get instantly disconnected by ecore_con, or does ecore_con just stop calling accept() until client count < max client count (thus allowing the kernel to buffer)? sooner or later clients will not be accepted anymore by the kernel either. the question is - where do you want this control?

> 2. At the start of ecore_con_svr_handler (ecore_con.c), if
>    allow_concurrent_connections == 0 and the svr->clients list is not
>    empty, return without calling accept().

kind of a subset of the above :)

> If I get confirmation that this isn't currently supported and this patch
> will work, then I will code and submit...

indeed it would work. unless you are happy to accept then disconnect (clients then KNOW the server is too busy), whereas a client that is not accepted has no idea what's going on... :)

btw - sounds like you are working on something interesting. i assume you can't share the gory details? :)

-- 
------------- Codito, ergo sum - "I code, therefore I am" --------------
The Rasterman (Carsten Haitzler)    ra...@ra...
裸好多                              ra...@de...
Tokyo, Japan (東京 日本)
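For clarity, the difference between the two policies raster outlines, as a standalone plain-sockets sketch - this is not ecore_con code, and the client_limit / reject_excess variables and function name are purely illustrative:

#include <sys/socket.h>
#include <unistd.h>

static int client_limit = 1;    /* -1 would mean unlimited */
static int reject_excess = 0;   /* 0: defer accept(), 1: accept then close */

static void
handle_listen_fd_ready(int listen_fd, int active_clients)
{
   int fd;

   if ((client_limit >= 0) && (active_clients >= client_limit))
     {
        if (!reject_excess) return;  /* leave connections queued in the kernel */
        fd = accept(listen_fd, NULL, NULL);
        if (fd >= 0) close(fd);      /* accept, then instantly disconnect -
                                        the client KNOWS the server is busy */
        return;
     }
   fd = accept(listen_fd, NULL, NULL);
   if (fd < 0) return;
   /* ... create the client structure and start servicing fd ... */
}

With reject_excess == 0 the client sees a completed TCP handshake that simply waits (the kernel's backlog holds it); with reject_excess == 1 it sees an immediate disconnect - exactly the "where do you want this control?" trade-off.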
From: Simon P. <sim...@th...> - 2005-04-27 09:51:19
Thanks again for providing such a detailed response!

Carsten Haitzler (The Rasterman) wrote:

> THAT limited? or is the nature of the servicing itself very heavy (eg
> xferring lots of encoded video data) etc.?

It's the nature of the servicing -- memory-intensive image processing.

> ok - ecore_con accepts everyone. it's a very friendly little fellow.
> ...snip... you can delete the client (ecore_con_client_del()) to
> instantly disconnect it

Deleting the client isn't acceptable in this case -- it needs to be queued.

> - or you could simply queue it. it ends up as just a client handle. it
> will read data sent and give it to you - but you can choose to just
> ignore it for a while, or whatever you please.

I considered doing my own queuing, but presumably while I'm servicing a client I'll still be receiving data from the queued clients, using up my precious RAM (embedded device, no swap!). I couldn't see a way of "pausing" the reads on the queued clients until I'm ready for them.

Unfortunately the protocol we're using doesn't allow me to negotiate with the client to ask it to hold back. It's a one-way push mechanism.

...snip...

> this ends up quite efficient, as it promotes batching and allows code to
> batch-process stuff easily ...snip... but if you are "too slow" to read
> the 200 move events in the middle, then you need to play catch-up by
> skipping events.

This is one of the things I really like about Ecore. It has really strong architectural simplicity. It makes it incredibly quick and easy to use. Well done!

> i would make 2 flags. 1. a client count limiter (negative == unlimited,
> 0 == too busy to accept any clients atm, 1+ == max number of concurrent
> clients) and 2. an excess client accept policy: do they get instantly
> disconnected by ecore_con, or does ecore_con just stop calling accept()
> until client count < max client count (thus allowing the kernel to
> buffer)? sooner or later clients will not be accepted anymore by the
> kernel either. the question is - where do you want this control?

...snip...

> indeed it would work. unless you are happy to accept then disconnect
> (clients then KNOW the server is too busy), whereas a client that is not
> accepted has no idea what's going on... :)

Sounds good to me. I'll give it a go and submit for comment.

> btw - sounds like you are working on something interesting. i assume you
> can't share the gory details? :)

I'm working on the new electronic signage products for Lucid Signs (http://www.lucidsigns.com/, http://linuxdevices.com/articles/AT6529561810.html). They are essentially LCD-based signs that think they are network printers.

The print process uses our own binary protocol over TCP port 9100. Clients (typically a Windows 2000/XP print driver) connect to this port and send binary image data, which has to be decompressed, processed, bit-packed, dithered (our framebuffer is limited to 16bpp) and recompressed on the device as it comes in. This is necessary because bit-packing and dithering on the fly at display time is too slow on our platform.

Images vary in size, depending on screen resolution, and print jobs may contain multiple pages. We have found it better to queue clients and then process each job in turn. In fact, it would be extremely unusual for more than one person to try to print to a single sign at the same time anyway.

If I reject a Windows print driver client because I'm busy, then Windows will report an error and then retry, which has the right effect of queuing it, but looks extremely flaky to an end user!

This product is already on the market. The reason I'm moving our code over to use Ecore is that we're doing a touchscreen interactive version of the RoomSign product, and from what I've seen I believe that Evas is going to be perfect for doing the UI.

Moving to Ecore has dramatically simplified some of our code, so we're very grateful. When the EFL-based version gets released, I'll contact LinuxDevices and get them to add a nice credit!

-- 
Simon Poole
www.appliancestudio.com
From: Carsten H. (T. R.) <ra...@ra...> - 2005-04-29 04:54:14
On Wed, 27 Apr 2005 10:49:30 +0100 Simon Poole <sim...@th...> babbled:

> > THAT limited? or is the nature of the servicing itself very heavy (eg
> > xferring lots of encoded video data) etc.?
>
> It's the nature of the servicing -- memory-intensive image processing.

ok - fair enough :)

> > you can delete the client (ecore_con_client_del()) to instantly
> > disconnect it
>
> Deleting the client isn't acceptable in this case -- it needs to be
> queued.

hmm - though it depends where you queue it (kernel or userspace) - but from your info later you don't have control over the client... so... this makes sense :)

> I considered doing my own queuing, but presumably while I'm servicing a
> client I'll still be receiving data from the queued clients, using up my
> precious RAM (embedded device, no swap!). I couldn't see a way of
> "pausing" the reads on the queued clients until I'm ready for them.

yeah. that's true. ecore_con was meant to be highly simplistic - taking away a lot of the "hassle" of dealing with multiple clients and doing a lot of the buffering and handling work for you, so in the end you deal with only 3 simple things: a client got added, data arrived from a client, and a client disconnected. but as you have found - it's missing controls for more specialised uses :)

> Unfortunately the protocol we're using doesn't allow me to negotiate
> with the client to ask it to hold back. It's a one-way push mechanism.

ok - no control there :(

> This is one of the things I really like about Ecore. It has really
> strong architectural simplicity. It makes it incredibly quick and easy
> to use. Well done!

thanks :) it's just a solution to having had to do the same things again and again myself - so i threw it into a "kitchen sink" library that just simplifies it all, so i only solve the problem once and get on with my app :) it's not the only thing of its kind though - glib can do similar things. i personally prefer my own methods - but it's much of a muchness - ecore does things glib doesn't and vice-versa. the huge advantage of ecore is i can get changes into it without a political battle :)

> I'm working on the new electronic signage products for Lucid Signs
> (http://www.lucidsigns.com/,
> http://linuxdevices.com/articles/AT6529561810.html). They are
> essentially LCD-based signs that think they are network printers.
>
> ...snip...

well from what you describe - evas will remove tonnes of your own code for the gfx side - it does the dithering, decompression etc. for you - mind u, you end up having to decode the imaging protocol into ARGB data for evas i guess. but you already have that code floating about - the advantage is you can now "think" in ARGB, just object layers and high-level operations, and let the canvas do the dirty work for you.

evas is not the only thing of its kind around - but its competition amazingly is fairly behind in terms of either speed, abilities or quality - and almost always a combination of them (and almost always the competition is a fraction of the speed in trying to do the same things) - so if you looked at competing canvases it'd be interesting to know what you looked at and how it compares.

i know i can improve speed more in evas - there's a few more tricks i can pull in the ram vs speed tradeoff wars. i can futz with ARGB formatting and get speed for dest-alpha targets, cache pre-scaled images, and maybe take a new look at the scaling code - but i'm not sure i can get massive speedups (ie > 50%) without hardware acceleration :( to date that's the worst thing entirely, as any subsystem to do hw accel has its own set of problems, from stability to speed (the software fallbacks are so awful it's a decelerator, 20-200 times slower), to limited hw support etc.

> Moving to Ecore has dramatically simplified some of our code, so we're
> very grateful. When the EFL-based version gets released, I'll contact
> LinuxDevices and get them to add a nice credit!

ooh - now that makes a lot of sense - it helps to have a bigger picture :)

-- 
------------- Codito, ergo sum - "I code, therefore I am" --------------
The Rasterman (Carsten Haitzler)    ra...@ra...
裸好多                              ra...@de...
Tokyo, Japan (東京 日本)
From: Simon P. <sim...@th...> - 2005-04-27 12:23:42
Carsten Haitzler (The Rasterman) wrote:

> i would make 2 flags. 1. a client count limiter (negative == unlimited,
> 0 == too busy to accept any clients atm, 1+ == max number of concurrent
> clients) and 2. an excess client accept policy: do they get instantly
> disconnected by ecore_con, or does ecore_con just stop calling accept()
> until client count < max client count (thus allowing the kernel to
> buffer)?

Is there a reason why ecore_con still uses the deprecated list stuff? The ecore_list_nodes() function would be handy here. I'll port it across to the new stuff if there's no obvious impediment to doing so.

What would be the preferred way to handle the fact that the new list functions don't use void* arguments? Change the type of "clients" in Ecore_Con_Server to be Ecore_List* (from Ecore_Con_Client*)? Or cast explicitly when calling the ecore_list_ functions?

> sooner or later clients will not be accepted anymore by the kernel
> either. the question is - where do you want this control?

I've looked into this. Ecore_Con is calling listen() with backlog=4096. This should set the number of kernel-queued connections to 4096 unless the kernel has a lower internal limit. The Linux limit appears to be 128, while "man 2 accept" says BSD's may be 5. Both of these limits are entirely acceptable from my application's point of view, and I will document the limitations in the code markup.

-- 
Simon Poole
www.appliancestudio.com
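For reference, the backlog behaviour Simon describes boils down to this tiny snippet (the 4096 mirrors ecore_con's listen() call; the kernel clamping is standard POSIX socket behaviour, not anything ecore-specific):

#include <sys/socket.h>

/* the kernel silently clamps the backlog argument to its internal ceiling
   (SOMAXCONN - 128 on the Linux of this era, tunable via
   /proc/sys/net/core/somaxconn; possibly as low as 5 on some BSDs), so a
   large value just means "queue as many pending connections as you can" */
int
start_listening(int fd)
{
   return listen(fd, 4096);
}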
From: Simon P. <sim...@th...> - 2005-04-27 12:35:05
Simon Poole wrote:

> What would be the preferred way to handle the fact that the new list
> functions don't use void* arguments? Change the type of "clients" in
> Ecore_Con_Server to be Ecore_List* (from Ecore_Con_Client*)? Or cast
> explicitly when calling the ecore_list_ functions?

Scratch that question. I've looked at how the new lists are handled and seen that casting isn't going to work! The right way would be to change the type in the Ecore_Con_Server struct and follow through with the required changes.

-- 
Simon Poole
www.appliancestudio.com
From: Simon P. <sim...@th...> - 2005-04-27 13:47:53
Attachments:
ecore-0.9.9.004-ecore_con-newlists.patch
Simon Poole wrote:

> Is there a reason why ecore_con still uses the deprecated list stuff?
> The ecore_list_nodes() function would be handy here. I'll port it
> across to the new stuff if there's no obvious impediment to doing so.

Here's a patch against ecore-0.9.9.004 that just brings ecore_con onto the new Ecore_List code. It seems to work fine, but I'd appreciate someone sanity-checking it.

-- 
Simon Poole
www.appliancestudio.com
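For readers unfamiliar with the new API, the ported code ends up looking roughly like this. A sketch only - the function names are assumed from the 0.9.9-era Ecore_Data headers, and of these only ecore_list_nodes() is explicitly confirmed in this thread:

#include <Ecore_Data.h>
#include <Ecore_Con.h>

/* sketch: tracking clients with the new generic Ecore_List */
void
list_demo(Ecore_Con_Client *cl)
{
   Ecore_List *clients;
   Ecore_Con_Client *c;

   clients = ecore_list_new();
   ecore_list_append(clients, cl);

   /* how many clients are connected? (the call wanted for the limit check) */
   if (ecore_list_nodes(clients) >= 1)
     {
        /* iterate: position at the head, then walk forward */
        ecore_list_goto_first(clients);
        while ((c = ecore_list_next(clients)))
          {
             /* ... inspect c ... */
          }
     }
   ecore_list_destroy(clients);
}

Since the list stores void* payloads, _Ecore_Con_Server's "clients" field becomes an Ecore_List* rather than an inline-linked Ecore_Con_Client* - which is exactly why casting alone couldn't work.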
From: Carsten H. (T. R.) <ra...@ra...> - 2005-04-29 04:54:14
On Wed, 27 Apr 2005 13:21:25 +0100 Simon Poole <sim...@th...> babbled:

> Is there a reason why ecore_con still uses the deprecated list stuff?
> The ecore_list_nodes() function would be handy here. I'll port it
> across to the new stuff if there's no obvious impediment to doing so.

it just hasn't been moved over yet. also, the old stuff does work quite well and is simple to use :)

> What would be the preferred way to handle the fact that the new list
> functions don't use void* arguments?

your next mail answers that :)

> I've looked into this. Ecore_Con is calling listen() with backlog=4096.
> This should set the number of kernel-queued connections to 4096 unless
> the kernel has a lower internal limit. The Linux limit appears to be
> 128, while "man 2 accept" says BSD's may be 5.

yeah - i just went "all out" there :) ie "kernel - queue as much as you can - i'll get back to you asap" :)

> Both of these limits are entirely acceptable from my application's point
> of view, and I will document the limitations in the code markup.

sounds good. :)

-- 
------------- Codito, ergo sum - "I code, therefore I am" --------------
The Rasterman (Carsten Haitzler)    ra...@ra...
裸好多                              ra...@de...
Tokyo, Japan (東京 日本)
From: Simon P. <sim...@th...> - 2005-04-27 15:35:26
Attachments:
ecore-0.9.9.004-ecore_con-client-limit.patch
Simon Poole wrote:

> Carsten Haitzler (The Rasterman) wrote:
>
> > i would make 2 flags. 1. a client count limiter (negative ==
> > unlimited, 0 == too busy to accept any clients atm, 1+ == max number
> > of concurrent clients) and 2. an excess client accept policy: do they
> > get instantly disconnected by ecore_con, or does ecore_con just stop
> > calling accept() until client count < max client count (thus allowing
> > the kernel to buffer)?

And here's the patch that implements this behaviour. It requires my earlier patch from this thread to be applied first (ecore-0.9.9.004-ecore_con-newlists.patch). Comments please.

To use, call ecore_con_server_client_limit_set(...), usually straight after ecore_con_server_add(...). All is explained in the code:

/**
 * Sets a limit on the number of clients that can be handled concurrently
 * by the given server, and a policy on what to do if excess clients try to
 * connect.
 * Beware that if you set this once ecore is already running, you may
 * already have pending CLIENT_ADD events in your event queue. Those
 * clients have already connected and will not be affected by this call.
 * Only clients subsequently trying to connect will be affected.
 * @param svr                   The given server.
 * @param client_limit          The maximum number of clients to handle
 *                              concurrently. -1 means unlimited (default).
 *                              0 effectively disables the server.
 * @param reject_excess_clients Set to 1 to automatically disconnect excess
 *                              clients as soon as they connect if you are
 *                              already handling client_limit clients. Set
 *                              to 0 (default) to just hold off on the
 *                              accept() system call until the number of
 *                              active clients drops. This causes the
 *                              kernel to queue up to 4096 connections (or
 *                              your kernel's limit, whichever is lower).
 * @ingroup Ecore_Con_Server_Group
 */
void
ecore_con_server_client_limit_set(Ecore_Con_Server *svr, int client_limit, char reject_excess_clients);

-- 
Simon Poole
www.appliancestudio.com
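A minimal usage sketch for the single-client case discussed in this thread - the server name is made up, and the ecore_con_server_add() arguments and ECORE_CON_REMOTE_SYSTEM type follow the 0.9.9-era API:

#include <Ecore.h>
#include <Ecore_Con.h>

int
main(void)
{
   Ecore_Con_Server *svr;

   ecore_init();
   ecore_con_init();

   /* listen on TCP port 9100 (the port from Simon's printer protocol) */
   svr = ecore_con_server_add(ECORE_CON_REMOTE_SYSTEM, "sign", 9100, NULL);
   if (!svr) return 1;

   /* one client at a time; excess connections stay queued in the kernel
      rather than being rejected (reject_excess_clients == 0) */
   ecore_con_server_client_limit_set(svr, 1, 0);

   ecore_main_loop_begin();

   ecore_con_shutdown();
   ecore_shutdown();
   return 0;
}

Passing 0 for reject_excess_clients is what gives the Windows print drivers the silent kernel-side queuing Simon wants, instead of a visible connection error and retry.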
From: Carsten H. (T. R.) <ra...@ra...> - 2005-04-29 04:54:11
On Wed, 27 Apr 2005 16:33:49 +0100 Simon Poole <sim...@th...> babbled:

> And here's the patch that implements this behaviour. It requires my
> earlier patch from this thread to be applied first
> (ecore-0.9.9.004-ecore_con-newlists.patch). Comments please.
>
> ...snip...

looks fine (except for not bracketing if conditions), ie

  if (a == b && c < d)

should really be

  if ((a == b) && (c < d))

imho - because it makes it unambiguous as to what the coder intended. when you end up with a big string of &&'s and ||'s, and things like a + b - c / 2 ^ g < b, you start to like the extra ()'s :)

anyway - other than that - looks fine :) i'll mirror the api additions in ecore_ipc :)

-- 
------------- Codito, ergo sum - "I code, therefore I am" --------------
The Rasterman (Carsten Haitzler)    ra...@ra...
裸好多                              ra...@de...
Tokyo, Japan (東京 日本)