From: Toby C. <tco...@pl...> - 2007-12-02 05:14:14
|
Hi,

There are some inconsistencies in the data modes, particularly when it comes to driver-to-driver comms. Drivers on the same server effectively post directly to each other's queues; as far as I can see, this bypasses the message semantics altogether (effectively giving PUSH mode). Remote driver connections are just like any other client and therefore default to PULL mode, which is unfortunately not so useful for drivers. This raises a few questions:

1) Should comms within a server behave the same as comms between servers? (I believe that is a yes.)
1a) How do we get push/pull mode to work between drivers on the same server?
2) Can we make the default mode PUSH, with clients switching to PULL by default (as opposed to the server defaulting to PULL)?

Toby |
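The PUSH/PULL asymmetry described above can be sketched with a toy model. This is illustrative only (the class and method names are invented, not Player's actual API): in PUSH mode the server streams data to the client as drivers produce it, while in PULL mode messages sit in a per-client queue until the client's read requests a round of data.

```python
class ToyServer:
    """Toy model of Player-style per-client delivery. Illustrative only."""

    def __init__(self):
        self.clients = []

    def subscribe(self, client):
        self.clients.append(client)

    def publish(self, msg):
        # A driver posts new data; delivery depends on each client's mode.
        for c in self.clients:
            if c.mode == "PUSH":
                c.inbox.append(msg)    # streamed to the client immediately
            else:
                c.queued.append(msg)   # held server-side until the client asks


class ToyClient:
    def __init__(self, server, mode="PULL"):
        self.mode = mode
        self.inbox = []    # what the client has actually received
        self.queued = []   # stands in for this client's server-side queue
        server.subscribe(self)

    def read(self):
        # In PULL mode the read itself triggers a data request, mirroring
        # how a client library's Read() invisibly asks for a round of data.
        if self.mode == "PULL":
            self.inbox.extend(self.queued)
            self.queued.clear()
        return self.inbox
```

In this sketch, a remote driver subscribed in PULL mode sees nothing until it explicitly reads, which is exactly why the default is awkward for driver-to-driver links.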
From: Paul O. <new...@ki...> - 2007-12-02 10:45:12
|
On Sun, 2 Dec 2007, Toby Collett wrote:

> 1) should comms within a server behave the same as between (I believe
> that is a Yes),
> 1a) How do we get push/pull mode to work between drivers on the same server
> 2) Can we make the default mode PUSH, with clients by default switching
> to PULL (as opposed to the server defaulting to PULL).

I'm voting for a pull=1|0 option on the server side, the same as playerv already has. Those who plan to run a player server which will be used by another player server will have to turn the default pull mode off. I would also like to start the player server on embedded systems the same way: for some (yet unknown) reason, player performance degrades really badly on tiny embedded systems when clients are using pull mode. I wish I could change the default mode back to push in those circumstances.

Cheers,
Paul |
From: Brian G. <br...@ge...> - 2007-12-05 20:29:08
|
On Dec 1, 2007, at 9:13 PM, Toby Collett wrote:

> 1) should comms within a server behave the same as between (I believe
> that is a Yes),

Yes, and I would go further to say that comms between any two entities, whether client or driver, whether in the same process space or not, should be the same.

> 1a) How do we get push/pull mode to work between drivers on the same server

Ideally, we'd hide it in the messaging internals, so that a driver never needs to consider it. This is what we do on the client side: if you're in PULL mode, then Read() will invisibly request a round of data.

> 2) Can we make the default mode PUSH, with clients by default switching
> to PULL (as opposed to the server defaulting to PULL).

Is the implication that PUSH is best for in-process messaging, and PULL is best for inter-process messaging? I can see that being true, but I can also imagine a computationally intensive (or otherwise slow) driver that wants to read from other drivers (which may be in-process) in PULL mode. So we'd need to support both modes.

Now I'm wondering whether we even need PUSH and PULL. Can we get nearly the same diversity of useful behavior with far less complexity by only supporting PUSH, and manipulating message replacement rules?

brian. |
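The replacement-rules alternative Brian raises can be sketched as a queue that, for marked (type, subtype) pairs, overwrites the queued message in place rather than appending. This is a hypothetical illustration, not the real Player MessageQueue implementation:

```python
class ReplaceQueue:
    """Sketch of a message queue with per-(type, subtype) replacement
    rules, as an alternative to distinct PUSH/PULL modes. Illustrative
    only -- not Player's actual MessageQueue."""

    def __init__(self):
        self.messages = []    # ordered queue of (key, payload) tuples
        self.replace = set()  # (type, subtype) pairs marked "replace"

    def set_replace(self, msg_type, subtype):
        self.replace.add((msg_type, subtype))

    def push(self, msg_type, subtype, payload):
        key = (msg_type, subtype)
        if key in self.replace:
            # Overwrite any queued message with the same type/subtype,
            # so a slow reader only ever sees the freshest data and the
            # queue cannot back up with stale sensor messages.
            for i, (k, _) in enumerate(self.messages):
                if k == key:
                    self.messages[i] = (key, payload)
                    return
        self.messages.append((key, payload))

    def pop(self):
        return self.messages.pop(0) if self.messages else None
```

With only this mechanism, a fast producer in PUSH mode occupies at most one queue slot per data type, which is most of what PULL mode buys today.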
From: Geoffrey B. <gb...@ki...> - 2007-12-05 22:50:30
|
Well, let's not forget why PUSH and PULL were added back into Player 2 when they were originally removed: to make the message replacement rules actually work on operating systems with TCP buffers. If we can fix that issue some other way, we don't need PUSH and PULL. Given the case of OS TCP buffers not absorbing queued messages, the actual behaviour we want is entirely achievable with message replacement rules.

Geoff

Brian Gerkey wrote:
> Now I'm wondering whether we even need PUSH and PULL. Can we get
> nearly the same diversity of useful behavior with far less complexity
> by only supporting PUSH, and manipulating message replacement rules?
From: Eagle J. <ea...@ne...> - 2007-12-06 01:04:33
|
What if the TCP urgent flag (the MSG_OOB option to send()) were set when message replacement is turned on?

When the client receives SIGURG, it checks how much unread data is in the buffer, and stores that along with the message. (One unread counter and one set-aside message need to be maintained per message type/subtype.) As it continues to read, it decrements the unread count; when it gets to where the urgent message was received, it then goes ahead and processes that message. However, if an urgent message is received while another of the same type/subtype is being held, the new message is put in its place, without resetting the unread data count.

An easier approach might be to have the client turn on the inline flag for urgent data and maintain a message queue. When the client is looking for a message, it first checks the queue; if empty, it goes to the network. However, when SIGURG is received, the client repeatedly calls read, storing each message in the queue and following the replacement rules until it clears the urgent data.

-Eagle

On Dec 5, 2007 2:50 PM, Geoffrey Biggs <gb...@ki...> wrote:
> Well, let's not forget why PUSH and PULL were added back into Player 2
> when they were originally removed: to make the message replacement rules
> actually work on operating systems with TCP buffers. If we can fix that
> issue some other way, we don't need PUSH and PULL.
>
> -------------------------------------------------------------------------
> SF.Net email is sponsored by: The Future of Linux Business White Paper
> from Novell. From the desktop to the data center, Linux is going
> mainstream. Let it simplify your IT future.
> http://altfarm.mediaplex.com/ad/ck/8857-50307-18918-4
> _______________________________________________
> Playerstage-developers mailing list
> Pla...@li...
> https://lists.sourceforge.net/lists/listinfo/playerstage-developers |
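Eagle's first scheme (unread counter plus a set-aside urgent message) can be simulated without sockets or signals. The sketch below is one reading of that bookkeeping, with invented names; `receive_urgent` stands in for the SIGURG/MSG_OOB notification:

```python
class UrgentReader:
    """Sketch of the urgent-message bookkeeping described above: when an
    urgent message arrives, remember how many ordinary messages are still
    unread, set the urgent message aside, and deliver it only once the
    backlog in front of it is consumed. A newer urgent message replaces
    the held one without resetting the count. Illustrative only."""

    def __init__(self):
        self.backlog = []          # ordinary unread messages, arrival order
        self.held = None           # the set-aside urgent message
        self.unread_at_urgent = 0  # backlog length when urgent arrived

    def receive(self, msg):
        self.backlog.append(msg)

    def receive_urgent(self, msg):
        if self.held is None:
            # Record how much ordinary data precedes the urgent message.
            self.unread_at_urgent = len(self.backlog)
        # Replacement rule: drop the held message, keep the original count.
        self.held = msg

    def read(self):
        if self.held is not None and self.unread_at_urgent == 0:
            msg, self.held = self.held, None
            return msg
        if self.backlog:
            if self.held is not None:
                self.unread_at_urgent -= 1
            return self.backlog.pop(0)
        return None
```

Note the replacement case: a second urgent message of the same kind takes the held slot but is still delivered at the position of the first, so ordering relative to the ordinary stream is preserved.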
From: Toby C. <tco...@pl...> - 2007-12-06 01:15:54
|
Just to add another option to the discussion: an alternative is, rather than PUSH/PULL, to use something more along the lines of XON/XOFF, i.e. flow control. We start in PUSH mode and can suspend transfer (and therefore utilise the replace rules) with flow control messages. The downside is that it takes two requests to get a bunch of data equivalent to PULL mode; the upside is that there is only one mode as far as the server is concerned...

Toby

On 06/12/2007, Eagle Jones <ea...@ne...> wrote:
> What if the TCP urgent flag (MSG_OOB option to send()) were set when
> message replacement is turned on?
>
> When the client receives SIGURG, it checks how much unread data is in
> the buffer, and stores that along with the message.

--
This email is intended for the addressee only and may contain privileged and/or confidential information |
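The XON/XOFF idea above can be sketched as a server with a single PUSH mode plus a suspend switch: while suspended, replacement rules collapse stale data per message key, and resuming flushes whatever survived. Names here are illustrative, not an actual Player wire protocol:

```python
class FlowControlledServer:
    """Sketch of flow-controlled PUSH: the server always streams, but the
    client can suspend delivery (XOFF); while suspended, only the freshest
    message per key is kept, and XON flushes and resumes. Illustrative
    only -- not a real Player mechanism."""

    def __init__(self):
        self.flowing = True
        self.pending = {}    # latest payload per message key while XOFF
        self.delivered = []  # stands in for the client's socket

    def xoff(self):
        self.flowing = False

    def xon(self):
        # Flush whatever the replacement rules left behind, then resume.
        self.delivered.extend(self.pending.values())
        self.pending.clear()
        self.flowing = True

    def publish(self, key, payload):
        if self.flowing:
            self.delivered.append(payload)
        else:
            # Suspended: keep only the freshest message for each key.
            self.pending[key] = payload
```

This shows the two-request cost Toby mentions: an XOFF followed later by an XON is needed to get one fresh batch, where PULL mode would use a single data request.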
From: Geoffrey B. <gb...@ki...> - 2007-12-06 23:06:07
|
The problem there is that message replacement rules are server-side, not client-side. While we could look at changing that, it is nice not to have to saturate the network buffer and rely on the client performing a read before it gets full.

Geoff

Eagle Jones wrote:
> What if the TCP urgent flag (MSG_OOB option to send()) were set when
> message replacement is turned on?
>
> When the client receives SIGURG, it checks how much unread data is in
> the buffer, and stores that along with the message. (One unread
> counter and one set-aside message need to be maintained per message
> type/subtype). As it continues to read, it decrements the unread count
> -- if it gets to where the urgent message was received, it then goes
> ahead and processes that message. |
From: Brian G. <br...@ge...> - 2007-12-05 20:30:29
|
On Dec 2, 2007, at 2:44 AM, Paul Osmialowski wrote:

> Also I would
> like to start player server on embedded systems the same way: for some
> (yet unknown) reason player performance runs down really badly on tiny
> embedded systems when clients are using pull mode. I wish I could
> change default mode back to push in mentioned circumstances.

While I haven't tested the PULL mode too much, I'm surprised by this. I was aiming to fix the age-old performance problem of overflowing queues, which caused data backup and terrible control. Paul, do you have some idea of where the slow-down in PULL mode is occurring?

brian. |
From: Paul O. <new...@ki...> - 2007-12-06 19:23:33
|
On Wed, 5 Dec 2007, Brian Gerkey wrote:

> While I haven't tested the PULL mode too much, I'm surprised by
> this. I was aiming to fix the age-old performance problem of
> overflowing queues, which caused data backup and terrible control.
> Paul, do you have some idea of where the slow-down in PULL mode is
> occurring?

Hi Brian,

I looked closer at it today, and what I found reminded me that I once knew the reason but must have forgotten it. The problem lies in playerv, which now uses PULL mode by default. When the link is slow (a bad wifi connection), the read call, which blocks in pull mode, blocks the whole process, including the GUI. When it's locked for some longer time, the GUI locks up completely and playerv has to be killed with SIGKILL. I tested this today: the worst case was when I moved the position commander far to the right to make the Roomba go forward; playerv then locked up completely, so the Roomba kept moving forward even after playerv was killed from another shell session (fortunately, the player server didn't die, and the bumplock code stopped the Roomba right after it hit the wall). This never happens when playerv is switched to PUSH mode.

Cheers,
Paul |
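The blocking-read problem Paul describes is the classic GUI-loop hazard: a blocking socket read stalls event handling. The usual fix is to peek first and only read when data is already waiting, e.g. by polling select() with a zero timeout. This sketch illustrates that idea; it is not libplayerc's actual peek/readifwaiting code:

```python
import select
import socket

def data_waiting(sock):
    """Return True if a read on `sock` would not block, by asking
    select() with a zero timeout. A GUI loop can call this each tick
    and skip the read entirely on a slow link, keeping the UI alive.
    Illustrative of the peek idea, not libplayerc's implementation."""
    readable, _, _ = select.select([sock], [], [], 0)
    return bool(readable)

# Usage sketch: inside a GUI event loop, only call a blocking read when
# data_waiting(sock) reports True; otherwise return to event handling.
```

With this pattern, a stalled wifi link costs the loop a zero-timeout poll per tick instead of an unbounded block.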
From: Toby C. <tco...@pl...> - 2007-12-06 19:38:53
|
Hi,

The client library was recently fixed so that peek/readifwaiting would work in PULL mode; does changing playerv to use this fix the problem?

Toby

On 07/12/2007, Paul Osmialowski <new...@ki...> wrote:
> The problem
> lies in the playerv which now uses PULL mode by default. When the link is
> slow (bad wifi connection) read call which is blocking in pull mode blocks
> whole process including GUI. When it's locked for some longer time, GUI
> locks totally and whole playerv must be killed by SIGKILL signal. |
From: Paul O. <new...@ki...> - 2007-12-06 20:14:52
|
On Fri, 7 Dec 2007, Toby Collett wrote:

> Hi,
> the client library was recently fixed so that peek/readifwaiting would work
> in PULL mode, does changing playerv to use this fix the problem?

I've used a CVS snapshot taken today, and I guess playerv makes use of it; the problem still occurs.

Paul |
From: Brian G. <br...@ge...> - 2007-12-07 02:14:04
|
I think that I've fixed the messaging system on HEAD. I've tried a variety of client-server and server-server setups, and things seem to work well.

Can folks give it a try and let me know how it goes?

Notable points:

- In-server queues default to PUSH, without replacement.
- TCP-mediated queues (both client-server and server-server) default to PUSH, with replacement, but libplayerc (and thus libplayerc++) automatically switches to PULL on connection. You can switch back to PUSH, and/or install your own replacement rules, if you like.
- playerv accepts a new option: -rate <hz>. If hz is positive, then playerv operates in PULL and polls the server at the desired rate. If hz is zero, then playerv switches to PUSH.
- vfh was broken, and is now fixed.
- amcl is not responding to SET_POSE requests. Debugging help here would be appreciated.

brian. |
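The -rate semantics above (positive hz: PULL mode polled at that frequency; zero: PUSH) boil down to simple period arithmetic. A hedged sketch, with invented function names rather than playerv's internals:

```python
import time

def poll_period(rate_hz):
    """Map a -rate argument to a polling period: a positive rate means
    PULL mode polled every 1/rate seconds; zero (or negative) means no
    polling loop at all, i.e. PUSH mode. Names are illustrative."""
    if rate_hz > 0:
        return 1.0 / rate_hz  # seconds between poll requests
    return None               # None: PUSH mode, the server streams

def poll_loop(rate_hz, poll_once, rounds):
    """Poll the server `rounds` times at the requested rate; `poll_once`
    stands in for one round of requesting and reading data."""
    period = poll_period(rate_hz)
    if period is None:
        return  # PUSH mode: delivery is driven by the server instead
    for _ in range(rounds):
        poll_once()
        time.sleep(period)
```

So "-rate 5.0" corresponds to one data request every 0.2 s, and "-rate 0" skips the loop entirely.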
From: Brian G. <br...@ge...> - 2007-12-07 02:44:22
|
On Dec 6, 2007, at 6:13 PM, Brian Gerkey wrote:

> - amcl is not responding to SET_POSE requests. Debugging help here
> would be appreciated

Ok, I found and fixed some bad pointer math in there. Seems to work now. Speak up if you find otherwise.

brian. |
From: Paul O. <new...@ki...> - 2007-12-10 16:47:27
|
Hi Brian,

I've taken a CVS snapshot today; unfortunately, not everything works as it should...

On Thu, 6 Dec 2007, Brian Gerkey wrote:

> Notable points:
>
> - In-server queues default to PUSH, without replacement.

That's working right.

> - TCP-mediated queues (both client-server and server-server) default
> to PUSH, with replacement, but libplayerc (and thus libplayerc++)
> automatically switches to PULL on connection. You can switch back to
> PUSH, and/or install your own replacement rules, if you like.

For some reason, server-server communication still doesn't work for me. I have started two player servers:

192.168.1.2:6888:

driver
(
  name "camerav4l"
  provides ["camera:1"]
  port "/dev/video0"
  source 0
  mode "RGB888"
  size [320 240]
)
driver
(
  name "cameracompress"
  provides ["camera:0"]
  requires ["camera:1"]
)

192.168.1.1:6969:

driver
(
  name "cameracompress"
  provides ["camera:1"]
  requires ["192.168.1.2:6888:camera:1"]
  alwayson 0
)

Then I connected a usual player client to 192.168.1.1:6969 (previously this client worked fine connected directly to 192.168.1.2:6888). The client program started to wait forever for incoming data. At the same time, the middle server, 192.168.1.1:6969, printed these messages:

listening on 6969
Listening on ports: 6969
accepted TCP client 0 on port 6969, fd 6
connected to: 192.168.1.2:6888
error : timed out reading response header from remote server
error : unable to subscribe to camera device
warning : subscription failed for device camera:1

Nothing interesting happened on 192.168.1.2; it just showed the message that a client connected and disconnected.

Something is still wrong with server-server communication.

> - playerv accepts a new option: -rate <hz>. If hz is positive, then
> playerv operates in PULL and polls the server at the desired rate.
> If hz is zero, then playerv switches to PUSH.

playerv now works fine; however, I didn't use the -rate option.

Cheers,
Paul |
From: Brian G. <br...@ge...> - 2007-12-10 19:49:58
|
On Dec 10, 2007, at 8:47 AM, Paul Osmialowski wrote:

> Something is still wrong with server-server communication.

Ok, it works on OS X, but not on Linux. I'm looking into it now.

> > - playerv accepts a new option: -rate <hz>. If hz is positive, then
> > playerv operates in PULL and polls the server at the desired rate.
> > If hz is zero, then playerv switches to PUSH.
>
> playerv now works fine, however I didn't use -rate option.

Good. If you don't give -rate, playerv assumes "-rate 5.0".

brian. |
From: Brian G. <br...@ge...> - 2007-12-11 02:24:53
|
On Dec 10, 2007, at 8:47 AM, Paul Osmialowski wrote:

> Something is still wrong with server-server communication.

I tracked down and fixed some bad mutex management in libplayertcp, along with bugs in the passthrough, vfh, and wavefront drivers.

Could folks give it another try?

I've been testing with Stage like so:

$ player worlds/simple.cfg

In a 2nd shell:

$ player worlds/wavefront-remote.cfg

In a 3rd shell:

$ playernav localhost:7000

This tests passthrough in addition to server-server comms.

brian. |
From: Paul O. <new...@ki...> - 2007-12-11 14:55:10
|
Brian,

Now it works fine, and I was able to complete the cameracompress tests. Unfortunately, one test case showed there's an easy-to-fix segmentation fault in the cameracompress driver. I've placed a bugfix on the tracker.

Cheers,
Paul

On Mon, 10 Dec 2007, Brian Gerkey wrote:

> I tracked down and fixed some bad mutex management in libplayertcp,
> along with bugs in the passthrough, vfh, and wavefront drivers.
>
> Could folks give it another try? |
From: Brian G. <br...@ge...> - 2007-12-11 16:32:25
|
On Dec 11, 2007, at 6:55 AM, Paul Osmialowski wrote:

> Now it works fine, and I was able to complete the cameracompress tests.
> Unfortunately, one test case showed there's an easy-to-fix segmentation
> fault in the cameracompress driver. I've placed a bugfix on the
> tracker.

Great, thanks.

brian. |