On Wed, 20 May 2009, Toby Collett wrote:
> Are you able to try a slightly different setup for testing purposes, where
> all the globalgoto drivers are in the same player server as stage, and
> leave the wavefront drivers in their own server? This will get rid of
> anything between global goto and the stage driver. If this behaves better,
> it suggests RemoteDriver is definitely the culprit.
I did it. No change. See this image: the trial shows that two robots
approached their targets properly, and one more is well on its way. We can
also see that one robot missed the target. Why only one robot, and why that
particular robot, is a mystery to me.
I've checked the debug messages: wavefront didn't have any additional
targets to go to, so it looks like globalgoto didn't stop its routine when
the robot was at the target (although, as I've verified once more, it
should keep sending a stop command for 1.5 s after the target is reached).
Finally the robot stopped and wavefront directed it to its proper target
(wavefront is not always that forgiving).
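For reference, the intended stop-window behaviour can be sketched like this (a minimal illustration assuming a periodic control cycle; the struct and method names are hypothetical, not GlobalGoto's real code):

```cpp
#include <cassert>

// Hypothetical sketch of the stop-window logic described above: once the
// target is reached, keep emitting zero-velocity (stop) commands until
// 1.5 s have elapsed, then go idle. Names are illustrative only.
struct StopWindow {
    static constexpr double kStopPeriod = 1.5;  // seconds of repeated stops
    double reached_at = -1.0;                   // time target was reached; -1 = not yet

    // Called once per control cycle with the current time and distance to
    // the goal. Returns true if a stop command should be sent this cycle.
    bool ShouldSendStop(double now, double dist_to_goal,
                        double tolerance = 0.1) {
        if (dist_to_goal > tolerance) {         // still travelling: reset window
            reached_at = -1.0;
            return false;
        }
        if (reached_at < 0.0) reached_at = now; // first cycle at the goal
        return (now - reached_at) <= kStopPeriod;
    }
};
```

If the routine really behaves this way, a robot overshooting the goal would still receive stops for 1.5 s, so a missed target suggests the commands are delayed or reordered elsewhere.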
> Also, if global goto is just a proportional controller, have you
> considered making it unthreaded? In this way it could only send a command
> to the underlying velocity device if it received a new position update or
> a new target. This is a separate change though, and it should not be
> needed for normal operation.
Good point. The Main() method of globalgoto calls this->InQueue->Wait(), so
it does not necessarily need to run in a separate thread. I'm thinking
about reimplementing this. However, I doubt it would be good if all
message-processing drivers were running in one (server) thread; parallel
processing is better in most cases.
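The blocking pattern in question can be illustrated with a stand-in queue (a generic condition-variable mailbox, not Player's actual MessageQueue class; it only shows why a Main() that blocks on Wait() does no work between messages and could instead be driven unthreaded by the server):

```cpp
#include <condition_variable>
#include <deque>
#include <mutex>
#include <utility>

// Stand-in for a driver message queue: the consumer blocks until a message
// arrives, mirroring the effect of this->InQueue->Wait() in Main().
template <typename Msg>
class BlockingQueue {
public:
    void Push(Msg m) {
        { std::lock_guard<std::mutex> lk(mu_); q_.push_back(std::move(m)); }
        cv_.notify_one();
    }
    // Equivalent of Wait() followed by a pop: blocks while the queue is empty.
    Msg WaitPop() {
        std::unique_lock<std::mutex> lk(mu_);
        cv_.wait(lk, [this] { return !q_.empty(); });
        Msg m = std::move(q_.front());
        q_.pop_front();
        return m;
    }
private:
    std::mutex mu_;
    std::condition_variable cv_;
    std::deque<Msg> q_;
};
```

Since the thread is idle except when a message is pending, the same processing could be invoked synchronously from the server thread on message arrival, at the cost of losing parallelism across drivers.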
(There may be a break of two or more days; I'm too busy now.)
> 2009/5/20 Paul Osmialowski <newchief@...>
>> On Wed, 20 May 2009, Toby Collett wrote:
>>> Okay, it is probably good to focus on this specific example for now,
>>> resolve that, and then see if it is the same problem everywhere.
>>> A few more questions:
>>> 1) Is GlobalGoto threaded or non-threaded?
>> It is threaded and designed for the new SVN-trunk API.
>>> 2) Is global goto running in the same server as stage in your tests?
>> Typically it is two different Player instances running on the same
>> machine. However, I'm also using it in one Player instance together with
>> Stage, and also on different Player instances running on different
>> machines (with different loads).
>>> My understanding of the code (which is not always 100% correct, of
>>> course) is as follows. Stage runs unthreaded, which means it updates in
>>> the main server thread. In theory, when the server calls update for
>>> Stage, it processes all the messages that were on the Stage incoming
>>> queue at that time. This means that if there are three velocity
>>> commands, it should process them one after the other, which *should*
>>> lead to only the most recent being seen in the next Stage update step.
>> What I'm worried about is what happens (in Stage) when one command is
>> being processed and a few new commands arrive at the same time. Are they
>> queued? When processing of the command is finished, which new command is
>> going to be processed: the first in the queue or the most recently
>> arrived one? For me it would be good if only the most recently arrived
>> one were processed (that does not mean it would be a good solution for
>> everyone else, which is why I've suggested adding a new option to the
>> configuration file; however, I don't know what impact it would have on
>> Stage).
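The "most recently arrived wins" policy suggested here could be sketched as a single-slot mailbox (an illustration of the proposed behaviour, not Stage's actual queue implementation):

```cpp
#include <cassert>
#include <optional>

// Sketch of a "latest command wins" policy: instead of a FIFO queue, keep
// a single slot that every new velocity command overwrites, so stale
// commands are never executed. Illustrative only (C++17 for std::optional).
template <typename Cmd>
class LatestOnly {
public:
    void Push(const Cmd& c) { slot_ = c; }  // a newer command replaces an older one
    std::optional<Cmd> Take() {             // consume the pending command, if any
        std::optional<Cmd> c = slot_;
        slot_.reset();
        return c;
    }
private:
    std::optional<Cmd> slot_;
};
```

With such a slot, three velocity commands arriving between two Stage updates would leave only the last one to be executed, which is exactly the behaviour wished for above.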
>>> Obviously, with the behaviour you describe, this is not what is
>>> happening. This means either something is more complicated in your
>>> setup, or something in my understanding is wrong.
>> I'll do more investigation. In my simulation I'm running five robots at
>> a time, so there are seven Player instances running: one for Stage, five
>> more - started by a shell script - each with wavefront and GlobalGoto,
>> and one more instance with a complex team controller which subscribes to
>> and sends goal requests to those five instances running wavefront with
>> GlobalGoto. The reason why wavefront and GlobalGoto are in separate UNIX
>> processes is simple: wavefront performs much better for a group of
>> robots when started that way. As a result, some robots go to the target
>> properly (and stop there) while other robots stop too far from the
>> target, which confuses wavefront badly and affects the higher layers of
>> the whole thing (I suspect that either the stop command from GlobalGoto
>> arrived too late or it was too far back in the queue; the first case is
>> not very likely, as the robots are moving rather slowly). One more
>> detail: GlobalGoto passes stop velocity commands from wavefront through
>> to the underlying robot device - this also stops and cancels the
>> GlobalGoto routine until a new position command is issued by wavefront.
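The pass-through behaviour described above might be sketched like this (the names and types are hypothetical; only the forward-and-cancel logic is taken from the description):

```cpp
#include <cassert>

// Illustrative velocity command: linear and angular speed.
struct Cmd { double vx; double va; };

// Sketch of GlobalGoto's described behaviour: a stop (zero-velocity)
// command from wavefront is forwarded to the robot AND cancels the
// goal-seeking routine; a new position target re-arms it. Names are
// placeholders, not the real driver code.
class GotoRoutine {
public:
    bool active() const { return active_; }
    void NewTarget() { active_ = true; }  // a position command re-arms the loop
    // Returns the command actually sent to the underlying device.
    Cmd HandleVelocity(const Cmd& c) {
        if (c.vx == 0.0 && c.va == 0.0) active_ = false;  // stop cancels routine
        return c;                                          // always passed through
    }
private:
    bool active_ = false;
};
```

Under this model, a late or queue-delayed stop command would leave the routine active longer than intended, which is consistent with the symptom described.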
>>> I am sure we can get to the bottom of this.
>> I hope so.
>>> 2009/5/20 Paul Osmialowski <newchief@...>
>>>>> The queue can grow rapidly; GlobalGoto is an open loop
>>>>> controller which works in a read->think->act loop, and each loop turn
>>>>> results in a new velocity command.
>>>> I meant closed loop controller, of course. Sorry, I'm not a native
>>>> English speaker; such mistakes happen to me.
>>>> Crystal Reports - New Free Runtime and 30 Day Trial
>>>> Check out the new simplified licensing option that enables
>>>> unlimited royalty-free distribution of the report engine
>>>> for externally facing server and web deployment.
>>>> Playerstage-developers mailing list
>>> This email is intended for the addressee only and may contain privileged
>>> and/or confidential information