From: Joschka B. <jos...@am...> - 2008-05-16 01:05:53
Hi all,

a while ago, we started discussions about a new vision perceptor for the
agents which would be restricted in its field of view and tied to the
orientation of its parent body. Furthermore, we talked a bit about
extending the vision messages for the agents (including information about
the position of the hands and feet of other agents).

I've proposed these changes to the TC, and those members who have joined
the discussion were (mostly) in favor of their implementation for RC08
_if_ it can be done within the next two weeks. Therefore I'd like to ask
for volunteers for this task. If you think you could help with this,
please contact me as soon as possible.

Thanks a lot in advance!
Joschka
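To make the proposal concrete: an extended vision message with limb
information might look roughly like the sketch below. The exact tags and
structure were still to be decided at this point, so all names here
(rlowerarm, rfoot, etc.) and the numbers are only placeholders, with
(pol distance horizontal-angle vertical-angle) denoting polar coordinates
relative to the observer:

    (See (G1L (pol 9.88 -0.29 -1.49))
         (F1R (pol 16.04 -34.40 -5.87))
         (B (pol 2.50 -15.00 -2.00))
         (P (team teamRed) (id 2)
            (head      (pol 3.10 12.00  2.00))
            (rlowerarm (pol 3.15 14.00 -1.00))
            (llowerarm (pol 3.05 10.50 -1.20))
            (rfoot     (pol 3.20 13.00 -8.00))
            (lfoot     (pol 3.00 11.00 -8.10))))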
From: Yuan X. <xuy...@gm...> - 2008-05-16 02:48:46
Hi Joschka,

> a while ago, we started discussions about a new vision perceptor for
> the agents which would be restricted in its field of view and tied to
> the orientation of its parent body. Furthermore, we talked a bit
> about extending the vision messages for the agents (including
> information about the position of the hands and feet of other agents).

Nice ;-) Because of the limited bandwidth of the network, and since there
will be more robots, I think the vision perceptor should be restricted by:
- frequency: maybe one vision sense per 3-5 cycles is enough (it is
  already implemented, just set a parameter in the .rsg file),
- distance: a robot cannot be seen, or cannot be seen clearly (i.e. its
  arms and legs cannot be seen), if it is far away.
What do you think about it?

> I've proposed these changes to the TC, and those members who have
> joined the discussion were (mostly) in favor of their implementation
> for RC08 _if_ it can be done within the next two weeks. Therefore I'd
> like to ask for volunteers for this task. If you think you could help
> with this, please contact me as soon as possible.

That is a very tight schedule; would anyone like to try? If you are all
too busy, I would like to have a try.

--
Best wishes!

Xu Yuan
School of Automation
Southeast University, Nanjing, China
mail: xuy...@gm...
      xy...@ya...
web:  http://xuyuan.cn.googlepages.com
--------------------------------------------------
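To make the suggested distance restriction concrete, the server-side
decision could be as simple as a distance cutoff per observed robot. The
sketch below is only an illustration of the idea, not actual SimSpark
code; the names and threshold values are made up:

    #include <cmath>
    #include <iostream>

    // Illustrative thresholds -- made-up values, not actual server parameters.
    const double kMaxPlayerVisible = 40.0; // beyond this, another robot is not reported at all
    const double kMaxLimbsVisible  = 3.0;  // beyond this, only the body position is reported

    struct Vec3 { double x, y, z; };

    double Distance(const Vec3& a, const Vec3& b)
    {
        const double dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
        return std::sqrt(dx * dx + dy * dy + dz * dz);
    }

    // How much of another robot the perceptor reports, depending on distance.
    enum Visibility { NOT_VISIBLE, BODY_ONLY, BODY_AND_LIMBS };

    Visibility Classify(const Vec3& observer, const Vec3& other)
    {
        const double d = Distance(observer, other);
        if (d > kMaxPlayerVisible) return NOT_VISIBLE;
        if (d > kMaxLimbsVisible)  return BODY_ONLY;   // too far to make out arms and legs
        return BODY_AND_LIMBS;
    }

    int main()
    {
        const Vec3 me{0.0, 0.0, 0.5};
        const Vec3 opponent{5.0, 2.0, 0.5};
        std::cout << Classify(me, opponent) << "\n"; // distance ~5.4 m -> prints 1 (BODY_ONLY)
        return 0;
    }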
From: Joschka B. <jos...@am...> - 2008-05-16 04:40:22
Hi Yuan,

thanks for the quick reply :-)

On May 16, 2008, at 11:48 AM, Yuan Xu wrote:
>> a while ago, we started discussions about a new vision perceptor for
>> the agents which would be restricted in its field of view and tied to
>> the orientation of its parent body. Furthermore, we talked a bit
>> about extending the vision messages for the agents (including
>> information about the position of the hands and feet of other agents).
>
> Nice ;-) Because of the limited bandwidth of the network, and since
> there will be more robots, I think the vision perceptor should be
> restricted by:
> - frequency: maybe one vision sense per 3-5 cycles is enough (it is
>   already implemented, just set a parameter in the .rsg file),

A vision message every 4 cycles would mean 12.5 frames per second, right?
It's not very much, but somewhat realistic I'd say (anybody know the frame
rate of the Nao?). Let's start with that and increase the frame rate in
case there are problems (if the network and the server can handle it).

> - distance: a robot cannot be seen, or cannot be seen clearly (i.e. its
>   arms and legs cannot be seen), if it is far away.
> What do you think about it?

In principle it's a nice idea, but given our relatively small field size
(somewhere between 6m x 4m and 12m x 8m, to be decided), I'm not sure
whether this is realistic. Other opinions? Should we hold off on this
until we're playing on bigger fields?

>> I've proposed these changes to the TC, and those members who have
>> joined the discussion were (mostly) in favor of their implementation
>> for RC08 _if_ it can be done within the next two weeks. Therefore I'd
>> like to ask for volunteers for this task. If you think you could help
>> with this, please contact me as soon as possible.
>
> That is a very tight schedule; would anyone like to try?

Yes, unfortunately :-( But it seems necessary since this is quite a big
change, so teams need time to adapt.

One more thing: it was mentioned that information about the field lines
should be included in the vision messages if the field of vision is
restricted, since we only have a few flags for orientation. What do you
think about that?

> If you are all too busy, I would like to have a try.

That's great! Thanks a lot, Yuan. If there's anybody else who would like
to help, I'm sure Yuan wouldn't mind ;-)

Cheers,
Joschka
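(For reference, the 12.5 fps figure follows from the server's 20 ms
simulation cycle: one vision message every 4 cycles is one message per
4 x 20 ms = 80 ms, i.e. 1 / 0.08 s = 12.5 messages per second; every
3 cycles would give about 16.7 per second, every 5 cycles 10 per second.)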
From: Sander v. D. <sgv...@gm...> - 2008-05-17 22:22:27
Hey,

>> Nice ;-) Because of the limited bandwidth of the network, and since
>> there will be more robots, I think the vision perceptor should be
>> restricted by:
>> - frequency: maybe one vision sense per 3-5 cycles is enough (it is
>>   already implemented, just set a parameter in the .rsg file),
>
> A vision message every 4 cycles would mean 12.5 frames per second,
> right? It's not very much, but somewhat realistic I'd say (anybody
> know the frame rate of the Nao?). Let's start with that and increase
> the frame rate in case there are problems (if the network and the
> server can handle it).

I'm not sure yet whether I agree on this. On the one hand, I think it's
reasonable to say a Nao can't acquire images and process them into the
high-level information our vision perceptor gives at more than 12.5 fps.
But on the other hand, I think the 3D sim should be able to do things that
are not possible yet in the real leagues, and we can assume that in a few
years humanoid robots will be able to do this processing. However, if
network bandwidth really is running out, it is of course an important
point (although then switching to a far more efficient/binary
communication method may be a better solution). Does anybody have any data
on this, e.g. MB/s sent by the server?

>> - distance: a robot cannot be seen, or cannot be seen clearly (i.e.
>>   its arms and legs cannot be seen), if it is far away.
>> What do you think about it?
>
> In principle it's a nice idea, but given our relatively small field
> size (somewhere between 6m x 4m and 12m x 8m, to be decided), I'm not
> sure whether this is realistic. Other opinions? Should we hold off on
> this until we're playing on bigger fields?

The same holds here. I think a Nao with the current level of real-time
computer vision methods may have a hard time figuring out another agent's
pose over more than a few meters. However, a human player (the standard to
achieve by 2050) can do so from across a full-size football field. Though
if it is really necessary for bandwidth reasons, these are good
restrictions for now, which probably won't make too big a difference for
game play considering the current level and speed of games.

>>> I've proposed these changes to the TC, and those members who have
>>> joined the discussion were (mostly) in favor of their implementation
>>> for RC08 _if_ it can be done within the next two weeks. Therefore I'd
>>> like to ask for volunteers for this task. If you think you could help
>>> with this, please contact me as soon as possible.
>>
>> That is a very tight schedule; would anyone like to try?
>
> Yes, unfortunately :-( But it seems necessary since this is quite a
> big change, so teams need time to adapt.
>
> One more thing: it was mentioned that information about the field
> lines should be included in the vision messages if the field of vision
> is restricted, since we only have a few flags for orientation. What do
> you think about that?

It would be a nice extra, especially if noise is used again, and
definitely if the distance restriction suggested by Yuan is also applied
to goal posts and flags. But maybe not a necessary one; in the spheres age
teams didn't have problems doing localization without line information
either. In light of the tight schedule I think this can be at the bottom
of the vision TODO list.

Cheers,
Sander
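As a very rough way to answer the bandwidth question: the order of
magnitude can be estimated from the message size, the number of agents
and the vision rate. All numbers in the sketch below are assumptions for
illustration, not measurements of the server:

    #include <iostream>

    int main()
    {
        // Assumed figures, purely for illustration -- not measured server data.
        const double bytesPerMessage   = 2000.0; // one ASCII vision message per agent, assumed ~2 KB
        const int    agents            = 6;      // e.g. a 3 vs 3 game
        const double messagesPerSecond = 12.5;   // one vision message every 4 cycles of 20 ms

        const double bytesPerSecond = bytesPerMessage * agents * messagesPerSecond;
        std::cout << bytesPerSecond / (1024.0 * 1024.0) << " MB/s\n"; // ~0.14 MB/s under these assumptions
        return 0;
    }

Plugging in measured message sizes and the real number of agents would
give the figure Sander asks about.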