From: Hesham <hes...@gm...> - 2008-02-19 20:38:48
Hi Yuan,

Sorry for getting back to this late.

On 27/01/2008, Yuan Xu <xuy...@gm...> wrote:
> 1. To get precise time, use RealTime-Linux as the platform. [It will
> narrow the OS platform.]

Apparently the new kernels already have high-resolution timers (they only have problems with old CPUs). I can get nanosecond resolution with this program on my laptop (link with -lrt for clock_gettime):

#include <ctime>
#include <iostream>

using namespace std;

int main()
{
    float temp;
    struct timespec ts1;
    struct timespec ts2;

    clock_gettime( CLOCK_REALTIME, &ts1 );

    // busy loop, just something to time
    for (size_t i = 0; i < 50000000; i++)
        temp = 10.0 + 0.1;

    clock_gettime( CLOCK_REALTIME, &ts2 );

    // convert both timestamps to nanoseconds and print the difference
    long long int count1 = static_cast<long long>(1000000000UL) * static_cast<long long>(ts1.tv_sec)
                         + static_cast<long long>(ts1.tv_nsec);
    long long int count2 = static_cast<long long>(1000000000UL) * static_cast<long long>(ts2.tv_sec)
                         + static_cast<long long>(ts2.tv_nsec);
    cout << (count2 - count1) << endl;

    return 0;
}

Also, there is no need to worry about the new multi-core CPUs: their cores are synchronized (at least that is the case with Intel, according to the comments in the kernel code).

I did some tests with nanosleep() too. The error is around 0.5 msec (much less than the 10 msec you mentioned), so we may be able to use it if that error is acceptable (to me it is). A sketch of this kind of test is at the end of this mail.

Bests,
Hesham

On 27/01/2008, Yuan Xu <xuy...@gm...> wrote:
> Hi Hesham and all,
>
> > After profiling the server I saw SDL_GetTicks() is taking around 25% of
> > the time (in the tests I used 7 robots and the single-thread mode). To
> > make a long story short, I saw this loop (while) in simulationserver.cpp:
> > ....
> > Since SimulationServer::Step() takes care of this case
> > (int(mSumDeltaTime*100) < int(mSimStep*100)) I thought that the while in
> > Run() is not necessary. So after commenting it out, SDL_GetTicks() takes
> > less than 10% of the time:
>
> The phenomenon is reasonable.
> Since the time queries are needed for timing in single-thread mode, that
> percentage of queries means your machine was only lightly loaded (not too
> busy); I guess your robots did not do anything. You will notice that the
> percentage of queries drops when the robots collide...
> But if you remove that code, the timer becomes ineffective: if the robots
> collide, the server will run very, very slowly; otherwise the simulation
> time elapses very quickly.
> You may ask why a query is used to get the time; OK, I do not think it is
> a good idea.
> The server cycle is 20 ms, but the time precision of Linux is only 10 ms,
> and functions such as "sleep" cannot help with that.
> In fact, SDL_GetTicks() is used to get (seemingly) more precise time.
> And it is possible that SDL causes the input problem in multi-threaded
> mode.
>
> I have some ideas for this:
> 1. To get precise time, use RealTime-Linux as the platform. [It will
> narrow the OS platform.]
> 2. Use a new thread for timing. The timing is in the InputControl thread
> in the current multi-thread implementation, but a query is also used
> (it can be improved a little). [It may make the timing thread busy, and
> cannot guarantee precision.]
> 3. Use another timer instead of SDLTimer, such as boost::xtime. [It needs
> more investigation; see the sketch at the end of this mail.]
>
> > BTW, the other day I came across the multi-threaded version of ODE by
> > Intel.
>
> Sounds interesting.
>
> > I think it's worth a try if Yuan has time to help :-) There is something
> > wrong with the multi-threaded mode of the server on my system. So first
> > I need to find out what the problem is.
>
> I am free these days. Give me some hints about your problem.
>
> --
> Best wishes!
>
> Xu Yuan
> School of Automation
> Southeast University, Nanjing, China
>
> mail: xuy...@gm...
>       xy...@ya...
> web: http://xuyuan.cn.googlepages.com
> --------------------------------------------------
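P.S. A minimal sketch of the kind of nanosleep() test I mean (illustrative values, not the exact program I ran; the 20 ms request just mirrors the server cycle; again, link with -lrt):

#include <time.h>
#include <iostream>

using namespace std;

int main()
{
    // request a 20 ms sleep (illustrative value, matching the server cycle)
    struct timespec req = { 0, 20 * 1000 * 1000 };
    struct timespec ts1, ts2;

    clock_gettime( CLOCK_REALTIME, &ts1 );
    nanosleep( &req, 0 );
    clock_gettime( CLOCK_REALTIME, &ts2 );

    // actual time slept, in nanoseconds
    long long slept = static_cast<long long>(1000000000UL) * (ts2.tv_sec - ts1.tv_sec)
                    + (ts2.tv_nsec - ts1.tv_nsec);

    // overshoot relative to the request, printed in milliseconds
    cout << (slept - 20 * 1000 * 1000) / 1000000.0 << " ms" << endl;
    return 0;
}

On my machine the printed overshoot stays around half a millisecond, which is where the 0.5 msec figure above comes from.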
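P.P.S. Regarding option 3 in Yuan's list: here is a rough, untested sketch of how the same elapsed-time reading could look with boost::xtime instead of SDL_GetTicks(). It assumes Boost.Thread headers are installed; boost::TIME_UTC is Boost's clock identifier:

#include <boost/thread/xtime.hpp>
#include <iostream>

using namespace std;

int main()
{
    boost::xtime xt1, xt2;

    // read the clock before and after the work to be timed
    boost::xtime_get( &xt1, boost::TIME_UTC );
    // ... work to be timed goes here ...
    boost::xtime_get( &xt2, boost::TIME_UTC );

    // elapsed time in nanoseconds
    long long ns = static_cast<long long>(1000000000UL) * (xt2.sec - xt1.sec)
                 + (xt2.nsec - xt1.nsec);
    cout << ns << " ns" << endl;
    return 0;
}

Whether its resolution on Linux is actually better than SDL_GetTicks() is exactly what would need the further investigation Yuan mentioned.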