From: Yuan X. <xuy...@gm...> - 2008-02-20 07:44:10
Hi Hesham,

Thanks for your comments and code. In fact, SDL_GetTicks() calls
clock_gettime(CLOCK_MONOTONIC, &now) if it is available [1]. But the clocks
are different: SDL uses CLOCK_MONOTONIC, while your example uses
CLOCK_REALTIME. I am not sure which is better, since clock_getres() reports
the same resolution for both clocks on my system (SuSE 10.2): (0, 4000250).

You mentioned that the nanosleep() error is around 0.5 msec, but I think that
comes down to the same issue as the 10 msec precision. I have done a test
with different sleeps: usleep, boost::thread::sleep and nanosleep. The
results show that they are nearly the same (all measured delays in ms):

Input(ms)  boost     usleep    nanosleep   gettimeofday   clock_gettime
0          4         4         4.03694     0.000556       0.000409
1          4.01191   4.0041    4.00091     1.00193        1.00087
2          4.044     4.01201   4.02362     2.00263        2.00181
3          8.03601   8.08801   8.07893     3.00388        3.01715
4          8.00801   7.9996    8.01276     4.00417        4.00744
5          8.46001   12.0281   10.2573     5.00107        5.0015
6          8.02001   8.02391   8.2374      6.04204        6.01252
7          12.024    12.008    12.0635     7.00603        7.01054
8          12.052    12.532    12.0045     8.0039         8.00347
9          13.8839   12.056    12.0787     9.04589        9.01147

The program I used for the test is attached; you can compile it with
"g++ test_timer.cpp -lrt -lboost_thread".

I noticed that SDL_Delay() is implemented with a while loop (also see [1]),
which is similar to our current code. The last two columns of the table above
are the results of this 'while loop' method using gettimeofday() and
clock_gettime().

My conclusion: if we use any kind of 'sleep', no matter which one, the
precision cannot be guaranteed; if we use the 'while loop', the precision is
much better (this is also tested in my program), but it costs a lot of
computation.

[1] http://www.libsdl.org/cgi/viewvc.cgi/trunk/SDL/src/timer/unix/SDL_systimer.c?view=markup
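For reference, the core of the test is roughly the following (a simplified
sketch, not the attached test_timer.cpp itself): it times nanosleep() against
the busy-wait 'while loop' method, both measured with
clock_gettime(CLOCK_MONOTONIC):

// sleep_precision_sketch.cpp -- simplified illustration only
// compile with: g++ sleep_precision_sketch.cpp -lrt
#include <time.h>
#include <iostream>

// elapsed time between two timespecs, in milliseconds
static double elapsed_ms(const timespec& a, const timespec& b)
{
    return (b.tv_sec - a.tv_sec) * 1000.0 + (b.tv_nsec - a.tv_nsec) / 1e6;
}

int main()
{
    for (int ms = 0; ms < 10; ++ms)
    {
        timespec start, end;

        // 1) nanosleep: ask the kernel to sleep for 'ms' milliseconds
        timespec req = { 0, ms * 1000000L };
        clock_gettime(CLOCK_MONOTONIC, &start);
        nanosleep(&req, 0);
        clock_gettime(CLOCK_MONOTONIC, &end);
        std::cout << ms << " ms requested: nanosleep "
                  << elapsed_ms(start, end) << " ms, ";

        // 2) busy-wait ("while loop" method): poll the clock until the
        //    requested time has passed -- precise, but burns CPU
        clock_gettime(CLOCK_MONOTONIC, &start);
        do {
            clock_gettime(CLOCK_MONOTONIC, &end);
        } while (elapsed_ms(start, end) < ms);
        std::cout << "busy-wait " << elapsed_ms(start, end) << " ms\n";
    }
    return 0;
}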
> Apparently the new kernels already have high-resolution timers (they only
> have problems with old CPUs). I can get nanosecond resolution with this
> program on my laptop:
>
> #include <ctime>
> #include <iostream>
> using namespace std;
> int main()
> {
>     float temp;
>     struct timespec ts1;
>     struct timespec ts2;
>     clock_gettime( CLOCK_REALTIME, &ts1 );
>     for (size_t i=0; i<50000000; i++)
>         temp = 10.0 + 0.1;
>     clock_gettime( CLOCK_REALTIME, &ts2 );
>     long long int count1 = static_cast<long long>(1000000000UL)
>         * static_cast<long long>(ts1.tv_sec)
>         + static_cast<long long>(ts1.tv_nsec);
>     long long int count2 = static_cast<long long>(1000000000UL)
>         * static_cast<long long>(ts2.tv_sec)
>         + static_cast<long long>(ts2.tv_nsec);
>     cout << (count2 - count1) << endl;
>     return 0;
> }
>
> And also there is no need to worry about new multi-core CPUs; they have
> synchronized cores (at least that's the case with Intel, according to the
> comments in the kernel code).
>
> I did some tests with nanosleep() too. The error is around 0.5 msec (much
> less than the 10 msec that you mentioned), so it shows we may be able to
> use it if that error is acceptable (to me it is).
>
> Bests,
> Hesham
>
> On 27/01/2008, Yuan Xu <xuy...@gm...> wrote:
> > Hi Hesham and all,
> >
> > > After profiling the server I saw SDL_GetTicks() is taking around 25%
> > > of the time (in the tests I used 7 robots and the single-thread mode).
> > > To make a long story short, I saw this loop (while) in
> > > simulationserver.cpp:
> > > ....
> > > Since SimulationServer::Step() takes care of this case
> > > (int(mSumDeltaTime*100) < int(mSimStep*100)) I thought that the while
> > > in Run() is not necessary. So after commenting it out, SDL_GetTicks()
> > > takes less than 10% of the time:
> >
> > The phenomenon is reasonable. Since the time queries are needed for
> > timing in single-thread mode, a high percentage of queries means that
> > your machine is lightly loaded (not too busy); I guess your robots did
> > not do anything. You will notice that the percentage of queries drops
> > when the robots collide... But if you remove that code, the timer
> > becomes ineffective: if the robots collide the server will be very
> > slow, otherwise the simulation time will elapse very quickly.
> >
> > You may ask why we poll to get the time; well, I do not think it is a
> > good idea either. The server cycle is 20 ms, but the timer precision on
> > Linux is only about 10 ms, and functions such as "sleep" cannot help
> > with that. In fact, SDL_GetTicks() is used to get (seemingly) more
> > precise timing. And it is possible that SDL causes the input problem in
> > multi-threaded mode.
> >
> > I have some ideas about this:
> > 1. To get precise time, use real-time Linux as the platform. [It will
> >    narrow the OS platform.]
> > 2. Use a separate thread for timing. The timing is done in the
> >    InputControl thread in the current multi-threaded implementation,
> >    but polling is also used there (it can be improved a little). [It
> >    may make the timing thread busy, and cannot guarantee precision.]
> > 3. Use another timer instead of the SDL timer, such as boost::xtime.
> >    [It needs more investigation.]
> >
> > > BTW, the other day I came across the multi-threaded version of ODE by
> > > Intel.
> >
> > Sounds interesting.
> >
> > > I think it's worth a try if Yuan has time to help :-) There is
> > > something wrong with the multi-threaded mode of the server on my
> > > system. So first I need to find out what's the problem.
> >
> > I am free recently. Give me some hints about your problem.
> >
> > --
> > Best wishes!
> >
> > Xu Yuan
> > School of Automation
> > Southeast University, Nanjing, China
> >
> > mail: xuy...@gm...
> >       xy...@ya...
> > web: http://xuyuan.cn.googlepages.com
> > --------------------------------------------------

--
Best wishes!

Xu Yuan
School of Automation
Southeast University, Nanjing, China

mail: xuy...@gm...
      xy...@ya...
web: http://xuyuan.cn.googlepages.com
--------------------------------------------------
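PS: regarding idea 3 in my earlier mail quoted above (using boost::xtime
instead of the SDL timer), a rough sketch with the Boost.Thread xtime API
would look something like this; sleep_ms() is just a hypothetical helper for
illustration, not code from the server:

// boost_xtime_sketch.cpp -- illustration only; sleep_ms() is a hypothetical
// helper, not part of the server code.
#include <boost/thread/thread.hpp>
#include <boost/thread/xtime.hpp>

// Sleep for the given number of milliseconds (e.g. the 20 ms server cycle)
// using an absolute deadline, as boost::thread::sleep() expects.
void sleep_ms(int ms)
{
    boost::xtime xt;
    boost::xtime_get(&xt, boost::TIME_UTC);   // current time
    xt.nsec += ms * 1000000;                  // add the delay
    if (xt.nsec >= 1000000000)                // carry nanoseconds into seconds
    {
        xt.sec  += xt.nsec / 1000000000;
        xt.nsec %= 1000000000;
    }
    boost::thread::sleep(xt);                 // sleep until the deadline
}

Note that, judging from the 'boost' column in the table above, this by itself
would not improve the precision; it only removes the dependence on SDL for
timing.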