Re: [Algorithms] General purpose task parallel threading approach
From: <asy...@gm...> - 2009-04-04 16:49:49
I believe that, from the CPU's point of view, a kernel scheduler and a user-space scheduler are in the same situation with regard to CPU cache usage. Yes, you need to keep your data close to the cache, avoid task/thread migration, and avoid cache conflicts. But I don't see why a user-space task scheduler can't be as good as a kernel-side one, since both have the same amount of information available to make the best decision, or am I wrong here? And, as I mentioned, in user space you need to save/restore a much smaller state, so you have an initial speed benefit there.

Alexander

2009/4/4 Jon Watte <jw...@gm...>

> asy...@gm... wrote:
> > > Have you measured it?
> > I composed a quick sample: 2 threads which ping-pong each other via 2
> > events (1 core utilized in total). Each thread did 10 million
> > iterations of passing execution to the other thread (20 million
> > execution passes in total), and I did the same test with tasks.
>
> I don't think that's a representative measurement, though, because it is
> the absolute best case: both contexts are in L1, on the same core. For
> any kind of real workload with blocking waits, that will not be the
> case. I believe the Windows kernel version will not slow down much in
> that case, but the asynclib version will. Of course, if your workload is
> actually highly cached and interdependent, then this measurement may be
> more relevant.
>
> Sincerely,
>
> jw
>
> ------------------------------------------------------------------------------
> _______________________________________________
> GDAlgorithms-list mailing list
> GDA...@li...
> https://lists.sourceforge.net/lists/listinfo/gdalgorithms-list
> Archives:
> http://sourceforge.net/mailarchive/forum.php?forum_name=gdalgorithms-list

--
Regards,
Alexander Karnakov