Re: [Algorithms] General purpose task parallel threading approach
From: <asy...@gm...> - 2009-04-04 18:09:12
Blocking is only part of the problem (I used it mainly for demonstration
purposes); on top of that you have all sorts of IO and task dependencies.
You can compare my approach to hyperthreading in the hardware world, which
is becoming more and more popular these days (Sun's UltraSPARC, Intel's
Larrabee, and GPUs also use that scheme, as far as I'm aware).
Hyperthreading switches virtual threads when the currently executing
thread needs to wait on something (like a memory access to finish). Is it
needed? Not if you lay out your data so that you don't have memory access
latencies, but you usually do have them, because in many cases eliminating
latencies is very difficult, time consuming, or completely impossible. I'm
not saying that one shouldn't plan code for an asynchronous execution
scenario, but the delays caused by all sorts of blocking conditions can be
hidden, just as happens in hyperthreading (a rough sketch of this idea
appears after the message below).

Alexander.

2009/4/4 Oscar Forth <os...@tr...>

>>> but it seems to >me< that you are attacking the problem of multiple
>>> cores in totally the wrong way.
>>
>> Explain. If I make things work at the same speed as a standard approach
>> (when there is no or very little contention) or faster (in case of
>> contention or blocking IO), what is wrong here?
>
> Again, I'm not saying I'm correct ... but you stated that you are trying
> to handle the "ability to handle blocking conditions somehow". I'm
> suggesting that you don't need this to be a problem. Maybe we are talking
> about the same thing? Maybe I'm just misunderstanding you? But it still
> seems to me that if blocking is a problem then you are attacking the
> problem wrongly.
>
>> Could you explain a bit more about your approach? And what do you do
>> when the physics iteration finishes faster than the rendering iteration,
>> or the other way around?
>
> If my physics system finishes early I return its thread to the thread
> pool and give it to the object iterations. I only ever have twice the
> number of threads that there are cores in the machine. (Experimentation
> showed me that this is a good tradeoff between leaving the core stalled
> on things like memory access and general thread throughput. I would love
> to experiment on massively multi-core systems, and it looks likely I'll
> get to play with a 1024-core system soon to do some testing for a
> non-games-related problem.)
>
> I don't know what more I can tell you about my approach beyond my last
> post. Basically I am aiming to get tasks as parallel to each other as
> possible. A given task, when it finishes, has a "callback" that tells it
> when it next has to bother doing any AI decisions. Otherwise it only
> handles animation and rendering of itself. Basically I'm looking at where
> a stall can occur and trying to eliminate that stall. Currently my only
> stalling occurs when I add to global lists (the message list and render
> list).
>
> The system seems perfectly compatible with a networking system where you
> don't know exactly when the next update will occur. In fact, things like
> rendering and physics could easily be offloaded to whole other computers
> if such a need ever came about.
>
> I'm pretty crap at explaining things. If you have any specific questions
> about my approach then I'll happily attempt to answer them. I'm better at
> that :) That said, I will reiterate: I'm not saying your solution is
> wrong. It just goes contrary to my experience and knowledge (which is
> inevitably flawed).
> My system will, however, port nicely to >any< pre-emptively multitasking
> operating system, which was my aim.

--
Regards,
Alexander Karnakov
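
For concreteness, here is a minimal sketch of the latency-hiding scheme
Alexander describes above: ready tasks sit in a shared queue, and a worker
that would otherwise block instead picks up the next ready task, much as a
hyperthreaded core switches between hardware threads. This is an
illustration only; the Task and Scheduler names are invented and this is
not code from either poster.

    // Software analogue of hyperthreading's latency hiding (sketch).
    // A task that would block parks itself and the worker runs another
    // ready task, so the core never idles on a blocking condition.
    #include <condition_variable>
    #include <deque>
    #include <functional>
    #include <mutex>
    #include <thread>
    #include <vector>

    using Task = std::function<void()>;

    class Scheduler {
    public:
        explicit Scheduler(unsigned workers) {
            for (unsigned i = 0; i < workers; ++i)
                pool_.emplace_back([this] { run(); });
        }
        ~Scheduler() {
            { std::lock_guard<std::mutex> lk(m_); done_ = true; }
            cv_.notify_all();
            for (auto& t : pool_) t.join();
        }
        // Ready work goes on the queue; workers switch between tasks the
        // way a hyperthreaded core switches between hardware threads.
        void submit(Task t) {
            { std::lock_guard<std::mutex> lk(m_); ready_.push_back(std::move(t)); }
            cv_.notify_one();
        }
    private:
        void run() {
            for (;;) {
                Task t;
                {
                    std::unique_lock<std::mutex> lk(m_);
                    cv_.wait(lk, [this] { return done_ || !ready_.empty(); });
                    if (done_ && ready_.empty()) return;
                    t = std::move(ready_.front());
                    ready_.pop_front();
                }
                t(); // an IO-bound task starts its operation asynchronously
                     // and re-submits its continuation on completion,
                     // instead of waiting here
            }
        }
        std::deque<Task> ready_;
        std::mutex m_;
        std::condition_variable cv_;
        bool done_ = false;
        std::vector<std::thread> pool_;
    };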
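
Assuming the Scheduler sketch above, the "twice the number of cores"
sizing Oscar describes might look like the following; the 2x factor is his
empirically found tradeoff, not a universal constant.

    #include <thread>

    int main() {
        // hardware_concurrency() may legitimately return 0; fall back to 1.
        unsigned cores = std::thread::hardware_concurrency();
        Scheduler sched(2 * (cores ? cores : 1));
        sched.submit([] { /* physics step, AI decision, render prep, ... */ });
    }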
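
As for the stall Oscar mentions when tasks append to global lists, one
common mitigation is to give each worker a thread-local staging buffer and
take the shared lock once per flush rather than once per item. A hedged
sketch follows; RenderItem and the globals are invented names, not his
actual data structures.

    #include <mutex>
    #include <vector>

    struct RenderItem { /* ... */ };

    std::mutex g_renderMutex;
    std::vector<RenderItem> g_renderList;

    // Each worker thread accumulates items locally, with no locking.
    thread_local std::vector<RenderItem> t_staging;

    void emit(const RenderItem& item) {
        t_staging.push_back(item); // no lock on the hot path
    }

    void flush() { // called once per task (or per frame) per worker
        if (t_staging.empty()) return;
        std::lock_guard<std::mutex> lk(g_renderMutex);
        g_renderList.insert(g_renderList.end(),
                            t_staging.begin(), t_staging.end());
        t_staging.clear();
    }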