From: Brian P. <br...@tu...> - 2002-11-29 14:50:07
Felix Kühling wrote:
>
> My approach was basically inspired by the fact that there is something
> in mesa that is called "pipeline". So I thought, why not implement it
> like a real pipeline.

If we really want to parallelize Mesa, then we should consider all
options.  I'm probably biased towards my proposal ;-)

A few years ago another group developed "pmesa":
http://pmesa.sourceforge.net/
You might look at that.  I think someone else brought this up on the
Mesa-dev or DRI list earlier this year.

I have to say I'm skeptical.  A hardware TCL driver like the radeon or
r200 won't benefit from this.  In most other cases, I think the overhead
of parallelization will result in very modest speed-ups, if any.

The only situation in which I can see a benefit is when applications
draw very long primitive strips (many thousands of vertices).  In that
case, splitting the N vertices into N/P pieces for P processors and
transforming them in parallel would be the best approach.  I think
that's what pmesa did.

Implementing a true threaded pipeline could be very complicated.  State
changes are the big issue.  If you stall/flush the pipeline for every
state change you won't gain anything.  The alternative is to associate
the GL state with each chunk of vertex data as it passes through the
pipeline AND reconfigure the pipeline in the midst of state changes.
Again, I think this would be very complicated.

As for Chromium, there are situations in which multiprocessors can be
helpful, but it depends largely on the nature of the application.  The
best speed-ups come from parallelizing the application itself so that
multiple threads or instances of the application all issue rendering
commands in parallel to an array of graphics cards.

With modern graphics cards, the bottleneck is often the application
itself: the card starves because the app can't issue rendering commands
and vertex data fast enough.
So, feel free to experiment with this, but realize that you may be
disappointed with the results with typical applications.

-Brian