From: Younes M. <you...@gm...> - 2009-12-10 16:26:23
On Thu, Dec 10, 2009 at 5:32 AM, Zack Rusin <za...@vm...> wrote:
> On Wednesday 09 December 2009 20:30:56 Igor Oliveira wrote:
>> Hi Zack,
>>
>> 1) agreed. OpenCL is a completely different project and should exist in
>> a different repository.
>> 1.1) Well, using Gallium as a CPU backend is a software dilemma:
>> "All problems in computer science can be solved by another level of
>> indirection... except for the problem of too many layers of
>> indirection."
>> But in my opinion we can use Gallium for CPU operations too; by using
>> Gallium as the backend for all device types we maintain code
>> consistency.
>
> Yes, it will certainly make the code a lot cleaner. I think using llvmpipe
> we might be able to get it working fairly quickly. I'll need to finish a
> few features in Gallium3d first. In particular we'll need to figure out
> how to handle memory hierarchies, i.e. private/shared/global memory
> accesses in shaders. Then we'll need some basic tgsi stuff like scatter
> reads and writes to structured buffers, types in tgsi (int{8-64}, float,
> double), barrier and memory barrier instructions, atomic reduction
> instructions, performance events, and likely trap/breakpoint instructions.
> We'll be getting all those fixed within the next few weeks.

The current pipe_context doesn't seem suited to the requirements of a
compute API. Should it be made larger, or is another kind of context in
order?

Under the hood on nvidia cards there are separate hardware interfaces for
compute, graphics, and video. Even though there is some duplicate
functionality, most of the code of our current pipe_context would not be
reused*, so to me a different type of context makes sense.

* Assuming no one wants to do compute via 3D on older HW that doesn't have
all of the modern compute facilities.
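
For anyone following along, the memory hierarchy Zack mentions maps
directly onto the address-space qualifiers an OpenCL kernel uses. A toy
kernel sketch (illustrative only, not from the thread) touching all three
spaces, plus a work-group barrier, shows roughly what the tgsi additions
would have to express:

```c
/* Illustrative OpenCL C kernel -- each address space below is something
 * tgsi would need a distinct access path for. */
__kernel void partial_sums(__global const float *in,   /* global memory  */
                           __global float *out,
                           __local float *scratch)     /* shared memory  */
{
    float acc = in[get_global_id(0)];   /* private: per-work-item regs */

    scratch[get_local_id(0)] = acc;     /* shared: per-work-group      */
    barrier(CLK_LOCAL_MEM_FENCE);       /* barrier instruction in tgsi */

    if (get_local_id(0) == 0) {
        float sum = 0.0f;
        for (size_t i = 0; i < get_local_size(0); i++)
            sum += scratch[i];
        out[get_group_id(0)] = sum;     /* scatter write to global     */
    }
}
```

The scatter write at the end, the barrier, and the three distinct address
spaces are exactly the features listed above as missing from tgsi at the
time.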