From: Jeff H. <jha...@va...> - 2001-07-12 18:32:50
James Simmons wrote:

>> "Agpgart doesn't have an interface so that any driver can ask for X
>> amount of graphics memory. It assumes that only one client is using
>> it, and that client knows where it bound the memory."
>
> From all the emails I have seen, it is really strange how the system
> for agp is set up. On IRIX systems each OpenGL client allocates memory
> and uses a separate library (udma) to lock down the memory. Then the
> OpenGL driver for the card tells the kernel driver what memory to use.
> The driver sets it up and data is transferred. Once done, the client
> frees the memory.

The IRIX implementers had access to much different hardware; agp is not as flexible as some of the uma stuff in SGI hardware. Inserting random pieces of memory into the agp aperture requires a cache flush because of the use of write-combining memory. You want to avoid swapping memory in and out of cached and uc/wc spaces; it's terribly expensive. Also, if the client accesses the memory through its normal mappings while it is mapped into uc/wc space, things will go boom. This means that you have to invalidate the original mappings when you lock down the memory. All of this is feasible to handle with PAT, but it will perform like shit on an x86. This sort of interface does not make sense for agp.

It should also be noted that removing pieces of memory from the agp aperture will normally require a graphics pipeline flush, so we don't want drivers doing that sort of operation too often.

Basically, the agp interface has been well thought out, and I believe it is the best solution considering all the problems and the fact that PAT isn't shipping with 2.4. I'm sure some people might want a malloc-type interface, but I don't want to be responsible for creating a driver which keeps tons of mm bookkeeping in the kernel where it can't be swapped.

-Jeff