From: Leif D. <lde...@re...> - 2003-04-11 20:45:11
On Fri, 11 Apr 2003, José Fonseca wrote:

> Leif,
>
> On Thu, Apr 10, 2003 at 10:50:00AM -0500, Leif Delgass wrote:
> > Jose,
> >
> > Sorry, I had meant to reply to your previous email about this. I
> > think we may as well move this work to the current
> > mach64-0-0-6-branch, which is where the bsd port is happening. Also,
> > the mach64 DRM in the new branch is converted to use the filp changes
> > and os-independence templates.
>
> Good. I thought that too.
>
> > A couple of other things I wanted to mention:
> >
> > Although it's true that the private buffers will be used in order for
> > the most part, there is the case of cliprects where we reuse buffers.
> > With the current copy/verify code, we can't reuse buffers on
> > subsequent ioctls, but they are reused for up to 9? cliprects per
> > ioctl. That means that the buffers won't necessarily complete in
> > order.
>
> I was forgetting about that. Still, it strikes me as a little
> cumbersome to use a fixed array in the SAREA to pass [a fixed number
> of] cliprects instead of passing a pointer to a userspace [variable
> length] array of cliprects during the DMA buffer dispatch ioctl.

Currently, the Mesa driver copies the cliprects from the drawable struct
to the SAREA, and then the kernel can read the cliprects without a
copy_from_user(). I think you'd need to have the kernel copy directly
from the drawable struct to make up for that. At any rate, the case of
more than 8 cliprects is not very common (think of all the overlapping
windows you need to hit that case). But as you say, it might simplify
things to use only one ioctl for a full set of cliprects.

> > I think if possible we should re-use the list code and have some sort
> > of private/public parameter (with the private buffers on a separate
> > list). In addition, if the X server maps the private buffers, we'll
> > have the same out-of-order situation that we do with public buffers.
>
> You're probably right - it's not worth it, at least for the time being,
> to do things differently. In this case we need to have two sets of
> lists (for the private and public buffer pools), and change the
> existing functions to operate on either one.
>
> > Another thing that I've been trying out is filling in buf->list with
> > DRM_FREE, DRM_NONE (placeholder), and DRM_PEND. This allows tracking
> > which list the buffers are on through /proc/dri/[num]/bufs. It also
> > gives us a way to get the list of actual buffers represented by the
> > placeholder list, since the buf pointers on the placeholder list
> > aren't necessarily valid (it's really just a pool of list entries
> > that need to be filled in when used, though the number of entries in
> > the pool matches the number held by clients). This can be used to
> > free placeholders, i.e. buffers held by clients but not submitted, on
> > release when a client dies.
>
> I see. So currently when a client dies its buffers are effectively
> leaked?

It's possible, though it wouldn't happen often in practice. If the client
dies between the time the buffer is claimed with drmDMA and the time it's
submitted, then the buffer is effectively leaked. This is a fairly small
window of time, and is currently only possible with blits to card memory.
I should also point out that the public DMA buffers are now only
requested by clients for PCI-only operation (PCI card or ForcePCIMode) or
when the LocalTextures option is used.

I confirmed the possibility of a leak by adding code to grab and hold a
DMA buffer in the client that is never submitted. Using the /proc
interface as I mentioned above, I could see that the buffer remains on
the placeholder list indefinitely after the client exits or is
interrupted.

If all the buffers are leaked, you reach a deadlock where any ioctl that
needs freelist_get will fail.
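To make the bookkeeping concrete, here's a rough sketch of what the
release-time reclaim could look like. This is only an illustration, not
code from the branch: the helper names (placeholder_retire_one(),
freelist_put()) and the state values are made up, and locking is elided.

  /* Rough sketch only -- names, values, and fields are illustrative. */
  #define DRM_NONE 0   /* claimed by a client, represented by a placeholder */
  #define DRM_FREE 1   /* on the freelist */
  #define DRM_PEND 2   /* submitted, waiting on the DMA engine */

  /* Called on release: scan all buffers for ones the dying client
   * claimed with drmDMA but never submitted, put them back on the
   * freelist, and retire one placeholder entry for each. */
  static void mach64_reclaim_placeholders(drm_device_t *dev, DRMFILE filp)
  {
          drm_device_dma_t *dma = dev->dma;
          int i;

          for (i = 0; i < dma->buf_count; i++) {
                  drm_buf_t *buf = dma->buflist[i];

                  if (buf->filp == filp && buf->list == DRM_NONE) {
                          buf->list = DRM_FREE;
                          placeholder_retire_one(dev); /* hypothetical: drop one pool entry */
                          freelist_put(dev, buf);      /* hypothetical: back on the freelist */
                  }
          }
  }

The point being that the scan keys off buf->list and buf->filp rather
than off the (possibly stale) buf pointers in the placeholder pool.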
> > The alternative is to search for the correct buffer in the
> > placeholder list rather than grabbing the first one when clients
> > submit buffers.
>
> You mean, having the placeholder list pointing to the real

Not sure how you were going to finish this sentence ;), but what I mean
is: the placeholder list starts out with "real" buffer entries that were
moved there from the freelist, but we ignore the buffer entries when
moving a placeholder entry to the pending list when a client submits a
buffer. We just reset the buffer pointer to the submitted buffer (see the
P.S. below for a rough sketch). That lets us grab any old placeholder
(the first one) without doing a search on every submit. We could do the
search and keep the placeholder list accurate, but then we'd be
optimizing for the wrong thing, I think. The release cleanup is not as
time critical as buffer submits. However, I haven't actually compared the
two methods, so I don't know if there would be an appreciable difference.
Unless there are a large number of clients doing texture blits at the
same time, it probably wouldn't make a difference.

> Don't forget that now this only applies to texture/bitmap data. For the
> rest, a userspace buffer will be used - no need to waste precious DMA
> memory for holding vertex data that will never be read directly by the
> card.

Right. We already use a userspace vertex buffer in the current code, and
as I said above the client only requests DMA buffers for textures with
PCI cards or local textures (which aren't used by default for AGP cards).

-- 
Leif Delgass
http://www.retinalburn.net
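P.S. Here's a rough sketch of the submit path I described above, again
with illustrative names only (the entry layout, list heads, and locking
are not the actual branch code; DRM_PEND as in the earlier sketch):

  /* Grab the first placeholder entry -- its buf pointer may be stale --
   * point it at the buffer actually being submitted, and move it to the
   * pending list.  No search required. */
  static int mach64_add_pending(drm_mach64_private_t *dev_priv,
                                drm_buf_t *buf)
  {
          struct list_head *ptr = dev_priv->placeholders.next;
          drm_mach64_freelist_t *entry;

          if (ptr == &dev_priv->placeholders)
                  return -ENOMEM; /* pool empty: every claimed buffer leaked */

          entry = list_entry(ptr, drm_mach64_freelist_t, list);
          entry->buf = buf;       /* overwrite the stale pointer */
          buf->list = DRM_PEND;
          list_del(ptr);
          list_add_tail(ptr, &dev_priv->pending);
          return 0;
  }

The search-based alternative would walk the placeholder list comparing
entry->buf against the submitted buffer, making every submit O(n) in the
number of outstanding buffers just to keep the release cleanup simpler.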