From: Jerome G. <gl...@fr...> - 2010-01-06 20:55:18
On Wed, Jan 06, 2010 at 01:57:08PM -0500, Alex Deucher wrote:
> On Wed, Jan 6, 2010 at 1:29 PM, Jerome Glisse <jg...@re...> wrote:
> > R300 family will hard lockup if host path read cache flush is
> > done through MMIO to HOST_PATH_CNTL. But scheduling same flush
> > through ring seems harmless. This patch remove the hdp_flush
> > callback and add a flush after each fence emission which means
> > a flush after each IB schedule. Thus we should have same behavior
> > without the hard lockup.
>
> We really only need to flush the HDP cache after rendering to vram (or
> UMA in the IGP case). Wouldn't it be better to just flush in those
> cases? We may want to use the hdp flush callback after sw access to
> vram as well, so having a separate hdp callback might be better. See
> other comments inline below.
>
> Alex

Do you know if always flushing impacts performance? I didn't benchmark it,
but I think arena had the same fps before and after the patch. The idea of
the patch was to have the same behavior across the different families and
to avoid having a fake callback.

> On r6xx+, mmio should be ok.

Yeah, I changed every family just for consistency.

> > @@ -1785,6 +1780,8 @@ void r600_fence_ring_emit(struct radeon_device *rdev,
> >         radeon_ring_write(rdev, PACKET3(PACKET3_SET_CONFIG_REG, 1));
> >         radeon_ring_write(rdev, ((rdev->fence_drv.scratch_reg - PACKET3_SET_CONFIG_REG_OFFSET) >> 2));
> >         radeon_ring_write(rdev, fence->seq);
> > +       radeon_ring_write(rdev, PACKET0(R_005480_HDP_MEM_COHERENCY_FLUSH_CNTL, 0));
> > +       radeon_ring_write(rdev, 1);
>
> If you add additional packets here, you'll need to adjust the fence
> counts in r600_blit_prepare_copy() in r600_blit_kms.c as well.

Will check that. What I did in other parts of the code was to always ask
for a lot of room in the ring (64 dwords) so that I can grow what is in
fence emit or ib schedule without having to worry. I will change the r600
path to do the same. It doesn't hurt to ask for more dwords than needed.

Cheers,
Jerome
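
For illustration, the over-allocation Jerome describes could look roughly
like the sketch below. This is not the actual patch: the wrapper function
name is hypothetical, the 64-dword figure comes from his description, and
only the ring helpers, packets, and registers quoted in the hunk above are
taken from the driver.

/*
 * Illustrative sketch only, not the actual patch: reserve more ring
 * space (64 dwords) than the fence emit strictly needs, so that extra
 * packets such as the HDP flush can be added later without having to
 * re-audit every caller's dword count.
 */
static void r600_emit_fence_with_headroom(struct radeon_device *rdev,
					  struct radeon_fence *fence)
{
	/* Ask for plenty of room even though only a few dwords are written. */
	if (radeon_ring_lock(rdev, 64))
		return;

	/* Write the fence sequence number to the driver's scratch register. */
	radeon_ring_write(rdev, PACKET3(PACKET3_SET_CONFIG_REG, 1));
	radeon_ring_write(rdev, ((rdev->fence_drv.scratch_reg -
				  PACKET3_SET_CONFIG_REG_OFFSET) >> 2));
	radeon_ring_write(rdev, fence->seq);

	/* Flush the HDP read cache through the ring instead of MMIO. */
	radeon_ring_write(rdev, PACKET0(R_005480_HDP_MEM_COHERENCY_FLUSH_CNTL, 0));
	radeon_ring_write(rdev, 1);

	radeon_ring_unlock_commit(rdev);
}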