From: Michel D. <mi...@da...> - 2009-08-10 21:48:03
On Wed, 2009-08-05 at 12:06 +0200, Thomas Hellström wrote:
> Michel Dänzer wrote:
> > On Wed, 2009-08-05 at 11:18 +0200, Thomas Hellström wrote:
> >
> >> Aargh. Wait. I remember now.
> >>
> >> The fbcon bo is exported through the fbdev address space at offset 0.
> >> The vm_node is for the drm device address space only. So it is perfectly
> >> legal and actually correct for it not to have a vm_node, unless it's
> >> going to be accessible from the drm device. Does it need to for KMS?
> >
> > I don't think so.
> >
> >> I'm a bit unsure whether it's OK to export a bo through two different
> >> address spaces. In particular, unmap_mapping_range() on the drm device
> >> will not kill the fbdev user-space mappings.
> >
> > Hmm, so that would mean that if an fbdev mapping is created while the BO
> > is in VRAM, it would still access VRAM after the BO has been evicted? Is
> > there a solution for this?
>
> Yes, you need to call unmap_mapping_range() on the fbdev address space.
> See how that is done in ttm_bo_unmap_virtual() for the drm address
> space. Actually, I think you need to set up a bo_driver hook in
> ttm_bo_unmap_virtual() to do this every time the bo is moved or swapped
> out.

The attached patches should hopefully address all the feedback I've
received here and on IRC. Let me know what you think.


-- 
Earthling Michel Dänzer           |                http://www.vmware.com
Libre software enthusiast         |          Debian, X and DRI developer