From: Thomas H. <th...@tu...> - 2008-03-04 09:54:56
Jesse Barnes wrote:
> On Monday, March 03, 2008 10:34 am Thomas Hellström wrote:
>> 1) Allocating all pages at once:
>> Yes, I think this might improve performance in some cases. The reason
>> it hasn't been done already is the added complexity needed to keep track
>> of the different allocation sizes. One optimization that's already in
>> the pipeline is to page in more than a single page (for example 16) when
>> we hit a pagefault in nopfn().
>
> Can you elaborate on the added complexity? In looking at the code the other
> day, it seemed like the bits were already there to check for previously
> allocated pages in the fault handler, as well as a place where we could
> easily add in a real allocation call, drm_ttm_alloc_pages (which should
> probably be called drm_ttm_alloc_pagedir or something instead, since it just
> allocates the page pointers).
>
> Do you just mean the code to try allocating with different orders until the
> allocation succeeds?

Yes, that and the info needed to call free_pages with the correct order.
However, I'm not saying this should stop us from implementing this.

/Thomas

> Jesse