From: Jesse B. <jb...@vi...> - 2008-03-03 19:31:15
On Monday, March 03, 2008 10:34 am Thomas Hellström wrote:
> 1) Allocating all pages at once:
> Yes, I think this might improve performance in some cases. The reason
> it hasn't been done already is the added complexity needed to keep track
> of the different allocation sizes. One optimization that's already in
> the pipeline is to page in more than a single page (for example 16) when
> we hit a pagefault in nopfn().

Can you elaborate on the added complexity? Looking at the code the other
day, it seemed the bits were already there to check for previously
allocated pages in the fault handler, as well as a place where we could
easily add a real allocation call, drm_ttm_alloc_pages (which should
probably be renamed drm_ttm_alloc_pagedir or similar, since it just
allocates the page pointers). Do you just mean the code to try allocating
with different orders until the allocation succeeds?

Jesse