From: Julian S. <js...@ac...> - 2002-11-25 23:29:54
> > Spent ages trying to think of good LRU algorithms which don't use
> > reference bits so they will work with translation chaining.  It's very
> > difficult -- many constraints.
>
> What are the constraints?  The need for reference bits isn't so hard if
> you're willing to periodically unchain in order to take a measurement.

My understanding of this was enhanced by reading "Page Replacement and
Reference Bit Emulation in Mach" (Richard P. Draves, CMU).  Operating
systems for the VAX are also of particular interest because apparently
the VAX didn't support reference bits, so there is no straightforward
way to do LRU on it.

Anyway.  In one sense our problem is simpler, since we don't have to
deal with clean vs dirty "pages"; ours are always clean.  However, our
problem is more constrained in two ways.

Firstly, we cannot throw away translations individually; they must all
be unchained first to make that safe.  We even have to do this if we
merely want to move them around a bit.  Unchaining is a global action
which we cannot afford to do too often.  We could conceivably keep
track of which blocks point to which other blocks and so do
incremental unchaining, but that sounds complex.

Secondly, our pages are of varying sizes, and are small.  This makes
dealing with queues of them awkward, and they are so small that
keeping them in linked lists is a significant space overhead.

-------------

Pretty much all the papers I looked at on Sunday suggest that the
brain-dead throw-it-all-away approach is about as good as it gets.

I did find a detailed presentation (37 slides) about WABI at Hot Chips
(?).  WABI got pretty sophisticated in the end, and the slides suggest
an improvement (which is similar to VMS's way of doing paging on the
VAX): operate the cache as a FIFO.  When it gets full, unchain
everything, throw away the oldest N% of the translations, and carry
on.  (You can see the brain-dead algorithm as the special case N=100.)
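That FIFO-with-partial-discard scheme is easy to sketch.  A minimal
Python illustration (all names here, e.g. `TransCache`, `unchain_all`
and `frac_discard`, are invented for the sketch; a real implementation
would operate on the translation cache's own storage and account for
variable-sized translations):

```python
class TransCache:
    """FIFO translation cache.  When full: unchain everything, drop the
    oldest frac_discard of the entries, and carry on.  frac_discard = 1.0
    reproduces the brain-dead throw-it-all-away scheme (N = 100%)."""

    def __init__(self, capacity, frac_discard=0.25):
        self.capacity = capacity
        self.frac_discard = frac_discard
        self.entries = []                 # oldest first: (addr, translation)

    def unchain_all(self):
        # Placeholder: patch every chained jump back to the dispatcher,
        # so that translations may safely be discarded or moved.
        pass

    def insert(self, addr, translation):
        if len(self.entries) >= self.capacity:
            self.unchain_all()            # global action, done rarely
            n_drop = max(1, int(len(self.entries) * self.frac_discard))
            del self.entries[:n_drop]     # discard the oldest N%
        self.entries.append((addr, translation))
```

With capacity 4 and frac_discard 0.5, the fifth insertion unchains
everything, discards the two oldest translations, and proceeds.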
This FIFO scheme is simple to do and, according to the WABI slides,
somewhat ameliorates the phenomenon of throwing away translations that
are still in active use.  It is simple enough to implement that I
might try it.

VMS does FIFO on its pages, but pages shifted out are not immediately
dumped.  Instead they are put in a holding-pen arrangement from which
they can be brought back into active use should a reference arise.
This protects VMS from the worst consequences of the FIFO scheme
(so-called second-chance FIFO).  We could do this too; it is awkward
but not impossible with variable-length pages, but I just can't be
bothered.

J
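P.S.  For the record, that second-chance arrangement might look like
the following Python sketch (names such as `SecondChanceFifo` and
`pen_cap` are invented here, and it ignores the variable-length-page
awkwardness mentioned above):

```python
from collections import OrderedDict

class SecondChanceFifo:
    """Second-chance FIFO: entries evicted from the active FIFO go to a
    holding pen instead of being dumped; a lookup that hits the pen
    brings the entry back into active use."""

    def __init__(self, active_cap, pen_cap):
        self.active_cap = active_cap
        self.pen_cap = pen_cap
        self.active = OrderedDict()       # insertion order = FIFO order
        self.pen = OrderedDict()

    def insert(self, addr, translation):
        if len(self.active) >= self.active_cap:
            old_addr, old_tr = self.active.popitem(last=False)  # oldest
            if len(self.pen) >= self.pen_cap:
                self.pen.popitem(last=False)   # pen overflow: really gone
            self.pen[old_addr] = old_tr        # shift out to the pen
        self.active[addr] = translation

    def lookup(self, addr):
        if addr in self.active:
            return self.active[addr]
        if addr in self.pen:
            # Second chance: a reference arose, so promote the entry
            # back into active use.
            tr = self.pen.pop(addr)
            self.insert(addr, tr)
            return tr
        return None                            # miss: must retranslate
```

A hit in the pen thus costs a promotion rather than a retranslation,
which is what protects the scheme from pure FIFO's worst behaviour.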