From: Kristian V. D. V. <va...@li...> - 2005-01-12 19:25:45
On Wednesday 12 January 2005 00:49, Daniel Gryniewicz wrote:
> On Tue, 2005-01-11 at 21:32 +0000, Kristian Van Der Vliet wrote:
>> One change I've considered is, rather than trying to calculate the CPU
>> time over the total life of the thread, to keep a count of the number of
>> times a thread runs to the end of its quantum. If the quantum expires,
>> the counter increments by one. If the thread blocks before the quantum
>> expires, the counter is decremented by one. The penalty would then be
>> applied according to the counter value: threads with a count of five
>> would be scaled back by -10, a count of ten by -20, fifteen by -30, etc.
>> (The real numbers would probably require some tuning.) The advantage
>> here is that the calculation is much easier. It also solves the problem
>> where threads which have spent a long time idle suddenly shoot to 100%
>> CPU usage. Under the current calculation, based on the total life of the
>> thread, the CPU usage would still be calculated as a low percentage and
>> no penalty imposed. Under the new scheme a thread at 100% CPU would
>> quickly accumulate a high count and be penalized.
>
> Yes, that sounds generally better. Another possibility is a decaying
> average, which ends up being very similar to what you want to do.
>
> In general, the current method you used will help in the normal case of
> a run-away thread. It's much more unlikely, in my opinion, for a thread
> to sleep for a long time and then go berserk. :)
>
> Still, it'll likely be an improvement. I'll try and get the kernel
> built on my box here and get some ideas and testing in.

This scheme turned out to be tricky to implement, with no real benefit
over the current one. It suffered from a "ping-pong" problem, where the
dynamic priority would bounce between two values as the algorithm fought
with itself. Certain threads were also very susceptible to what appeared
to be priority inversion. For example, the desktop thread at login would
use a large amount of CPU time, drop to a certain low priority, and then
hang, likely deadlocked with another thread. The test case also showed
less of an improvement, so the net benefit was smaller.

I didn't get more testing done, as I really need to rebuild my machine;
it still has remnants of Glibc 2.3.3 testing on it, which are playing
havoc. Once I've got it back to a clean state I'll get back to fiddling
with some different ideas.

--
Vanders
http://www.syllable.org
http://www.liqwyd.com