From: Daniel G. <da...@fp...> - 2005-01-11 17:59:22

Overall, this looks quite good. Nice, simple, and straightforward.
It's quite likely that the actual dynamic priority calculation and
quantum sizes could be tweaked for better fairness/whatever, but that's
easy to do. Maybe a macro/function to calculate quantum, so it's easy
to change globally?
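Something along those lines, say (this is only an illustrative sketch --
the names and numbers here are my own, not necessarily what the patch
ends up using):

/* Scale the timeslice with the thread's static priority, so that
 * higher-priority threads get a longer quantum, with a floor so the
 * quantum never reaches zero for very low priorities. */
#define BASE_QUANTUM    5000    /* base timeslice in usecs (assumed) */
#define MIN_QUANTUM     1000    /* floor for low-priority threads    */
#define GET_QUANTUM( nPri ) \
    ( ( (nPri) > 0 ) ? ( BASE_QUANTUM + (nPri) * 50 ) : MIN_QUANTUM )

Keeping the policy behind a single macro like that means changing the
scale factor later is a one-line tweak.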
One note: in the dynamic priority calculation, you check that the
reduction won't leave the priority space. I suspect what you want is
this:
if( nRunTotal > 50 && nRunTotal <= 70 )
{
    psThread->tr_nDynamicPriority -= 15;
}
else if( nRunTotal > 70 && nRunTotal <= 90 )
{
    psThread->tr_nDynamicPriority -= 35;
}
else if( nRunTotal > 90 )
{
    psThread->tr_nDynamicPriority -= 50;
}

if( psThread->tr_nDynamicPriority <= -100 )
{
    psThread->tr_nDynamicPriority = -99;
}
Otherwise, nothing will ever get below -85, and the more well-behaved
processes will get penalized worse than the badly behaved ones (they'll
get to -85, but the bad ones will only get to -50).
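For reference, nRunTotal above is the CPU-usage percentage you describe
below; roughly something like the following, although the field names
here are only my guesses:

/* Percentage of its lifetime the thread has spent on the CPU.
 * Field names are illustrative, not necessarily the real ones. */
bigtime_t nLifeTime = get_system_time() - psThread->tr_nCreateTime;
int nRunTotal = 0;
if( nLifeTime > 0 )
{
    nRunTotal = (int)( ( psThread->tr_nCPUTime * 100 ) / nLifeTime );
}

The thresholds in the code above are then just bands on that percentage.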
Attached is a patch with a few suggested changes. It's against current CVS,
so you won't be able to apply it on top of your changes. Basically, it
makes the above change, adds a macro for GET_QUANTUM, and adds
documentation for recalculate_dynamic_priority().
Daniel
On Tue, 2005-01-11 at 16:46 +0000, Kristian Vandervliet wrote:
> I've played around with the scheduler today, to see if I could
> find a solution to the thread-starvation problem. I think I may
> have a workable solution, which although it is not a complete
> rewrite as an O(1) algorithm, it does fix the problem.
>
> I've modified the scheduler to use "dynamic" priorities for each
> thread. The dynamic priority is calculated for each thread
> every time the thread's execution is completed (e.g. when the thread
> is pre-empted). The algorithm simply works out the amount of CPU
> time being used as a percentage, based on the time the thread
> was created (its total lifetime) and the total CPU time it has
> used so far. If the percentage is deemed too high, its
> dynamic priority is lowered, using a sliding scale. Well-behaved
> threads which use less than 50% CPU time are re-issued with their
> "static" priority, which is the priority given to the thread at
> creation.
>
> The two scheduler functions which are involved in thread selection,
> add_thread_to_ready() and select_thread(), use the dynamic priority
> instead of the static priority when they are handling threads.
>
> In this way CPU-bound threads which do not block can no longer
> choke off lower-priority threads, which may be blocking often. The
> dynamic priority is scaled back until the thread releases CPU time
> for other threads, and it slips back in the list of runnable threads.
>
> In addition, the quantum of every thread is calculated, based on
> the thread's static priority. Higher-priority threads get a longer
> quantum. Essentially this allows heavily CPU bound tasks to still
> get more CPU time, but it is no longer at the total expense of the
> other threads.
>
> An experimental kernel can be downloaded from
> http://www.liqwyd.com/syllable/kernel.zip. Unzip it and copy it to
> /system/. You might want to back up your original (known good)
> kernel first!
>
> I have also written a very small test program which demonstrates
> the problem on the old kernel, and the behaviour of the new kernel.
> It is available from http://www.liqwyd.com/syllable/sched_test.zip
> The source and binary are included. The binary takes one optional
> argument, "-s". With no arguments the program creates two threads;
> one to print info and the second which sits in a very tight loop,
> eating your CPU. The tight-loop thread is created at a priority of
> 150, i.e. realtime. If you pass -s to the test binary, the thread
> will sleep for one second per iteration.
>
> If you run the test on an existing kernel, your machine will
> essentially become non-responsive. This is because the CPU bound
> thread gets 100% of the CPU at a higher priority than the appserver.
> If you run the test on the patched kernel, the thread will be
> dynamically scaled back. It will still claim as much CPU as it
> can, but it can no longer starve other threads. If you run the test
> on the patched kernel with the -s option, you will see that the
> dynamic priority is never altered, as the thread is "well behaved".
>
> The kernel patch is attached. I would appreciate feedback and
> especially any improvements. I do not intend to apply these patches
> to the mainline kernel without plenty of testing, and preferably
> discussion. Note that the experimental kernel is the stock CVS
> kernel with the scheduler patches, but none of Arno's recent patches.
> No doubt we'll merge our patches soon.
>
> --
> Vanders
> http://www.syllable.org
> http://www.liqwyd.com
>
>
>