From: Jun N. <ju...@sc...> - 2001-01-26 15:15:25
I completely agree with your explanation below, and I think I figured out the difference. The original code triggers preemption only when preemption_goodness(tsk, p, cpu) returns more than *1*:

	oldest_idle = (cycles_t) -1;
	target_tsk = NULL;
	max_prio = 1;
	...
	if (oldest_idle == -1ULL) {
		int prio = preemption_goodness(tsk, p, cpu);
		if (prio > max_prio) {
			max_prio = prio;
			target_tsk = tsk;
		}
	...

Your code, however, triggers preemption as soon as it finds a task with a lower na_goodness value than na_goodness(p). Because of this, it tends to cause more reschedules. So a quick fix would be:

	  saved_na_goodness = na_goodness(p);
	- tmp_min_na_goodness = saved_na_goodness;
	+ tmp_min_na_goodness = saved_na_goodness - 1;

Mike Kravetz wrote:
> 
> On Thu, Jan 25, 2001 at 05:43:39PM -0500, Jun Nakajima wrote:
> > 
> > I think your code slightly differs from the original (I could be wrong).
> > 
> > <code deleted>
> > 
> > If cpu becomes (cpu == tsk_cpu), stack_list[cpu] does not get
> > PROC_CHANGE_PENALTY, and if (stack_list[cpu] < tmp_min_na_goodness),
> > then tmp_min_na_goodness is changed to the smaller one. In the
> > following iterations, the test "if (stack_list[cpu] <
> > tmp_min_na_goodness)" becomes harder to satisfy.
> > 
> > The original code, by contrast, checks the maximum value returned by
> > preemption_goodness(tsk, p, cpu) (tsk = cpu_curr(cpu)).
> 
> preemption_goodness is pretty simple, so I'll include it here.
> 
> static inline int preemption_goodness(struct task_struct * prev,
> 				      struct task_struct * p, int cpu)
> {
> 	return goodness(p, cpu, prev->active_mm) -
> 		goodness(prev, cpu, prev->active_mm);
> }
> 
> As you can see, the lower the goodness value of prev (in this case tsk),
> the higher the value returned by preemption_goodness(). So in essence
> the original code was looking for the CPU which is executing the task
> with the lowest goodness value. Also note that in the original code
> cpu is tsk->processor.
> Therefore, PROC_CHANGE_PENALTY is always added
> to the goodness value for tsk. Our code does pretty much the same thing
> (I believe). We are looking for the task with the lowest goodness value
> relative to the CPU 'p' previously ran on. Therefore, for all remote
> CPUs we add PROC_CHANGE_PENALTY to account for the loss of cache affinity.
> I believe this matches what preemption_goodness does. When cpu == tsk_cpu
> we don't add PROC_CHANGE_PENALTY because we want a direct comparison
> of na_goodness values. This is similar to calling preemption_goodness
> when both tasks have the same 'processor' value; in that case
> PROC_CHANGE_PENALTY is added to both.
> 
> Hope this makes sense (and I hope the code works as described/expected).
> 
> > The code I'm talking about is the one below. __wake_up_common() is the
> > common body of the wake_up family. Basically it prefers a process on the
> > current CPU to ones on the other CPUs, depending on the mode/flag. This
> > is reasonable because the initiating CPU of the interrupt potentially
> > has a warm cache for handling it. Some platforms deliver interrupts to
> > the initiating CPU.
> 
> That may be the case, but I believe reschedule_idle is the code that
> actually determines which CPU the task should run on. This behavior is
> not changed in the multiqueue scheduler.
> 
> -- 
> Mike Kravetz                                 mkr...@se...
> IBM Linux Technology Center
> 
> _______________________________________________
> Lse-tech mailing list
> Lse...@li...
> http://lists.sourceforge.net/lists/listinfo/lse-tech

-- 
Jun U Nakajima
Core OS Development
SCO/Murray Hill, NJ
Email: ju...@sc..., Phone: 908-790-2352 Fax: 908-790-2426