RE: [Linux-hls-devel] Processes freezing up
From: Paul K. <pko...@au...> - 2003-09-04 07:11:23
Hi all,
> > [Aside: from this I take it that CPU reservations are stronger than
> > SCHED_RR/SCHED_FIFO?]
> In the default hierarchy that is built when the HLS module is
> loaded, yes. There is a RR scheduler (rr1), and the res1
> scheduler is scheduled on rr1 with priority 20. All the
> "regular tasks" are scheduled on another RR scheduler (rr2),
> which is scheduled on rr1 with priority 10. Hence, res tasks
> are scheduled in the foreground with respect to all the other tasks.
> Of course, you can change a task to rr1 (with priority > 20)
> to schedule it in the foreground with respect to res tasks...
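(So, if I follow, the default hierarchy built at module load time is
roughly:

rr1 (root, round-robin)
 |-- res1  (priority 20, reservation tasks)
 |-- rr2   (priority 10, all the "regular" tasks)

and res1 tasks always run in the foreground simply because 20 > 10.)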
I noted in one of John's HLS papers a scheduler hierarchy that looked
like this:
ROOT---|
 |     |
RES--JOIN
       |
       PS
Does this hierarchy give stronger guarantees to a RES task than the
standard hierarchy in HLS? I assume the answer is "no" iff there are no
rr1 tasks with priority >= 20?
I recall you said the join scheduler is currently broken. Is the
following hierarchy valid and similar (only hard reservations allowed,
no soft ones)?
ROOT---|
 |     |
RES    PS
I'm trying to understand whether or not the standard hierarchy will
suffice for our application, or whether we need to compose a (more)
appropriate hierarchy. We want to guarantee our application meets its
real time constraints (no missed deadlines). More below...
>
> > With the latest talk about linux tasks vs HLS
> > tasks, what is the relationship between HLS round-robin and linux's
> > SCHED_OTHER, SCHED_FIFO and SCHED_RR?
> Currently, all linux tasks are scheduled in the background
> with respect to HLS tasks (a non-HLS task is scheduled only when
> no HLS tasks are ready). Hence, doing
I thought that all tasks were converted to HLS tasks (under rr2) when
the scheduler is first loaded into the kernel? This seems to be what
/proc/HLS/tasks shows. When any new tasks appear, I assumed they end up
on the default HLS scheduler (rr2) unless otherwise directed. When you
say "background", do you mean rr2? I assumed that the HLS rr2 scheduler
is basically playing the role SCHED_OTHER did before HLS was loaded? The
rr1 scheduler in the hierarchy is confusing me a little bit
(apologies!).
> sched_setsched(SCHED_RR) can be dangerous, because it results
> in doing the opposite of what the user expects... This is what I
> am trying to fix.
>
> > Do SCHED_OTHER tasks -> HLS
> > round-robin? What about SCHED_RR/SCHED_FIFO tasks?
> This is exactly what we have to decide right now... The
> implemented solution (SCHED_OTHER, SCHED_RR, SCHED_FIFO -->
> background with respect to HLS) is not good.
>
> > Is that what you're
> > grappling with at the moment, thinking about a HLS rt scheduler?
> Yes... You can for example set "rt scheduler = rr1" (for the
> standard hierarchy), so that changing a task to SCHED_FIFO or
> SCHED_RR will really increase its priority.
What happens to res1 tasks in this case? Will the sched_setscheduler()
call for a res1 task then fail if there are "demanding" SCHED_FIFO/RR
tasks or vice versa?
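(For concreteness, by "the sched_setscheduler() call" I just mean the
ordinary POSIX call, something like:

    #include <stdio.h>
    #include <sched.h>

    int main(void)
    {
        struct sched_param p;
        p.sched_priority = 50;   /* some real-time priority, 1..99 */
        /* pid 0 = the calling process */
        if (sched_setscheduler(0, SCHED_RR, &p) < 0)
            perror("sched_setscheduler");
        return 0;
    }

so the question is really what HLS maps such a call onto.)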
>
> > Based on the third simulation, I would've thought our application
> > would be okay. However it continues to show missed deadline
> > problems. It could be we need to revisit reservation requirements,
> > but I'm also keen to understand interrupt latency issues and
> > scheduling issues.
> Does your application do some setsched(SCHED_FIFO) (or
> SCHED_RR)? Does it create a high I/O load?
>
Our application only schedules itself under res1 (5ms/6.625ms, i.e.
roughly 75% of the CPU).
It does a fair bit of I/O: in each 6.625ms period, it does some number
crunching, sets up 3 DMA transfers to a cPCI card (1 read, 2 writes,
tens of KB) and does reads/writes to 2 Ethernet cards (low throughput, <
1MB/s); once all that is done, it blocks. It gets woken up by the RTC
interrupt every 1ms and polls the cPCI card time counters to check if
they've ticked over into a new 6.625ms period; if so, it does all its
processing again, etc.; if not, it immediately blocks again. Shortly we'll
drop the RTC 1ms interrupt and polling mechanism, and get the cPCI card
to interrupt every 6.625ms instead. I hope that made sense!
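In case it helps to see the shape of it, the wake-up/poll loop is
essentially the sketch below (simplified: I'm showing the standard
/dev/rtc periodic interrupt as the wake-up source, and
read_cpci_period_counter() / do_period_work() are just placeholders for
the real cPCI access and per-period processing):

    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <linux/rtc.h>

    /* Placeholders for the real cPCI hardware access: */
    extern int  read_cpci_period_counter(void);  /* 6.625ms period counter */
    extern void do_period_work(void);            /* crunching + DMA + Ethernet */

    int main(void)
    {
        unsigned long data;
        int last, now;
        int fd = open("/dev/rtc", O_RDONLY);
        if (fd < 0) { perror("/dev/rtc"); return 1; }

        /* The RTC only does power-of-two rates, so "1ms" is 1024Hz (~0.98ms) */
        ioctl(fd, RTC_IRQP_SET, 1024);
        ioctl(fd, RTC_PIE_ON, 0);

        last = read_cpci_period_counter();
        for (;;) {
            read(fd, &data, sizeof(data));  /* block until the next RTC tick */
            now = read_cpci_period_counter();
            if (now != last) {              /* ticked into a new 6.625ms period */
                do_period_work();
                last = now;
            }                               /* otherwise immediately block again */
        }
    }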
Regards,
Paul.