From: Michael S. <mi...@c6...> - 2004-01-25 15:41:12
Hi!

tmbinc and I have been talking about how to implement interrupts on the GameCube. The GameCube has an interrupt controller with 14 sources. Each source is a device, and each device can trigger one or more different interrupts over its single line to the interrupt controller. This is quite similar to other architectures: interrupt 8, in this case, comes from the VI (Video Interface), and the VI has 4 different sources for interrupts.

The original GameCube system software presents a linear system to the user. It is aware of all possible interrupt sources at the second level and presents them in a single list. The user registers for one specific VI interrupt, for example, rather than registering for the common VI interrupt and then checking which one it was using the VI registers. GCLib also implements this behavior.

It is my understanding that it is common in Linux to implement only the first level in the kernel interrupt handling code, i.e. 14 sources, and that drivers must do the rest manually. This is done because the generic kernel interrupt code cannot be aware of all the different devices in a system - on a normal computer you can easily change the configuration and add or remove devices, like PCI cards. On the GameCube, the number of different interrupts is fixed; there is no way to add a device that can trigger another interrupt, so the generic interrupt code could handle all interrupts down to the sources and pretend that this two-level design didn't exist.

There are pros for each side. Presenting two levels is more the Linux way, while linearizing the interrupts is easier for driver developers and has no real disadvantages. I tend to prefer the Linux way; tmbinc seems to prefer the GameCube SDK way.

What are your opinions?

Michael
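To make the two models concrete, here is a small self-contained C sketch (it compiles in user space; every name in it, such as IRQ_VI, LIN_VI_RETRACE or vi_cause, is invented purely for illustration and does not come from real hardware documentation). It simulates dispatch with 14 first-level handlers that demultiplex themselves, versus one flat table where every second-level event has its own slot:

    #include <stdio.h>

    /* --- Two-level model: 14 first-level sources, drivers demux --- */
    enum { IRQ_VI = 8, NR_FIRST_LEVEL = 14 };

    /* pretend "cause" register of the VI: which of its 4 events fired */
    static unsigned int vi_cause = 0x1;          /* bit 0 = retrace (invented) */

    static void vi_handler(void)                 /* registered for source 8 only */
    {
        if (vi_cause & 0x1) printf("VI: retrace\n");
        if (vi_cause & 0x2) printf("VI: position match\n");
        /* ... the driver itself checks the remaining VI events ... */
    }

    static void (*first_level[NR_FIRST_LEVEL])(void) = { [IRQ_VI] = vi_handler };

    /* --- Linearized model: every second-level event gets its own number --- */
    enum { LIN_VI_RETRACE = 32, LIN_VI_POSITION = 33, NR_LINEAR = 64 };

    static void retrace_handler(void) { printf("retrace (linear)\n"); }

    static void (*linear[NR_LINEAR])(void) = { [LIN_VI_RETRACE] = retrace_handler };

    int main(void)
    {
        /* two-level: the generic code only knows "source 8 fired" and
         * leaves the rest to the VI driver */
        first_level[IRQ_VI]();

        /* linearized: the generic code already decoded the VI cause
         * register and calls the exact handler the driver registered */
        linear[LIN_VI_RETRACE]();
        return 0;
    }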
From: Richard E. <ric...@br...> - 2004-01-25 17:18:47
> There are pros for each side. Presenting two levels is more the Linux
> way, while linearizing the interrupts is easier for driver developers
> and has no real disadvantages. I tend to prefer the Linux way; tmbinc
> seems to prefer the GameCube SDK way.

I really prefer the two-level Linux way of doing it, simply because I think it's cleaner. In my opinion the first level should not be aware of the different interrupts for the different devices; that should be up to the specific driver to deal with. Isn't it quite obvious that it's up to the second-level DVD driver (or whatever driver) to find out whether it's a dvd_transfer_complete or a dvd_transfer_error interrupt?

This is just my point of view anyway....

Richard Eng
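A rough kernel-style sketch of what this would mean in practice, i.e. the DVD driver itself deciding which event it got. The handler signature is the 2.6-era one; DI_STATUS, DI_TC, DI_ERR, di_base and the helper functions are all made-up names used only to show the shape of the code:

    /* sketch only -- register and bit names are invented */
    static irqreturn_t dvd_interrupt(int irq, void *dev_id, struct pt_regs *regs)
    {
            u32 status = readl(di_base + DI_STATUS);

            if (status & DI_TC)              /* dvd_transfer_complete */
                    complete(&dvd_xfer_done);
            else if (status & DI_ERR)        /* dvd_transfer_error */
                    dvd_handle_error(status);

            return IRQ_HANDLED;
    }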
From: Franck <ro...@de...> - 2004-01-25 18:04:01
erf, the message has forked :/

On Sun, 25 Jan 2004 18:19:46 +0100, Richard Eng wrote:
> I really prefer the two-level Linux way of doing it, simply because I
> think it's cleaner. In my opinion the first level should not be aware
> of the different interrupts for the different devices; that should be
> up to the specific driver to deal with.
> Isn't it quite obvious that it's up to the second-level DVD driver (or
> whatever driver) to find out whether it's a dvd_transfer_complete or a
> dvd_transfer_error interrupt?

I agree with that, and even if the number of interrupts is fixed and there's no way to add foreign hardware, we should keep the Linux way of handling interrupts...

> This is just my point of view anyway....

And mine, but we need more opinions on this; the drivers part depends on it and we should take care of it ;)

Franck
From: Felix D. <tm...@el...> - 2004-01-25 18:44:11
(Sorry for not having proper f'ups -- is it possible to post to the newsgroup (archive) instead of writing to the mailing list? I'd prefer the newsgroup interface.)

Anyway, as Mist said, I prefer the linearized view of IRQs, at least for the EXI IRQs.

Take a BBA driver as an example. The BBA driver wants to handle the EXI2 EXTIRQ. With my model, this would just be a "request_irq", as every Linux driver would do it. With the two-level way, we would have to implement an exi_request_irq, which is IMHO something that the original request_irq could handle fine. As we can't hardcode the interrupt handling (because it might be a modem or a BBA or anything), we would effectively duplicate a Linux API. Why should we do this?

For example, on x86, where you have a cascaded IRQ, the interrupts are linearized, too. You don't have to handle IRQ 2 to see which high IRQ you have to handle; you simply hook the high IRQ and everything is fine.

I think EXI handling is different from DVD handling. I certainly don't want to linearize everything, only the EXI handling, since there are multiple drivers attaching to different EXI IRQs. VI, DVD etc. will each be ONE driver, so we wouldn't have to implement an API but could simply hardcode the several sub-interrupts.

Felix
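In driver terms, the two registrations being compared here might look roughly like this. request_irq itself is the normal 2.6 kernel API; GC_IRQ_EXI2_EXTIRQ, exi_request_irq and the EXI_* constants are invented names for the sake of the comparison:

    /* Linearized view: the BBA driver hooks its EXI interrupt directly,
     * exactly like any other Linux driver would. */
    err = request_irq(GC_IRQ_EXI2_EXTIRQ, bba_interrupt, 0, "bba", dev);

    /* Two-level view: the EXI layer would have to export its own
     * registration call -- effectively a duplicate of request_irq. */
    err = exi_request_irq(EXI_CHANNEL_2, EXI_EVENT_EXTIRQ, bba_interrupt, dev);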
From: Michael S. <st...@in...> - 2004-01-25 18:47:25
On Jan 25, 2004, at 7:45 PM, Felix Domke wrote:
> (Sorry for not having proper f'ups -- is it possible to post to the
> newsgroup (archive) instead of writing to the mailing list? I'd prefer
> the newsgroup interface.)

news://news.gmane.org/gmane.linux.ports.game-cube.devel
http://news.gmane.org/thread.php?group=gmane.linux.ports.game-cube.devel

Gmane allows that.

Michael
From: Groepaz <gr...@gm...> - 2004-01-25 15:52:26
On Sunday 25 January 2004 16:41, Michael Steil wrote:
> easily change the configuration and add or remove devices, like PCI
> cards. On the GameCube, the number of different interrupts is fixed;
> there is no way to add a device that can trigger another interrupt,

mmmh... you could always have a custom device and/or a different device connected to any of the externally (or even internally) available ports that can trigger an interrupt.... "hardcoding" these kind of limits the potential use a bit too much IMHO.

gpz
From: Owen W. <ow...@ni...> - 2004-01-25 18:53:52
> There are pros for each side. Presenting two levels is more the Linux
> way, while linearizing the interrupts is easier for driver developers
> and has no real disadvantages. I tend to prefer the Linux way; tmbinc
> seems to prefer the GameCube SDK way.

Personally, although I'm not a kernel developer, I prefer the 'Linux way' of doing things, as it makes more sense for the driver to work out exactly what kind of interrupt was generated, rather than the kernel needing to know the 21 different types of DVD interrupts. Although there is little chance of any extra hardware being added, so the number of interrupts will always be the same, the 'Linux way' is how this is currently implemented in Linux, so why should it change just because this is a GameCube?

My opinion anyway...

Owen
From: poulpy <dam...@fr...> - 2004-01-25 22:25:31
> There are pros for each side. Presenting two levels is more the Linux
> way, while linearizing the interrupts is easier for driver developers
> and has no real disadvantages. I tend to prefer the Linux way; tmbinc
> seems to prefer the GameCube SDK way.

The Linux way sounds good to me; two levels are a cleaner way to do the job. Let drivers deal with second-level interrupts. We could shorten some paths like this one, but I do believe we're here to bring Linux to the GC nicely, not as some kind of rubber patch.

My two (euro)cents...

Damien
From: Joe M. <jo...@no...> - 2004-01-26 01:37:02
On Sun, Jan 25, 2004 at 04:41:01PM +0100, Michael Steil wrote:
> It is my understanding that it is common in Linux to implement only the
> first level in the kernel interrupt handling code, i.e. 14 sources, and
> that drivers must do the rest manually. This is done because the
> generic kernel interrupt code cannot be aware of all the different
> devices in a system - on a normal computer you can easily change the
> configuration and add or remove devices, like PCI cards. On the
> GameCube, the number of different interrupts is fixed; there is no way
> to add a device that can trigger another interrupt, so the generic
> interrupt code could handle all interrupts down to the sources and
> pretend that this two-level design didn't exist.
>
> There are pros for each side. Presenting two levels is more the Linux
> way, while linearizing the interrupts is easier for driver developers
> and has no real disadvantages. I tend to prefer the Linux way; tmbinc
> seems to prefer the GameCube SDK way.
>
> What are your opinions?

If the two-stage behaviour only exists to work around a problem (reconfigurable devices) that we simply don't have here, I see no reason to stick with it. If we decide later that the kernel should support modifying the GC hardware, we can always add an "other" interrupt for each of the 14 sources.

Joe
From: Free T. C. <Fre...@fr...> - 2004-01-26 09:12:54
Hi!

I don't want to say stupid things, but IIRC the Linux kernel can handle an interrupt quickly (first level) but defer its actual treatment until later (second level) to improve speed/latency, can't it? If so, it would be better to use the Linux way...

Free The Cube.

Michael Steil wrote:
> tmbinc and I have been talking about how to implement interrupts on the
> GameCube. [...]
>
> There are pros for each side. Presenting two levels is more the Linux
> way, while linearizing the interrupts is easier for driver developers
> and has no real disadvantages. I tend to prefer the Linux way; tmbinc
> seems to prefer the GameCube SDK way.
>
> What are your opinions?
From: Michael S. <st...@in...> - 2004-01-26 09:16:50
On Jan 26, 2004, at 10:12 AM, Free The Cube wrote:
> I don't want to say stupid things, but IIRC the Linux kernel can handle
> an interrupt quickly (first level) but defer its actual treatment until
> later (second level) to improve speed/latency, can't it?
> If so, it would be better to use the Linux way...

What you are talking about sounds to me like the top half and bottom half. There is a minimal interrupt handler, the top half. The bottom half is the actual code that does something, but it gets called by the scheduler later, not by the interrupt directly.

If that's what you were thinking about - that's something else.

Michael
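For reference, the top half / bottom half split described here looks roughly like this with a tasklet (2.6-era API; the VI-related names and the ack_vi_interrupt helper are invented for the example):

    /* bottom half: runs later in softirq context, scheduled by the top half */
    static void vi_do_work(unsigned long data)
    {
            /* the actual, possibly slow, processing happens here */
    }
    DECLARE_TASKLET(vi_tasklet, vi_do_work, 0);

    /* top half: the real interrupt handler, kept as short as possible */
    static irqreturn_t vi_interrupt(int irq, void *dev_id, struct pt_regs *regs)
    {
            ack_vi_interrupt();              /* invented helper: clear the source */
            tasklet_schedule(&vi_tasklet);   /* defer the heavy work */
            return IRQ_HANDLED;
    }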
From: Free T. C. <Fre...@fr...> - 2004-01-26 09:36:02
Michael Steil wrote:
> What you are talking about sounds to me like the top half and bottom
> half. There is a minimal interrupt handler, the top half. The bottom
> half is the actual code that does something, but it gets called by the
> scheduler later, not by the interrupt directly.
>
> If that's what you were thinking about - that's something else.

Yes, I was thinking about 'top half' and 'bottom half'... So I did say stupid things, sorry :)!

Free The Cube.