From: David B. <da...@pa...> - 2005-12-01 18:31:35
On Wednesday 30 November 2005 11:17 pm, Vitaly Wool wrote:
> Mark Underwood wrote:
>
> >> However, there are also some advantages of our core compared to
> >> David's I'd like to mention:
> >>
> >> - it can be compiled as a module
>
> > So can David's. You can use BIOS tables, in which case you must
> > compile the SPI core into the kernel, but you can also use
> > spi_new_device, which allows the SPI core to be built as a module
> > (and is how I am using it).
>
> You limit the functionality, so it's not the case.

As noted in my comparison of last week (which you're still ignoring):

- Mine lets board-specific device tables be declared in the relevant
  arch_setup() thing (board-*.c).

Both frameworks also allow board-specific code to dynamically declare
the devices later, with binary (Dave's) or parsed-text (Dmitry's)
descriptions.

What Mark said was that in this case he used the "late" init. You seem
to be saying he's not allowed to do that. Which is nonsense; there are
distinct mechanisms for the good reason that "late" init doesn't work
so well without dynamic discovery ... which SPI itself doesn't support.
Hence the need for board-specific "this hardware exists" tables.
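To make that concrete, here's a minimal sketch of both declaration
styles. The modalias value, the numbers, and the my_* names are made
up for illustration; the spi_board_info / spi_register_board_info() /
spi_new_device() calls are the ones from my posted patches:

#include <linux/kernel.h>
#include <linux/init.h>
#include <linux/spi/spi.h>

/* Static style: listed in board-*.c, handed to the core early. */
static struct spi_board_info my_board_info[] = {
	{
		.modalias	= "ads7846",	/* e.g. a touchscreen */
		.bus_num	= 1,		/* which SPI controller */
		.chip_select	= 0,
		.max_speed_hz	= 1200000,
	},
};

static void __init my_board_init(void)
{
	spi_register_board_info(my_board_info, ARRAY_SIZE(my_board_info));
}

/* Dynamic "late" style: once some code (modular or not) has found
 * the spi_master, it can declare a device on it directly.
 */
static int my_late_init(struct spi_master *master)
{
	struct spi_device *spi;

	spi = spi_new_device(master, &my_board_info[0]);
	return spi ? 0 : -ENODEV;
}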
> If there's more than one SPI controller onboard, spi_write_then_read
> will serialize the transfers ...

Which, as has been pointed out, would be a trivial thing to fix if
anyone actually had that problem (first sketch below my signature).
Sure, it'd incur the cost of a kmalloc on at least some paths --
serializing in the slab layer instead! -- but that's one price of
using convenience helpers rather than performance-oriented calls.

> Moreover, if, say, two kernel threads with different priorities are
> working with two SPI controllers respectively, *priority inversion*
> will happen.

That characteristic is inherited from semaphores (or were they updated
with PREEMPT_RT?), and it's shared with most I/O queues in the system.
It's not something to blame on any line of code I wrote.

Oh, and I noticed a priority inversion in your API which shows up with
one SPI controller managing two devices. Whoops! I'd far rather have
such inversions be implementation artifacts; it's easy to patch an
implementation, but hard to change all users of an API.

> >> - it's more adapted for use in real-time environments
> >> - it's not so lightweight, but it leaves less effort for the bus
> >>   driver developer.
> >
> > But also less flexibility. A core layer shouldn't _force_ a policy
>
> Nope, it's just a default policy.

One that every driver pays the price for: allocating a task even when
it doesn't need one; every call going through a midlayer that wants to
take over queue management policy; and more. (Unless you made a big
un-remarked change in a patch you called "refresh" ...)

> > on a bus driver. I am currently developing an adapter driver for
> > David's system, and I wouldn't say that the core is making me do
> > things I think the core should do. Could you please provide
> > examples of where you think David's SPI core requires 'effort'?
>
> The main ones are:
> - the need to call 'complete' in the controller driver

So you think it's better to have consistent semantics be optional?
That seems to be the notion behind your spi_transfer() call, which
can't decide whether it's going to be synchronous or asynchronous.
Instead, it decided to be error prone and be both. :)

For the record, calling complete() amounts to two lines in a
controller driver; see the second sketch below my signature.

> - the need to implement policy in the controller driver

The "policy" in question is something that sometimes needs to be
board-specific -- priority to THAT device, sync with THIS external
signal, etc. -- which is why I see it as a drawback that you insist
the core implement one policy.

And one policy is painfully easy to implement: FIFO, processing the
requests in the order they arrive. It's easy to write, even with
spinlocks, in a dozen lines of code (third sketch below). If anyone
has a hard time writing that, they shouldn't be trying to write a
device driver.

- Dave
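First sketch: the per-call bounce buffer fix for spi_write_then_read()
serialization. A sketch only; the helper name is made up, and it
assumes the spi_message/spi_transfer/spi_sync() API from my patches,
with the list-based message helpers:

#include <linux/slab.h>
#include <linux/string.h>
#include <linux/spi/spi.h>

/* Like spi_write_then_read(), but with a kmalloc'd per-call bounce
 * buffer instead of one static buffer shared -- and hence serialized --
 * across all controllers.
 */
int my_write_then_read(struct spi_device *spi,
		const u8 *txbuf, unsigned n_tx,
		u8 *rxbuf, unsigned n_rx)
{
	struct spi_transfer	x[2];
	struct spi_message	message;
	u8			*buf;
	int			status;

	buf = kmalloc(n_tx + n_rx, GFP_KERNEL);
	if (!buf)
		return -ENOMEM;

	spi_message_init(&message);
	memset(x, 0, sizeof x);

	memcpy(buf, txbuf, n_tx);		/* half-duplex write ... */
	x[0].tx_buf = buf;
	x[0].len = n_tx;
	spi_message_add_tail(&x[0], &message);

	x[1].rx_buf = buf + n_tx;		/* ... then read */
	x[1].len = n_rx;
	spi_message_add_tail(&x[1], &message);

	status = spi_sync(spi, &message);
	if (status == 0)
		memcpy(rxbuf, buf + n_tx, n_rx);

	kfree(buf);
	return status;
}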
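Second sketch: the whole of "the need to call 'complete'" in a
controller driver, assuming the spi_message fields (status, complete,
context) from my patches; my_msg_done() is a made-up name:

#include <linux/spi/spi.h>

/* Called once the last spi_transfer of a message has finished
 * (or failed): record the result, then notify the submitter.
 */
static void my_msg_done(struct spi_message *m, int status)
{
	m->status = status;		/* zero, or negative errno */
	m->complete(m->context);	/* the callback queued with it */
}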
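Third sketch: the dozen-line FIFO policy. The my_* names are made up;
it assumes a controller driver whose bottom half runs from a
work_struct, and that spi_message carries its "queue" list_head:

#include <linux/list.h>
#include <linux/spinlock.h>
#include <linux/workqueue.h>
#include <linux/spi/spi.h>

struct my_ctlr {				/* per-controller state */
	spinlock_t		lock;
	struct list_head	queue;		/* of spi_message */
	struct work_struct	work;		/* runs the transfer loop */
};

/* Submit path: FIFO means "add at the tail", then kick the bottom half. */
static int my_queue_msg(struct my_ctlr *c, struct spi_message *m)
{
	unsigned long flags;

	spin_lock_irqsave(&c->lock, flags);
	list_add_tail(&m->queue, &c->queue);
	spin_unlock_irqrestore(&c->lock, flags);
	schedule_work(&c->work);
	return 0;
}

/* Bottom half: FIFO means "take from the head", in arrival order. */
static struct spi_message *my_next_msg(struct my_ctlr *c)
{
	struct spi_message *m = NULL;
	unsigned long flags;

	spin_lock_irqsave(&c->lock, flags);
	if (!list_empty(&c->queue)) {
		m = list_entry(c->queue.next, struct spi_message, queue);
		list_del(&m->queue);
	}
	spin_unlock_irqrestore(&c->lock, flags);
	return m;
}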