From: Maxim L. <max...@gm...> - 2009-05-28 14:35:06
|
Hi,

My notebook has an IR receiver, the so-called ENE CIR. It is actually an embedded controller with an IR sampler inside. It is found in my Acer Aspire 5720G and is exposed in the ACPI tables as ENE0100. While its Windows driver works only with MS-approved remotes (and, from what I have seen on the web, only the bundled remote works with it), it is actually a generic receiver and works fine here with all my remotes (since I reverse-engineered it).

The only catch is that it holds only 8 bytes of IR timing, divided into two halves: one is active while the other is supposed to be read by the host. Thus my current userspace app sucks, since it can't use interrupts.

I want to write a kernel driver for this device, but I don't know whether I should write it against LIRC or against the in-kernel IR framework recently posted on LKML. Do you expect LIRC to be merged any time soon? What are your suggestions?

Best regards,
Maxim Levitsky |
From: <li...@ba...> - 2009-05-29 22:05:08
|
Hi!

Maxim Levitsky "max...@gm..." wrote:
[...]
> I want to write a kernel driver for this device, but I don't know if I
> need to write it against lirc, or recently posted on LKML in-kernel IR
> framework.
[...]
> What are your suggestions?

Just implement a LIRC driver.

The last suggestion that seemed to be accepted was that the IR data is received by the LIRC drivers and passed to an in-kernel decoder if the user configures it so. Otherwise it is passed to lircd in user space.

The in-kernel decoder code was far from complete and I still have concerns about whether it can work in all situations.

Christoph |
From: Maxim L. <max...@gm...> - 2009-05-29 23:44:35
|
On Fri, 2009-05-29 at 23:50 +0200, Christoph Bartelmus wrote:
> Hi!
>
> Maxim Levitsky "max...@gm..." wrote:
> [...]
> > I want to write a kernel driver for this device, but I don't know if I
> > need to write it against lirc, or recently posted on LKML in-kernel IR
> > framework.
> [...]
> > What are your suggestions?
>
> Just implement a LIRC driver.
>
> The last suggestion that seemed to be accepted was that the IR data is
> received by the LIRC drivers and is passed to a in-kernel decoder if the
> user configures it so. Otherwise it's passed to lircd in user-space.
>
> The in-kernel decoder code was far from complete and I still have concerns
> that it can work in all situations.

Me too. In my opinion, kernel drivers should pass raw IR data to userspace, and lircd should process it and use uinput to send it back to the kernel. Here are two reasons why:

1 - Obscure remotes: here is a real-world example. I have a remote for an air conditioner unit. The remote has 'state': it displays the current settings on its LCD and uploads all of them whenever any setting changes. It sends quite a big packet, encoded with typical pulse-distance encoding. Thus the typical approach of sending keys won't work, and besides, there is really no point in adding support for this in the kernel. Instead I could write my own (trivial) program to do the decoding and then use the decoded data as I wish (actually I already have one...). Thus I really think the kernel should at least let userspace see the raw data, and not via some 'debug' interface. This still doesn't rule out the in-kernel approach.

2 - Userspace drivers: there will always be pure userspace drivers, like the one that records from a sound card. These won't have access to the in-kernel decoder, so they will have to use their own decoder and, most importantly, they won't be able to use the configfs interface. Thus we end up with two decoders, two config interfaces, etc...

But the idea of having a state machine is really great, and thus I think that:

1 - We should clean up the kernel drivers and merge them upstream. These drivers should expose a unified interface for reading the IR timings (on a unified scale, in microseconds I think). Reading/writing should be done via a char device, as is already done.

2 - Make lirc use uinput and _deprecate_ its own interface (and at the same time clean it up). It is even possible to rewrite the userspace part of lirc, the kernel part, or both; they aren't that big.

Also it would be nice to make lirc automatically guess the protocol using these state machines, thus allowing you to create more 'high level' config files for lirc.

That's all, that's what I think.

Best regards,
Maxim Levitsky |
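Pulse-distance decoding of the kind described above (a mark of fixed length, with the gap after it encoding the bit) can be sketched in a few lines of userspace code. This is a hypothetical minimal example, not Maxim's actual program; the NEC-style timing constants and tolerance are assumptions for illustration:

```python
# Minimal pulse-distance decoder sketch (NEC-style timings, in microseconds).
# All threshold values below are assumptions for illustration.

HEADER_PULSE = 9000   # leading mark
HEADER_SPACE = 4500   # leading space
BIT_PULSE = 562       # every bit starts with a ~562 us mark
SHORT_SPACE = 562     # short space -> bit 0
LONG_SPACE = 1687     # long space  -> bit 1
TOLERANCE = 0.35      # accept +/-35% jitter

def matches(value, expected):
    return abs(value - expected) <= expected * TOLERANCE

def decode(samples):
    """Decode a list of (pulse_us, space_us) pairs into a list of bits."""
    if not samples or not (matches(samples[0][0], HEADER_PULSE)
                           and matches(samples[0][1], HEADER_SPACE)):
        return None  # no valid header
    bits = []
    for pulse, space in samples[1:]:
        if not matches(pulse, BIT_PULSE):
            return None  # malformed mark
        if matches(space, SHORT_SPACE):
            bits.append(0)
        elif matches(space, LONG_SPACE):
            bits.append(1)
        else:
            break  # trailing gap ends the frame
    return bits

# Example: header followed by the bit pattern 1, 0, 1
frame = [(9000, 4500), (560, 1690), (570, 560), (550, 1680)]
print(decode(frame))  # -> [1, 0, 1]
```

A driver exposing raw timings in microseconds, as proposed, is all such a decoder needs; everything protocol-specific stays in userspace.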
From: <li...@ba...> - 2009-05-30 08:57:22
|
Hi! Maxim Levitsky "max...@gm..." wrote: [...] > But the idea of having a state machine is really great, and thus I think > that: > > 1 - we should cleanup kernel drivers, and merge them upstream, *we* sounds good and actually this is the only problem that there simply are not enough people helping here. But due to the big efforts of Jarod and Janne, the code quality is now much much better than before. [...] > 2 - make lirc use uinput, and _depricate_ its own interface. > (and in same time clean it up) lircd already does have uinput support... > Also it would be nice to make lirc automaticly guess the protocol, > using these state machines, thus allowing you to create more 'high > level' config files for lirc. This already is possible if you enable dynamic code generation and simply put all config files from the remotes/generic folder into your lircd.conf. This will work for simple protocols, but I can tell several cases where this definitely will not work and is not possible. Christoph |
From: Maxim L. <max...@gm...> - 2009-05-30 09:29:47
|
On Sat, 2009-05-30 at 10:56 +0200, Christoph Bartelmus wrote:
> Hi!
>
> Maxim Levitsky "max...@gm..." wrote:
> [...]
> > But the idea of having a state machine is really great, and thus I think
> > that:
> >
> > 1 - we should cleanup kernel drivers, and merge them upstream,
>
> *we* sounds good and actually this is the only problem that there simply
> are not enough people helping here. But due to the big efforts of Jarod
> and Janne, the code quality is now much much better than before.

I can help a bit. I am going to write a new lirc driver anyway, so I could look at the existing drivers and at least do some cleanups; lirc drivers have the property of being quite short. However, I don't have many receivers (just one, without a driver...), so I don't know what I could do beyond trivial cleanups. What are the current problems with the code?

I think the first, and most important, thing is to clean up the kernel drivers. (I don't have much kernel experience; I have an idea what it involves, I even have a few semi-trivial patches there, etc.) Anyway, I will start reading the lirc sources right away. How about focusing on merging the common core and a few selected drivers?

> [...]
> > 2 - make lirc use uinput, and _depricate_ its own interface.
> > (and in same time clean it up)
>
> lircd already does have uinput support...

Very nice.....

> > Also it would be nice to make lirc automaticly guess the protocol,
> > using these state machines, thus allowing you to create more 'high
> > level' config files for lirc.
>
> This already is possible if you enable dynamic code generation and simply
> put all config files from the remotes/generic folder into your lircd.conf.
> This will work for simple protocols, but I can tell several cases where
> this definitely will not work and is not possible.

Very nice indeed!

Best regards,
Maxim Levitsky |
From: Maxim L. <max...@gm...> - 2009-05-30 11:19:28
|
On Sat, 2009-05-30 at 10:56 +0200, Christoph Bartelmus wrote:
> Hi!
>
> Maxim Levitsky "max...@gm..." wrote:
> [...]
> > But the idea of having a state machine is really great, and thus I think
> > that:
> >
> > 1 - we should cleanup kernel drivers, and merge them upstream,
>
> *we* sounds good and actually this is the only problem that there simply
> are not enough people helping here. But due to the big efforts of Jarod
> and Janne, the code quality is now much much better than before.

While I am by no means able to judge the quality of the code, the lirc code does indeed seem to be OK; at least I don't see any blatant violations, etc. Maybe it should be submitted to LKML, so they could review it and tell us what they don't like? Or not?

Best regards,
Maxim Levitsky |
From: Jon S. <jon...@gm...> - 2009-05-30 14:23:40
|
On Fri, May 29, 2009 at 5:50 PM, Christoph Bartelmus <li...@ba...> wrote:
> Hi!
>
> Maxim Levitsky "max...@gm..." wrote:
> [...]
>> I want to write a kernel driver for this device, but I don't know if I
>> need to write it against lirc, or recently posted on LKML in-kernel IR
>> framework.
> [...]
>> What are your suggestions?
>
> Just implement a LIRC driver.
>
> The last suggestion that seemed to be accepted was that the IR data is
> received by the LIRC drivers and is passed to a in-kernel decoder if the
> user configures it so. Otherwise it's passed to lircd in user-space.
>
> The in-kernel decoder code was far from complete and I still have concerns
> that it can work in all situations.

Let's work on the design of this. My embedded systems are using remotes that transmit the common protocols, and the in-kernel state machine handles them fine. I also agree that there are weird devices and protocols that the in-kernel state machine won't handle, and that there are user-space implementations like the IR receiver via a sound card.

There are a whole lot of things that haven't been designed. For example, what does a minimal in-kernel IR device driver look like? What is its sysfs interface? I attempted to make these drivers as small as possible. There is a sysfs attribute that holds the raw input from the device, etc. I just did a first version of this model; it needs more design effort. For example, it would be simple to add an attribute that stops the device pulses from being forwarded to the state machine and requires use of the user-space decoder.

My goal would be to handle the 90% of common hardware/remotes with the in-kernel solution and then pass the rest off to user space. The in-kernel solution is only about 30KB of code.

Last documentation I posted:

New release of in-kernel IR support implementing evdev support. The goal of in-kernel IR is to integrate IR events into the evdev input event queue and maintain ordering of events from all input devices.
Still looking for help with this project. Git tree: git://github.com/jonsmirl/digispeaker.git

ir-core is now a module. Minimal IR code is built into input; that code can't be separated from input. The rest of the core IR code is moved to a module. The IR code in input will be passive if ir-core isn't loaded.

Now uses configfs to build mappings from remote buttons to key strokes. When ir-core loads it creates /config/remotes. Make a directory for each remote you have; this will cause a new input device to be created. Inside these directories make a directory for each key on the remote. In the key directory, fill in the protocol, device, command, and keycode attributes. Since this is configfs, all of this can be easily scripted.

Now when a key is pressed on a remote, the configfs directories are searched for a match on protocol, device, command. If a match is found, a key stroke corresponding to keycode is created and sent on the input device that was created when the directory for the remote was made.

The configfs directories are pretty flexible. You can use them to map multiple remotes to the same key stroke, or send a single button push to multiple apps. To do the mapping it uses configfs (part of the kernel). The main directory is remotes. You use a shell script to build mappings between the IR event and the key stroke:

mkdir /config/remotes/sony -- this creates a new evdev device
mkdir remotes/sony/one
echo 7 >remotes/sony/one/protocol
echo 264 >remotes/sony/one/command
echo 2 >remotes/sony/one/keycode

This transforms a push of the 1 button on my remote into a key stroke for KEY_1.

* configfs root
* --remotes
* ----specific remote
* ------keymap
* --------protocol
* --------device
* --------command
* --------keycode
* ------repeat keymaps
* --------....
* ----another remote
* ------more keymaps

You can map the 1 button from multiple remotes to KEY_1 if you want. Or you can use a single remote to create multiple virtual keyboards.

From last release... Raw mode.
There are three sysfs attributes - ir_raw, ir_carrier, ir_xmitter. Read from ir_raw to get the raw timing data from the IR device. Set the carrier and active xmitters, then copy raw data to ir_raw to send. These attributes may be better behind a debug switch. You would use raw mode when decoding a new protocol; after you figure out the new protocol, write an in-kernel encoder/decoder for it. The in-kernel code is tiny, about 20K including a driver.

From last post...

Note that user-space IR device drivers can use the existing support in evdev to inject events into the input queue.

Send and receive are implemented. Received IR messages are decoded and sent to user space as input messages. Send is done via an IOCTL on the input device.

Two drivers are supplied. mceusb2 implements send and receive support for the Microsoft USB IR dongle. The GPT driver implements receive-only support for a GPT pin - a GPT is a GPIO with a timer attached.

Code is only lightly tested. Encoders and decoders have not been written for all protocols. Repeat is not handled for any protocol. I'm looking for help. There are 15 more existing LIRC drivers.

> Christoph
>
> ------------------------------------------------------------------------------
> Register Now for Creativity and Technology (CaT), June 3rd, NYC. CaT
> is a gathering of tech-side developers & brand creativity professionals. Meet
> the minds behind Google Creative Lab, Visual Complexity, Processing, &
> iPhoneDevCamp as they present alongside digital heavyweights like Barbarian
> Group, R/GA, & Big Spaceship. http://p.sf.net/sfu/creativitycat-com

--
Jon Smirl
jon...@gm... |
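The configfs lookup described above (search the configured key directories for a matching protocol/device/command triple, then emit the mapped keycode) is, at its core, a small table lookup. A hypothetical userspace sketch in Python; the device value 0 and the entry names are illustrative assumptions, while 2 really is KEY_1 in linux/input-event-codes.h:

```python
# Sketch of the (protocol, device, command) -> keycode mapping that the
# configfs directories implement in-kernel. Entries are illustrative.

KEY_1 = 2  # KEY_1 in linux/input-event-codes.h

# One entry per configfs key directory.
keymap = {
    # (protocol, device, command): keycode
    (7, 0, 264): KEY_1,  # the "remotes/sony/one" example above (device assumed 0)
}

def lookup(protocol, device, command):
    """Return the keycode for a decoded IR event, or None if unmapped."""
    return keymap.get((protocol, device, command))

print(lookup(7, 0, 264))  # -> 2
print(lookup(7, 0, 999))  # -> None (unmapped button is ignored)
```

Because the map is keyed on the full triple, two remotes whose buttons decode to different triples can still be pointed at the same keycode, which is the "map multiple remotes to the same key stroke" case described above.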
From: Maxim L. <max...@gm...> - 2009-05-30 14:43:07
|
On Sat, 2009-05-30 at 10:23 -0400, Jon Smirl wrote: > On Fri, May 29, 2009 at 5:50 PM, Christoph Bartelmus <li...@ba...> wrote: > > Hi! > > > > Maxim Levitsky "max...@gm..." wrote: > > [...] > >> I want to write a kernel driver for this device, but I don't know if I > >> need to write it against lirc, or recently posted on LKML in-kernel IR > >> framework. > > [...] > >> What are your suggestions? > > > > Just implement a LIRC driver. > > > > The last suggestion that seemed to be accepted was that the IR data is > > received by the LIRC drivers and is passed to a in-kernel decoder if the > > user configures it so. Otherwise it's passed to lircd in user-space. > > > > The in-kernel decoder code was far from complete and I still have concerns > > that it can work in all situations. > > Let's work on the design of this. My embedded systems they are using > remotes that transmit the common protocols, the in-kernel state > machine handles them fine. I also agree that there are weird device > and protocols that the in-kernel statement won't handle. I also agree > that there are user space implementations like the IR receiver via a > sound card. > > There are a whole lot of things that haven't been designed. For > example what does a minimal in-kernel IR device driver look like? What > is it's sysfs interface? I attempted to make these drivers as small as > possible. There is a sysfs attribute that holds the raw input from > the device, etc. I just did a first version of this model, it needs > more design effort. For example it would be simple to add an attribute > that stops the device pulses from being forwarded to the state machine > and require use of the user space decoder. > > My goal would be to handle the 90% of common hardware/remotes with the > in-kernel solution and then pass the rest off to user space. The > in-kernel solution is only about 30KB of code. 
The problem is that for these 10% of remotes, one will have to create a separate decoder, which is OK, but one will also have to create a keycode <-> input-keycode mapper similar to your configfs implementation. And this is bad.

Also, one might like to do weird things with standard-protocol remotes. You already had a request to create a 'shift' key. Again, why not process the data in user space? You don't have to use lircd; you could write your own daemon with the same state machines.

On the bonus side, it will be much easier for you to get your code included in the kernel.

Best regards,
Maxim Levitsky |
From: Jon S. <jon...@gm...> - 2009-05-30 15:56:26
|
On Sat, May 30, 2009 at 10:42 AM, Maxim Levitsky <max...@gm...> wrote: > On Sat, 2009-05-30 at 10:23 -0400, Jon Smirl wrote: >> On Fri, May 29, 2009 at 5:50 PM, Christoph Bartelmus <li...@ba...> wrote: >> > Hi! >> > >> > Maxim Levitsky "max...@gm..." wrote: >> > [...] >> >> I want to write a kernel driver for this device, but I don't know if I >> >> need to write it against lirc, or recently posted on LKML in-kernel IR >> >> framework. >> > [...] >> >> What are your suggestions? >> > >> > Just implement a LIRC driver. >> > >> > The last suggestion that seemed to be accepted was that the IR data is >> > received by the LIRC drivers and is passed to a in-kernel decoder if the >> > user configures it so. Otherwise it's passed to lircd in user-space. >> > >> > The in-kernel decoder code was far from complete and I still have concerns >> > that it can work in all situations. >> >> Let's work on the design of this. My embedded systems they are using >> remotes that transmit the common protocols, the in-kernel state >> machine handles them fine. I also agree that there are weird device >> and protocols that the in-kernel statement won't handle. I also agree >> that there are user space implementations like the IR receiver via a >> sound card. >> >> There are a whole lot of things that haven't been designed. For >> example what does a minimal in-kernel IR device driver look like? What >> is it's sysfs interface? I attempted to make these drivers as small as >> possible. There is a sysfs attribute that holds the raw input from >> the device, etc. I just did a first version of this model, it needs >> more design effort. For example it would be simple to add an attribute >> that stops the device pulses from being forwarded to the state machine >> and require use of the user space decoder. >> >> My goal would be to handle the 90% of common hardware/remotes with the >> in-kernel solution and then pass the rest off to user space. 
The >> in-kernel solution is only about 30KB of code.
>
> The problem is that for these 10% of remotes, one will have to create a
> separate decoder, which is ok, but it will also have to create a keycode
> <-> input keycode mapper similar to your configfs implementation.
> And this is bad.

You have that problem with any solution that is picked. It doesn't matter whether the decoder for the weird 10% is in the kernel or in lirc; it still has to be written. AFAIK configfs is the only simple way to implement the mapping in-kernel, and I wanted to reuse what was already available. Start off with state machines in user space and move them to the kernel when they are stable.

A normal user will never deal with configfs. They'll run a little app that gets the state-machine output from the kernel and then generates a script that builds the configfs entries to map it to a keystroke. That script will get added to init.d so that it gets restored on each boot. Longer term I'd like to see a set of default mappings developed, i.e., always mapping the 1 button to KEY_1.

I came up with the state machine approach trying to make Sony remotes work. Sony remotes transmit multiple protocols. LIRC only supports a single protocol in a config file, so Sony remotes need to be used in raw mode with LIRC. With in-kernel there are state machines for the three Sony protocols, so you can build a single config file that covers all three.

> Also, one might like to do wierd things with standard protocol remotes.
> You already had a request to create a 'shift' key.
> Again, why not to process data in user space?
> You don't have to use lircd, you could write your own daemon with same
> state machines.

The idea is to have 30KB of very reliable in-kernel code that can handle the common cases. Kicking everything to user space gets into problems with process priorities on heavily loaded embedded CPUs.

You are coming from a world that has four 3GHz CPU cores and 8GB of RAM. Things are a lot different when you have a single 150MHz CPU and 32MB of RAM. lircd is 400KB plus all of the libraries it pulls in (the entire kernel is 1200KB). The 90% in-kernel solution is 30KB and no libraries. But that's life in user space: all user-space apps are 10x the size of an in-kernel equivalent.

> On the bonus side, it will be much easier to you to include your code in
> kernel.

I should have no problem getting the current code included in the kernel. The code is nowhere near finished; I need the lirc community on board with support for it as the driver model going forward. The community also needs to contribute to its design so that the right data is exposed.

--
Jon Smirl
jon...@gm... |
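A per-protocol state machine of the kind described above consumes one pulse/space sample at a time and reports a code when a frame completes. The following is a hypothetical illustration, not Jon's actual code; it uses rough Sony SIRC timings (600 us units, 2400 us header mark), and the tolerance and class structure are assumptions:

```python
# Sketch of a per-protocol IR state machine fed one (is_pulse, usec) sample
# at a time, as in the in-kernel decoder design. Rough Sony SIRC timings;
# tolerance and structure are illustrative assumptions.

def near(us, expected, tol=0.3):
    return abs(us - expected) <= expected * tol

class SircStateMachine:
    IDLE, HEADER_SPACE, BIT_PULSE, BIT_SPACE = range(4)

    def __init__(self, nbits=12):
        self.nbits = nbits
        self.reset()

    def reset(self):
        self.state = self.IDLE
        self.bits = []

    def feed(self, is_pulse, usec):
        """Feed one sample; return the decoded bit list when a frame completes."""
        if self.state == self.IDLE:
            if is_pulse and near(usec, 2400):      # 4T header mark
                self.state = self.HEADER_SPACE
        elif self.state == self.HEADER_SPACE:
            if not is_pulse and near(usec, 600):   # 1T header space
                self.state = self.BIT_PULSE
            else:
                self.reset()
        elif self.state == self.BIT_PULSE:
            if is_pulse and near(usec, 1200):      # 2T mark -> 1
                self.bits.append(1)
            elif is_pulse and near(usec, 600):     # 1T mark -> 0
                self.bits.append(0)
            else:
                self.reset()
                return None
            if len(self.bits) == self.nbits:       # frame complete
                done, self.bits = self.bits, []
                self.state = self.IDLE
                return done
            self.state = self.BIT_SPACE
        elif self.state == self.BIT_SPACE:
            if not is_pulse and near(usec, 600):   # 1T gap between bits
                self.state = self.BIT_PULSE
            else:
                self.reset()
        return None
```

Several such machines (one per protocol and bit length) can be fed the same raw sample stream in parallel; whichever completes a frame wins, which is how one config can cover the three Sony variants.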
From: <li...@ba...> - 2009-05-30 16:53:45
|
Hi! Jon Smirl "jon...@gm..." wrote: [...] > I came up with the state machine approach trying to make Sony remotes > works. Sony remotes transmit multiple protocols. LIRC only supports a > single protocol in a config file so Sony remotes need to be used in > raw mode with LIRC. Statements like this give me the impression that you actually never really looked into LIRC and what it does. Of course LIRC does support any number of protocols in a single config file. LIRC is now in use for >10 years. It's very well tested and does its job. Again: if your only concern is code size and responsiveness, then please create a hook in lirc_dev for your in-kernel decoder. You won't have to write a single device driver and can just use the LIRC code base and we can stop discussing and get some work done. Win-win. [...] > You are coming from a world that has four 3GHz CPU cores and 8GB of > RAM. Things are a lot different when you have a single 150Mhz CPU and > 32MB RAM. lircd is 400KB If you want to discuss on that level, please get your numbers right. On my system lircd is 110kB. Of course if you compile in all user space drivers you will get a much bigger executable. But it's comparing apples with bananas anyway, as lircd does much more than just decoding the IR data. BTW, I've started development on LIRC on a 486-DX 40MHz with 20MB of RAM and it was working fine... Christoph |
From: Jon S. <jon...@gm...> - 2009-05-30 17:26:52
|
On Sat, May 30, 2009 at 12:52 PM, Christoph Bartelmus <li...@ba...> wrote:
> Hi!
>
> Jon Smirl "jon...@gm..." wrote:
> [...]
>> I came up with the state machine approach trying to make Sony remotes
>> works. Sony remotes transmit multiple protocols. LIRC only supports a
>> single protocol in a config file so Sony remotes need to be used in
>> raw mode with LIRC.
>
> Statements like this give me the impression that you actually never really
> looked into LIRC and what it does. Of course LIRC does support any number
> of protocols in a single config file.
> LIRC is now in use for >10 years. It's very well tested and does its job.

I doubt that IR support is going to be accepted into the kernel until its design is more closely aligned with the model followed by the other input devices in the kernel. That implies getting rid of special devices with custom protocols. Ignore the code I wrote; what is another way to achieve closer alignment with input? Nobody is saying LIRC doesn't work. The problem is that it is needlessly different from the rest of the input subsystem. Being needlessly different is no one's fault; LIRC predates the development of the input subsystem.

> Again: if your only concern is code size and responsiveness, then please
> create a hook in lirc_dev for your in-kernel decoder. You won't have to
> write a single device driver and can just use the LIRC code base and we
> can stop discussing and get some work done. Win-win.
>
> [...]
>> You are coming from a world that has four 3GHz CPU cores and 8GB of
>> RAM. Things are a lot different when you have a single 150Mhz CPU and
>> 32MB RAM. lircd is 400KB
>
> If you want to discuss on that level, please get your numbers right.
> On my system lircd is 110kB. Of course if you compile in all user space
> drivers you will get a much bigger executable. But it's comparing apples
> with bananas anyway, as lircd does much more than just decoding the IR
> data.
> BTW, I've started development on LIRC on a 486-DX 40MHz with 20MB of RAM
> and it was working fine...
>
> Christoph

--
Jon Smirl
jon...@gm... |
From: Maxim L. <max...@gm...> - 2009-05-30 16:16:21
|
On Sat, 2009-05-30 at 11:56 -0400, Jon Smirl wrote: > On Sat, May 30, 2009 at 10:42 AM, Maxim Levitsky > <max...@gm...> wrote: > > On Sat, 2009-05-30 at 10:23 -0400, Jon Smirl wrote: > >> On Fri, May 29, 2009 at 5:50 PM, Christoph Bartelmus <li...@ba...> wrote: > >> > Hi! > >> > > >> > Maxim Levitsky "max...@gm..." wrote: > >> > [...] > >> >> I want to write a kernel driver for this device, but I don't know if I > >> >> need to write it against lirc, or recently posted on LKML in-kernel IR > >> >> framework. > >> > [...] > >> >> What are your suggestions? > >> > > >> > Just implement a LIRC driver. > >> > > >> > The last suggestion that seemed to be accepted was that the IR data is > >> > received by the LIRC drivers and is passed to a in-kernel decoder if the > >> > user configures it so. Otherwise it's passed to lircd in user-space. > >> > > >> > The in-kernel decoder code was far from complete and I still have concerns > >> > that it can work in all situations. > >> > >> Let's work on the design of this. My embedded systems they are using > >> remotes that transmit the common protocols, the in-kernel state > >> machine handles them fine. I also agree that there are weird device > >> and protocols that the in-kernel statement won't handle. I also agree > >> that there are user space implementations like the IR receiver via a > >> sound card. > >> > >> There are a whole lot of things that haven't been designed. For > >> example what does a minimal in-kernel IR device driver look like? What > >> is it's sysfs interface? I attempted to make these drivers as small as > >> possible. There is a sysfs attribute that holds the raw input from > >> the device, etc. I just did a first version of this model, it needs > >> more design effort. For example it would be simple to add an attribute > >> that stops the device pulses from being forwarded to the state machine > >> and require use of the user space decoder. 
> >> > >> My goal would be to handle the 90% of common hardware/remotes with the > >> in-kernel solution and then pass the rest off to user space. The > >> in-kernel solution is only about 30KB of code. > > > > The problem is that for these 10% of remotes, one will have to create a > > separate decoder, which is ok, but it will also have to create a keycode > > <-> input keycode mapper similar to your configfs implementation. > > And this is bad. > > You have that problem with any solution that is picked. It doesn't > matter if the decoder for the weird 10% is in the kernel or lirc, it > still has to be written. AFAIK configfs is the only simple way to > implement the mapping in-kernel. I wanted to reuse what was already > available. Start off with state machines in user space and move them > to the kernel when they are stable. > > A normal user will never deal with configfs. They'll run a little app > that gets the state machine output from the kernel and then generates > a script that builds the configfs entries to map it to a keystroke. > That script will get added to init.d so that it gets restored on each > boot. Longer term I'd like to see a set of default mappings developed, > ie, always mapping the 1 button to KEY_1. > > I came up with the state machine approach trying to make Sony remotes > works. Sony remotes transmit multiple protocols. LIRC only supports a > single protocol in a config file so Sony remotes need to be used in > raw mode with LIRC. With in-kernel there are state machines for the > three Sony protocols, you can build a single config file that covers > the three protocols. > > > Also, one might like to do wierd things with standard protocol remotes. > > You already had a request to create a 'shift' key. > > Again, why not to process data in user space? > > You don't have to use lircd, you could write your own daemon with same > > state machines. > > The idea is to have 30KB of very reliable in-kernel code that can > handle the common cases. 
> Kicking everything to user space gets into
> problems with process priorities on heavily loaded embedded CPUs.
>
> You are coming from a world that has four 3GHz CPU cores and 8GB of
> RAM. Things are a lot different when you have a single 150Mhz CPU and
> 32MB RAM. lircd is 400KB plus all of the libraries it pulls in (the
> entire kernel is 1200KB). The 90% in-kernel solution is 30KB and no
> libraries. But that's life in user space, all user space apps are 10x
> the size of a in-kernel equivalent.

This isn't true. My userspace app, which combines a driver and a decoder for NEC-like protocols, is 11 KB in binary form. (It only prints decoded byte sequences, though...)

The thing is that you don't have to use lirc! Write your own small app, and I bet it can fit into 10 KB.

What I really don't like is having data parsing in the kernel (parsing according to a userspace policy, that is).

Just another example: say a user has 3 remotes and uses them to control devices. Say I mark an unused key on each of the remotes, and unless I press it, the PC ignores that remote; thus it pays attention only to the one remote that sent that magic key. Again, this is easy to do in userspace, not so in the kernel. (I agree that one might want to do the same with ordinary keyboards, but remotes are much more limited, so weird stuff like the above might be reasonable.)

Best regards,
Maxim Levitsky |
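The "magic key" policy described above is a good example of something that is trivial as a userspace filter. A hypothetical Python sketch; the remote IDs, key names, and event shape are all assumptions made for illustration:

```python
# Userspace policy sketch: ignore events from a remote until its designated
# "magic" (attention) key is seen; then that remote becomes the active one.
# Remote IDs and key names below are illustrative assumptions.

MAGIC_KEYS = {
    "remote_a": "KEY_F13",
    "remote_b": "KEY_F14",
    "remote_c": "KEY_F15",
}

class RemoteSelector:
    def __init__(self):
        self.active = None  # which remote we currently listen to

    def filter(self, remote, key):
        """Return the key to act on, or None if the event is ignored."""
        if key == MAGIC_KEYS.get(remote):
            self.active = remote  # this remote grabs attention
            return None           # the magic key itself is swallowed
        return key if remote == self.active else None

sel = RemoteSelector()
print(sel.filter("remote_a", "KEY_1"))    # -> None (not active yet)
print(sel.filter("remote_a", "KEY_F13"))  # -> None (magic key, grabs focus)
print(sel.filter("remote_a", "KEY_1"))    # -> KEY_1
print(sel.filter("remote_b", "KEY_1"))    # -> None (remote_b not active)
```

The whole policy is ten lines here, but expressing it in a kernel keymap would require encoding per-remote state the mapping layer was never designed to hold, which is the point being made.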
From: Jon S. <jon...@gm...> - 2009-05-30 16:50:37
|
On Sat, May 30, 2009 at 12:16 PM, Maxim Levitsky <max...@gm...> wrote:
> On Sat, 2009-05-30 at 11:56 -0400, Jon Smirl wrote:
>> On Sat, May 30, 2009 at 10:42 AM, Maxim Levitsky
>> <max...@gm...> wrote:
>> > On Sat, 2009-05-30 at 10:23 -0400, Jon Smirl wrote:
>> >> On Fri, May 29, 2009 at 5:50 PM, Christoph Bartelmus <li...@ba...> wrote:
>> >> > [...]
>> >> > Just implement a LIRC driver.
>> >> > [...]
>> >>
>> >> Let's work on the design of this. On my embedded systems, they are
>> >> using remotes that transmit the common protocols, and the in-kernel
>> >> state machine handles them fine. I also agree that there are weird
>> >> devices and protocols that the in-kernel state machine won't handle.
>> >> I also agree that there are user space implementations, like the IR
>> >> receiver via a sound card.
>> >>
>> >> There are a whole lot of things that haven't been designed. For
>> >> example, what does a minimal in-kernel IR device driver look like?
>> >> What is its sysfs interface? I attempted to make these drivers as
>> >> small as possible. There is a sysfs attribute that holds the raw
>> >> input from the device, etc. I just did a first version of this
>> >> model; it needs more design effort. For example, it would be simple
>> >> to add an attribute that stops the device pulses from being
>> >> forwarded to the state machine and requires use of the user space
>> >> decoder.
>> >>
>> >> My goal would be to handle the 90% of common hardware/remotes with
>> >> the in-kernel solution and then pass the rest off to user space.
>> >> The in-kernel solution is only about 30KB of code.
>> >
>> > The problem is that for these 10% of remotes, one will have to
>> > create a separate decoder, which is OK, but one will also have to
>> > create a keycode <-> input keycode mapper similar to your configfs
>> > implementation. And this is bad.
>>
>> You have that problem with any solution that is picked. It doesn't
>> matter if the decoder for the weird 10% is in the kernel or lirc; it
>> still has to be written. AFAIK configfs is the only simple way to
>> implement the mapping in-kernel. I wanted to reuse what was already
>> available. Start off with state machines in user space and move them
>> to the kernel when they are stable.
>>
>> A normal user will never deal with configfs. They'll run a little app
>> that gets the state machine output from the kernel and then generates
>> a script that builds the configfs entries to map it to a keystroke.
>> That script will get added to init.d so that it gets restored on each
>> boot. Longer term I'd like to see a set of default mappings
>> developed, e.g. always mapping the 1 button to KEY_1.
>>
>> I came up with the state machine approach trying to make Sony remotes
>> work. Sony remotes transmit multiple protocols. LIRC only supports a
>> single protocol in a config file, so Sony remotes need to be used in
>> raw mode with LIRC. With in-kernel there are state machines for the
>> three Sony protocols, so you can build a single config file that
>> covers all three.
>>
>> > Also, one might like to do weird things with standard protocol
>> > remotes. You already had a request to create a 'shift' key.
>> > Again, why not process the data in user space?
>> > You don't have to use lircd; you could write your own daemon with
>> > the same state machines.
>>
>> The idea is to have 30KB of very reliable in-kernel code that can
>> handle the common cases. Kicking everything to user space gets into
>> problems with process priorities on heavily loaded embedded CPUs.
>>
>> You are coming from a world that has four 3GHz CPU cores and 8GB of
>> RAM. Things are a lot different when you have a single 150MHz CPU and
>> 32MB RAM. lircd is 400KB plus all of the libraries it pulls in (the
>> entire kernel is 1200KB). The 90% in-kernel solution is 30KB and no
>> libraries. But that's life in user space: all user space apps are 10x
>> the size of an in-kernel equivalent.
>
> This isn't true. My userspace app, which combines a driver and a
> decoder for NEC-like protocols, is 11 KB in binary form.

Obviously all of this can be done in user space. Microkernels move
everything to user space.
Linux is monolithic, and device drivers go into the kernel.

> (It only prints decoded byte sequences, though.)

It's all of the other code that makes it bigger.

> The thing is that you don't have to use lirc!
> Write your own small app, and I bet it can fit into 10 KB.
> What I really don't like is having data parsing in the kernel
> (parsing according to a userspace policy, that is).

Turn off the check box in Kconfig for building the in-kernel state
machines. Use sysfs to extract the raw data. Deal with the process
priority problems when other things prevent your process from running.
I want this stuff in the kernel so that it behaves more
deterministically.

The in-kernel mapping to key strokes is not required. Turn it off in
Kconfig. The decoded IR data is available on the input devices.
Modify the apps to read this data instead of expecting key strokes.
Of course, you'll have to teach the apps about all of the different
remotes. BTW, it's not a parser, it is a one-to-one mapping function.
We map device addresses to human names all over the kernel.

If you hate configfs, I could build little modules for each remote
that do the mapping. But then you'd need to use the C compiler to
change things.

The state machines are defined by the IR hardware; they are not
configurable.

> Just another example: say a user has 3 remotes, and uses them to
> control devices, and with lirc:
> Say I mark an unused key on each of the remotes, and unless I press
> it, the PC ignores that remote; thus it pays attention only to the
> one remote that did send that magic key.
> Again, this is easy to do in userspace, not so in the kernel.

I did not want to add any scripting capability to the in-kernel
interface. It just sends the events out the /dev/input devices. They
come out in two forms - as IR events, or as mapped key stroke events.
If you really want to do this, write an app that reads /dev/input,
scripts the events, and then sends the data back in via uinput.

You can't do something like this with a mouse or keyboard either. They
would also need to be scripted.

> (I agree that one might want to do the same with ordinary keyboards,
> but remotes are much more limited, thus weird stuff like the above
> might be reasonable.)

Look in the input.h file. There are predefined codes for over 500
keys. All of the common ones are there and more can be added.

Think of it this way: why is IR special? Isn't it just another input
method, like mouse, keyboard, joystick, touchpad, etc.? If it is not
special, why can't the drivers be implemented in-kernel like all of
the other Linux input drivers? If you flip this around, why shouldn't
all of the mouse, keyboard, joystick, touchpad, etc. drivers be
removed from the kernel and reimplemented in user space?

-- 
Jon Smirl
jon...@gm... |
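Jon's "one-to-one mapping function" is essentially a scancode-to-keycode table lookup. A sketch of the idea in C; the scancodes, table contents, and KEY_* values here are invented for illustration (the real keycodes come from linux/input.h, and the table would be built via configfs or a per-remote module, as discussed above):

```c
#include <stdint.h>
#include <stddef.h>

/* Stand-ins for the real KEY_* constants in linux/input.h. */
#define KEY_1     2
#define KEY_2     3
#define KEY_POWER 116

struct ir_keymap_entry {
    uint32_t scancode;   /* decoded IR scancode         */
    uint16_t keycode;    /* input-layer keycode to emit */
};

/* A per-remote table; these scancodes are made up. */
static const struct ir_keymap_entry demo_keymap[] = {
    { 0x40bf00ff, KEY_1 },
    { 0x40bf807f, KEY_2 },
    { 0x40bf40bf, KEY_POWER },
};

/* Returns the mapped keycode, or 0 if the scancode is unknown. */
uint16_t ir_map_scancode(uint32_t scancode)
{
    size_t i;

    for (i = 0; i < sizeof(demo_keymap) / sizeof(demo_keymap[0]); i++)
        if (demo_keymap[i].scancode == scancode)
            return demo_keymap[i].keycode;
    return 0;
}
```

Whether the table lives in the kernel (filled from configfs) or in a daemon, the lookup itself is this simple; the argument in the thread is only about where it runs.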
From: Maxim L. <max...@gm...> - 2009-05-30 18:01:11
|
On Sat, 2009-05-30 at 12:50 -0400, Jon Smirl wrote:
> On Sat, May 30, 2009 at 12:16 PM, Maxim Levitsky
> <max...@gm...> wrote:
[...]
> > This isn't true. My userspace app, which combines a driver and a
> > decoder for NEC-like protocols, is 11 KB in binary form.
>
> Obviously all of this can be done in user space. Microkernels move
> everything to user space.
> Linux is monolithic, and device drivers go into the kernel.

Let's not turn this into a flamewar. It's not about kernel vs.
userspace; it's about the fact that it is harder to modify the kernel,
especially for all kinds of user needs. Thus the kernel tends to do
what it is best suited for: talk to the hardware, process the data,
and hand it over to userspace.

I, for example, wouldn't be against a kernel printer driver, or a
kernel video driver, or something like that, as long as the kernel
doesn't have to obey complex settings.

And I consider parsing IR data, and mapping it to keys, quite a
complex task.

> > (It only prints decoded byte sequences, though.)
>
> It's all of the other code that makes it bigger.
>
> > The thing is that you don't have to use lirc!
> > Write your own small app, and I bet it can fit into 10 KB.
> > What I really don't like is having data parsing in the kernel
> > (parsing according to a userspace policy, that is).
>
> Turn off the check box in Kconfig for building the in-kernel state
> machines. Use sysfs to extract the raw data. Deal with the process
> priority problems when other things prevent your process from
> running. I want this stuff in the kernel so that it behaves more
> deterministically.

If you want this for embedded systems, it might make sense; for normal
usage I would prefer a userspace app.

> The in-kernel mapping to key strokes is not required. Turn it off in
> Kconfig. The decoded IR data is available on the input devices.
> [...]
>
> > Just another example: say a user has 3 remotes, and uses them to
> > control devices, and with lirc [...]
>
> I did not want to add any scripting capability to the in-kernel
> interface. It just sends the events out the /dev/input devices. They
> come out in two forms - as IR events, or as mapped key stroke events.
> If you really want to do this, write an app that reads /dev/input,
> scripts the events, and then sends the data back in via uinput.

This was just an example; of course there is no need for such things,
but still, userspace gives more flexibility.

> You can't do something like this with a mouse or keyboard either.
> They would also need to be scripted.

I agree with that.

> > (I agree that one might want to do the same with ordinary
> > keyboards, but remotes are much more limited, thus weird stuff like
> > the above might be reasonable.)
>
> Look in the input.h file. There are predefined codes for over 500
> keys. All of the common ones are there and more can be added.
>
> Think of it this way: why is IR special? Isn't it just another input
> method, like mouse, keyboard, joystick, touchpad, etc.? If it is not
> special, why can't the drivers be implemented in-kernel like all of
> the other Linux input drivers? If you flip this around, why shouldn't
> all of the mouse, keyboard, joystick, touchpad, etc. drivers be
> removed from the kernel and reimplemented in user space?

These drivers are very different.
They know the hardware very well, and they don't have to use a
user-supplied config to do their job. They fit the kernel perfectly.

Best regards,
Maxim Levitsky |
From: Jon S. <jon...@gm...> - 2009-05-30 18:10:52
|
On Sat, May 30, 2009 at 2:01 PM, Maxim Levitsky <max...@gm...> wrote:
>> Think of it this way: why is IR special? Isn't it just another input
>> method, like mouse, keyboard, joystick, touchpad, etc.? If it is not
>> special, why can't the drivers be implemented in-kernel like all of
>> the other Linux input drivers? If you flip this around, why shouldn't
>> all of the mouse, keyboard, joystick, touchpad, etc. drivers be
>> removed from the kernel and reimplemented in user space?
> These drivers are very different.
> They know the hardware very well, and they don't have to use a
> user-supplied config to do their job. They fit the kernel perfectly.

Check out the keyboard translation tables:
man loadkeys

-- 
Jon Smirl
jon...@gm... |
From: Maxim L. <max...@gm...> - 2009-05-30 18:26:30
|
On Sat, 2009-05-30 at 14:10 -0400, Jon Smirl wrote:
> On Sat, May 30, 2009 at 2:01 PM, Maxim Levitsky <max...@gm...> wrote:
[...]
> Check out the keyboard translation tables:
> man loadkeys

Well, this is for the console, and ugly.

I have had a lot of fights with kernel keymaps, btw.
I have even written my own keymap.
I don't use the Linux console anymore.

Best regards,
Maxim Levitsky |
From: Jon S. <jon...@gm...> - 2009-05-30 19:04:19
|
On Sat, May 30, 2009 at 2:26 PM, Maxim Levitsky <max...@gm...> wrote:
> On Sat, 2009-05-30 at 14:10 -0400, Jon Smirl wrote:
[...]
>> Check out the keyboard translation tables:
>> man loadkeys
> Well, this is for the console, and ugly.
>
> I have had a lot of fights with kernel keymaps, btw.
> I have even written my own keymap.
> I don't use the Linux console anymore.

The point of having keymaps in the kernel is so that all apps can
share them. X "the Operating System" is doing a bunch of stuff that
really belongs to the kernel. That is slowly being changed - X doesn't
scan the PCI bus any more. Input is being worked on. Mode setting is
migrating to the kernel.

Those keymaps inside of X can't be used by command line apps, because
X may not be running. That path has led to duplicate keymapping
systems.

Sure, we could make a universal keymapping daemon. But that's the
microkernel vs. monolithic kernel argument.

You can push the entire kernel into user space - UML has already done
it. But do you really want to do that?

-- 
Jon Smirl
jon...@gm... |
From: Maxim L. <max...@gm...> - 2009-05-30 19:27:05
|
On Sat, 2009-05-30 at 15:04 -0400, Jon Smirl wrote:
> On Sat, May 30, 2009 at 2:26 PM, Maxim Levitsky <max...@gm...> wrote:
[...]
> Sure, we could make a universal keymapping daemon. But that's the
> microkernel vs. monolithic kernel argument.
>
> You can push the entire kernel into user space - UML has already done
> it. But do you really want to do that?

Let's stop here.
I am not against kernel code, period. The opposite is true!

Everybody has the right to his own opinion; I currently think that IR
decoding should be done in userspace, _due_ to pure userspace drivers.

Best regards,
Maxim Levitsky |
From: Jon S. <jon...@gm...> - 2009-05-30 19:44:44
|
On Sat, May 30, 2009 at 3:26 PM, Maxim Levitsky <max...@gm...> wrote:
> On Sat, 2009-05-30 at 15:04 -0400, Jon Smirl wrote:
[...]
> Let's stop here.
> I am not against kernel code, period. The opposite is true!
>
> Everybody has the right to his own opinion; I currently think that IR
> decoding should be done in userspace, _due_ to pure userspace
> drivers.

Then forget about merging LIRC into the kernel, because that's not
going to happen until LIRC is integrated into the evdev input event
model.

-- 
Jon Smirl
jon...@gm... |
From: <li...@ba...> - 2009-05-30 20:15:18
|
Hi!

Jon Smirl "jon...@gm..." wrote:
[...]
> Then forget about merging LIRC into the kernel, because that's not
> going to happen until LIRC is integrated into the evdev input event
> model.

Says who?

Christoph |
From: Maxim L. <max...@gm...> - 2009-05-30 19:54:47
|
On Sat, 2009-05-30 at 15:43 -0400, Jon Smirl wrote:
> On Sat, May 30, 2009 at 3:26 PM, Maxim Levitsky <max...@gm...> wrote:
[...]
> > Everybody has the right to his own opinion; I currently think that
> > IR decoding should be done in userspace, _due_ to pure userspace
> > drivers.
>
> Then forget about merging LIRC into the kernel, because that's not
> going to happen until LIRC is integrated into the evdev input event
> model.

Why not integrated?

The _kernel_ sends the IR data; lirc decodes it and passes it back via
uinput.

This is a quote from Christoph Bartelmus's mail:

"
> 2 - make lirc use uinput, and _depricate_ its own interface.
> (and in same time clean it up)

lircd already does have uinput support...
"

What's wrong with that?

Best regards,
Maxim Levitsky |
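The uinput path Maxim proposes is straightforward: the userspace decoder (lircd or otherwise) opens /dev/uinput, registers a virtual key device, and writes input events that the kernel then delivers on an ordinary /dev/input node. A trimmed-down sketch, with most error handling omitted and the device name and keycode chosen arbitrarily:

```c
#include <fcntl.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/input.h>
#include <linux/uinput.h>

/* Emit one input event on an already-created uinput device fd. */
static int emit(int fd, int type, int code, int value)
{
    struct input_event ev;

    memset(&ev, 0, sizeof(ev));
    ev.type = type;
    ev.code = code;
    ev.value = value;
    return write(fd, &ev, sizeof(ev)) == sizeof(ev) ? 0 : -1;
}

/* Press and release one key, then flush with a SYN_REPORT. */
int send_key(int fd, int keycode)
{
    if (emit(fd, EV_KEY, keycode, 1))   /* press   */
        return -1;
    if (emit(fd, EV_KEY, keycode, 0))   /* release */
        return -1;
    return emit(fd, EV_SYN, SYN_REPORT, 0);
}

/* Create a minimal virtual keyboard via uinput; returns its fd. */
int open_uinput_keyboard(void)
{
    struct uinput_user_dev uidev;
    int fd = open("/dev/uinput", O_WRONLY | O_NONBLOCK);

    if (fd < 0)
        return -1;
    ioctl(fd, UI_SET_EVBIT, EV_KEY);
    ioctl(fd, UI_SET_KEYBIT, KEY_1);    /* declare keys we will send */

    memset(&uidev, 0, sizeof(uidev));
    strncpy(uidev.name, "demo-ir-remote", UINPUT_MAX_NAME_SIZE - 1);
    write(fd, &uidev, sizeof(uidev));
    ioctl(fd, UI_DEV_CREATE);
    return fd;
}
```

Creating the device needs access to /dev/uinput (typically root); once created, applications see the injected keys exactly as if a physical keyboard had sent them.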
From: Jon S. <jon...@gm...> - 2009-05-30 20:22:29
|
On Sat, May 30, 2009 at 3:53 PM, Maxim Levitsky <max...@gm...> wrote:
> On Sat, 2009-05-30 at 15:43 -0400, Jon Smirl wrote:
[...]
>> Then forget about merging LIRC into the kernel, because that's not
>> going to happen until LIRC is integrated into the evdev input event
>> model.
>
> Why not integrated?

The state decoders for the protocols are a grand total of 730 lines of
code. Pushing that into user space requires an interface to be spec'd
and a separate app to be distributed and documented. Now the interface
needs to be versioned and the app has to match versions. It opens up
problems with the user space daemon not running when it needs to. The
user space daemon can get killed by OOM. Now init has to be modified
to make sure it restarts, etc, etc....

Just put the 730 lines of code in the kernel and get rid of these
issues. All of the other input devices do it that way.

> The _kernel_ sends the IR data; lirc decodes it and passes it back
> via uinput.
>
> This is a quote from Christoph Bartelmus's mail:
>
> "
> > 2 - make lirc use uinput, and _depricate_ its own interface.
> > (and in same time clean it up)
>
> lircd already does have uinput support...
> "
>
> What's wrong with that?

-- 
Jon Smirl
jon...@gm... |
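The lines of decoder code Jon refers to are per-protocol state machines that the driver feeds one pulse/space duration at a time, typically from interrupt context, so no whole-frame buffer is needed. A toy sketch of that shape, reusing NEC-style timings; the state names, tolerance, and API are invented here for illustration:

```c
#include <stdint.h>

/* Incremental pulse-distance decoder: feed it durations one at a
 * time.  STATE_PULSE expects the fixed bit pulse; STATE_SPACE decides
 * whether the following space encodes a 0 or a 1. */
enum ir_state { STATE_PULSE, STATE_SPACE };

struct ir_sm {
    enum ir_state state;
    uint32_t bits;      /* accumulated bits, MSB-first */
    int count;          /* number of completed bits    */
};

static int close_to(unsigned int us, unsigned int want)
{
    return us + 200 >= want && us <= want + 200;  /* +/-200us window */
}

/* Returns 1 when a bit was completed, 0 when more input is needed,
 * -1 on a timing violation (caller should reset the machine). */
int ir_sm_feed(struct ir_sm *sm, unsigned int duration_us)
{
    switch (sm->state) {
    case STATE_PULSE:
        if (!close_to(duration_us, 562))
            return -1;
        sm->state = STATE_SPACE;
        return 0;
    case STATE_SPACE:
        sm->bits <<= 1;
        if (close_to(duration_us, 1687))
            sm->bits |= 1;                  /* long space = 1  */
        else if (!close_to(duration_us, 562))
            return -1;                      /* neither 0 nor 1 */
        sm->count++;
        sm->state = STATE_PULSE;
        return 1;
    }
    return -1;
}
```

Because each call does constant work on one sample, this structure is what makes an in-kernel decoder cheap enough to run directly off the receiver's interrupt.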
From: Maxim L. <max...@gm...> - 2009-07-31 18:46:17
|
On Thu, 2009-05-28 at 17:35 +0300, Maxim Levitsky wrote:
> Hi,
>
> My notebook has a IR receiver, so called ENE CIR.
[...]
> I want to write a kernel driver for this device, but I don't know if I
> need to write it against lirc, or recently posted on LKML in-kernel IR
> framework.
>
> Do you expect lirc to be merged any time?

I am finally free, and I will start working right away on my driver.

I would also really like to see lirc merged, since I always want all
Linux drivers to be in tree, and :-) I want my future driver to be
there.

I have about 15 days of free time.

I have a question: since I don't yet know much about kernel
development (only the basics), could you point me to the areas of lirc
that need cleanup?

Best regards,
Maxim Levitsky |
From: Lee N. <Lee...@ku...> - 2009-08-03 21:32:16
|
Good afternoon, everyone,

After going round and round with Jarod trying to get my MCE remote
working with my ffdc SoundGraph device, we determined that
ir_protocol=1 isn't working for my older model. That was unfortunate,
but at least I could still use my iMON remote for controlling XBMC.
Well, not so fast. I can run irw and get all the correct output for
the buttons I am pressing, and I configured Lircmap.xml and remote.xml
correctly (and double-checked them to make sure I didn't miss
something). The weird thing is that when XBMC boots, all my remote
will do is act like a mouse. No buttons work; only the pad in the
middle operates the mouse. I checked the params for lirc_imon and I
only found 3 (debug, ir_protocol, and nomouse). I did rmmod lirc_imon
and then modprobe lirc_imon nomouse=1. No change: the pad always acts
as a mouse and no buttons work.

Am I missing something?

Thanks,
Lee |
From: Jarod W. <ja...@wi...> - 2009-08-05 01:14:36
|
On 08/03/2009 05:32 PM, Lee Nugent wrote:
> Good afternoon, everyone,
>
> After going round and round with Jarod trying to get my MCE remote
> working with my ffdc SoundGraph device, we determined that
> ir_protocol=1 isn't working for my older model.

It was reasonably obvious why, after reading over a bit of code...
Setting the IR protocol requires a device with control urb endpoints,
which the ffdc devices don't have. I've fixed the code in cvs so that
it'll print a warning when someone tries to set the proto on a device
that doesn't support it, and just fall back to the imon proto.

> That was unfortunate, but at least I could still use my iMON remote
> for controlling XBMC. Well, not so fast. I can run irw and get all
> the correct output for the buttons I am pressing, and I configured
> Lircmap.xml and remote.xml correctly (and double-checked them to make
> sure I didn't miss something). The weird thing is that when XBMC
> boots, all my remote will do is act like a mouse. No buttons work;
> only the pad in the middle operates the mouse. I checked the params
> for lirc_imon and I only found 3 (debug, ir_protocol, and nomouse).
> I did rmmod lirc_imon and then modprobe lirc_imon nomouse=1. No
> change: the pad always acts as a mouse and no buttons work.
>
> Am I missing something?

Even with nomouse set, if no lirc client is attached, the pad will
stay in mouse mode. So what it sounds like to me is that xbmc isn't
actually connecting as an lirc client. Since you're running cvs lirc,
my bet would be that xbmc is trying to connect to the lircd socket at
/dev/lircd, while it's now /var/run/lirc/lircd that it needs to
connect to.

--jarod |
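For reference, "connecting as an lirc client" amounts to opening lircd's Unix socket and reading one text line per button event, which is what irw does. A sketch in C; the line format shown is the usual lircd one (hex code, repeat count, button name, remote name), and the parsing helper and buffer sizes are illustrative:

```c
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

/* Connect to lircd's Unix socket.  The newer default path is
 * /var/run/lirc/lircd; older setups used /dev/lircd.
 * Returns a connected fd, or -1 on failure. */
int lircd_connect(const char *path)
{
    struct sockaddr_un addr;
    int fd = socket(AF_UNIX, SOCK_STREAM, 0);

    if (fd < 0)
        return -1;
    memset(&addr, 0, sizeof(addr));
    addr.sun_family = AF_UNIX;
    strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);
    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        close(fd);
        return -1;
    }
    return fd;
}

/* lircd emits lines like "0000000000001795 00 KEY_OK my-remote".
 * Copies the button name into button[64]; returns 0 on success,
 * -1 on a malformed line. */
int lircd_parse_button(const char *line, char button[64])
{
    char code[32], remote[64];
    unsigned int repeat;

    if (sscanf(line, "%31s %x %63s %63s",
               code, &repeat, button, remote) != 4)
        return -1;
    return 0;
}
```

A client loop would call lircd_connect("/var/run/lirc/lircd"), read lines from the fd, and feed each one to lircd_parse_button; an app stuck pointing at the old /dev/lircd path would simply fail the connect, which matches the symptom described above.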