From: Jon S. <jon...@gm...> - 2007-02-13 23:04:14
LIRC folks, there is an on-going discussion about possible future
designs for IR support in the kernel. What's your take on how to do
this?

Forwarded Conversation
Subject: USB IR receivers and evdev
------------------------
From: Jon Smirl <jon...@gm...>
To: Greg KH <gr...@kr...>
Cc: lin...@li..., Vojtech Pavlik <vo...@su...>
Date: Mon, Feb 12, 2007 at 10:10 PM

On 2/12/07, Greg KH <gr...@kr...> wrote:
> On Sun, Feb 11, 2007 at 03:20:20PM -0500, Jon Smirl wrote:
> > Most drivers for USB IR remote controls are out of tree. They are located here:
> > http://www.lirc.org/
> >
> > Why aren't these in-tree?
>
> Because the lirc developers do not have the time to submit them upstream
> :(
>
> Feel free to clean them up and work with the developers to get them
> upstream, I tried over a year ago and gave up...

How should IR receivers be handled in the kernel? For example the
ati_remote2 support already in the kernel has a mapping table like this
for keys that get passed up through evdev.

static struct {
	int hw_code;
	int key_code;
} ati_remote2_key_table[] = {
	{ 0x00, KEY_0 },
	{ 0x01, KEY_1 },
	{ 0x02, KEY_2 },
	{ 0x03, KEY_3 },
	{ 0x04, KEY_4 },
	{ 0x05, KEY_5 },
	{ 0x06, KEY_6 },

This model breaks down when you point remotes from different
manufacturers at the receiver. You want a model more along the lines of:
1) receiving a transmission with a new manufacturer id causes an evdev
device to be dynamically created
2) wait for user space to load a mapping table into the new device (udev
event?)
3) now start passing mapped events on up to user space.

LIRC passes the full event up to user space and then uses tables to
decode it there. But that model eliminates using evdev.

--
Jon Smirl
jon...@gm...
--------
From: Greg KH <gr...@kr...>
To: Jon Smirl <jon...@gm...>
Cc: lin...@li..., Vojtech Pavlik <vo...@su...>
Date: Tue, Feb 13, 2007 at 1:26 AM

[Quoted text hidden]

I don't know what model to use, ask the input subsystem developers :)

thanks,

greg k-h

--------
From: Ville Syrjälä <sy...@sc...>
Reply-To: Jon Smirl <jon...@gm...>, Greg KH <gr...@kr...>, Vojtech Pavlik <vo...@su...>, lin...@li...
To: Jon Smirl <jon...@gm...>
Cc: Greg KH <gr...@kr...>, Vojtech Pavlik <vo...@su...>, lin...@li...
Date: Tue, Feb 13, 2007 at 3:17 AM

On Mon, Feb 12, 2007 at 10:10:07PM -0500, Jon Smirl wrote:

[Quoted text hidden]

AFAIK uinput should allow you to write userspace evdev drivers.

--
Ville Syrjälä
sy...@sc...
http://www.sci.fi/~syrjala/

--------
From: Jon Smirl <jon...@gm...>
To: Jon Smirl <jon...@gm...>, Greg KH <gr...@kr...>, Vojtech Pavlik <vo...@su...>, lin...@li...
Date: Tue, Feb 13, 2007 at 10:37 AM

[Quoted text hidden]

That is what LIRC is doing currently. The trouble with that is that
every app that wants to use LIRC needs code to do it. Or you write
macros that get triggered on a remote button press and inject key
strokes into the application.

With an evdev model the remote looks like another keyboard to the
system. Hitting a number on the remote will generate the same events as
hitting a number on your keyboard (except that they appear on two
different evdev devices).

The question is, how unified do you want the input model? For simple
remote controls you might pick the first one. But then you realize that
there are IR keyboards and mice that communicate the same way. LIRC has
code to convert these devices into X input.

The two in-tree ATI remote drivers have gone the evdev route. All other
remotes are handled by LIRC.

>
> --
> Ville Syrjälä
> sy...@sc...
> http://www.sci.fi/~syrjala/
>
[Quoted text hidden]

--------
From: Ville Syrjälä <sy...@sc...>
Reply-To: Jon Smirl <jon...@gm...>, Greg KH <gr...@kr...>, Vojtech Pavlik <vo...@su...>, lin...@li...
To: Jon Smirl <jon...@gm...>
Cc: Greg KH <gr...@kr...>, Vojtech Pavlik <vo...@su...>, lin...@li...
Date: Tue, Feb 13, 2007 at 2:22 PM

[Quoted text hidden]

I thought it has a lircd daemon that apps talk to.

> The trouble with that is that
> every app that wants to use LIRC needs code to do it.

AFAIK with uinput the apps don't need any special code. The only
difference from in-kernel evdev drivers is that the actual device driver
lives in a user space daemon. So if lircd used uinput, apps could just
open("/dev/input/event<something>") and use read(), write(), ioctl() to
talk to the driver, the same as with other evdev devices. Someone
correct me if I'm totally wrong here.

...

> The two in-tree ATI remote drivers have gone the evdev route. All
> other remotes are handled by LIRC.

BTW the ATI remotes are RF w/ their own USB receivers, not IR.

--
[Quoted text hidden]

--------
From: Jon Smirl <jon...@gm...>
To: Greg KH <gr...@kr...>, Vojtech Pavlik <vo...@su...>, lin...@li...
Date: Tue, Feb 13, 2007 at 3:32 PM

[Quoted text hidden]

I poked around in what meager uinput doc I could find. It is not clear
if events fed into uinput appear on their own evdev device or if they
get merged into another stream.

For sure there are two different styles of doing this:
1) ir driver sends raw ir codes to user space, user space maps, sends
back via uinput
2) load mapping table into driver and have driver directly generate
evdev events

There are 1000s of mapping tables but they are less than 1K each. You
need one table for each remote. The lirc daemon is about 80K.

Which is the better model?

Most IR units function as a blaster too, so the events have to go back
the other way too. Current LIRC code doesn't use uinput, it has a socket
interface.

> > The two in-tree ATI remote drivers have gone the evdev route. All
> > other remotes are handled by LIRC.
>
> BTW the ATI remotes are RF w/ their own USB receivers, not IR.
RF protocol is probably proprietary, so there is no way to point another
transmitter at the receiver, so it doesn't have the multi-remote mapping
problem that IR has.

>
> --
> Ville Syrjälä
> sy...@sc...
> http://www.sci.fi/~syrjala/
>
[Quoted text hidden]

--------
From: Ville Syrjälä <sy...@sc...>
Reply-To: Jon Smirl <jon...@gm...>, Greg KH <gr...@kr...>, Vojtech Pavlik <vo...@su...>, lin...@li...
To: Jon Smirl <jon...@gm...>
Cc: Greg KH <gr...@kr...>, Vojtech Pavlik <vo...@su...>, lin...@li...
Date: Tue, Feb 13, 2007 at 4:44 PM

On Tue, Feb 13, 2007 at 03:32:38PM -0500, Jon Smirl wrote:
> On 2/13/07, Ville Syrjälä <sy...@sc...> wrote:
> >
> > AFAIK with uinput the apps doesn't need any special code. The only
> > difference to in-kernel evdev drivers is that the actual device driver
> > lives in a user space daemon. So if lircd would use uinput apps should
> > just open("/dev/input/event<something>") and use read(), write(),
> > ioctl() to talk to the driver, same as other evdev devices. Someone
> > correct me if I'm totally wrong here.
>
> I poked around in what meager uinput doc I could find. It is not clear
> if events fed into uinput appear on their own evdev device or if they
> get merged into another stream.

A quick look at the code suggests that using write() + the UI_DEV_CREATE
ioctl you can register an input device (which should appear as
/dev/input/eventX via udev). However it looks like you need a daemon
process per device since there is no 'add new device' ioctl and it uses
struct file to identify the device.

> For sure there are two different styles of doing this:
> 1) ir driver sends raw ir codes to user space, user space maps, sends
> back via uinput
> 2) load mapping table into driver and have driver directly generate
> evdev events

Maybe this could use the firmware loader.

> There are 1000s of mapping tables but they are less than 1K each. You
> need one table for each remote. lirc daemon is about 80K.
So there would be a huge number of "firmwares".

> Which is the better model?

I don't know. Can option 2 cover all current cases or does lirc do
something more than table lookups?

> Most IR units function as a blaster too, so the events have to go back
> the other way too.

BTW the ati_remote2 receiver has two jacks, which are for connecting IR
blasters (according to rumor). However I don't think such blasters were
ever sold :(

> Current LIRC code doesn't use uinput, it has a socket interface.

Requiring additional support code in apps/libs :(

> > > The two in-tree ATI remote drivers have gone the evdev route. All
> > > other remotes are handled by LIRC.
> >
> > BTW the ATI remotes are RF w/ their own USB receivers, not IR.
>
> RF protocol is probably proprietary so there is no way to point
> another transmitter at the receiver so it doesn't have the
> multi-remote mapping problem that IR has.

Exactly, so the lirc drivers for these should not be merged.

--
[Quoted text hidden]

--------
From: <li...@ba...> - 2007-02-14 02:46:59
Hi!

Jon Smirl "jon...@gm..." wrote:
> LIRC folks, there is an on-going discussion about possible future
> designs for IR support in the kernel. What's your take on how to do
> this?

Pass all events to userspace and let lircd do the mapping/decoding.

In case of the atiusb device there are currently 17 different remotes
known. Handling this in userspace is a lot easier.

It is of course possible to do the mapping in kernel space, and it seems
like it makes configuration easier for the user, but from my experience
users simply don't want their remotes to act like a normal keyboard.
From time to time you will see requests on the LIRC mailing list like
"Help, when I press buttons on my remote, some weird symbols appear in
my terminal. How do I disable this?"

If users want their remote to work like a keyboard, this is still
possible using uinput. Actually you can use kbdd [1] to do exactly this
already. If there was enough interest I could easily integrate this in
lircd without having to use an additional program and without additional
configuration. lircd could convert remote signals automatically to
uinput events using the remote's button name. E.g. a button called "1"
could generate a KEY_1 event or "MUTE" could generate a KEY_MUTE event.

If you are still not convinced that lircd should handle
mapping/decoding, then look at a device like the Streamzap remote or the
Windows Media Center remote. These devices do not decode the IR signals
themselves but just deliver the signal waveform to the host PC. The
signal waveform is decoded by lircd. I know dozens of different IR
protocols with countless variations. Trying to decode these protocols in
kernel space is out of the question for me. It would mean putting most
parts of lircd into kernel space.

I know that there are currently some TV card IR drivers that decode RC5
in kernel space.
These drivers prevent the user from choosing a different remote control
which uses a different protocol.

Additionally the Windows Media Center can be used as an IR blaster. The
input layer cannot be used to make use of this feature.

Now look at the IR devices that are handled by LIRC completely in
userspace. For example you can connect an IR receiver to your soundcard
and let lircd use ALSA to receive the signals. Or look at the receivers
that connect to the serial port and use standard 1200 8N1 communication
through the kernel's serial port driver. There's no way for these
drivers to use the kernel's input system other than using uinput.

That's why I suggest using the LIRC driver system like it is, with lircd
being the uniform interface for applications. If the user wants his
remote to act like a keyboard he could configure lircd to enable this
feature independent of how the hardware is working internally.

The only drawback of this approach I can see is that for some devices
there is additional overhead as the input events have to be routed
through lircd although the drivers could generate the input events
directly. But the advantages of the current LIRC approach outweigh this
drawback by far IMHO.

Christoph

1: http://www.handhelds.org/moin/moin.cgi/kbdd
From: Jon S. <jon...@gm...> - 2007-02-14 04:46:58
On 14 Feb 2007 03:45:00 +0100, Christoph Bartelmus <li...@ba...> wrote:
> Hi!
>
> Jon Smirl "jon...@gm..." wrote:
> > LIRC folks, there is an on-going discussion about possible future
> > designs for IR support in the kernel. What's your take on how to do
> > this?
>
> Pass all events to userspace and let lircd do the mapping/decoding.
>
> In case of the atiusb device there are currently 17 different remotes
> known. Handling this in userspace is a lot easier.
> It is of course possible to do the mapping in kernel space and it seems
> like it makes configuration for the user easier, but from my experience
> users simply don't want that their remotes act like a normal keyboard.
> From time to time you will see requests on the LIRC mailing list like
> "Help, when I press buttons on my remote, some weird symbols appear in
> my terminal. How do I disable this?"
> If users want that their remote works like a keyboard this is still
> possible using uinput. Actually you can use kbdd [1] to do exactly this
> already.
> If there was enough interest I could easily integrate this in lircd
> without having to use an additional program and having to do additional
> configuration. lircd could convert remote signals automatically to
> uinput events using the remote's button name. E.g. a button called "1"
> could generate a KEY_1 event or "MUTE" could generate a KEY_MUTE event.

evdev doesn't just handle keyboards. It is a general purpose event
interface for getting UI events out of the kernel. The idea is that
there is a standardized message structure and a set of common IDs. Each
remote would cause a new device to be created in /dev/input, for example
event4 or event5. Applications then read events from these device nodes
in the standardized format.

include/linux/input.h:

struct input_event {
	struct timeval time;
	__u16 type;
	__u16 code;
	__s32 value;
};

Keyboard works this way and so does the mouse. There are also drivers
for a lot of miscellaneous input devices.
Everything isn't mapped to the keyboard. For example, your mouse events
aren't showing up on your keyboard since they occur on a different event
node. IR remotes would get an event node for each remote seen.

An example: pulseaudio needs two input modules:
Volume Control
# module-mmkbd-evdev
# module-lirc

If lirc were to report events over evdev, pulseaudio would only need the
evdev module.

Using evdev to do event reporting can be done with either a user or
kernel space LIRC implementation. The uinput driver allows user space
programs to generate evdev events. evdev would allow LIRC to get rid of
the client library.

Example programs using evdev: http://www.frogmouth.net/hid-doco/c537.html

> If you are still not convinced that lircd should handle mapping/
> decoding, then look at a device like the Streamzap remote or the Windows
> Media Center remote. These devices do not decode the IR signals
> themselves but just deliver the signal waveform to the host PC. The
> signal waveform is decoded by lircd. I know dozens of different IR
> protocols with countless variations. Trying to decode these protocols in
> kernel space is out of question for me. It would mean putting most parts
> of lircd into kernel space.

These are a lot of good arguments for keeping lircd. How about getting
your existing kernel drivers merged into mainline?

> I know that there are currently some TV card IR drivers that decode RC5
> in kernel space. These drivers prevent the user from choosing a
> different remote control which uses a different protocol.

This is part of the problem, remote controls are not being handled
uniformly in the kernel. There are those USB ATI radio based remotes
too.

> Additionally the Windows Media Center can be used as IR blaster. The
> input layer cannot be used to make use of this feature.

Vojtech, is there some way to send events the other direction through
evdev?

> Now look at the IR devices that are handled by LIRC completely in
> userspace.
> For example you can connect an IR receiver to your soundcard
> and let lircd use ALSA to receive the signals. Or look at the receivers
> that connect to the serial port and use standard 1200 8N1 communication
> through the kernel's serial port driver. There's no way for these
> drivers to use the kernel's input system other than using uinput.
>
> That's why I suggest using the LIRC driver system like it is with lircd
> being the uniform interface for applications. If the user wants his
> remote to act like a keyboard he could configure lircd to enable this
> feature independent of how the hardware is working internally.
>
> The only drawback of this approach I can see is that for some devices
> there is additional overhead as the input events have to be routed
> through lircd although the drivers could generate the input events
> directly. But the advantages of the current LIRC approach outweigh this
> drawback by far IMHO.
>
> Christoph
>
> 1: http://www.handhelds.org/moin/moin.cgi/kbdd

--
Jon Smirl
jon...@gm...
From: <li...@ba...> - 2007-02-14 06:18:51
Hi Jon,

on 13 Feb 07 at 23:46, you wrote:
[...]
> evdev doesn't just handle keyboards.

Yes, sorry for simplifying it too much. I just opt for using uinput from
user-space so that configuration is consistent for all devices. If some
devices use the kernel input layer directly, you will have to configure
the mapping from codes to events in kernel space. Otherwise you always
do the mapping in lircd.conf in user space.

[...]
> keyboard since they occur on a different event node. IR remotes would
> get an event node for each remote seen.
>
> An example: pulseaudio needs two input modules:
> Volume Control
> # module-mmkbd-evdev
> # module-lirc
>
> If lirc were to report events over evdev pulseaudio would only need
> the evdev module.
>
> Using evdev to do event reporting can be done with either a user or
> kernel space LIRC implementation. The uinput driver allows user space
> programs to generate evdev events. evdev would allow LIRC to get rid
> of the client library.

Personally I don't want to get rid of the client library because it
gives me many more configuration options. But I have no problem giving
users the possibility to choose whether they want to use evdev or the
lirc client library.

[...]
> How about getting your existing kernel drivers merged into mainline?

I would like to see the kernel drivers being merged into mainline. But
don't wait for me to do it. It simply won't happen.

[...]
>> Additionally the Windows Media Center can be used as IR blaster. The
>> input layer cannot be used to make use of this feature.

> Vojtech, is there some way to send events the other direction through evdev?

Sending an IR signal involves things like setting the carrier frequency
and duty cycle, and then writing a stream of timing values. I think that
evdev in general is not the right interface for this.

Christoph
From: Jon S. <jon...@gm...> - 2007-02-14 14:45:01
On 14 Feb 2007 07:16:00 +0100, Christoph Bartelmus <li...@ba...> wrote:
> Hi Jon,
>
> on 13 Feb 07 at 23:46, you wrote:
> [...]
> > evdev doesn't just handle keyboards.
>
> Yes, sorry for simplifying it too much. I just opt for using uinput from
> user-space so that configuration is consistent for all devices.
> If some devices use the kernel input layer directly, you will have to
> configure the mapping from codes to events in kernel space. Otherwise
> you always do the mapping in lircd.conf in user space.

Adding uinput support would also let you get rid of all the X specific
code since X knows how to read evdev. You just need to add a small
section to your xorg.conf.

Are you interested in adding this support? Something like uinput is not
going to get used overnight, but if the docs emphasize using evdev first
and only using the client library if you want to do something
complicated, over time we can reduce the complexity of apps using LIRC
for simple things. This will work towards the goal of unified input
event handling.

> [...]
> > keyboard since they occur on a different event node. IR remotes would
> > get an event node for each remote seen.
> >
> > An example: pulseaudio needs two input modules:
> > Volume Control
> > # module-mmkbd-evdev
> > # module-lirc
> >
> > If lirc were to report events over evdev pulseaudio would only need
> > the evdev module.
> >
> > Using evdev to do event reporting can be done with either a user or
> > kernel space LIRC implementation. The uinput driver allows user space
> > programs to generate evdev events. evdev would allow LIRC to get rid
> > of the client library.
>
> Personally I don't want to get rid of the client library because it
> gives me much more configuration options.
> But I have no problem giving users the possibility to choose whether
> they want to use evdev or the lirc client library.
>
> [...]
> > How about getting your existing kernel drivers merged into mainline?
>
> I would like to see the kernel drivers being merged into mainline.
> But don't wait for me doing it. It simply won't happen.

If we fix the code up for submission will you test it and maintain the
drivers in the future? I looked at the driver code and it isn't too bad,
but all of the multiple kernel version support will need to be removed.
I only have a couple of the devices so I can't really test the changes.

> [...]
> >> Additionally the Windows Media Center can be used as IR blaster. The
> >> input layer cannot be used to make use of this feature.
>
> > Vojtech, is there some way to send events the other direction through evdev?
>
> Sending an IR signal involves things like setting the carrier frequency,
> duty cycle, and then writing a stream of timing values.
> I think that evdev in general is not the right interface for this.

This could be worked into the evdev support. lircd.conf would contain
entries for the controls you wanted to send codes for. This info could
be used to create a uinput/evdev entry for the control. If evdev
supports pushing events, lircd could take the event and convert it to IR
info and push it down into the lirc drivers.

We need Vojtech to tell us if we can push events into uinput.

> Christoph

--
Jon Smirl
jon...@gm...
From: <li...@ba...> - 2007-02-15 04:42:07
Hi Jon,

on 14 Feb 07 at 09:44, you wrote:
[...]
> Adding uinput support would also let you get rid of all the X specific
> code since X knows how to read evdev. You just need to add a
> small section to your xorg.conf.
>
> Are you interested in adding this support?

Yes, I like the concept you presented. I can try to implement it in
lircd as soon as I find some time. But my todo list is quite long, so
don't hold your breath.

I'm only a bit concerned about scalability. It's easily possible to have
hundreds of remote control definitions in your lircd.conf. Creating a
device node for each of them will be very inefficient.

[...]
>> I would like to see the kernel drivers being merged into mainline.
>> But don't wait for me doing it. It simply won't happen.

> If we fix the code up for submission will you test it and maintain the
> drivers in the future?

I only own a small fraction of the devices supported by LIRC myself. I
could take over one or two drivers, but definitely not all of them.

Christoph
From: Jon S. <jon...@gm...> - 2007-02-15 05:34:35
On 15 Feb 2007 05:40:00 +0100, Christoph Bartelmus <li...@ba...> wrote:
> Hi Jon,
>
> on 14 Feb 07 at 09:44, you wrote:
> [...]
> > Adding uinput support would also let you get rid of all the X specific
> > code since X knows how to read evdev. You just need to add a
> > small section to your xorg.conf.
> >
> > Are you interested in adding this support?
>
> Yes, I like the concept you presented. I can try to implement it in
> lircd as soon as I find some time.
> But my todo list is quite long, so don't hold your breath.
>
> I'm only a bit concerned about scalability. It's easily possible to have
> hundreds of remote control definitions in your lircd.conf. Creating a
> device node for each of them will be very inefficient.

I have about ten remotes and I thought that was way too many. How can
someone live with hundreds? You could put only the remotes you own into
the config file. Or have them all in the config file but don't create
the evdev node until the remote is actually used. I'd say only put the
ones you care about into the config file, or put them all there in
commented-out form and enable the ones you want.

> [...]
> >> I would like to see the kernel drivers being merged into mainline.
> >> But don't wait for me doing it. It simply won't happen.
>
> > If we fix the code up for submission will you test it and maintain the
> > drivers in the future?
>
> I only own a small fraction of the devices supported by LIRC myself. I
> could take over one or two drivers, but definitely not all of them.

There are sixteen device drivers in the lirc repo, are maintainers still
around for all of them?

Getting device drivers into the kernel isn't as hard as you think it is.
Just fix up the obvious things like removing support for multiple kernel
versions and then post it to some place like usb-devel. The people there
will be happy to tell you in great detail what is then wrong with it.
It usually takes a part-time effort over a week or two to make everyone
happy and you're in. The lirc drivers are tiny compared to some of the
other drivers in the kernel.

The big problem is that if I fix the drivers up without owning the
hardware I can't check that they are still functioning. So testers need
to be found who own all of the devices.

lirc_dev - base driver
I'll look at this one and see if I can get it in shape to post this week.

What about the rest of these?

lirc_atiusb
lirc_bt829
lirc_cmdir
lirc_gpio
lirc_i2c
lirc_igorplugusb
lirc_imon
lirc_it87
lirc_mceusb
lirc_mceusb2 - I have this one, that would be my next target
lirc_parallel
lirc_sasem
lirc_serial
lirc_sir
lirc_streamzap

--
Jon Smirl
jon...@gm...
From: <li...@ba...> - 2007-02-19 04:01:49
Hi Jon,

on 15 Feb 07 at 00:34, you wrote:
[...]
>> I'm only a bit concerned about scalability. It's easily possible to have
>> hundreds of remote control definitions in your lircd.conf. Creating a
>> device node for each of them will be very inefficient.

> I have about ten remotes and I thought that was way too many. How can
> someone live with hundreds?

It is common practice to download the remotes package from the lirc
homepage and just use all of them, or just all of one brand, to find out
which config file may best support the remote control you have, without
having to create a new config file.

Also I know that some people in TV repair shops use lirc to control TV
sets or other equipment where the remote control is not available. They
just use a huge lircd.conf containing all available configs and send the
commands they need at that moment.

[...]
>> I only own a small fraction of the devices supported by LIRC myself. I
>> could take over one or two drivers, but definitely not all of them.

> There are sixteen device drivers in the lirc repo, are maintainers
> still around for all of them?

Only for some. Usually I would make updates for new kernel versions
myself.

[...]
> What about the rest of these?
> [...]
> lirc_parallel
[...]
> lirc_streamzap

I could take over the two above.

Christoph
From: Kevin T. <lis...@pc...> - 2007-02-15 14:29:32
The MCE(1) and MCE2 drivers could be merged into a single driver. They
both use the same command set. The most substantial difference between
them is the USB interface chip. The MCE1 has an FTDI serial chip, the
MCE2 has a Philips parallel chip. The FTDI chip needs a few
initialization packets sent to it, the Philips chip does not. The FTDI
chip also has a two byte line status header on each received block that
has to be stripped off. Everything else is the same.

There are several features of these devices that are not being used by
the current drivers. The IR Tx frequency can be set (default is 66
kHz!) and various rx parameters can be set up.

I can provide all the technical info if someone wants to write code to
support all the capabilities of the hardware. My experience with Linux
is minimal, so I can't contribute any code right now.

At 12:34 AM 2/15/2007, you wrote:
>The big problem is if I fix the drivers up without owning the hardware
>I can't check to make sure they are still functioning. So testers need
>to be found that own all of the devices.
>
>lirc_dev - base driver
>I'll look at this one and see if I can get it in shape to post this week.
>
>What about the rest of these?
>
>lirc_atiusb
>lirc_bt829
>lirc_cmdir
>lirc_gpio
>lirc_i2c
>lirc_igorplugusb
>lirc_imon
>lirc_it87
>lirc_mceusb
>lirc_mceusb2 - I have this one, that would be my next target
>lirc_parallel
>lirc_sasem
>lirc_serial
>lirc_sir
>lirc_streamzap
>
>--
>Jon Smirl
>jon...@gm...
From: Vojtech P. <vo...@su...> - 2007-02-14 15:49:33
On Wed, Feb 14, 2007 at 09:44:46AM -0500, Jon Smirl wrote:
> > Sending an IR signal involves things like setting the carrier frequency,
> > duty cycle, and then writing a stream of timing values.
> > I think that evdev in general is not the right interface for this.
>
> This could be worked into the evdev support. lircd.conf would contain
> entries for the controls you wanted to send codes for. This info could
> be used to create a uinput/evdev entry for the control. If evdev
> supports pushing events, lircd could take the event and convert it to
> IR info and push it down into the lirc drivers.
>
> We need Vojtech to tell us if we can push events into uinput.

You can. However, the event direction depends on the event type: EV_KEY
always goes from device to handler, EV_LED goes both ways. You'd have a
new event type for sending out IR events: this is different from
pressing a key. But it's trivial to add.

--
Vojtech Pavlik
Director SuSE Labs
From: Ludwig N. <lud...@su...> - 2007-02-15 15:41:48
Kevin Timmerman wrote:
> The MCE(1) and MCE2 drivers could be merged into a single driver.
> They both use the same command set. The most substantial difference
> between them is the USB interface chip. The MCE1 has a FTDI serial
> chip, the MCE2 has a Philips parallel chip. The FTDI chip needs a few
> initialization packets sent to it, the Philips chip does not. The
> FTDI chip also has a two byte line status header on each received
> block that has to be stripped off. Everything else is the same.
>
> There are several features of these devices that are not being used
> by the current drivers. The IR Tx frequency can be set (default is 66
> kHz!) and various rx parameters can be setup.
>
> I can provide all the technical info if someone wants to write code
> to support all the capabilities of the hardware. My experience with
> Linux is minimal, so I can't contribute any code right now.

btw, is there any reason why those drivers are implemented as kernel
modules? Couldn't lircd just use libusb and talk directly to the devices
instead?

cu
Ludwig

--
 (o_   Ludwig Nussel
 //\   SUSE Labs
 V_/_  http://www.suse.de/

SUSE LINUX Products GmbH, GF: Markus Rex, HRB 16746 (AG Nuernberg)
From: Jon S. <jon...@gm...> - 2007-02-16 01:05:18
On 2/15/07, Ludwig Nussel <lud...@su...> wrote:
> Kevin Timmerman wrote:
> > The MCE(1) and MCE2 drivers could be merged into a single driver.
> > They both use the same command set. The most substantial difference
> > between them is the USB interface chip. The MCE1 has a FTDI serial
> > chip, the MCE2 has a Philips parallel chip. The FTDI chip needs a few
> > initialization packets sent to it, the Philips chip does not. The
> > FTDI chip also has a two byte line status header on each received
> > block that has to be stripped off. Everything else is the same.
> >
> > There are several features of these devices that are not being used
> > by the current drivers. The IR Tx frequency can be set (default is 66
> > kHz!) and various rx parameters can be setup.
> >
> > I can provide all the technical info if someone wants to write code
> > to support all the capabilities of the hardware. My experience with
> > Linux is minimal, so I can't contribute any code right now.
>
> btw, is there any reason why those drivers are implemented as kernel
> modules? Couldn't lircd just use libusb and talk directly to the
> devices instead?

This is a very good question. It's not like LIRC is high speed. A big
reason to be in the kernel is to manipulate device registers with root
privileges, but these drivers don't need to do that.

USB devices have libusb
parallel and serial can just open the devices
how about GPIO and anything else weird?

--
Jon Smirl
jon...@gm...
From: <li...@ba...> - 2007-02-19 04:01:50
|
Hi! Jon Smirl "jon...@gm..." wrote: [...] >> btw, is there any reason why those drivers are implemented as kernel >> modules? Couldn't lircd just use libusb and talk directly to the >> devices instead? > This is a very good question. It's not like LIRC is high speed. > A big reason to be in the kernel is to manipulate device register with > root priv, but these drivers don't need to do that. Driver development started when libusb was not stable/mature enough to rely on it. Later nobody really cared to convert the drivers to userspace, except for the atiusb code, for which a userspace driver is also available; it does not support all the receivers that the kernel module supports, though. One reason I would like to keep the drivers as kernel modules is that you can use tools like mode2 or xmode2 to easily test them. > USB devices have libusb > parallel and serial can just open the devices The parallel and serial kernel modules need to implement an interrupt handler. You cannot use the standard devices. > how about GPIO and anything else weird? I don't know about GPIO, but for the I2C driver a userspace driver is also available; I did not have time to integrate it into the code base yet. Christoph |
From: Jon S. <jon...@gm...> - 2007-02-19 05:28:23
|
On 19 Feb 2007 05:01:00 +0100, Christoph Bartelmus <li...@ba...> wrote: > Hi! > > Jon Smirl "jon...@gm..." wrote: > [...] > >> btw, is there any reason why those drivers are implemented as kernel > >> modules? Couldn't lircd just use libusb and talk directly to the > >> devices instead? > > > This is a very good question. It's not like LIRC is high speed. > > A big reason to be in the kernel is to manipulate device register with > > root priv, but these drivers don't need to do that. > > Driver deveopment started when libusb was not stable/mature enough to rely on it. Later nobody really cared to convert the drivers to userspace except for the atiusb code, where also a userspace driver is available, but it does not support all the receivers that the kernel module supports. > > One reason I would like to keep the drivers as kernel modules is that you can use tools like mode2 or xmode2 to easily test them. These should be able to work with a user space library that provides the right API. I was ready to try and merge these drivers into the kernel, but now I am having second thoughts and am considering a user space implementation instead. > > > USB devices have libusb > > parallel and serial can just open the devices > > The parallel and serial kernel modules need to implement an interrupt handler. You cannot use the standard devices. > > > how about GPIO and anything else weird? > > I don't know about GPIO, but for the I2C driver a userspace driver also is available, but I did not have time to integrate it into the code base yet. All of the parallel, serial, gpio, i2c type devices probably have to go into the kernel because they need to do accurate timing. Once you are in user space it becomes difficult to time things. For example the user space implementation for i2c (I2C_CHARDEV) still does the bit twiddling in the kernel and then only presents you with the results. > > Christoph > -- Jon Smirl jon...@gm... |
From: <li...@ba...> - 2007-02-19 11:31:14
|
Hi! Jon Smirl "jon...@gm..." wrote: [...] >> One reason I would like to keep the drivers as kernel modules is that you >> can use tools like mode2 or xmode2 to easily test them. > These should be able to work with a user space library that provides > the right API. And you are volunteering to implement that? ;-) > I was already to try and merge these drivers into the kernel, but now > I am having second thoughts about doing a user space implementation > instead. Ok. [...] >> I don't know about GPIO, but for the I2C driver a userspace driver also is >> available, but I did not have time to integrate it into the code base yet. > All of the parallel, serial, gpio, i2c type devices probably have to > go into the kernel because they need to do accurate timings. Once you > are in user space it becomes difficult to time things. > > For example the user space implementation for i2c (I2C_CHARDEV) still > bit twiddles in the kernel and then only presents you with the > results. Yes, but all the LIRC-related parts can be in user space in the case of the I2C driver. Christoph |
From: Daniel M <li...@ra...> - 2007-02-15 20:45:00
|
Hi Kevin, I would love to have more information about the hardware features and capabilities of the mce devices. When I added the transmitter support to the lirc_mceusb2 driver I had almost no information at all to go by. So if you have more documentation about these devices that you would like to share I would be very much interested. /Daniel M On Thu, 2007-02-15 at 09:28 -0500, Kevin Timmerman wrote: > The MCE(1) and MCE2 drivers could be merged into a single driver. > They both use the same command set. The most substantial difference > between them is the USB interface chip. The MCE1 has a FTDI serial > chip, the MCE2 has a Philips parallel chip. The FTDI chip needs a few > initialization packets sent to it, the Philips chip does not. The > FTDI chip also has a two byte line status header on each received > block that has to be stripped off. Everything else is the same. > > There are several features of these devices that are not being used > by the current drivers. The IR Tx frequency can be set (default is 66 > kHz!) and various rx parameters can be setup. > > I can provide all the technical info if someone wants to write code > to support all the capabilities of the hardware. My experience with > Linux is minimal, so I can't contribute any code right now. > > > At 12:34 AM 2/15/2007, you wrote: > >The big problem is if I fix the drivers up without owning the hardware > >I can't check to make sure they are still functioning. So testers need > >to be found that own all of the devices. > > > >lirc_dev - base driver > >I'll look at this one and see if I can get it in shape to post this week. > > > > > >What about the rest of these? > > > >lirc_atiusb > >lirc_bt829 > >lirc_cmdir > >lirc_gpio > >lirc_i2c > >lirc_igorplugusb > >lirc_imon > >lirc_it87 > >lirc_mceusb > >lirc_mceusb2 - I have this one, that would be my next target > >lirc_parallel > >lirc_sasem > >lirc_serial > >lirc_sir > >lirc_streamzap > > > > > >-- > >Jon Smirl > >jon...@gm... 
|
From: Kevin T. <lis...@pc...> - 2007-02-16 01:12:43
|
Here are my notes for the MCE2004/2005 IR transceivers... Command Summary --------------- 80 End of transmission 8x Data, length of x, x=[1...4] 9F 03 Ping? 9F 05 ??? 9F 06 mm ff Set tx carrier mode and frequency 9F 07 Get tx carrier mode and frequency 9F 08 bb Set tx blaster bitmask 9F 0C msb lsb Set rx timeout (units of 50 microseconds) 9F 0D Get rx timeout 9F 0F ??? 9F 13 Get tx blaster bitmask 9F 14 nn Set rx sensor (01 = long range demodulator, 02 = short range detector) 9F 15 Get rx sensor 00 FF AA Reset - 00 seems to be optional FF 0B Get hw/sw revision??? FF 18 ??? Response Summary ---------------- 80 End of reception 8x Data, length of x, x=[1...4] 9F 01 nn Report rx sensor used 9F 03 Ping? 9F 04 xx xx Response to 9F 05 - MCE2004=00FA, MCE2005=01F4 9F 06 mm ff Confirm carrier mode and frequency setting 9F 08 bb Confirm tx blaster bitmask 9F 0C msb lsb Confirm rx timeout setting 9F 14 nn Confirm rx sensor setting 9F 15 msb lsb Report rx pulse count 9F FE Error! Hardware is probably wedged. Try a 00 FF AA reset. --------------- Command Details ----------------------- 80 End of transmission Send this after the last 8x block. Finalizes the IR transmission. 8x Data, length of x, x=[1...4] X data bytes follow. Each data byte has on/off state in bit 7, and time in bits 0 to 6. Time is in units of 50 microseconds. 9F 05 ??? Unknown purpose. Response will be 9F 04 00 FA for MCE2004 or 9F 04 01 F4 for MCE2005 9F 06 mm ff Set tx carrier mode and frequency mm clk 00 10000000 01 2500000 02 625000 03 156250 ff = ( clk / frequency ) - 1 frequency = clk / ( ff + 1 ) so... 9F 06 00 AE 57143 Hz 57 kHz 9F 06 00 AF 56818 Hz 57 kHz 9F 06 01 2B 56818 Hz 57 kHz 9F 06 01 3D 40323 Hz 40 kHz 9F 06 00 F9 40000 Hz 40 kHz 9F 06 01 3E 39682 Hz 40 kHz 9F 06 01 40 38462 Hz 38 kHz 9F 06 01 41 37879 Hz 38 kHz 9F 06 01 44 36232 Hz 36 kHz 9F 06 01 45 35714 Hz 36 kHz Default frequency is 66 kHz! ( 9F 06 00 96 ) if mm == 0x80 then... 
frequency = 9765.625 Hz (2500000/256, 102.4 uS period) on time = ( 256 - ff ) * .4 uS off time = ff * .4 uS so... 9F 06 80 00 = no modulation 9F 06 80 80 = 9.765 kHz with 50% duty cycle etc... note: if bit 7 of mm is set, all other bits are ignored (prescaler is not selectable) 9F 08 bb Set tx blaster bitmask - No confirmation?! Enable specified IR blaster(s) 9F 0C msb lsb Set rx timeout (units of 50 microseconds) When no IR has been seen for this amount of time, the 80 response will be sent followed by 9F 01 and 9F 05 responses. 9F 0F ??? Unknown purpose. Response is 9F 0E 00 9F 14 nn Set rx sensor (01 = long range demodulator, 02 = short range detector) The MCE hardware has a long range IR demodulator for remote control and a short range IR detector for learning. The short range sensor can detect each IR pulse, so the 9F 15 response will be non-zero and the carrier frequency can be estimated. 00 FF AA Reset Try this if the hardware stops responding. The leading 00 seems to be optional. FF 0B Get hw/sw revision??? MCE2004 will respond with FF 0B 45 FF 1B 08. MCE2005 will respond with FF 0B 50 FF 1B 42. MCE2004 FTDI chip initialization -------------------------------- Send these USB control blocks... Reset 40 00 00 00 00 00 00 00 Note: 64 bytes of garbage will be received, and must be discarded Get modem status (optional?) C0 05 00 00 00 00 02 00 00 00 .. Set bit rate to 38400 bps 40 03 4E C0 00 00 00 00 Set char length to 8 bits 40 04 08 08 00 00 00 00 Set handshaking to use DTR/DSR 40 02 00 00 00 01 00 00 Note: Each block received from the FTDI chip will have a 2 byte line status header that can be discarded. ------------------------------------------ Sample code for rx... 
static UINT rx_state=0;
static UINT total_on_pulses=0;
static UINT total_on_time=0;
static UINT mce2004=0;
static const UINT max_array_size=1024;
static int times[max_array_size];
static UINT array_size=0;

void ProcessRxMCE(UINT len, BYTE* data)
{
	static BYTE mce_last_b=0x80;
	static UINT mce_chunk_len;
	static BYTE mce_rsp;
	static UINT mce_rsp_data;

	if(mce2004) {
		if(len<2) return;
		len-=2;
		data+=2; // skip the FTDI two byte line status header
		if(!len) return;
	}

	while(len--) {
		const BYTE b=*data++;
		switch(rx_state) {
		case 0:
			if((b & 0xE0) == 0x80) {
				mce_chunk_len = b & 0x1F;
				switch(mce_chunk_len) {
				case 0:
					TRACE0("End of packet\n");
					TRACE2("Total on time: %i uS / %i periods\n",total_on_time,total_on_time/50);
					if((array_size<max_array_size) && (times[array_size]!=0))
						++array_size;
					AddRxToQueue();
					array_size=0;
					total_on_time=0;
					total_on_pulses=0;
					mce_last_b=0x80;
					times[0]=0;
					break;
				case 0x1F:
					rx_state=2;
					break;
				default:
					rx_state=1;
					break;
				}
			} else if(b==0xFF) {
				rx_state=4;
			} else {
				TRACE1("Bogus data: %02X\n",b);
				ASSERT(rx_state==0);
			}
			break;
		case 1:
			if(array_size < max_array_size) {
				const int t=50*((b&0x80)?b&0x7F:-static_cast<int>(b&0x7F));
				if(((mce_last_b^b)&0x80)==0x80) {
					mce_last_b=b;
					if(times[array_size]>0)
						total_on_time+=times[array_size];
					if(++array_size < max_array_size) {
						times[array_size]=t;
					}
				} else
					times[array_size]+=t;
			}
			if(!--mce_chunk_len)
				rx_state=0;
			break;
		case 2: // 9F response
			mce_rsp_data=0;
			++rx_state;
			switch(mce_rsp=b) {
			case 0x01: // Rx sensor report
				mce_chunk_len=1;
				break;
			case 0x15: // Rx pulse count report
				mce_chunk_len=2;
				break;
			case 0x04: // ??? - Response to 9F 05
				mce_chunk_len=2;
				break;
			case 0x06: // Tx carrier freq confirm
				mce_chunk_len=2;
				break;
			case 0x08: // Tx blaster confirm
				mce_chunk_len=1;
				break;
			case 0x0C: // Rx timeout confirm
				mce_chunk_len=2;
				break;
			case 0x0E: // ???
				mce_chunk_len=1;
				break;
			case 0x14: // Rx sensor confirm
				mce_chunk_len=1;
				break;
			case 0x03: // Ping ???
				TRACE0("Ping?\n");
				rx_state=0;
				break;
			case 0xFE:
				rx_state=0;
				TRACE0("Invalid 9F command issued\n*** Dongle is probably wedged ***\n");
				break;
			default:
				rx_state=0;
				TRACE1("Invalid 9F response: %02X\n",mce_rsp);
				ASSERT(FALSE);
				break;
			}
			break;
		case 3:
			mce_rsp_data=(mce_rsp_data<<8)|b;
			if(!--mce_chunk_len) {
				switch(mce_rsp) {
				case 0x01: // Rx sensor report
					TRACE1("Rx input: %i\n",mce_rsp_data);
					break;
				case 0x15: // Rx pulse count report
					TRACE1("Rx pulse count: %i\n",mce_rsp_data);
					total_on_pulses=mce_rsp_data;
					break;
				case 0x04: // Response to 9F 05
					TRACE2("9F 05 response data: %04X (%i)\n",mce_rsp_data,mce_rsp_data);
					break;
				case 0x06: // Tx carrier freq confirm
					if(mce_rsp_data & 0x8000) {
						TRACE1("Tx pulse width: %i/256\n",256-(mce_rsp_data&0xFF));
					} else {
						static const UINT clocks[4]={10000000,2500000,625000,156250};
						const UINT clock=clocks[(mce_rsp_data>>8)&3];
						TRACE1("Tx clock: %i\n",clock);
						TRACE1("Tx frequency: %i Hz\n",clock/((mce_rsp_data&0xFF)+1));
					}
					break;
				case 0x08: // Tx blaster confirm
					TRACE1("Tx blaster bitmask: %02X\n",mce_rsp_data);
					break;
				case 0x0C: // Rx timeout confirm
					TRACE1("Rx timeout: %i mS\n",mce_rsp_data/20);
					break;
				case 0x14: // Rx sensor confirm
					TRACE1("Rx sensor: %i\n",mce_rsp_data);
					break;
				default:
					TRACE2("9F %02X response data: %X\n",mce_rsp,mce_rsp_data);
					break;
				}
				rx_state=0;
			}
			break;
		case 4: // FF response
			mce_rsp_data=0;
			++rx_state;
			switch(mce_rsp=b) {
			case 0x0B:
			case 0x1B:
				mce_chunk_len=1;
				break;
			case 0x18:
				mce_chunk_len=4;
				break;
			case 0xFE:
				rx_state=0;
				TRACE0("Invalid FF command issued\n");
				TRACE0("*** Dongle is probably wedged ***\n");
				break;
			default:
				rx_state=0;
				TRACE1("Invalid FF response: %02X\n",mce_rsp);
				ASSERT(FALSE);
				break;
			}
			break;
		case 5:
			mce_rsp_data=(mce_rsp_data<<8)|b;
			if(!--mce_chunk_len) {
				switch(mce_rsp) {
				case 0x0B: // ??? report
					TRACE1("FF 0B response: %02X\n",mce_rsp_data);
					break;
				case 0x1B: // ??? report
					TRACE1("FF 1B response: %02X\n",mce_rsp_data);
					break;
				case 0x18: // ??? report
					TRACE1("FF 18 response: %08X\n",mce_rsp_data);
					break;
				default:
					TRACE2("FF %02X response data: %X\n",mce_rsp,mce_rsp_data);
					break;
				}
				rx_state=0;
			}
			break;
		}
	}
} |
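The 9F 06 and 8x descriptions in Kevin's notes translate directly into code. The sketch below is not from any LIRC driver: the function names and the prescaler-selection policy are my own, and only the formulas and byte layouts come from the notes above. It computes the mm/ff bytes for a requested carrier frequency and packs mark/space durations into MCE data bytes.

    #include <assert.h>
    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Prescaler clocks from the "9F 06 mm ff" table in the notes. */
    static const uint32_t mce_clk[4] = { 10000000, 2500000, 625000, 156250 };

    /* Compute mm/ff for a target carrier: ff = (clk / frequency) - 1.
     * Picking the fastest clock that still fits ff into one byte gives
     * the finest frequency resolution; the notes do not say how the
     * original software chooses the prescaler, so this policy is an
     * assumption. */
    int mce_carrier_bytes(uint32_t freq_hz, uint8_t *mm, uint8_t *ff)
    {
        for (int i = 0; i < 4; i++) {
            uint32_t div = (mce_clk[i] + freq_hz / 2) / freq_hz; /* rounded ff+1 */
            if (div >= 1 && div <= 256) {
                *mm = (uint8_t)i;
                *ff = (uint8_t)(div - 1);
                return 0;
            }
        }
        return -1;                      /* carrier out of range */
    }

    /* Pack one mark (on=1) or space (on=0) into 8x-block data bytes:
     * bit 7 is the IR state, bits 0-6 the duration in 50 us ticks, so
     * one byte covers at most 127 * 50 = 6350 us and longer durations
     * are split across several bytes.  Returns the byte count. */
    size_t mce_pack_duration(uint8_t *out, size_t cap, unsigned us, int on)
    {
        size_t n = 0;
        unsigned ticks = (us + 25) / 50;    /* round to nearest tick */
        while (ticks && n < cap) {
            unsigned chunk = ticks > 127 ? 127 : ticks;
            out[n++] = (uint8_t)((on ? 0x80 : 0x00) | chunk);
            ticks -= chunk;
        }
        return n;
    }

    int main(void)
    {
        uint8_t mm, ff, buf[8];

        mce_carrier_bytes(40000, &mm, &ff);
        printf("40 kHz carrier -> 9F 06 %02X %02X\n", mm, ff);

        size_t n = mce_pack_duration(buf, sizeof buf, 900, 1);
        printf("900 us mark    -> %zu byte(s): %02X\n", n, buf[0]);
        return 0;
    }

For 40 kHz this reproduces the 9F 06 00 F9 entry from the table above, since 10000000 / 40000 - 1 = 249 = 0xF9.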
From: Jon S. <jon...@gm...> - 2007-02-14 15:50:35
|
On 2/14/07, Vojtech Pavlik <vo...@su...> wrote: > On Wed, Feb 14, 2007 at 09:44:46AM -0500, Jon Smirl wrote: > > >Sending an IR signal involves things like setting the carrier frequency, > > >duty cycle, and then writing a stream of timing values. > > >I think that evdev in general is not the right interface for this. > > > > This could be worked into the evdev support. lircd.conf would contain > > entries for the controls you wanted to send codes for. This info could > > be used to created an uinput/evdev entry for the control. If evdev > > supports pushing events, lircd could take the event and convert it to > > IR info and push it down into the lirc drivers. > > > > We need Vojtech to tell us if we can push events into uinput. > > You can. However, the event direction depends on the event type: EV_KEY > always goes from device to handler, EV_LED goes both ways. You'd have a > new event type for sending out IR events: This is different from > pressing a key. But it's trivial to add. Does pushing an event work with an app using uinput? > > -- > Vojtech Pavlik > Director SuSE Labs > -- Jon Smirl jon...@gm... |
From: Vojtech P. <vo...@su...> - 2007-02-14 16:02:12
|
On Wed, Feb 14, 2007 at 10:50:27AM -0500, Jon Smirl wrote: > On 2/14/07, Vojtech Pavlik <vo...@su...> wrote: > >On Wed, Feb 14, 2007 at 09:44:46AM -0500, Jon Smirl wrote: > >> >Sending an IR signal involves things like setting the carrier frequency, > >> >duty cycle, and then writing a stream of timing values. > >> >I think that evdev in general is not the right interface for this. > >> > >> This could be worked into the evdev support. lircd.conf would contain > >> entries for the controls you wanted to send codes for. This info could > >> be used to created an uinput/evdev entry for the control. If evdev > >> supports pushing events, lircd could take the event and convert it to > >> IR info and push it down into the lirc drivers. > >> > >> We need Vojtech to tell us if we can push events into uinput. > > > >You can. However, the event direction depends on the event type: EV_KEY > >always goes from device to handler, EV_LED goes both ways. You'd have a > >new event type for sending out IR events: This is different from > >pressing a key. But it's trivial to add. > > Does pushing an event work with an app using uinput? Interesting question. Of course uinput can generate any events. But they will only be delivered either back to it, or to event handlers (evdev, ...) -- Vojtech Pavlik Director SuSE Labs |
From: Jon S. <jon...@gm...> - 2007-02-14 16:27:50
|
On 2/14/07, Vojtech Pavlik <vo...@su...> wrote: > On Wed, Feb 14, 2007 at 10:50:27AM -0500, Jon Smirl wrote: > > On 2/14/07, Vojtech Pavlik <vo...@su...> wrote: > > >On Wed, Feb 14, 2007 at 09:44:46AM -0500, Jon Smirl wrote: > > >> >Sending an IR signal involves things like setting the carrier frequency, > > >> >duty cycle, and then writing a stream of timing values. > > >> >I think that evdev in general is not the right interface for this. > > >> > > >> This could be worked into the evdev support. lircd.conf would contain > > >> entries for the controls you wanted to send codes for. This info could > > >> be used to created an uinput/evdev entry for the control. If evdev > > >> supports pushing events, lircd could take the event and convert it to > > >> IR info and push it down into the lirc drivers. > > >> > > >> We need Vojtech to tell us if we can push events into uinput. > > > > > >You can. However, the event direction depends on the event type: EV_KEY > > >always goes from device to handler, EV_LED goes both ways. You'd have a > > >new event type for sending out IR events: This is different from > > >pressing a key. But it's trivial to add. > > > > Does pushing an event work with an app using uinput? > > Interesting question. Of course uinput can generate any events. But they > will only be delivered either back to it, or to event handlers (evdev, > ...) You need something like this: lircd opens /dev/uinput once for each remote control. This creates the corresponding /dev/input/event* nodes. Remote gets clicked, lircd catches the event and passes it to uinput. User apps can read events from the /dev/input/event* nodes. Now a user app wants to generate an event. The user app writes it to the appropriate /dev/input/event* node. Does this node know it was created by /dev/uinput? Will the event written to /dev/input/event* then be readable on the corresponding /dev/uinput handle? 
You don't want the app opening /dev/uinput because that would force lircd to read events from a /dev/input/event* for each app, and lircd isn't going to know what is going on. -- Jon Smirl jon...@gm... |
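The "lircd opens /dev/uinput once for each remote control" flow sketched above looks roughly like the code below. This is only an illustration against the classic uinput interface (a struct uinput_user_dev written before UI_DEV_CREATE); the device name, vendor/product ids, and the digit-keys-only key list are placeholders, not anything lircd actually uses.

    #include <fcntl.h>
    #include <linux/input.h>
    #include <linux/uinput.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <unistd.h>

    /* Describe the virtual device for one remote.  Name and ids are
     * placeholders for illustration only. */
    void describe_remote(struct uinput_user_dev *dev)
    {
        memset(dev, 0, sizeof *dev);
        snprintf(dev->name, UINPUT_MAX_NAME_SIZE, "lircd virtual remote");
        dev->id.bustype = BUS_VIRTUAL;
        dev->id.vendor  = 0x1234;   /* placeholder */
        dev->id.product = 0x0001;   /* placeholder */
    }

    /* Create a /dev/input/event* node backed by uinput; returns the
     * uinput fd, or -1 if uinput is unavailable. */
    int create_remote_device(void)
    {
        struct uinput_user_dev dev;
        int fd = open("/dev/uinput", O_WRONLY | O_NONBLOCK);
        if (fd < 0)
            return -1;              /* no uinput support or no permission */

        ioctl(fd, UI_SET_EVBIT, EV_KEY);
        for (int k = KEY_1; k <= KEY_0; k++)  /* KEY_1..KEY_9, then KEY_0 */
            ioctl(fd, UI_SET_KEYBIT, k);

        describe_remote(&dev);
        if (write(fd, &dev, sizeof dev) != sizeof dev) {
            close(fd);
            return -1;
        }
        ioctl(fd, UI_DEV_CREATE);   /* node appears; udev could now load a map */
        return fd;
    }

    /* Report one decoded IR button press through the virtual device. */
    void send_key(int fd, int code, int pressed)
    {
        struct input_event ev;
        memset(&ev, 0, sizeof ev);
        ev.type  = EV_KEY;
        ev.code  = code;
        ev.value = pressed;
        write(fd, &ev, sizeof ev);
        ev.type  = EV_SYN;          /* flush the event to readers */
        ev.code  = SYN_REPORT;
        ev.value = 0;
        write(fd, &ev, sizeof ev);
    }

    int main(void)
    {
        int fd = create_remote_device();
        if (fd < 0) {
            puts("uinput not available (need /dev/uinput and permission)");
            return 0;
        }
        send_key(fd, KEY_1, 1);     /* press and release "1" */
        send_key(fd, KEY_1, 0);
        ioctl(fd, UI_DEV_DESTROY);
        close(fd);
        return 0;
    }

With one such device per detected remote, applications see the remote as just another keyboard, which is the unified model being discussed.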
From: Vojtech P. <vo...@su...> - 2007-02-14 19:18:25
|
On Wed, Feb 14, 2007 at 11:27:44AM -0500, Jon Smirl wrote: > You need something like this: > > lircd opens /dev/uinput once for each remote control. This creates the > corresponding /dev/input/event* nodes. > > Remote gets clicked, lircd catches event and passes it to uinput. User > app can read events from the /dev/input/event* nodes > > Now user app wants to generate an event. User app writes it to the > appropriate /dev/input/event* node. That's fine as long as it's an EV_IR_SEND event type, which can be trivially added. EV_KEY will not work here, as EV_KEY doesn't get passed up, since it's not possible to physically press a key on the remote by software action. The keys and the transmit ability will have to be treated as different event types, but that's it. > Does this node know it was created > by /dev/uinput Will the event written to /dev/input/event* then be > readable on the corresponding /dev/uinput handle? The kernel knows it, and it'll pass it to uinput for reading by the appropriate process/handle. -- Vojtech Pavlik Director SuSE Labs |
From: Jon S. <jon...@gm...> - 2007-02-14 22:52:11
|
On 2/14/07, Vojtech Pavlik <vo...@su...> wrote: > On Wed, Feb 14, 2007 at 11:27:44AM -0500, Jon Smirl wrote: > > > You need something like this: > > > > lircd opens /dev/uinput once for each remote control. This creates the > > corresponding /dev/input/event* nodes. > > > > Remote gets clicked, lircd catches event and passes it to uinput. User > > app can read events from the /dev/input/event* nodes > > > > Now user app wants to generate an event. User app writes it to the > > appropriate /dev/input/event* node. > > That's fine as long as it's EV_IR_SEND event type, which can be > trivially added. EV_KEY will not work here, as EV_KEY doesn't get passed > up, since it's not possible to physically press a key on the remote by > software action. That's not technically true. We really are pressing a key. In this case the computer is acting as if it were the remote control and we are sending the press out to some other device. There are real IR keyboards and mice too. > The keys and the transmit ability will have to be treated as different > event types, but that's it. > > > Does this node know it was created > > by /dev/uinput Will the event written to /dev/input/event* then be > > readable on the corresponding /dev/uinput handle? > > The kernel knows it, and it'll pass it to the uinput for reading by the > appropriate process/handle. > > -- > Vojtech Pavlik > Director SuSE Labs > -- Jon Smirl jon...@gm... |