emacs-vr-mode-devel Mailing List for Emacs VR Mode
Brought to you by: grifgrif
Archive activity: 2002: Feb (5) -- 2003: Aug (3) -- 2004: Jul (3), Aug (1) -- 2005: Apr (2) -- 2006: Dec (2) -- 2010: Apr (1)
From: Albert <ab....@gm...> - 2010-04-21 15:40:31
Hi,

In the emacs 23.1.96 pretest, one of the last before the release candidate, an arithmetic overflow error occurs when loading vr.el. This is probably due to a change in the lisp reader, see e.g. http://lists.gnu.org/archive/html/emacs-devel/2010-03/msg00169.html

The problem is caused by the following piece of code in vr.el (release 0.11):

(defun vr-pack-msg (msg)
  (if (featurep 'bindat)
      (bindat-pack vr-length-pack-spec
                   (list (cons 'length (length msg))
                         (cons 'msg msg)))
    ;; fall back to old method
    (let ((i (length msg)))
      (format "%c%c%c%c%s"
              (lsh (logand i 4278190080) -24)  ; <<<<<<<<<<<<<<<<<<<< here
              (lsh (logand i 16711680) -16)
              (lsh (logand i 65280) -8)
              (logand i 255)
              msg))))

If the offending line (the one with 4278190080) is commented out, it works again, since in emacs 23 bindat is available.

Albert.
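[Editor's note: what the fallback branch above computes by hand is a 4-byte big-endian length prefix followed by the message, which is also what bindat-pack produces from vr-length-pack-spec. For readers less familiar with the logand/lsh masks, here is the same wire format sketched in Python (an illustration only; vr.el itself is Emacs Lisp, and pack_msg is a made-up name):]

```python
import struct

def pack_msg(msg: bytes) -> bytes:
    """Prefix msg with its length as a 4-byte big-endian integer.

    Mirrors the masks in vr-pack-msg's fallback branch:
    0xFF000000 >> 24, 0xFF0000 >> 16, 0xFF00 >> 8, 0xFF.
    Hypothetical illustration -- the real code is Emacs Lisp.
    """
    return struct.pack(">I", len(msg)) + msg

# e.g. pack_msg(b"hi") -> b"\x00\x00\x00\x02hi"
```

The overflow Albert hits comes from the literal 4278190080 (#xFF000000) no longer fitting the reader's fixnum range in that pretest; Python integers have no such limit, so the sketch sidesteps the issue entirely.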
From: ATT W. <tga...@or...> - 2006-12-05 07:30:45
30496
From: Jan R. <ja...@ry...> - 2005-04-26 16:02:56
>>>>> "Ken" == Ken Olum <kd...@co...> writes:

Ken> I'm thinking of trying to connect xvoice to the NaturallySpeaking back end using WINE. I had the idea that I would use the latest version of NaturallySpeaking, so I asked ScanSoft about their software development kit, but they want $2000 for it. That seems like a lot, especially since I have no idea whether it will work.

Ken> What does VR mode use? It looks like it is still the software development kit from Dragon, who I guess made it freely available long ago. Is that right? Does it still work with ScanSoft versions of NaturallySpeaking?

[...replying late...]

To develop vx-mode, which is based on VR Mode, I used the SDK which used to be freely available. The resulting binary seems to work just fine with NaturallySpeaking 7 and 8 (these are the versions I owned and tested with).

Sorry to hear about ScanSoft's attitude -- I find it strange and disconcerting.

Hope that helps...

--J.
From: Ken O. <kd...@co...> - 2005-04-18 16:05:04
I'm thinking of trying to connect xvoice to the NaturallySpeaking back end using WINE. I had the idea that I would use the latest version of NaturallySpeaking, so I asked ScanSoft about their software development kit, but they want $2000 for it. That seems like a lot, especially since I have no idea whether it will work.

What does VR mode use? It looks like it is still the software development kit from Dragon, who I guess made it freely available long ago. Is that right? Does it still work with ScanSoft versions of NaturallySpeaking?

Thanks very much.

Ken Olum
From: Jan R. <ja...@ry...> - 2004-08-14 22:40:13
VX Mode allows you to use Dragon NaturallySpeaking with XEmacs. It is a rewrite of Barry Jaspan's VR-mode. I started with small modifications to VR-mode, the goal being to make it work with XEmacs, but I ended up rewriting most of it.

The main goal of this project was to provide continuous dictation support for natural text in XEmacs. Command and programming support were considered secondary. I wanted to have a tool that would enable me to dictate my e-mail in Gnus, as well as write long documents. Another important goal was to be able to dictate without using Windows. Obviously, Dragon NaturallySpeaking only runs under Windows, so one has to use it in some way -- but I wanted to be able to run my Windows installation in a VMware emulator or on a separate machine, with my Emacs communicating with this machine over the network. I'm trying to achieve as much as possible without actually interacting with the Windows desktop, which might not even be visible in normal circumstances.

Main features:

-- Works over the network -- your XEmacs does not have to run under Windows, nor does it have to run on the same machine that Dragon NaturallySpeaking runs on.

-- Supports Select-and-Say and an Emacs-based corrections menu. (Click the third mouse button on a word or on a selection to get the corrections menu.)

-- Works with all modern XEmacs versions (tested with 21.4.15 and 21.5.17).

-- Works with abbrev-mode.

-- Synchronizes correctly. In fact, I don't even have a manual synchronize function, and I have not managed to break synchronization in a long time now.

-- Synchronizes efficiently. Instead of sending every change as a character (with huge protocol overhead), we keep track of which region of text has been changed and resynchronize the whole region (but not the whole buffer) at the beginning of each utterance.

I wanted to announce this even though the software is really of Alpha quality, because a number of people have asked me to. For those of you who want to dictate into an XEmacs running under Linux (or on a different machine from the one that you run DNS on), this is a solution that works right now. I'm running Dragon NaturallySpeaking under VMware on a Linux host, and I use VX Mode to connect to it. If you want to dictate into an XEmacs running under Windows, you will have to wait -- I have to learn how to tell Dragon NaturallySpeaking not to provide its own dictation "support" to XEmacs first.

If you're brave enough, get the software from http://jan.rychter.com/vx-mode/vx-mode-0.08.tar.bz2

Plans for the future:

1. Rework the VX.exe executable to use the same APIs that Natlink uses. They are much more advanced than the standard ones that Dragon supplies and documents.

2. Use the advanced grammar APIs to implement proper context-sensitive command support. Emacs can provide us with loads of context information, we should use it! (Think of what TAB completion and dabbrev-mode do; grammars based on that would allow you to navigate your Emacs easily.)

3. Implement functionality that would allow Emacs to manipulate the user's dictionary, so that clicking the third mouse button on a word gives several correction choices and the option to add this word to the dictionary.

I'm looking for help from people who are skilled with Windows C++ programming -- I could really use some help with task number (1) above.

--J.
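[Editor's note: the "synchronizes efficiently" idea in this announcement -- track the smallest region covering all edits since the last sync, then resend only that region at the start of each utterance -- can be sketched roughly as follows. This is a hypothetical Python illustration of the strategy, not the actual VX Mode code, which is Emacs Lisp; all names here are made up:]

```python
class DirtyRegionTracker:
    """Track the smallest region covering all edits since the last sync.

    Sketch of the "resynchronize the whole region, not the whole buffer"
    strategy described above (assumed names, illustration only).
    """

    def __init__(self):
        self.start = None  # lowest edited offset seen so far
        self.end = None    # highest edited offset seen so far

    def record_change(self, begin, end):
        # Grow the dirty region to cover this edit.
        if self.start is None:
            self.start, self.end = begin, end
        else:
            self.start = min(self.start, begin)
            self.end = max(self.end, end)

    def take_region(self, buffer_text):
        # At the start of an utterance, return only the dirty region
        # (offsets plus text), then reset; None means nothing changed.
        if self.start is None:
            return None
        region = (self.start, self.end, buffer_text[self.start:self.end])
        self.start = self.end = None
        return region
```

Compared with sending every change character by character, a single region update per utterance keeps protocol overhead roughly constant regardless of how many small edits happened in between.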
From: Jan R. <ja...@ry...> - 2004-07-26 04:08:53
Hello again,

Thanks for the feedback. I was beginning to be a little bit concerned that I was the only user of this software.

> Wow, reimplementing VR mode, that's impressive! I fully agree that there are things in it that should be redone, but it works for me and will largely be made obsolete by VoiceCoder anyway, so I haven't been motivated to do anything.

I was actually wondering about the relationship between VoiceCoder and VR-mode. From what I could understand, the goals for the VoiceCoder project are different -- it is supposed to support programming, instead of continuous text dictation. But perhaps I'm wrong?

[...]

> In any case, I had no idea XEmacs did not have overlays, that's too bad! I think it would be simpler in any case to just use change events instead of overlays, but that's how it was written.

XEmacs has extents instead of overlays. That is a different API to what amounts to basically the same thing. I have the impression that XEmacs extents are a little more powerful than overlays.

> > In the process of rebuilding the code I have learned quite a bit about how it works, also diving into VR.exe along the way. I have discovered many things which were rather puzzling to me -- for example, in spite of the efforts to keep all changes synchronized after every buffer modification, a complete resynchronization is done at the beginning of each recognition.
>
> This is not how it should work. A complete resynchronization should only happen if the buffers lose sync. Are you actually seeing this happen? In that case I am very puzzled. If you are just inferring from the code, I believe you missed something. It certainly does not resynchronize every time for me.

Sorry about that. I confused several issues here. Indeed, it doesn't resynchronize every time.

> > The first changes to VR.exe were rather simple -- since my goal was to dictate into Emacs/XEmacs running anywhere (not necessarily on the same Windows desktop), I had to change the way dictation was activated. In principle, I wanted to dictate "into" the VR.exe window. The advantage of these changes is that it let me get rid of the "hook.DLL" dependency.
>
> I'm confused. VR mode does provide for the possibility of dictating into instances of Emacs running on other computers, as long as you have a window (through an X server or terminal window) that Dragon can connect to. Do you really mean you want to dictate into an Emacs that you can't see? Or am I not understanding what you mean?

Well, try to look at this differently: I do see my Emacs window, but I would prefer not to see the Windows desktop that NaturallySpeaking runs on. I don't normally use Windows; in fact, NaturallySpeaking has been the primary application which forced me to start using it. What I do is I start Windows in a VMware emulator on my normal Linux desktop. The VMware window might, or might not, be visible -- and in any case, this is not where I work. This means that I had to do away with the whole activation code that was present in VR-mode, because it simply doesn't apply in my case. I want to dictate into the VR-mode window on the Windows desktop -- in fact, that is going to be the only window on my Windows desktop.

> > As I was getting a better understanding of the code, I was also trying to simplify it. The first thing to go away on the VR.exe side were multiple Clients. The original code tried to maintain a separate Client object for each Emacs frame, with each client object maintaining a list of buffers. This doesn't really reflect very well how Emacs manages frames and buffers, and introduces a lot of complexity. So, I have reworked the code to use only a single client.
>
> The clients do not (I think...) use a separate clients object for each Emacs frame; the clients are used to separate instances of Emacs. For example, if I'm running one Emacs on my own Windows machine, and another Emacs on a remote machine, and both are connected to VR mode, they will be represented by two different client objects. Maybe you could use a single client object (I haven't looked at the code recently, so I can't remember if it would work), but I think to Dragon, each client object represents the different windows for which dictation is enabled.

Thanks for the explanation, I think I understand better now. However, I don't really see how this functionality could be useful if one runs only one instance of Emacs...

> > After that, I got the synchronization in a single buffer to work. It actually wasn't as difficult as I had feared. I am now at a point where I can dictate into an XEmacs buffer, use the select and say functionality, and use "scratch that". It works for simple text buffers, also with auto-fill enabled. Of course, you can also do edits on the XEmacs side and mix them with dictation freely. There are still some bugs remaining, but the basic functionality is there.
> >
> > Then, I proceeded to implement dictation in multiple buffers. This started getting tricky: I had to be very careful which buffers are active and which corresponding custom dictation objects are activated. And then, I finally hit a real problem. What I wanted to achieve was flawless integration. I wanted to be able to dictate into Emacs at any time, not just into "voice-activated" buffers. Basically, whenever I can see the point on the screen, I should be able to dictate.
>
> This is also possible with VR mode, as long as natural text is enabled. There are some cases where it does not necessarily make sense to enable VR mode on a buffer, for example if the buffer contains some strange structures or has a lot of customized key mappings that will confuse VR mode to the point that it's not useful. In those cases, it's usually simpler to just let natural text send the keystrokes to Emacs directly. If one really wanted to, it would be a simple change to make to VR mode as well.

Well, "natural text" wasn't really an option for me, since my Emacs runs under a different operating system.

[...]

> > Of course, there are some ways of artificially working around that -- but I started thinking: why do we bother with storing the complete contents of several buffers on the DNS side? It takes a lot of effort, and a lot of communication, and is rather wasteful of resources. I also have serious doubts whether DNS actually uses these contents for anything -- I suspect only the immediate context is used by Dragon for things like capitalization, spacing and punctuation, and the rest is never looked at. So, I think the next step for me will be to rework VR.exe to just use a single custom dictation object, whose contents would be synchronized to whatever is currently needed. I also think that I will not bother with synchronizing entire buffers. Instead, I will only synchronize a small area -- closely related to what is actually visible on the user's screen. Comments and suggestions are appreciated.
>
> This has been discussed before, I think, and I'm pretty sure this is how VoiceCoder does it. Though I'm not sure the contents are not used, I definitely have the impression that if I dictate some strange word it has a higher probability of being correctly recognized if it's already visible somewhere in the window. But sending the visible area should take care of that as well. The only drawback I can imagine is that most transmissions will become longer, which may impact responsiveness over a slow network. I'm also not sure how "scratch that" functionality works in those instances, but I'm pretty sure it can be done.

I decided to just go ahead and try this approach. For the moment, I have reworked the code so that it uses just a single custom dictation object. That object gets updated with whatever is in my active XEmacs window at the beginning of each recognition. Ticks are being used for synchronization if the buffer name hasn't changed; otherwise the entire buffer gets sent. The result works extremely well -- in fact, it already achieves about 80% of what I was hoping for. I don't have to "voice activate" any buffers, I can simply start dictating anywhere. I can also switch into any buffer at any time and say "select something" and "scratch that" and have it work. Auto-fill and LaTeX double quotes work just fine.

The only problem that I can see remaining for simple dictation is that Dragon insists on its own spacing rules. I did not find a way to disable automatic spacing in the Dragon API. This is a little bit annoying at times: for example, if you open a quote (") in LaTeX, Emacs will transform it into a double leftquote (``), and if you subsequently start dictating right after the quote, DNS will insist on inserting a space there.

I would also very much like to find a way to use the quick correct menu remotely, and a way to make sure that my corrections make it into what DNS knows about my speech. For the moment, I'm afraid DNS cannot learn a lot from my corrections.

> > Another thing which got me thinking was that there are two ways of interfacing to Dragon NaturallySpeaking: you can use the Dragon native API or SAPI. From what I can see, the Dragon native API is slightly easier to use, but also assumes a lot about what you're trying to do (for example, it automatically manages spacing for you, and there is no way to turn it off), while SAPI is more complex, but also more flexible. An example of this flexibility is the "Phrase Hypothesis" method, which is supposed to provide several hypotheses for a given utterance. This method might be very useful if we know a bit about the context, which we often do in Emacs. However, I have no idea how well Dragon supports SAPI and whether all methods are actually implemented. I also don't know what is the strategic direction for Dragon NaturallySpeaking -- which API will continue to be supported in the future.
> >
> > And finally (this e-mail is already long enough) -- does anybody know if there is a way to implement things like the QuickCorrect menu, adding words to the dictionary, or word pronunciation training without the Windows desktop? It seems to me that the Dragon API assumes that the user is going to sit in front of a Windows desktop. You can request a dialog box to be shown, but you can't simply access the functionality yourself. But perhaps I simply haven't found a way to do it yet.
>
> Well, assuming that you are going to sit in front of the Windows machine seems to be a pretty safe assumption given that Dragon only runs on Windows machines? NatLink has functionality for this, but I haven't played with it so I don't know exactly what it can do.

VMware, win4lin, WINE and other emulators undermine this assumption quite a bit. Also, I have been assuming that Dragon NaturallySpeaking is also being used in applications that do not display the Windows desktop, such as automated transcription, or other server-side applications.

> I hope some of this was useful information... ;-)

Definitely! Thanks,

--Jan
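[Editor's note: the tick-based resynchronization Jan describes -- rely on the buffer's modification tick when the buffer name hasn't changed, and resend everything otherwise -- could look roughly like this. A hypothetical Python sketch of the decision logic only; the real code lives in the XEmacs-side elisp and VX.exe, and all names here are assumed:]

```python
def sync_payload(state, buffer_name, tick, visible_text):
    """Decide what to send to the speech engine before a recognition.

    Sketch of the strategy described above: switching buffers forces a
    full resync; an unchanged modification tick means nothing to send;
    a changed tick in the same buffer resends only the visible region.
    `state` is a dict carrying the last-seen buffer name and tick.
    """
    if state.get("buffer") != buffer_name:
        state["buffer"], state["tick"] = buffer_name, tick
        return ("full", visible_text)   # different buffer: full resync
    if state.get("tick") == tick:
        return ("none", None)           # unchanged: nothing to send
    state["tick"] = tick
    return ("region", visible_text)     # edited: resend the visible area
```

The appeal of the tick check is that it is a single integer comparison, so the common case (dictating continuously into one unchanged buffer) costs almost nothing on the wire.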
From: Patrik J. <gri...@us...> - 2004-07-25 18:27:45
Hi Jan,

Wow, reimplementing VR mode, that's impressive! I fully agree that there are things in it that should be redone, but it works for me and will largely be made obsolete by VoiceCoder anyway, so I haven't been motivated to do anything.

VR mode should work with every Emacs earlier than v21. There were strange issues with version 21 that I could not track down (Eric also noticed some of this and suspected an Emacs bug), and since I didn't need any features from 21 I just continued using 20. I haven't even tried it lately, so I don't know if those issues are still present.

In any case, I had no idea XEmacs did not have overlays, that's too bad! I think it would be simpler in any case to just use change events instead of overlays, but that's how it was written.

I've commented on some specifics further down in the message.

Regards,

/Patrik

At 05:15 PM 7/23/2004 -0700, Jan Rychter wrote:

> Hi,
>
> Having recently decided that I really need dictation in my XEmacs, I have taken a close look at VR-mode and started hacking on it. I have spent about a week on it, and it's time to write about my experiences and bounce some ideas around.
>
> First of all, many thanks to Barry Jaspan. Without his code, I would not have been able to get started with this work.
>
> I started by just trying to run VR-mode under XEmacs. That would not work for a number of reasons. The code has really been written with GNU Emacs in mind -- it uses overlays, which XEmacs does not have (instead providing extents). It also seems to be quite old and works only with older versions of Emacs.
>
> The next logical step was to try to modify the code and adapt it to XEmacs. But the more I looked at the code, the more I realized that it is very complex and that I simply cannot understand it. So, I decided to start from scratch on the XEmacs side and promptly proceeded to rip out all the code which I could not understand. Needless to say, that has left me with very little code :-)

Yeah, it's a mess... ;-)

> In the process of rebuilding the code I have learned quite a bit about how it works, also diving into VR.exe along the way. I have discovered many things which were rather puzzling to me -- for example, in spite of the efforts to keep all changes synchronized after every buffer modification, a complete resynchronization is done at the beginning of each recognition.

This is not how it should work. A complete resynchronization should only happen if the buffers lose sync. Are you actually seeing this happen? In that case I am very puzzled. If you are just inferring from the code, I believe you missed something. It certainly does not resynchronize every time for me.

> The first changes to VR.exe were rather simple -- since my goal was to dictate into Emacs/XEmacs running anywhere (not necessarily on the same Windows desktop), I had to change the way dictation was activated. In principle, I wanted to dictate "into" the VR.exe window. The advantage of these changes is that it let me get rid of the "hook.DLL" dependency.

I'm confused. VR mode does provide for the possibility of dictating into instances of Emacs running on other computers, as long as you have a window (through an X server or terminal window) that Dragon can connect to. Do you really mean you want to dictate into an Emacs that you can't see? Or am I not understanding what you mean?

> As I was getting a better understanding of the code, I was also trying to simplify it. The first thing to go away on the VR.exe side were multiple Clients. The original code tried to maintain a separate Client object for each Emacs frame, with each client object maintaining a list of buffers. This doesn't really reflect very well how Emacs manages frames and buffers, and introduces a lot of complexity. So, I have reworked the code to use only a single client.

The clients do not (I think...) use a separate clients object for each Emacs frame; the clients are used to separate instances of Emacs. For example, if I'm running one Emacs on my own Windows machine, and another Emacs on a remote machine, and both are connected to VR mode, they will be represented by two different client objects. Maybe you could use a single client object (I haven't looked at the code recently, so I can't remember if it would work), but I think to Dragon, each client object represents the different windows for which dictation is enabled.

> After that, I got the synchronization in a single buffer to work. It actually wasn't as difficult as I had feared. I am now at a point where I can dictate into an XEmacs buffer, use the select and say functionality, and use "scratch that". It works for simple text buffers, also with auto-fill enabled. Of course, you can also do edits on the XEmacs side and mix them with dictation freely. There are still some bugs remaining, but the basic functionality is there.
>
> Then, I proceeded to implement dictation in multiple buffers. This started getting tricky: I had to be very careful which buffers are active and which corresponding custom dictation objects are activated. And then, I finally hit a real problem. What I wanted to achieve was flawless integration. I wanted to be able to dictate into Emacs at any time, not just into "voice-activated" buffers. Basically, whenever I can see the point on the screen, I should be able to dictate.

This is also possible with VR mode, as long as natural text is enabled. There are some cases where it does not necessarily make sense to enable VR mode on a buffer, for example if the buffer contains some strange structures or has a lot of customized key mappings that will confuse VR mode to the point that it's not useful. In those cases, it's usually simpler to just let natural text send the keystrokes to Emacs directly. If one really wanted to, it would be a simple change to make to VR mode as well.

> The problem that appears is that if you simply start dictating into a new buffer, VR.exe still thinks you're dictating into the old buffer and it requests the contents of the old buffer for resynchronization. There is no easy way around that, because the new custom dictation object for the new buffer doesn't exist yet, and even if we create it at this moment, it is already too late -- dictation has started and DNS thinks we are dictating into the old custom dictation object.

There is a hook you can use that tells you when point has been moved to another buffer. This is how VR mode tells VR.exe that it should now enable another buffer for dictation.

> Of course, there are some ways of artificially working around that -- but I started thinking: why do we bother with storing the complete contents of several buffers on the DNS side? It takes a lot of effort, and a lot of communication, and is rather wasteful of resources. I also have serious doubts whether DNS actually uses these contents for anything -- I suspect only the immediate context is used by Dragon for things like capitalization, spacing and punctuation, and the rest is never looked at. So, I think the next step for me will be to rework VR.exe to just use a single custom dictation object, whose contents would be synchronized to whatever is currently needed. I also think that I will not bother with synchronizing entire buffers. Instead, I will only synchronize a small area -- closely related to what is actually visible on the user's screen. Comments and suggestions are appreciated.

This has been discussed before, I think, and I'm pretty sure this is how VoiceCoder does it. Though I'm not sure the contents are not used, I definitely have the impression that if I dictate some strange word it has a higher probability of being correctly recognized if it's already visible somewhere in the window. But sending the visible area should take care of that as well. The only drawback I can imagine is that most transmissions will become longer, which may impact responsiveness over a slow network. I'm also not sure how "scratch that" functionality works in those instances, but I'm pretty sure it can be done.

> Another thing which got me thinking was that there are two ways of interfacing to Dragon NaturallySpeaking: you can use the Dragon native API or SAPI. From what I can see, the Dragon native API is slightly easier to use, but also assumes a lot about what you're trying to do (for example, it automatically manages spacing for you, and there is no way to turn it off), while SAPI is more complex, but also more flexible. An example of this flexibility is the "Phrase Hypothesis" method, which is supposed to provide several hypotheses for a given utterance. This method might be very useful if we know a bit about the context, which we often do in Emacs. However, I have no idea how well Dragon supports SAPI and whether all methods are actually implemented. I also don't know what is the strategic direction for Dragon NaturallySpeaking -- which API will continue to be supported in the future.
>
> And finally (this e-mail is already long enough) -- does anybody know if there is a way to implement things like the QuickCorrect menu, adding words to the dictionary, or word pronunciation training without the Windows desktop? It seems to me that the Dragon API assumes that the user is going to sit in front of a Windows desktop. You can request a dialog box to be shown, but you can't simply access the functionality yourself. But perhaps I simply haven't found a way to do it yet.

Well, assuming that you are going to sit in front of the Windows machine seems to be a pretty safe assumption given that Dragon only runs on Windows machines? NatLink has functionality for this, but I haven't played with it so I don't know exactly what it can do.

I hope some of this was useful information... ;-)
From: Jan R. <ja...@ry...> - 2004-07-24 00:17:36
Hi, Having recently decided that I really need dictation in my XEmacs, I have taken a close look at VR-mode and started hacking on it. I have spent about a week on it, and it's time to write about my experiences and bounce some ideas around. First of all, many thanks to Barry Jaspan. Without his code, I would not have been able to get started with this work. I started by just trying to run VR-mode under XEmacs. That would not work for a number of reasons. The code has really been written with GNU Emacs in mind -- it uses overlays, which XEmacs does not have (instead providing extents). It also seems to be quite old and works only with older versions of Emacs. The next logical step was to try to modify the code and adapt it to XEmacs. But the more I looked at the code, the more I realized that is very complex and that I simply cannot understand it. So, I decided to start from scratch on the XEmacs side and promptly proceeded to rip out all the code which I could not understand. Needless to say, that has left me with very little code :-) In the process of rebuilding the code I have learned quite a bit about how it works, also diving into VR.exe along the way. I have discovered many things which were rather puzzling to me -- for example, in spite of the efforts to keep all changes synchronized after every buffer modification, a complete resynchronization is done at the beginning of each recognition. The first changes to VR.exe were rather simple -- since my goal was to dictate into Emacs/XEmacs running anywhere (not necessarily on the same Windows desktop), I had to change the way I dictation was activated. In principle, I wanted to dictate "into" the VR.exe window. The advantage of these changes is that it let me get rid of the "hook.DLL" dependency. As I was getting a better understanding of the code, I was also trying to simplify it. The first thing to go away on the VR.exe side were multiple Clients. 
The original code tried to maintain a separate Client object for each Emacs frame, with each client object maintaining a list of buffers. This doesn't really reflect very well how Emacs manages frames and buffers, and introduces a lot of complexity. So, I have reworked the code to use only a single client. After that, I got the synchronization in a single buffer to work. It actually wasn't as difficult as I had feared. I am now at a point where I can dictate into an XEmacs buffer, use the select and say functionality, and use "scratch that". It works for simple text buffers, also with auto-fill enabled. Of course, you can also do edits on the XEmacs side and mix them with dictation freely. There are still some bugs remaining, but the basic functionality is there. Then, I proceeded to implement dictation in multiple buffers. This started getting tricky: I had to be very careful which buffers are active in which corresponding custom dictation objects are activated. And then, I finally hit a real problem. What I wanted to achieve was flawless integration. I wanted to be able to dictate into Emacs at any time, not just into "voice-activated" buffers. Basically, whenever I can see the point on the screen, I should be able to dictate. The problem that appears is that if you simply start dictating into a new buffer, VR.exe still thinks you're dictating into the old buffer and it requests the contents of the old buffer for resynchronization. There is no easy way around that, because the new custom dictation object for the new buffer doesn't exist yet, and even if we create it at this moment, it is already too late -- dictation has started and DNS things we are dictating into the old custom dictation object. Of course, there are some ways of artificially working around that -- but I started thinking: why do we bother with storing the complete contents of several buffers on the DNS side. 
It takes a lot of effort, and a lot of communication, and is rather wasteful of resources. I also have serious doubts whether DNS actually uses this contents for anything -- I suspect only the immediate context is used by Dragon for things like capitalization, spacing and punctuation, and the rest is never looked at. So, I think the next step for me will be to rework VR.exe to just use a single custom dictation object, whose contents would be synchronized to whatever is currently needed. I also think that I will not bother with synchronizing entire buffers. Instead, I will only synchronized a small area -- closely related to what is actually visible on the users screen. Comments and suggestions are appreciated. Another thing which got me thinking was that there are two ways of interfacing to Dragon NaturallySpeaking: you can use the Dragon native API or SAPI. From what I can see, the Dragon native API is slightly easier to use, but also assumes a lot about what you're trying to do (for example, it automatically manages spacing for you, and there is no way to turn it off), while SAPI is more complex, but also more flexible. An example of this flexibility is the "Phrase Hypothesis" method, which is supposed to provide several hypotheses for a given utterance. This method might be very useful if we know a bit about the context, which we often do in Emacs. However, I have no idea how well Dragon supports SAPI and whether all methods are actually implemented. I also don't know what is the strategic direction for Dragon NaturallySpeaking -- which API will continue to be supported in the future. And finally (this e-mail is already long enough) -- does anybody know if there is a way to implement things like the QuickCorrect menu, adding words to the dictionary, or word pronunciation training without the Windows desktop? It seems to me that the Dragon API and assumes that the user is going to sit in front of a Windows desktop. 
You can request a dialog box to be shown, but you can't simply access the functionality yourself. But perhaps I simply haven't found a way to do it yet. Your thoughts and suggestions would be very much appreciated. --J. |
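[Editorial note: the visible-region idea in the message above can be sketched in a few lines of Emacs Lisp. This is only an illustration, not code from vx-mode; the function name is hypothetical, while `window-start`, `window-end`, and `buffer-substring-no-properties` are standard Emacs primitives.]

```
;; Hypothetical sketch: instead of mirroring whole buffers to the
;; speech engine, compute just the region visible in the selected
;; window and synchronize only that.
(defun vx-visible-region ()
  "Return (START END TEXT) for the visible portion of the current buffer."
  (let ((start (window-start))
        (end (window-end nil t)))   ; t = update before returning END
    (list start end (buffer-substring-no-properties start end))))
```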
From: C R <mi1...@ho...> - 2003-08-19 20:17:28
|
Hello Patrik, I am interested to hear about the impending release of VoiceCode. I have looked at the SourceForge web page several times but never really got a sense of what you are trying to do. I have never managed to get the movie to download correctly. How impending is impending? Do you guys need any help? I don't have a lot of Python, so I probably wouldn't be much use in that regard, but I could certainly help you with other aspects of the project (documentation, Emacs integration, any other general work you need done). There are a lot of people who need this stuff! Colin Rhodes >From: Patrik <pa...@uc...> >To: "cr" <mi1...@ho...>, ><ema...@li...> >Subject: Re: [Emacs-vr-mode-devel] how to change the command list based on >the mode >Date: Mon, 18 Aug 2003 16:45:04 -0700 > >hi, > >This functionality is not part of VR mode. I've been thinking of adding >it, but with the impending release of VoiceCode, I've been reluctant to put >in the fairly significant effort needed. However, there is something >called Emacs VoiceCommander which provides this functionality. >Essentially, it creates a DVC file with all the Emacs commands, and uses a >hook into the title bar to select them. The drawback is that the command >is sent as an escape-x keyboard command sequence which has minor >compatibility problems with VR mode, but I'm using it all the time. The >way I've set it up is that common, mode-insensitive, commands like the ones >that are in the VR mode default command list are executed through VR mode, >and the more rarely used mode-sensitive commands are executed through voice >commander. > >Check out http://home.hetnet.nl/~vandamhans/commander/commanderindex.htm > >If you have any further questions, let me know. > >Regards, > >/Patrik > >003 -0700, cr wrote: > >Hello everybody, > >I am an avid user of the mode and fledgling Emacs programmer. This >software has saved me from going nuts because I thought I wouldn't be able >to program again when my hands really played up. 
So thank you in advance! > >What I would really like to be able to do is to change the command list >based on the major mode. In short, when I am in C++ mode I want a different >set of keywords than when I am in lisp mode. > >I know how to modify the variable from inside the mode hook, but it doesn't >seem to work. Has anybody tried this or managed to make it work? > >Thank you, > >Colin Rhodes > >_______________________________________________ >Emacs-vr-mode-devel mailing list >Ema...@li... >https://lists.sourceforge.net/lists/listinfo/emacs-vr-mode-devel |
From: Patrik <pa...@uc...> - 2003-08-18 23:45:17
|
Hi, This functionality is not part of VR mode. I've been thinking of adding it, but with the impending release of VoiceCode, I've been reluctant to put in the fairly significant effort needed. However, there is something called Emacs VoiceCommander which provides this functionality. Essentially, it creates a DVC file with all the Emacs commands, and uses a hook into the title bar to select them. The drawback is that the command is sent as an escape-x keyboard command sequence which has minor compatibility problems with VR mode, but I'm using it all the time. The way I've set it up is that common, mode-insensitive, commands like the ones that are in the VR mode default command list are executed through VR mode, and the more rarely used mode-sensitive commands are executed through voice commander. Check out http://home.hetnet.nl/~vandamhans/commander/commanderindex.htm If you have any further questions, let me know. Regards, /Patrik 003 -0700, cr wrote: Hello everybody, I am an avid user of the mode and fledgling Emacs programmer. This software has saved me from going nuts because I thought I wouldn't be able to program again when my hands really played up. So thank you in advance! What I would really like to be able to do is to change the command list based on the major mode. In short, when I am in C++ mode I want a different set of keywords than when I am in lisp mode. I know how to modify the variable from inside the mode hook, but it doesn't seem to work. Has anybody tried this or managed to make it work? Thank you, Colin Rhodes |
From: cr <mi1...@ho...> - 2003-08-18 23:31:57
|
Hello everybody, I am an avid user of the mode and fledgling Emacs programmer. This software has saved me from going nuts because I thought I wouldn't be able to program again when my hands really played up. So thank you in advance! What I would really like to be able to do is to change the command list based on the major mode. In short, when I am in C++ mode I want a different set of keywords than when I am in lisp mode. I know how to modify the variable from inside the mode hook, but it doesn't seem to work. Has anybody tried this or managed to make it work? Thank you, Colin Rhodes |
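[Editorial note: what Colin describes might look like the sketch below. This is an illustration only; `vr-voice-command-list` is assumed to be the relevant vr.el variable, and whether vr.el re-reads it after a mode hook runs is exactly the open question in this thread. The command names are made up.]

```
;; Hypothetical sketch of a mode-specific command list, set
;; buffer-locally from a major-mode hook.
(add-hook 'c++-mode-hook
          (lambda ()
            (set (make-local-variable 'vr-voice-command-list)
                 '(("compile file" . compile)
                   ("next error" . next-error)))))
```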
From: Patrik J. <pa...@uc...> - 2002-02-27 00:47:14
|
>I see. > >I had misunderstood the purpose of the buffer change notification gizmo. I >thought it was meant to keep Emacs' and DNS' versions of the buffer in sync so >that you could support "scratch that/correct that". But from what you say, it >sounds like all vr.el is interested in is keeping the visible regions in >sync so that we can support "Select XYZ". > >This raises a new question for me. Why do we bother with a complex (and >probably very fragile) buffer change notification approach instead of sending >the whole visible region at the beginning of every utterance? Unless you >have a >very slow connection, sending the whole visible region (note: just the visible >region, not the whole buffer) can't be that expensive. That's a VERY good question! This is just the way VR mode is made, and I don't know why Barry made that decision. It might have been that he simply wasn't aware of the other possibilities. I haven't thought about it; my changes to VR mode have been evolutionary, not revolutionary... :-) As for the communication overhead, I don't really know. I HAVE noticed a significant delay when doing the full buffer resynchronization over a tunneled secure shell link to external hosts, but I don't remember how large those buffers were. I also have never tried running it over, for example, a DSL link, which I suspect will be significantly slower. In my mind, the responsiveness of the recognition is one of the most important factors affecting how usable it is, so I'd prefer erring on the side of caution. However, this is easily testable! Simply add (setq vr-resynchronize t) at the beginning of get-buffer-info, and a full buffer resynchronization will be performed for every utterance. If you try this on a buffer which just about fills your window you should get a pretty good estimate of the communications delay. 
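[Editorial note: the quick experiment suggested above could be wired up non-invasively with advice rather than by editing vr.el. A sketch only: `vr-resynchronize` is the variable named in the message, but the exact Lisp-side name of the get-buffer-info handler is assumed here and may differ in your copy of vr.el.]

```
;; Force a full buffer resynchronization on every utterance, purely
;; to measure the round-trip cost over the connection.
;; `vr-get-buffer-info' is an assumed function name; adjust to match
;; the handler actually defined in your vr.el.
(defadvice vr-get-buffer-info (before vr-force-full-resync activate)
  "Set `vr-resynchronize' so every utterance triggers a full resync."
  (setq vr-resynchronize t))
```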
/Patrik ============================================================ Patrik Jonsson (831) 459-3809 Department of Astronomy & Astrophysics University of California, Santa Cruz, CA 95064 This message has been written using a voice recognition system. Words that don't make sense are not the fault of the author... |
From: Patrik J. <pa...@uc...> - 2002-02-27 00:25:54
|
Hey Alain, At 04:32 PM 2/25/2002 -0500, you wrote: >Hi Patrik, > >How is the situation with your unstable windoze? I'm recovering. At least NaturallySpeaking, Emacs and VR mode are working again. I had to wipe and reformat my drive, so now I'm slowly reinstalling all my software... at least I had no problems backing up all my data before I did it. >I've been cranking away on a version of vr.el that will work with VCode, >and I'm deeply puzzled by some parts of 'vr-cmd-make-changes. >In particular this: > > ;; send back a "delete command", since when > ;; command is executed it will send the insertion. > (let ((cmd (format "change-text \"%s\" %d %d %d %d %s" > (buffer-name) (1- (point)) (point) > 1 (buffer-modified-tick) "")) > >It seems that whenever VR.exe tells emacs to make a change to the >buffer, Emacs reports back a change that is the reverse of the change >requested by VR.exe. OK, this is a hack. It used to be that VR mode would directly insert the characters NaturallySpeaking asked it to. This had some drawbacks when the Emacs key binding was different from just self inserting the same character. For example, in LaTeX mode the double quote is bound to two single quotes, and this is the behavior that you would expect when dictating as well. If you study the code, you'll find that this part is only executed when the command is not "self-insert-command", which means that the character that will be inserted may be different from what NaturallySpeaking thinks. In that case, to remain in sync, we must tell NaturallySpeaking that the character it thinks is in the buffer has been deleted, and instead the character of the key binding is inserted. This is done by explicitly sending the delete command, and then letting the normal change detection report what's actually been inserted. I hope that explains things. >My best guess at why this is done is that unless Emacs does that, VR.exe >will end up assuming that everything gets typed in twice. 
That's >because: > >i) VR.exe is based on VDct which assumes that everything the user says >is typed verbatim > >ii) but Emacs actually does its own thing with the text to be inserted >(for example, auto-fill) and then reports the changes it actually did to >VR.exe. To VR.exe, those changes appear exactly as though they were >typed by the user. > >But that doesn't seem to make sense either because the only reason you >might care about VR.exe's perception of what got typed in response to an >utterance, is if you wanted to support correction of selected text. But >there seems to be no way that this scheme can support such correction, >because Emacs is telling VR.exe that the text it (VR.exe) thought got >inserted as a result of an utterance is immediately erased by the user. You're right: you're giving up the "scratch that/correct that" ability in order for the buffers to remain synchronized. The problem is that if you don't do this, the buffers will be out of sync, and using select and say will produce incorrect results. >Any idea what the purpose of this hack is? > >There is a similar one in ...? :-) I assume you meant the "read-only buffer hack". And its purpose is the same, to keep the buffers in sync in order for selection by voice to produce the desired results. By the way, regarding your problem with compiling hook.dll, that code has never been changed so you can just copy it from one of the distributed versions. I don't know if you'll still have problems compiling vr.exe, but it might help. I'll go back to your old e-mail and see if there were any questions I haven't answered. /Patrik ============================================================ Patrik Jonsson (831) 459-3809 Department of Astronomy & Astrophysics University of California, Santa Cruz, CA 95064 This message has been written using a voice recognition system. Words that don't make sense are not the fault of the author... 
_______________________________________________ Emacs-vr-mode-users mailing list Ema...@li... https://lists.sourceforge.net/lists/listinfo/emacs-vr-mode-users |
From: Patrik J. <pa...@uc...> - 2002-02-07 23:14:33
|
At 03:51 PM 2/7/2002 -0500, you wrote: >Here are some other questions. > >1. What is the purpose of the message 'frame-activated? Beats me. I don't think that VR mode deactivates the buffer when the frame is not in focus, so it's always seemed redundant to me. There may be some issue with the way that VR mode tracks changes to the buffer, using an overlay. This is probably not the best way to do it; there are other ways of having Emacs report changes (before- and after-change-functions) that Barry was not aware of when he designed VR mode. Several people have suggested to me this is a better way of doing it, and it is certainly much simpler than having to manage the "VR-overlay". I want to make this change, but it's kind of a big one and I'm not sure it's worth the effort. >2. Is there a regression test suite for VR mode? Surely you are joking! :-) No, there are no such tests. Most of the time it's readily apparent when something breaks; VR mode is a fragile piece of software that seems to be living on the edge all the time... :-) If you want to play with VR.el, would you like to become a developer on the SourceForge project? That way, you can just make a branch in the CVS and it would be easier to keep track of what's going on. Let me know, and I'll set it up. /Patrik >If I start modifying vr.el and parametrising it so it can be configured for >VoiceCode, I would like to have some assurance that I am not breaking >anything in >terms of connections to VR.exe. Is there a series of standardised tests I >could do >on it that would give me that assurance? There is a regression test for >VoiceCode, >so I can test that vr.el is well behaved w.r.t. the VoiceCode server. > >Well, that's it for today. > >Alain ============================================================ Patrik Jonsson (831) 459-3809 Department of Astronomy & Astrophysics University of California, Santa Cruz, CA 95064 This message has been written using a voice recognition system. 
Words that don't make sense are not the fault of the author... |
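[Editorial note: the change-hook approach Patrik mentions as an alternative to the VR-overlay could look roughly like the sketch below. Illustration only: `vr-report-change` is a hypothetical stand-in for whatever code would format and send the change-text message to VR.exe; `after-change-functions` is the standard Emacs hook.]

```
;; Report edits via after-change-functions instead of an overlay.
;; `vr-report-change' is hypothetical; it stands for the code that
;; would send a change-text message to VR.exe.
(defun my-vr-after-change (beg end len)
  "Report the change between BEG and END (replacing LEN chars)."
  (vr-report-change (buffer-name) beg end len
                    (buffer-substring-no-properties beg end)))

;; Install buffer-locally in voice-enabled buffers.
(add-hook 'after-change-functions #'my-vr-after-change nil t)
```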
From: Patrik J. <pa...@uc...> - 2002-02-07 23:14:13
|
Hi again, I'll try to describe the startup sequence for you. It's different depending upon whether VR mode starts VR.exe itself, or whether VR-host is set. For a remote connection, it's easy. vr-host and vr-port are set, so Emacs simply opens two network streams to the specified host/port. The first stream is for commands going from Emacs to vr.exe, the second one for the other way. The second stream is attached to vr-output-filter in vr.el. Vr.exe sends "connected" to Emacs, which responds with an "initialize" command, telling vr.exe what the window title and class are. For a local connection, with vr-host nil, things are slightly more convoluted since the process has to be started. Emacs starts the vr.exe process, giving it the "-port" option if vr-port is set. It attaches standard output from the process to vr-output-filter. Vr.exe starts up by saying "listening port", which goes *through standard output* to vr-output-filter, which executes vr-cmd-listening, which in turn causes Emacs to open the network streams to 127.0.0.1, on the port given by the listening command. Once the streams are opened, standard output is detached from VR-output-filter. From there on, things proceed as in the remote case. This description is mostly based on my understanding of the Lisp code; I really haven't had much reason to dig into the C++ I/O code, so I could be wrong. Hope that helps, /Patrik :11 PM 2/6/2002 -0500, you wrote: >Patrik Jonsson wrote: > > > Hi Alain, > > > > Yes, I saw the messages. I was just too busy to reply until now. > >No problem. > > > I believe VR.exe differentiates the two streams just based on the > > fact that one is opened after the other, but I really have never bothered > > to find out. 
> >Actually, after a closer look at the C++ code, it looks like: > >- Emacs opens a first connection on some port X >- VR.exe acknowledges it with message "listening portNum" >- Emacs then opens a second connection on the port number it received from >VR.exe >(portNum) > >That still seems potentially dangerous. For example, suppose you start two >instances of emacs at the same time (say by mistake you clicked twice on >the Emacs >icon in your Win2K toolbar). Suppose that instance A sends its first >connection >before instance B, but that for some reason, instance B beats instance A >and sends >its second connection before A does. Then VR.exe will assume that B's >second >connection actually belongs to A. > >It could be that VR.exe acknowledges each initial connection with a >different port >number, in which case there is no problem (because A and B each open their >follow-up >connection on different ports). But in that case, you can't tunnel through SSH >because you don't know ahead of time what port numbers VR.exe will assign >to the >various instances. > >In VoiceCode what I do is that an editor instance sends a request on a >port (called >the VC_LISTEN port). VoiceCode replies with a randomly generated unique >ID. Then >the editor instance opens a connection on a second port (the VC_TALK >port), and >sends the unique ID it received from VoiceCode. That way, VoiceCode knows >for sure >that this second connection belongs to whatever instance started the first >one. > > > Having two ports seems to make sense, the only thing is that > > you have to port-forward two ports instead of one to get through the > > firewall... > >Similar situation to FTP where you have data and command ports to forward. > > > I'm interested in incorporating whatever improvements you can think of into > > VR Mode. > >That's great! Hopefully I won't have to do too much surgery on it to get >it to work >for VoiceCode. > >Is it all right if I send you questions? 
The code for VR-Mode has become >much bigger >than it used to be and I find it difficult to wrap my brain around it. > >Here's my first question. > >Could you describe to me in a few words the handshaking protocol between >Emacs and >the VR server? From what I saw, it looks something like: > >- When a new buffer is started with minor mode vr-mode, Emacs connects to >VR.exe on >the remote machine. > >- VR.exe on the remote machine replies with "listening portNum", where >portNum is >the port number it expects the second connection on. Not sure if portNum >is the >same for all instances or if VR.exe assigns a different port on the fly >for each >instance. > >- Emacs opens a connection to the LOCAL HOST on portNum. So it seems like >there are >two instances of VR.exe running. One on the local host, one on the remote >host. Not >sure why that is. > > > > Currently, I'm working on a redesign of the communication > > > protocol(it's in a branch on the CVS) in order to better handle > > else-mode. Basically, the problem occurs when the cursor is sitting in an > > else placeholder, which Emacs will delete as soon as it receives keyboard > > input, leading to loss of synchronization and funny spacing in the > > buffer. The way I was going to solve it was by sending change commands as > > soon as an utterance starts, deleting the "volatile" text from the > > NaturallySpeaking buffer, and then reinserting it when the utterance ends, > > if it's still there. Kind of clumsy, but it's the only way I could figure > > out that solves the two problems above. The price you pay is by having > > "scratch that" become invalid, but with a good undo command that shouldn't > > be too bad. > > > I don't know enough about how VoiceCoder is supposed to work to know if the > > above will be an issue for you as well... > >David Fox has designed and is implementing the utterance mapping >infrastructure, >and I don't have a good handle on what he plans to do. 
My limited >understanding of >it is that it will take into account all the changes that have happened in >Emacs as >a result of an utterance, including things like automatic formatting and >automatic >abbreviation substitutions. > > > (By the way, didn't you write an Emacs undo command? (utterance mode?) I > > meant to send a bug report about it hard-crashing my NT Emacs, but I > > don't remember if I actually did.) > >No. That might have been Nils Karslund. > >Cheers. > >Alain ============================================================ Patrik Jonsson (831) 459-3809 Department of Astronomy & Astrophysics University of California, Santa Cruz, CA 95064 This message has been written using a voice recognition system. Words that don't make sense are not the fault of the author... |
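[Editorial note: the remote two-stream setup Patrik describes earlier in this message can be sketched as below. Illustration only: it assumes the `vr-output-filter` name from the message, and the real `vr-connect` in vr.el differs in detail.]

```
;; Open the two network streams to a remote VR.exe: one carrying
;; commands from Emacs to VR.exe, one carrying traffic the other way.
(defun my-vr-connect-sketch (host port)
  "Connect to VR.exe on HOST:PORT and return the two streams."
  (let ((cmd-stream (open-network-stream "vr-cmd" nil host port))
        (out-stream (open-network-stream "vr-out" nil host port)))
    ;; Commands and replies from VR.exe arrive on the second stream.
    (set-process-filter out-stream #'vr-output-filter)
    (list cmd-stream out-stream)))
```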
From: Patrik J. <pa...@uc...> - 2002-02-07 23:13:51
|
Hi Alain, Yes, I saw the messages. I was just too busy to reply until now. I believe VR.exe differentiates the two streams just based on the fact that one is opened after the other, but I really have never bothered to find out. Having two ports seems to make sense, the only thing is that you have to port-forward two ports instead of one to get through the firewall... I'm interested in incorporating whatever improvements you can think of into VR Mode. Currently, I'm working on a redesign of the communication protocol (it's in a branch on the CVS) in order to better handle else-mode. Basically, the problem occurs when the cursor is sitting in an else placeholder, which Emacs will delete as soon as it receives keyboard input, leading to loss of synchronization and funny spacing in the buffer. The way I was going to solve it was by sending change commands as soon as an utterance starts, deleting the "volatile" text from the NaturallySpeaking buffer, and then reinserting it when the utterance ends, if it's still there. Kind of clumsy, but it's the only way I could figure out that solves the two problems above. The price you pay is by having "scratch that" become invalid, but with a good undo command that shouldn't be too bad. I don't know enough about how VoiceCoder is supposed to work to know if the above will be an issue for you as well... (By the way, didn't you write an Emacs undo command? (utterance mode?) I meant to send a bug report about it hard-crashing my NT Emacs, but I don't remember if I actually did.) /Patrik At 01:27 PM 2/6/2002 -0500, you wrote: >I just realised I sent those messages to VoiceCoder instead of to you >directly. In case you >missed them, I am sending an additional copy directly to you. > >Alain > >----------------------------------------------------------------------------- >Oops, nevermind. 
> >I noticed that you do invoke open-network-stream when *host* is not *nil*. > >But now I'm puzzled by something else. In *vr-connect*, you open two network >streams. Looks like one is for Emacs to send commands to VR.exe and receive >replies, the other is for VR.exe to send commands to Emacs and receive >replies. >But both network streams are opened on the same port! So how does VR.exe know > >which of the two connections is for what? In VoiceCode, I use different ports > >for the two connections. > >I'm reading the vr.el code carefully this morning and it looks like I should >be >able to use it for VoiceCode with only minor surgery. > >Basically, I'm planning to make all the message interpretation part of the >system configurable. This is necessary because the messages that VoiceCode >needs to send to Emacs seem to be a superset of what vr-mode currently >handles. >Also, VoiceCode uses a more flexible XML-based messaging protocol. > >I think this change might make vr-mode more flexible, and able to connect >with >other specialised voice addins besides VR.exe and VoiceCode. If you think >this >is worth including in the main version of vr-mode, I can send you a more >detailed plan of what I want to do once I have it, and you can comment on it. > >Thx > >Alain > >Alain Désilets wrote: > > > I'm looking at vr-mode.el and am a bit puzzled by something. > > > > The readme file says that vr-mode can be used over a network. But in >vr-mode > > function, you connect to vr.exe by starting a process on the local machine. > > > > > How does that end up connecting over the network? Is it just that vr.exe, > > when invoked with a host name will connect to an instance of vr.exe that's > > listening on that host? > > > > If so, wouldn't it make more sense for Emacs to connect directly to the > > remote vr.exe by invoking network-connection function? > > > > Thx > > > > Alain > > > > shane_3m wrote: > > > > > that would be me... 
:-) > > > > > > And yes, it is open source, at emacs-vr-mode.sourceForge.net > > > > > > Take what you can! :-) > > > > > > /Patrik > > > > > > --- In VoiceCoder@y..., Alain Désilets <alain.desilets@n...> wrote: > > > > I am just about to start working on connecting VoiceCode to Emacs > > > and > > > > thought I would use vr-mode.el as a starting point. > > > > > > > > Was VR-mode made OpenSource in the end? Who is maintaining it these > > > > days? > > > > > > > > Thanks. > > > > > > > > Alain Désilets > > > > > > Community email addresses: > > > Post message: Voi...@on... > > > Subscribe: Voi...@on... > > > Unsubscribe: Voi...@on... > > > List owner: Voi...@on... > > > > > > Shortcut URL to this page: > > > http://www.onelist.com/community/VoiceCoder > > > > > > Your use of Yahoo! Groups is subject to http://docs.yahoo.com/info/terms/ > > -- > > Alain Désilets > > > > Agent de recherche > > Conseil National de Recherches du Canada > > > > Research Officer > > National Research Council of Canada ============================================================ Patrik Jonsson (831) 459-3809 Department of Astronomy & Astrophysics University of California, Santa Cruz, CA 95064 This message has been written using a voice recognition system. Words that don't make sense are not the fault of the author... |
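[Editorial note: the stream-pairing problem discussed in this thread -- knowing which second connection belongs to which editor instance -- is what VoiceCode's unique-ID handshake solves. A rough Emacs Lisp sketch of that scheme follows; the port numbers, process names, and wire format are made up for illustration, and only the overall shape (ID on the first connection, presented on the second) comes from Alain's description.]

```
;; Sketch of VoiceCode's two-port handshake: the editor asks for an
;; ID on the VC_LISTEN port, then presents that ID on the VC_TALK
;; port so the server can pair the two streams with certainty.
(defun my-vc-connect-sketch (host listen-port talk-port)
  "Connect to a VoiceCode-style server on HOST and pair the streams."
  (let ((listen (open-network-stream "vc-listen" nil host listen-port))
        (id nil))
    ;; The server replies with a randomly generated unique ID on the
    ;; first connection.
    (set-process-filter listen (lambda (_proc reply) (setq id reply)))
    (accept-process-output listen 5)
    ;; Present the ID on the second connection so the server knows
    ;; both streams belong to this editor instance.
    (let ((talk (open-network-stream "vc-talk" nil host talk-port)))
      (process-send-string talk id)
      (list listen talk))))
```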