Are there three types of input that the AIK will be required to recognize?
1. Keyboard
2. Voice
3. Digital
It's an interesting question. Let's look at each one:
1) keyboard - by this, I think you mean natural language typed by a human at a console. For this, absolutely! This was the first goal of the aikernel. But it shouldn't be limited to just the keyboard. I imagine that it could handle natural language from instant messaging, e-mail, and just about anything else you can think of.
2) voice - yes, if it converts to natural language. I've done a lot of work around speech recognition, but, given the scope of such an undertaking, I think we'll just focus on voice that has already been translated to machine-readable text (i.e., #1).
3) digital - now you are on to something. I intentionally changed the input event in 1.3.1 from just a text wrapper (like #1) to actually carry a byte-array payload. This way, as we enhance the recognition systems to be more than language-specific, we can do intelligent analysis on any kind of incoming data. If you take a look, you'll see that there is a channel that you could use to shunt different data types to different processing algorithms.
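A minimal sketch of that design, as I understand it: an event carrying a raw byte-array payload plus a channel identifier, with a dispatcher that routes each channel to its own recognition algorithm. The names here (`InputEvent`, `register`, `dispatch`) are illustrative only, not the actual aikernel classes:

```python
from dataclasses import dataclass

# Hypothetical sketch: an input event that carries raw bytes plus a
# channel tag, so a dispatcher can shunt different data types to
# different processing algorithms. Not the real aikernel API.

@dataclass
class InputEvent:
    channel: str    # e.g. "text", "audio", "image"
    payload: bytes  # raw data; text is just one encoding of it

handlers = {}

def register(channel, handler):
    """Associate a processing algorithm with a channel."""
    handlers[channel] = handler

def dispatch(event):
    """Route the event to the algorithm registered for its channel."""
    handler = handlers.get(event.channel)
    if handler is None:
        raise ValueError(f"no recognizer for channel {event.channel!r}")
    return handler(event.payload)

# A "text" channel that decodes the payload back to natural language.
register("text", lambda payload: payload.decode("utf-8"))
```

The point of the channel tag is that text input (#1 and #2 above) becomes just one registered handler among many, so new data types need only a new handler, not a new event type.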
Thanks for the question. Does this help?
Does this also hold true for output?