From: <Dan...@pa...> - 2015-01-13 23:37:55
Hello Nickolay,

Thanks very much for your thoughtful answer. My context was that I wondered whether there might occasionally be an advantage to mapping words to word phrases in G rather than assigning probabilities to words. I assumed that someone had tried it and it was known not to work well, since no one seemed to do it. I couldn't find a record of anyone trying it, so I thought I'd ask.

Dan

-----Original Message-----
From: Nickolay Shmyrev [mailto:nsh...@gm...]
Sent: Tuesday, January 13, 2015 3:01 PM
To: Davies, Dan <Dan...@pa...>
Cc: Kal...@li...
Subject: Re: [Kaldi-developers] Kaldi comparison with Hydra?

> On 13 Jan 2015, at 3:35, Dan...@pa... wrote:
>
> I apologize in advance for asking a newbie question. I’ve been googling around and haven’t seen an obvious answer.
>
> In the same sense that the Lexicon maps phonemes to words, what happens if the Language Model is set up as a Finite State Transducer instead of a Finite State Acceptor and maps words to word phrases? Most of the phrases would be very short (1-2 words), but when used in constrained applications, there might be an interesting number of longer phrases. For example, “How may I help you today?” and “We’ll be back after these messages.”

Hello Dan,

In the ASR task we search for the most likely output label sequence given the input feature sequence. If your transducer has phrases as output labels, you’ll have those phrases as output; that should be no problem. The phrases might differ from the actual words: for example, the two words «back after» recognized by the engine might produce the whole phrase «We’ll be back after these messages.» as output. That is a sort of semantic recognition instead of just recognition of a word sequence.

If you are just interested in using grammars, you can use them in acceptor form.

Maybe you could provide some more context so we can clarify.
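[Editor's note: for readers curious what such a word-to-phrase G might look like concretely, below is a minimal sketch using the OpenFst C++ API, which Kaldi builds on. The integer label IDs (1 = "back", 2 = "after" on the input side, 3 = a whole-phrase symbol on the output side) are hypothetical; in a real setup they would come from words.txt and an extended output symbol table. The transducer consumes the word sequence «back after» and emits a single phrase label, exactly the behavior Nickolay describes.]

```cpp
// Sketch of a G transducer whose output labels are phrases rather than words.
// Assumed (hypothetical) label IDs: 1 = "back", 2 = "after",
// 3 = "<we'll-be-back-after-these-messages>". Real IDs come from symbol tables.
// Build with: g++ -std=c++17 g_phrases.cc -lfst
#include <fst/fstlib.h>

int main() {
  using fst::StdArc;
  using fst::StdVectorFst;
  using fst::TropicalWeight;

  StdVectorFst g;
  const auto s0 = g.AddState();  // start (and loop) state
  const auto s1 = g.AddState();
  g.SetStart(s0);

  // Arc format: StdArc(input label, output label, weight, next state).
  // Consume "back" while emitting epsilon (label 0)...
  g.AddArc(s0, StdArc(1, 0, TropicalWeight::One(), s1));
  // ...then consume "after" and emit the whole-phrase output label.
  g.AddArc(s1, StdArc(2, 3, TropicalWeight::One(), s0));

  g.SetFinal(s0, TropicalWeight::One());
  g.Write("G_phrases.fst");  // binary FST; inspect with fstprint
  return 0;
}
```

[Since output labels propagate through composition when G is composed into the decoding graph, the decoder's best path would then carry these phrase labels instead of word labels, giving the "semantic recognition" behavior described above.]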