From: Mailing l. u. f. U. C. a. U. <kal...@li...> - 2013-06-03 20:43:19
Assuming those are relative numbers, they're probably within the margin of error. See if it's the same on other test sets.

Because various algorithms call rand(), and different machines implement this differently, results aren't fully reproducible. But to check that it's not some code change that hurts the results, you could try checking out a copy with the same revision number as the RESULTS file, running again, and seeing what results you get. If there is a difference, I'd like to know.

It's also possible some files were missing in your WSJ distribution -- let me know the data count reported in one of the */log/update.log files and I'll compare with a local copy.

Dan

On Mon, Jun 3, 2013 at 4:34 PM, Mailing list used for User Communication and Updates <kal...@li...> wrote:
> Hi,
>
> I'm following the WSJ s5 recipe, and I wasn't able to reproduce the
> results stated in the RESULTS file. Are we supposed to get exactly the
> same numbers with the (hyper-)parameters given in the recipe?
>
> Just to give a sense of how far off I am: my monophone models perform 10%
> worse than the reported results on eval92, and the resulting triphone
> models perform 4% worse.
>
> Thanks,
> Hao
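[Editor's note: the reproducibility point above can be illustrated with a small sketch. This is in Python rather than Kaldi's C++, purely for illustration, and `noisy_score` is a made-up stand-in for any algorithm whose output depends on random draws: with the same seed and the same generator implementation, runs are identical; with a different seed (or a differently implemented rand(), as across machines and libc versions), the numbers drift slightly -- the "margin of error" described above.]

```python
import random

def noisy_score(seed):
    """Stand-in for an algorithm whose result depends on random draws
    (e.g. random tie-breaking or initialization)."""
    rng = random.Random(seed)  # independent generator, like a per-run rand() stream
    return round(sum(rng.random() for _ in range(1000)), 3)

run_a = noisy_score(0)
run_b = noisy_score(0)
assert run_a == run_b  # same seed, same implementation -> identical result

# A different seed (analogous to a different rand() implementation on
# another machine) gives a slightly different number.
run_c = noisy_score(1)
print(run_a, run_c)
```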