This list is closed, nobody may subscribe to it.
Year | Jan | Feb | Mar | Apr | May | Jun | Jul | Aug | Sep | Oct | Nov | Dec
---|---|---|---|---|---|---|---|---|---|---|---|---
2002 | 29 | 11 | 134 | 55 | 91 | 77 | 46 | 147 | 96 | 103 | 130 | 103
2003 | 207 | 132 | 87 | 183 | 89 | 44 | 41 | 22 | 39 | 37 | 47 | 8
2004 | 22 | 54 | 52 | 7 | 47 | 19 | | 43 | 7 | 5 | |
2005 | | | | | | 3 | | | | | |
2006 | | | | | | | 1 | | | | |
2007 | | | 1 | | | | | | 1 | | |
2008 | | | 6 | 20 | | 4 | 2 | 1 | | 17 | 11 | 14
2009 | 3 | 56 | 19 | 81 | 24 | | | 5 | 57 | 24 | 34 | 56
2010 | 15 | 42 | 79 | 45 | 27 | 41 | 69 | 14 | 18 | 19 | 33 | 25
2011 | 4 | 22 | 53 | 47 | 29 | 18 | 18 | 2 | 6 | 32 | 18 |
2012 | 8 | 12 | 62 | 37 | 22 | 8 | 3 | 3 | 2 | 2 | 28 | 19
2013 | 28 | 24 | 8 | 17 | 6 | 14 | 6 | 5 | 17 | 12 | 24 | 16
2014 | 13 | 9 | 2 | 3 | | 2 | 6 | 6 | 28 | 4 | 1 | 4
2015 | 5 | 11 | 6 | 10 | 9 | 13 | 5 | 10 | 13 | 4 | 14 | 14
2016 | 32 | 33 | 37 | 6 | 7 | 11 | 3 | 2 | 9 | 11 | 16 | 3
2017 | 3 | 15 | 41 | 10 | 8 | 4 | | 63 | | 7 | 2 | 3
2018 | | | 1 | | 3 | 15 | | | 2 | 2 | 1 |
2019 | | | | | 2 | | 1 | 6 | 3 | 2 | |
2020 | | 3 | | | | | | | | 1 | |
2021 | | | | | | | | | | | 2 |
2022 | | | | | | | | | | 1 | |
From: luca m. <luc...@gm...> - 2017-10-16 10:17:40
|
Ok, thank you. On 16 Oct 2017 12:14, "Nickolay Shmyrev" <nsh...@gm...> wrote: > Sphinx4 does not work on Android, it is too heavy for mobiles. You need to > use pocketsphinx. > > The tutorial is here: > > http://cmusphinx.github.io/wiki/tutorialandroid > > > On 16 Oct 2017, at 11:15, luca martino <luc...@gm...> > wrote: > > > > I installed the sphinx4 libraries in Android Studio following the official CMU > guide (https://cmusphinx.github.io/wiki/tutorialsphinx4/) and the build > succeeded; Android Studio reports no problems in the code but, at launch, > the program crashes because it cannot find the models, contained in > sphinx4-data, that are passed to the Configuration object, and I don't know why. > Please help me. > > <Cattura.PNG><Cattura2.PNG><Cattura3.PNG><Cattura4.PNG><Cattura5.PNG><Cattura6.PNG><Cattura7.PNG> > ------------------------------------------------------------------------------ > > Check out the vibrant tech community on one of the world's most > > engaging tech sites, Slashdot.org! http://sdm.link/slashdot > _______________________________________________ > > Cmusphinx-devel mailing list > > Cmu...@li... > > https://lists.sourceforge.net/lists/listinfo/cmusphinx-devel > > |
From: Nickolay S. <nsh...@gm...> - 2017-10-16 10:14:29
|
Sphinx4 does not work on Android, it is too heavy for mobiles. You need to use pocketsphinx. The tutorial is here: http://cmusphinx.github.io/wiki/tutorialandroid > On 16 Oct 2017, at 11:15, luca martino <luc...@gm...> wrote: > > I installed the sphinx4 libraries in Android Studio following the official CMU guide (https://cmusphinx.github.io/wiki/tutorialsphinx4/) and the build succeeded; Android Studio reports no problems in the code but, at launch, the program crashes because it cannot find the models, contained in sphinx4-data, that are passed to the Configuration object, and I don't know why. Please help me. > <Cattura.PNG><Cattura2.PNG><Cattura3.PNG><Cattura4.PNG><Cattura5.PNG><Cattura6.PNG><Cattura7.PNG> |
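For reference, initializing pocketsphinx on Android follows the pattern described in the tutorial linked above. The sketch below is hedged: the model file names (`en-us-ptm`, `cmudict-en-us.dict`), the `assetsDir` variable, and the keyphrase are illustrative assumptions modeled on the pocketsphinx-android demo, not details from this thread.

```java
import java.io.File;
import java.io.IOException;

import edu.cmu.pocketsphinx.SpeechRecognizer;
import edu.cmu.pocketsphinx.SpeechRecognizerSetup;

// Sketch: build a recognizer from model files already synced to device storage
// (in the demo, assetsDir comes from new Assets(context).syncAssets()).
public class RecognizerFactory {
    public static SpeechRecognizer create(File assetsDir) throws IOException {
        SpeechRecognizer recognizer = SpeechRecognizerSetup.defaultSetup()
                // Acoustic model directory (assumed asset name)
                .setAcousticModel(new File(assetsDir, "en-us-ptm"))
                // Pronunciation dictionary (assumed asset name)
                .setDictionary(new File(assetsDir, "cmudict-en-us.dict"))
                .getRecognizer();
        // Register a simple keyphrase search; activate it later with
        // recognizer.startListening("wakeup")
        recognizer.addKeyphraseSearch("wakeup", "oh mighty computer");
        return recognizer;
    }
}
```

Unlike sphinx4, pocketsphinx loads models from plain files on the device, which is why the assets must be synced to storage before the recognizer is created.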
From: luca m. <luc...@gm...> - 2017-10-16 08:15:12
|
I installed the sphinx4 libraries in Android Studio following the official CMU guide (https://cmusphinx.github.io/wiki/tutorialsphinx4/) and the build succeeded; Android Studio reports no problems in the code but, at launch, the program crashes because it cannot find the models, contained in sphinx4-data, that are passed to the Configuration object, and I don't know why. Please help me. |
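For comparison, the desktop sphinx4 setup from the tutorial the poster followed loads its models from the sphinx4-data jar through `resource:` URLs, so a crash about missing models usually means that jar is absent from the runtime classpath. A minimal sketch, using the resource paths documented in the sphinx4 tutorial (nothing here is specific to this poster's project):

```java
import edu.cmu.sphinx.api.Configuration;
import edu.cmu.sphinx.api.StreamSpeechRecognizer;

public class Sphinx4Setup {
    public static void main(String[] args) throws Exception {
        Configuration configuration = new Configuration();
        // These resource: paths resolve inside the sphinx4-data jar, so that jar
        // must be on the classpath at runtime, not only at compile time.
        configuration.setAcousticModelPath("resource:/edu/cmu/sphinx/models/en-us/en-us");
        configuration.setDictionaryPath("resource:/edu/cmu/sphinx/models/en-us/cmudict-en-us.dict");
        configuration.setLanguageModelPath("resource:/edu/cmu/sphinx/models/en-us/en-us.lm.bin");

        StreamSpeechRecognizer recognizer = new StreamSpeechRecognizer(configuration);
        // recognizer.startRecognition(someInputStream); ... recognizer.stopRecognition();
    }
}
```

If any of the three paths cannot be resolved, sphinx4 fails at Configuration load time, which matches the crash described above.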
From: Farzin D. <din...@gm...> - 2017-10-16 00:08:44
|
Hello cmusphinx, I would like to build and target PocketSphinx for an ST Micro 32-bit part such as the STM32F413ZH. Is that possible? What are the steps and which packages do I need? I would really appreciate any support. Thank you. |
From: PANKAJ B. <pan...@du...> - 2017-10-15 18:38:56
|
Hey everyone! This year, CMUSphinx is participating in Google Code-in! It is much like Google Summer of Code, but for younger students and with much less burden on the mentor. You can learn more about it here: https://codein.withgoogle.com So, if you are interested in helping CMUSphinx get through the selection procedure, you will need to follow one simple instruction: go to https://developers.google.com/open-source/gci/resources/example-tasks and check out the example tasks. Then help us create similar tasks by adding a new row in this document: https://docs.google.com/spreadsheets/d/1yK_isyTqbinz4tGryx86tsW9yP4ojutUqsiZJG88TmI/edit?usp=sharing Each row will be treated as a subtask. Just complete the columns mentioned and THAT'S IT! Some small but long overdue tasks could be completed this way. And you could get to mentor these students on behalf of CMUSphinx if we get selected! The last date is 24 October, so we need to hurry! Your contribution could help CMUSphinx get selected. Thanks |
From: Nickolay S. <nsh...@gm...> - 2017-08-31 14:26:01
|
Hello Scott You can simply use ps_start_utt/ps_end_utt to restart the search. You can also cut the required piece of audio and submit it for processing. I'm not sure what the problem is. > On 29 Aug 2017, at 2:42, Scott Guthery <sb...@ac...> wrote: > > I'd like to apply an n-gram search over a particular sequence of frames, say from frame A to frame B. > > Is there a recognized/idiomatic way to accomplish this? > > If not, is one way to rewind the frames, advance frames ("by hand") until we get to A and then abort the search (again "by hand") when we get to B? > > Thanks in advance for any insight. > > Cheers, Scott > > |
From: Nickolay S. <nsh...@gm...> - 2017-08-30 21:41:50
|
Hello Stefan Try the Microsoft Speech API, it should be pretty easy to use. > On 30 Aug 2017, at 22:44, Stefan Reich via Cmusphinx-devel <cmu...@li...> wrote: > > Hi, > > I give up for now and declare official failure on compiling PS for Windows. I just can't do it. > > It absolutely baffles me why no directly usable binaries are provided. It's like you don't want this product used. > > Either way, that is the result: I simply can't use it. > > Not sure what I will replace it with. > > Greetings, > Stefan |
From: Nickolay S. <nsh...@gm...> - 2017-08-30 21:36:30
|
Hello Scott > On 30 Aug 2017, at 23:48, Scott Guthery <sb...@ac...> wrote: > > Sphinxers ... > > A couple questions, if I may ... > > 1) Would it make any sense to run phone_loop in parallel with allphones? I see that it is used by N-Gram but not FSG searches. No, the phone loop is helpful for improving the speed of decoding with context-dependent models. When you proceed from one word to another you need to guess which phones to activate next, and this is what the phone loop does. If you have a small vocabulary, as in FSG or allphone search, you do not need the phone loop. > > 2) Speaking of sense and nonsense, would it make any sense to run phone_loop by itself; i.e. the DEFAULT search? If so, how does one set parameters so that this happens? Just setting pl_window yields “No search model ...” grumps. I'm sorry, I do not see much sense in doing that. |
From: Scott G. <sb...@ac...> - 2017-08-30 21:11:43
|
Sphinxers ... A couple questions, if I may ... 1) Would it make any sense to run phone_loop in parallel with allphones? I see that it is used by N-Gram but not FSG searches. 2) Speaking of sense and nonsense, would it make any sense to run phone_loop by itself; i.e. the DEFAULT search? If so, how does one set parameters so that this happens? Just setting pl_window yields “No search model ...” grumps. Thanks for any insight. Cheers, Scott P.S. In re Windows builds, I had no trouble at all. I just clicked on the sln files and went for a cup of coffee. When I came back all the examples ran just fine. My compliments to whomsoever set these up. |
From: Stefan R. <ste...@go...> - 2017-08-30 19:44:33
|
Hi, I give up for now and declare official failure on compiling PS for Windows. I just can't do it. It absolutely baffles me why no directly usable binaries are provided. It's like you don't want this product used. Either way, that is the result: I simply can't use it. Not sure what I will replace it with. Greetings, Stefan |
From: Nickolay S. <nsh...@gm...> - 2017-08-30 15:59:36
|
Dear Hendrik There is no exact information on the source of this model; it was trained on a variety of data sources, mostly clean but some noisy/reverberated. The total amount of data is about 1000 hours. > On 29 Aug 2017, at 18:00, Evandro Gouvea <eg...@gm...> wrote: > > Folks, > Message bounced, sent by non-subscriber. Please Cc sender. > > --Evandro > > ---------- Forwarded message ---------- > From: Hendrik Barfuss <hen...@fa...> > To: cmu...@li... > Cc: > Bcc: > Date: Tue, 29 Aug 2017 16:09:25 +0200 > Subject: Question regarding US English acoustic model > Dear cmusphinx team, > > would you be able to share some information on the US English acoustic model (cmusphinx-en-us-5.2.tar.gz) which you offer in your download section? In particular, I would be interested in whether it was trained on clean speech or noisy/reverberated speech and how many utterances you used for the training. > I would really appreciate your help. Thank you very much in advance! > > Kind regards, > Hendrik > -- > Hendrik Barfuss, M.Sc. > > Chair of Multimedia Communications and Signal Processing > Friedrich-Alexander University Erlangen-Nürnberg > Wetterkreuz 15, 91058 Erlangen, Germany > > Phone: +49 9131 85 25489 > > EMail: hen...@fa... |
From: Evandro G. <eg...@gm...> - 2017-08-29 15:00:32
|
Folks, Message bounced, sent by non-subscriber. Please Cc sender. --Evandro ---------- Forwarded message ---------- From: Hendrik Barfuss <hen...@fa...> To: cmu...@li... Cc: Bcc: Date: Tue, 29 Aug 2017 16:09:25 +0200 Subject: Question regarding US English acoustic model Dear cmusphinx team, would you be able to share some information on the US English acoustic model (cmusphinx-en-us-5.2.tar.gz <https://sourceforge.net/projects/cmusphinx/files/Acoustic%20and%20Language%20Models/US%20English/cmusphinx-en-us-5.2.tar.gz/download>) which you offer in your download section? In particular, I would be interested in whether it was trained on clean speech or noisy/reverberated speech and how many utterances you used for the training. I would really appreciate your help. Thank you very much in advance! Kind regards, Hendrik -- Hendrik Barfuss, M.Sc. Chair of Multimedia Communications and Signal Processing Friedrich-Alexander University Erlangen-Nürnberg Wetterkreuz 15, 91058 Erlangen, Germany Phone: +49 9131 85 25489 EMail: hen...@fa... |
From: Scott G. <sb...@ac...> - 2017-08-28 23:42:22
|
I'd like to apply an n-gram search over a particular sequence of frames, say from frame A to frame B. Is there a recognized/idiomatic way to accomplish this? If not, is one way to rewind the frames, advance frames ("by hand") until we get to A and then abort the search (again "by hand") when we get to B. Thanks in advance for any insight. Cheers, Scott |
From: PANKAJ B. <pan...@du...> - 2017-08-27 13:03:23
|
> I think the major thing remaining is actually to import the implementation into the ROS main distribution; I would start with updating... My co-mentor, Sarah Elliott, is in talks with Mike Ferguson (the owner of the official pocketsphinx repository in ROS) to look into this and maybe point the package's source to my repo. It's going to take a week as she is currently out of station. |
From: Nickolay S. <nsh...@gm...> - 2017-08-24 17:02:59
|
Right, I'd say an amazing job by everyone involved. > On 24 Aug 2017, at 16:53, James Salsman <jsa...@ta...> wrote: > > Hi Nickolay, > > Brij did a really exceptional, fantastic job; Brij's work is described here: > > https://raw.githubusercontent.com/jsalsman/featex/master/Spoken-English-Intelligibility-Remediation.pdf > > Pavel and Sahith also both did awesome work: > > https://github.com/akreal/diphones > > https://www.reddit.com/r/dataisbeautiful/comments/6vfuw6/one_of_my_google_summer_of_code_students_made/ > > https://github.com/SND96/pocketsphinx-scores > > Mritunjay's work is way above average but I'm not sure it will look > very good until the next GSoC student works on it: > > https://mritunjaygoutam1.github.io > > That is so awesome but it really needs another programmer to redo it. > That is not saying anything bad about Mritunjay, it's just the nature > of the fully-general problem, which would defy any individual work > because of the physiological reactions to work towards the general > solution. > > Rishi and Saur are still helping each other out for the finishing > touches. Stay tuned! > > On Thu, Aug 24, 2017 at 9:29 PM, Nickolay Shmyrev <nsh...@gm...> wrote: >> Hello Pankaj >> >> I think the major thing remaining is actually to import the implementation into the ROS main distribution; I would start with updating >> >> http://wiki.ros.org/pocketsphinx >> >> You need to talk to the ROS guys about that; https://discourse.ros.org should be a good start. >> >>> On 24 Aug 2017, at 11:17, PANKAJ BARANWAL <pan...@du...> wrote: >>> >>> Hey, >>> Here is a small blog post summarizing my work and how you can use it if you like. ;) >>> >>> https://medium.com/@PankajB96/gsoc-a-joruney-just-started-53d6b8b68931 >>> >>> The code can be found here: >>> https://github.com/Pankaj-Baranwal/pocketsphinx/ >>> >>> I have developed a package for pocketsphinx in ROS which now facilitates the following features: >>> a) Keyword spotting mode: For recognizing small keyphrases. >>> b) Grammar mode: This covers jsgf grammar mode and language model. >>> c) Continuous mode: It first recognizes a keyword, and then switches to grammar mode for continuous recognition. >>> d) Speaker Verification: Adapt a model on your voice and the system will be able to differentiate between your voice and someone else's with decent accuracy. >>> >>> Cheers >>> Pankaj >> >> >> ------------------------------------------------------------------------------ >> Check out the vibrant tech community on one of the world's most >> engaging tech sites, Slashdot.org! http://sdm.link/slashdot >> _______________________________________________ >> Cmusphinx-gsoc mailing list >> Cmu...@li... >> https://lists.sourceforge.net/lists/listinfo/cmusphinx-gsoc >> |
From: James S. <jsa...@ta...> - 2017-08-24 13:53:22
|
Hi Nickolay, Brij did a really exceptional, fantastic job; Brij's work is described here: https://raw.githubusercontent.com/jsalsman/featex/master/Spoken-English-Intelligibility-Remediation.pdf Pavel and Sahith also both did awesome work: https://github.com/akreal/diphones https://www.reddit.com/r/dataisbeautiful/comments/6vfuw6/one_of_my_google_summer_of_code_students_made/ https://github.com/SND96/pocketsphinx-scores Mritunjay's work is way above average but I'm not sure it will look very good until the next GSoC student works on it: https://mritunjaygoutam1.github.io That is so awesome but it really needs another programmer to redo it. That is not saying anything bad about Mritunjay, it's just the nature of the fully-general problem, which would defy any individual work because of the physiological reactions to work towards the general solution. Rishi and Saur are still helping each other out for the finishing touches. Stay tuned! On Thu, Aug 24, 2017 at 9:29 PM, Nickolay Shmyrev <nsh...@gm...> wrote: > Hello Pankaj > > I think the major thing remaining is actually to import the implementation into the ROS main distribution; I would start with updating > > http://wiki.ros.org/pocketsphinx > > You need to talk to the ROS guys about that; https://discourse.ros.org should be a good start. > >> On 24 Aug 2017, at 11:17, PANKAJ BARANWAL <pan...@du...> wrote: >> >> Hey, >> Here is a small blog post summarizing my work and how you can use it if you like. ;) >> >> https://medium.com/@PankajB96/gsoc-a-joruney-just-started-53d6b8b68931 >> >> The code can be found here: >> https://github.com/Pankaj-Baranwal/pocketsphinx/ >> >> I have developed a package for pocketsphinx in ROS which now facilitates the following features: >> a) Keyword spotting mode: For recognizing small keyphrases. >> b) Grammar mode: This covers jsgf grammar mode and language model. >> c) Continuous mode: It first recognizes a keyword, and then switches to grammar mode for continuous recognition. >> d) Speaker Verification: Adapt a model on your voice and the system will be able to differentiate between your voice and someone else's with decent accuracy. >> >> Cheers >> Pankaj |
From: Nickolay S. <nsh...@gm...> - 2017-08-24 13:29:55
|
Hello Pankaj I think the major thing remaining is actually to import the implementation into the ROS main distribution; I would start with updating http://wiki.ros.org/pocketsphinx You need to talk to the ROS guys about that; https://discourse.ros.org should be a good start. > On 24 Aug 2017, at 11:17, PANKAJ BARANWAL <pan...@du...> wrote: > > Hey, > Here is a small blog post summarizing my work and how you can use it if you like. ;) > > https://medium.com/@PankajB96/gsoc-a-joruney-just-started-53d6b8b68931 > > The code can be found here: > https://github.com/Pankaj-Baranwal/pocketsphinx/ > > I have developed a package for pocketsphinx in ROS which now facilitates the following features: > a) Keyword spotting mode: For recognizing small keyphrases. > b) Grammar mode: This covers jsgf grammar mode and language model. > c) Continuous mode: It first recognizes a keyword, and then switches to grammar mode for continuous recognition. > d) Speaker Verification: Adapt a model on your voice and the system will be able to differentiate between your voice and someone else's with decent accuracy. > > Cheers > Pankaj |
From: Arseniy G. <gor...@ya...> - 2017-08-24 08:32:13
|
Great job, Pankaj! Hope to see further improvements to the project and a large community of users. Happy coding ;) Arseniy 24.08.2017, 11:17, "PANKAJ BARANWAL" <pan...@du...>: > Hey, > Here is a small blog post summarizing my work and how you can use it if you like. ;) > > https://medium.com/@PankajB96/gsoc-a-joruney-just-started-53d6b8b68931 > > The code can be found here: > https://github.com/Pankaj-Baranwal/pocketsphinx/ > > I have developed a package for pocketsphinx in ROS which now facilitates the following features: > a) Keyword spotting mode: For recognizing small keyphrases. > b) Grammar mode: This covers jsgf grammar mode and language model. > c) Continuous mode: It first recognizes a keyword, and then switches to grammar mode for continuous recognition. > d) Speaker Verification: Adapt a model on your voice and the system will be able to differentiate between your voice and someone else's with decent accuracy. > > Cheers > Pankaj |
From: PANKAJ B. <pan...@du...> - 2017-08-24 08:17:51
|
Hey, Here is a small blog post summarizing my work and how you can use it if you like. ;) https://medium.com/@PankajB96/gsoc-a-joruney-just-started-53d6b8b68931 The code can be found here: https://github.com/Pankaj-Baranwal/pocketsphinx/ I have developed a package for pocketsphinx in ROS which now facilitates the following features: a) *Keyword spotting mode:* For recognizing small keyphrases. b) *Grammar mode:* This covers jsgf grammar mode and language model. c) *Continuous mode:* It first recognizes a keyword, and then switches to grammar mode for continuous recognition. d) *Speaker Verification:* Adapt a model on your voice and the system will be able to differentiate between your voice and someone else's with decent accuracy. Cheers Pankaj |
From: Nickolay S. <nsh...@gm...> - 2017-08-20 20:49:20
|
Hello Scott g2p-seq2seq requires tensorflow version 1.0; you can use pip install tensorflow==1.0.0 The API changes too fast and we have no time to update the code. > On 20 Aug 2017, at 22:24, Scott Guthery <sb...@ac...> wrote: > > I’m trying to run a clean install of g2p-seq2seq on a clean install of Python with a clean install of tensorflow, without success. The console log is below, but the core of the problem seems to be: > > AttributeError: module 'tensorflow.contrib.rnn' has no attribute 'core_rnn_cell' > > Anybody else seen this? It seems the tensorflow API has changed and the latest and greatest of tensorflow doesn't agree with the latest and greatest of g2p-seq2seq. > > I'm not fluent in the innards of Python so I can't tell what's happening. Any advice greatly appreciated. > > Cheers, Scott > > E:\g2p\g2p-seq2seq-master\g2p_seq2seq>C:\Users\s_gut\AppData\Local\Programs\Python\Python35\Scripts\g2p-seq2seq --decode backtalk.txt --model E:\g2p\g2p-seq2seq-cmudict > Loading vocabularies from E:\g2p\g2p-seq2seq-cmudict > 2017-08-20 15:15:39.414169: W C:\tf_jenkins\home\workspace\rel-win\M\windows\PY\35\tensorflow\core\platform\cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations. > 2017-08-20 15:15:39.414599: W C:\tf_jenkins\home\workspace\rel-win\M\windows\PY\35\tensorflow\core\platform\cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX2 instructions, but these are available on your machine and could speed up CPU computations. > Creating 2 layers of 512 units. > Traceback (most recent call last): > File "C:\Users\s_gut\AppData\Local\Programs\Python\Python35\Scripts\g2p-seq2seq-script.py", line 11, in <module> > load_entry_point('g2p-seq2seq==5.0.0a0', 'console_scripts', 'g2p-seq2seq')() > File "C:\Users\s_gut\AppData\Local\Programs\Python\Python35\lib\site-packages\g2p_seq2seq-5.0.0a0-py3.5.egg\g2p_seq2seq\app.py", line 82, in main > File "C:\Users\s_gut\AppData\Local\Programs\Python\Python35\lib\site-packages\g2p_seq2seq-5.0.0a0-py3.5.egg\g2p_seq2seq\g2p.py", line 96, in load_decode_model > File "C:\Users\s_gut\AppData\Local\Programs\Python\Python35\lib\site-packages\g2p_seq2seq-5.0.0a0-py3.5.egg\g2p_seq2seq\seq2seq_model.py", line 121, in __init__ > AttributeError: module 'tensorflow.contrib.rnn' has no attribute 'core_rnn_cell' |
From: Scott G. <sb...@ac...> - 2017-08-20 19:46:10
|
I’m trying to run a clean install of g2p-seq2seq on a clean install of Python with a clean install of tensorflow, without success. The console log is below, but the core of the problem seems to be: AttributeError: module 'tensorflow.contrib.rnn' has no attribute 'core_rnn_cell' Anybody else seen this? It seems the tensorflow API has changed and the latest and greatest of tensorflow doesn't agree with the latest and greatest of g2p-seq2seq. I'm not fluent in the innards of Python so I can't tell what's happening. Any advice greatly appreciated. Cheers, Scott E:\g2p\g2p-seq2seq-master\g2p_seq2seq>C:\Users\s_gut\AppData\Local\Programs\Python\Python35\Scripts\g2p-seq2seq --decode backtalk.txt --model E:\g2p\g2p-seq2seq-cmudict Loading vocabularies from E:\g2p\g2p-seq2seq-cmudict 2017-08-20 15:15:39.414169: W C:\tf_jenkins\home\workspace\rel-win\M\windows\PY\35\tensorflow\core\platform\cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations. 2017-08-20 15:15:39.414599: W C:\tf_jenkins\home\workspace\rel-win\M\windows\PY\35\tensorflow\core\platform\cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX2 instructions, but these are available on your machine and could speed up CPU computations. Creating 2 layers of 512 units. Traceback (most recent call last): File "C:\Users\s_gut\AppData\Local\Programs\Python\Python35\Scripts\g2p-seq2seq-script.py", line 11, in <module> load_entry_point('g2p-seq2seq==5.0.0a0', 'console_scripts', 'g2p-seq2seq')() File "C:\Users\s_gut\AppData\Local\Programs\Python\Python35\lib\site-packages\g2p_seq2seq-5.0.0a0-py3.5.egg\g2p_seq2seq\app.py", line 82, in main File "C:\Users\s_gut\AppData\Local\Programs\Python\Python35\lib\site-packages\g2p_seq2seq-5.0.0a0-py3.5.egg\g2p_seq2seq\g2p.py", line 96, in load_decode_model File "C:\Users\s_gut\AppData\Local\Programs\Python\Python35\lib\site-packages\g2p_seq2seq-5.0.0a0-py3.5.egg\g2p_seq2seq\seq2seq_model.py", line 121, in __init__ AttributeError: module 'tensorflow.contrib.rnn' has no attribute 'core_rnn_cell' |
From: Pavel D. <pav...@gm...> - 2017-08-19 17:10:19
|
Hi Nickolay and James, Here is the pull request for the PocketSphinx repo: https://github.com/cmusphinx/pocketsphinx/pull/90 And here is the accompanying pull request for the wiki repo: https://github.com/cmusphinx/cmusphinx.github.io/pull/30 I will update them as I get feedback and make changes. Regards, Pavel 2017-08-17 15:04 GMT+02:00 Nickolay Shmyrev <nsh...@gm...>: > Hi Pavel > > Yes please, submit a pull request. > > > On 17 Aug 2017, at 4:45, Pavel Denisov <pav...@gm...> > wrote: > > > > Here are all my changes to PocketSphinx: https://github.com/cmusphinx/pocketsphinx/compare/master...akreal:diphones > > Should I create a pull request or submit them in a different format? I would also squash everything to a single commit. > > Tomorrow I'll get to the documentation and tests. > > > > 2017-08-15 20:02 GMT+02:00 Pavel Denisov <pav...@gm...>: > > Alright, looks like it worked. > > Here are the changes I had to make in dict2pid: https://github.com/akreal/pocketsphinx/commit/3134dd65b28fd438038429d15ac07a1e0f513121?diff=split > > And here is the model itself: https://github.com/akreal/diphones/tree/master/data/model > > I'll gather all my code changes together tomorrow. > > > > 2017-08-14 22:49 GMT+02:00 Pavel Denisov <pav...@gm...>: > > Thanks for the idea! I'll try. > > > > 2017-08-14 21:45 GMT+02:00 Nickolay Shmyrev <nsh...@gm...>: > > > > > On 14 Aug 2017, at 22:42, Pavel Denisov <pav...@gm...> > wrote: > > > > > > Hi guys, > > > > > > I think I got an acoustic model with diphones as CI units, but I have trouble with testing it. > > > > Great. > > > > > The dict2pid_build function tries to build its own triphones from CI units, and with 899 CI units in my case it becomes infeasible (or extremely inefficient) to do so. I'm investigating whether it is possible to put the dict2pid structure away, but would like to know what you think about this. > > > > I would just edit it to support context-independent phones only. This > should not touch any other files, just dict2pid internals. > > > > > > > > > > > > |
From: Nickolay S. <nsh...@gm...> - 2017-08-19 09:01:19
|
You can train it yourself with http://www.openslr.org/33/ and http://www.openslr.org/18/ > On 19 Aug 2017, at 7:43, James Salsman <jsa...@gm...> wrote: > > Does anyone have a Mandarin acoustic model with toned phonemes? |
From: Evandro G. <eg...@gm...> - 2017-08-19 07:20:11
|
Folks, Message sent by non-subscriber, resending. When you respond, please Cc the sender. Atacílio, The Federal University of Pará has or had a speech and dialogue group that put together acoustic and language models for CMU Sphinx, among others. You can check the list of software and models that they make publicly available: http://www.laps.ufpa.br/falabrasil/downloads.php --Evandro ---------- Forwarded message ---------- From: Atacilio Cunha <a....@sa...> To: cmu...@li... Cc: Bcc: Date: Fri, 18 Aug 2017 14:51:26 -0400 Subject: About Portuguese (PT_BR) model Hi, I’m studying the Sphinx API to work with a Portuguese speech-to-text test application, but I couldn’t find a Portuguese (PT_BR) model on your website. Do you have any model for it? Or maybe we can use another model to work with a Portuguese test application? I’ll be waiting for your answer, but first thank you for your attention. Best regards, Atacílio Cunha |
From: James S. <jsa...@gm...> - 2017-08-19 04:43:51
|
Does anyone have a Mandarin acoustic model with toned phonemes? |